Sample records for web service protocols

  1. SSWAP: A Simple Semantic Web Architecture and Protocol for Semantic Web Services

    USDA-ARS's Scientific Manuscript database

    SSWAP (Simple Semantic Web Architecture and Protocol) is an architecture, protocol, and platform for using reasoning to semantically integrate heterogeneous disparate data and services on the web. SSWAP is the driving technology behind the Virtual Plant Information Network, an NSF-funded semantic w...

  2. Compression-based aggregation model for medical web services.

    PubMed

    Al-Shammary, Dhiah; Khalil, Ibrahim

    2010-01-01

    Many organizations such as hospitals have adopted Cloud Web services in applying their network services to avoid investing heavily in computing infrastructure. SOAP (Simple Object Access Protocol), an XML-based protocol, is the basic communication protocol of Cloud Web services. Web services often suffer from congestion and bottlenecks as a result of the high network traffic caused by the large XML overhead, and the massive load on Cloud Web services from the large volume of client requests compounds the problem. In this paper, two XML-aware aggregation techniques that exploit compression concepts are proposed in order to aggregate medical Web messages and achieve greater message size reduction.
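
    A minimal sketch (not the authors' algorithm) of the compression-based aggregation idea: several SOAP envelopes are batched into one XML document and gzip-compressed before transmission, attacking the per-message XML overhead identified above. All names are illustrative.

```python
# Sketch of compression-based aggregation of SOAP messages.
# Batching plus gzip reduces the XML overhead of many small requests;
# the <Batch> wrapper element is a hypothetical convention, not a standard.
import gzip

def aggregate_soap_messages(messages):
    """Wrap individual SOAP envelopes in one batch document and compress it."""
    batch = "<Batch>" + "".join(messages) + "</Batch>"
    return gzip.compress(batch.encode("utf-8"))

def split_batch(payload):
    """Decompress an aggregated payload back to the batch XML."""
    return gzip.decompress(payload).decode("utf-8")

reqs = ["<soap:Envelope>request 1</soap:Envelope>",
        "<soap:Envelope>request 2</soap:Envelope>"]
blob = aggregate_soap_messages(reqs)
print(len(blob), "compressed bytes instead of", sum(len(r) for r in reqs))
```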

  3. SSWAP: A Simple Semantic Web Architecture and Protocol for semantic web services

    PubMed Central

    Gessler, Damian DG; Schiltz, Gary S; May, Greg D; Avraham, Shulamit; Town, Christopher D; Grant, David; Nelson, Rex T

    2009-01-01

    Background SSWAP (Simple Semantic Web Architecture and Protocol; pronounced "swap") is an architecture, protocol, and platform for using reasoning to semantically integrate heterogeneous disparate data and services on the web. SSWAP was developed as a hybrid semantic web services technology to overcome limitations found in both pure web service technologies and pure semantic web technologies. Results There are currently over 2400 resources published in SSWAP. Approximately two dozen are custom-written services for QTL (Quantitative Trait Loci) and mapping data for legumes and grasses (grains). The remaining are wrappers to Nucleic Acids Research Database and Web Server entries. As an architecture, SSWAP establishes how clients (users of data, services, and ontologies), providers (suppliers of data, services, and ontologies), and discovery servers (semantic search engines) interact to allow for the description, querying, discovery, invocation, and response of semantic web services. As a protocol, SSWAP provides the vocabulary and semantics to allow clients, providers, and discovery servers to engage in semantic web services. The protocol is based on the W3C-sanctioned first-order description logic language OWL DL. As an open source platform, a discovery server running at http://sswap.info (as in to "swap info") uses the description logic reasoner Pellet to integrate semantic resources. The platform hosts an interactive guide to the protocol at http://sswap.info/protocol.jsp, developer tools at http://sswap.info/developer.jsp, and a portal to third-party ontologies at http://sswapmeet.sswap.info (a "swap meet"). Conclusion SSWAP addresses the three basic requirements of a semantic web services architecture (i.e., a common syntax, shared semantic, and semantic discovery) while addressing three technology limitations common in distributed service systems: i) the fatal mutability of traditional interfaces, ii) the rigidity and fragility of static subsumption hierarchies, and iii) the confounding of content, structure, and presentation. SSWAP is novel by establishing the concept of a canonical yet mutable OWL DL graph that allows data and service providers to describe their resources, to allow discovery servers to offer semantically rich search engines, to allow clients to discover and invoke those resources, and to allow providers to respond with semantically tagged data. SSWAP allows for a mix-and-match of terms from both new and legacy third-party ontologies in these graphs. PMID:19775460

  4. SSWAP: A Simple Semantic Web Architecture and Protocol for semantic web services.

    PubMed

    Gessler, Damian D G; Schiltz, Gary S; May, Greg D; Avraham, Shulamit; Town, Christopher D; Grant, David; Nelson, Rex T

    2009-09-23

    SSWAP (Simple Semantic Web Architecture and Protocol; pronounced "swap") is an architecture, protocol, and platform for using reasoning to semantically integrate heterogeneous disparate data and services on the web. SSWAP was developed as a hybrid semantic web services technology to overcome limitations found in both pure web service technologies and pure semantic web technologies. There are currently over 2400 resources published in SSWAP. Approximately two dozen are custom-written services for QTL (Quantitative Trait Loci) and mapping data for legumes and grasses (grains). The remaining are wrappers to Nucleic Acids Research Database and Web Server entries. As an architecture, SSWAP establishes how clients (users of data, services, and ontologies), providers (suppliers of data, services, and ontologies), and discovery servers (semantic search engines) interact to allow for the description, querying, discovery, invocation, and response of semantic web services. As a protocol, SSWAP provides the vocabulary and semantics to allow clients, providers, and discovery servers to engage in semantic web services. The protocol is based on the W3C-sanctioned first-order description logic language OWL DL. As an open source platform, a discovery server running at http://sswap.info (as in to "swap info") uses the description logic reasoner Pellet to integrate semantic resources. The platform hosts an interactive guide to the protocol at http://sswap.info/protocol.jsp, developer tools at http://sswap.info/developer.jsp, and a portal to third-party ontologies at http://sswapmeet.sswap.info (a "swap meet"). SSWAP addresses the three basic requirements of a semantic web services architecture (i.e., a common syntax, shared semantic, and semantic discovery) while addressing three technology limitations common in distributed service systems: i.e., i) the fatal mutability of traditional interfaces, ii) the rigidity and fragility of static subsumption hierarchies, and iii) the confounding of content, structure, and presentation. SSWAP is novel by establishing the concept of a canonical yet mutable OWL DL graph that allows data and service providers to describe their resources, to allow discovery servers to offer semantically rich search engines, to allow clients to discover and invoke those resources, and to allow providers to respond with semantically tagged data. SSWAP allows for a mix-and-match of terms from both new and legacy third-party ontologies in these graphs.
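
    Since SSWAP resource descriptions are OWL DL graphs published on the web, a client can fetch and inspect one with an RDF library. A minimal sketch with rdflib; the resource URL below is hypothetical.

```python
# Fetch an SSWAP resource description (an OWL DL graph serialized as
# RDF/XML) and inspect its triples. The URL is a placeholder.
from rdflib import Graph

g = Graph()
g.parse("http://sswap.example.org/resources/qtl-lookup", format="xml")  # hypothetical URL

for subj, pred, obj in g:
    print(subj, pred, obj)
print(len(g), "triples in the resource description graph")
```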

  5. Proof of Concept Integration of a Single-Level Service-Oriented Architecture into a Multi-Domain Secure Environment

    DTIC Science & Technology

    2008-03-01

    Machine [29]. OC4J applications support Java Servlets, Web services, and the following J2EE-specific standards: Extensible Markup Language (XML ... IMAP Internet Message Access Protocol; IP Internet Protocol; IT Information Technology; J2EE Java Enterprise Environment; JSR 168 Java ... LDAP), World Wide Web Distributed Authoring and Versioning (WebDAV), Java Specification Request 168 (JSR 168), and Web Services for Remote

  6. Enterprise Considerations for Ports and Protocols

    DTIC Science & Technology

    2016-10-21

    selected communications. These protocols are restricted to specific ports or addresses in the receiving web service. HTTPS is familiarly restricted ... in use by the web services and applications that are connected to the network are required for interoperability and security. Policies specify the ... network or reside at the end-points (i.e., web services or clients).

  7. J-Plus Web Portal

    NASA Astrophysics Data System (ADS)

    Civera Lorenzo, Tamara

    2017-10-01

    Brief presentation about the J-PLUS EDR data access web portal (http://archive.cefca.es/catalogues/jplus-edr), where the different services available to retrieve images and catalogue data are presented. The J-PLUS Early Data Release (EDR) archive includes two types of data: images, and dual and single catalogue data, which include parameters measured from the images. The J-PLUS web portal offers catalogue data and images through several online data access tools or services, each suited to a particular need: a coverage map, a sky navigator, object visualization, image search, cone search, object list search, and Virtual Observatory services (Simple Cone Search, Simple Image Access Protocol, Simple Spectral Access Protocol, and Table Access Protocol).
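
    The Virtual Observatory services listed above follow IVOA conventions; a Simple Cone Search, for example, is a plain HTTP GET with RA, DEC, and SR parameters in degrees, returning a VOTable. A minimal sketch; only the portal base address comes from the record, and the exact service path is an assumption.

```python
# IVOA Simple Cone Search against the J-PLUS EDR archive (sketch).
# RA/DEC/SR are the standard SCS parameters; the endpoint path is assumed.
import requests

params = {"RA": 150.0, "DEC": 2.2, "SR": 0.1}  # cone centre and radius, degrees
resp = requests.get("http://archive.cefca.es/catalogues/vo/cone/jplus-edr",
                    params=params, timeout=60)
resp.raise_for_status()
print(resp.text[:500])  # beginning of the VOTable listing matched sources
```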

  8. Global polar geospatial information service retrieval based on search engine and ontology reasoning

    USGS Publications Warehouse

    Chen, Nengcheng; E, Dongcheng; Di, Liping; Gong, Jianya; Chen, Zeqiang

    2007-01-01

    In order to improve the access precision of polar geospatial information services on the web, a new methodology for retrieving global spatial information services based on geospatial service search and ontology reasoning is proposed: geospatial service search finds coarse candidate services on the web, and ontology reasoning refines those candidates into the desired services. The proposed framework includes standardized distributed geospatial web services, a geospatial service search engine, an extended UDDI registry, and a multi-protocol geospatial information service client. Key technologies addressed include service discovery based on a search engine, and service ontology modeling and reasoning in the Antarctic geospatial context. Finally, an Antarctic multi-protocol OWS portal prototype based on the proposed methodology is introduced.

  9. If SWORD Is the Answer, What Is the Question?: Use of the Simple Web-Service Offering Repository Deposit Protocol

    ERIC Educational Resources Information Center

    Lewis, Stuart; Hayes, Leonie; Newton-Wade, Vanessa; Corfield, Antony; Davis, Richard; Donohue, Tim; Wilson, Scott

    2009-01-01

    Purpose: The purpose of this paper is to describe the repository deposit protocol, Simple Web-service Offering Repository Deposit (SWORD), its development iteration, and some of its potential use cases. In addition, seven case studies of institutional use of SWORD are provided. Design/methodology/approach: The paper describes the recent…

  10. BioServices: a common Python package to access biological Web Services programmatically.

    PubMed

    Cokelaer, Thomas; Pultz, Dennis; Harder, Lea M; Serra-Musach, Jordi; Saez-Rodriguez, Julio

    2013-12-15

    Web interfaces provide access to numerous biological databases. Many can be accessed programmatically thanks to Web Services. Building applications that combine several of them would benefit from a single framework. BioServices is a comprehensive Python framework that provides programmatic access to major bioinformatics Web Services (e.g. KEGG, UniProt, BioModels, ChEMBLdb). Wrapping additional Web Services based on either Representational State Transfer or Simple Object Access Protocol/Web Services Description Language technologies is eased by the use of object-oriented programming. BioServices releases and documentation are available at http://pypi.python.org/pypi/bioservices under a GPL-v3 license.
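
    A minimal usage sketch of the package described above; the method names follow BioServices examples from around the time of publication and may differ in current releases.

```python
# Query UniProt and KEGG through the BioServices wrappers.
from bioservices import KEGG, UniProt

u = UniProt()
# Tabulated search results; the column names follow older BioServices examples.
hits = u.search("zap70+and+taxonomy:9606", frmt="tab", columns="id,genes")
print(hits.splitlines()[:5])

k = KEGG()
print(k.find("hsa", "zap70"))  # locate the gene in the KEGG human database
```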

  11. Design and implementation of CUAHSI WaterML and WaterOneFlow Web Services

    NASA Astrophysics Data System (ADS)

    Valentine, D. W.; Zaslavsky, I.; Whitenack, T.; Maidment, D.

    2007-12-01

    WaterOneFlow is a term for a group of web services created by and for the Consortium of Universities for the Advancement of Hydrologic Science, Inc. (CUAHSI) community. CUAHSI web services facilitate the retrieval of hydrologic observations information from online data sources using the SOAP protocol. CUAHSI Water Markup Language (below referred to as WaterML) is an XML schema defining the format of messages returned by the WaterOneFlow web services.
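
    A minimal sketch of calling a WaterOneFlow endpoint from Python with the zeep SOAP library; the WSDL URL and the site/variable codes are placeholders, and the GetValues parameter names follow the WaterOneFlow 1.1 convention.

```python
# Retrieve a WaterML document of discharge observations over SOAP.
from zeep import Client

client = Client("http://hydroportal.example.org/nwisdv/soap/cuahsi_1_1.asmx?WSDL")  # placeholder
waterml = client.service.GetValues(
    location="NWISDV:10109000",   # site code (placeholder)
    variable="NWISDV:00060",      # discharge variable code (placeholder)
    startDate="2007-01-01",
    endDate="2007-01-31",
    authToken="",
)
print(waterml[:300])  # WaterML XML returned by the service
```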

  12. Exploring NASA GES DISC Data with Interoperable Services

    NASA Technical Reports Server (NTRS)

    Zhao, Peisheng; Yang, Wenli; Hegde, Mahabal; Wei, Jennifer C.; Kempler, Steven; Pham, Long; Teng, William; Savtchenko, Andrey

    2015-01-01

    Overview of NASA GES DISC (NASA Goddard Earth Science Data and Information Services Center) data with interoperable services. Open-standard and interoperable services improve data discoverability, accessibility, and usability with metadata, catalogue, and portal standards, and achieve data, information, and knowledge sharing across applications with standardized interfaces and protocols. Open Geospatial Consortium (OGC) data services and specifications include: Web Coverage Service (WCS) for data; Web Map Service (WMS) for pictures of data; Web Map Tile Service (WMTS) for pictures of data tiles; and Styled Layer Descriptors (SLD) for rendered styles.
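
    As an illustration of the OGC services named above, a WMS GetMap call is an ordinary HTTP GET whose parameters are fixed by the WMS 1.3.0 specification; the endpoint and layer name below are placeholders.

```python
# OGC WMS 1.3.0 GetMap request (sketch): fetch a rendered map image.
import requests

params = {
    "service": "WMS", "version": "1.3.0", "request": "GetMap",
    "layers": "AIRS_Surface_Temp",  # placeholder layer name
    "styles": "", "crs": "EPSG:4326",
    "bbox": "-90,-180,90,180",      # lat/lon axis order for EPSG:4326 in WMS 1.3.0
    "width": 1024, "height": 512, "format": "image/png",
}
resp = requests.get("https://gesdisc.example.nasa.gov/wms", params=params, timeout=60)  # placeholder
with open("map.png", "wb") as f:
    f.write(resp.content)
```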

  13. Using EMBL-EBI services via Web interface and programmatically via Web Services

    PubMed Central

    Lopez, Rodrigo; Cowley, Andrew; Li, Weizhong; McWilliam, Hamish

    2015-01-01

    The European Bioinformatics Institute (EMBL-EBI) provides access to a wide range of databases and analysis tools that are of key importance in bioinformatics. As well as providing Web interfaces to these resources, Web Services are available using SOAP and REST protocols that enable programmatic access to our resources and allow their integration into other applications and analytical workflows. This unit describes the various options available to a typical researcher or bioinformatician who wishes to use our resources via Web interface or programmatically via a range of programming languages. PMID:25501941
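
    A minimal sketch of programmatic REST access to an EMBL-EBI service; Dbfetch is one such service with a plain HTTP interface, and the parameters below follow its documented style (verify against the current EBI documentation).

```python
# Fetch a UniProtKB entry in FASTA format via the EBI Dbfetch REST service.
import requests

resp = requests.get(
    "https://www.ebi.ac.uk/Tools/dbfetch/dbfetch",
    params={"db": "uniprotkb", "id": "P12345", "format": "fasta", "style": "raw"},
    timeout=60,
)
resp.raise_for_status()
print(resp.text)
```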

  14. Seahawk: moving beyond HTML in Web-based bioinformatics analysis.

    PubMed

    Gordon, Paul M K; Sensen, Christoph W

    2007-06-18

    Traditional HTML interfaces for input to and output from Bioinformatics analysis on the Web are highly variable in style, content and data formats. Combining multiple analyses can therefore be an onerous task for biologists. Semantic Web Services allow automated discovery of conceptual links between remote data analysis servers. A shared data ontology and service discovery/execution framework is particularly attractive in Bioinformatics, where data and services are often both disparate and distributed. Instead of biologists copying, pasting and reformatting data between various Web sites, Semantic Web Service protocols such as MOBY-S hold out the promise of seamlessly integrating multi-step analysis. We have developed a program (Seahawk) that allows biologists to intuitively and seamlessly chain together Web Services using a data-centric, rather than the customary service-centric approach. The approach is illustrated with a ferredoxin mutation analysis. Seahawk concentrates on lowering entry barriers for biologists: no prior knowledge of the data ontology, or relevant services is required. In stark contrast to other MOBY-S clients, in Seahawk users simply load Web pages and text files they already work with. Underlying the familiar Web-browser interaction is an XML data engine based on extensible XSLT style sheets, regular expressions, and XPath statements which import existing user data into the MOBY-S format. As an easily accessible applet, Seahawk moves beyond standard Web browser interaction, providing mechanisms for the biologist to concentrate on the analytical task rather than on the technical details of data formats and Web forms. As the MOBY-S protocol nears a 1.0 specification, we expect more biologists to adopt these new semantic-oriented ways of doing Web-based analysis, which empower them to do more complicated, ad hoc analysis workflow creation without the assistance of a programmer.

  15. Seahawk: moving beyond HTML in Web-based bioinformatics analysis

    PubMed Central

    Gordon, Paul MK; Sensen, Christoph W

    2007-01-01

    Background Traditional HTML interfaces for input to and output from Bioinformatics analysis on the Web are highly variable in style, content and data formats. Combining multiple analyses can therefore be an onerous task for biologists. Semantic Web Services allow automated discovery of conceptual links between remote data analysis servers. A shared data ontology and service discovery/execution framework is particularly attractive in Bioinformatics, where data and services are often both disparate and distributed. Instead of biologists copying, pasting and reformatting data between various Web sites, Semantic Web Service protocols such as MOBY-S hold out the promise of seamlessly integrating multi-step analysis. Results We have developed a program (Seahawk) that allows biologists to intuitively and seamlessly chain together Web Services using a data-centric, rather than the customary service-centric approach. The approach is illustrated with a ferredoxin mutation analysis. Seahawk concentrates on lowering entry barriers for biologists: no prior knowledge of the data ontology, or relevant services is required. In stark contrast to other MOBY-S clients, in Seahawk users simply load Web pages and text files they already work with. Underlying the familiar Web-browser interaction is an XML data engine based on extensible XSLT style sheets, regular expressions, and XPath statements which import existing user data into the MOBY-S format. Conclusion As an easily accessible applet, Seahawk moves beyond standard Web browser interaction, providing mechanisms for the biologist to concentrate on the analytical task rather than on the technical details of data formats and Web forms. As the MOBY-S protocol nears a 1.0 specification, we expect more biologists to adopt these new semantic-oriented ways of doing Web-based analysis, which empower them to do more complicated, ad hoc analysis workflow creation without the assistance of a programmer. PMID:17577405

  16. WebGLORE: a web service for Grid LOgistic REgression.

    PubMed

    Jiang, Wenchao; Li, Pinghao; Wang, Shuang; Wu, Yuan; Xue, Meng; Ohno-Machado, Lucila; Jiang, Xiaoqian

    2013-12-15

    WebGLORE is a free web service that enables privacy-preserving construction of a global logistic regression model from distributed datasets that are sensitive. It only transfers aggregated local statistics (from participants) through Hypertext Transfer Protocol Secure to a trusted server, where the global model is synthesized. WebGLORE seamlessly integrates AJAX, JAVA Applet/Servlet and PHP technologies to provide an easy-to-use web service for biomedical researchers to break down policy barriers during information exchange. http://dbmi-engine.ucsd.edu/webglore3/. WebGLORE can be used under the terms of GNU general public license as published by the Free Software Foundation.
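
    A minimal sketch of the communication pattern described above (not the WebGLORE code itself): a participating site POSTs only aggregated local statistics over HTTPS to the trusted server; the endpoint and field names are hypothetical.

```python
# One round of a distributed logistic regression: ship aggregated
# statistics (never patient-level records) to a trusted aggregator.
import requests

local_stats = {
    "site_id": "hospital_A",
    "gradient": [0.12, -0.03, 0.57],  # aggregated quantities only
    "hessian_diag": [1.40, 0.90, 2.10],
    "n_records": 1250,
}
resp = requests.post("https://trusted-server.example.org/glore/round/3",  # hypothetical endpoint
                     json=local_stats, timeout=30)
resp.raise_for_status()
```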

  17. Development of Virtual Resource Based IoT Proxy for Bridging Heterogeneous Web Services in IoT Networks.

    PubMed

    Jin, Wenquan; Kim, DoHyeun

    2018-05-26

    The Internet of Things comprises heterogeneous devices, applications, and platforms that use multiple communication technologies to connect to the Internet and provide seamless services ubiquitously. With the requirement of developing Internet of Things products, many protocols, program libraries, frameworks, and standard specifications have been proposed. Therefore, providing a consistent interface to access services from those environments is difficult. Moreover, bridging existing web services to sensor and actuator networks is also important for providing Internet of Things services in various industry domains. In this paper, an Internet of Things proxy is proposed that is based on virtual resources to bridge heterogeneous web services from the Internet to the Internet of Things network. The proxy enables clients to have transparent access to Internet of Things devices and web services in the network. The proxy comprises a server and a client that forward messages across the different communication environments through virtual resources, with the server facing the message sender and the client facing the message receiver. We design the proxy for the Open Connectivity Foundation network, where the virtual resources are discovered by clients as Open Connectivity Foundation resources. The virtual resources represent resources that expose services on the Internet through web service providers. Although the services are provided by web service providers on the Internet, a client can access them using the consistent communication protocol of the Open Connectivity Foundation network. To discover the resources that expose services, the client also uses the consistent discovery interface to discover Open Connectivity Foundation devices and virtual resources.

  18. A web service system supporting three-dimensional post-processing of medical images based on WADO protocol.

    PubMed

    He, Longjun; Xu, Lang; Ming, Xing; Liu, Qian

    2015-02-01

    Three-dimensional post-processing operations on the volume data generated by a series of CT or MR images are of great significance for image reading and diagnosis. As part of the DICOM standard, the WADO service defines how to access DICOM objects on the Web, but it does not cover three-dimensional post-processing operations on image series. This paper analyzes the technical features of three-dimensional post-processing operations on volume data, and then designs and implements a web service system for three-dimensional post-processing of medical images based on the WADO protocol. In order to improve the scalability of the proposed system, business tasks and calculation operations are separated into two modules. The results show that the proposed system can support three-dimensional post-processing of medical images for multiple clients at the same time, meeting the demand for accessing three-dimensional post-processing operations on volume data over the web.
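
    For context, a WADO-URI retrieval (DICOM PS3.18) is an HTTP GET with the standard requestType, studyUID, seriesUID, objectUID, and contentType parameters; a minimal sketch with a placeholder server and UIDs.

```python
# Retrieve one DICOM object via WADO-URI (sketch; placeholders throughout).
import requests

params = {
    "requestType": "WADO",
    "studyUID": "1.2.840.113619.2.1.1",   # placeholder UIDs
    "seriesUID": "1.2.840.113619.2.1.2",
    "objectUID": "1.2.840.113619.2.1.3",
    "contentType": "application/dicom",
}
resp = requests.get("http://pacs.example.org/wado", params=params, timeout=60)
with open("slice.dcm", "wb") as f:
    f.write(resp.content)
```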

  19. BioSWR – Semantic Web Services Registry for Bioinformatics

    PubMed Central

    Repchevsky, Dmitry; Gelpi, Josep Ll.

    2014-01-01

    Despite the variety of available Web services registries aimed especially at the Life Sciences, their scope is usually restricted to a limited set of well-defined types of services. While dedicated registries are generally tied to a particular format, general-purpose ones are more adherent to standards and usually rely on the Web Service Definition Language (WSDL). Although WSDL is flexible enough to support common Web service types, its lack of semantic expressiveness led to various initiatives to describe Web services via ontology languages. Nevertheless, WSDL 2.0 descriptions gained a standard representation based on the Web Ontology Language (OWL). BioSWR is a novel Web services registry that provides standard Resource Description Framework (RDF) based Web service descriptions along with the traditional WSDL based ones. The registry provides a Web-based interface for Web service registration, querying, and annotation, and is also accessible programmatically via a Representational State Transfer (REST) API or using the SPARQL Protocol and RDF Query Language. The BioSWR server is located at http://inb.bsc.es/BioSWR/ and its code is available at https://sourceforge.net/projects/bioswr/ under the LGPL license. PMID:25233118
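
    A minimal sketch of querying such a registry through SPARQL with the SPARQLWrapper package; the endpoint path and property URIs are assumptions (the WSDL 2.0 RDF mapping vocabulary is shown because BioSWR builds its descriptions on it).

```python
# List registered services from a SPARQL endpoint (sketch).
from SPARQLWrapper import JSON, SPARQLWrapper

sparql = SPARQLWrapper("http://inb.bsc.es/BioSWR/sparql")  # hypothetical endpoint path
sparql.setQuery("""
    SELECT ?service ?label WHERE {
        ?service a <http://www.w3.org/ns/wsdl-rdf#Service> ;
                 <http://www.w3.org/2000/01/rdf-schema#label> ?label .
    } LIMIT 10
""")
sparql.setReturnFormat(JSON)
for row in sparql.query().convert()["results"]["bindings"]:
    print(row["service"]["value"], row["label"]["value"])
```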

  20. BioSWR--semantic web services registry for bioinformatics.

    PubMed

    Repchevsky, Dmitry; Gelpi, Josep Ll

    2014-01-01

    Despite the variety of available Web services registries aimed especially at the Life Sciences, their scope is usually restricted to a limited set of well-defined types of services. While dedicated registries are generally tied to a particular format, general-purpose ones are more adherent to standards and usually rely on the Web Service Definition Language (WSDL). Although WSDL is flexible enough to support common Web service types, its lack of semantic expressiveness led to various initiatives to describe Web services via ontology languages. Nevertheless, WSDL 2.0 descriptions gained a standard representation based on the Web Ontology Language (OWL). BioSWR is a novel Web services registry that provides standard Resource Description Framework (RDF) based Web service descriptions along with the traditional WSDL based ones. The registry provides a Web-based interface for Web service registration, querying, and annotation, and is also accessible programmatically via a Representational State Transfer (REST) API or using the SPARQL Protocol and RDF Query Language. The BioSWR server is located at http://inb.bsc.es/BioSWR/ and its code is available at https://sourceforge.net/projects/bioswr/ under the LGPL license.

  1. Using EMBL-EBI Services via Web Interface and Programmatically via Web Services.

    PubMed

    Lopez, Rodrigo; Cowley, Andrew; Li, Weizhong; McWilliam, Hamish

    2014-12-12

    The European Bioinformatics Institute (EMBL-EBI) provides access to a wide range of databases and analysis tools that are of key importance in bioinformatics. As well as providing Web interfaces to these resources, Web Services are available using SOAP and REST protocols that enable programmatic access to our resources and allow their integration into other applications and analytical workflows. This unit describes the various options available to a typical researcher or bioinformatician who wishes to use our resources via Web interface or programmatically via a range of programming languages. Copyright © 2014 John Wiley & Sons, Inc.

  2. XMPP for cloud computing in bioinformatics supporting discovery and invocation of asynchronous web services

    PubMed Central

    Wagener, Johannes; Spjuth, Ola; Willighagen, Egon L; Wikberg, Jarl ES

    2009-01-01

    Background The life sciences make heavy use of the web for both data provision and analysis. However, the increasing amount of available data and the diversity of analysis tools call for machine-accessible interfaces in order to be effective. HTTP-based Web service technologies, like the Simple Object Access Protocol (SOAP) and REpresentational State Transfer (REST) services, are today the most common technologies for this in bioinformatics. However, these methods have severe drawbacks, including lack of discoverability and the inability for services to send status notifications. Several complementary workarounds have been proposed, but the results are ad-hoc solutions of varying quality that can be difficult to use. Results We present a novel approach based on the open standard Extensible Messaging and Presence Protocol (XMPP), consisting of an extension (IO Data) that covers discovery, asynchronous invocation, and definition of data types in the service. That XMPP cloud services are capable of asynchronous communication implies that clients do not have to poll repetitively for status; the service sends the results back to the client upon completion. Implementations for Bioclipse and Taverna are presented, as are various XMPP cloud services in bio- and cheminformatics. Conclusion XMPP with its extensions is a powerful protocol for cloud services that demonstrates several advantages over traditional HTTP-based Web services: 1) services are discoverable without the need of an external registry, 2) asynchronous invocation eliminates the need for ad-hoc solutions like polling, and 3) input and output types defined in the service allow for generation of clients on the fly without the need of an external semantics description. The many advantages over existing technologies make XMPP a highly interesting candidate for next-generation online services in bioinformatics. PMID:19732427

  3. XMPP for cloud computing in bioinformatics supporting discovery and invocation of asynchronous web services.

    PubMed

    Wagener, Johannes; Spjuth, Ola; Willighagen, Egon L; Wikberg, Jarl E S

    2009-09-04

    The life sciences make heavy use of the web for both data provision and analysis. However, the increasing amount of available data and the diversity of analysis tools call for machine-accessible interfaces in order to be effective. HTTP-based Web service technologies, like the Simple Object Access Protocol (SOAP) and REpresentational State Transfer (REST) services, are today the most common technologies for this in bioinformatics. However, these methods have severe drawbacks, including lack of discoverability and the inability for services to send status notifications. Several complementary workarounds have been proposed, but the results are ad-hoc solutions of varying quality that can be difficult to use. We present a novel approach based on the open standard Extensible Messaging and Presence Protocol (XMPP), consisting of an extension (IO Data) that covers discovery, asynchronous invocation, and definition of data types in the service. That XMPP cloud services are capable of asynchronous communication implies that clients do not have to poll repetitively for status; the service sends the results back to the client upon completion. Implementations for Bioclipse and Taverna are presented, as are various XMPP cloud services in bio- and cheminformatics. XMPP with its extensions is a powerful protocol for cloud services that demonstrates several advantages over traditional HTTP-based Web services: 1) services are discoverable without the need of an external registry, 2) asynchronous invocation eliminates the need for ad-hoc solutions like polling, and 3) input and output types defined in the service allow for generation of clients on the fly without the need of an external semantics description. The many advantages over existing technologies make XMPP a highly interesting candidate for next-generation online services in bioinformatics.

  4. WebGLORE: a Web service for Grid LOgistic REgression

    PubMed Central

    Jiang, Wenchao; Li, Pinghao; Wang, Shuang; Wu, Yuan; Xue, Meng; Ohno-Machado, Lucila; Jiang, Xiaoqian

    2013-01-01

    WebGLORE is a free web service that enables privacy-preserving construction of a global logistic regression model from distributed datasets that are sensitive. It only transfers aggregated local statistics (from participants) through Hypertext Transfer Protocol Secure to a trusted server, where the global model is synthesized. WebGLORE seamlessly integrates AJAX, JAVA Applet/Servlet and PHP technologies to provide an easy-to-use web service for biomedical researchers to break down policy barriers during information exchange. Availability and implementation: http://dbmi-engine.ucsd.edu/webglore3/. WebGLORE can be used under the terms of GNU general public license as published by the Free Software Foundation. Contact: x1jiang@ucsd.edu PMID:24072732

  5. A General Purpose Connections type CTI Server Based on SIP Protocol and Its Implementation

    NASA Astrophysics Data System (ADS)

    Watanabe, Toru; Koizumi, Hisao

    In this paper, we propose a general-purpose connections-type CTI (Computer Telephony Integration) server that provides various CTI services, such as voice logging, in which the CTI server communicates with an IP-PBX using SIP (Session Initiation Protocol) and accumulates the voice packets of external-line telephone calls flowing between IP extension telephones and a VoIP gateway connected to outside-line networks. The CTI server realizes CTI services such as voice logging, telephone conferencing, or IVR (interactive voice response) by accumulating and processing the sampled voice packets. Furthermore, the CTI server incorporates a web server function that can provide various CTI services, such as a Web telephone directory, via a Web browser to PCs, cellular telephones, or smart-phones in mobile environments.

  6. A SOAP Web Service for accessing MODIS land product subsets

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    SanthanaVannan, Suresh K; Cook, Robert B; Pan, Jerry Yun

    2011-01-01

    Remote sensing data from satellites have provided valuable information on the state of the earth for several decades. Since March 2000, the Moderate Resolution Imaging Spectroradiometer (MODIS) sensor on board NASA's Terra and Aqua satellites has been providing estimates of several land parameters useful in understanding earth system processes at global, continental, and regional scales. However, the HDF-EOS file format, the specialized software needed to process HDF-EOS files, the data volume, and the high spatial and temporal resolution of MODIS data make it difficult for users wanting to extract small but valuable amounts of information from the MODIS record. To overcome this usability issue, the NASA-funded Distributed Active Archive Center (DAAC) for Biogeochemical Dynamics at Oak Ridge National Laboratory (ORNL) developed a Web service that provides subsets of MODIS land products using the Simple Object Access Protocol (SOAP). The ORNL DAAC MODIS subsetting Web service is a unique way of serving satellite data that exploits a well-established and popular Internet protocol to allow users access to massive amounts of remote sensing data. The Web service provides MODIS land product subsets up to 201 x 201 km in a non-proprietary comma-delimited text file format. Users can programmatically query the Web service to extract MODIS land parameters for real-time data integration into models and decision support tools, or connect it to workflow software. Information regarding the MODIS SOAP subsetting Web service is available on the World Wide Web (WWW) at http://daac.ornl.gov/modiswebservice.
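
    A minimal sketch of calling the subsetting service with the zeep SOAP library; the WSDL location, operation name, and argument order are assumptions based on the description above, so check the service page before use.

```python
# Request a comma-delimited MODIS land product subset over SOAP (sketch).
from zeep import Client

client = Client("https://daac.ornl.gov/cgi-bin/MODIS/GLBVIZ_1_Glb_subset/MODIS_webservice.wsdl")  # assumed
subset = client.service.getsubset(
    39.56, -121.50,            # site latitude, longitude
    "MOD13Q1", "250m_16_days_NDVI",
    "A2010001", "A2010032",    # start/end dates in MODIS A-date format
    1, 1,                      # km above/below and left/right of the site
)
print(subset)                  # comma-delimited text, per the record above
```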

  7. SoyBase Simple Semantic Web Architecture and Protocol (SSWAP) Services

    USDA-ARS's Scientific Manuscript database

    Semantic web technologies offer the potential to link internet resources and data by shared concepts without having to rely on absolute lexical matches. Thus two web sites or web resources which are concerned with similar data types could be identified based on similar semantics. In the biological...

  8. 77 FR 21973 - Privacy Act of 1974; System of Records

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-04-12

    ... location: Delete entry and replace with ``Amazon Web Services, LLC 13461 Sunrise Valley Drive, Herndon, VA.../JS Privacy Office, Freedom of Information Directorate, Washington Headquarters Services, 1155 Defense..., protocols and/or in briefings of the consequences of improper access or use of the data. The web-based files...

  9. The Live Access Server Scientific Product Generation Through Workflow Orchestration

    NASA Astrophysics Data System (ADS)

    Hankin, S.; Calahan, J.; Li, J.; Manke, A.; O'Brien, K.; Schweitzer, R.

    2006-12-01

    The Live Access Server (LAS) is a well-established Web-application for display and analysis of geo-science data sets. The software, which can be downloaded and installed by anyone, gives data providers an easy way to establish services for their on-line data holdings, so their users can make plots; create and download data sub-sets; compare (difference) fields; and perform simple analyses. Now at version 7.0, LAS has been in operation since 1994. The current "Armstrong" release of LAS V7 consists of three components in a tiered architecture: user interface, workflow orchestration and Web Services. The LAS user interface (UI) communicates with the LAS Product Server via an XML protocol embedded in an HTTP "get" URL. Libraries (APIs) have been developed in Java, JavaScript and perl that can readily generate this URL. As a result of this flexibility it is common to find LAS user interfaces of radically different character, tailored to the nature of specific datasets or the mindset of specific users. When a request is received by the LAS Product Server (LPS -- the workflow orchestration component), business logic converts this request into a series of Web Service requests invoked via SOAP. These "back-end" Web services perform data access and generate products (visualizations, data subsets, analyses, etc.). LPS then packages these outputs into final products (typically HTML pages) via Jakarta Velocity templates for delivery to the end user. "Fine grained" data access is performed by back-end services that may utilize JDBC for data base access; the OPeNDAP "DAPPER" protocol; or (in principle) the OGC WFS protocol. Back-end visualization services are commonly legacy science applications wrapped in Java or Python (or perl) classes and deployed as Web Services accessible via SOAP. Ferret is the default visualization application used by LAS, though other applications such as Matlab, CDAT, and GrADS can also be used. Other back-end services may include generation of Google Earth layers using KML; generation of maps via WMS or ArcIMS protocols; and data manipulation with Unix utilities.
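
    A minimal sketch of the request style described above: an XML request document URL-encoded into an HTTP GET against the LAS Product Server. The host, servlet path, and query parameter name are illustrative.

```python
# Send an XML-in-URL request to an LAS Product Server (sketch).
from urllib.parse import urlencode

import requests

las_request = '<lasRequest><link match="/lasdata/operations/shade"/></lasRequest>'
url = "http://las.example.org/ProductServer.do?" + urlencode({"xml": las_request})  # placeholders
resp = requests.get(url, timeout=60)
print(resp.status_code, resp.headers.get("Content-Type"))
```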

  10. A Case Study in Web 2.0 Application Development

    NASA Astrophysics Data System (ADS)

    Marganian, P.; Clark, M.; Shelton, A.; McCarty, M.; Sessoms, E.

    2010-12-01

    Recent web technologies focusing on languages, frameworks, and tools are discussed, using the Robert C. Byrd Green Bank Telescope's (GBT) new Dynamic Scheduling System as the primary example. Within that example, we use a popular Python web framework, Django, to build the extensive web services for our users. We also use a second, complementary server, written in Haskell, to incorporate the core scheduling algorithms. We provide a desktop-quality experience across all the popular browsers for our users with the Google Web Toolkit and judicious use of JQuery in Django templates. Single sign-on and authentication throughout all NRAO web services is accomplished via the Central Authentication Service protocol, or CAS.

  11. SOAP based web services and their future role in VO projects

    NASA Astrophysics Data System (ADS)

    Topf, F.; Jacquey, C.; Génot, V.; Cecconi, B.; André, N.; Zhang, T. L.; Kallio, E.; Lammer, H.; Facsko, G.; Stöckler, R.; Khodachenko, M.

    2011-10-01

    Modern state-of-the-art web services are of crucial importance for the interoperability of the different VO tools existing in the planetary community. SOAP-based web services ensure interconnectivity between different data sources and tools by providing a common protocol for communication. This paper points out a best-practice approach with the Automated Multi-Dataset Analysis Tool (AMDA) developed by CDPP, Toulouse, and the provision of VEX/MAG data from a remote database located at IWF, Graz. Furthermore, the new FP7 project IMPEx is introduced, with a potential usage example of AMDA web services in conjunction with simulation models.

  12. Migrating an Online Service to WAP - A Case Study.

    ERIC Educational Resources Information Center

    Klasen, Lars

    2002-01-01

    Discusses mobile access via wireless application protocol (WAP) to online services that is offered in Sweden through InfoTorg. Topics include the Swedish online market; filtering HTML data from an Internet/Web server into WML (wireless markup language); mobile phone technology; microbrowsers; WAP protocol; and future possibilities. (LRW)

  13. Modern Technologies aspects for Oceanographic Data Management and Dissemination : The HNODC Implementation

    NASA Astrophysics Data System (ADS)

    Lykiardopoulos, A.; Iona, A.; Lakes, V.; Batis, A.; Balopoulos, E.

    2009-04-01

    The development of new technologies aimed at enhancing Web applications with dynamic data access was also the starting point for the development of geospatial Web applications. By means of these technologies, Web applications embed the capability of presenting geographical representations of geo-information. The introduction of state-of-the-art technologies known as Web Services enables Web applications to interoperate, i.e., to process requests from each other via a network. In particular, throughout the oceanographic community, modern geographical information systems based on geospatial Web Services are now being developed, or will be developed in the near future, with capabilities of managing the information itself fully through Web-based geographical interfaces. The exploitation of the HNODC database through a Web-based application enhanced with Web Services, built with open source tools, may be considered an ideal case of such an implementation. The Hellenic National Oceanographic Data Center (HNODC), a national public oceanographic data provider and at the same time a member of the international network of oceanographic data centers (IOC/IODE), holds a very large volume of data and relevant information about the marine ecosystem. For the efficient management and exploitation of these data, a relational database has been constructed, storing over 300,000 station records of physical, chemical, and biological oceanographic information. A modern Web application that lets end users worldwide explore and navigate HNODC data, via an interface capable of presenting geographical representations of the geo-information, is today a fact. The application is built from state-of-the-art software components and tools such as: • Geospatial and non-spatial Web Services mechanisms. • Geospatial open source tools for the creation of dynamic geographical representations. • Communication protocols (messaging mechanisms) in all layers, such as XML and GML, together with the SOAP protocol via Apache Axis. At the same time, the application may interact with any other SOA application, either sending or receiving geospatial data through geographical layers, since it inherits the big advantage of interoperability between Web Services systems. Roughly, the architecture can be denoted as follows: • At the back end, the open source PostgreSQL DBMS stands as the data storage mechanism, with more than one database schema owing to the separation of geospatial and non-geospatial data. • UMN MapServer and GeoServer are the mechanisms for: representing geospatial data via the Web Map Service (WMS); querying and navigating geospatial and metadata information via the Web Feature Service (WFS); and, in the near future, transacting and processing new or existing geospatial data via the Web Processing Service (WPS). • MapBender, a geospatial portal site management software for OGC and OWS architectures, acts as the integration module between the geospatial mechanisms. MapBender comes with an embedded data model capable of managing interfaces for displaying, navigating, and querying OGC-compliant web map and feature services (WMS and transactional WFS). • Apache and Tomcat stand as the Web Service middle layers. • Apache Axis, with its embedded implementation of the SOAP protocol ("Simple Object Access Protocol"), acts as the non-spatial Web Services mechanism.
    These modules of the platform are still under development, but their implementation will be fulfilled in the near future. • And a new Web user interface for the end user, based on an enhanced and customized version of a MapBender GUI, a powerful Web Services client. For HNODC, the interoperability of Web Services is the big advantage of the developed platform, since it will be capable of acting in the future as both provider and consumer of Web Services: either as a data products provider for external SOA platforms, or as a consumer of data products from external SOA platforms, for new applications to be developed or for existing applications to be enhanced. A great example of data management integration and dissemination via such technologies is the European Union's research project SeaDataNet, whose main objective is to develop a standardized distributed system for managing and disseminating large and diverse data sets and to enhance the currently existing infrastructures with Web Services. Furthermore, when the Web Processing Service (WPS) technology is mature enough and applicable for development, the derived data products will be able to have any kind of GIS functionality for consumers across the network. From this point of view, HNODC joins the global scientific community by providing and consuming application-independent data products.

  14. Secure password-based authenticated key exchange for web services

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Liang, Fang; Meder, Samuel; Chevassut, Olivier

    This paper discusses an implementation of an authenticated key-exchange method rendered on message primitives defined in the WS-Trust and WS-SecureConversation specifications. This IEEE-specified cryptographic method (AuthA) is proven-secure for password-based authentication and key exchange, while WS-Trust and WS-SecureConversation are emerging Web Services Security specifications that extend the WS-Security specification. A prototype of the presented protocol is integrated in the WSRF-compliant Globus Toolkit V4. Further hardening of the implementation is expected to result in a version that will be shipped with future Globus Toolkit releases. This could help to address the current unavailability of decent shared-secret-based authentication options in the Web Services and Grid world. Future work will be to integrate One-Time-Password (OTP) features in the authentication protocol.

  15. Optimizing the Information Presentation on Mining Potential by using Web Services Technology with Restful Protocol

    NASA Astrophysics Data System (ADS)

    Abdillah, T.; Dai, R.; Setiawan, E.

    2018-02-01

    This study aims to develop an application of Web Services technology with the RestFul protocol to optimize the presentation of information on mining potential. The study used a User Interface Design approach for information accuracy and relevance, as well as Web Services for reliability in presenting the information. The results show that the accuracy and relevance of the information on mining potential can be seen from the achievement of the User Interface implementation in the application, which is based on the following rules: consideration of appropriate colours and objects; ease of using the navigation; and users' interaction with the application through symbols and languages understood by the users. The accuracy and relevance of the information are also supported by presenting it in charts with Tool Tip Text to help users understand them, and the reliability of the information presentation is evidenced by the results of the Web Services testing in Figure 4.5.6. This study finds that the User Interface Design and Web Services approaches (for access from apps on different platforms) are able to optimize the presentation. The results of this study can be used as a reference for software developers and the Provincial Government of Gorontalo.

  16. Fingerprinting Software Defined Networks and Controllers

    DTIC Science & Technology

    2015-03-01

    2.5.3 Intrusion Prevention System with SDN; 2.5.4 Modular Security Services ... Control Message Protocol; IDS Intrusion Detection System; IPS Intrusion Prevention System; ISP Internet Service Provider; LLDP Link Layer Discovery Protocol ... layer functions (e.g., web proxies, firewalls, intrusion detection/prevention, load balancers, etc.). The increase in switch capabilities combined

  17. 78 FR 55696 - Request for Comment on Petition Filed by Purple Communications, Inc. Regarding the Provision of...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-09-11

    ... Commission's VRS Reform Order regarding the use of Internet Protocol Captioned Telephone Service (IP CTS... requested clarification would force Purple and other IP CTS providers to cease the provision of IP CTS using... of Internet Protocol Captioned Telephone Service (IP CTS) through Web or wireless technologies...

  18. Dynamic selection mechanism for quality of service aware web services

    NASA Astrophysics Data System (ADS)

    D'Mello, Demian Antony; Ananthanarayana, V. S.

    2010-02-01

    A web service is an interface to a software component that can be accessed via standard Internet protocols. Web service technology enables application-to-application communication and interoperability. The increasing number of web service providers throughout the globe has produced numerous web services providing the same or similar functionality. This necessitates tools and techniques for searching for suitable services available over the Web. UDDI (universal description, discovery and integration) is the first initiative to find suitable web services based on the requester's functional demands. However, the requester's requirements may also include non-functional aspects like quality of service (QoS). In this paper, the authors define a QoS model for QoS-aware and business-driven web service publishing and selection. The authors propose a QoS requirement format that lets requesters specify their complex demands on QoS for web service selection, and define a tree structure called the quality constraint tree (QCT) to represent the requester's varied requirements on QoS properties with varied preferences. The paper proposes a QoS-broker-based architecture for web service selection, which facilitates requesters specifying their QoS requirements in order to select a qualitatively optimal web service. A web service selection algorithm is presented that ranks functionally similar web services based on the degree of satisfaction of the requester's QoS requirements and preferences. The paper defines web service provider qualities to distinguish qualitatively competitive web services, and also presents the modelling and selection mechanism for the requester's alternative constraints defined on QoS. The authors implement the QoS-broker-based system to prove the correctness of the proposed web service selection mechanism.
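
    A minimal sketch of the selection idea (not the authors' algorithm): rank functionally similar services by the weighted fraction of the requester's QoS demands each one satisfies.

```python
# Score and rank candidate services against weighted QoS demands.
def qos_score(offer, demands, weights):
    """Weighted fraction of QoS demands the offer satisfies."""
    met = sum(w for prop, w in weights.items()
              if offer.get(prop, 0) >= demands[prop])
    return met / sum(weights.values())

demands = {"availability": 0.99, "throughput": 100}  # requester's constraints
weights = {"availability": 2.0, "throughput": 1.0}   # requester's preferences
offers = {
    "svcA": {"availability": 0.995, "throughput": 80},
    "svcB": {"availability": 0.990, "throughput": 120},
}
ranking = sorted(offers, key=lambda s: qos_score(offers[s], demands, weights),
                 reverse=True)
print(ranking)  # ['svcB', 'svcA']
```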

  19. Implementations of Sensor Webs Utilizing Uninhabited Aerial Systems

    NASA Technical Reports Server (NTRS)

    Sullivan, Donald V.

    2009-01-01

    In this paper we describe the web services, processes, communication protocols and ad-hoc service chains utilized in the late summer and early fall 2007 Ikhana UAS response to the wildfires burning in southern California. Additionally, we describe the lessons learned that will be applied to the upcoming Global Hawk UAS Aura Satellite Validation Experiment planned for early 2009.

  20. Ship to Shore Data Communication and Prioritization

    DTIC Science & Technology

    2011-12-01

    First Out; FTP File Transfer Protocol; GCCS-M Global Command and Control System Maritime; HAIPE High Assurance Internet Protocol Encryptor; HTTP Hypertext Transfer Protocol (world wide web protocol); IBS Integrated Bar Code System; IDEF0 Integration Definition; IER Information Exchange Requirements ... INTEL Intelligence; IP Internet Protocol; IPT Integrated Product Team; ISEA In-Service Engineering Agent; ISNS Integrated Shipboard Network System; IT

  1. The 10 Hottest Technologies in Telecom.

    ERIC Educational Resources Information Center

    Flanagan, Patrick

    1997-01-01

    Presents the fourth annual listing of the 10 "hottest" telecommunications technologies. Describes Web broadcasting, remote-access servers, extranets, Internet telephony, enterprise network directory services, Web site management tools, IP (Internet Protocol) switching, wavelength division multiplexing, digital subscriber lines, and…

  2. Distributed run of a one-dimensional model in a regional application using SOAP-based web services

    NASA Astrophysics Data System (ADS)

    Smiatek, Gerhard

    This article describes the setup of a distributed computing system in Perl. It facilitates the parallel run of a one-dimensional environmental model on a number of simple network PC hosts. The system uses Simple Object Access Protocol (SOAP) driven web services offering the model run on remote hosts and a multi-thread environment distributing the work and accessing the web services. Its application is demonstrated in a regional run of a process-oriented biogenic emission model for the area of Germany. Within a network consisting of up to seven web services implemented on Linux and MS-Windows hosts, a performance increase of approximately 400% has been reached compared to a model run on the fastest single host.
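
    A minimal Python transcription of the dispatch pattern described above: a thread pool farms grid cells out to SOAP services on several hosts. The WSDL URLs and the run_model operation are hypothetical stand-ins for the emission-model service.

```python
# Distribute one-dimensional model runs across SOAP web service hosts.
from concurrent.futures import ThreadPoolExecutor

from zeep import Client

hosts = ["http://node1.example.org/model?WSDL",   # placeholder hosts
         "http://node2.example.org/model?WSDL"]
clients = [Client(url) for url in hosts]

def run_cell(indexed_cell):
    index, cell = indexed_cell
    # Round-robin the grid cells over the available service hosts.
    return clients[index % len(clients)].service.run_model(cell)  # hypothetical operation

grid_cells = [{"lat": 48.1 + 0.1 * i, "lon": 11.5} for i in range(100)]
with ThreadPoolExecutor(max_workers=2 * len(hosts)) as pool:
    results = list(pool.map(run_cell, enumerate(grid_cells)))
```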

  3. Transforming War Fighting through the Use of Service Based Architecture (SBA) Technology

    DTIC Science & Technology

    2006-05-04

    near-real-time video & telemetry to users on the network using standard web-based protocols – Provides web-based access to archived video files ... MTI & Target Tracks Service Capabilities – Disseminates near-real-time MTI and Target Tracks to users on the network based on consumer-specified geographic ... filter ... IBS SIGINT Service Capabilities – Disseminates near-real-time IBS SIGINT data to users on the network based on a consumer-specified geographic filter

  4. jORCA: easily integrating bioinformatics Web Services.

    PubMed

    Martín-Requena, Victoria; Ríos, Javier; García, Maximiliano; Ramírez, Sergio; Trelles, Oswaldo

    2010-02-15

    Web services technology is becoming the option of choice to deploy bioinformatics tools that are universally available. One of the major strengths of this approach is that it supports machine-to-machine interoperability over a network. However, a weakness of this approach is that the various Web Services differ in their definition and invocation protocols, as well as their communication and data formats, and this presents a barrier to service interoperability. jORCA is a desktop client aimed at facilitating seamless integration of Web Services. It does so by making a uniform representation of the different web resources, supporting scalable service discovery, and automatic composition of workflows. Usability is at the top of the jORCA agenda; thus it is a highly customizable and extensible application that accommodates a broad range of user skills, featuring double-click invocation of services in conjunction with advanced execution control, on-the-fly data standardization, extensibility of viewer plug-ins, drag-and-drop editing capabilities, plus a file-based browsing style and organization of favourite tools. The integration of bioinformatics Web Services is thus made easier, supporting a wider range of users.

  5. ReSTful OSGi Web Applications Tutorial

    NASA Technical Reports Server (NTRS)

    Shams, Khawaja; Norris, Jeff

    2008-01-01

    This slide presentation accompanies a tutorial on ReSTful (Representational State Transfer) web applications. Built on the Open Services Gateway initiative (OSGi), a ReSTful application uses the HTTP protocol to enable developers to offer services to a diverse variety of clients: from shell scripts to sophisticated Java application suites. The tutorial also covers rapid development in Eclipse, use of the Eclipse debugger, testing the application, and easy export to production servers.

  6. MsLDR-creator: a web service to design msLDR assays.

    PubMed

    Bormann, Felix; Dahl, Andreas; Sers, Christine

    2012-03-01

    MsLDR-creator is a free web service for designing assays for the new DNA methylation detection method msLDR. The service provides the user with all the necessary information about the oligonucleotides required for the measurement of a given CpG within a sequence of interest. The parameters are calculated by the nearest-neighbour approach to achieve optimal behaviour during the experimental procedure. In addition, to guarantee a good start using msLDR, further information, such as protocols and hints and tricks, is provided.

  7. Presence Management and Merging Presence Information for NGN Services

    NASA Astrophysics Data System (ADS)

    Schumann, Sebastian; Mikoczy, Eugen; Podhradsky, Pavol; Muruchi, Feliciano; Maruschke, Michael

    This paper describes an approach for interworking scenarios between Session Initiation Protocol (SIP) based and non-SIP based frameworks (e.g. web services) in the case of the presence management service. The characteristics of the concept of centralized presence management are introduced.

  8. Co-creating and Evaluating a Web-app Mapping Real-World Health Care Services for Students: The servi-Share Protocol.

    PubMed

    Montagni, Ilaria; Langlois, Emmanuel; Wittwer, Jérôme; Tzourio, Christophe

    2017-02-16

    University students aged 18-30 years are a population group reporting low access to health care services, with high rates of avoidance and delay of medical care. This group also reports not having appropriate information about available health care services. However, university students are at risk for several health problems, and regular medical consultations are recommended in this period of life. New digital devices are popular among the young, and Web-apps can be used to facilitate easy access to information regarding health care services. A small number of electronic health (eHealth) tools have been developed with the purpose of displaying real-world health care services, and little is known about how such eHealth tools can improve access to care. This paper describes the processes of co-creating and evaluating the beta version of a Web-app aimed at mapping and describing free or low-cost real-world health care services available in the Bordeaux area of France, which is specifically targeted to university students. The co-creation process involves: (1) exploring the needs of students to know and access real-world health care services; (2) identifying the real-world health care services of interest for students; and (3) deciding on a user interface, and developing the beta version of the Web-app. Finally, the evaluation process involves: (1) testing the beta version of the Web-app with the target audience (university students aged 18-30 years); (2) collecting their feedback via a satisfaction survey; and (3) planning a long-term evaluation. The co-creation process of the beta version of the Web-app was completed in August 2016 and is described in this paper. The evaluation process started on September 7, 2016. The project was completed in December 2016 and implementation of the Web-app is ongoing. Web-apps are an innovative way to increase the health literacy of young people in terms of delivery of and access to health care. The creation of Web-apps benefits from the involvement of stakeholders (eg, students and health care providers) to correctly identify the real-world health care services to be displayed. ©Ilaria Montagni, Emmanuel Langlois, Jérôme Wittwer, Christophe Tzourio. Originally published in JMIR Research Protocols (http://www.researchprotocols.org), 16.02.2017.

  9. A Web Service Protocol Realizing Interoperable Internet of Things Tasking Capability.

    PubMed

    Huang, Chih-Yuan; Wu, Cheng-Hung

    2016-08-31

    The Internet of Things (IoT) is an infrastructure that interconnects uniquely-identifiable devices using the Internet. By interconnecting everyday appliances, various monitoring and physical mashup applications can be constructed to improve people's daily lives. In general, IoT devices provide two main capabilities: sensing and tasking. While the sensing capability is similar to the World-Wide Sensor Web, this research focuses on the tasking capability. Currently, however, IoT devices created by different manufacturers follow different proprietary protocols and are locked into many closed ecosystems. This heterogeneity impedes the interconnection between IoT devices and undermines the potential of the IoT. To address this issue, this research proposes an interoperable solution called the tasking capability description, which allows users to control different IoT devices through a uniform web service interface. This paper demonstrates the contribution of the proposed solution by interconnecting different IoT devices for different applications. In addition, the proposed solution is integrated with the OGC SensorThings API standard, a web service standard defined for the IoT sensing capability. Consequently, the Extended SensorThings API can realize both IoT sensing and tasking capabilities in an integrated and interoperable manner.
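
    As a concrete illustration of the tasking workflow described above, the following minimal Python sketch submits a task through a SensorThings-style REST interface. The base URL, entity layout and tasking parameters are assumptions for illustration only, not the interface actually defined in the paper.

        import json
        import urllib.request

        # Hypothetical endpoint of an Extended SensorThings-style service;
        # the paper's real URLs and entity names may differ.
        BASE = "http://example.org/ExtendedSensorThings/v1.0"

        def submit_task(tasking_capability_id, parameters):
            """POST a tasking request against a TaskingCapability entity."""
            task = {
                "TaskingCapability": {"@iot.id": tasking_capability_id},
                # Parameter names depend on the device's tasking
                # capability description.
                "taskingParameters": parameters,
            }
            req = urllib.request.Request(
                f"{BASE}/Tasks",
                data=json.dumps(task).encode("utf-8"),
                headers={"Content-Type": "application/json"},
                method="POST",
            )
            with urllib.request.urlopen(req) as resp:
                return json.loads(resp.read())

        # Example: ask a (hypothetical) smart lamp to switch on at 50%.
        # print(submit_task(42, {"power": "on", "brightness": 50}))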

  10. Analyzing multiple data sets by interconnecting RSAT programs via SOAP Web services: an example with ChIP-chip data.

    PubMed

    Sand, Olivier; Thomas-Chollier, Morgane; Vervisch, Eric; van Helden, Jacques

    2008-01-01

    This protocol shows how to access the Regulatory Sequence Analysis Tools (RSAT) via a programmatic interface in order to automate the analysis of multiple data sets. We describe the steps for writing a Perl client that connects to the RSAT Web services and implements a workflow to discover putative cis-acting elements in promoters of gene clusters. In the presented example, we apply this workflow to lists of transcription factor target genes resulting from ChIP-chip experiments. For each factor, the protocol predicts the binding motifs by detecting significantly overrepresented hexanucleotides in the target promoters and generates a feature map that displays the positions of putative binding sites along the promoter sequences. This protocol is addressed to bioinformaticians and biologists with programming skills (notions of Perl). Running time is approximately 6 min on the example data set.
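
    The same workflow can be sketched programmatically. The fragment below uses Python with the zeep SOAP client rather than the Perl client described in the protocol; the WSDL location and operation signatures are placeholders, so consult the RSAT documentation for the real ones.

        from zeep import Client  # a common Python SOAP client (pip install zeep)

        # Placeholder WSDL and operation names -- check the RSAT site for
        # the actual service descriptor and parameters.
        client = Client("http://example.org/rsat/web_services/RSATWS.wsdl")

        # Retrieve promoter sequences for a list of target genes
        # (hypothetical call mirroring the protocol's first step).
        seqs = client.service.retrieve_seq(
            organism="Saccharomyces_cerevisiae",
            query=["GAL1", "GAL7", "GAL10"])

        # Detect significantly over-represented hexanucleotides in those
        # promoters, as the protocol does for each transcription factor.
        motifs = client.service.oligo_analysis(sequence=seqs, length=6)
        print(motifs)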

  11. Group Membership Based Authorization to CADC Resources

    NASA Astrophysics Data System (ADS)

    Damian, A.; Dowler, P.; Gaudet, S.; Hill, N.

    2012-09-01

    The Group Membership Service (GMS), implemented at the Canadian Astronomy Data Centre (CADC), is a prototype of what could eventually be an IVOA standard for a distributed and interoperable group membership protocol. Group membership is the core authorization concept that enables teamwork and collaboration amongst astronomers accessing distributed resources and services. The service integrates and complements other access control related IVOA standards such as single-sign-on (SSO) using X.509 proxy certificates and the Credential Delegation Protocol (CDP). The GMS has been used at CADC for several years now, initially as a subsystem and then as a stand-alone Web service. It is part of the authorization mechanism for controlling the access to restricted Web resources as well as the VOSpace service hosted by the CADC. We present the role that GMS plays within the access control system at the CADC, including the functionality of the service and how the different CADC services make use of it to assert user authorization to resources. We also describe the main advantages and challenges of using the service as well as future work to increase its robustness and functionality.
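
    A minimal sketch of how a relying service might query such a group membership service follows; the resource layout and response conventions are assumptions for illustration, not the actual CADC interface.

        import ssl
        import urllib.error
        import urllib.request

        # Hypothetical GMS resource layout; the CADC service defines its own.
        GMS = "https://gms.example.org/gms"

        def is_member(group, userid, certfile):
            """Return True if userid belongs to group, authenticating with
            an X.509 proxy certificate as in IVOA single-sign-on."""
            ctx = ssl.create_default_context()
            ctx.load_cert_chain(certfile)  # client proxy certificate + key
            opener = urllib.request.build_opener(
                urllib.request.HTTPSHandler(context=ctx))
            try:
                opener.open(f"{GMS}/groups/{group}/members/{userid}")
                return True          # 200 OK -> user is a member
            except urllib.error.HTTPError as e:
                if e.code == 404:    # not found -> not a member
                    return False
                raise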

  12. Implementation and Analysis of Real-Time Streaming Protocols.

    PubMed

    Santos-González, Iván; Rivero-García, Alexandra; Molina-Gil, Jezabel; Caballero-Gil, Pino

    2017-04-12

    Communication media have become the primary way of interaction thanks to the discovery and innovation of many new technologies. One of the most widely used communication systems today is video streaming, which is constantly evolving. Such communications are a good alternative to face-to-face meetings, and are therefore very useful for coping with many problems caused by distance. However, they suffer from different issues such as bandwidth limitation, network congestion, energy efficiency, cost, reliability and connectivity. Hence, the quality of service and the quality of experience are considered the two most important issues for this type of communication. This work presents a complete comparative study of two of the most widely used video streaming protocols, the Real Time Streaming Protocol (RTSP) and Web Real-Time Communication (WebRTC). In addition, this paper proposes two new mobile applications that implement these protocols on Android, with the objective of measuring how they are influenced by the two factors that most affect streaming quality of service: the connection establishment time and the stream reception time. The new video streaming applications are also compared with the most popular video streaming applications for Android, and the experimental results of the analysis show that the developed WebRTC implementation improves on the performance of the most popular video streaming applications with respect to the stream packet delay.
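
    One of the two measured aspects, connection establishment time, can be approximated with a few lines of socket code. The sketch below times an RTSP OPTIONS exchange; the host, port and stream path are placeholders, and this is not the instrumentation used in the paper.

        import socket
        import time

        def rtsp_setup_time(host, port=554, path="stream"):
            """Time the TCP connection plus the reply to an RTSP OPTIONS
            request -- one simple proxy for connection establishment time."""
            start = time.perf_counter()
            with socket.create_connection((host, port), timeout=5) as sock:
                request = (f"OPTIONS rtsp://{host}/{path} RTSP/1.0\r\n"
                           "CSeq: 1\r\n\r\n")
                sock.sendall(request.encode("ascii"))
                sock.recv(4096)              # wait for the RTSP response
            return time.perf_counter() - start

        # print(rtsp_setup_time("camera.example.org"))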

  13. Browsing the World Wide Web from behind a firewall

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Simons, R.W.

    1995-02-01

    The World Wide Web provides a unified method of access to various information services on the Internet via a variety of protocols. Mosaic and other browsers give users a graphical interface to the Web that is easier to use and more visually pleasing than any other common Internet information service today. The availability of information via the Web and the number of users accessing it have both grown rapidly in the last year. The interest and investment of commercial firms in this technology suggest that in the near future, access to the Web may become as necessary to doing business as a telephone. This is problematical for organizations that use firewalls to protect their internal networks from the Internet. Allowing all the protocols and types of information found in the Web to pass their firewall will certainly increase the risk of attack by hackers on the Internet. But not allowing access to the Web could be even more dangerous, as frustrated users of the internal network are either unable to do their jobs, or find creative new ways to get around the firewall. The solution to this dilemma adopted at Sandia National Laboratories is described. Discussion also covers risks of accessing the Web, design alternatives considered, and trade-offs used to find the proper balance between access and protection.

  14. Experiences with http/WebDAV protocols for data access in high throughput computing

    NASA Astrophysics Data System (ADS)

    Bernabeu, Gerard; Martinez, Francisco; Acción, Esther; Bria, Arnau; Caubet, Marc; Delfino, Manuel; Espinal, Xavier

    2011-12-01

    In the past, access to remote storage was considered to be at least one order of magnitude slower than local disk access. Improvements in network technology provide the alternative of using remote disks: such accesses can today reach throughput levels similar to or exceeding those of local disks. Common choices for access protocols in the WLCG collaboration are RFIO, [GSI]DCAP, GRIDFTP, XROOTD and NFS. The HTTP protocol is a promising alternative, as it is simple and lightweight. It also enables the use of standard technologies such as HTTP caching or load balancing, which can be used to improve service resilience and scalability or to boost performance for some use cases seen in HEP, such as the "hot files". WebDAV extensions allow writing data, giving HTTP enough functionality to work as a remote access protocol. This paper will show our experiences with the WebDAV door for dCache, in terms of functionality and performance, applied to some of the HEP workflows in the LHC Tier1 at PIC.
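
    The appeal of HTTP/WebDAV for data access is that standard client tooling suffices. A minimal sketch, assuming a hypothetical WebDAV door URL and leaving authentication aside (a real dCache door would require X.509 or token credentials):

        import requests  # pip install requests

        # Placeholder WebDAV door; paths and credentials are site-specific.
        DOOR = "https://webdav.example.org/pnfs/example.org/data"

        # WebDAV's PUT support is what turns plain HTTP into a full
        # remote-access protocol: write a file...
        requests.put(f"{DOOR}/test.dat", data=b"payload")

        # ...and read part of it back with a byte-range GET, the kind of
        # partial read HEP workflows use for "hot files".
        r = requests.get(f"{DOOR}/test.dat", headers={"Range": "bytes=0-3"})
        print(r.status_code, r.content)  # expect 206 Partial Content, b'payl'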

  15. Development and Integration of WWW-Based Services in an Existing University Environment.

    ERIC Educational Resources Information Center

    Garofalakis, John; Kappos, Panagiotis; Tsakalidis, Athanasios; Tsaknakis, John; Tzimas, Giannis; Vassiliadis, Vassilios

    This paper describes the experience and the problems solved in the process of developing and integrating advanced World Wide Web-based services into the University of Patras (Greece) system. In addition to basic network services (e.g., e-mail, file transfer protocol), the final system will integrate the following set of advanced services: a…

  16. Providing web-based mental health services to at-risk women

    PubMed Central

    2011-01-01

    Background We examined the feasibility of providing web-based mental health services, including synchronous internet video conferencing of an evidence-based support/education group, to at-risk women, specifically poor lone mothers. The objectives of this study were to: (i) adapt a face-to-face support/education group intervention to a web-based format for lone mothers, and (ii) evaluate lone mothers' response to web-based services, including an online video conferencing group intervention program. Methods Participating mothers were recruited through advertisements. To adapt the face-to-face intervention to a web-based format, we evaluated participant motivation through focus group/key informant interviews (n = 7), adapted the intervention training manual for a web-based environment and provided a computer training manual. To evaluate response to web-based services, we provided the intervention to two groups of lone mothers (n = 15). Pre-post quantitative evaluation of mood, self-esteem, social support and parenting was done. Post-intervention follow-up interviews explored responses to the group and to using technology to access a health service. Participants received $20 per occasion of data collection. Interviews were taped, transcribed, and content analysis was used to code and interpret the data. Adherence to the intervention protocol was evaluated. Results Mothers participating in this project experienced multiple difficulties, including financial and mood problems. We adapted the intervention training manual for use in a web-based group environment and ensured adherence to the intervention protocol based on viewing videoconferencing group sessions and discussion with the leaders. Participant responses to the group intervention included decreased isolation, and increased knowledge and confidence in themselves and their parenting; the responses closely matched those of mothers who obtained the same service in face-to-face groups. Pre- and post-group quantitative evaluations did not show significant improvements on measures, although the study was not powered to detect these. Conclusions We demonstrated that an evidence-based group intervention program for lone mothers developed and evaluated in a face-to-face context transferred well to an online video conferencing format, both in terms of group process and outcomes. PMID:21854563

  17. Providing web-based mental health services to at-risk women.

    PubMed

    Lipman, Ellen L; Kenny, Meghan; Marziali, Elsa

    2011-08-19

    We examined the feasibility of providing web-based mental health services, including synchronous internet video conferencing of an evidence-based support/education group, to at-risk women, specifically poor lone mothers. The objectives of this study were to: (i) adapt a face-to-face support/education group intervention to a web-based format for lone mothers, and (ii) evaluate lone mothers' response to web-based services, including an online video conferencing group intervention program. Participating mothers were recruited through advertisements. To adapt the face-to-face intervention to a web-based format, we evaluated participant motivation through focus group/key informant interviews (n = 7), adapted the intervention training manual for a web-based environment and provided a computer training manual. To evaluate response to web-based services, we provided the intervention to two groups of lone mothers (n = 15). Pre-post quantitative evaluation of mood, self-esteem, social support and parenting was done. Post-intervention follow-up interviews explored responses to the group and to using technology to access a health service. Participants received $20 per occasion of data collection. Interviews were taped, transcribed, and content analysis was used to code and interpret the data. Adherence to the intervention protocol was evaluated. Mothers participating in this project experienced multiple difficulties, including financial and mood problems. We adapted the intervention training manual for use in a web-based group environment and ensured adherence to the intervention protocol based on viewing videoconferencing group sessions and discussion with the leaders. Participant responses to the group intervention included decreased isolation, and increased knowledge and confidence in themselves and their parenting; the responses closely matched those of mothers who obtained the same service in face-to-face groups. Pre- and post-group quantitative evaluations did not show significant improvements on measures, although the study was not powered to detect these. We demonstrated that an evidence-based group intervention program for lone mothers developed and evaluated in a face-to-face context transferred well to an online video conferencing format, both in terms of group process and outcomes.

  18. Mobile Cloud Computing with SOAP and REST Web Services

    NASA Astrophysics Data System (ADS)

    Ali, Mushtaq; Fadli Zolkipli, Mohamad; Mohamad Zain, Jasni; Anwar, Shahid

    2018-05-01

    Mobile computing in conjunction with mobile web services offers an approach by which the limitations of mobile devices may be tackled. Mobile web services are based on two technologies, SOAP and REST, which work with existing protocols to develop web services. Each approach has its own distinct features, but given the resource constraints of mobile devices, the better of the two is the one that minimizes computation and transmission overhead during offloading. Transferring load from a mobile device to remote servers for execution is called computational offloading. There are numerous approaches that make computational offloading a viable solution for easing the resource constraints of mobile devices, yet a dynamic method of computational offloading is always required for a smooth and simple migration of complex tasks. The intention of this work is to present a distinctive approach that does not engage mobile resources for long periods. The concept of web services is utilized in our work to delegate computationally intensive tasks for remote execution. We tested both the SOAP and REST web service approaches for mobile computing, considering two parameters in our lab experiments: execution time and energy consumption. The results show that RESTful web service execution is far better than executing the same application through the SOAP web service approach, in terms of both execution time and energy consumption. In experiments with the developed prototype matrix multiplication app, REST execution time is about 200% better than the SOAP execution approach; in terms of energy consumption, REST execution is about 250% better than the SOAP execution approach.
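
    The style of measurement described can be reproduced with a small harness. The sketch below times one REST call and one SOAP call against placeholder endpoints standing in for the same remote service; it illustrates the comparison method, not the paper's actual prototype.

        import time
        import requests  # pip install requests

        # Both endpoints are placeholders standing in for the same remote
        # matrix-multiplication service exposed two ways.
        REST_URL = "http://example.org/api/matmul"
        SOAP_URL = "http://example.org/soap/matmul"

        SOAP_BODY = """<?xml version="1.0"?>
        <soap:Envelope xmlns:soap="http://schemas.xmlsoap.org/soap/envelope/">
          <soap:Body><matmul><size>100</size></matmul></soap:Body>
        </soap:Envelope>"""

        def timed(call):
            start = time.perf_counter()
            call()
            return time.perf_counter() - start

        rest_t = timed(lambda: requests.post(REST_URL, json={"size": 100}))
        soap_t = timed(lambda: requests.post(
            SOAP_URL, data=SOAP_BODY, headers={"Content-Type": "text/xml"}))
        # The SOAP call carries the XML envelope overhead the paper measures.
        print(f"REST: {rest_t:.3f}s  SOAP: {soap_t:.3f}s")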

  19. Cool Apps: Building Cryospheric Data Applications with Standards-Based Service Oriented Architecture

    NASA Astrophysics Data System (ADS)

    Oldenburg, J.; Truslove, I.; Collins, J. A.; Liu, M.; Lewis, S.; Brodzik, M.

    2012-12-01

    The National Snow and Ice Data Center (NSIDC) holds a large collection of cryospheric data, and is involved in a number of informatics research and development projects aimed at improving the discoverability and accessibility of these data. To develop high-quality software in a timely manner, we have adopted a Service-Oriented Architecture (SOA) approach for our core technical infrastructure development. Data services at NSIDC are internally exposed to other tools and applications through standards-based service interfaces. These standards include OAI-PMH (Open Archives Initiative Protocol for Metadata Harvesting), various OGC (Open Geospatial Consortium) standards including WMS (Web Map Service) and WFS (Web Feature Service), ESIP (Federation of Earth Sciences Information Partners) OpenSearch, and NSIDC-defined service endpoints which follow a RESTful architecture. By taking a standards-based approach, we are able to use off-the-shelf tools and libraries to consume, translate and broker these data services, and thus develop applications faster. Additionally, by exposing public interfaces to these services we provide valuable data services to technical collaborators; for example, NASA Reverb (http://reverb.echo.nasa.gov) uses NSIDC's WMS services. Our latest generation of web applications consume these data services directly. The most complete example of this is the Operation IceBridge Data Portal (http://nsidc.org/icebridge/portal), which depends on many of the aforementioned services, retrieving data in several ways. The maps it displays are obtained through the WMS and WFS protocols from a MapServer instance hosted at NSIDC. Links to the scientific data collected on Operation IceBridge campaigns are obtained through ESIP OpenSearch requests to service providers that encapsulate our metadata databases. These standards-based web services are also developed at NSIDC and are designed to be used independently of the Portal. This poster provides a visual representation of the relationships described above, with additional details and examples, and more generally outlines the benefits and challenges of this SOA approach.
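
    As an illustration of the OpenSearch piece of this architecture, the following sketch issues a keyword/space/time query and lists the titles in the returned Atom feed. The endpoint and parameter names are placeholders; a real client would read them from the service's OpenSearch description document.

        import urllib.parse
        import urllib.request
        import xml.etree.ElementTree as ET

        # Placeholder OpenSearch endpoint.
        ENDPOINT = "https://example.org/opensearch"

        params = urllib.parse.urlencode({
            "q": "sea ice",                       # free-text keyword
            "bbox": "-180,60,180,90",             # spatial filter (W,S,E,N)
            "time_start": "2012-01-01T00:00:00Z",
            "time_end": "2012-12-31T23:59:59Z",
        })
        with urllib.request.urlopen(f"{ENDPOINT}?{params}") as resp:
            feed = ET.parse(resp)

        # OpenSearch results typically come back as an Atom feed of entries.
        ns = {"atom": "http://www.w3.org/2005/Atom"}
        for entry in feed.getroot().findall("atom:entry", ns):
            print(entry.findtext("atom:title", namespaces=ns))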

  20. New Generation Sensor Web Enablement

    PubMed Central

    Bröring, Arne; Echterhoff, Johannes; Jirka, Simon; Simonis, Ingo; Everding, Thomas; Stasch, Christoph; Liang, Steve; Lemmens, Rob

    2011-01-01

    Many sensor networks have been deployed to monitor Earth’s environment, and more will follow in the future. Environmental sensors have improved continuously by becoming smaller, cheaper, and more intelligent. Due to the large number of sensor manufacturers and differing accompanying protocols, integrating diverse sensors into observation systems is not straightforward. A coherent infrastructure is needed to treat sensors in an interoperable, platform-independent and uniform way. The concept of the Sensor Web reflects such a kind of infrastructure for sharing, finding, and accessing sensors and their data across different applications. It hides the heterogeneous sensor hardware and communication protocols from the applications built on top of it. The Sensor Web Enablement initiative of the Open Geospatial Consortium standardizes web service interfaces and data encodings which can be used as building blocks for a Sensor Web. This article illustrates and analyzes the recent developments of the new generation of the Sensor Web Enablement specification framework. Further, we relate the Sensor Web to other emerging concepts such as the Web of Things and point out challenges and resulting future work topics for research on Sensor Web Enablement. PMID:22163760
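
    For a flavour of the standardized interfaces involved, the sketch below issues a GetObservation request against a Sensor Observation Service, one of the Sensor Web Enablement service interfaces, using its key-value-pair binding. The endpoint, offering and observed-property identifiers are invented for illustration.

        import urllib.parse
        import urllib.request

        # Placeholder Sensor Observation Service endpoint.
        SOS = "http://example.org/sos"

        params = urllib.parse.urlencode({
            "service": "SOS",
            "version": "2.0.0",
            "request": "GetObservation",
            "offering": "http://example.org/offering/weather-station-1",
            "observedProperty": "http://example.org/property/air-temperature",
        })
        with urllib.request.urlopen(f"{SOS}?{params}") as resp:
            print(resp.read()[:500])   # O&M-encoded observations (XML)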

  1. Improving Internet Archive Service through Proxy Cache.

    ERIC Educational Resources Information Center

    Yu, Hsiang-Fu; Chen, Yi-Ming; Wang, Shih-Yong; Tseng, Li-Ming

    2003-01-01

    Discusses file transfer protocol (FTP) servers for downloading archives (files with particular file extensions), and the change to HTTP (Hypertext Transfer Protocol) with increased Web use. Topics include the Archie server; proxy cache servers; and how to improve the hit rate of archives by a combination of caching and better searching mechanisms.…

  2. A Web Service Protocol Realizing Interoperable Internet of Things Tasking Capability

    PubMed Central

    Huang, Chih-Yuan; Wu, Cheng-Hung

    2016-01-01

    The Internet of Things (IoT) is an infrastructure that interconnects uniquely-identifiable devices using the Internet. By interconnecting everyday appliances, various monitoring and physical mashup applications can be constructed to improve people's daily lives. In general, IoT devices provide two main capabilities: sensing and tasking. While the sensing capability is similar to the World-Wide Sensor Web, this research focuses on the tasking capability. Currently, however, IoT devices created by different manufacturers follow different proprietary protocols and are locked into many closed ecosystems. This heterogeneity impedes the interconnection between IoT devices and undermines the potential of the IoT. To address this issue, this research proposes an interoperable solution called the tasking capability description, which allows users to control different IoT devices through a uniform web service interface. This paper demonstrates the contribution of the proposed solution by interconnecting different IoT devices for different applications. In addition, the proposed solution is integrated with the OGC SensorThings API standard, a web service standard defined for the IoT sensing capability. Consequently, the Extended SensorThings API can realize both IoT sensing and tasking capabilities in an integrated and interoperable manner. PMID:27589759

  3. The Osseus platform: a prototype for advanced web-based distributed simulation

    NASA Astrophysics Data System (ADS)

    Franceschini, Derrick; Riecken, Mark

    2016-05-01

    Recent technological advances in web-based distributed computing and database technology have made possible a deeper and more transparent integration of some modeling and simulation applications. Despite these advances towards true integration of capabilities, disparate systems, architectures, and protocols will remain in the inventory for some time to come. These disparities present interoperability challenges for distributed modeling and simulation, whether the application is training, experimentation, or analysis. Traditional approaches call for building gateways to bridge between disparate protocols and retaining interoperability specialists. Challenges in reconciling data models also persist. These challenges and their traditional mitigation approaches directly contribute to higher costs, schedule delays, and frustration for the end users. Osseus is a prototype software platform originally funded as a research project by the Defense Modeling & Simulation Coordination Office (DMSCO) to examine interoperability alternatives using modern, web-based technology and taking inspiration from the commercial sector. Osseus provides tools and services for non-expert users to connect simulations, reducing the time and skill set needed to successfully connect disparate systems. The Osseus platform presents a web services interface that allows simulation applications to exchange data efficiently over local or wide area networks using modern techniques. Further, it provides Service Oriented Architecture capabilities such that finer-granularity components, such as individual models, can contribute to a simulation with minimal effort.

  4. Implementation and Analysis of Real-Time Streaming Protocols

    PubMed Central

    Santos-González, Iván; Rivero-García, Alexandra; Molina-Gil, Jezabel; Caballero-Gil, Pino

    2017-01-01

    Communication media have become the primary way of interaction thanks to the discovery and innovation of many new technologies. One of the most widely used communication systems today is video streaming, which is constantly evolving. Such communications are a good alternative to face-to-face meetings, and are therefore very useful for coping with many problems caused by distance. However, they suffer from different issues such as bandwidth limitation, network congestion, energy efficiency, cost, reliability and connectivity. Hence, the quality of service and the quality of experience are considered the two most important issues for this type of communication. This work presents a complete comparative study of two of the most widely used video streaming protocols, the Real Time Streaming Protocol (RTSP) and Web Real-Time Communication (WebRTC). In addition, this paper proposes two new mobile applications that implement these protocols on Android, with the objective of measuring how they are influenced by the two factors that most affect streaming quality of service: the connection establishment time and the stream reception time. The new video streaming applications are also compared with the most popular video streaming applications for Android, and the experimental results of the analysis show that the developed WebRTC implementation improves on the performance of the most popular video streaming applications with respect to the stream packet delay. PMID:28417949

  5. Cool Apps: Building Cryospheric Data Applications With Standards-Based Service Oriented Architecture

    NASA Astrophysics Data System (ADS)

    Collins, J. A.; Truslove, I.; Billingsley, B. W.; Oldenburg, J.; Brodzik, M.; Lewis, S.; Liu, M.

    2012-12-01

    The National Snow and Ice Data Center (NSIDC) holds a large collection of cryospheric data, and is involved in a number of informatics research and development projects aimed at improving the discoverability and accessibility of these data. To develop high-quality software in a timely manner, we have adopted a Service-Oriented Architecture (SOA) approach for our core technical infrastructure development. Data services at NSIDC are internally exposed to other tools and applications through standards-based service interfaces. These standards include OAI-PMH (Open Archives Initiative Protocol for Metadata Harvesting), various OGC (Open Geospatial Consortium) standards including WMS (Web Map Service) and WFS (Web Feature Service), ESIP (Federation of Earth Sciences Information Partners) OpenSearch, and NSIDC-specific RESTful services. By taking a standards-based approach, we are able to use off-the-shelf tools and libraries to consume, translate and broker these data services, and thus develop applications faster. Additionally, by exposing public interfaces to these services we provide valuable data services to technical collaborators; for example, NASA Reverb (http://reverb.echo.nasa.gov) uses NSIDC's WMS services. Our latest generation of web applications consume these data services directly. The most complete example of this is the Operation IceBridge Data Portal (http://nsidc.org/icebridge/portal) which depends on many of the aforementioned services, and clearly exhibits many of the advantages of building applications atop a service-oriented architecture. This presentation outlines the architectural approach and components and open standards and protocols adopted at NSIDC, demonstrates the interactions and uses of public and internal service interfaces currently powering applications including the IceBridge Data Portal, and outlines the benefits and challenges of this approach.

  6. Web Services Implementations at Land Process and Goddard Earth Sciences Distributed Active Archive Centers

    NASA Astrophysics Data System (ADS)

    Cole, M.; Bambacus, M.; Lynnes, C.; Sauer, B.; Falke, S.; Yang, W.

    2007-12-01

    NASA's vast array of scientific data within its Distributed Active Archive Centers (DAACs) is especially valuable to both traditional research scientists and the emerging market of Earth Science Information Partners. For example, the air quality science and management communities are increasingly using satellite-derived observations in their analyses and decision making. The Air Quality Cluster in the Federation of Earth Science Information Partners (ESIP) uses web infrastructures of interoperability, or Service Oriented Architecture (SOA), to extend data exploration, use, and analysis, and provides a user environment for DAAC products. In an effort to continually offer these NASA data to the broadest research community audience, and reusing emerging technologies, both NASA's Goddard Earth Science (GES) and Land Process (LP) DAACs have engaged in a web services pilot project. Through these projects both GES and LP have exposed data through the Open Geospatial Consortium's (OGC) Web Services standards. Reusing several existing applications and implementation techniques, GES and LP successfully exposed a variety of data through distributed systems for ingestion into multiple end-user systems. The results of this project will enable researchers worldwide to access some of NASA's GES and LP DAAC data through OGC protocols. This functionality encourages interdisciplinary research while increasing data use through advanced technologies. This paper will concentrate on the implementation and use of OGC Web Services, specifically Web Map and Web Coverage Services (WMS, WCS), at the GES and LP DAACs, and the value of these services within scientific applications, including integration with the DataFed air quality web infrastructure and the development of data analysis web applications.
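
    A WMS request of the kind these services answer is simple to construct. The following sketch fetches a global PNG map from a placeholder endpoint; the layer name is hypothetical and would come from the server's GetCapabilities response.

        import urllib.parse
        import urllib.request

        # Placeholder WMS endpoint and layer name.
        WMS = "https://example.org/wms"

        params = urllib.parse.urlencode({
            "SERVICE": "WMS",
            "VERSION": "1.1.1",
            "REQUEST": "GetMap",
            "LAYERS": "AIRS_temperature",   # hypothetical layer
            "STYLES": "",
            "SRS": "EPSG:4326",
            "BBOX": "-180,-90,180,90",
            "WIDTH": "1024",
            "HEIGHT": "512",
            "FORMAT": "image/png",
        })
        with urllib.request.urlopen(f"{WMS}?{params}") as resp:
            open("map.png", "wb").write(resp.read())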

  7. Web Services and Other Enhancements at the Northern California Earthquake Data Center

    NASA Astrophysics Data System (ADS)

    Neuhauser, D. S.; Zuzlewski, S.; Allen, R. M.

    2012-12-01

    The Northern California Earthquake Data Center (NCEDC) provides data archive and distribution services for seismological and geophysical data sets that encompass northern California. The NCEDC is enhancing its ability to deliver rapid information through Web Services. NCEDC Web Services use well-established web server and client protocols and REST software architecture to allow users to easily make queries using web browsers or simple program interfaces and to receive the requested data in real-time rather than through batch or email-based requests. Data are returned to the user in the appropriate format such as XML, RESP, or MiniSEED depending on the service, and are compatible with the equivalent IRIS DMC web services. The NCEDC is currently providing the following Web Services: (1) Station inventory and channel response information delivered in StationXML format, (2) Channel response information delivered in RESP format, (3) Time series availability delivered in text and XML formats, (4) Single channel and bulk data request delivered in MiniSEED format. The NCEDC is also developing a rich Earthquake Catalog Web Service to allow users to query earthquake catalogs based on selection parameters such as time, location or geographic region, magnitude, depth, azimuthal gap, and rms. It will return (in QuakeML format) user-specified results that can include simple earthquake parameters, as well as observations such as phase arrivals, codas, amplitudes, and computed parameters such as first motion mechanisms, moment tensors, and rupture length. The NCEDC will work with both IRIS and the International Federation of Digital Seismograph Networks (FDSN) to define a uniform set of web service specifications that can be implemented by multiple data centers to provide users with a common data interface across data centers. The NCEDC now hosts earthquake catalogs and waveforms from the US Department of Energy (DOE) Enhanced Geothermal Systems (EGS) monitoring networks. These data can be accessed through the above web services and through special NCEDC web pages.

  8. Pragmatic service development and customisation with the CEDA OGC Web Services framework

    NASA Astrophysics Data System (ADS)

    Pascoe, Stephen; Stephens, Ag; Lowe, Dominic

    2010-05-01

    The CEDA OGC Web Services framework (COWS) emphasises rapid service development by providing a lightweight layer of OGC web service logic on top of Pylons, a mature web application framework for the Python language. This approach gives developers a flexible web service development environment without compromising access to the full range of web application tools and patterns: Model-View-Controller paradigm, XML templating, Object-Relational-Mapper integration and authentication/authorization. We have found this approach useful for exploring evolving standards and implementing protocol extensions to meet the requirements of operational deployments. This paper outlines how COWS is being used to implement customised WMS, WCS, WFS and WPS services in a variety of web applications from experimental prototypes to load-balanced cluster deployments serving 10-100 simultaneous users. In particular we will cover 1) The use of Climate Science Modeling Language (CSML) in complex-feature aware WMS, WCS and WFS services, 2) Extending WMS to support applications with features specific to earth system science and 3) A cluster-enabled Web Processing Service (WPS) supporting asynchronous data processing. The COWS WPS underpins all backend services in the UK Climate Projections User Interface where users can extract, plot and further process outputs from a multi-dimensional probabilistic climate model dataset. The COWS WPS supports cluster job execution, result caching, execution time estimation and user management. The COWS WMS and WCS components drive the project-specific NCEO and QESDI portals developed by the British Atmospheric Data Centre. These portals use CSML as a backend description format and implement features such as multiple WMS layer dimensions and climatology axes that are beyond the scope of general purpose GIS tools and yet vital for atmospheric science applications.
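
    As an illustration of the asynchronous WPS pattern mentioned above, the sketch below submits an Execute request using the WPS 1.0.0 key-value-pair binding with stored, polled status. The endpoint, process identifier and inputs are placeholders, not actual COWS process names.

        import urllib.parse
        import urllib.request

        # Placeholder WPS endpoint and process identifier.
        WPS = "http://example.org/wps"

        params = urllib.parse.urlencode({
            "service": "WPS",
            "version": "1.0.0",
            "request": "Execute",
            "identifier": "ExtractClimateSample",        # hypothetical process
            "datainputs": "variable=tas;period=2020-2049",
            # Asynchronous execution: the server returns a status document
            # that the client polls while the job runs on the cluster.
            "storeExecuteResponse": "true",
            "status": "true",
        })
        with urllib.request.urlopen(f"{WPS}?{params}") as resp:
            print(resp.read()[:400])   # ExecuteResponse XML with a statusLocation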

  9. Using ESO Reflex with Web Services

    NASA Astrophysics Data System (ADS)

    Järveläinen, P.; Savolainen, V.; Oittinen, T.; Maisala, S.; Ullgrén, M.; Hook, R.

    2008-08-01

    ESO Reflex is a prototype graphical workflow system, based on Taverna, and primarily intended to be a flexible way of running ESO data reduction recipes along with other legacy applications and user-written tools. ESO Reflex can also readily use the Taverna Web Services features that are based on the Apache Axis SOAP implementation. Taverna is a general-purpose Web Service client and requires no programming to use such services. However, Taverna also has some restrictions: for example, no numerical types such as integers. In addition, the preferred binding style is document/literal wrapped, but most astronomical services publish the Axis default WSDL using the RPC/encoded style. Despite these minor limitations we have created a simple but very promising test VO workflow using the Sesame name resolver service at CDS Strasbourg, the Hubble SIAP server at the Multi-Mission Archive at Space Telescope (MAST) and the WESIX image cataloging and catalogue cross-referencing service at the University of Pittsburgh. ESO Reflex can also pass files and URIs via the PLASTIC protocol to visualisation tools and has its own viewer for VOTables. We picked these three Web Services to try to set up a realistic and useful ESO Reflex workflow. They also demonstrate ESO Reflex's ability to use many kinds of Web Services, because each of them requires a different interface. We describe each of these services in turn and comment on how it was used.

  10. Effective electron-density map improvement and structure validation on a Linux multi-CPU web cluster: The TB Structural Genomics Consortium Bias Removal Web Service.

    PubMed

    Reddy, Vinod; Swanson, Stanley M; Segelke, Brent; Kantardjieff, Katherine A; Sacchettini, James C; Rupp, Bernhard

    2003-12-01

    Anticipating a continuing increase in the number of structures solved by molecular replacement in high-throughput crystallography and drug-discovery programs, a user-friendly web service for automated molecular replacement, map improvement, bias removal and real-space correlation structure validation has been implemented. The service is based on an efficient bias-removal protocol, Shake&wARP, and implemented using EPMR and the CCP4 suite of programs, combined with various shell scripts and Fortran90 routines. The service returns improved maps, converted data files and real-space correlation and B-factor plots. User data are uploaded through a web interface and the CPU-intensive iteration cycles are executed on a low-cost Linux multi-CPU cluster using the Condor job-queuing package. Examples of map improvement at various resolutions are provided and include model completion and reconstruction of absent parts, sequence correction, and ligand validation in drug-target structures.

  11. TOKEN: Trustable Keystroke-Based Authentication for Web-Based Applications on Smartphones

    NASA Astrophysics Data System (ADS)

    Nauman, Mohammad; Ali, Tamleek

    Smartphones are increasingly being used to store personal information as well as to access sensitive data from the Internet and the cloud. Establishment of the identity of a user requesting information from smartphones is a prerequisite for secure systems in such scenarios. In the past, keystroke-based user identification has been successfully deployed on production-level mobile devices to mitigate the risks associated with naïve username/password based authentication. However, these approaches have two major limitations: they are not applicable to services where authentication occurs outside the domain of the mobile device - such as web-based services; and they often overly tax the limited computational capabilities of mobile devices. In this paper, we propose a protocol for keystroke dynamics analysis which allows web-based applications to make use of remote attestation and delegated keystroke analysis. The end result is an efficient keystroke-based user identification mechanism that strengthens traditional password protected services while mitigating the risks of user profiling by collaborating malicious web services.
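
    The keystroke features on which such analysis typically rests are easy to compute once press/release timestamps are captured. A minimal sketch with toy data follows; the paper's actual feature set and remote-attestation protocol are not reproduced here.

        from statistics import mean

        # Toy timing log: (key, press_time_ms, release_time_ms) triples such
        # as a client might capture and ship to a remote analysis service.
        events = [("p", 0, 95), ("a", 180, 260), ("s", 340, 430), ("s", 520, 600)]

        # Dwell time: how long each key is held down.
        dwell = [release - press for _, press, release in events]

        # Flight time: gap between releasing one key and pressing the next.
        flight = [events[i + 1][1] - events[i][2] for i in range(len(events) - 1)]

        # A simple per-user template could be the mean of these features; a
        # remote verifier compares fresh samples against the stored template.
        template = (mean(dwell), mean(flight))
        print("dwell/flight template:", template)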

  12. MAPI: towards the integrated exploitation of bioinformatics Web Services.

    PubMed

    Ramirez, Sergio; Karlsson, Johan; Trelles, Oswaldo

    2011-10-27

    Bioinformatics is commonly presented as a well-assorted list of available web resources. Although diversity of services is positive in general, the proliferation of tools, their dispersion and their heterogeneity complicate the integrated exploitation of such data processing capacity. To facilitate the construction of software clients and make integrated use of this variety of tools, we present a modular programmatic application interface (MAPI) that provides the necessary functionality for the uniform representation of Web Services metadata descriptors, including the management and invocation protocols of the services they represent. This document describes the main functionality of the framework and how it can be used to facilitate the deployment of new software under a unified structure of bioinformatics Web Services. A notable feature of MAPI is the modular organization of the functionality into different modules associated with specific tasks. This means that only the modules needed for the client have to be installed, and that the module functionality can be extended without the need to re-write the software client. The potential utility and versatility of the software library have been demonstrated by the implementation of several currently available clients that cover different aspects of integrated data processing, ranging from service discovery to service invocation with advanced features such as workflow composition and asynchronous service calls to multiple types of Web Services, including those registered in repositories (e.g. GRID-based, SOAP, BioMOBY, R-bioconductor, and others).

  13. Collaboratively Conceived, Designed and Implemented: Matching Visualization Tools with Geoscience Data Collections and Geoscience Data Collections with Visualization Tools via the ToolMatch Service.

    NASA Astrophysics Data System (ADS)

    Hoebelheinrich, N. J.; Lynnes, C.; West, P.; Ferritto, M.

    2014-12-01

    Two problems common to many geoscience domains are the difficulty of finding tools to work with a given dataset collection and, conversely, the difficulty of finding data for a known tool. A collaborative team from the Earth Science Information Partnership (ESIP) has come together to design and create a web service, called ToolMatch, to address these problems. The team began their efforts by defining an initial, relatively simple conceptual model that addresses the two use cases briefly described above. The conceptual model is expressed as an ontology using OWL (Web Ontology Language) and DCterms (Dublin Core Terms), and utilizes standard ontologies such as DOAP (Description of a Project), FOAF (Friend of a Friend), SKOS (Simple Knowledge Organization System) and DCAT (Data Catalog Vocabulary). The ToolMatch service takes advantage of various Semantic Web and Web standards, such as OpenSearch, RESTful web services, SWRL (Semantic Web Rule Language) and SPARQL (Simple Protocol and RDF Query Language). The first version of the ToolMatch service was deployed in early fall 2014. While more complete testing is required, a number of communities besides ESIP member organizations have expressed interest in collaborating to create, test and use the service and incorporate it into their own web pages, tools and/or services, including the USGS Data Catalog service, DataONE, the Deep Carbon Observatory, the Virtual Solar Terrestrial Observatory (VSTO), and the U.S. Global Change Research Program. In this session, presenters will discuss the inception and development of the ToolMatch service, the collaborative process used to design, refine, and test the service, and future plans for the service.
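
    Given that the conceptual model is expressed in OWL/RDF and queried with SPARQL, a client-side interaction might look like the following sketch. The ToolMatch namespace, predicates and data file are hypothetical placeholders, not the project's actual vocabulary.

        from rdflib import Graph  # pip install rdflib

        # Load a (hypothetical) ToolMatch RDF document.
        g = Graph()
        g.parse("toolmatch-sample.ttl", format="turtle")

        # "Which tools can read a given data collection?" in SPARQL,
        # using an invented namespace and predicates.
        query = """
        PREFIX tm: <http://example.org/toolmatch#>
        SELECT ?tool ?toolName WHERE {
            ?tool tm:canRead <http://example.org/collections/MODIS-snow> ;
                  tm:name ?toolName .
        }
        """
        for row in g.query(query):
            print(row.toolName)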

  14. rasdaman Array Database: current status

    NASA Astrophysics Data System (ADS)

    Merticariu, George; Toader, Alexandru

    2015-04-01

    rasdaman (Raster Data Manager) is a Free Open Source Array Database Management System which provides functionality for storing and processing massive amounts of raster data in the form of multidimensional arrays. The user can access, process and delete the data using SQL. The key features of rasdaman are: flexibility (datasets of any dimensionality can be processed with the help of SQL queries), scalability (rasdaman's distributed architecture enables it to seamlessly run on cloud infrastructures while offering an increase in performance with the increase of computation resources), performance (real-time access, processing, mixing and filtering of arrays of any dimensionality) and reliability (legacy communication protocol replaced with a new one based on cutting edge technology - Google Protocol Buffers and ZeroMQ). Among the data with which the system works, we can count 1D time series, 2D remote sensing imagery, 3D image time series, 3D geophysical data, and 4D atmospheric and climate data. Most of these representations cannot be stored only in the form of raw arrays, as the location information of the contents is also important for having a correct geoposition on Earth. This is defined by ISO 19123 as coverage data. rasdaman provides coverage data support through the Petascope service. Extensions were added on top of rasdaman in order to provide support for the Geoscience community. The following OGC standards are currently supported: Web Map Service (WMS), Web Coverage Service (WCS), and Web Coverage Processing Service (WCPS). The Web Map Service is an extension which provides zoom and pan navigation over images provided by a map server. Starting with version 9.1, rasdaman supports WMS version 1.3. The Web Coverage Service provides capabilities for downloading multi-dimensional coverage data. Support is also provided for several extensions of this service: Subsetting Extension, Scaling Extension, and, starting with version 9.1, Transaction Extension, which defines request types for inserting, updating and deleting coverages. A web client, designed for both novice and experienced users, is also available for the service and its extensions. The client offers an intuitive interface that allows users to work with multi-dimensional coverages by abstracting the specifics of the standard definitions of the requests. The Web Coverage Processing Service defines a language for on-the-fly processing and filtering multi-dimensional raster coverages. rasdaman exposes this service through the WCS processing extension. Demonstrations are provided online via the Earthlook website (earthlook.org) which presents use-cases from a wide variety of application domains, using the rasdaman system as processing engine.
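
    A WCPS query of the kind rasdaman evaluates server-side can be sent as an ordinary HTTP request. In the sketch below the Petascope endpoint and coverage name are placeholders; the request parameters follow the WCS processing-extension pattern described above.

        import urllib.parse
        import urllib.request

        # Placeholder Petascope endpoint and coverage name.
        ENDPOINT = "http://example.org/rasdaman/ows"

        # A WCPS expression: slice a 3-D image time series at one date and
        # encode the resulting 2-D array as PNG, entirely on the server.
        wcps = ('for c in (S2_NDVI_TIMESERIES) '
                'return encode(c[ansi("2015-04-01")], "png")')

        params = urllib.parse.urlencode({
            "service": "WCS",
            "version": "2.0.1",
            "request": "ProcessCoverages",
            "query": wcps,
        })
        with urllib.request.urlopen(f"{ENDPOINT}?{params}") as resp:
            open("slice.png", "wb").write(resp.read())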

  15. Building Capacity for a Long-Term, in-Situ, National-Scale Phenology Monitoring Network: Successes, Challenges and Lessons Learned

    NASA Astrophysics Data System (ADS)

    Weltzin, J. F.; Browning, D. M.

    2014-12-01

    The USA National Phenology Network (USA-NPN; www.usanpn.org) is a national-scale science and monitoring initiative focused on phenology - the study of seasonal life-cycle events such as leafing, flowering, reproduction, and migration - as a tool to understand the response of biodiversity to environmental variation and change. USA-NPN provides a hierarchical, national monitoring framework that enables other organizations to leverage the capacity of the Network for their own applications - minimizing investment and duplication of effort - while promoting interoperability. Network participants can leverage: (1) Standardized monitoring protocols that have been broadly vetted, tested and published; (2) A centralized National Phenology Database (NPDb) for maintaining, archiving and replicating data, with standard metadata, terms-of-use, web-services, and documentation of QA/QC, plus tools for discovery, visualization and download of raw data and derived data products; and/or (3) A national in-situ, multi-taxa phenological monitoring system, Nature's Notebook, which enables participants to observe and record phenology of plants and animals - based on the protocols and information management system (IMS) described above - via either web or mobile applications. The protocols, NPDb and IMS, and Nature's Notebook represent a hierarchy of opportunities for involvement by a broad range of interested stakeholders, from individuals to agencies. For example, some organizations have adopted (e.g., the National Ecological Observatory Network or NEON) -- or are considering adopting (e.g., the Long-Term Agroecosystems Network or LTAR) -- the USA-NPN standardized protocols, but will develop their own database and IMS with web services to promote sharing of data with the NPDb. Other organizations (e.g., the Inventory and Monitoring Programs of the National Wildlife Refuge System and the National Park Service) have elected to use Nature's Notebook to support their phenological monitoring programs. We highlight the challenges and benefits of integrating phenology monitoring within existing and emerging national monitoring networks, and showcase opportunities that exist when standardized protocols are adopted and implemented to promote data interoperability and sharing.

  16. Serving Satellite Remote Sensing Data to User Community through the OGC Interoperability Protocols

    NASA Astrophysics Data System (ADS)

    di, L.; Yang, W.; Bai, Y.

    2005-12-01

    Remote sensing is one of the major methods for collecting geospatial data. A huge amount of remote sensing data has been collected by space agencies and private companies around the world. For example, NASA's Earth Observing System (EOS) is generating more than 3 TB of remote sensing data per day. The data collected by EOS are processed, distributed, archived, and managed by the EOS Data and Information System (EOSDIS). Currently, EOSDIS is managing several petabytes of data. All of those data are not only valuable for global change research, but also useful for local and regional applications and decision making. How to make the data easily accessible to and usable by the user community is one of the key issues for realizing the full potential of these valuable datasets. In the past several years, the Open Geospatial Consortium (OGC) has developed several interoperability protocols aimed at making geospatial data easily accessible to and usable by the user community through the Internet. The protocols particularly relevant to the discovery, access, and integration of multi-source satellite remote sensing data are the Catalog Service for Web (CS/W) and Web Coverage Service (WCS) specifications. The OGC CS/W specifies the interfaces, HTTP protocol bindings, and a framework for defining application profiles required to publish and access digital catalogues of metadata for geographic data, services, and related resource information. The OGC WCS specification defines the interfaces between web-based clients and servers for accessing on-line multi-dimensional, multi-temporal geospatial coverages in an interoperable way. Based on definitions by OGC and ISO 19123, coverage data include all remote sensing images as well as gridded model outputs. The Laboratory for Advanced Information Technology and Standards (LAITS), George Mason University, has been working for many years on developing and implementing OGC specifications to better serve NASA Earth science data to the user community. We have developed the NWGISS software package, which implements multiple OGC specifications, including OGC WMS, WCS, CS/W, and WFS. As part of the NASA REASON GeoBrain project, the NWGISS WCS and CS/W servers have been extended to provide operational access to NASA EOS data at the data pools through OGC protocols and to make both services chainable in web-service chains. The extensions to the WCS server include the implementation of WCS 1.0.0 and WCS 1.0.2, and the development of a WSDL description of the WCS services. In order to find the on-line EOS data resources, the CS/W server has been extended at the backend to search metadata in NASA ECHO. This presentation reports these extensions and discusses lessons learned from the implementation. It also discusses the advantages, disadvantages, and future improvements of OGC specifications, particularly the WCS.
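
    A WCS GetCoverage request under the 1.0.0 key-value-pair binding, of the kind such servers answer, looks like the following sketch; the endpoint and coverage identifier are placeholders.

        import urllib.parse
        import urllib.request

        # Placeholder WCS endpoint and coverage name for an EOS-style dataset.
        WCS = "https://example.org/wcs"

        params = urllib.parse.urlencode({
            "service": "WCS",
            "version": "1.0.0",
            "request": "GetCoverage",
            "coverage": "MOD11_LST",      # hypothetical temperature coverage
            "crs": "EPSG:4326",
            "bbox": "-125,25,-65,50",     # conterminous US
            "time": "2005-07-01",
            "width": "600",
            "height": "300",
            "format": "GeoTIFF",
        })
        with urllib.request.urlopen(f"{WCS}?{params}") as resp:
            open("lst.tif", "wb").write(resp.read())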

  17. Applications and Methods Utilizing the Simple Semantic Web Architecture and Protocol (SSWAP) for Bioinformatics Resource Discovery and Disparate Data and Service Integration

    USDA-ARS?s Scientific Manuscript database

    Scientific data integration and computational service discovery are challenges for the bioinformatic community. This process is made more difficult by the separate and independent construction of biological databases, which makes the exchange of scientific data between information resources difficu...

  18. Optimizing Libraries’ Content Findability Using Simple Object Access Protocol (SOAP) With Multi-Tier Architecture

    NASA Astrophysics Data System (ADS)

    Lahinta, A.; Haris, I.; Abdillah, T.

    2017-03-01

    The aim of this paper is to describe a developed application of the Simple Object Access Protocol (SOAP) as a model for improving libraries' digital content findability on the library web. The study applies XML text-based protocol tools to collect data about libraries' visibility performance in book search results. Models from the integrated Web Services Description Language (WSDL) and Universal Description, Discovery and Integration (UDDI) are applied to analyse SOAP as an element within the system. The results showed that the developed SOAP application with a multi-tier architecture can help people simply access the website of the library server of Gorontalo Province, and supports access to digital collections, subscription databases, and library catalogs in each Regency or City library in Gorontalo Province.

  19. HyspIRI Low Latency Concept and Benchmarks

    NASA Technical Reports Server (NTRS)

    Mandl, Dan

    2010-01-01

    Topics include HyspIRI low latency data ops concept, HyspIRI data flow, ongoing efforts, experiment with Web Coverage Processing Service (WCPS) approach to injecting new algorithms into SensorWeb, low fidelity HyspIRI IPM testbed, compute cloud testbed, open cloud testbed environment, Global Lambda Integrated Facility (GLIF) and OCC collaboration with Starlight, delay tolerant network (DTN) protocol benchmarking, and EO-1 configuration for preliminary DTN prototype.

  20. A Security-façade Library for Virtual-observatory Software

    NASA Astrophysics Data System (ADS)

    Rixon, G.

    2009-09-01

    The security-façade library implements, for Java, IVOA's security standards. It supports the authentication mechanisms for SOAP and REST web-services, the sign-on mechanisms (with MyProxy, AstroGrid Accounts protocol or local credential-caches), the delegation protocol, and RFC3820-enabled HTTPS for Apache Tomcat. Using the façade, a developer who is not a security specialist can easily add access control to a virtual-observatory service and call secured services from an application. The library has been an internal part of AstroGrid software for some time and it is now offered for use by other developers.

  1. Symmetric Key Services Markup Language (SKSML)

    NASA Astrophysics Data System (ADS)

    Noor, Arshad

    Symmetric Key Services Markup Language (SKSML) is an eXtensible Markup Language (XML)-based protocol being standardized by the OASIS Enterprise Key Management Infrastructure Technical Committee for requesting and receiving symmetric encryption cryptographic keys within a Symmetric Key Management System (SKMS). This protocol is designed to be used between clients and servers within an Enterprise Key Management Infrastructure (EKMI) to secure data, independent of the application and platform. Building on many security standards such as XML Signature, XML Encryption, Web Services Security and PKI, SKSML provides a standards-based capability that allows any application to use symmetric encryption keys while maintaining centralized control. This article describes the SKSML protocol and its capabilities.

  2. Web Services and Data Enhancements at the Northern California Earthquake Data Center

    NASA Astrophysics Data System (ADS)

    Neuhauser, D. S.; Zuzlewski, S.; Lombard, P. N.; Allen, R. M.

    2013-12-01

    The Northern California Earthquake Data Center (NCEDC) provides data archive and distribution services for seismological and geophysical data sets that encompass northern California. The NCEDC is enhancing its ability to deliver rapid information through Web Services. NCEDC Web Services use well-established web server and client protocols and REST software architecture to allow users to easily make queries using web browsers or simple program interfaces and to receive the requested data in real-time rather than through batch or email-based requests. Data are returned to the user in the appropriate format such as XML, RESP, simple text, or MiniSEED depending on the service and selected output format. The NCEDC offers the following web services that are compliant with the International Federation of Digital Seismograph Networks (FDSN) web services specifications: (1) fdsn-dataselect: time series data delivered in MiniSEED format, (2) fdsn-station: station and channel metadata and time series availability delivered in StationXML format, (3) fdsn-event: earthquake event information delivered in QuakeML format. In addition, the NCEDC offers the following IRIS-compatible web services: (1) sacpz: provides channel gains, poles, and zeros in SAC format, (2) resp: provides channel response information in RESP format, (3) dataless: provides station and channel metadata in Dataless SEED format. The NCEDC is also developing a web service to deliver time series from pre-assembled event waveform gathers. The NCEDC has waveform gathers for ~750,000 northern and central California events from 1984 to the present, many of which were created by the USGS NCSN prior to the establishment of the joint NCSS (Northern California Seismic System). We are currently adding waveforms to these older event gathers with time series from the UCB networks and other networks with waveforms archived at the NCEDC, and ensuring that each channel in the event gathers has the highest quality waveform available from the archive.
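
    Because the services follow the FDSN web service specifications, any HTTP client can retrieve metadata and waveforms with the standard URL patterns. In this sketch the host is a placeholder; the query parameters are the standard FDSN ones.

        import urllib.parse
        import urllib.request

        # Placeholder host; the URL pattern follows the FDSN specifications.
        HOST = "https://service.example.org"

        params = urllib.parse.urlencode({
            "net": "BK", "sta": "CMB", "cha": "BHZ",
            "start": "2013-01-01T00:00:00", "end": "2013-01-01T00:10:00",
        })

        # fdsn-station: channel metadata as StationXML.
        meta = urllib.request.urlopen(
            f"{HOST}/fdsnws/station/1/query?{params}&level=channel").read()

        # fdsn-dataselect: the matching time series as MiniSEED.
        data = urllib.request.urlopen(
            f"{HOST}/fdsnws/dataselect/1/query?{params}").read()
        open("cmb.mseed", "wb").write(data)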

  3. Interoperability Using Lightweight Metadata Standards: Service & Data Casting, OpenSearch, OPM Provenance, and Shared SciFlo Workflows

    NASA Astrophysics Data System (ADS)

    Wilson, B. D.; Manipon, G.; Hua, H.; Fetzer, E.

    2011-12-01

    Under several NASA grants, we are generating multi-sensor merged atmospheric datasets to enable the detection of instrument biases and studies of climate trends over decades of data. For example, under a NASA MEASURES grant we are producing a water vapor climatology from the A-Train instruments, stratified by the Cloudsat cloud classification for each geophysical scene. The generation and proper use of such multi-sensor climate data records (CDR's) requires a high level of openness, transparency, and traceability. To make the datasets self-documenting and provide access to full metadata and traceability, we have implemented a set of capabilities and services using known, interoperable protocols. These protocols include OpenSearch, OPeNDAP, Open Provenance Model, service & data casting technologies using Atom feeds, and REST-callable analysis workflows implemented as SciFlo (XML) documents. We advocate that our approach can serve as a blueprint for how to openly "document and serve" complex, multi-sensor CDR's with full traceability. The capabilities and services provided include:
    - Discovery of the collections by keyword search, exposed using OpenSearch protocol;
    - Space/time query across the CDR's granules and all of the input datasets via OpenSearch;
    - User-level configuration of the production workflows so that scientists can select additional physical variables from the A-Train to add to the next iteration of the merged datasets;
    - Efficient data merging using on-the-fly OPeNDAP variable slicing & spatial subsetting of data out of input netCDF and HDF files (without moving the entire files);
    - Self-documenting CDR's published in a highly usable netCDF4 format with groups used to organize the variables, CF-style attributes for each variable, numeric array compression, & links to OPM provenance;
    - Recording of processing provenance and data lineage into a query-able provenance trail in Open Provenance Model (OPM) format, auto-captured by the workflow engine;
    - Open publishing of all of the workflows used to generate products as machine-callable REST web services, using the capabilities of the SciFlo workflow engine;
    - Advertising of the metadata (e.g. physical variables provided, space/time bounding box, etc.) for our prepared datasets as "datacasts" using the Atom feed format;
    - Publishing of all datasets via our "DataDrop" service, which exploits the WebDAV protocol to enable scientists to access remote data directories as local files on their laptops;
    - Rich "web browse" of the CDR's with full metadata and the provenance trail one click away;
    - Advertising of all services as Google-discoverable "service casts" using the Atom format.
    The presentation will describe our use of the interoperable protocols and demonstrate the capabilities and service GUI's.
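
    A hedged sketch of the space/time granule query listed above: an OpenSearch request built with the Geo and Time extension parameter names. The endpoint URL is a placeholder, not the project's actual service.

    ```python
    # A minimal sketch of a space/time granule query against an OpenSearch
    # endpoint, using the OpenSearch Geo and Time extension parameter names.
    # The endpoint URL is a hypothetical placeholder.
    import requests

    params = {
        "searchTerms": "water vapor",
        "geo:box": "-125,30,-110,45",           # west,south,east,north
        "time:start": "2008-01-01T00:00:00Z",
        "time:end": "2008-01-31T23:59:59Z",
    }
    feed = requests.get("https://example.org/opensearch/granules",  # hypothetical
                        params=params, timeout=30)
    print(feed.text)   # Atom feed of matching granules
    ```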

  4. The pepATTRACT web server for blind, large-scale peptide-protein docking.

    PubMed

    de Vries, Sjoerd J; Rey, Julien; Schindler, Christina E M; Zacharias, Martin; Tuffery, Pierre

    2017-07-03

    Peptide-protein interactions are ubiquitous in the cell and form an important part of the interactome. Computational docking methods can complement experimental characterization of these complexes, but current protocols are not applicable on the proteome scale. pepATTRACT is a novel docking protocol that is fully blind, i.e. it does not require any information about the binding site. In various stages of its development, pepATTRACT has participated in CAPRI, making successful predictions for five out of seven protein-peptide targets. Its performance is similar to or better than that of state-of-the-art local docking protocols that do require binding site information. Here we present a novel web server that carries out the rigid-body stage of pepATTRACT. On the peptiDB benchmark, the web server generates a correct model in the top 50 in 34% of the cases. Compared to the full pepATTRACT protocol, this leads to some loss of performance, but the computation time is reduced from ∼18 h to ∼10 min. Combined with the fact that it is fully blind, this makes the web server well-suited for large-scale in silico protein-peptide docking experiments. The rigid-body pepATTRACT server is freely available at http://bioserv.rpbs.univ-paris-diderot.fr/services/pepATTRACT. © The Author(s) 2017. Published by Oxford University Press on behalf of Nucleic Acids Research.

  5. The Service Environment for Enhanced Knowledge and Research (SEEKR) Framework

    NASA Astrophysics Data System (ADS)

    King, T. A.; Walker, R. J.; Weigel, R. S.; Narock, T. W.; McGuire, R. E.; Candey, R. M.

    2011-12-01

    The Service Environment for Enhanced Knowledge and Research (SEEKR) Framework is a configurable service oriented framework to enable the discovery, access and analysis of data shared in a community. The SEEKR framework integrates many existing independent services through the use of web technologies and standard metadata. Services are hosted on systems by using an application server and are callable by using REpresentational State Transfer (REST) protocols. Messages and metadata are transferred with eXtensible Markup Language (XML) encoding which conforms to a published XML schema. Space Physics Archive Search and Extract (SPASE) metadata is central to utilizing the services. Resources (data, documents, software, etc.) are described with SPASE and the associated Resource Identifier is used to access and exchange resources. The configurable options for the service can be set by using a web interface. Services are packaged as web application resource (WAR) files for direct deployment on application servers such as Tomcat or Jetty. We discuss the composition of the SEEKR framework, how new services can be integrated and the steps necessary to deploy the framework. The SEEKR Framework emerged from NASA's Virtual Magnetospheric Observatory (VMO) and other systems and we present an overview of these systems from a SEEKR Framework perspective.
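
    To make the access pattern concrete, here is a hedged Python sketch of a REST-style request in the spirit of SEEKR: a resource is addressed by its SPASE Resource Identifier and its SPASE XML description is returned. The host, path, and parameter name are hypothetical; only the role of SPASE Resource Identifiers as access keys comes from the abstract.

    ```python
    # A hedged sketch of a REST-style lookup keyed by SPASE Resource ID.
    # Endpoint and parameter name are hypothetical placeholders.
    import requests

    RESOURCE_ID = "spase://VMO/NumericalData/Example/PT1M"   # illustrative ID
    resp = requests.get("https://example.org/seekr/resource",  # hypothetical
                        params={"id": RESOURCE_ID}, timeout=30)
    print(resp.text)   # SPASE XML description of the resource
    ```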

  6. Sharing Service Resource Information for Application Integration in a Virtual Enterprise - Modeling the Communication Protocol for Exchanging Service Resource Information

    NASA Astrophysics Data System (ADS)

    Yamada, Hiroshi; Kawaguchi, Akira

    Grid computing and web service technologies enable us to use networked resources in a coordinated manner. An integrated service is made of individual services running on coordinated resources. In order to achieve such coordinated services autonomously, the initiator of a coordinated service needs to know detailed service resource information. This information ranges from static attributes like the IP address of the application server to highly dynamic ones like the CPU load. The most famous wide-area service discovery mechanism based on names is DNS. Its hierarchical tree organization and caching methods take advantage of the static information managed. However, in order to integrate business applications in a virtual enterprise, we need a discovery mechanism to search for the optimal resources based on a given set of criteria (search keys). In this paper, we propose a communication protocol for exchanging service resource information among wide-area systems. We introduce the concept of the service domain that consists of service providers managed under the same management policy. This concept of the service domain is similar to that of autonomous systems (ASs). In each service domain, the service information provider manages the service resource information of service providers that exist in this service domain. The service resource information provider exchanges this information with other service resource information providers that belong to different service domains. We also verified the protocol's behavior and effectiveness using a simulation model developed for the proposed protocol.
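
    The following is an illustrative Python sketch (not the paper's protocol) of the kind of record a service resource information provider might manage, and of searching for the optimal resource against criteria that mix static attributes and dynamic load.

    ```python
    # Illustrative only: a service-resource record combining static and
    # dynamic attributes, plus a criteria-based search for the best match.
    from dataclasses import dataclass

    @dataclass
    class ServiceResource:
        name: str
        ip_address: str      # static attribute
        cpu_load: float      # highly dynamic attribute (0.0 - 1.0)
        app: str

    def find_optimal(resources, app, max_load):
        """Return candidates for `app` below a CPU-load threshold, best first."""
        matches = [r for r in resources if r.app == app and r.cpu_load <= max_load]
        return sorted(matches, key=lambda r: r.cpu_load)

    pool = [
        ServiceResource("srv-a", "10.0.0.1", 0.72, "erp"),
        ServiceResource("srv-b", "10.0.0.2", 0.18, "erp"),
    ]
    print(find_optimal(pool, app="erp", max_load=0.5))   # best candidate: srv-b
    ```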

  7. PRISMATIC: Unified Hierarchical Probabilistic Verification Tool

    DTIC Science & Technology

    2011-09-01

    security protocols such as for anonymity and quantum cryptography; and biological reaction pathways. PRISM is currently the leading probabilistic ... a whole will only deadlock and fail with a probability ≤ p/2. The assumption allows us to partition the overall system verification problem into two ... run on any port using the standard HTTP protocol. In this way multiple instances of the PRISMATIC web service can respond to different requests when

  8. Web-based Traffic Noise Control Support System for Sustainable Transportation

    NASA Astrophysics Data System (ADS)

    Fan, Lisa; Dai, Liming; Li, Anson

    Traffic noise is considered one of the major forms of pollution that will affect our communities in the future. This paper presents a framework for a web-based traffic noise control support system (WTNCSS) for sustainable transportation. WTNCSS provides decision makers, engineers, and the public with a platform to efficiently access information and effectively make decisions related to traffic control. The system is based on a Service Oriented Architecture (SOA), which combines the convenience of the World Wide Web with the XML data format. The whole system is divided into different modules such as the prediction module, ontology-based expert module and dynamic online survey module. Each module of the system provides a distinct information service to the decision support center through the HTTP protocol.

  9. A Beginner's Guide to the Internet.

    ERIC Educational Resources Information Center

    McAdams, Charles A.; Nelson, Mark A.

    1995-01-01

    Maintains that the Internet offers services and opportunities for music teachers and students. Provides an overview of topics such as electronic mail, File Transfer Protocol (FTP), Gopher, and the World Wide Web (WWW). Includes two lists of music resources available on the Internet. (CFR)

  10. Web Services Provide Access to SCEC Scientific Research Application Software

    NASA Astrophysics Data System (ADS)

    Gupta, N.; Gupta, V.; Okaya, D.; Kamb, L.; Maechling, P.

    2003-12-01

    Web services offer scientific communities a new paradigm for sharing research codes and communicating results. While there are formal technical definitions of what constitutes a web service, for a user community such as the Southern California Earthquake Center (SCEC), we may conceptually consider a web service to be functionality provided on-demand by an application which is run on a remote computer located elsewhere on the Internet. The value of a web service is that it can (1) run a scientific code without the user needing to install and learn the intricacies of running the code; (2) provide the technical framework which allows a user's computer to talk to the remote computer which performs the service; (3) provide the computational resources to run the code; and (4) bundle several analysis steps and provide the end results in digital or (post-processed) graphical form. Within an NSF-sponsored ITR project coordinated by SCEC, we are constructing web services using architectural protocols and programming languages (e.g., Java). However, because the SCEC community has a rich pool of scientific research software (written in traditional languages such as C and FORTRAN), we also emphasize making existing scientific codes available by constructing web service frameworks which wrap around and directly run these codes. In doing so we attempt to broaden community usage of these codes. Web service wrapping of a scientific code can be done using a "web servlet" construction or by using a SOAP/WSDL-based framework. This latter approach is widely adopted in IT circles although it is subject to rapid evolution. Our wrapping framework attempts to "honor" the original codes with as little modification as is possible. For versatility we identify three methods of user access: (A) a web-based GUI (written in HTML and/or Java applets); (B) a Linux/OSX/UNIX command line "initiator" utility (shell-scriptable); and (C) direct access from within any Java application (and with the correct API interface from within C++ and/or C/Fortran). This poster presentation will provide descriptions of the following selected web services and their origin as scientific application codes: 3D community velocity models for Southern California, geocoordinate conversions (latitude/longitude to UTM), execution of GMT graphical scripts, data format conversions (Gocad to Matlab format), and implementation of Seismic Hazard Analysis application programs that calculate hazard curve and hazard map data sets.
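
    As a sketch of the wrapping idea, the snippet below exposes an unmodified command-line code as a web service. Flask and the executable name and arguments are illustrative assumptions; the SCEC frameworks themselves used servlet and SOAP/WSDL constructions as described above.

    ```python
    # A minimal sketch of wrapping a legacy command-line scientific code as a
    # web service without modifying the code itself. Flask and the binary
    # name "cvm_query" are hypothetical stand-ins.
    import subprocess
    from flask import Flask, request, jsonify

    app = Flask(__name__)

    @app.route("/velocity-model")
    def velocity_model():
        lat = request.args["lat"]
        lon = request.args["lon"]
        # Run the unmodified legacy code; "cvm_query" is a hypothetical binary.
        out = subprocess.run(["cvm_query", lat, lon],
                             capture_output=True, text=True, check=True)
        return jsonify({"lat": lat, "lon": lon, "result": out.stdout.strip()})

    if __name__ == "__main__":
        app.run(port=8080)
    ```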

  11. Semantically-enabled sensor plug & play for the sensor web.

    PubMed

    Bröring, Arne; Maúe, Patrick; Janowicz, Krzysztof; Nüst, Daniel; Malewski, Christian

    2011-01-01

    Environmental sensors have continuously improved by becoming smaller, cheaper, and more intelligent over the past years. As a consequence of these technological advancements, sensors are increasingly deployed to monitor our environment. The large variety of available sensor types with often incompatible protocols complicates the integration of sensors into observing systems. The standardized Web service interfaces and data encodings defined within OGC's Sensor Web Enablement (SWE) framework make sensors available over the Web and hide the heterogeneous sensor protocols from applications. So far, the SWE framework does not describe how to integrate sensors on-the-fly with minimal human intervention. The driver software which enables access to sensors has to be implemented and the measured sensor data has to be manually mapped to the SWE models. In this article we introduce a Sensor Plug & Play infrastructure for the Sensor Web by combining (1) semantic matchmaking functionality, (2) a publish/subscribe mechanism underlying the Sensor Web, as well as (3) a model for the declarative description of sensor interfaces which serves as a generic driver mechanism. We implement and evaluate our approach by applying it to an oil spill scenario. The matchmaking is realized using existing ontologies and reasoning engines and provides a strong case for the semantic integration capabilities provided by Semantic Web research.

  12. Semantically-Enabled Sensor Plug & Play for the Sensor Web

    PubMed Central

    Bröring, Arne; Maúe, Patrick; Janowicz, Krzysztof; Nüst, Daniel; Malewski, Christian

    2011-01-01

    Environmental sensors have continuously improved by becoming smaller, cheaper, and more intelligent over the past years. As a consequence of these technological advancements, sensors are increasingly deployed to monitor our environment. The large variety of available sensor types with often incompatible protocols complicates the integration of sensors into observing systems. The standardized Web service interfaces and data encodings defined within OGC’s Sensor Web Enablement (SWE) framework make sensors available over the Web and hide the heterogeneous sensor protocols from applications. So far, the SWE framework does not describe how to integrate sensors on-the-fly with minimal human intervention. The driver software which enables access to sensors has to be implemented and the measured sensor data has to be manually mapped to the SWE models. In this article we introduce a Sensor Plug & Play infrastructure for the Sensor Web by combining (1) semantic matchmaking functionality, (2) a publish/subscribe mechanism underlying the Sensor Web, as well as (3) a model for the declarative description of sensor interfaces which serves as a generic driver mechanism. We implement and evaluate our approach by applying it to an oil spill scenario. The matchmaking is realized using existing ontologies and reasoning engines and provides a strong case for the semantic integration capabilities provided by Semantic Web research. PMID:22164033

  13. Applications of Multi-Channel Safety Authentication Protocols in Wireless Networks.

    PubMed

    Chen, Young-Long; Liau, Ren-Hau; Chang, Liang-Yu

    2016-01-01

    People can use their web browsers or mobile devices to access web services and applications built into remote servers. Users have to input their identity and password to log in to the server. The identity and password may be appropriated by hackers when the network environment is not safe. A multi-channel secure authentication protocol can improve the security of the network environment. Mobile devices can be used to pass the authentication messages through Wi-Fi or 3G networks to serve as a second communication channel. However, the number of messages exchanged is not considered in existing multi-channel secure authentication protocols; the more messages are transmitted, the easier they are for hackers to collect and decode. In this paper, we propose two schemes which allow the server to validate the user and reduce the number of messages using the XOR operation. Our schemes can improve the security of the authentication protocol. The experimental results show that our proposed authentication protocols are more secure and effective. In regard to applications of second authentication communication channels for a smart access control system, identity identification and E-wallet, our proposed authentication protocols can ensure the safety of persons and property, and achieve more effective security management mechanisms.
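
    The sketch below is illustrative only, not the paper's actual scheme: it shows why XOR is attractive for reducing message counts in such protocols, since masking is cheap and A ^ B ^ B == A recovers the hidden value in a single exchange.

    ```python
    # Illustrative sketch only (NOT the paper's scheme): XOR-masking lets one
    # message carry what would otherwise take several exchanges, because
    # xor(xor(a, b), b) == a recovers the masked value.
    import os

    def xor_bytes(a: bytes, b: bytes) -> bytes:
        return bytes(x ^ y for x, y in zip(a, b))

    shared_key = os.urandom(16)         # established out of band
    server_nonce = os.urandom(16)       # server challenge

    # Client masks the nonce with the shared key and returns one message.
    client_reply = xor_bytes(server_nonce, shared_key)

    # Server unmasks; a matching nonce shows the client holds the key.
    assert xor_bytes(client_reply, shared_key) == server_nonce
    ```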

  14. Deploying and sharing U-Compare workflows as web services.

    PubMed

    Kontonatsios, Georgios; Korkontzelos, Ioannis; Kolluru, Balakrishna; Thompson, Paul; Ananiadou, Sophia

    2013-02-18

    U-Compare is a text mining platform that allows the construction, evaluation and comparison of text mining workflows. U-Compare contains a large library of components that are tuned to the biomedical domain. Users can rapidly develop biomedical text mining workflows by mixing and matching U-Compare's components. Workflows developed using U-Compare can be exported and sent to other users who, in turn, can import and re-use them. However, the resulting workflows are standalone applications, i.e., software tools that run and are accessible only via a local machine, and that can only be run with the U-Compare platform. We address the above issues by extending U-Compare to convert standalone workflows into web services automatically, via a two-click process. The resulting web services can be registered on a central server and made publicly available. Alternatively, users can make web services available on their own servers, after installing the web application framework, which is part of the extension to U-Compare. We have performed a user-oriented evaluation of the proposed extension, by asking users who have tested the enhanced functionality of U-Compare to complete questionnaires that assess its functionality, reliability, usability, efficiency and maintainability. The results obtained reveal that the new functionality is well received by users. The web services produced by U-Compare are built on top of open standards, i.e., REST and SOAP protocols, and therefore, they are decoupled from the underlying platform. Exported workflows can be integrated with any application that supports these open standards. We demonstrate how the newly extended U-Compare enhances the cross-platform interoperability of workflows, by seamlessly importing a number of text mining workflow web services exported from U-Compare into Taverna, i.e., a generic scientific workflow construction platform.

  15. Deploying and sharing U-Compare workflows as web services

    PubMed Central

    2013-01-01

    Background U-Compare is a text mining platform that allows the construction, evaluation and comparison of text mining workflows. U-Compare contains a large library of components that are tuned to the biomedical domain. Users can rapidly develop biomedical text mining workflows by mixing and matching U-Compare’s components. Workflows developed using U-Compare can be exported and sent to other users who, in turn, can import and re-use them. However, the resulting workflows are standalone applications, i.e., software tools that run and are accessible only via a local machine, and that can only be run with the U-Compare platform. Results We address the above issues by extending U-Compare to convert standalone workflows into web services automatically, via a two-click process. The resulting web services can be registered on a central server and made publicly available. Alternatively, users can make web services available on their own servers, after installing the web application framework, which is part of the extension to U-Compare. We have performed a user-oriented evaluation of the proposed extension, by asking users who have tested the enhanced functionality of U-Compare to complete questionnaires that assess its functionality, reliability, usability, efficiency and maintainability. The results obtained reveal that the new functionality is well received by users. Conclusions The web services produced by U-Compare are built on top of open standards, i.e., REST and SOAP protocols, and therefore, they are decoupled from the underlying platform. Exported workflows can be integrated with any application that supports these open standards. We demonstrate how the newly extended U-Compare enhances the cross-platform interoperability of workflows, by seamlessly importing a number of text mining workflow web services exported from U-Compare into Taverna, i.e., a generic scientific workflow construction platform. PMID:23419017

  16. Australian Academic Use of the Internet.

    ERIC Educational Resources Information Center

    Applebee, Andrelyn C.; Clayton, Peter; Pascoe, Celina

    1997-01-01

    A study of academic staff at the University of Canberra (Australia) in 1995 determined use of electronic mail, Telnet, file transfer protocol (FTP) software, World Wide Web, library and document delivery services, discussion groups, and student communication. Examined demographic characteristics of faculty (discipline, employment status, gender,…

  17. Global Federation of Data Services in Seismology: Extending the Concept to Interdisciplinary Science

    NASA Astrophysics Data System (ADS)

    Ahern, T. K.; Trabant, C. M.; Stults, M.; Van Fossen, M.

    2015-12-01

    The International Federation of Digital Seismograph Networks (FDSN) sets international standards, formats, and access protocols for global seismology. Recently the availability of an FDSN standard for web services has enabled the development of a federated model of data access. With a growing number of internationally distributed data centers supporting identical web services the task of federation is now fully realizable. This presentation will highlight the advances the seismological community has made in the past year towards federated access to seismological data including waveforms, earthquake event catalogs, and metadata describing seismic stations. As part of the NSF EarthCube project, IRIS and its partners have been extending the concept of standard web services to other domains. Our primary partners include Lamont Doherty Earth Observatory (marine geophysics), Caltech (tectonic plate reconstructions), SDSC (hydrology), UNAVCO (geodesy), and Unidata (atmospheric sciences). Additionally IRIS is working with partners at NOAA's NGDC, NEON, UTEP, WOVODAT, Intermagnet, Global Geodynamics Program, and the Ocean Observatory Initiative (OOI) to develop web services for those domains. The ultimate goal is to allow discovery, access, and utilization of cross-domain data sources. IRIS and a variety of US and European partners have been involved in the Cooperation between Europe and the US (CoopEUS) project where interdisciplinary data integration is a key topic.

  18. MOWServ: a web client for integration of bioinformatic resources

    PubMed Central

    Ramírez, Sergio; Muñoz-Mérida, Antonio; Karlsson, Johan; García, Maximiliano; Pérez-Pulido, Antonio J.; Claros, M. Gonzalo; Trelles, Oswaldo

    2010-01-01

    The productivity of any scientist is affected by cumbersome, tedious and time-consuming tasks that try to make the heterogeneous web services compatible so that they can be useful in their research. MOWServ, the bioinformatic platform offered by the Spanish National Institute of Bioinformatics, was released to provide integrated access to databases and analytical tools. Since its release, the number of available services has grown dramatically, and it has become one of the main contributors of registered services in the EMBRACE Biocatalogue. The ontology that enables most of the web-service compatibility has been curated, improved and extended. The service discovery has been greatly enhanced by Magallanes software and biodataSF. User data are securely stored on the main server by an authentication protocol that enables the monitoring of current or already-finished user’s tasks, as well as the pipelining of successive data processing services. The BioMoby standard has been greatly extended with the new features included in the MOWServ, such as management of additional information (metadata such as extended descriptions, keywords and datafile examples), a qualified registry, error handling, asynchronous services and service replication. All of them have increased the MOWServ service quality, usability and robustness. MOWServ is available at http://www.inab.org/MOWServ/ and has a mirror at http://www.bitlab-es.com/MOWServ/. PMID:20525794

  19. MOWServ: a web client for integration of bioinformatic resources.

    PubMed

    Ramírez, Sergio; Muñoz-Mérida, Antonio; Karlsson, Johan; García, Maximiliano; Pérez-Pulido, Antonio J; Claros, M Gonzalo; Trelles, Oswaldo

    2010-07-01

    The productivity of any scientist is affected by cumbersome, tedious and time-consuming tasks that try to make the heterogeneous web services compatible so that they can be useful in their research. MOWServ, the bioinformatic platform offered by the Spanish National Institute of Bioinformatics, was released to provide integrated access to databases and analytical tools. Since its release, the number of available services has grown dramatically, and it has become one of the main contributors of registered services in the EMBRACE Biocatalogue. The ontology that enables most of the web-service compatibility has been curated, improved and extended. The service discovery has been greatly enhanced by Magallanes software and biodataSF. User data are securely stored on the main server by an authentication protocol that enables the monitoring of current or already-finished user's tasks, as well as the pipelining of successive data processing services. The BioMoby standard has been greatly extended with the new features included in the MOWServ, such as management of additional information (metadata such as extended descriptions, keywords and datafile examples), a qualified registry, error handling, asynchronous services and service replication. All of them have increased the MOWServ service quality, usability and robustness. MOWServ is available at http://www.inab.org/MOWServ/ and has a mirror at http://www.bitlab-es.com/MOWServ/.

  20. A radiology department intranet: development and applications.

    PubMed

    Willing, S J; Berland, L L

    1999-01-01

    An intranet is a "private Internet" that uses the protocols of the World Wide Web to share information resources within a company or with the company's business partners and clients. The hardware requirements for an intranet begin with a dedicated Web server permanently connected to the departmental network. The heart of a Web server is the hypertext transfer protocol (HTTP) service, which receives a page request from a client's browser and transmits the page back to the client. Although knowledge of hypertext markup language (HTML) is not essential for authoring a Web page, a working familiarity with HTML is useful, as is knowledge of programming and database management. Security can be ensured by using scripts to write information in hidden fields or by means of "cookies." Interfacing databases and database management systems with the Web server and conforming the user interface to HTML syntax can be achieved by means of the common gateway interface (CGI), Active Server Pages (ASP), or other methods. An intranet in a radiology department could include the following types of content: on-call schedules, work schedules and a calendar, a personnel directory, resident resources, memorandums and discussion groups, software for a radiology information system, and databases.
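
    A minimal Python sketch of the request/response cycle described above: a tiny HTTP service returning an intranet page. The content and port are illustrative; a departmental intranet would run behind a production web server with access control.

    ```python
    # A minimal sketch of the HTTP request/response cycle: a tiny intranet-
    # style server returning an on-call schedule page. Content and port are
    # illustrative placeholders.
    from http.server import BaseHTTPRequestHandler, HTTPServer

    PAGE = b"<html><body><h1>On-call schedule</h1><p>Dr. Example: Mon-Fri</p></body></html>"

    class IntranetHandler(BaseHTTPRequestHandler):
        def do_GET(self):
            # The HTTP service receives the page request and transmits the page back.
            self.send_response(200)
            self.send_header("Content-Type", "text/html")
            self.end_headers()
            self.wfile.write(PAGE)

    if __name__ == "__main__":
        HTTPServer(("", 8000), IntranetHandler).serve_forever()
    ```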

  1. A service-based framework for pharmacogenomics data integration

    NASA Astrophysics Data System (ADS)

    Wang, Kun; Bai, Xiaoying; Li, Jing; Ding, Cong

    2010-08-01

    Data are central to scientific research and practices. The advance of experimental methods and information retrieval technologies has led to explosive growth of scientific data and databases. However, due to heterogeneity in data formats, structures and semantics, it is hard to integrate the diversified data that grow explosively and analyse them comprehensively. As more and more public databases are accessible through standard protocols like programmable interfaces and Web portals, Web-based data integration becomes a major trend to manage and synthesise data that are stored in distributed locations. Mashup, a Web 2.0 technique, presents a new way to compose content and software from multiple resources. The paper proposes a layered framework for integrating pharmacogenomics data in a service-oriented approach using the mashup technology. The framework separates the integration concerns from three perspectives including data, process and Web-based user interface. Each layer encapsulates the heterogeneous issues of one aspect. To facilitate the mapping and convergence of data, the ontology mechanism is introduced to provide consistent conceptual models across different databases and experiment platforms. To support user-interactive and iterative service orchestration, a context model is defined to capture information of users, tasks and services, which can be used for service selection and recommendation during a dynamic service composition process. A prototype system is implemented and case studies are presented to illustrate the promising capabilities of the proposed approach.
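
    An illustrative sketch of the data-layer concern the framework isolates: two sources report the same concept under different field names, and an ontology-style mapping reconciles them before merging. All field names and records here are hypothetical.

    ```python
    # Illustrative only: reconciling heterogeneous field names via an
    # ontology-style mapping before merging records from two sources.
    ONTOLOGY_MAP = {          # source field -> shared concept
        "geneSymbol": "gene",
        "hgnc_name": "gene",
        "drugName": "drug",
        "compound": "drug",
    }

    def normalize(record: dict) -> dict:
        return {ONTOLOGY_MAP.get(k, k): v for k, v in record.items()}

    source_a = {"geneSymbol": "CYP2D6", "drugName": "codeine"}
    source_b = {"hgnc_name": "CYP2D6", "compound": "codeine", "effect": "poor metabolizer"}

    merged = {**normalize(source_a), **normalize(source_b)}
    print(merged)   # one record keyed by shared concepts
    ```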

  2. Data Strategies to Support Automated Multi-Sensor Data Fusion in a Service Oriented Architecture

    DTIC Science & Technology

    2008-06-01

    and employ vast quantities of content. This dissertation provides two software architectural patterns and an auto-fusion process that guide the development of a distributed ... UDDI), Simple Object Access Protocol (SOAP), Java, Maritime Domain Awareness (MDA), Business Process Execution Language for Web Services (BPEL4WS) ...

  3. BioModels.net Web Services, a free and integrated toolkit for computational modelling software.

    PubMed

    Li, Chen; Courtot, Mélanie; Le Novère, Nicolas; Laibe, Camille

    2010-05-01

    Exchanging and sharing scientific results are essential for researchers in the field of computational modelling. BioModels.net defines agreed-upon standards for model curation. A fundamental one, MIRIAM (Minimum Information Requested in the Annotation of Models), standardises the annotation and curation process of quantitative models in biology. To support this standard, MIRIAM Resources maintains a set of standard data types for annotating models, and provides services for manipulating these annotations. Furthermore, BioModels.net creates controlled vocabularies, such as SBO (Systems Biology Ontology), which strictly indexes, defines and links terms used in Systems Biology. Finally, BioModels Database provides a free, centralised, publicly accessible database for storing, searching and retrieving curated and annotated computational models. Each resource provides a web interface to submit, search, retrieve and display its data. In addition, the BioModels.net team provides a set of Web Services which allows the community to programmatically access the resources. A user is then able to perform remote queries, such as retrieving a model and resolving all its MIRIAM Annotations, as well as getting the details about the associated SBO terms. These web services use established standards. Communications rely on SOAP (Simple Object Access Protocol) messages and the available queries are described in a WSDL (Web Services Description Language) file. Several libraries are provided in order to simplify the development of client software. BioModels.net Web Services take researchers one step further towards simulating and understanding a biological system in its entirety, by allowing them to retrieve biological models in their own tools, combine queries in workflows and efficiently analyse models.
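
    A hedged Python sketch of consuming a SOAP/WSDL-described service of this kind using the zeep library. The WSDL URL and the operation name are assumptions for illustration; the real operations are those declared in the service's WSDL file.

    ```python
    # A hedged sketch of calling a SOAP/WSDL-described service with zeep.
    # The WSDL URL and operation name are hypothetical assumptions.
    from zeep import Client

    WSDL = "https://example.org/biomodels-webservices?wsdl"   # hypothetical URL

    client = Client(WSDL)
    # zeep generates a callable for every operation declared in the WSDL;
    # "getModelById" is a hypothetical operation name.
    model_xml = client.service.getModelById("BIOMD0000000001")
    print(str(model_xml)[:200])
    ```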

  4. Optimizing the NASA Technical Report Server

    NASA Technical Reports Server (NTRS)

    Nelson, Michael L.; Maa, Ming-Hokng

    1996-01-01

    The NASA Technical Report Server (NTRS), a World Wide Web service for the distribution of NASA technical publications, is modified for performance enhancement, greater protocol support, and human-interface optimization. Results include: parallel database queries, significantly decreasing user access times by an average factor of 2.3; access from clients behind firewalls and/or proxies which truncate excessively long Uniform Resource Locators (URLs); access to non-Wide Area Information Server (WAIS) databases and compatibility with the Z39.50 protocol; and a streamlined user interface.

  5. Increasing the availability and usability of terrestrial ecology data through geospatial Web services and visualization tools (Invited)

    NASA Astrophysics Data System (ADS)

    Santhana Vannan, S.; Cook, R. B.; Wilson, B. E.; Wei, Y.

    2010-12-01

    Terrestrial ecology data sets are produced from diverse data sources such as model output, field data collection, laboratory analysis and remote sensing observation. These data sets can be created, distributed, and consumed in diverse ways as well. However, this diversity can hinder the usability of the data, and limit data users’ abilities to validate and reuse data for science and application purposes. Geospatial web services, such as those described in this paper, are an important means of reducing this burden. Terrestrial ecology researchers generally create the data sets in diverse file formats, with file and data structures tailored to the specific needs of their project, possibly as tabular data, geospatial images, or documentation in a report. Data centers may reformat the data to an archive-stable format and distribute the data sets through one or more protocols, such as FTP, email, and WWW. Because of the diverse data preparation, delivery, and usage patterns, users have to invest time and resources to bring the data into the format and structure most useful for their analysis. This time-consuming data preparation process shifts valuable resources from data analysis to data assembly. To address these issues, the ORNL DAAC, a NASA-sponsored terrestrial ecology data center, has utilized geospatial Web service technology, such as Open Geospatial Consortium (OGC) Web Map Service (WMS) and OGC Web Coverage Service (WCS) standards, to increase the usability and availability of terrestrial ecology data sets. Data sets are standardized into non-proprietary file formats and distributed through OGC Web Service standards. OGC Web services allow the ORNL DAAC to store data sets in a single format and distribute them in multiple ways and formats. Registering the OGC Web services through search catalogues and other spatial data tools allows for publicizing the data sets and makes them more available across the Internet. The ORNL DAAC has also created a Web-based graphical user interface called Spatial Data Access Tool (SDAT) that utilizes OGC Web services standards and allows data distribution and consumption for users not familiar with OGC standards. SDAT also allows users to visualize the data set prior to download. Google Earth visualizations of the data set are also provided through SDAT. The use of OGC Web service standards at the ORNL DAAC has enabled an increase in data consumption. In one case, a data set saw a ~10-fold increase in downloads through OGC Web services in comparison to the conventional FTP and WWW methods of access. The increase in downloads suggests that users are not only finding the data sets they need but are also able to consume them readily in the format they need.
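
    For illustration, a minimal OGC WMS GetMap request of the kind SDAT issues behind its interface. Parameter names follow the WMS 1.1.1 key-value encoding; the host and layer name are examples.

    ```python
    # A minimal sketch of an OGC WMS GetMap request. Parameter names follow
    # the WMS 1.1.1 KVP encoding; the host and layer name are hypothetical.
    import requests

    params = {
        "service": "WMS",
        "version": "1.1.1",
        "request": "GetMap",
        "layers": "example_biomass",         # hypothetical layer name
        "srs": "EPSG:4326",
        "bbox": "-180,-90,180,90",           # minx,miny,maxx,maxy
        "width": 1024,
        "height": 512,
        "format": "image/png",
    }
    img = requests.get("https://example.org/wms", params=params, timeout=60)
    with open("map.png", "wb") as f:
        f.write(img.content)
    ```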

  6. Co-creating and Evaluating a Web-app Mapping Real-World Health Care Services for Students: The servi-Share Protocol

    PubMed Central

    Langlois, Emmanuel; Wittwer, Jérôme; Tzourio, Christophe

    2017-01-01

    Background University students aged 18-30 years are a population group reporting low access to health care services, with high rates of avoidance and delay of medical care. This group also reports not having appropriate information about available health care services. However, university students are at risk for several health problems, and regular medical consultations are recommended in this period of life. New digital devices are popular among the young, and Web-apps can be used to facilitate easy access to information regarding health care services. A small number of electronic health (eHealth) tools have been developed with the purpose of displaying real-world health care services, and little is known about how such eHealth tools can improve access to care. Objective This paper describes the processes of co-creating and evaluating the beta version of a Web-app aimed at mapping and describing free or low-cost real-world health care services available in the Bordeaux area of France, which is specifically targeted to university students. Methods The co-creation process involves: (1) exploring the needs of students to know and access real-world health care services; (2) identifying the real-world health care services of interest for students; and (3) deciding on a user interface, and developing the beta version of the Web-app. Finally, the evaluation process involves: (1) testing the beta version of the Web-app with the target audience (university students aged 18-30 years); (2) collecting their feedback via a satisfaction survey; and (3) planning a long-term evaluation. Results The co-creation process of the beta version of the Web-app was completed in August 2016 and is described in this paper. The evaluation process started on September 7, 2016. The project was completed in December 2016 and implementation of the Web-app is ongoing. Conclusions Web-apps are an innovative way to increase the health literacy of young people in terms of delivery of and access to health care. The creation of Web-apps benefits from the involvement of stakeholders (eg, students and health care providers) to correctly identify the real-world health care services to be displayed. PMID:28209561

  7. OGC and Grid Interoperability in enviroGRIDS Project

    NASA Astrophysics Data System (ADS)

    Gorgan, Dorian; Rodila, Denisa; Bacu, Victor; Giuliani, Gregory; Ray, Nicolas

    2010-05-01

    EnviroGRIDS (Black Sea Catchment Observation and Assessment System supporting Sustainable Development) [1] is a four-year FP7 project aiming to address the subjects of ecologically unsustainable development and inadequate resource management. The project develops a Spatial Data Infrastructure of the Black Sea Catchment region. Geospatial technologies offer very specialized functionality for Earth Science oriented applications, while Grid-oriented technology is able to support distributed and parallel processing. One challenge of the enviroGRIDS project is the interoperability between geospatial and Grid infrastructures by providing the basic and the extended features of both technologies. The geospatial interoperability technology has been promoted as a way of dealing with large volumes of geospatial data in distributed environments through the development of interoperable Web service specifications proposed by the Open Geospatial Consortium (OGC), with applications spread across multiple fields but especially in Earth observation research. Due to the huge volumes of data available in the geospatial domain and the additional issues introduced (data management, secure data transfer, data distribution and data computation), the need for an infrastructure capable of managing all those problems becomes an important aspect. The Grid promotes and facilitates the secure interoperation of heterogeneous distributed geospatial data within a distributed environment, supports the creation and management of large distributed computational jobs and assures a security level for communication and transfer of messages based on certificates. This presentation analyses and discusses the most significant use cases for enabling OGC Web services interoperability with the Grid environment and focuses on the description and implementation of the most promising one. In these use cases we give special attention to issues such as: the relations between the computational grid and the OGC Web service protocols, the advantages offered by Grid technology - such as providing secure interoperability between distributed geospatial resources - and the issues introduced by the integration of distributed geospatial data in a secure environment: data and service discovery, management, access and computation. The enviroGRIDS project proposes a new architecture which allows a flexible and scalable approach for integrating the geospatial domain represented by the OGC Web services with the Grid domain represented by the gLite middleware. The parallelism offered by the Grid technology is discussed and explored at the data level, management level and computation level. The analysis is carried out for OGC Web service interoperability in general but specific details are emphasized for Web Map Service (WMS), Web Feature Service (WFS), Web Coverage Service (WCS), Web Processing Service (WPS) and Catalog Service for Web (CSW). Issues regarding the mapping and the interoperability between the OGC and the Grid standards and protocols are analyzed as they are the basis for solving the communication problems between the two environments: grid and geospatial. The presentation mainly highlights how the Grid environment and Grid application capabilities can be extended and utilized in geospatial interoperability.
Interoperability between geospatial and Grid infrastructures provides features such as the specialized functionality of geospatial services, the high-power computation and security of the Grid, high spatial model resolution and wide geographical coverage, and flexible combination and interoperability of geographical models. In accordance with Service Oriented Architecture concepts and the requirements of interoperability between geospatial and Grid infrastructures, each main function is visible from the enviroGRIDS Portal and, consequently, from end-user applications such as Decision Maker/Citizen oriented Applications. The enviroGRIDS portal is the user's single point of entry into the system and presents a uniform graphical user interface. Main reference for further information: [1] enviroGRIDS Project, http://www.envirogrids.net/

  8. Framework for ReSTful Web Services in OSGi

    NASA Technical Reports Server (NTRS)

    Shams, Khawaja S.; Norris, Jeffrey S.; Powell, Mark W.; Crockett, Thomas M.; Mittman, David S.; Fox, Jason M.; Joswig, Joseph C.; Wallick, Michael N.; Torres, Recaredo J.; Rabe, Kenneth

    2009-01-01

    Ensemble ReST is a software system that eases the development, deployment, and maintenance of server-side application programs to perform functions that would otherwise be performed by client software. Ensemble ReST takes advantage of the proven disciplines of ReST (Representational State Transfer). ReST leverages the standardized HTTP protocol to enable developers to offer services to a diverse variety of clients, from shell scripts to sophisticated Java application suites.

  9. Use of ebRIM-based CSW with sensor observation services for registry and discovery of remote-sensing observations

    NASA Astrophysics Data System (ADS)

    Chen, Nengcheng; Di, Liping; Yu, Genong; Gong, Jianya; Wei, Yaxing

    2009-02-01

    Recent advances in Sensor Web geospatial data capture, such as high-resolution satellite imagery and Web-ready data processing and modeling technologies, have led to the generation of large numbers of datasets from real-time or near real-time observations and measurements. Finding which sensor or data complies with criteria such as specific times, locations, and scales has become a bottleneck for Sensor Web-based applications, especially remote-sensing observations. In this paper, an architecture integrating the Sensor Observation Service (SOS) with the Open Geospatial Consortium (OGC) Catalogue Service-Web profile (CSW) is put forward. The architecture consists of a distributed geospatial sensor observation service, a geospatial catalogue service based on the ebXML Registry Information Model (ebRIM), SOS search and registry middleware, and a geospatial sensor portal. The SOS search and registry middleware finds potential SOS instances, generates data granule information, and inserts the records into the CSW. The contents and sequence of the services, the available observations, and the metadata of the observations registry are described. A prototype system is designed and implemented using the service middleware technology and a standard interface and protocol. The feasibility and the response time of registry and retrieval of observations are evaluated using a realistic Earth Observing-1 (EO-1) SOS scenario. Extracting information from SOS requires the same execution time as record generation for CSW. The average data retrieval response time in SOS+CSW mode is 17.6% of that of the SOS-alone mode. The proposed architecture offers greater advantages for SOS search and observation data retrieval than existing Sensor Web-enabled systems.
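
    A minimal sketch of an OGC SOS GetObservation request in key-value encoding. Parameter names follow the SOS 1.0 specification; the endpoint, offering, and observed property are illustrative, and many SOS deployments accept this request only as a POSTed XML document.

    ```python
    # A minimal sketch of an OGC SOS GetObservation request in KVP encoding.
    # Endpoint, offering, and observed property are hypothetical examples.
    import requests

    params = {
        "service": "SOS",
        "version": "1.0.0",
        "request": "GetObservation",
        "offering": "EO1_HYPERION",                              # hypothetical offering
        "observedProperty": "urn:ogc:def:property:reflectance",  # illustrative URN
        "responseFormat": "text/xml;subtype=\"om/1.0.0\"",
    }
    obs = requests.get("https://example.org/sos", params=params, timeout=60)
    print(obs.text)   # O&M-encoded observations
    ```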

  10. Why You Should Establish a Connection to the Internet.

    ERIC Educational Resources Information Center

    Hill, Judy A.; Misic, Mark M.

    1996-01-01

    Provides the rationale for establishing a connection to the Internet. Describes Internet services, including e-mail, telnet, file transfer protocol (FTP), USENET, gopher, Archie, and World Wide Web. Identifies reasons why the Internet is a valuable tool. Outlines steps for establishing a connection and discusses the future of the Internet. A…

  11. Security Implications of OPC, OLE, DCOM, and RPC in Control Systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Not Available

    2006-01-01

    OPC is a collection of software programming standards and interfaces used in the process control industry. It is intended to provide open connectivity and vendor equipment interoperability. The use of OPC technology simplifies the development of control systems that integrate components from multiple vendors and support multiple control protocols. OPC-compliant products are available from most control system vendors, and are widely used in the process control industry. OPC was originally known as OLE for Process Control; the first standards for OPC were based on underlying services in the Microsoft Windows computing environment. These underlying services (OLE [Object Linking and Embedding], DCOM [Distributed Component Object Model], and RPC [Remote Procedure Call]) have been the source of many severe security vulnerabilities. It is not feasible to automatically apply vendor patches and service packs to mitigate these vulnerabilities in a control systems environment. Control systems using the original OPC data access technology can thus inherit the vulnerabilities associated with these services. Current OPC standardization efforts are moving away from the original focus on Microsoft protocols, with a distinct trend toward web-based protocols that are independent of any particular operating system. However, the installed base of OPC equipment consists mainly of legacy implementations of the OLE for Process Control protocols.

  12. Research of marine sensor web based on SOA and EDA

    NASA Astrophysics Data System (ADS)

    Jiang, Yongguo; Dou, Jinfeng; Guo, Zhongwen; Hu, Keyong

    2015-04-01

    A great deal of ocean sensor observation data exists, for a wide range of marine disciplines, derived from in situ and remote observing platforms, in real-time, near-real-time and delayed mode. Ocean monitoring is routinely carried out using sensors and instruments. Standardization is the key requirement for exchanging information about ocean sensors and sensor data and for comparing and combining information from different sensor networks. One or more sensors are often physically integrated into a single ocean 'instrument' device, which brings many challenges related to diverse sensor data formats, parameter units, different spatiotemporal resolution, application domains, data quality and sensor protocols. Facing these challenges requires standardization efforts aimed at facilitating the so-called Sensor Web, which makes it easy to provide public access to sensor data and metadata. In this paper, a Marine Sensor Web, based on SOA and EDA and integrating MBARI's PUCK protocol, IEEE 1451 and OGC SWE 2.0, is illustrated with a five-layer architecture. The Web Service layer and Event Process layer are illustrated in detail with an actual example. The demo study has demonstrated that a standards-based system can be built to access sensors and marine instruments distributed globally using common Web browsers for monitoring the environment and oceanic conditions. Besides serving marine sensor data on the Web, this Marine Sensor Web framework can also play an important role in information integration for many other domains.

  13. Earth Science Mining Web Services

    NASA Astrophysics Data System (ADS)

    Pham, L. B.; Lynnes, C. S.; Hegde, M.; Graves, S.; Ramachandran, R.; Maskey, M.; Keiser, K.

    2008-12-01

    To allow scientists further capabilities in the area of data mining and web services, the Goddard Earth Sciences Data and Information Services Center (GES DISC) and researchers at the University of Alabama in Huntsville (UAH) have developed a system to mine data at the source without the need of network transfers. The system has been constructed by linking together several pre-existing technologies: the Simple Scalable Script-based Science Processor for Measurements (S4PM), a processing engine at the GES DISC; the Algorithm Development and Mining (ADaM) system, a data mining toolkit from UAH that can be configured in a variety of ways to create customized mining processes; ActiveBPEL, a workflow execution engine based on BPEL (Business Process Execution Language); XBaya, a graphical workflow composer; and the EOS Clearinghouse (ECHO). XBaya is used to construct an analysis workflow at UAH using ADaM components, which are also installed remotely at the GES DISC, wrapped as Web Services. The S4PM processing engine searches ECHO for data using space-time criteria, staging them to cache, allowing the ActiveBPEL engine to remotely orchestrate the processing workflow within S4PM. As mining is completed, the output is placed in an FTP holding area for the end user. The goals are to give users control over the data they want to process, while mining data at the data source using the server's resources rather than transferring the full volume over the internet. These diverse technologies have been infused into a functioning, distributed system with only minor changes to the underlying technologies. The key to this infusion is the loosely coupled, Web-Services based architecture: All of the participating components are accessible (one way or another) through (Simple Object Access Protocol) SOAP-based Web Services.

  14. Earth Science Mining Web Services

    NASA Technical Reports Server (NTRS)

    Pham, Long; Lynnes, Christopher; Hegde, Mahabaleshwa; Graves, Sara; Ramachandran, Rahul; Maskey, Manil; Keiser, Ken

    2008-01-01

    To allow scientists further capabilities in the area of data mining and web services, the Goddard Earth Sciences Data and Information Services Center (GES DISC) and researchers at the University of Alabama in Huntsville (UAH) have developed a system to mine data at the source without the need of network transfers. The system has been constructed by linking together several pre-existing technologies: the Simple Scalable Script-based Science Processor for Measurements (S4PM), a processing engine at the GES DISC; the Algorithm Development and Mining (ADaM) system, a data mining toolkit from UAH that can be configured in a variety of ways to create customized mining processes; ActiveBPEL, a workflow execution engine based on BPEL (Business Process Execution Language); XBaya, a graphical workflow composer; and the EOS Clearinghouse (ECHO). XBaya is used to construct an analysis workflow at UAH using ADaM components, which are also installed remotely at the GES DISC, wrapped as Web Services. The S4PM processing engine searches ECHO for data using space-time criteria, staging them to cache, allowing the ActiveBPEL engine to remotely orchestrate the processing workflow within S4PM. As mining is completed, the output is placed in an FTP holding area for the end user. The goals are to give users control over the data they want to process, while mining data at the data source using the server's resources rather than transferring the full volume over the internet. These diverse technologies have been infused into a functioning, distributed system with only minor changes to the underlying technologies. The key to this infusion is the loosely coupled, Web-Services based architecture: All of the participating components are accessible (one way or another) through (Simple Object Access Protocol) SOAP-based Web Services.

  15. Design Drivers of Water Data Services

    NASA Astrophysics Data System (ADS)

    Valentine, D.; Zaslavsky, I.

    2008-12-01

    The CUAHSI Hydrologic Information System (HIS) is being developed as a geographically distributed network of hydrologic data sources and functions that are integrated using web services so that they function as a connected whole. The core of the HIS service-oriented architecture is a collection of water web services, which provide uniform access to multiple repositories of observation data. These services use SOAP protocols to communicate WaterML (Water Markup Language). When a client makes a data or metadata request using a CUAHSI HIS web service, these requests are made in a standard manner, following the CUAHSI HIS web service signatures - regardless of how the underlying data source may be organized. Also, regardless of the format in which the data are returned by the source, the web services respond to requests by returning the data in the standard format of WaterML. The goal of the WaterML design has been to capture the semantics of hydrologic observation discovery and retrieval and express the point-observation information model as an XML schema. To a large extent, it follows the representation of the information model as adopted by the CUAHSI Observations Data Model (ODM) relational design. Another driver of the WaterML design is the specifications and metadata adopted by USGS NWIS, EPA STORET, and other federal agencies, as it seeks to provide a common foundation for exchanging both agency data and data collected in multiple academic projects. Another WaterML design principle was to create, in version 1 of HIS in particular, a fairly rigid and simple XML schema which is easy to generate and parse, thus creating the least barrier for adoption by hydrologists. WaterML includes a series of elements that reflect common notions used in describing hydrologic observations, such as site, variable, source, observation series, seriesCatalog, and data values. Each of the three main request methods in the water web services - GetSiteInfo, GetVariableInfo, and GetValues - has a corresponding response element in WaterML: SitesResponse, VariableResponse, and TimeSeriesResponse. The WaterML specification is being adopted by federal agencies. The experimental USGS NWIS Daily Values web service returns a WaterML-compliant TimeSeriesResponse. The National Climatic Data Center is also prototyping WaterML for data delivery, and has developed a REST-based service that generates WaterML-compliant output for the NCDC ASOS network. Such agency-supported web services coming online provide a much more efficient way to deliver agency data compared to the web site scraper services that the CUAHSI HIS project developed initially. The CUAHSI water data web services will continue to serve as the main communication mechanism within CUAHSI HIS, connecting a variety of data sources with a growing set of web service clients being developed in both academia and the commercial sector. The driving forces for the development of web services continue to be:
    - Application experience and needs of the growing number of CUAHSI HIS users, who experiment with additional data types, analysis modes, data browsing and searching strategies, and provide feedback to WaterML developers;
    - Data description requirements posed by various federal and state agencies;
    - Harmonization with standards being adopted or developed in neighboring communities, in particular the relevant standards being explored within the Open Geospatial Consortium.
    CUAHSI WaterML is a standard output schema for CUAHSI HIS water web services.
Its formal specification is available as OGC discussion paper at www.opengeospatial.org/standards/dp/ class="ab'>
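
    As an illustration of the request/response pattern described above, here is a minimal Python sketch of a GetValues call against a hypothetical WaterOneFlow-style SOAP endpoint (the WSDL URL and the site and variable codes are placeholders; some deployments also require an authToken argument):

        from zeep import Client  # third-party SOAP client (pip install zeep)

        # Hypothetical WSDL location; each HIS data provider publishes its own.
        WSDL = "http://hydroportal.example.org/cuahsi_1_1.asmx?WSDL"
        client = Client(WSDL)

        # GetValues returns a WaterML TimeSeriesResponse document for one site,
        # one variable, and a time interval.
        response = client.service.GetValues(
            location="NWISDV:10109000",  # placeholder site code
            variable="NWISDV:00060",     # placeholder variable code
            startDate="2008-01-01",
            endDate="2008-01-31",
        )
        print(response)  # WaterML XML; parse with any XML library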

  16. Data Sets and Data Services at the Northern California Earthquake Data Center

    NASA Astrophysics Data System (ADS)

    Neuhauser, D. S.; Zuzlewski, S.; Allen, R. M.

    2014-12-01

    The Northern California Earthquake Data Center (NCEDC) houses a unique and comprehensive data archive and provides real-time services for a variety of seismological and geophysical data sets that encompass northern and central California. We have over 80 terabytes of continuous and event-based time series data from broadband, short-period, strong motion, and strain sensors as well as continuous and campaign GPS data at both standard and high sample rates in both raw and RINEX format. The Northern California Seismic System (NCSS), operated by UC Berkeley and USGS Menlo Park, has recorded over 890,000 events from 1984 to the present, and the NCEDC provides catalogs, parametric information, moment tensors and first motion mechanisms, and time series data for these events. We also host and provide event catalogs, parametric information, and event waveforms for DOE enhanced geothermal system monitoring in northern California and Nevada. The NCEDC provides a variety of ways for users to access these data. The most recent development is web services, which provide interactive, command-line, or program-based workflow access to data. Web services use well-established server and client protocols and a RESTful software architecture that allow users to easily submit queries and receive the requested data in real time rather than through batch or email-based requests. Data are returned to the user in an appropriate format such as XML, RESP, simple text, or MiniSEED, depending on the service and selected output format. The NCEDC supports all FDSN-defined web services as well as a number of IRIS-defined and NCEDC-defined services. We also continue to support older email-based and browser-based access to data. NCEDC data and web services can be found at http://www.ncedc.org and http://service.ncedc.org.
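
    For a sense of what program-based access looks like, here is a short Python sketch using ObsPy's FDSN client against the NCEDC endpoint (the station, channel, and time window are illustrative only):

        from obspy import UTCDateTime
        from obspy.clients.fdsn import Client  # pip install obspy

        # "NCEDC" is one of ObsPy's registered FDSN endpoint keys (service.ncedc.org).
        client = Client("NCEDC")

        # fdsnws-dataselect query: network, station, location, channel, start, end.
        t0 = UTCDateTime("2014-08-24T10:20:44")
        stream = client.get_waveforms("BK", "CMB", "00", "BHZ", t0, t0 + 600)
        print(stream)  # MiniSEED data parsed into ObsPy Trace objects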

  17. Northern California Earthquake Data Center: Data Sets and Data Services

    NASA Astrophysics Data System (ADS)

    Neuhauser, D. S.; Allen, R. M.; Zuzlewski, S.

    2015-12-01

    The Northern California Earthquake Data Center (NCEDC) provides a permanent archive and real-time data distribution services for unique and comprehensive seismological and geophysical data sets encompassing northern and central California. We provide access to over 85 terabytes of continuous and event-based time series data from broadband, short-period, strong motion, and strain sensors as well as continuous and campaign GPS data at both standard and high sample rates. The Northern California Seismic System (NCSS), operated by UC Berkeley and USGS Menlo Park, has recorded over 900,000 events from 1984 to the present, and the NCEDC serves catalogs, parametric information, moment tensors and first motion mechanisms, and time series data for these events. We also serve event catalogs, parametric information, and event waveforms for DOE enhanced geothermal system monitoring in northern California and Nevada. The NCEDC provides several ways for users to access these data. The most recent development is web services, which provide interactive, command-line, or program-based workflow access to data. Web services use well-established server and client protocols and a RESTful software architecture that allow users to easily submit queries and receive the requested data in real time rather than through batch or email-based requests. Data are returned to the user in an appropriate format such as XML, RESP, simple text, or MiniSEED, depending on the service and selected output format. The NCEDC supports all FDSN-defined web services as well as a number of IRIS-defined and NCEDC-defined services. We also continue to support older email-based and browser-based access to data. NCEDC data and web services can be found at http://www.ncedc.org and http://service.ncedc.org.

  18. Rail-dbGaP: analyzing dbGaP-protected data in the cloud with Amazon Elastic MapReduce.

    PubMed

    Nellore, Abhinav; Wilks, Christopher; Hansen, Kasper D; Leek, Jeffrey T; Langmead, Ben

    2016-08-15

    Public archives contain thousands of trillions of bases of valuable sequencing data. More than 40% of the Sequence Read Archive is human data protected by provisions such as dbGaP. To analyze dbGaP-protected data, researchers must typically work with IT administrators and signing officials to ensure all levels of security are implemented at their institution. This is a major obstacle, impeding reproducibility and reducing the utility of archived data. We present a protocol and software tool for analyzing protected data in a commercial cloud. The protocol, Rail-dbGaP, is applicable to any tool running on Amazon Web Services Elastic MapReduce. The tool, Rail-RNA v0.2, is a spliced aligner for RNA-seq data, which we demonstrate by running on 9662 samples from the dbGaP-protected GTEx consortium dataset. The Rail-dbGaP protocol makes explicit for the first time the steps an investigator must take to develop Elastic MapReduce pipelines that analyze dbGaP-protected data in a manner compliant with NIH guidelines. Rail-RNA automates implementation of the protocol, making it easy for typical biomedical investigators to study protected RNA-seq data, regardless of their local IT resources or expertise. Rail-RNA is available from http://rail.bio. Technical details on the Rail-dbGaP protocol, as well as an implementation walkthrough, are available at https://github.com/nellore/rail-dbgap. Detailed instructions on running Rail-RNA on dbGaP-protected data using Amazon Web Services are available at http://docs.rail.bio/dbgap/. Contact: anellore@gmail.com or langmea@cs.jhu.edu. Supplementary data are available at Bioinformatics online. © The Author 2016. Published by Oxford University Press.
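
    The protocol's actual steps are documented at the links above; purely as a sketch of the kind of building block involved, the following Python/boto3 call launches an Elastic MapReduce cluster with a pre-created security configuration (all names here are placeholders, not Rail-dbGaP's own settings):

        import boto3  # AWS SDK for Python

        emr = boto3.client("emr", region_name="us-east-1")

        # The security configuration (e.g., enabling at-rest and in-transit
        # encryption) must be created beforehand; its name here is hypothetical.
        response = emr.run_job_flow(
            Name="protected-rnaseq-analysis",
            ReleaseLabel="emr-5.36.0",
            Instances={
                "MasterInstanceType": "m5.xlarge",
                "SlaveInstanceType": "m5.xlarge",
                "InstanceCount": 4,
                "KeepJobFlowAliveWhenNoSteps": False,
            },
            SecurityConfiguration="dbgap-encryption-config",
            JobFlowRole="EMR_EC2_DefaultRole",
            ServiceRole="EMR_DefaultRole",
        )
        print(response["JobFlowId"])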

  19. OC ToGo: bed site image integration into OpenClinica with mobile devices

    NASA Astrophysics Data System (ADS)

    Haak, Daniel; Gehlen, Johan; Jonas, Stephan; Deserno, Thomas M.

    2014-03-01

    Imaging and image-based measurements nowadays play an essential role in controlled clinical trials, but electronic data capture (EDC) systems insufficiently support the integration of images captured by mobile devices (e.g. smartphones and tablets). The web application OpenClinica has established itself as one of the world's leading EDC systems and is used to collect, manage and store data of clinical trials in electronic case report forms (eCRFs). In this paper, we present a mobile application for instantaneous integration of images into OpenClinica directly during examination at the patient's bed site. The communication between the Android application and OpenClinica is based on the simple object access protocol (SOAP) and representational state transfer (REST) web services for metadata, and on the secure file transfer protocol (SFTP) for image transfer. OpenClinica's web services are used to query context information (e.g. existing studies, events and subjects) and to import data into the eCRF, as well as to export eCRF metadata and structural information. A stable image transfer is ensured, and progress information (e.g. remaining time) is visualized to the user. The workflow is demonstrated for a European multi-center registry, where patients with calciphylaxis disease are included. Our approach improves the EDC workflow, saves time, and reduces costs. Furthermore, data privacy is enhanced, since storage of private health data on the imaging devices becomes obsolete.
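
    A minimal Python sketch of the two-channel pattern described above, with a hypothetical server, credentials, and endpoint path (the actual OpenClinica web-service API is not reproduced here):

        import requests
        import paramiko  # SSH/SFTP client library

        BASE = "https://edc.example.org"  # hypothetical OpenClinica host

        # 1) Fetch context metadata (studies, events, subjects) over a
        #    hypothetical REST endpoint.
        studies = requests.get(BASE + "/rest/studies",
                               auth=("user", "secret"), timeout=10).json()

        # 2) Transfer the captured image over SFTP, as described for image data.
        transport = paramiko.Transport(("edc.example.org", 22))
        transport.connect(username="user", password="secret")
        sftp = paramiko.SFTPClient.from_transport(transport)
        sftp.put("wound_photo.jpg", "/incoming/wound_photo.jpg")
        sftp.close()
        transport.close()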

  20. Formats and Network Protocols for Browser Access to 2D Raster Data

    NASA Astrophysics Data System (ADS)

    Plesea, L.

    2015-12-01

    Tiled web maps in browsers are a major success story, forming the foundation of many current web applications. Enabling tiled data access is the next logical step, and is likely to meet with similar success. Many ad-hoc approaches have already started to appear, and something similar is being explored within the Open Geospatial Consortium. One of the main obstacles to making browser data access a reality is the lack of a well-known data format. This obstacle also represents an opportunity to analyze the requirements and possible candidates, applying lessons learned from web tiled image services and protocols. Similar to its image counterpart, a web tile raster data format needs good intrinsic compression and must be able to handle high byte-count data types, including floating point. An overview of a possible solution to the format problem, a 2D raster data compression algorithm called Limited Error Raster Compression (LERC), will be presented. In addition to the format, best practices for high request rate HTTP services also need to be followed. In particular, content delivery network (CDN) caching suitability needs to be part of any design, not an afterthought. Last but not least, HTML5 browsers will certainly be part of any solution, since they provide improved access to binary data as well as more powerful ways to view and interact with the data in the browser. In a simple but relevant application, digital elevation model (DEM) raster data is served as LERC-compressed data tiles which are used to generate terrain in an HTML5 scene viewer.
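
    As a sketch of the access pattern, the following Python snippet fetches a single tile from a hypothetical LERC endpoint following the usual {z}/{y}/{x} tiling convention and inspects the headers that determine CDN cacheability:

        import requests

        # Hypothetical LERC elevation-tile endpoint.
        URL = "https://tiles.example.org/dem/{z}/{y}/{x}.lerc"

        resp = requests.get(URL.format(z=7, y=45, x=22), timeout=10)
        resp.raise_for_status()
        tile = resp.content  # LERC-compressed block; decode with a LERC reader

        # Cacheable, immutable responses are what make CDN-backed tiled access scale.
        print(resp.headers.get("Content-Type"), resp.headers.get("Cache-Control"))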

  1. Protocol for a Randomized Controlled Trial of Proactive Web-Based Versus Telephone-Based Information and Support: Can Electronic Platforms Deliver Effective Care for Lung Cancer Patients?

    PubMed

    Paul, Christine L; Boyes, Allison W; O'Brien, Lorna; Baker, Amanda L; Henskens, Frans A; Roos, Ian; Clinton-McHarg, Tara; Bellamy, Douglas; Colburn, Glenda; Rose, Shiho; Cox, Martine E; Fradgley, Elizabeth A; Baird, Hannah; Barker, Daniel

    2016-10-26

    Community-based services such as telephone support lines can provide valuable informational, emotional, and practical support for cancer patients via telephone- or Web-based (live chat or email) platforms. However, very little rigorous research has examined the efficacy of such services in improving patient outcomes. This study will determine whether: proactive telephone or Web-delivered support produces outcomes superior to printed information; and Web-delivered support produces outcomes comparable to telephone support. A consecutive sample of 501 lung cancer outpatients will be recruited from 50 Australian health services to participate in a patient-randomized controlled trial (RCT). Eligible individuals must: be 18 years or older; have received a lung cancer diagnosis (including mesothelioma) within the previous 4 months; have an approximate life expectancy of at least 6 months; and have Internet access. Participants will be randomly allocated to receive: (1) an information booklet, (2) proactive telephone support, or (3) proactive Web support, chat, and/or email. The primary patient outcomes will be measured by the General Health Questionnaire (GHQ-12) and Health Education and Impact Questionnaire (heiQ) at 3 and 6 months post recruitment. The acceptability of proactive recruitment strategies will also be assessed. It is hypothesized that participants receiving telephone or Web support will report reduced distress (GHQ-12 scores that are 0.3 standard deviations (SD) lower) and greater self-efficacy (heiQ scores that are 0.3 SDs higher) than participants receiving booklets. Individuals receiving Web support will report heiQ scores within 0.29 SDs of individuals receiving telephone support. If proven effective, electronic approaches such as live-chat and email have the potential to increase the accessibility and continuity of supportive care delivered by community-based services. This evidence may also inform the redesigning of helpline-style services to be effective and responsive to patient needs.

  2. Current experiences with internet telepathology and possible evolution in the next generation of Internet services.

    PubMed

    Della Mea, V; Beltrami, C A

    2000-01-01

    The last five years' experience has clearly demonstrated the possible applications of the Internet for telepathology. They may be listed as follows: (a) teleconsultation via multimedia e-mail; (b) teleconsultation via web-based tools; (c) distance education by means of the World Wide Web; (d) virtual microscope management through Web and Java interfaces; (e) real-time consultations through Internet-based videoconferencing. Such applications have led to the recognition of some important limits of the Internet when dealing with telemedicine: (i) no guarantees on the quality of service (QoS); (ii) inadequate security and privacy; (iii) for some countries, low bandwidth and thus low responsiveness for real-time applications. Currently, there are several innovations in the world of the Internet. Different initiatives have been aimed at improving the Internet protocols in order to provide quality of service, multimedia support, security and other advanced services, together with greater bandwidth. The forthcoming Internet improvements, although driven by electronic commerce, video on demand, and other commercial needs, are of real interest also for telemedicine, because they address the limits currently slowing down the use of the Internet. Once such new services become available, telepathology applications may move quickly from research to daily practice.

  3. GeoNetwork powered GI-cat: a geoportal hybrid solution

    NASA Astrophysics Data System (ADS)

    Baldini, Alessio; Boldrini, Enrico; Santoro, Mattia; Mazzetti, Paolo

    2010-05-01

    With the aim of setting up a Spatial Data Infrastructure (SDI), the creation of a system for metadata management and discovery plays a fundamental role. An effective solution is the use of a geoportal (e.g. the FAO/ESA geoportal), which has the important benefit of being accessible from a web browser. With this work we present a solution based on integrating two of the available frameworks: GeoNetwork and GI-cat. GeoNetwork is an open-source software package designed to improve the accessibility of a wide variety of data, together with the associated ancillary information (metadata), at different scales and from multidisciplinary sources; data are organized and documented in a standard and consistent way. GeoNetwork implements both the Portal and Catalog components of a Spatial Data Infrastructure (SDI) as defined in the OGC Reference Architecture. It provides tools for managing and publishing metadata on spatial data and related services. GeoNetwork allows harvesting of various types of web data sources, e.g. OGC Web Services (CSW, WCS, WMS). GI-cat is a distributed catalog based on a service-oriented framework of modular components and can be customized and tailored to support different deployment scenarios. It can federate a multiplicity of catalog services, as well as inventory and access services, in order to discover and access heterogeneous ESS resources. The federated resources are exposed by GI-cat through several standard catalog interfaces (e.g. OGC CSW AP ISO, OpenSearch, etc.) and by the GI-cat extended interface. Specific components called accessors implement mediation services for interfacing heterogeneous service providers, each of which exposes a specific standard specification. These mediating components resolve the multiplicity of provider data models by mapping them onto the GI-cat internal data model, which implements the ISO 19115 Core profile. Accessors also implement the query protocol mapping: they translate query requests expressed according to the interface protocols exposed by GI-cat into the multiple query dialects spoken by the resource service providers. Currently, a number of well-accepted catalog and inventory services are supported, including several OGC Web Services, THREDDS Data Server, SeaDataNet Common Data Index, GBIF, and OpenSearch engines. A GeoNetwork-powered GI-cat has been developed in order to exploit the best of the two frameworks. The new system uses a modified version of the GeoNetwork web interface that adds the capability of querying a specified GI-cat catalog and not only the GeoNetwork internal database. The resulting system is a geoportal in which GI-cat plays the role of the search engine. This new system makes it possible to distribute the query over the different types of data sources linked to a GI-cat. The metadata results of the query are then visualized by the GeoNetwork web interface. This configuration was tested in the framework of GIIDA, a project of the Italian National Research Council (CNR) focused on data accessibility and interoperability. A second advantage of this solution is achieved by setting up a GeoNetwork catalog among the accessors of the GI-cat instance. Such a configuration allows GI-cat, in turn, to run queries against the internal GeoNetwork database. This makes both the harvesting and metadata editing functionalities provided by GeoNetwork and the distributed search functionality of GI-cat available in a consistent way through the same web interface.
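
    Because the combined system is still exposed through standard interfaces such as CSW, generic clients can query it; here is a minimal Python sketch using OWSLib against a hypothetical GeoNetwork CSW endpoint:

        from owslib.csw import CatalogueServiceWeb
        from owslib.fes import PropertyIsLike  # OGC Filter Encoding helpers

        # Hypothetical CSW endpoint of a GeoNetwork node.
        csw = CatalogueServiceWeb("https://geoportal.example.org/geonetwork/srv/eng/csw")

        query = PropertyIsLike("csw:AnyText", "%temperature%")
        csw.getrecords2(constraints=[query], maxrecords=10)

        for rec_id, rec in csw.records.items():
            print(rec_id, rec.title)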

  4. A Prototype Web-based system for GOES-R Space Weather Data

    NASA Astrophysics Data System (ADS)

    Sundaravel, A.; Wilkinson, D. C.

    2010-12-01

    The Geostationary Operational Environmental Satellite-R Series (GOES-R) makes use of advanced instruments and technologies to monitor the Earth's surface and provide accurate space weather data. The first GOES-R series satellite is scheduled to be launched in 2015. The data from the satellite will be widely used by scientists for space weather modeling and predictions. This project looks into how these datasets can be made available to scientists on the Web and how to assist them in their research. We are developing a prototype web-based system that allows users to browse, search and download these data. The GOES-R datasets will be archived in NetCDF (Network Common Data Form) and CSV (Comma Separated Values) formats. NetCDF is a self-describing data format that contains both the metadata information and the data, stored in an array-oriented fashion. The web-based system will offer services in two ways: via a web application (portal) and via web services. Using the web application, users can download data in NetCDF or CSV format and can also plot a graph of the data. The web page displays the various categories of data and the time intervals for which the data are available. The web application (client) sends the user query to the server, which then connects to the data sources to retrieve the data and delivers it to the users. Data access will also be provided via SOAP (Simple Object Access Protocol) and REST (Representational State Transfer) web services. These provide functions which can be used by other applications to fetch data and use the data for further processing. To build the prototype system, we are making use of proxy data from existing GOES and POES space weather datasets. Java is the programming language used in developing tools that format data to NetCDF and CSV. For the web technology we have chosen Grails to develop both the web application and the services. Grails is an open source web application framework based on the Groovy language. We are also making use of the THREDDS (Thematic Realtime Environmental Distributed Data Services) server to publish and access the NetCDF files. We have completed developing software tools to generate NetCDF and CSV data files, as well as tools to translate NetCDF to CSV. The current phase of the project involves designing and developing the web interface.
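
    The project's own tools are written in Java; for illustration, the NetCDF-to-CSV translation step can be sketched in a few lines of Python (file and variable names are hypothetical):

        import csv
        from netCDF4 import Dataset  # pip install netCDF4

        ds = Dataset("goes_proxy_xray_flux.nc")  # hypothetical proxy-data file
        times = ds.variables["time"][:]          # hypothetical variable names
        flux = ds.variables["xray_flux"][:]

        with open("xray_flux.csv", "w", newline="") as f:
            writer = csv.writer(f)
            writer.writerow(["time", "xray_flux"])
            writer.writerows(zip(times, flux))
        ds.close()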

  5. VISIBIOweb: visualization and layout services for BioPAX pathway models

    PubMed Central

    Dilek, Alptug; Belviranli, Mehmet E.; Dogrusoz, Ugur

    2010-01-01

    With recent advancements in techniques for cellular data acquisition, information on cellular processes has been increasing at a dramatic rate. Visualization is critical to analyzing and interpreting complex information; representing cellular processes or pathways is no exception. VISIBIOweb is a free, open-source, web-based pathway visualization and layout service for pathway models in BioPAX format. With VISIBIOweb, one can obtain well-laid-out views of pathway models using the standard notation of the Systems Biology Graphical Notation (SBGN), and can embed such views within one's web pages as desired. Pathway views may be navigated using zoom and scroll tools; pathway object properties, including any external database references available in the data, may be inspected interactively. The automatic layout component of VISIBIOweb may also be accessed programmatically from other tools using Hypertext Transfer Protocol (HTTP). The web site is free and open to all users and there is no login requirement. It is available at: http://visibioweb.patika.org. PMID:20460470
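
    The abstract notes only that the layout component is callable over plain HTTP, without spelling out the API; as an illustration of the pattern, a Python sketch with a hypothetical endpoint path and form-field name:

        import requests

        # Endpoint path and field name are placeholders, not the documented API.
        with open("pathway.owl", "rb") as f:  # a BioPAX model file
            resp = requests.post("http://visibioweb.patika.org/layout",
                                 files={"model": f}, timeout=30)
        print(resp.status_code, resp.headers.get("Content-Type"))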

  6. An Interface Transformation Strategy for AF-IPPS

    DTIC Science & Technology

    2012-12-01

    Representational State Transfer (REST) and Java Enterprise Edition (Java EE) to implement a reusable "translation service." For SOAP and REST protocols, XML and ... of best-of-breed open source software. The product baseline is summarized in the following table:
    Product | Function | Description
    Java | Language | ... Compiler & Runtime
    JBoss Application Server | Applications, Messaging, Translation | Java EE Application Server
    Ruby on Rails | Applications | Ruby Web ...

  7. GENESIS SciFlo: Choreographing Interoperable Web Services on the Grid using a Semantically-Enabled Dataflow Execution Environment

    NASA Astrophysics Data System (ADS)

    Wilson, B. D.; Manipon, G.; Xing, Z.

    2007-12-01

    The General Earth Science Investigation Suite (GENESIS) project is a NASA-sponsored partnership between the Jet Propulsion Laboratory, academia, and NASA data centers to develop a new suite of Web Services tools to facilitate multi-sensor investigations in Earth System Science. The goal of GENESIS is to enable large-scale, multi-instrument atmospheric science using combined datasets from the AIRS, MODIS, MISR, and GPS sensors. Investigations include cross-comparison of spaceborne climate sensors, cloud spectral analysis, study of upper troposphere-stratosphere water transport, study of the aerosol indirect cloud effect, and global climate model validation. The challenges are to bring together very large datasets, reformat and understand the individual instrument retrievals, co-register or re-grid the retrieved physical parameters, perform computationally-intensive data fusion and data mining operations, and accumulate complex statistics over months to years of data. To meet these challenges, we have developed a Grid computing and dataflow framework, named SciFlo, in which we are deploying a set of versatile and reusable operators for data access, subsetting, registration, mining, fusion, compression, and advanced statistical analysis. SciFlo leverages remote Web Services, called via Simple Object Access Protocol (SOAP) or REST (one-line) URLs, and the Grid Computing standards (WS-* & Globus Alliance toolkits), and enables scientists to do multi-instrument Earth Science by assembling reusable Web Services and native executables into a distributed computing flow (tree of operators). The SciFlo client & server engines optimize the execution of such distributed data flows and allow the user to transparently find and use datasets and operators without worrying about the actual location of the Grid resources. In particular, SciFlo exploits the wealth of datasets accessible by OpenGIS Consortium (OGC) Web Mapping Servers & Web Coverage Servers (WMS/WCS), and by Open Data Access Protocol (OpenDAP) servers. SciFlo also publishes its own SOAP services for space/time query and subsetting of Earth Science datasets, and automated access to large datasets via lists of (FTP, HTTP, or DAP) URLs which point to on-line HDF or netCDF files. Typical distributed workflows obtain datasets by calling standard WMS/WCS servers or discovering and fetching data granules from ftp sites; invoke remote analysis operators available as SOAP services (interface described by a WSDL document); and merge results into binary containers (netCDF or HDF files) for further analysis using local executable operators. Naming conventions (HDFEOS and CF-1.0 for netCDF) are exploited to automatically understand and read on-line datasets. More interoperable conventions, and broader adoption of existing conventions, are vital if we are to "scale up" automated choreography of Web Services beyond toy applications. Recently, the ESIP Federation sponsored a collaborative activity in which several ESIP members developed some collaborative science scenarios for atmospheric and aerosol science, and then choreographed services from multiple groups into demonstration workflows using the SciFlo engine and a Business Process Execution Language (BPEL) workflow engine. 
We will discuss the lessons learned from this activity, the need for standardized interfaces (like WMS/WCS), the difficulty in agreeing on even simple XML formats and interfaces, the benefits of doing collaborative science analysis at the "touch of a button" once services are connected, and further collaborations that are being pursued.

  8. Connecting long-tail scientists with big data centers using SaaS

    NASA Astrophysics Data System (ADS)

    Percivall, G. S.; Bermudez, L. E.

    2012-12-01

    Big data centers and long-tail scientists represent two extremes in the geoscience research community. Interoperability and inter-use based on software-as-a-service (SaaS) increase access to big data holdings by this underserved community of scientists. Large, institutional data centers have long been recognized as vital resources in the geoscience community. Permanent data archiving and dissemination centers provide "access to the data and (are) a critical source of people who have experience in the use of the data and can provide advice and counsel for new applications." [NRC] The "long tail of science" comprises the geoscience researchers that work separately from institutional data centers [Heidorn]. Long-tail scientists need to be efficient consumers of data from large, institutional data centers. Discussions in NSF EarthCube capture the challenges: "Like the vast majority of NSF-funded researchers, Alice (a long-tail scientist) works with limited resources. In the absence of suitable expertise and infrastructure, the apparently simple task that she assigns to her graduate student becomes an information discovery and management nightmare. Downloading and transforming datasets takes weeks." [Foster et al.] The long-tail metaphor points to methods to bridge the gap, i.e., the Web. A decade ago, OGC began building a geospatial information space using open web standards for geoprocessing [ORM]. Recently, [Foster et al.] accurately observed that "by adopting, adapting, and applying semantic web and SaaS technologies, we can make the use of geoscience data as easy and convenient as consumption of online media." SaaS places web services into cloud computing. SaaS for geospatial is emerging rapidly, building on the first-generation geospatial web, e.g., the OGC Web Coverage Service [WCS] and the Data Access Protocol [DAP]. Several recent examples show progress in applying SaaS to the geosciences:
    - NASA's Earth Data Coherent Web has a goal to improve the science user experience, using web services (e.g. W*S, SOAP, RESTful) to reduce barriers to using EOSDIS data [ECW].
    - NASA's LANCE provides direct access to vast amounts of satellite data using the OGC Web Map Tile Service (WMTS).
    - NOAA's Unified Access Framework for Gridded Data (UAF Grid) is a web service based capability for direct access to a variety of datasets using netCDF, OPeNDAP, THREDDS, WMS and WCS. [UAF]
    Tools to access SaaS offerings are many and varied: some proprietary, others open source; some run in browsers, others are stand-alone applications. What is required is interoperability using the web interfaces offered by the data centers. NOAA's UAF service stack supports Matlab, ArcGIS, Ferret, GrADS, Google Earth, IDV, and LAS. Any SaaS that offers OGC Web Services (WMS, WFS, WCS) can be accessed by scores of clients [OGC]. While there has been much progress in the recent year toward offering web services for the long tail of scientists, more needs to be done. Web services offer data access, but more than access is needed for inter-use of data, e.g. defining data schemas that allow for data fusion, and addressing coordinate systems, spatial geometry, and semantics for observations. Connecting long-tail scientists with large data centers using SaaS and, in the future, the semantic web will address this large and currently underserved user community.
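
    As an illustration of how such service stacks reduce the download-and-transform burden, here is a Python sketch that opens a hypothetical OPeNDAP endpoint lazily and moves only the requested subset over the network:

        import xarray as xr  # with the netCDF4 backend, OPeNDAP URLs open directly

        # Hypothetical OPeNDAP endpoint of the kind aggregated under UAF/THREDDS.
        URL = "https://data.example.gov/thredds/dodsC/sst_analysis"

        ds = xr.open_dataset(URL)  # lazy: only metadata is fetched here
        subset = ds["sst"].sel(lat=slice(30, 45), lon=slice(-80, -60))
        print(float(subset.mean()))  # data moves only for what is computed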

  9. An Open Source Tool to Test Interoperability

    NASA Astrophysics Data System (ADS)

    Bermudez, L. E.

    2012-12-01

    Scientists interact with information at various levels, from gathering raw observed data to accessing portrayed, processed, quality-controlled data. Geoinformatics tools help scientists in the acquisition, storage, processing, dissemination and presentation of geospatial information. Most of the interactions occur in a distributed environment between software components that take the role of either client or server. The communication between components includes protocols, encodings of messages and managing of errors. Testing of these communication components is important to guarantee proper implementation of standards. The communication between clients and servers can be ad hoc or follow standards. By following standards, interoperability between components increases while the time to develop new software decreases. The Open Geospatial Consortium (OGC) not only coordinates the development of standards but also, within the Compliance Testing Program (CITE), provides a testing infrastructure to test clients and servers. The OGC Web-based Test Engine Facility, based on TEAM Engine, allows developers to test web services and clients for correct implementation of OGC standards. TEAM Engine is a Java open-source facility, available on SourceForge, that can be run via the command line, deployed in a web servlet container, or integrated into a developer's environment via Maven. TEAM Engine uses the Compliance Test Language (CTL) and TestNG to test HTTP requests, SOAP services and XML instances against schemas and Schematron-based assertions of any type of web service, not only OGC services. For example, the OGC Web Feature Service (WFS) 1.0.0 test has more than 400 test assertions. Some of these assertions include conformance of HTTP responses; conformance of GML-encoded data; proper values for elements and attributes in the XML; and correct error responses. This presentation will provide an overview of TEAM Engine, an introduction to testing via the OGC testing web site, and a description of how to perform local tests. It will also explain how to participate in the open source development of TEAM Engine.
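
    Not TEAM Engine or CTL themselves, but the kind of assertion such a compliance test encodes can be illustrated in a few lines of Python against a hypothetical WFS endpoint:

        import requests
        import xml.etree.ElementTree as ET

        URL = "https://services.example.org/wfs"  # hypothetical service under test

        resp = requests.get(URL, params={"service": "WFS",
                                         "request": "GetCapabilities"}, timeout=10)

        # Assertions of the kind a compliance test expresses:
        assert resp.status_code == 200, "HTTP status must be 200"
        root = ET.fromstring(resp.content)
        assert root.tag.endswith("WFS_Capabilities"), "unexpected root element"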

  10. Trends in ecosystem service research: early steps and current drivers.

    PubMed

    Vihervaara, Petteri; Rönkä, Mia; Walls, Mari

    2010-06-01

    Over the past 50 years, human beings have influenced ecosystems more rapidly than at any similar time in human history, drastically altering ecosystem functioning. Along with ecosystem transformation and degradation, a number of studies have addressed the functioning, assessment and management of ecosystems. The concept of ecosystem services has been developed in the scientific literature since the end of the 1970s. However, ecosystem service research has focused on certain service categories, ecosystem types, and geographical areas, while substantial knowledge gaps remain concerning several aspects. We assess the development and current status of ecosystem service research on the basis of publications collected from the Web of Science. The material consists of (1) articles (n = 353) from all the years included in the Web of Science down to the completion of the Millennium Ecosystem Assessment and (2) more recent articles (n = 687) published between 2006 and 2008. We also assess the importance of international processes, such as the Convention on Biological Diversity, the Kyoto Protocol and the Millennium Ecosystem Assessment, as drivers of ecosystem service research. Finally, we identify future prospects and research needs concerning the assessment and management of ecosystem services.

  11. ASCOT: a text mining-based web-service for efficient search and assisted creation of clinical trials

    PubMed Central

    2012-01-01

    Clinical trials are mandatory protocols describing medical research on humans, and among the most valuable sources of medical practice evidence. Searching for trials relevant to some query is laborious due to the immense number of existing protocols. Apart from search, writing new trials includes composing detailed eligibility criteria, which might be time-consuming, especially for new researchers. In this paper we present ASCOT, an efficient search application customised for clinical trials. ASCOT uses text mining and data mining methods to enrich clinical trials with metadata that in turn serve as effective tools to narrow down search. In addition, ASCOT integrates a component for recommending eligibility criteria based on a set of selected protocols. PMID:22595088

  12. ASCOT: a text mining-based web-service for efficient search and assisted creation of clinical trials.

    PubMed

    Korkontzelos, Ioannis; Mu, Tingting; Ananiadou, Sophia

    2012-04-30

    Clinical trials are mandatory protocols describing medical research on humans, and among the most valuable sources of medical practice evidence. Searching for trials relevant to some query is laborious due to the immense number of existing protocols. Apart from search, writing new trials includes composing detailed eligibility criteria, which might be time-consuming, especially for new researchers. In this paper we present ASCOT, an efficient search application customised for clinical trials. ASCOT uses text mining and data mining methods to enrich clinical trials with metadata that in turn serve as effective tools to narrow down search. In addition, ASCOT integrates a component for recommending eligibility criteria based on a set of selected protocols.

  13. The Ontological Perspectives of the Semantic Web and the Metadata Harvesting Protocol: Applications of Metadata for Improving Web Search.

    ERIC Educational Resources Information Center

    Fast, Karl V.; Campbell, D. Grant

    2001-01-01

    Compares the implied ontological frameworks of the Open Archives Initiative Protocol for Metadata Harvesting and the World Wide Web Consortium's Semantic Web. Discusses current search engine technology, semantic markup, indexing principles of special libraries and online databases, and componentization and the distinction between data and…

  14. A Formal Semantics for the WS-BPEL Recovery Framework

    NASA Astrophysics Data System (ADS)

    Dragoni, Nicola; Mazzara, Manuel

    While current studies on Web services composition are mostly focused - from the technical viewpoint - on standards and protocols, this work investigates the adoption of formal methods for dependable composition. The Web Services Business Process Execution Language (WS-BPEL) - an OASIS standard widely adopted in both academic and industrial environments - is considered as a touchstone for concrete composition languages, and an analysis of its ambiguous Recovery Framework specification is offered. To show the use of formal methods, a precise and unambiguous description of its (simplified) mechanisms is provided by means of a conservative extension of the π-calculus. This is intended as a well-known case study providing methodological arguments for the adoption of formal methods in software specification. Verification is not the main topic of the paper, but some hints are given.
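
    The abstract does not reproduce the extension itself; for orientation, the reduction semantics such a formalization builds on centers, in the standard π-calculus, on the communication rule (in LaTeX):

        % An output on channel x synchronizes with an input on x,
        % substituting z for y in the continuation Q.
        \[
          \bar{x}\langle z \rangle.P \;\mid\; x(y).Q \;\longrightarrow\; P \;\mid\; Q\{z/y\}
        \]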

  15. Interoperability In The New Planetary Science Archive (PSA)

    NASA Astrophysics Data System (ADS)

    Rios, C.; Barbarisi, I.; Docasal, R.; Macfarlane, A. J.; Gonzalez, J.; Arviset, C.; Grotheer, E.; Besse, S.; Martinez, S.; Heather, D.; De Marchi, G.; Lim, T.; Fraga, D.; Barthelemy, M.

    2015-12-01

    As the world becomes increasingly interconnected, there is a greater need to provide interoperability with software and applications that are commonly used globally. For this purpose, the development of the new Planetary Science Archive (PSA) by the European Space Astronomy Centre (ESAC) Science Data Centre (ESDC) is focused on building a modern science archive that takes into account internationally recognised standards in order to provide access to the archive through tools from third parties, for example the NASA Planetary Data System (PDS), the VESPA project from the Virtual Observatory of Paris, and other international institutions. The protocols and standards currently supported by the new Planetary Science Archive are the Planetary Data Access Protocol (PDAP), the EuroPlanet-Table Access Protocol (EPN-TAP) and Open Geospatial Consortium (OGC) standards. The architecture of the PSA includes a GeoServer (an open-source map server), the goal of which is to support use cases such as the distribution of search results and the sharing and processing of data through an OGC Web Feature Service (WFS) and a Web Map Service (WMS). This server also allows the retrieval of requested information in several standard output formats such as Keyhole Markup Language (KML), Geography Markup Language (GML), shapefile, JavaScript Object Notation (JSON) and Comma Separated Values (CSV), among others. The provision of these various output formats enables end users to transfer retrieved data into popular applications such as Google Mars and NASA World Wind.
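
    As a sketch of the kind of standards-based access this enables, here is a Python request for features as GeoJSON from a hypothetical PSA GeoServer endpoint (the feature type name is a placeholder):

        import requests

        URL = "https://psa.example.int/geoserver/wfs"  # hypothetical endpoint

        params = {
            "service": "WFS",
            "version": "2.0.0",
            "request": "GetFeature",
            "typeNames": "psa:footprints",        # placeholder feature type
            "outputFormat": "application/json",   # GeoJSON, one of the listed formats
            "count": 10,
        }
        features = requests.get(URL, params=params, timeout=30).json()
        print(len(features["features"]))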

  16. RSAT 2018: regulatory sequence analysis tools 20th anniversary.

    PubMed

    Nguyen, Nga Thi Thuy; Contreras-Moreira, Bruno; Castro-Mondragon, Jaime A; Santana-Garcia, Walter; Ossio, Raul; Robles-Espinoza, Carla Daniela; Bahin, Mathieu; Collombet, Samuel; Vincens, Pierre; Thieffry, Denis; van Helden, Jacques; Medina-Rivera, Alejandra; Thomas-Chollier, Morgane

    2018-05-02

    RSAT (Regulatory Sequence Analysis Tools) is a suite of modular tools for the detection and the analysis of cis-regulatory elements in genome sequences. Its main applications are (i) motif discovery, including from genome-wide datasets like ChIP-seq/ATAC-seq, (ii) motif scanning, (iii) motif analysis (quality assessment, comparisons and clustering), (iv) analysis of regulatory variations, (v) comparative genomics. Six public servers jointly support 10 000 genomes from all kingdoms. Six novel or refactored programs have been added since the 2015 NAR Web Software Issue, including updated programs to analyse regulatory variants (retrieve-variation-seq, variation-scan, convert-variations), along with tools to extract sequences from a list of coordinates (retrieve-seq-bed), to select motifs from motif collections (retrieve-matrix), and to extract orthologs based on Ensembl Compara (get-orthologs-compara). Three use cases illustrate the integration of new and refactored tools to the suite. This Anniversary update gives a 20-year perspective on the software suite. RSAT is well-documented and available through Web sites, SOAP/WSDL (Simple Object Access Protocol/Web Services Description Language) web services, virtual machines and stand-alone programs at http://www.rsat.eu/.

  17. Implementation of an EPN-TAP Service to Improve Accessibility to the Planetary Science Archive

    NASA Astrophysics Data System (ADS)

    Macfarlane, A.; Barabarisi, I.; Docasal, R.; Rios, C.; Saiz, J.; Vallejo, F.; Martinez, S.; Arviset, C.; Besse, S.; Vallat, C.

    2017-09-01

    The re-engineered PSA focuses on improved access to and searchability of ESA's planetary science data. In addition to the new web interface released in January 2017, the new PSA supports several common planetary protocols in order to increase the visibility of the data and the ways in which they may be queried and retrieved. Work is ongoing to provide an EPN-TAP service covering as wide a range of parameters as possible, to facilitate the discovery of scientific data and the interoperability of the archive.
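
    A sketch of what an EPN-TAP query could look like from Python using pyvo (the service URL and schema name are placeholders; EPN-TAP services expose a table conventionally named <schema>.epn_core):

        import pyvo  # pip install pyvo

        service = pyvo.dal.TAPService("https://psa.example.int/tap")  # hypothetical

        results = service.search(
            "SELECT TOP 10 granule_uid, target_name, time_min "
            "FROM psa.epn_core WHERE target_name = 'Mars'"
        )
        for row in results:
            print(row["granule_uid"])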

  18. Interoperable Data Access Services for NOAA IOOS

    NASA Astrophysics Data System (ADS)

    de La Beaujardiere, J.

    2008-12-01

    The Integrated Ocean Observing System (IOOS) is intended to enhance our ability to collect, deliver, and use ocean information. The goal is to support research and decision-making by providing data on our open oceans, coastal waters, and Great Lakes in the formats, rates, and scales required by scientists, managers, businesses, governments, and the public. The US National Oceanic and Atmospheric Administration (NOAA) is the lead agency for IOOS. NOAA's IOOS office supports the development of regional coastal observing capability and promotes data management efforts to increase data accessibility. Geospatial web services have been established at NOAA data providers including the National Data Buoy Center (NDBC), the Center for Operational Oceanographic Products and Services (CO-OPS), and CoastWatch, and at regional data provider sites. Services established include Open-source Project for a Network Data Access Protocol (OpenDAP), Open Geospatial Consortium (OGC) Sensor Observation Service (SOS), and OGC Web Coverage Service (WCS). These services provide integrated access to data holdings that have been aggregated at each center from multiple sources. We wish to collaborate with other groups to improve our service offerings to maximize interoperability and enhance cross-provider data integration, and to share common service components such as registries, catalogs, data conversion, and gateways. This paper will discuss the current status of NOAA's IOOS efforts and possible next steps.
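
    As an illustration of the services listed above, a Sensor Observation Service request can be composed as a plain key-value URL; here is a Python sketch against a hypothetical SOS 1.0 endpoint (offering and property names are placeholders):

        import requests

        URL = "https://sos.example.gov/sos"  # hypothetical endpoint

        params = {
            "service": "SOS",
            "version": "1.0.0",
            "request": "GetObservation",
            "offering": "network-all",                    # placeholder offering
            "observedProperty": "sea_water_temperature",  # placeholder property
            "responseFormat": 'text/xml;subtype="om/1.0.0"',
        }
        resp = requests.get(URL, params=params, timeout=30)
        print(resp.text[:300])  # beginning of the O&M response document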

  19. Collaborative Science Using Web Services and the SciFlo Grid Dataflow Engine

    NASA Astrophysics Data System (ADS)

    Wilson, B. D.; Manipon, G.; Xing, Z.; Yunck, T.

    2006-12-01

    The General Earth Science Investigation Suite (GENESIS) project is a NASA-sponsored partnership between the Jet Propulsion Laboratory, academia, and NASA data centers to develop a new suite of Web Services tools to facilitate multi-sensor investigations in Earth System Science. The goal of GENESIS is to enable large-scale, multi-instrument atmospheric science using combined datasets from the AIRS, MODIS, MISR, and GPS sensors. Investigations include cross-comparison of spaceborne climate sensors, cloud spectral analysis, study of upper troposphere-stratosphere water transport, study of the aerosol indirect cloud effect, and global climate model validation. The challenges are to bring together very large datasets, reformat and understand the individual instrument retrievals, co-register or re-grid the retrieved physical parameters, perform computationally-intensive data fusion and data mining operations, and accumulate complex statistics over months to years of data. To meet these challenges, we have developed a Grid computing and dataflow framework, named SciFlo, in which we are deploying a set of versatile and reusable operators for data access, subsetting, registration, mining, fusion, compression, and advanced statistical analysis. SciFlo leverages remote Web Services, called via Simple Object Access Protocol (SOAP) or REST (one-line) URLs, and the Grid Computing standards (WS-* & Globus Alliance toolkits), and enables scientists to do multi-instrument Earth Science by assembling reusable Web Services and native executables into a distributed computing flow (tree of operators). The SciFlo client & server engines optimize the execution of such distributed data flows and allow the user to transparently find and use datasets and operators without worrying about the actual location of the Grid resources. In particular, SciFlo exploits the wealth of datasets accessible by OpenGIS Consortium (OGC) Web Mapping Servers & Web Coverage Servers (WMS/WCS), and by Open Data Access Protocol (OpenDAP) servers. The scientist injects a distributed computation into the Grid by simply filling out an HTML form or directly authoring the underlying XML dataflow document, and results are returned directly to the scientist's desktop. Once an analysis has been specified for a chunk or day of data, it can be easily repeated with different control parameters or over months of data. Recently, the Earth Science Information Partners (ESIP) Federation sponsored a collaborative activity in which several ESIP members advertised their respective WMS/WCS and SOAP services, developed some collaborative science scenarios for atmospheric and aerosol science, and then choreographed services from multiple groups into demonstration workflows using the SciFlo engine and a Business Process Execution Language (BPEL) workflow engine. For several scenarios, the same collaborative workflow was executed in three ways: using hand-coded scripts, by executing a SciFlo document, and by executing a BPEL workflow document. We will discuss the lessons learned from this activity, the need for standardized interfaces (like WMS/WCS), the difficulty in agreeing on even simple XML formats and interfaces, and further collaborations that are being pursued.

  20. Towards Semantic Web Services on Large, Multi-Dimensional Coverages

    NASA Astrophysics Data System (ADS)

    Baumann, P.

    2009-04-01

    Observed and simulated data in the Earth Sciences often come as coverages, the general term for space-time varying phenomena as set forth by standardization bodies like the Open Geospatial Consortium (OGC) and ISO. Among such data are 1-D time series, 2-D surface data, 3-D surface data time series as well as x/y/z geophysical and oceanographic data, and 4-D metocean simulation results. With increasing dimensionality the data sizes grow exponentially, up to Petabyte object sizes. Open standards for exploiting coverage archives over the Web are available to a varying extent. The OGC Web Coverage Service (WCS) standard defines basic extraction operations: spatio-temporal and band subsetting, scaling, reprojection, and data format encoding of the result - a simple interoperable interface for coverage access. More processing functionality is available with products like Matlab, Grid-type interfaces, and the OGC Web Processing Service (WPS). However, these often lack properties known to be advantageous in databases: declarativeness (describe the result rather than the algorithm), safety in evaluation (no request can keep a server busy indefinitely), and optimizability (the server can rearrange the request so as to produce the same result faster). WPS defines a geo-enabled SOAP interface for remote procedure calls. This makes it possible to webify any program, but does not allow for semantic interoperability: a function is identified only by its name and parameters, while the semantics is encoded in the (only human-readable) title and abstract. Hence, another desirable property is missing, namely an explicit semantics which allows for machine-machine communication and reasoning à la Semantic Web. The OGC Web Coverage Processing Service (WCPS) language, which was adopted as an international standard by OGC in December 2008, defines a flexible interface for the navigation, extraction, and ad-hoc analysis of large, multi-dimensional raster coverages. It is abstract in that it does not anticipate any particular protocol. One such protocol is given by the OGC Web Coverage Service (WCS) Processing Extension standard, which ties WCPS into WCS. Another protocol, which makes WCPS an OGC Web Processing Service (WPS) Profile, is under preparation. Thereby, WCPS bridges WCS and WPS. The conceptual model of WCPS relies on the coverage model of WCS, which in turn is based on ISO 19123. WCS currently addresses raster-type coverages, where a coverage is seen as a function mapping points from a spatio-temporal extent (its domain) into values of some cell type (its range). A retrievable coverage has an associated identifier, the CRSs supported, and, for each range field (aka band, channel), the applicable interpolation methods. The WCPS language offers access to one or several such coverages via a functional, side-effect-free language. The following example, which derives the NDVI (Normalized Difference Vegetation Index) from given coverages C1, C2, and C3 within the regions identified by the binary mask R, illustrates the language concept:

        for c in ( C1, C2, C3 ), r in ( R )
        return encode( (char) (c.nir - c.red) / (c.nir + c.red), "HDF-EOS" )

    The result is a list of three HDF-EOS encoded images containing masked NDVI values. Note that the same request can operate on coverages of any dimensionality. The expressive power of WCPS includes statistics, image, and signal processing, up to recursion, to maintain safe evaluation.
As both the syntax and the semantics of any WCPS expression are well known, the language is Semantic Web-ready: clients can construct WCPS requests on the fly, servers can optimize such requests (this has been investigated extensively with the rasdaman raster database system) and automatically distribute them for processing in a WCPS-enabled computing cloud. The WCPS Reference Implementation is being finalized now that the standard is stable; it will be released as open source once ready. Among the future tasks is extending WCPS to general meshes, in synchronization with the WCS standard. In this talk WCPS is presented in the context of OGC standardization. The author is co-chair of OGC's WCS Working Group (WG) and Coverages WG.

  1. ERDDAP - An Easier Way for Diverse Clients to Access Scientific Data From Diverse Sources

    NASA Astrophysics Data System (ADS)

    Mendelssohn, R.; Simons, R. A.

    2008-12-01

    ERDDAP is a new open-source, web-based service that aggregates data from other web services: OPeNDAP grid servers (THREDDS), OPeNDAP sequence servers (Dapper), NOS SOAP service, SOS (IOOS, OOStethys), microWFS, DiGIR (OBIS, BMDE). Regardless of the data source, ERDDAP makes all datasets available to clients via standard (and enhanced) DAP requests and makes some datasets accessible via WMS. A client's request also specifies the desired format for the results, e.g., .asc, .csv, .das, .dds, .dods, htmlTable, XHTML, .mat, netCDF, .kml, .png, or .pdf (formats more directly useful to clients). ERDDAP interprets a client request, requests the data from the data source (in the appropriate way), reformats the data source's response, and sends the result to the client. Thus ERDDAP makes data from diverse sources available to diverse clients via standardized interfaces. Clients don't have to install libraries to get data from ERDDAP because ERDDAP is RESTful and resource-oriented: a URL completely defines a data request and the URL can be used in any application that can send a URL and receive a file. This also makes it easy to use ERDDAP in mashups with other web services. ERDDAP could be extended to support other protocols. ERDDAP's hub and spoke architecture simplifies adding support for new types of data sources and new types of clients. ERDDAP includes metadata management support, catalog services, and services to make graphs and maps.
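
    Because a URL completely defines a request, a data query needs no client library at all; here is a Python sketch against a hypothetical ERDDAP installation and dataset ID, using the standard /erddap/griddap/<datasetID>.<fileType>?<query> pattern:

        import requests

        URL = ("https://erddap.example.gov/erddap/griddap/sstAnalysis.csv"
               "?sst[(2008-01-01T00:00:00Z)][(30):(45)][(-80):(-60)]")

        resp = requests.get(URL, timeout=60)
        resp.raise_for_status()
        print(resp.text.splitlines()[:3])  # header rows of the CSV response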

  2. Data Integration Using SOAP in the VSO

    NASA Astrophysics Data System (ADS)

    Tian, K. Q.; Bogart, R. S.; Davey, A.; Dimitoglou, G.; Gurman, J. B.; Hill, F.; Martens, P. C.; Wampler, S.

    2003-05-01

    The Virtual Solar Observatory (VSO) project has implemented a time interval search for all four participating data archives. The back-end query services are implemented as web services, and are accessible via SOAP. SOAP (Simple Object Access Protocol) defines an RPC (Remote Procedure Call) mechanism that employs HTTP as its transport and encodes the client-server interactions (request and response messages) in XML (eXtensible Markup Language) documents. In addition to its core function of identifying relevant datasets in the local archive, the SOAP server at each data provider acts as a "wrapper" that maps descriptions in an abstract data model to those in the provider-specific data model, and vice versa. It is in this way that VSO integrates heterogeneous data services and allows access to them using a common interface. Our experience with SOAP has been fruitful. It has proven to be a better alternative to traditional web access methods, namely POST and GET, because of its flexibility and interoperability.
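
    The mechanics of such a SOAP exchange are plain HTTP; here is a minimal Python sketch posting a SOAP 1.1 envelope with a hypothetical time-interval query (method name, namespace, and endpoint are placeholders, not the actual VSO contract):

        import requests

        envelope = """<?xml version="1.0" encoding="UTF-8"?>
        <soap:Envelope xmlns:soap="http://schemas.xmlsoap.org/soap/envelope/">
          <soap:Body>
            <Query xmlns="http://vso.example.org/api">
              <startTime>2003-01-01T00:00:00</startTime>
              <endTime>2003-01-02T00:00:00</endTime>
            </Query>
          </soap:Body>
        </soap:Envelope>"""

        resp = requests.post(
            "https://vso.example.org/soap",  # placeholder endpoint
            data=envelope.encode("utf-8"),
            headers={"Content-Type": "text/xml; charset=utf-8", "SOAPAction": '""'},
            timeout=30,
        )
        print(resp.status_code)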

  3. Web Coverage Service Challenges for NASA's Earth Science Data

    NASA Technical Reports Server (NTRS)

    Cantrell, Simon; Khan, Abdul; Lynnes, Christopher

    2017-01-01

    In an effort to ensure that data in NASA's Earth Observing System Data and Information System (EOSDIS) is available to a wide variety of users through the tools of their choice, NASA continues to focus on exposing data and services using standards-based protocols. Specifically, this work has focused recently on the Web Coverage Service (WCS). Experience has been gained in data delivery via GetCoverage requests, starting out with WCS v1.1.1. The pros and cons of both the version itself and different implementation approaches will be shared during this session. Additionally, due to limitations in WCS v1.1.1's ability to work with NASA's Earth science data, this session will also discuss the benefit of migrating to WCS 2.0.1 with EO-x to enrich this capability to meet a wide range of anticipated users' needs. This will enable subsetting and various types of data transformations to be performed on a variety of EOS data sets.
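
    For concreteness, a WCS 2.0.1 GetCoverage request in its key-value binding, sketched in Python with a hypothetical endpoint and coverage identifier:

        import requests

        URL = "https://wcs.example.gov/wcs"  # hypothetical endpoint

        params = {
            "service": "WCS",
            "version": "2.0.1",
            "request": "GetCoverage",
            "coverageId": "example_coverage",             # placeholder identifier
            "subset": ["Lat(30,45)", "Long(-120,-100)"],  # repeated subset keys
            "format": "image/tiff",
        }
        resp = requests.get(URL, params=params, timeout=60)
        resp.raise_for_status()
        with open("subset.tif", "wb") as f:
            f.write(resp.content)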

  4. Internet protocol television for personalized home-based health information: design-based research on a diabetes education system.

    PubMed

    Gray, Kathleen Mary; Clarke, Ken; Alzougool, Basil; Hines, Carolyn; Tidhar, Gil; Frukhtman, Feodor

    2014-03-10

    The use of Internet protocol television (IPTV) as a channel for consumer health information is a relatively under-explored area of medical Internet research. IPTV may afford new opportunities for health care service providers to provide health information and for consumers, patients, and caretakers to access health information. The technologies of Web 2.0 add a new and even less explored dimension to IPTV's potential. Our research explored an application of Web 2.0 integrated with IPTV for personalized home-based health information in diabetes education, particularly for people with diabetes who are not strong computer and Internet users, and thus may miss out on Web-based resources. We wanted to establish whether this system could enable diabetes educators to deliver personalized health information directly to people with diabetes in their homes; and whether this system could encourage people with diabetes who make little use of Web-based health information to build their health literacy via the interface of a home television screen and remote control. This project was undertaken as design-based research in two stages. Stage 1 comprised a feasibility study into the technical work required to integrate an existing Web 2.0 platform with an existing IPTV system, populated with content and implemented for user trials in a laboratory setting. Stage 2 comprised an evaluation of the system by consumers and providers of diabetes information. The project succeeded in developing a Web 2.0 IPTV system for people with diabetes and low literacies and their diabetes educators. The performance of the system in the laboratory setting gave them the confidence to engage seriously in thinking about the actual and potential features and benefits of a more widely-implemented system. In their feedback they pointed out a range of critical usability and usefulness issues related to Web 2.0 affordances and learning fundamentals. They also described their experiences with the system in terms that bode well for its educational potential, and they suggested many constructive improvements to the system. The integration of Web 2.0 and IPTV merits further technical development, business modeling, and health services and health outcomes research, as a solution to extend the reach and scale of home-based health care.

  5. Validating the usability of an interactive Earth Observation based web service for landslide investigation

    NASA Astrophysics Data System (ADS)

    Albrecht, Florian; Weinke, Elisabeth; Eisank, Clemens; Vecchiotti, Filippo; Hölbling, Daniel; Friedl, Barbara; Kociu, Arben

    2017-04-01

    Regional authorities and infrastructure maintainers in almost all mountainous regions of the Earth need detailed and up-to-date landslide inventories for hazard and risk management. Landslide inventories are usually compiled through ground surveys and manual image interpretation following landslide-triggering events. We developed a web service that uses Earth Observation (EO) data to support the mapping and monitoring tasks for improving the collection of landslide information. The planned validation of the EO-based web service covers not only the analysis of the achievable landslide information quality but also the usability and user-friendliness of the user interface. The underlying validation criteria are based on the user requirements and the defined tasks and aims in the work description of the FFG project Land@Slide (EO-based landslide mapping: from methodological developments to automated web-based information delivery). The service will be validated in collaboration with stakeholders, decision makers and experts. Users are requested to test the web service functionality and give feedback with a web-based questionnaire by following the subsequently described workflow. The users operate the web service via the responsive user interface and can extract landslide information from EO data. They compare it to reference data for quality assessment, for monitoring changes and for assessing landslide-affected infrastructure. An overview page lets the user explore a list of example projects with resulting landslide maps and mapping workflow descriptions. The example projects include mapped landslides in several test areas in Austria and Northern Italy. Landslides were extracted from high resolution (HR) and very high resolution (VHR) satellite imagery, such as Landsat, Sentinel-2, SPOT-5, WorldView-2/3 or Pléiades. The user can create his/her own project by selecting available satellite imagery or by uploading new data. Subsequently, a new landslide extraction workflow can be initiated through the functionality that the web service provides: (1) a segmentation of the image into spectrally homogeneous objects, (2) a classification of the objects into landslide and non-landslide areas and (3) an editing tool for the manual refinement of extracted landslide boundaries. In addition, the user interface of the web service provides tools that enable the user (4) to perform monitoring that identifies changes between landslide maps from different points in time, (5) to validate the landslide maps by comparing them to reference data, and (6) to assess affected infrastructure by comparing the landslide maps to respective infrastructure data. After exploring the web service functionality, the users are asked to fill in the online validation protocol, in the form of a questionnaire, in order to provide their feedback. Concerning usability, we evaluate how intuitively the web service functionality can be operated, how well the integrated help information guides the users, and what kind of background information, e.g. remote sensing concepts and theory, is necessary for a practitioner to fully exploit the value of EO data. The feedback will be used to improve the user interface and to implement additional functionality.

  6. j5 DNA assembly design automation.

    PubMed

    Hillson, Nathan J

    2014-01-01

    Modern standardized methodologies, described in detail in the previous chapters of this book, have enabled the software-automated design of optimized DNA construction protocols. This chapter describes how to design (combinatorial) scar-less DNA assembly protocols using the web-based software j5. j5 assists biomedical and biotechnological researchers in constructing DNA by automating the design of optimized protocols for flanking homology sequence as well as type IIS endonuclease-mediated DNA assembly methodologies. Unlike any other software tool available today, j5 designs scar-less combinatorial DNA assembly protocols, performs a cost-benefit analysis to identify which portions of an assembly process would be less expensive to outsource to a DNA synthesis service provider, and designs hierarchical DNA assembly strategies to mitigate anticipated poor assembly junction sequence performance. Software integrated with j5 adds significant value to the j5 design process through graphical user-interface enhancement and downstream liquid-handling robotic laboratory automation.

  7. Automated Data Quality Assurance using OGC Sensor Web Enablement Frameworks for Marine Observatories

    NASA Astrophysics Data System (ADS)

    Toma, Daniel; Bghiel, Ikram; del Rio, Joaquin; Hidalgo, Alberto; Carreras, Normandino; Manuel, Antoni

    2014-05-01

    Over the past years, environmental sensors have continuously improved by becoming smaller, cheaper, and more intelligent. Therefore, many sensor networks are increasingly deployed to monitor our environment. But due to the large number of sensor manufacturers and their accompanying protocols and data encodings, automated integration and data quality assurance of diverse sensors in an observing system is not straightforward, requiring development of data management code and tedious manual configuration. However, over the past few years it has been demonstrated that Open Geospatial Consortium (OGC) Sensor Web Enablement (SWE) frameworks can enable web services with fully described sensor systems, including data processing, sensor characteristics and quality control tests and results. So far, the SWE framework does not describe how to integrate sensors on the fly with minimal human intervention. The data management software which enables access to sensors, data processing and quality control tests has to be implemented, and the results have to be manually mapped to the SWE models. In this contribution, we describe a Sensor Plug & Play infrastructure for the Sensor Web by combining (1) the OGC PUCK protocol, a simple standard embedded instrument protocol to store on and retrieve directly from the devices the declarative description of sensor characteristics and quality control tests; (2) an automatic mechanism for data processing and quality control tests underlying the Sensor Web, the Sensor Interface Descriptor (SID) concept; and (3) a model for the declarative description of sensors which serves as a generic data management mechanism, designed as a profile and extension of OGC SWE's SensorML standard. We implement and evaluate our approach by applying it to the OBSEA observatory, demonstrating the ability to assess data quality for temperature, salinity, air pressure, and wind speed and direction observations off the coast of Garraf, in north-eastern Spain.
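
    The declarative quality-control idea is the key point here: valid ranges travel with the sensor's machine-readable description, so the QC code needs no per-sensor programming. The sketch below illustrates this under stated assumptions; the dictionary stands in for a SensorML/SID document retrieved via OGC PUCK, and all variable names and thresholds are invented.

        # Minimal sketch of declarative QC: limits come from the sensor description,
        # not from hard-coded per-sensor logic. All names and thresholds are illustrative.
        qc_description = {
            "sea_water_temperature": {"min": -2.0, "max": 35.0},    # degrees Celsius
            "air_pressure":          {"min": 850.0, "max": 1100.0}  # hPa
        }

        def apply_range_test(variable, value):
            """Flag an observation as 'good' or 'bad' using the declared valid range."""
            limits = qc_description[variable]
            return "good" if limits["min"] <= value <= limits["max"] else "bad"

        print(apply_range_test("sea_water_temperature", 18.4))  # -> good
        print(apply_range_test("air_pressure", 120.0))          # -> bad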

  8. A Query Language for Handling Big Observation Data Sets in the Sensor Web

    NASA Astrophysics Data System (ADS)

    Autermann, Christian; Stasch, Christoph; Jirka, Simon; Koppe, Roland

    2017-04-01

    The Sensor Web provides a framework for the standardized Web-based sharing of environmental observations and sensor metadata. While the issue of varying data formats and protocols is addressed by these standards, the fast-growing size of observational data is imposing new challenges for the application of these standards. Most solutions for handling big observational datasets currently focus on remote sensing applications, while big in-situ datasets relying on vector features still lack a solid approach. Conventional Sensor Web technologies may not be adequate, as the sheer size of the data transmitted and the amount of metadata accumulated may render traditional OGC Sensor Observation Services (SOS) unusable. Besides novel approaches to store and process observation data in place, e.g. by harnessing big data technologies from mainstream IT, the access layer has to be amended to utilize and integrate these large observational data archives into applications and to enable analysis. For this, an extension to the SOS will be discussed that establishes a query language to dynamically process and filter observations at the storage level, similar to the OGC Web Coverage Service (WCS) and its Web Coverage Processing Service (WCPS) extension. This will enable applications to request, for example, spatially or temporally aggregated data sets at the resolution they can display or require. The approach will be developed and implemented in cooperation with the Alfred Wegener Institute, Helmholtz Centre for Polar and Marine Research, whose catalogue of data comprises marine observations of physical, chemical and biological phenomena from a wide variety of sensors, including mobile platforms (such as research vessels, aircraft or underwater vehicles) and stationary ones (such as buoys or research stations). Observations are made with a high temporal resolution, and the resulting time series may span multiple decades.
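
    Since the query-language extension "will be discussed" rather than standardized, the following is only a speculative illustration of what such a request might look like: a standard SOS 2.0 GetObservation KVP request carrying a hypothetical server-side aggregation parameter, analogous to what WCPS offers for coverages. The endpoint and the aggregation syntax are assumptions, not part of the SOS 2.0 binding.

        # Hypothetical SOS request with a storage-level aggregation expression.
        import requests

        params = {
            "service": "SOS",
            "version": "2.0.0",
            "request": "GetObservation",
            "observedProperty": "sea_water_temperature",
            "temporalFilter": "om:phenomenonTime,2010-01-01/2015-12-31",
            # hypothetical extension: ask for daily means instead of raw samples
            "aggregation": "mean(P1D)",
        }
        resp = requests.get("https://example.org/sos/kvp", params=params, timeout=120)
        print(resp.status_code, len(resp.content))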

  9. A randomised controlled trial of an online menu planning intervention to improve childcare service adherence to dietary guidelines: a study protocol

    PubMed Central

    Yoong, Sze Lin; Grady, Alice; Wiggers, John; Flood, Victoria; Rissel, Chris; Finch, Meghan; Searles, Andrew; Salajan, David; O’Rourke, Ruby; Daly, Jaqueline; Gilham, Karen; Stacey, Fiona; Fielding, Alison; Pond, Nicole; Wyse, Rebecca; Seward, Kirsty; Wolfenden, Luke

    2017-01-01

    Introduction The implementation of dietary guidelines in childcare settings is recommended to improve child public health nutrition. However, foods provided in childcare services are not consistent with guidelines. The primary aim of the trial is to assess the effectiveness of a web-based menu planning intervention in increasing the mean number of food groups on childcare service menus that comply with dietary guidelines regarding food provision to children in care. Methods and analysis A parallel group randomised controlled trial will be undertaken with 54 childcare services that provide food to children within New South Wales, Australia. Services will be randomised to a 12-month intervention or usual care. The experimental group will receive access to a web-based menu planning and decision support tool and online resources. To support uptake of the web program, services will be provided with training and follow-up support. The primary outcome will be the number of food groups, out of 6 (vegetables, fruit, breads and cereals, meat, dairy and ‘discretionary’), on the menu that meet dietary guidelines (Caring for Children) across a 1-week menu at 12-month follow-up, assessed via menu review by dietitians or nutritionists blinded to group allocation. A nested evaluation of child dietary intake in care and child body mass index will be undertaken in up to 35 randomly selected childcare services and up to 420 children aged approximately 3–6 years. Ethics and dissemination Ethical approval has been provided by Hunter New England and University of Newcastle Human Research Ethics Committees. This research will provide high-quality evidence regarding the impact of a web-based menu planning intervention in facilitating the translation of dietary guidelines into childcare services. Trial findings will be disseminated widely through national and international peer-reviewed publications and conference presentations. Trial registration Prospectively registered with Australian New Zealand Clinical Trial Registry (ANZCTR) ACTRN12616000974404. PMID:28893755

  10. Mercury Shopping Cart Interface

    NASA Technical Reports Server (NTRS)

    Pfister, Robin; McMahon, Joe

    2006-01-01

    Mercury Shopping Cart Interface (MSCI) is a reusable component of the Power User Interface 5.0 (PUI) program described in another article. MSCI is a means of encapsulating the logic and information needed to describe an orderable item consistent with Mercury Shopping Cart service protocol. Designed to be used with Web-browser software, MSCI generates Hypertext Markup Language (HTML) pages on which ordering information can be entered. MSCI comprises two types of Practical Extraction and Report Language (PERL) modules: template modules and shopping-cart logic modules. Template modules generate HTML pages for entering the required ordering details and enable submission of the order via a Hypertext Transfer Protocol (HTTP) post. Shopping cart modules encapsulate the logic and data needed to describe an individual orderable item to the Mercury Shopping Cart service. These modules evaluate information entered by the user to determine whether it is sufficient for the Shopping Cart service to process the order. Once an order has been passed from MSCI to a deployed Mercury Shopping Cart server, there is no further interaction with the user.
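
    As a rough illustration of the two module roles described above (rendered here in Python rather than the original PERL), the sketch below validates an orderable item's details, as the shopping-cart logic modules do, and then submits the order via an HTTP POST, as the generated HTML forms do. Field names and the server URL are hypothetical.

        # Illustrative order validation and submission; names are invented for the sketch.
        import requests

        REQUIRED_FIELDS = ("dataset_id", "media_type", "delivery_address")

        def validate_order(order: dict) -> bool:
            """Check the user supplied everything the cart service needs."""
            return all(order.get(field) for field in REQUIRED_FIELDS)

        order = {"dataset_id": "EOS-123", "media_type": "FTP",
                 "delivery_address": "user@example.com"}
        if validate_order(order):
            requests.post("https://example.gov/mercury/cart", data=order, timeout=30)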

  11. Advances on Sensor Web for Internet of Things

    NASA Astrophysics Data System (ADS)

    Liang, S.; Bermudez, L. E.; Huang, C.; Jazayeri, M.; Khalafbeigi, T.

    2013-12-01

    'In much the same way that HTML and HTTP enabled the WWW, the Open Geospatial Consortium (OGC) Sensor Web Enablement (SWE), envisioned in 2001 [1], will allow sensor webs to become a reality.' Due to the large number of sensor manufacturers and differing accompanying protocols, integrating diverse sensors into observation systems is not a simple task. A coherent infrastructure is needed to treat sensors in an interoperable, platform-independent and uniform way. SWE standardizes web service interfaces, sensor descriptions and data encodings as building blocks for a Sensor Web. SWE standards are now mature specifications (version 2.0) with approved OGC compliance test suites and tens of independent implementations. Many earth and space science organizations and government agencies are using the SWE standards to publish and share their sensors and observations. While SWE has been demonstrated to be very effective for scientific sensors, its complexity and computational overhead may not be suitable for resource-constrained tiny sensors. In June 2012, a new OGC Standards Working Group (SWG) was formed, called the Sensor Web Interface for Internet of Things (SWE-IoT) SWG. This SWG focuses on developing one or more OGC standards for resource-constrained sensors and actuators (e.g., Internet of Things devices) while leveraging the existing OGC SWE standards. In the near future, billions to trillions of small sensors and actuators will be embedded in real-world objects and connected to the Internet, facilitating a concept called the Internet of Things (IoT). By populating our environment with real-world sensor-based devices, the IoT is opening the door to exciting possibilities for a variety of application domains, such as environmental monitoring, transportation and logistics, urban informatics, smart cities, as well as personal and social applications. The current SWE-IoT development aims at modeling the IoT components and defining a standard web service that makes the observations captured by IoT devices easily accessible and allows users to task the actuators on the IoT devices. The SWE-IoT model links things with sensors and reuses the OGC Observations and Measurements (O&M) model to link sensors with features of interest and observed properties. Unlike most SWE standards, SWE-IoT defines a RESTful web interface for users to perform CRUD (i.e., create, read, update, and delete) functions on resources, including Things, Sensors, Actuators, Observations, Tasks, etc. Inspired by the OASIS Open Data Protocol (OData), the SWE-IoT web service provides multi-faceted query, which means that users can query different entity collections and link from one entity to other related entities. This presentation will introduce the latest development of the OGC SWE-IoT standards. Potential applications and implications in Earth and Space science will also be discussed. [1] Mike Botts, Sensor Web Enablement White Paper, Open GIS Consortium, Inc. 2002
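
    The multi-faceted, OData-inspired query style can be illustrated with plain HTTP GETs that walk from one entity collection to related entities. The URL patterns and query options below follow the general OData convention but are illustrative assumptions, not text from the SWE-IoT candidate standard.

        # Illustrative RESTful navigation across linked entity collections.
        import requests

        BASE = "https://example.org/swe-iot/v1.0"  # hypothetical service root

        thing = requests.get(f"{BASE}/Things(42)").json()      # read one Thing
        obs = requests.get(f"{BASE}/Things(42)/Observations",  # navigate a link
                           params={"$orderby": "phenomenonTime desc",  # OData options
                                   "$top": 10}).json()
        for o in obs.get("value", []):
            print(o["phenomenonTime"], o["result"])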

  12. A price and performance comparison of three different storage architectures for data in cloud-based systems

    NASA Astrophysics Data System (ADS)

    Gallagher, J. H. R.; Jelenak, A.; Potter, N.; Fulker, D. W.; Habermann, T.

    2017-12-01

    Providing data services based on cloud computing technology that are equivalent to those developed for traditional computing and storage systems is critical for a successful migration to cloud-based architectures for data production, scientific analysis and storage. OPeNDAP Web-service capabilities (comprising the Data Access Protocol (DAP) specification plus open-source software for realizing DAP in servers and clients) are among the most widely deployed means for achieving data-as-a-service functionality in the Earth sciences. OPeNDAP services are especially common in traditional data center environments where servers offer access to datasets stored in (very large) file systems, and a preponderance of the source data for these services is stored in the Hierarchical Data Format Version 5 (HDF5). Three candidate architectures for serving NASA satellite Earth science HDF5 data via Hyrax running on Amazon Web Services (AWS) were developed and their performance examined for a set of representative use cases. Performance was assessed in terms of both runtime and incurred cost. The three architectures differ in how HDF5 files are stored in the Amazon Simple Storage Service (S3) and how the Hyrax server (as an EC2 instance) retrieves their data. Results for both serial and parallel access to HDF5 data in S3 will be presented. While the study focused on HDF5 data, OPeNDAP and the Hyrax data server, the architectures are generic and the analysis can be extrapolated to many different data formats, web APIs, and data servers.
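
    A primitive common to architectures that leave HDF5 files whole in S3 is the HTTP ranged GET, which lets the server fetch individual chunks rather than entire files. A minimal sketch with boto3 follows; the bucket, key and byte offsets are hypothetical, and a real server would derive the offsets from the HDF5 chunk index.

        # Illustrative ranged read of an HDF5 object in S3 via boto3.
        import boto3

        s3 = boto3.client("s3")
        resp = s3.get_object(
            Bucket="example-eos-archive",        # hypothetical bucket
            Key="granules/MOD11A1.A2016001.h5",  # hypothetical object key
            Range="bytes=4096-65535",            # fetch one chunk, not the whole file
        )
        chunk = resp["Body"].read()
        print(len(chunk), "bytes retrieved")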

  13. A reliable user authentication and key agreement scheme for Web-based Hospital-acquired Infection Surveillance Information System.

    PubMed

    Wu, Zhen-Yu; Tseng, Yi-Ju; Chung, Yufang; Chen, Yee-Chun; Lai, Feipei

    2012-08-01

    With the rapid development of the Internet, digitization and electronic processing are required in various applications of daily life. For hospital-acquired infection control, a Web-based Hospital-acquired Infection Surveillance System was implemented. Clinical data from different hospitals and systems were collected and analyzed. The hospital-acquired infection screening rules in this system utilized this information to detect different patterns of defined hospital-acquired infection. Moreover, these data were integrated into the user interface of a single entry point to assist physicians and healthcare providers in making decisions. Based on Service-Oriented Architecture, web-service techniques, which are suitable for integrating heterogeneous platforms, protocols, and applications, were used. In summary, this system simplifies the workflow of hospital infection control and improves healthcare quality. However, it is possible for attackers to intercept data in transmission or gain access to the user interface. To tackle illegal access and to prevent information from being stolen during transmission over the insecure Internet, a password-based user authentication scheme is proposed to preserve information integrity.
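
    The paper's specific authentication and key agreement scheme is not reproduced here; the sketch below only illustrates two generic ingredients such schemes typically combine: a salted password verifier stored server-side and a challenge-response exchange so the password never crosses the wire in the clear.

        # Generic password-verifier plus challenge-response sketch (not the paper's scheme).
        import hashlib, hmac, secrets

        def make_verifier(password: str):
            salt = secrets.token_bytes(16)
            return salt, hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)

        def respond(password: str, salt: bytes, challenge: bytes) -> bytes:
            key = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
            return hmac.new(key, challenge, "sha256").digest()

        # server side: store (salt, verifier), issue a fresh random challenge
        salt, verifier = make_verifier("s3cret")
        challenge = secrets.token_bytes(32)
        # client side: answer using the password-derived key
        answer = respond("s3cret", salt, challenge)
        # server verifies without the password ever being transmitted
        assert hmac.compare_digest(answer, hmac.new(verifier, challenge, "sha256").digest())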

  14. Connecting the Science User With Science Data and Services via the BCube Broker and Crawler

    NASA Astrophysics Data System (ADS)

    Khalsa, S. J. S.

    2015-12-01

    BCube, an NSF EarthCube Building Block project, aims to improve the efficiency and productivity of geoscientists who work across disciplinary boundaries by facilitating discovery and access of geoscience data, linking major data facilities, and creating a new paradigm for data advertising and discovery. In this presentation we describe the achievements to date of the BCube brokering and web crawling efforts, addressing focus areas and outcomes in both cyberinfrastructure and geosciences. We describe science scenarios illustrating the research in the ocean, climate, hydrology and polar domains that is being supported by the BCube Broker. By providing mediation between a diversity of standards and protocols, the broker makes it possible for the geoscientists working in these domains to discover and access high-value datasets via a cloud-based interface. An Accessor Development Kit (ADK) enables the community to develop modules that extend the capabilities of the broker. In addition, we describe the web crawling and semantic technologies that are being applied to find new geoscience-relevant datasets and services on the web.

  15. Global Federation of Data Services in Seismology: Extending the Concept to Interdisciplinary Science

    NASA Astrophysics Data System (ADS)

    Ahern, Tim; Trabant, Chad; Stults, Mike; VanFossen, Mick

    2016-04-01

    The International Federation of Digital Seismograph Networks (FDSN) sets international standards, formats, and access protocols for global seismology. Recently, the availability of an FDSN standard for web services has enabled the development of a federated model of data access. With a growing number of internationally distributed data centers supporting compatible web services, the task of federation is now fully realizable. The utility of this approach is already starting to bear fruit in seismology. This presentation will highlight the advances the seismological community has made in the past year towards federated access to seismological data including waveforms, earthquake event catalogs, and metadata describing seismic stations. It will include a discussion of an IRIS Federator as well as an emerging effort to develop an FDSN Federator that will allow seamless access to seismological information across multiple FDSN data centers. As part of the NSF EarthCube initiative as well as the US-European data coordination project (COOPEUS), IRIS and several partners, collectively called GeoWS, have been extending the concept of standard web services to other domains. Our primary partners include Lamont Doherty Earth Observatory (marine geophysics), Caltech (tectonic plate reconstructions), SDSC (hydrology), UNAVCO (geodesy), and Unidata (atmospheric sciences). Additionally, IRIS is working with partners at NOAA's National Centers for Environmental Information (NCEI), NEON, UTEP, WOVOdat, INTERMAGNET, Global Geodynamics Program, and the Ocean Observatory Initiative (OOI) to develop web services for those domains. The ultimate goal is to allow discovery, access, and utilization of cross-domain data sources. One of the significant outcomes of this effort is the development of a simple text and metadata representation for tabular data called GeoCSV, which allows straightforward interpretation of information from multiple domains by non-domain experts.
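
    As an illustration of the FDSN web-service interface that makes this federation possible, the snippet below queries the IRIS fdsnws-station service for station metadata in its plain-text form; the particular network and station are just an example query.

        # Example FDSN station-service query against the IRIS data center.
        import requests

        params = {
            "network": "IU", "station": "ANMO",
            "level": "station", "format": "text",
        }
        resp = requests.get("https://service.iris.edu/fdsnws/station/1/query",
                            params=params, timeout=60)
        print(resp.text.splitlines()[0])  # header row of the pipe-delimited table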

  16. Recent improvements in the NASA technical report server

    NASA Technical Reports Server (NTRS)

    Maa, Ming-Hokng; Nelson, Michael L.

    1995-01-01

    The NASA Technical Report Server (NTRS), a World Wide Web (WWW) report distribution service, has been modified to allow parallel database queries (significantly decreasing user access time, by an average factor of 2.3), access from clients behind firewalls and/or proxies that truncate excessively long Uniform Resource Locators (URLs), access to non-Wide Area Information Server (WAIS) databases, and compatibility with version 3 of the Z39.50 protocol.

  17. RESTFul based heterogeneous Geoprocessing workflow interoperation for Sensor Web Service

    NASA Astrophysics Data System (ADS)

    Yang, Chao; Chen, Nengcheng; Di, Liping

    2012-10-01

    Advanced sensors on board satellites offer detailed Earth observations. A workflow is one approach for designing, implementing and constructing a flexible and live link between these sensors' resources and users. It can coordinate, organize and aggregate distributed sensor Web services to meet the requirements of a complex Earth observation scenario. A RESTful-based workflow interoperation method is proposed to integrate heterogeneous workflows into an interoperable unit. The Atom protocols are applied to describe and manage workflow resources. The XML Process Definition Language (XPDL) and Business Process Execution Language (BPEL) workflow standards are applied to structure, respectively, a workflow that accesses sensor information and one that processes it. Then, a scenario for nitrogen dioxide (NO2) from a volcanic eruption is used to investigate the feasibility of the proposed method. The RESTful-based workflow interoperation system can describe, publish, discover, access and coordinate heterogeneous Geoprocessing workflows.

  18. Development of a telemedicine model for emerging countries: a case study on pediatric oncology in Brazil.

    PubMed

    Hira, A Y; Nebel de Mello, A; Faria, R A; Odone Filho, V; Lopes, R D; Zuffo, M K

    2006-01-01

    This article discusses a telemedicine model for emerging countries through the description of ONCONET, a telemedicine initiative applied to pediatric oncology in Brazil. The ONCONET core technology is a Web-based system that offers health information and other services specialized in childhood cancer, such as electronic medical records and cooperative protocols for complex treatments. All Web-based services are supported by a high-performance computing infrastructure based on clusters of commodity computers. The system was fully implemented with an open-source and free-software approach. Aspects of modeling, implementation and integration are covered. A model, both technologically and economically viable, was created through the research and development of in-house solutions adapted to the reality of emerging countries, with a focus on scalability both in the total number of patients and in the national infrastructure.

  19. Web Based Rapid Mapping of Disaster Areas using Satellite Images, Web Processing Service, Web Mapping Service, Frequency Based Change Detection Algorithm and J-iView

    NASA Astrophysics Data System (ADS)

    Bandibas, J. C.; Takarada, S.

    2013-12-01

    Timely identification of areas affected by natural disasters is very important for successful rescue and effective emergency relief efforts. This research focuses on the development of a cost-effective and efficient system for identifying areas affected by natural disasters and for distributing that information efficiently. The developed system is composed of three modules: the Web Processing Service (WPS), the Web Map Service (WMS) and the user interface provided by J-iView. WPS is an online system that provides computation, storage and data access services. In this study, the WPS module provides online access to the software implementing the developed frequency-based change detection algorithm for the identification of areas affected by natural disasters. It also sends requests to WMS servers to get the remotely sensed data to be used in the computation. WMS is a standard protocol that provides a simple HTTP interface for requesting geo-registered map images from one or more geospatial databases. In this research, the WMS component provides remote access to the satellite images which are used as inputs for land cover change detection. The user interface in this system is provided by J-iView, an online mapping system developed at the Geological Survey of Japan (GSJ). The three modules are seamlessly integrated into a single package using J-iView, which can rapidly generate a map of disaster areas that is instantaneously viewable online. The developed system was tested using ASTER images covering the areas damaged by the March 11, 2011 tsunami in northeastern Japan. The developed system efficiently generated a map showing areas devastated by the tsunami. Based on the initial results of the study, the developed system proved to be a useful tool for emergency workers to quickly identify areas affected by natural disasters.
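
    The WMS role in this system is to serve geo-registered imagery over plain HTTP; a GetMap request is just a parameterized GET, as sketched below with a hypothetical endpoint, layer name and bounding box.

        # Illustrative WMS 1.1.1 GetMap request (endpoint and layer are placeholders).
        import requests

        params = {
            "SERVICE": "WMS", "VERSION": "1.1.1", "REQUEST": "GetMap",
            "LAYERS": "aster_post_event",     # hypothetical layer
            "SRS": "EPSG:4326",
            "BBOX": "140.5,37.5,142.0,39.0",  # lon/lat of the area of interest
            "WIDTH": "1024", "HEIGHT": "1024",
            "FORMAT": "image/png",
        }
        resp = requests.get("https://example.org/wms", params=params, timeout=60)
        with open("map.png", "wb") as f:
            f.write(resp.content)  # geo-registered map image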

  20. On the Nets. Comparing Web Browsers: Mosaic, Cello, Netscape, WinWeb and InternetWorks Life.

    ERIC Educational Resources Information Center

    Notess, Greg R.

    1995-01-01

    World Wide Web browsers are compared by speed; setup; hypertext transfer protocol (HTTP) handling; management of file transfer protocol (FTP), telnet, gopher, and wide area information server (WAIS) access; bookmark options; and communication functions. Netscape has the most features, the fastest retrieval, and sophisticated bookmark capabilities. (JMV)

  1. A site of communication among enterprises for supporting occupational health and safety management system.

    PubMed

    Velonakis, E; Mantas, J; Mavrikakis, I

    2006-01-01

    Occupational health and safety management constitutes a field of increasing interest. Institutions, in cooperation with enterprises, make synchronized efforts to introduce quality management systems to this field. Computer networks can offer such services via TCP/IP, a reliable protocol for workflow management between enterprises and institutions. The design of such a network is based on several factors, in order to achieve defined criteria and connectivity with other networks. The network consists of nodes responsible for informing executives on occupational health and safety. A web database has been planned for inserting and searching documents and for answering and processing questionnaires. The submission of files to a server and the answers to questionnaires through the web help the experts make corrections and improvements to their activities. Based on the requirements of enterprises, we have constructed a web file server to which files are submitted so that users can retrieve the files they need. Access is limited to authorized users, and digital watermarks authenticate and protect digital objects. The Health and Safety Management System follows ISO 18001; its implementation through the web site is an aim. The whole application has been developed and implemented on a pilot basis for the health services sector. It is already installed within a hospital, supporting health and safety management among the hospital's different departments and allowing communication with other hospitals over the Web.

  2. Demonstrating NaradaBrokering as a Middleware Fabric for Grid-based Remote Visualization Services

    NASA Astrophysics Data System (ADS)

    Pallickara, S.; Erlebacher, G.; Yuen, D.; Fox, G.; Pierce, M.

    2003-12-01

    Remote Visualization Services (RVS) have tended to rely on approaches based on the client-server paradigm. Here we demonstrate our approach, based on a distributed brokering infrastructure, NaradaBrokering [1], which relies on distributed, asynchronous and loosely coupled interactions to meet the requirements and constraints of RVS. In our approach to RVS, services advertise their capabilities to the broker network, which manages these service advertisements. Among the services considered within our system are those that perform graphic transformations, those that mediate access to specialized datasets and, finally, those that manage the execution of specified tasks. There can be multiple instances of each of these services, and the system ensures that the load for a given service is distributed efficiently over these service instances. We will demonstrate implementations of the concepts that we outlined in the oral presentation. This involves two or more visualization servers interacting asynchronously with multiple clients through NaradaBrokering. The communicating entities may exchange SOAP [2] (Simple Object Access Protocol) messages. SOAP is a lightweight protocol for exchange of information in a decentralized, distributed environment. It is an XML-based protocol that consists of three parts: an envelope that describes what is in a message and how to process it, rules for expressing instances of application-defined data types, and a convention for representing remote invocation related operations. Furthermore, we will also demonstrate how clients can retrieve their results after prolonged disconnects or after any failures that might have taken place. The entities, services and clients alike, are not limited by the geographical distances that separate them. We are planning to test this system in the context of trans-Atlantic links separating the interacting entities. [1] The NaradaBrokering Project: http://www.naradabrokering.org [2] Newcomer, E., 2002, Understanding web services: XML, WSDL, SOAP, and UDDI, Addison Wesley Professional.
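
    For readers unfamiliar with the envelope/body structure described above, the sketch below posts a minimal SOAP 1.1 message using the Python standard library; the endpoint, SOAPAction and operation are invented for illustration and are not part of NaradaBrokering's actual service definitions.

        # Minimal SOAP 1.1 POST; endpoint and operation are hypothetical.
        import urllib.request

        envelope = """<?xml version="1.0"?>
        <soap:Envelope xmlns:soap="http://schemas.xmlsoap.org/soap/envelope/">
          <soap:Body>
            <RenderRequest xmlns="urn:example:rvs">
              <dataset>mantle_convection_run7</dataset>
            </RenderRequest>
          </soap:Body>
        </soap:Envelope>"""

        req = urllib.request.Request(
            "https://example.org/rvs",  # hypothetical visualization service
            data=envelope.encode("utf-8"),
            headers={"Content-Type": "text/xml; charset=utf-8",
                     "SOAPAction": "urn:example:rvs#Render"},
        )
        with urllib.request.urlopen(req, timeout=60) as resp:
            print(resp.read()[:200])  # start of the SOAP response envelope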

  3. A Comparison of Inquiry and Worked Example Web-Based Instruction Using Physlets

    ERIC Educational Resources Information Center

    Lee, Kevin M.; Nicoll, Gayle; Brooks, David W.

    2004-01-01

    This paper compares two protocols for web-based instruction using simulations in an introductory physics class. The Inquiry protocol allowed students to control input parameters while the Worked Example protocol did not. Students in the Worked Example group performed significantly higher on a common assessment. The ramifications of this study are…

  4. Addressing Participant Validity in a Small Internet Health Survey (The Restore Study): Protocol and Recommendations for Survey Response Validation

    PubMed Central

    Dewitt, James; Capistrant, Benjamin; Kohli, Nidhi; Mitteldorf, Darryl; Merengwa, Enyinnaya; West, William

    2018-01-01

    Background While deduplication and cross-validation protocols have been recommended for large Web-based studies, protocols for survey response validation of smaller studies have not been published. Objective This paper reports the challenges of survey validation inherent in a small Web-based health survey research. Methods The subject population was North American, gay and bisexual, prostate cancer survivors, who represent an under-researched, hidden, difficult-to-recruit, minority-within-a-minority population. In 2015-2016, advertising on a large Web-based cancer survivor support network, using email and social media, yielded 478 completed surveys. Results Our manual deduplication and cross-validation protocol identified 289 survey submissions (289/478, 60.4%) as likely spam, most stemming from advertising on social media. The basic components of this deduplication and validation protocol are detailed. An unexpected challenge encountered was invalid survey responses evolving across the study period. This necessitated the static detection protocol be augmented with a dynamic one. Conclusions Five recommendations for validation of Web-based samples, especially with smaller difficult-to-recruit populations, are detailed. PMID:29691203
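
    The published protocol's exact rules are in the paper itself; purely as an illustration of the kind of static cross-validation check it describes, the sketch below flags submissions that share an IP address or were completed implausibly fast. All field names and thresholds are assumptions.

        # Illustrative duplicate/spam flagging for survey submissions.
        from collections import Counter
        from datetime import timedelta

        def flag_suspect(responses):
            """responses: list of dicts with 'ip', 'started', 'finished' datetimes."""
            ip_counts = Counter(r["ip"] for r in responses)
            suspect = []
            for r in responses:
                too_fast = (r["finished"] - r["started"]) < timedelta(minutes=2)
                repeated = ip_counts[r["ip"]] > 1  # same IP submitted more than once
                if too_fast or repeated:
                    suspect.append(r)
            return suspect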

  5. Automating Visualization Service Generation with the WATT Compiler

    NASA Astrophysics Data System (ADS)

    Bollig, E. F.; Lyness, M. D.; Erlebacher, G.; Yuen, D. A.

    2007-12-01

    As tasks and workflows become increasingly complex, software developers are devoting increasing attention to automation tools. Among many examples, the Automator tool from Apple collects components of a workflow into a single script, with very little effort on the part of the user. Tasks are most often described as a series of instructions. The granularity of the tasks dictates the tools to use. Compilers translate fine-grained instructions to assembler code, while scripting languages (ruby, perl) are used to describe a series of tasks at a higher level. Compilers can also be viewed as transformational tools: a cross-compiler can translate executable code written on one computer to assembler code understood on another, while transformational tools can translate from one high-level language to another. We are interested in creating visualization web services automatically, starting from stand-alone VTK (Visualization Toolkit) code written in Tcl. To this end, using the OCaml programming language, we have developed a compiler that translates Tcl into C++, including all the stubs, classes and methods to interface with gSOAP, a C++ implementation of the SOAP 1.1/1.2 protocols. This compiler, referred to as the Web Automation and Translation Toolkit (WATT), is the first step towards automated creation of specialized visualization web services without input from the user. The WATT compiler seeks to automate all aspects of web service generation, including the transport layer, the division of labor and the details related to interface generation. The WATT compiler is part of ongoing efforts within the NSF-funded VLab consortium [1] to facilitate and automate time-consuming tasks for the science related to understanding planetary materials. Through examples of services produced by WATT for the VLab portal, we will illustrate features, limitations and the improvements necessary to achieve the ultimate goal of complete and transparent automation in the generation of web services. In particular, we will detail the generation of a charge density visualization service applicable to output from the quantum calculations of the VLab computation workflows, plus another service for mantle convection visualization. We also discuss WATT-LIVE [2], a web-based interface that allows users to interact with WATT. With WATT-LIVE, users submit Tcl code, retrieve its C++ translation with various files and scripts necessary to locally install the tailor-made web service, or launch the service for a limited session on our test server. This work is supported by NSF through the ITR grant NSF-0426867. [1] Virtual Laboratory for Earth and Planetary Materials, http://vlab.msi.umn.edu, September 2007. [2] WATT-LIVE website, http://vlab2.scs.fsu.edu/watt-live, September 2007.

  6. Cultural and Contextual Adaptation of an eHealth Intervention for Youth Receiving Services for First-Episode Psychosis: Adaptation Framework and Protocol for Horyzons-Canada Phase 1.

    PubMed

    Lal, Shalini; Gleeson, John; Malla, Ashok; Rivard, Lysanne; Joober, Ridha; Chandrasena, Ranjith; Alvarez-Jimenez, Mario

    2018-04-23

    eHealth interventions have the potential to address challenges related to access, service engagement, and continuity of care in the delivery of mental health services. However, the initial development and evaluation of such interventions can require substantive amounts of financial and human resource investments to bring them to scale. Therefore, it may be warranted to pay greater attention to policy, services, and research with respect to eHealth platforms that have the potential to be adapted for use across settings. Yet, limited attention has been placed on the methods and processes for adapting eHealth interventions to improve their applicability across cultural, geographical, and contextual boundaries. In this paper, we describe an adaptation framework and protocol to adapt an eHealth intervention designed to promote recovery and prevent relapses in youth receiving specialized services for first-episode psychosis. The Web-based platform, called Horyzons, was initially developed and tested in Australia and is now being prepared for evaluation in Canada. Service users and service providers from 2 specialized early intervention programs for first-episode psychosis located in different provinces will explore a beta-version of the eHealth intervention through focus group discussions and extended personal explorations to identify the need for, and content of contextual and cultural adaptations. An iterative consultation process will then take place with service providers and users to develop and assess platform adaptations in preparation for a pilot study with a live version of the platform. Data collection was completed in August 2017, and analysis and adaptation are in process. The first results of the study will be submitted for publication in 2018 and will provide preliminary insights into the acceptability of the Web-based platform (eg, perceived use and perceived usefulness) from service provider and service user perspectives. The project will also provide knowledge about the adaptations and process needed to prepare the platform for evaluation in Canada. This study contributes to an important gap in the literature pertaining to the specific principles, methods, and steps involved in adapting eHealth interventions for implementation and evaluation across a diverse range of cultural, geographical, and health care settings. ©Shalini Lal, John Gleeson, Ashok Malla, Lysanne Rivard, Ridha Joober, Ranjith Chandrasena, Mario Alvarez-Jimenez. Originally published in JMIR Research Protocols (http://www.researchprotocols.org), 23.04.2018.

  7. EuroPhenome and EMPReSS: online mouse phenotyping resource

    PubMed Central

    Mallon, Ann-Marie; Hancock, John M.

    2008-01-01

    EuroPhenome (http://www.europhenome.org) and EMPReSS (http://empress.har.mrc.ac.uk/) form an integrated resource to provide access to data and procedures for mouse phenotyping. EMPReSS describes 96 Standard Operating Procedures for mouse phenotyping. EuroPhenome contains data resulting from carrying out EMPReSS protocols on four inbred laboratory mouse strains. As well as web interfaces, both resources support web services to enable integration with other mouse phenotyping and functional genetics resources, and are committed to initiatives to improve integration of mouse phenotype databases. EuroPhenome will be the repository for a recently initiated effort to carry out large-scale phenotyping on a large number of knockout mouse lines (EUMODIC). PMID:17905814

  8. EuroPhenome and EMPReSS: online mouse phenotyping resource.

    PubMed

    Mallon, Ann-Marie; Blake, Andrew; Hancock, John M

    2008-01-01

    EuroPhenome (http://www.europhenome.org) and EMPReSS (http://empress.har.mrc.ac.uk/) form an integrated resource to provide access to data and procedures for mouse phenotyping. EMPReSS describes 96 Standard Operating Procedures for mouse phenotyping. EuroPhenome contains data resulting from carrying out EMPReSS protocols on four inbred laboratory mouse strains. As well as web interfaces, both resources support web services to enable integration with other mouse phenotyping and functional genetics resources, and are committed to initiatives to improve integration of mouse phenotype databases. EuroPhenome will be the repository for a recently initiated effort to carry out large-scale phenotyping on a large number of knockout mouse lines (EUMODIC).

  9. Access and accounting schemes of wireless broadband

    NASA Astrophysics Data System (ADS)

    Zhang, Jian; Huang, Benxiong; Wang, Yan; Yu, Xing

    2004-04-01

    In this paper, two wireless broadband access and accounting schemes are introduced; they differ in the client and access router modules. In one scheme, the Secure Shell (SSH) protocol is used in the access system, and the SSH server performs authentication based on private key cryptography. The advantages of this scheme are the security of the user's information and sophisticated access control. In the other scheme, the Secure Sockets Layer (SSL) protocol is used in the access system, employing public key cryptography. Nowadays, web browsers generally combine HTTP with the SSL protocol, and we use SSL to encrypt the data between the clients and the access router. The two schemes are the same in the RADIUS server part. Remote Authentication Dial In User Service (RADIUS), a client/server security protocol, is becoming the standard authentication and accounting protocol for Internet access; it is explained in a flow chart. In our scheme, the access router serves as the client to the RADIUS server.

  10. Development of a Web-Based Visualization Platform for Climate Research Using Google Earth

    NASA Technical Reports Server (NTRS)

    Sun, Xiaojuan; Shen, Suhung; Leptoukh, Gregory G.; Wang, Panxing; Di, Liping; Lu, Mingyue

    2011-01-01

    Recently, it has become easier to access climate data from satellites, ground measurements, and models from various data centers. However, searching, accessing, and processing heterogeneous data from different sources are very time-consuming tasks. There is a lack of a comprehensive visual platform to acquire distributed and heterogeneous scientific data and to render processed images from a single access point for climate studies. This paper documents the design and implementation of a Web-based visual, interoperable, and scalable platform that is able to access climatological fields from models, satellites, and ground stations from a number of data sources, using Google Earth (GE) as a common graphical interface. The development is based on the TCP/IP protocol and various open-source data-sharing technologies, such as OPeNDAP, GDS, Web Processing Service (WPS), and Web Mapping Service (WMS). The visualization capability of integrating various measurements into GE dramatically extends the awareness and visibility of scientific results. Using embedded geographic information in GE, the designed system improves our understanding of the relationships of different elements in a four-dimensional domain. The system enables easy and convenient synergistic research on a virtual platform for professionals and the general public, greatly advancing global data sharing and scientific research collaboration.

  11. A web-based institutional DICOM distribution system with the integration of the Clinical Trial Processor (CTP).

    PubMed

    Aryanto, K Y E; Broekema, A; Langenhuysen, R G A; Oudkerk, M; van Ooijen, P M A

    2015-05-01

    To develop and test a fast and easy rule-based web environment with optional de-identification of imaging data, to facilitate data distribution within a hospital environment. A web interface was built using Hypertext Preprocessor (PHP), an open-source scripting language for web development, and Java, with SQL Server handling the database. The system allows for selecting patient data and de-identifying them when necessary. Using the services provided by the RSNA Clinical Trial Processor (CTP), the selected images are pushed to the appropriate services using a protocol based on the module created for the associated task. Five pipelines, each performing a different task, were set up in the server. Over a 75-month period, more than 2,000,000 images were transferred and de-identified properly, while 20,000,000 images were moved from one node to another without de-identification. While maintaining a high level of security and stability, the proposed system is easy to set up, integrates well with our clinical and research practice, and provides a fast and accurate vendor-neutral process for transferring, de-identifying, and storing DICOM images. Its ability to run different de-identification processes in parallel pipelines is a major advantage in both clinical and research settings.

  12. Standard Port-Visit Cost Forecasting Model for U.S. Navy Husbanding Contracts

    DTIC Science & Technology

    2009-12-01

    ...Protocol (HTTP) server. 2. MySQL. An open-source database. 3. PHP. A common scripting language used for Web development. E. IMPLEMENTATION OF... Inc. (2009). MySQL Community Server (Version 5.1) [Software]. Available from http://dev.mysql.com/downloads/ The PHP Group (2009). PHP (Version... Logistics Services; MySQL: My Structured Query Language; NAVSUP: Navy Supply Systems Command; NC: Non-Contract Items; NPS: Naval Postgraduate...

  13. Extending the Kerberos Protocol for Distributed Data as a Service

    DTIC Science & Technology

    2012-09-20

    exported as a UIMA [11] PEAR file for deployment to IBM Content Analytics (ICA). A UIMA PEAR file is a deployable text analytics “pipeline” (analogous... to a web application packaged in a WAR file). ICA is a text analysis and search application that supports UIMA. The key entities targeted by NLP rules... workbench. [Online]. Available: https://www.ibm.com/developerworks/community/alphaworks/lrw/ [11] Apache UIMA. [Online]. Available: http

  14. Educational program for middle-level public health nurses to develop new health services regarding community health needs: protocol for a randomized controlled trial.

    PubMed

    Yoshioka-Maeda, Kyoko; Katayama, Takafumi; Shiomi, Misa; Hosoya, Noriko

    2018-01-01

    Developing health services is a key strategy for improving the community health care provided by public health nurses. However, an effective educational program for improving their skills in planning such services has not been developed. Our aim is to describe our program and its evaluation protocol for educating middle-level public health nurses to improve their skills in developing new health services that fulfil community health needs in Japan. In this randomized controlled trial, eligible participants in Japan will be randomly allocated to an intervention group and a wait-list control group. We will provide 8 modules of web-based learning for public health nurses from July to October 2018. To ensure fairness of educational opportunity, the wait-list group will participate in the same program as the intervention group after the collection of follow-up data from the intervention group. The primary outcomes will be evaluated using the scale of competency measurement of creativity for public health nurses at baseline and immediately after the intervention. Secondary outcomes will be the knowledge and performance of public health nurses regarding program development. This study will enable analysis of the effects of the educational program on public health nurses' competency to develop new health services that fulfil community health needs and enrich health care systems. We registered our study protocol with the University hospital Medical Information Network Clinical Trials Registry, approved by the International Committee of Medical Journal Editors (No. UMIN000032176, April 2018).

  15. Anon-Pass: Practical Anonymous Subscriptions

    PubMed Central

    Lee, Michael Z.; Dunn, Alan M.; Katz, Jonathan; Waters, Brent; Witchel, Emmett

    2014-01-01

    We present the design, security proof, and implementation of an anonymous subscription service. Users register for the service by providing some form of identity, which might or might not be linked to a real-world identity such as a credit card, a web login, or a public key. A user logs on to the system by presenting a credential derived from information received at registration. Each credential allows only a single login in any authentication window, or epoch. Logins are anonymous in the sense that the service cannot distinguish which user is logging in any better than random guessing. This implies unlinkability of a user across different logins. We find that a central tension in an anonymous subscription service is the service provider’s desire for a long epoch (to reduce server-side computation) versus users’ desire for a short epoch (so they can repeatedly “re-anonymize” their sessions). We balance this tension by having short epochs, but adding an efficient operation for clients who do not need unlinkability to cheaply re-authenticate themselves for the next time period. We measure performance of a research prototype of our protocol that allows an independent service to offer anonymous access to existing services. We implement a music service, an Android-based subway-pass application, and a web proxy, and show that adding anonymity adds minimal client latency and only requires 33 KB of server memory per active user. PMID:24504081

  16. Anon-Pass: Practical Anonymous Subscriptions.

    PubMed

    Lee, Michael Z; Dunn, Alan M; Katz, Jonathan; Waters, Brent; Witchel, Emmett

    2013-12-31

    We present the design, security proof, and implementation of an anonymous subscription service. Users register for the service by providing some form of identity, which might or might not be linked to a real-world identity such as a credit card, a web login, or a public key. A user logs on to the system by presenting a credential derived from information received at registration. Each credential allows only a single login in any authentication window, or epoch. Logins are anonymous in the sense that the service cannot distinguish which user is logging in any better than random guessing. This implies unlinkability of a user across different logins. We find that a central tension in an anonymous subscription service is the service provider's desire for a long epoch (to reduce server-side computation) versus users' desire for a short epoch (so they can repeatedly "re-anonymize" their sessions). We balance this tension by having short epochs, but adding an efficient operation for clients who do not need unlinkability to cheaply re-authenticate themselves for the next time period. We measure performance of a research prototype of our protocol that allows an independent service to offer anonymous access to existing services. We implement a music service, an Android-based subway-pass application, and a web proxy, and show that adding anonymity adds minimal client latency and only requires 33 KB of server memory per active user.

  17. Addressing Participant Validity in a Small Internet Health Survey (The Restore Study): Protocol and Recommendations for Survey Response Validation.

    PubMed

    Dewitt, James; Capistrant, Benjamin; Kohli, Nidhi; Rosser, B R Simon; Mitteldorf, Darryl; Merengwa, Enyinnaya; West, William

    2018-04-24

    While deduplication and cross-validation protocols have been recommended for large Web-based studies, protocols for survey response validation of smaller studies have not been published. This paper reports the challenges of survey validation inherent in a small Web-based health survey research. The subject population was North American, gay and bisexual, prostate cancer survivors, who represent an under-researched, hidden, difficult-to-recruit, minority-within-a-minority population. In 2015-2016, advertising on a large Web-based cancer survivor support network, using email and social media, yielded 478 completed surveys. Our manual deduplication and cross-validation protocol identified 289 survey submissions (289/478, 60.4%) as likely spam, most stemming from advertising on social media. The basic components of this deduplication and validation protocol are detailed. An unexpected challenge encountered was invalid survey responses evolving across the study period. This necessitated the static detection protocol be augmented with a dynamic one. Five recommendations for validation of Web-based samples, especially with smaller difficult-to-recruit populations, are detailed. ©James Dewitt, Benjamin Capistrant, Nidhi Kohli, B R Simon Rosser, Darryl Mitteldorf, Enyinnaya Merengwa, William West. Originally published in JMIR Research Protocols (http://www.researchprotocols.org), 24.04.2018.

  18. A service protocol for post-processing of medical images on the mobile device

    NASA Astrophysics Data System (ADS)

    He, Longjun; Ming, Xing; Xu, Lang; Liu, Qian

    2014-03-01

    With computing capability and display size growing, the mobile device has been used as a tool to help clinicians view patient information and medical images anywhere and anytime. However, transferring medical images with large data sizes from a picture archiving and communication system to a mobile client is difficult and time-consuming, since the wireless network is unstable and limited in bandwidth. Besides, limited by computing capability, memory, and battery endurance, it is hard to provide a satisfactory quality of experience for radiologists handling complex post-processing of medical images on the mobile device, such as real-time direct interactive three-dimensional visualization. In this work, remote rendering technology is employed to implement the post-processing of medical images instead of local rendering, and a service protocol is developed to standardize the communication between the render server and the mobile client. To enable mobile devices with different platforms to access post-processing of medical images, the Extensible Markup Language (XML) is used to describe this protocol, which contains four main parts: user authentication, medical image query/retrieval, 2D post-processing (e.g. window leveling, pixel value retrieval) and 3D post-processing (e.g. maximum intensity projection, multi-planar reconstruction, curved planar reformation and direct volume rendering). An instance was then implemented to verify the protocol. This instance can support mobile devices accessing post-processing services for medical images on the render server via a client application or a web page.
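
    As the protocol is XML-described, a request for one of its four parts might be serialized as below; the element names here are invented for the sketch, not taken from the paper.

        # Hypothetical 2D window-leveling request built with the standard library.
        import xml.etree.ElementTree as ET

        req = ET.Element("PostProcessingRequest", {"session": "abc123"})
        op = ET.SubElement(req, "WindowLevel")
        ET.SubElement(op, "Center").text = "40"   # soft-tissue window center (HU)
        ET.SubElement(op, "Width").text = "400"   # window width (HU)
        ET.SubElement(req, "Series").text = "1.2.840.113619.2.55.3"  # example series UID

        print(ET.tostring(req, encoding="unicode"))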

  19. Modern Data Center Services Supporting Science

    NASA Astrophysics Data System (ADS)

    Varner, J. D.; Cartwright, J.; McLean, S. J.; Boucher, J.; Neufeld, D.; LaRocque, J.; Fischman, D.; McQuinn, E.; Fugett, C.

    2011-12-01

    The National Oceanic and Atmospheric Administration's National Geophysical Data Center (NGDC) World Data Center for Geophysics and Marine Geology provides scientific stewardship, products and services for geophysical data, including bathymetry, gravity, magnetics, seismic reflection, data derived from sediment and rock samples, as well as historical natural hazards data (tsunamis, earthquakes, and volcanoes). Although NGDC has long made many of its datasets available through map and other web services, it has now developed a second generation of services to improve discovery of and access to data. These new services use off-the-shelf commercial and open source software, and take advantage of modern JavaScript and web application frameworks. Services are accessible using both RESTful and SOAP queries as well as Open Geospatial Consortium (OGC) standard protocols such as WMS, WFS, WCS, and KML. These new map services (implemented using ESRI ArcGIS Server) are finer-grained than their predecessors, feature improved cartography, and offer dramatic speed improvements through the use of map caches. Using standards-based interfaces allows customers to incorporate the services without having to coordinate with the provider. Providing fine-grained services increases flexibility for customers building custom applications. The Integrated Ocean and Coastal Mapping program and Coastal and Marine Spatial Planning program are two examples of national initiatives that require common data inventories from multiple sources and benefit from these modern data services. NGDC is also consuming its own services, providing a set of new browser-based mapping applications which allow the user to quickly visualize and search for data. One example is a new interactive mapping application to search and display information about historical natural hazards. NGDC continues to increase the amount of its data holdings that are accessible and is augmenting the capabilities with modern web technologies such as the Groovy language and the Grails framework. Data discovery is being improved and simplified by leveraging ISO metadata standards along with ESRI Geoportal Server.
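
    Because the services follow OGC standards, a client can be written from the specification alone. The sketch below issues a WMS 1.1.1 GetMap request with the standard parameters; the endpoint URL and layer name are placeholders rather than actual NGDC service addresses.

    ```python
    # Minimal OGC WMS GetMap request; URL and layer name are placeholders.
    import requests

    params = {
        "service": "WMS",
        "version": "1.1.1",
        "request": "GetMap",
        "layers": "etopo1_hillshade",   # hypothetical layer name
        "srs": "EPSG:4326",
        "bbox": "-180,-90,180,90",
        "width": 1024,
        "height": 512,
        "format": "image/png",
    }
    resp = requests.get("https://example.gov/arcgis/services/wms", params=params)
    resp.raise_for_status()
    with open("map.png", "wb") as f:
        f.write(resp.content)
    ```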

  20. A Proxy Design to Leverage the Interconnection of CoAP Wireless Sensor Networks with Web Applications

    PubMed Central

    Ludovici, Alessandro; Calveras, Anna

    2015-01-01

    In this paper, we present the design of a Constrained Application Protocol (CoAP) proxy able to interconnect Web applications based on Hypertext Transfer Protocol (HTTP) and WebSocket with CoAP based Wireless Sensor Networks. Sensor networks are commonly used to monitor and control physical objects or environments. Smart Cities represent applications of such a nature. Wireless Sensor Networks gather data from their surroundings and send them to a remote application. This data flow may be short or long lived. The traditional HTTP long-polling used by Web applications may not be adequate in long-term communications. To overcome this problem, we include the WebSocket protocol in the design of the CoAP proxy. We evaluate the performance of the CoAP proxy in terms of latency and memory consumption. The tests consider long and short-lived communications. In both cases, we evaluate the performance obtained by the CoAP proxy according to the use of WebSocket and HTTP long-polling. PMID:25585107
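
    The authors' proxy is not published here; as a rough illustration of the WebSocket-to-CoAP relay idea, the sketch below accepts CoAP URIs over a WebSocket connection and returns each resource's payload, assuming the third-party aiocoap and websockets Python libraries.

    ```python
    # Toy WebSocket-to-CoAP relay, not the paper's proxy implementation.
    import asyncio
    import websockets
    from aiocoap import Context, Message, GET

    async def handler(websocket):
        # websockets >= 11 passes only the connection object to the handler.
        context = await Context.create_client_context()
        async for uri in websocket:          # client sends one CoAP URI per message
            request = Message(code=GET, uri=uri)
            response = await context.request(request).response
            await websocket.send(response.payload)   # relay the CoAP payload

    async def main():
        async with websockets.serve(handler, "0.0.0.0", 8765):
            await asyncio.Future()           # serve forever

    asyncio.run(main())
    ```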

  1. Secure electronic commerce communication system based on CA

    NASA Astrophysics Data System (ADS)

    Chen, Deyun; Zhang, Junfeng; Pei, Shujun

    2001-07-01

    In this paper, we introduce the state of electronic commerce security, and then analyze the working process and security of the SSL protocol. Finally, we propose a secure electronic commerce communication system based on a CA. The system provides security services such as encryption, integrity, peer authentication and non-repudiation for the application-layer communication between browser clients and the web server. The system implements automatic key allocation and unified key management by setting up the CA in the network.
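
    The paper's system predates modern TLS, but the CA-anchored authentication it relies on can be illustrated with Python's standard library: the client validates the server's certificate chain against trusted CA certificates before any application data is exchanged.

    ```python
    # Minimal sketch of CA-based server authentication with TLS, the modern
    # successor to the SSL protocol analysed in the paper.
    import socket
    import ssl

    context = ssl.create_default_context()  # loads the trusted CA certificates
    with socket.create_connection(("example.com", 443)) as sock:
        with context.wrap_socket(sock, server_hostname="example.com") as tls:
            cert = tls.getpeercert()        # chain was validated against the CAs
            print(tls.version(), cert["subject"])
    ```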

  2. Current Experiences with Internet Telepathology and Possible Evolution in the Next Generation of Internet Services

    PubMed Central

    Della Mea, V.; Beltrami, C. A.

    2000-01-01

    The last five years' experience has clearly demonstrated the possible applications of the Internet for telepathology. They may be listed as follows: (a) teleconsultation via multimedia e‐mail; (b) teleconsultation via web‐based tools; (c) distant education by means of the World Wide Web; (d) virtual microscope management through Web and Java interfaces; (e) real‐time consultations through Internet‐based videoconferencing. Such applications have led to the recognition of some important limits of the Internet when dealing with telemedicine: (i) no guarantees on the quality of service (QoS); (ii) inadequate security and privacy; (iii) for some countries, low bandwidth and thus low responsiveness for real‐time applications. Currently, there are several innovations in the world of the Internet. Different initiatives have been aimed at improving the Internet protocols in order to provide quality of service, multimedia support, security and other advanced services, together with greater bandwidth. The forthcoming Internet improvements, although driven by electronic commerce, video on demand, and other commercial needs, are of real interest for telemedicine as well, because they address the limitations that currently slow down the use of the Internet. When such new services become available, telepathology applications may switch rapidly from research to daily practice. PMID:11339559

  3. A randomised controlled trial of an online menu planning intervention to improve childcare service adherence to dietary guidelines: a study protocol.

    PubMed

    Yoong, Sze Lin; Grady, Alice; Wiggers, John; Flood, Victoria; Rissel, Chris; Finch, Meghan; Searles, Andrew; Salajan, David; O'Rourke, Ruby; Daly, Jaqueline; Gilham, Karen; Stacey, Fiona; Fielding, Alison; Pond, Nicole; Wyse, Rebecca; Seward, Kirsty; Wolfenden, Luke

    2017-09-11

    The implementation of dietary guidelines in childcare settings is recommended to improve child public health nutrition. However, foods provided in childcare services are not consistent with guidelines. The primary aim of the trial is to assess the effectiveness of a web-based menu planning intervention in increasing the mean number of food groups on childcare service menus that comply with dietary guidelines regarding food provision to children in care. A parallel group randomised controlled trial will be undertaken with 54 childcare services that provide food to children within New South Wales, Australia. Services will be randomised to a 12-month intervention or usual care. The experimental group will receive access to a web-based menu planning and decision support tool and online resources. To support uptake of the web program, services will be provided with training and follow-up support. The primary outcome will be the number of food groups, out of 6 (vegetables, fruit, breads and cereals, meat, dairy and 'discretionary'), on the menu that meet dietary guidelines (Caring for Children) across a 1-week menu at 12-month follow-up, assessed via menu review by dietitians or nutritionists blinded to group allocation. A nested evaluation of child dietary intake in care and child body mass index will be undertaken in up to 35 randomly selected childcare services and up to 420 children aged approximately 3-6 years. Ethical approval has been provided by Hunter New England and University of Newcastle Human Research Ethics Committees. This research will provide high-quality evidence regarding the impact of a web-based menu planning intervention in facilitating the translation of dietary guidelines into childcare services. Trial findings will be disseminated widely through national and international peer-reviewed publications and conference presentations. Prospectively registered with Australian New Zealand Clinical Trial Registry (ANZCTR) ACTRN12616000974404. © Article author(s) (or their employer(s) unless otherwise stated in the text of the article) 2017. All rights reserved. No commercial use is permitted unless otherwise expressly granted.

  4. [Use of Digital Health Records and "WebMovil" corporate service in the communication management of critical results of Microbiology, in the context of a primary health care area].

    PubMed

    Guzmán-González, A F; Navajas-Luque, F; de la Torre-Fernández, J

    2014-03-01

    The objective was to describe and evaluate a new communication protocol for reporting critical results, applied to Microbiology in a health area of Andalusia. The size and type of the critical values of Microbiology were analyzed for primary care patients. A new computerized reporting system was analyzed, in real time, through Diraya Digital Health Records, which integrates the analytical test module (MPA). The protocol is complemented, in collaboration with Information Technology (IT), by the Junta de Andalucía short message service (SMS) "WebMovil". The total number of notifications of critical results under the new protocol in 2012 was 817. The number of critical values for primary care was 570, of which 90 were for Microbiology. The most frequent notification was for isolation in stool culture (n = 51; 56.67%). The prevalence of the critical values of Microbiology in primary care was 0.45/100. The average notification time was 13 minutes. The success rate of notifications was 97.7%, and the withdrawal rate was 0%. In 99.93% of cases contact with the patient was documented, and in 98.55% the medical intervention was also confirmed. Communication by a computerized system linked to SMS technology showed a reduction in notification time and produced additional benefits, such as eliminating the risk of error that arises when the recipient does not repeat the information back to the laboratory. Furthermore, the use of SMS messages ensures that doctors on duty always receive the information immediately.

  5. Supporting NEESPI with Data Services - The SIB-ESS-C e-Infrastructure

    NASA Astrophysics Data System (ADS)

    Gerlach, R.; Schmullius, C.; Frotscher, K.

    2009-04-01

    Data discovery and retrieval is commonly among the first steps performed in any Earth science study. The way scientific data are searched and accessed has changed significantly over the past two decades. In particular, the development of the World Wide Web and the technologies that evolved alongside it shortened the data discovery and data exchange process. On the other hand, the amount of data collected and distributed by earth scientists has increased exponentially, requiring new concepts for data management and sharing. One concept to meet this demand is to build Spatial Data Infrastructures (SDI) or e-Infrastructures. These infrastructures usually contain components for data discovery allowing users (or other systems) to query a catalogue or registry and retrieve metadata on available data holdings and services. Data access is typically granted using FTP/HTTP protocols or, more advanced, through Web Services. A Service Oriented Architecture (SOA) approach based on standardized services enables users to benefit from interoperability among different systems and to integrate distributed services into their applications. The Siberian Earth System Science Cluster (SIB-ESS-C) being established at the University of Jena (Germany) is such a spatial data infrastructure, following these principles and implementing standards published by the Open Geospatial Consortium (OGC) and the International Organization for Standardization (ISO). The prime objective is to provide researchers with a focus on Siberia with the technical means for data discovery, data access, data publication and data analysis. The region of interest covers the entire Asian part of the Russian Federation from the Ural to the Pacific Ocean, including the Ob, Lena and Yenissey river catchments. The aim of SIB-ESS-C is to provide a comprehensive set of data products for Earth system science in this region. Although SIB-ESS-C will be equipped with processing capabilities for in-house data generation (mainly from Earth Observation), the current data holdings of SIB-ESS-C have been created in collaboration with a number of partners in previous and ongoing research projects (e.g. SIBERIA-II, SibFORD, IRIS). At the current development stage the SIB-ESS-C system comprises a federated metadata catalogue accessible through the SIB-ESS-C Web Portal or from any OGC-CSW compliant client. Due to full interoperability with other metadata catalogues, users of the SIB-ESS-C Web Portal are able to search external metadata repositories. The Web Portal also contains a simple visualization component, which will be extended to a comprehensive visualization and analysis tool in the near future. All data products are already accessible as a Web Mapping Service and will soon be made available as Web Feature and Web Coverage Services, allowing users to incorporate the data directly into their applications. The SIB-ESS-C infrastructure will be further developed as one node in a network of similar systems (e.g. NASA GIOVANNI) in the NEESPI region.
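
    As an illustration of the discovery step, the sketch below queries an OGC CSW catalogue with the OWSLib Python library; the endpoint URL and search term are placeholders, not the actual SIB-ESS-C addresses.

    ```python
    # Discovery against an OGC CSW catalogue with OWSLib; the endpoint URL
    # is a placeholder, not the actual SIB-ESS-C catalogue address.
    from owslib.csw import CatalogueServiceWeb
    from owslib.fes import PropertyIsLike

    csw = CatalogueServiceWeb("https://example.org/csw")   # hypothetical endpoint
    query = PropertyIsLike("csw:AnyText", "%Siberia%")
    csw.getrecords2(constraints=[query], maxrecords=10)
    for rid, record in csw.records.items():
        print(rid, record.title)
    ```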

  6. Automatic meta-data collection of STP observation data

    NASA Astrophysics Data System (ADS)

    Ishikura, S.; Kimura, E.; Murata, K.; Kubo, T.; Shinohara, I.

    2006-12-01

    For geoscience and STP (Solar-Terrestrial Physics) studies, various observations have been made by satellites and ground-based observatories. These data are saved and managed at many organizations, but there is no common procedure or rule for providing and/or sharing the data files. Researchers have had difficulty in searching and analyzing such different types of data distributed over the Internet. To support such cross-over analyses of observation data, we have developed the STARS (Solar-Terrestrial data Analysis and Reference System). The STARS consists of a client application (STARS-app), the meta-database (STARS-DB), the portal Web service (STARS-WS) and the download agent Web service (STARS DLAgent-WS). The STARS-DB includes directory information, access permissions, protocol information to retrieve data files, hierarchy information of mission/team/data and user information. Users of the STARS are able to download observation data files without knowing the locations of the files by using the STARS-DB. We have implemented the Portal-WS to retrieve meta-data from the meta-database. One reason we use a Web service is to overcome the variety of firewall restrictions, which have become stricter in recent years; it is difficult for the STARS client application to access the STARS-DB directly by sending SQL queries. Using the Web service, we succeeded in placing the STARS-DB behind the Portal-WS and preventing it from being exposed on the Internet. The STARS accesses the Portal-WS by sending a SOAP (Simple Object Access Protocol) request over HTTP, and meta-data are received as a SOAP response. The STARS DLAgent-WS provides clients with data files downloaded from data sites. The data files are provided using a variety of protocols (e.g., FTP, HTTP, FTPS and SFTP), individually selected at each site. The clients send a SOAP request with download request messages and receive observation data files as a SOAP response with a DIME attachment. By introducing the DLAgent-WS, we overcame the problem that the data management policies of each data site are independent. Another important issue is how to collect the meta-data of observation data files. So far, STARS-DB managers have added new records to the meta-database and updated them manually. We have had a lot of trouble maintaining the meta-database because observation data are generated every day and the number of data files increases explosively. We have therefore attempted to automate the collection of the meta-data. In this research, we adopted RSS 1.0 (RDF Site Summary) as the format to exchange meta-data in the STP field. RSS is an RDF vocabulary that provides a multipurpose extensible meta-data description and is suitable for syndication of meta-data. Most of the data in the present study are described in the CDF (Common Data Format), which is a self-describing data format. We have converted meta-information extracted from the CDF data files into RSS files. The program that generates the RSS files is executed on each data site's server once a day, and the RSS files provide information on new data files. The RSS files are collected by an RSS collection server once a day, and the meta-data are stored in the STARS-DB.
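
    As a rough sketch of the meta-data syndication step, the following generates a minimal RSS 1.0 (RDF) item describing one new data file using only the Python standard library; the element choices and file names are illustrative, and the actual STARS vocabulary may differ.

    ```python
    # Sketch of an RSS 1.0 (RDF) entry describing a new data file; the
    # field values are invented, and the real STARS vocabulary may differ.
    import xml.etree.ElementTree as ET

    RDF = "http://www.w3.org/1999/02/22-rdf-syntax-ns#"
    RSS = "http://purl.org/rss/1.0/"
    ET.register_namespace("rdf", RDF)
    ET.register_namespace("", RSS)

    root = ET.Element(f"{{{RDF}}}RDF")
    item = ET.SubElement(
        root, f"{{{RSS}}}item",
        {f"{{{RDF}}}about": "ftp://example.org/data/ge_k0_epd_20061201.cdf"})
    ET.SubElement(item, f"{{{RSS}}}title").text = "GEOTAIL EPD 2006-12-01"
    ET.SubElement(item, f"{{{RSS}}}link").text = (
        "ftp://example.org/data/ge_k0_epd_20061201.cdf")

    print(ET.tostring(root, encoding="unicode"))
    ```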

  7. VO-Dance: an IVOA tool to easily publish data into the VO, and its extension to planetology requests

    NASA Astrophysics Data System (ADS)

    Smareglia, R.; Capria, M. T.; Molinaro, M.

    2012-09-01

    Data publishing through self-standing portals can be complemented by VO resource publishing, i.e., astronomical resources deployed through VO-compliant services. Since the IVOA (International Virtual Observatory Alliance) provides many protocols and standards for the various data flavors (images, spectra, catalogues ...), and since the data center's goal is to grow the number of hosted archives and provided services, the idea arose to find a way to easily deploy and maintain VO resources. VO-Dance is a Java web application developed at IA2 that addresses this idea by creating, in a dynamical way, VO resources out of database tables or views. It is structured to be potentially DBMS- and platform-independent and consists of three main tokens, including an internal DB to store resource descriptions and data model metadata, and a RESTful web application to deploy the resources to the VO community. Its extension to planetology requests is under study to make the best use of INAF software development effort and archive efficiency.

  8. Google in a Quantum Network

    PubMed Central

    Paparo, G. D.; Martin-Delgado, M. A.

    2012-01-01

    We introduce the characterization of a class of quantum PageRank algorithms in a scenario in which some kind of quantum network is realizable out of the current classical internet web, but no quantum computer is yet available. This class represents a quantization of the PageRank protocol currently employed to list web pages according to their importance. We have found an instance of this class of quantum protocols that outperforms its classical counterpart and may break the classical hierarchy of web pages depending on the topology of the web. PMID:22685626

  9. An Automatic Web Service Composition Framework Using QoS-Based Web Service Ranking Algorithm.

    PubMed

    Mallayya, Deivamani; Ramachandran, Baskaran; Viswanathan, Suganya

    2015-01-01

    Web services have become the technology of choice for service-oriented computing to meet the interoperability demands of web applications. In the Internet era, the exponential growth in the number of web services makes quality of service (QoS) an essential parameter for discriminating among web services. In this paper, a user-preference-based web service ranking (UPWSR) algorithm is proposed to rank web services based on user preferences and the QoS aspects of the web service. When the user's request cannot be fulfilled by a single atomic service, several existing services should be composed and delivered as a composition. The proposed framework allows the user to specify local and global constraints for composite web services, which improves flexibility. The UPWSR algorithm identifies best-fit services for each task in the user request and, by choosing the number of candidate services for each task, reduces the time to generate the composition plans. To tackle the problem of web service composition, the QoS-aware automatic web service composition (QAWSC) algorithm proposed in this paper is based on the QoS aspects of the web services and user preferences. The proposed framework allows the user to provide feedback about the composite service, which improves the reputation of the services.
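
    As a toy illustration of QoS-based ranking in the spirit of UPWSR (not the published algorithm itself), the sketch below scores candidate services by user-weighted, normalised QoS attributes; all names, attributes and weights are invented.

    ```python
    # Toy QoS-based service ranking; attributes here are "higher is better"
    # (invert latency-style metrics before scoring). Not the UPWSR algorithm.
    def rank_services(candidates: dict[str, dict[str, float]],
                      weights: dict[str, float]) -> list[tuple[str, float]]:
        attrs = weights.keys()
        # Normalise each attribute to [0, 1] across candidates, then weight.
        bounds = {a: (min(c[a] for c in candidates.values()),
                      max(c[a] for c in candidates.values())) for a in attrs}
        def norm(a, v):
            lo, hi = bounds[a]
            return 0.5 if hi == lo else (v - lo) / (hi - lo)
        scores = {name: sum(weights[a] * norm(a, qos[a]) for a in attrs)
                  for name, qos in candidates.items()}
        return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)

    ranked = rank_services(
        {"svcA": {"availability": 0.99, "throughput": 120.0},
         "svcB": {"availability": 0.95, "throughput": 300.0}},
        weights={"availability": 0.7, "throughput": 0.3})
    print(ranked)  # [('svcA', 0.7), ('svcB', 0.3)]
    ```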

  10. Flexible Web services integration: a novel personalised social approach

    NASA Astrophysics Data System (ADS)

    Metrouh, Abdelmalek; Mokhati, Farid

    2018-05-01

    Dynamic composition or integration remains one of the key objectives of Web services technology. This paper proposes an innovative approach to dynamic Web service composition based on functional and non-functional attributes and individual preferences. In this approach, social networks of Web services are used to maintain interactions between Web services in order to select and compose Web services that are more closely related to the user's preferences. We use the concept of a Web services community in a social network of Web services to considerably reduce the search space. These communities are created by the direct involvement of Web services providers.

  11. An Automatic Web Service Composition Framework Using QoS-Based Web Service Ranking Algorithm

    PubMed Central

    Mallayya, Deivamani; Ramachandran, Baskaran; Viswanathan, Suganya

    2015-01-01

    Web services have become the technology of choice for service-oriented computing to meet the interoperability demands of web applications. In the Internet era, the exponential growth in the number of web services makes quality of service (QoS) an essential parameter for discriminating among web services. In this paper, a user-preference-based web service ranking (UPWSR) algorithm is proposed to rank web services based on user preferences and the QoS aspects of the web service. When the user's request cannot be fulfilled by a single atomic service, several existing services should be composed and delivered as a composition. The proposed framework allows the user to specify local and global constraints for composite web services, which improves flexibility. The UPWSR algorithm identifies best-fit services for each task in the user request and, by choosing the number of candidate services for each task, reduces the time to generate the composition plans. To tackle the problem of web service composition, the QoS-aware automatic web service composition (QAWSC) algorithm proposed in this paper is based on the QoS aspects of the web services and user preferences. The proposed framework allows the user to provide feedback about the composite service, which improves the reputation of the services. PMID:26504894

  12. Web-accessible molecular modeling with Rosetta: The Rosetta Online Server that Includes Everyone (ROSIE).

    PubMed

    Moretti, Rocco; Lyskov, Sergey; Das, Rhiju; Meiler, Jens; Gray, Jeffrey J

    2018-01-01

    The Rosetta molecular modeling software package provides a large number of experimentally validated tools for modeling and designing proteins, nucleic acids, and other biopolymers, with new protocols being added continually. While freely available to academic users, external usage is limited by the need for expertise in the Unix command line environment. To make Rosetta protocols available to a wider audience, we previously created a web server called Rosetta Online Server that Includes Everyone (ROSIE), which provides a common environment for hosting web-accessible Rosetta protocols. Here we describe a simplification of the ROSIE protocol specification format, one that permits easier implementation of Rosetta protocols. Whereas the previous format required creating multiple separate files in different locations, the new format allows specification of the protocol in a single file. This new, simplified protocol specification has more than doubled the number of Rosetta protocols available under ROSIE. These new applications include pKa determination, lipid accessibility calculation, ribonucleic acid redesign, protein-protein docking, protein-small molecule docking, symmetric docking, antibody docking, cyclic toxin docking, critical binding peptide determination, and mapping small molecule binding sites. ROSIE is freely available to academic users at http://rosie.rosettacommons.org. © 2017 The Protein Society.

  13. Extending the GI Brokering Suite to Support New Interoperability Specifications

    NASA Astrophysics Data System (ADS)

    Boldrini, E.; Papeschi, F.; Santoro, M.; Nativi, S.

    2014-12-01

    The GI brokering suite provides the discovery, access, and semantic Brokers (i.e. GI-cat, GI-axe, GI-sem) that empower a Brokering framework for multi-disciplinary and multi-organizational interoperability. The GI suite has been successfully deployed in the framework of several programmes and initiatives, such as European Union funded projects, NSF BCube, and the intergovernmental coordinated effort Global Earth Observation System of Systems (GEOSS). Each GI suite Broker facilitates interoperability for a particular functionality (i.e. discovery, access, semantic extension) among a set of brokered resources published by autonomous providers (e.g. data repositories, web services, semantic assets) and a set of heterogeneous consumers (e.g. client applications, portals, apps). A wide set of data models, encoding formats, and service protocols are already supported by the GI suite, such as those defined by international standards organizations like OGC and ISO (e.g. WxS, CSW, SWE, GML, netCDF) and by Community specifications (e.g. THREDDS, OpenSearch, OPeNDAP, ESRI APIs). Using the GI suite, resources published by a particular Community or organization through their specific technology (e.g. OPeNDAP/netCDF) can be transparently discovered, accessed, and used by different Communities utilizing their preferred tools (e.g. a GIS visualizing WMS layers). Since Information Technology is a moving target, new standards and technologies continuously emerge and are adopted in the Earth Science context too. Therefore, the GI Brokering suite was conceived to be flexible and to accommodate new interoperability protocols and data models. For example, the GI suite has recently added support for widely used specifications introduced to implement Linked Data, the Semantic Web, and specific community needs. Among others, these include: DCAT, an RDF vocabulary designed to facilitate interoperability between Web data catalogs; CKAN, a data management system for data distribution, particularly used by public administrations; CERIF, used by CRIS (Current Research Information System) instances; and the HYRAX server, a scientific dataset publishing component. This presentation will discuss these and other recent GI suite extensions implemented to support new interoperability protocols in use by the Earth Science Communities.

  14. Virtual Observatory Interfaces to the Chandra Data Archive

    NASA Astrophysics Data System (ADS)

    Tibbetts, M.; Harbo, P.; Van Stone, D.; Zografou, P.

    2014-05-01

    The Chandra Data Archive (CDA) plays a central role in the operation of the Chandra X-ray Center (CXC) by providing access to Chandra data. Proprietary interfaces have been the backbone of the CDA throughout the Chandra mission. While these interfaces continue to provide the depth and breadth of mission specific access Chandra users expect, the CXC has been adding Virtual Observatory (VO) interfaces to the Chandra proposal catalog and observation catalog. VO interfaces provide standards-based access to Chandra data through simple positional queries or more complex queries using the Astronomical Data Query Language. Recent development at the CDA has generalized our existing VO services to create a suite of services that can be configured to provide VO interfaces to any dataset. This approach uses a thin web service layer for the individual VO interfaces, a middle-tier query component which is shared among the VO interfaces for parsing, scheduling, and executing queries, and existing web services for file and data access. The CXC VO services provide Simple Cone Search (SCS), Simple Image Access (SIA), and Table Access Protocol (TAP) implementations for both the Chandra proposal and observation catalogs within the existing archive architecture. Our work with the Chandra proposal and observation catalogs, as well as additional datasets beyond the CDA, illustrates how we can provide configurable VO services to extend core archive functionality.
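
    As an illustration of how a TAP endpoint is consumed, the sketch below runs an ADQL query with the pyvo Python library; the endpoint URL and table name are placeholders rather than the CXC's actual services.

    ```python
    # Querying a TAP service with ADQL via pyvo; the endpoint URL and table
    # name are placeholders, not the CXC's actual service addresses.
    import pyvo

    tap = pyvo.dal.TAPService("https://example.edu/tap")  # hypothetical endpoint
    results = tap.search(
        "SELECT TOP 5 obs_id, ra, dec FROM ivoa.obscore "
        "WHERE CONTAINS(POINT('ICRS', ra, dec), "
        "CIRCLE('ICRS', 83.63, 22.01, 0.1)) = 1")
    for row in results:
        print(row["obs_id"], row["ra"], row["dec"])
    ```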

  15. Development of wide area environment accelerator operation and diagnostics method

    NASA Astrophysics Data System (ADS)

    Uchiyama, Akito; Furukawa, Kazuro

    2015-08-01

    Remote operation and diagnostic systems for particle accelerators have been developed for beam operation and maintenance in various situations. Even though fully remote experiments are not necessary, the remote diagnosis and maintenance of the accelerator is required. Considering remote-operation operator interfaces (OPIs), the use of standard protocols such as the hypertext transfer protocol (HTTP) is advantageous, because system-dependent protocols are unnecessary between the remote client and the on-site server. Here, we have developed a client system based on WebSocket, which is a new protocol provided by the Internet Engineering Task Force for Web-based systems, as a next-generation Web-based OPI using the Experimental Physics and Industrial Control System Channel Access protocol. As a result of this implementation, WebSocket-based client systems have become available for remote operation. Also, as regards practical application, the remote operation of an accelerator via a wide area network (WAN) faces a number of challenges, e.g., the accelerator has both experimental device and radiation generator characteristics. Any error in remote control system operation could result in an immediate breakdown. Therefore, we propose the implementation of an operator intervention system for remote accelerator diagnostics and support that can obviate any differences between the local control room and remote locations. Here, remote-operation Web-based OPIs, which resolve security issues, are developed.
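
    The paper's OPI message format is not given, so the sketch below only illustrates the general pattern of a WebSocket client monitoring a process variable; the server URI, PV name and JSON schema are all invented.

    ```python
    # Toy WebSocket client for a remote OPI; the server URI and the JSON
    # message schema are hypothetical, as the paper does not publish them.
    import asyncio
    import json
    import websockets

    async def monitor_pv(uri: str, pv: str) -> None:
        async with websockets.connect(uri) as ws:
            # Hypothetical subscription message for a Channel Access PV.
            await ws.send(json.dumps({"action": "monitor", "pv": pv}))
            async for message in ws:
                update = json.loads(message)
                print(update.get("pv"), update.get("value"))

    asyncio.run(monitor_pv("wss://ops.example.jp/ca", "RING:BEAM:CURRENT"))
    ```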

  16. Proposal for a Web Encoding Service (WES) for Spatial Data Transactions

    NASA Astrophysics Data System (ADS)

    Siew, C. B.; Peters, S.; Rahman, A. A.

    2015-10-01

    The use of web services in Spatial Data Infrastructures (SDI) is well established and standardized by the Open Geospatial Consortium (OGC). Similar web services for 3D SDI have also been established in recent years, with extended capabilities to handle 3D spatial data. The increasing popularity of the City Geography Markup Language (CityGML) for 3D city modelling applications leads to the need to handle large spatial data volumes for data delivery. This paper revisits the available web services in the OGC Web Services (OWS) family and proposes the background concepts and requirements for encoding spatial data via a Web Encoding Service (WES). Furthermore, the paper discusses the data flow of the encoder within the web service, e.g. possible integration with the Web Processing Service (WPS) or Web 3D Service (W3DS). This integration could be extended to other available web services for efficient handling of spatial data, especially 3D spatial data.

  17. Improving Service Management in the Internet of Things

    PubMed Central

    Sammarco, Chiara; Iera, Antonio

    2012-01-01

    In the Internet of Things (IoT) research arena, many efforts are devoted to adapting the existing IP standards to emerging IoT nodes. This is the direction followed by three Internet Engineering Task Force (IETF) Working Groups, which paved the way for research on IP-based constrained networks. Through a simplification of the whole TCP/IP stack, resource-constrained nodes become direct interlocutors of application-level entities at every point of the network. In this paper we analyze some side effects of this solution when large amounts of data have to be transmitted. In particular, we conduct a performance analysis of the Constrained Application Protocol (CoAP), a widely accepted web transfer protocol for the Internet of Things, and propose a service management enhancement that improves the exploitation of network and node resources. It is specifically designed for constrained nodes in the above-mentioned conditions and proves able to significantly improve a node's energy performance in the presence of large resource representations (and hence large data transmissions).

  18. Health status assessment via the World Wide Web.

    PubMed Central

    Bell, D. S.; Kahn, C. E.

    1996-01-01

    We explored the use of the World Wide Web to collect health status information for medical outcomes research. The RAND 36-Item Health Survey 1.0 (RAND-36), which contains the 36 multiple-choice questions of the Medical Outcomes Study SF-36 "Short Form" and differs only in its simplified scoring scheme, was made available for anonymous use on the Internet. Participation in the survey was invited through health-related Internet news groups and mailing lists. Participants entered data and received their scores using the World Wide Web protocol. Entries were recorded from 15 June 1995 to 14 June 1996 (1 year). The survey was completed anonymously by 4876 individuals with access to the World Wide Web. Two-thirds completed the survey within 5 minutes, and 97% did so within 10 minutes. The item-completion rate was 99.28%. Values of Cronbach's alpha of 0.76 to 0.90 for the scoring scales matched the high reliability found in the Medical Outcomes Study. The World Wide Web provides a method of rapidly measuring individual health status and may play an important role in advancing health services research and outcomes-based patient care. PMID:8947684

  19. Emergency airway management in critically injured patients: a survey of U.S. aero-medical transport programs.

    PubMed

    James, Dorsha N; Voskresensky, Igor V; Jack, Meg; Cotton, Bryan A

    2009-06-01

    Pre-hospital airway management represents the intervention most likely to impact outcomes in critically injured patients. As such, airway management issues dominate quality improvement (QI) reviews of aero-medical programs. The purpose of this study was to evaluate current practice patterns of airway management in trauma among U.S. aero-medical service (AMS) programs. The Association of Air Medical Services (AAMS) Resource Guide from 2005 to 2006 was utilized to identify the e-mail addresses of all directors of U.S. aero-medical transport programs. Program directors from 182 U.S. aero-medical programs were asked to participate in an anonymous, web-based survey of emergency airway management protocols and practices. Non-responders to the initial request were contacted a second time by e-mail. 89 programs responded. 98.9% have rapid sequence intubation (RSI) protocols. 90% use succinylcholine, 70% use long-acting neuromuscular blockers (NMB) within their RSI protocol. 77% have protocols for mandatory in-flight sedation but only 13% have similar protocols for maintenance paralytics. 60% administer long-acting NMB immediately after RSI, 13% after confirmation of neurological activity. Given clinical scenarios, however, 97% administer long-acting NMB to patients with scene and in-flight Glasgow Coma Scale (GCS) of 3, even for brief transport times. The majority of AMS programs have well defined RSI and in-flight sedation protocols, while protocols for in-flight NMB are uncommon. Despite this, nearly all programs administer long-acting NMB following RSI, irrespective of GCS or flight time. Given the impact of in-flight NMB on initial assessment, early intervention, and injury severity scoring, a critical appraisal of current AMS airway management practices appears warranted.

  20. CartograTree: connecting tree genomes, phenotypes and environment.

    PubMed

    Vasquez-Gross, Hans A; Yu, John J; Figueroa, Ben; Gessler, Damian D G; Neale, David B; Wegrzyn, Jill L

    2013-05-01

    Today, researchers spend a tremendous amount of time gathering, formatting, filtering and visualizing data collected from disparate sources. Under the umbrella of forest tree biology, we seek to provide a platform and leverage modern technologies to connect biotic and abiotic data. Our goal is to provide an integrated web-based workspace that connects environmental, genomic and phenotypic data via geo-referenced coordinates. Here, we connect the genomic query web-based workspace, DiversiTree and a novel geographical interface called CartograTree to data housed on the TreeGenes database. To accomplish this goal, we implemented Simple Semantic Web Architecture and Protocol to enable the primary genomics database, TreeGenes, to communicate with semantic web services regardless of platform or back-end technologies. The novelty of CartograTree lies in the interactive workspace that allows for geographical visualization and engagement of high performance computing (HPC) resources. The application provides a unique tool set to facilitate research on the ecology, physiology and evolution of forest tree species. CartograTree can be accessed at: http://dendrome.ucdavis.edu/cartogratree. © 2013 Blackwell Publishing Ltd.

  1. Access Control Mechanism for IoT Environments Based on Modelling Communication Procedures as Resources.

    PubMed

    Cruz-Piris, Luis; Rivera, Diego; Marsa-Maestre, Ivan; de la Hoz, Enrique; Velasco, Juan R

    2018-03-20

    Internet growth has generated new types of services where the use of sensors and actuators is especially remarkable. These services compose what is known as the Internet of Things (IoT). One of the biggest current challenges is obtaining a safe and easy access control scheme for the data managed in these services. We propose integrating IoT devices in an access control system designed for Web-based services by modelling certain IoT communication elements as resources. This would allow us to obtain a unified access control scheme between heterogeneous devices (IoT devices, Internet-based services, etc.). To achieve this, we have analysed the most relevant communication protocols for these kinds of environments and then we have proposed a methodology which allows the modelling of communication actions as resources. Then, we can protect these resources using access control mechanisms. The validation of our proposal has been carried out by selecting a communication protocol based on message exchange, specifically Message Queuing Telemetry Transport (MQTT). As an access control scheme, we have selected User-Managed Access (UMA), an existing Open Authorization (OAuth) 2.0 profile originally developed for the protection of Internet services. We have performed tests focused on validating the proposed solution in terms of the correctness of the access control system. Finally, we have evaluated the energy consumption overhead when using our proposal.
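
    As a minimal illustration of the modelled communication action, the sketch below publishes one MQTT message with the paho-mqtt Python library; in the paper's scheme, it is this (topic, publish) action that the UMA authorisation server would permit or deny. The broker host, topic and credentials are placeholders.

    ```python
    # One-shot MQTT publish with paho-mqtt; in the paper's model, the
    # (topic, publish) pair is the "resource" a UMA server would protect.
    # Broker host, topic and credentials below are placeholders.
    import paho.mqtt.publish as publish

    publish.single(
        "building/floor1/temperature",    # topic modelled as a resource
        payload="21.5",
        qos=1,
        hostname="broker.example.org",
        auth={"username": "device42", "password": "uma-issued-token"},
    )
    ```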

  2. Access Control Mechanism for IoT Environments Based on Modelling Communication Procedures as Resources

    PubMed Central

    2018-01-01

    Internet growth has generated new types of services where the use of sensors and actuators is especially remarkable. These services compose what is known as the Internet of Things (IoT). One of the biggest current challenges is obtaining a safe and easy access control scheme for the data managed in these services. We propose integrating IoT devices in an access control system designed for Web-based services by modelling certain IoT communication elements as resources. This would allow us to obtain a unified access control scheme between heterogeneous devices (IoT devices, Internet-based services, etc.). To achieve this, we have analysed the most relevant communication protocols for these kinds of environments and then we have proposed a methodology which allows the modelling of communication actions as resources. Then, we can protect these resources using access control mechanisms. The validation of our proposal has been carried out by selecting a communication protocol based on message exchange, specifically Message Queuing Telemetry Transport (MQTT). As an access control scheme, we have selected User-Managed Access (UMA), an existing Open Authorization (OAuth) 2.0 profile originally developed for the protection of Internet services. We have performed tests focused on validating the proposed solution in terms of the correctness of the access control system. Finally, we have evaluated the energy consumption overhead when using our proposal. PMID:29558406

  3. Expert Advisor (EA) Evaluation System Using Web-based ELECTRE Method in Foreign Exchange (Forex) Market

    NASA Astrophysics Data System (ADS)

    Satibi; Widodo, Catur Edi; Farikhin

    2018-02-01

    This research aims to automatically optimize forex trading profit using Expert Advisors (EAs) while still taking accuracy and drawdown levels into account. The evaluation system classifies EA performance by trading market session (Sydney, Tokyo, London and New York) to determine the right EA to use in a given market session. The evaluation system is a web-based implementation of the ELECTRE method that interacts in real time with EAs through a web service and presents a real-time chart performance dashboard using WebSocket protocol communications. The web applications are programmed using NodeJs. In the testing period, all EAs were simulated 24 hours a day across all market sessions for three months; the best EA is scored on profit, accuracy and drawdown criteria calculated using the web-based ELECTRE method. The idea of this research is to compare the best overall EA in the testing period with the collaborative performance of the best EAs classified by market session. This research uses three months of EUR/USD historical data as the testing period and another three months as the validation period. As a result, the collaboration of the four best EAs classified by market session increases the profit percentage consistently in both the testing and validation periods while maintaining accuracy and drawdown levels.
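
    The concordance step of the classical ELECTRE method, scoring how strongly one EA outranks another across weighted criteria, can be sketched as follows; the figures and weights are invented for illustration, and the paper's full web-based implementation is in NodeJs rather than Python.

    ```python
    # Sketch of the ELECTRE concordance step; criteria values and weights
    # are invented, not taken from the paper.
    def concordance(a, b, weights, lower_is_better=frozenset({"drawdown"})):
        """Share of criterion weight on which alternative a outranks b."""
        total = sum(weights.values())
        won = 0.0
        for crit, w in weights.items():
            if crit in lower_is_better:
                ok = a[crit] <= b[crit]   # smaller drawdown is at least as good
            else:
                ok = a[crit] >= b[crit]   # larger profit/accuracy is at least as good
            if ok:
                won += w
        return won / total

    ea1 = {"profit": 12.4, "accuracy": 0.81, "drawdown": 6.0}
    ea2 = {"profit": 10.1, "accuracy": 0.85, "drawdown": 9.5}
    w = {"profit": 0.5, "accuracy": 0.3, "drawdown": 0.2}
    print(concordance(ea1, ea2, w), concordance(ea2, ea1, w))  # 0.7 0.3
    ```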

  4. A Plug-and-Play Human-Centered Virtual TEDS Architecture for the Web of Things.

    PubMed

    Hernández-Rojas, Dixys L; Fernández-Caramés, Tiago M; Fraga-Lamas, Paula; Escudero, Carlos J

    2018-06-27

    This article presents a Virtual Transducer Electronic Data Sheet (VTEDS)-based framework for the development of intelligent sensor nodes with plug-and-play capabilities in order to contribute to the evolution of the Internet of Things (IoT) toward the Web of Things (WoT). It makes use of new lightweight protocols that allow sensors to self-describe, auto-calibrate, and auto-register. Such protocols enable the development of novel IoT solutions while guaranteeing low latency, low power consumption, and the required Quality of Service (QoS). Thanks to the developed human-centered tools, it is possible to dynamically configure and modify IoT device firmware, managing the active transducers and their communication protocols in an easy and intuitive way, without requiring any prior programming knowledge. In order to evaluate the performance of the system, it was tested with Bluetooth Low Energy (BLE) and Ethernet-based smart sensors in different scenarios. Specifically, user experience was quantified empirically (i.e., how fast the system shows collected data to a user was measured). The obtained results show that the proposed VTED architecture is very fast, with some smart sensors (located in Europe) able to self-register and self-configure in a remote cloud (in South America) in less than 3 s and to display data to remote users in less than 2 s.

  5. High Availability Applications for NOMADS at the NOAA Web Operations Center Aimed at Providing Reliable Real Time Access to Operational Model Data

    NASA Astrophysics Data System (ADS)

    Alpert, J. C.; Rutledge, G.; Wang, J.; Freeman, P.; Kang, C. Y.

    2009-05-01

    The NOAA Operational Modeling Archive Distribution System (NOMADS) is now delivering high-availability services as part of NOAA's official real-time data dissemination at its Web Operations Center (WOC). The WOC is a web service used by all organizational units in NOAA and acts as a data repository where public information can be posted to a secure and scalable content server. A goal is to foster collaborations among the research and education communities, value-added retailers, and public access for science and development efforts aimed at advancing modeling and GEO-related tasks. The services used to access the operational model data output are the Open-source Project for a Network Data Access Protocol (OPeNDAP), implemented with the Grid Analysis and Display System (GrADS) Data Server (GDS), and applications for slicing, dicing and area-subsetting the large matrix of real-time model data holdings. This approach ensures an efficient use of computer resources because users transmit/receive only the data necessary for their tasks, including metadata. Data sets served in this way from a high-availability server offer vast possibilities for the creation of new products for value-added retailers and the scientific community. New applications to access data and observations for verification of gridded model output, and progress toward integration with access to conventional and non-conventional observations, will be discussed. We will demonstrate how users can use NOMADS services to repackage area subsets, either by repackaging GRIB2 files or by selecting values by ensemble component, (forecast) time, vertical level, horizontal location, and variable: virtually a 6-dimensional analysis service across the Internet.
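
    As an illustration of the OPeNDAP-style access pattern, the sketch below requests only a small slice of a remote dataset so that subsetting happens server-side; it assumes the netCDF4 Python library built with DAP support, and the endpoint URL and variable name are placeholders.

    ```python
    # Server-side subsetting over OPeNDAP using netCDF4-python (which accepts
    # DAP URLs); the endpoint and variable name here are placeholders.
    from netCDF4 import Dataset

    ds = Dataset("https://nomads.example.gov/dods/gfs")  # hypothetical endpoint
    t2m = ds.variables["tmp2m"][0, 100:110, 200:210]     # only this slice is fetched
    print(t2m.shape)  # (10, 10)
    ds.close()
    ```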

  6. The GEOSS Clearinghouse based on the GeoNetwork opensource

    NASA Astrophysics Data System (ADS)

    Liu, K.; Yang, C.; Wu, H.; Huang, Q.

    2010-12-01

    The Global Earth Observation System of Systems (GEOSS) is established to support the study of the Earth system in a global community. It provides services for social management, quick response, academic research, and education. The purpose of GEOSS is to achieve comprehensive, coordinated and sustained observations of the Earth system, improve monitoring of the state of the Earth, increase understanding of Earth processes, and enhance prediction of the behavior of the Earth system. In 2009, GEO called for a competition for an official GEOSS clearinghouse to be selected as a source for consolidating catalogs of Earth observations. The Joint Center for Intelligent Spatial Computing at George Mason University worked with USGS to submit a solution based on the open-source platform GeoNetwork. In the spring of 2010, this solution was selected as the product for the GEOSS clearinghouse. The GEOSS Clearinghouse is a common search facility for the intergovernmental Group on Earth Observations (GEO). By providing a list of harvesting functions in its business logic, the GEOSS clearinghouse can collect metadata from distributed catalogs including other GeoNetwork native nodes, webDAV/sitemap/WAF, Catalog Services for the Web (CSW) 2.0, the GEOSS Component and Service Registry (http://geossregistries.info/), OGC Web Services (WCS, WFS, WMS and WPS), OAI Protocol for Metadata Harvesting 2.0, ArcSDE Server and the local file system. Metadata in the GEOSS clearinghouse are managed in a database (MySQL, Postgresql, Oracle, or MckoiDB) and an index of the metadata is maintained through the Lucene engine. Thus, EO data, services, and related resources can be discovered and accessed. It supports a variety of geospatial standards, including CSW and SRU for search, FGDC and ISO metadata, and WMS-related OGC standards for data access and visualization, as linked from the metadata.

  7. Interfaces to PeptideAtlas: a case study of standard data access systems

    PubMed Central

    Handcock, Jeremy; Robinson, Thomas; Deutsch, Eric W.; Boyle, John

    2012-01-01

    Access to public data sets is important to the scientific community as a resource to develop new experiments or validate new data. Projects such as the PeptideAtlas, Ensembl and The Cancer Genome Atlas (TCGA) offer both access to public data and a repository to share their own data. Access to these data sets is often provided through a web page form and a web service API. Access technologies based on web protocols (e.g. http) have been in use for over a decade and are widely adopted across the industry for a variety of functions (e.g. search, commercial transactions, and social media). Each architecture adapts these technologies to provide users with tools to access and share data. Both commonly used web service technologies (e.g. REST and SOAP), and custom-built solutions over HTTP are utilized in providing access to research data. Providing multiple access points ensures that the community can access the data in the simplest and most effective manner for their particular needs. This article examines three common access mechanisms for web accessible data: BioMart, caBIG, and Google Data Sources. These are illustrated by implementing each over the PeptideAtlas repository and reviewed for their suitability based on specific usages common to research. BioMart, Google Data Sources, and caBIG are each suitable for certain uses. The tradeoffs made in the development of the technology are dependent on the uses each was designed for (e.g. security versus speed). This means that an understanding of specific requirements and tradeoffs is necessary before selecting the access technology. PMID:22941959

  8. A transmission security framework for email-based telemedicine.

    PubMed

    Caffery, Liam J; Smith, Anthony C

    2010-01-01

    Encryption is used to convert an email message to an unreadable format, thereby securing patient privacy during the transmission of the message across the Internet. Two available means of encryption are: public key infrastructure (PKI) used in conjunction with ordinary email, and secure hypertext transfer protocol (HTTPS) used by secure web-mail applications. Both of these approaches have advantages and disadvantages in terms of viability, cost, usability and compliance. The aim of this study was to develop an instrument to identify the most appropriate means of encrypting email communication for telemedicine. A multi-method approach was used to construct the instrument. Technical assessments and existing bodies of knowledge regarding the utility of PKI were analyzed, along with survey results from users of Queensland Health's Child and Youth Mental Health Service secure web-mail service. The resultant decision support model identified that the following conditions affect the choice of encryption technology: the correspondent's risk perception, the correspondent's identification with the security afforded by encryption, the email client used by correspondents, the tolerance to human error, and the availability of technical resources. A decision support model is presented as a flow chart to identify the most appropriate encryption for a specific email-based telemedicine service.

  9. Leveraging Globus to Support Access and Delivery of Scientific Data

    NASA Astrophysics Data System (ADS)

    Cram, T.; Schuster, D.; Ji, Z.; Worley, S. J.

    2015-12-01

    The NCAR Research Data Archive (RDA; http://rda.ucar.edu) contains a large and diverse collection of meteorological and oceanographic observations, operational and reanalysis outputs, and remote sensing datasets to support atmospheric and geoscience research. The RDA contains more than 600 dataset collections which support the varying needs of a diverse user community. The number of RDA users is increasing annually, and the most popular method used to access the RDA data holdings is through web-based protocols, such as wget and cURL based scripts. In the year 2014, 11,000 unique users downloaded more than 1.1 petabytes of data from the RDA, and customized data products were prepared for more than 45,000 user-driven requests. In order to further support this increase in web download usage, the RDA has implemented the Globus data transfer service (www.globus.org) to provide a GridFTP data transfer option for the user community. The Globus service is broadly scalable, has an easy-to-install client, is sustainably supported, and provides a robust, efficient, and reliable data transfer option for the research community. This presentation will highlight the technical functionality, challenges, and usefulness of the Globus data transfer service for accessing the RDA data holdings.

  10. The Joy of Playing with Oceanographic Data

    NASA Astrophysics Data System (ADS)

    Smith, A. T.; Xing, Z.; Armstrong, E. M.; Thompson, C. K.; Huang, T.

    2013-12-01

    The web is no longer just an afterthought. It is no longer just a presentation layer filled with HTML, CSS, JavaScript, frameworks, 3D, and more. It has become the medium of our communication. It is the database of all databases. It is the computing platform of all platforms. It has transformed the way we do science. Web services are the de facto method for communication between machines over the web. Representational State Transfer (REST) has standardized the way we architect services and their interfaces. In the Earth Science domain, we are familiar with tools and services such as the Open-source Project for a Network Data Access Protocol (OPeNDAP), Thematic Realtime Environmental Distributed Data Services (THREDDS), and the Live Access Server (LAS). We are also familiar with various data formats such as NetCDF3/4, HDF4/5, GRIB, TIFF, etc. One of the challenges for the Earth Science community is accessing information within these data. There are community-accepted readers that our users can download and install. However, the Application Programming Interface (API) between these readers is not standardized, which leads to non-portable applications. Webification (w10n) is an emerging technology, developed at the Jet Propulsion Laboratory, which exploits the hierarchical nature of a science data artifact (e.g. a granule file) to assign a URL to each element within the artifact. By embracing standards such as JSON, XML, and HTML5 and predictable URLs, w10n provides a simple interface that enables tool-builders and researchers to develop portable tools and applications to interact with artifacts of various formats. The NASA Physical Oceanographic Distributed Active Archive Center (PO.DAAC) is the designated data center for observational products relevant to the physical state of the ocean. Over the past year PO.DAAC has been evaluating w10n technology by webifying its archive holdings to provide simplified access to oceanographic science artifacts and as a service to enable future tools and services development. In this talk, we will focus on a w10n-based system called the Distributed Oceanographic Webification Service (DOWS), being developed at PO.DAAC to provide a newer and simpler method for working with observational data artifacts. As a continued effort at PO.DAAC to provide better tools and services to visualize our data, the talk will also discuss the latest web-based data visualization tools/frameworks (such as d3.js, Three.js, Leaflet.js, and more) and techniques for working with webified oceanographic science data in both 2D and 3D web approaches.

  11. A low-cost mobile adaptive tracking system for chronic pulmonary patients in home environment.

    PubMed

    Işik, Ali Hakan; Güler, Inan; Sener, Melahat Uzel

    2013-01-01

    The main objective of this study is to present a real-time mobile adaptive tracking system for patients diagnosed with diseases such as asthma or chronic obstructive pulmonary disease, together with application results at home. The main role of the system is to support and track, in real time, chronic pulmonary patients who are comfortable in their home environment. It is not intended to replace the doctor, regular treatment, or diagnosis. In this study, the Java 2 Micro Edition-based system is integrated with portable spirometry, a smartphone, Extensible Markup Language-based Web services, a Web server, and Web pages for visualizing pulmonary function test results. The Bluetooth(®) (Bluetooth SIG, Kirkland, WA) virtual serial port protocol is used to obtain the test results from the spirometer. General packet radio service, wireless local area network, or third-generation wireless networks are used to send the test results from the smartphone to the remote database. The system provides real-time classification of test results with the back-propagation artificial neural network algorithm on a mobile smartphone. It also provides the generation of appropriate short message service-based notifications and the sending of all data to the Web server. In this study, the test results of 486 patients, obtained from Atatürk Chest Diseases and Thoracic Surgery Training and Research Hospital in Ankara, Turkey, are used as the training and test set for the algorithm. The algorithm has 98.7% accuracy, 97.83% specificity, 97.63% sensitivity, and 0.946 correlation values. The results show that the system is cheap (900 Euros) and reliable. The developed real-time system provides improvement in classification accuracy and facilitates the tracking of chronic pulmonary patients.
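
    As an illustrative stand-in for the back-propagation classifier (not the authors' trained network or data), the sketch below fits scikit-learn's MLPClassifier, which is trained by back-propagation, on synthetic spirometry-like features.

    ```python
    # Illustrative stand-in for the paper's back-propagation classifier;
    # features and labels here are synthetic, not the study's data.
    import numpy as np
    from sklearn.neural_network import MLPClassifier
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(0)
    X = rng.normal(size=(486, 4))          # e.g. FEV1, FVC, FEV1/FVC, PEF
    y = (X[:, 2] < -0.5).astype(int)       # synthetic "obstruction" label

    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
    clf = MLPClassifier(hidden_layer_sizes=(8,), max_iter=2000, random_state=0)
    clf.fit(X_tr, y_tr)
    print(f"accuracy: {clf.score(X_te, y_te):.3f}")
    ```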

  12. New NED XML/VOtable Services and Client Interface Applications

    NASA Astrophysics Data System (ADS)

    Pevunova, O.; Good, J.; Mazzarella, J.; Berriman, G. B.; Madore, B.

    2005-12-01

    The NASA/IPAC Extragalactic Database (NED) provides data and cross-identifications for over 7 million extragalactic objects fused from thousands of survey catalogs and journal articles. The data cover all frequencies from radio through gamma rays and include positions, redshifts, photometry and spectral energy distributions (SEDs), sizes, and images. NED services have traditionally supplied data in HTML format for connections from Web browsers, and in a custom ASCII data structure for connections by remote computer programs written in the C programming language. We describe new services that provide responses to NED queries as XML documents compliant with the international Virtual Observatory VOTable protocol. The XML/VOTable services support cone searches, all-sky searches based on object attributes (survey names, cross-IDs, redshifts, flux densities), and requests for detailed object data. Initial services have been inserted into the NVO registry, and others will follow soon. The first client application is a style sheet specification for rendering NED VOTable query results in Web browsers that support XML. The second prototype application is a Java applet that allows users to compare multiple SEDs. The new XML/VOTable output mode will also simplify the integration of data from NED into visualization and analysis packages, software agents, and other virtual observatory applications. We show an example SED from NED plotted using VOPlot. The NED website is: http://nedwww.ipac.caltech.edu.
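
    For readers unfamiliar with consuming VOTable responses, the short sketch below fetches a cone-search-style query and parses the returned VOTable with astropy. The URL and its parameter names are placeholders, not NED's documented interface.

    ```python
    # Sketch: fetch an XML/VOTable response and parse it with astropy.
    # The query URL and parameter names are placeholders.
    import io
    import urllib.request

    from astropy.io.votable import parse_single_table

    url = ("https://ned.example.edu/cgi-bin/objsearch"
           "?ra=10.68&dec=41.27&radius=2.0&of=xml_main")  # hypothetical params
    raw = urllib.request.urlopen(url).read()

    table = parse_single_table(io.BytesIO(raw)).to_table()
    print(table.colnames)
    print(len(table), "objects returned")
    ```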

  13. RSAT 2015: Regulatory Sequence Analysis Tools

    PubMed Central

    Medina-Rivera, Alejandra; Defrance, Matthieu; Sand, Olivier; Herrmann, Carl; Castro-Mondragon, Jaime A.; Delerce, Jeremy; Jaeger, Sébastien; Blanchet, Christophe; Vincens, Pierre; Caron, Christophe; Staines, Daniel M.; Contreras-Moreira, Bruno; Artufel, Marie; Charbonnier-Khamvongsa, Lucie; Hernandez, Céline; Thieffry, Denis; Thomas-Chollier, Morgane; van Helden, Jacques

    2015-01-01

    RSAT (Regulatory Sequence Analysis Tools) is a modular software suite for the analysis of cis-regulatory elements in genome sequences. Its main applications are (i) motif discovery, appropriate to genome-wide data sets like ChIP-seq, (ii) transcription factor binding motif analysis (quality assessment, comparisons and clustering), (iii) comparative genomics and (iv) analysis of regulatory variations. Nine new programs have been added to the 43 described in the 2011 NAR Web Software Issue, including a tool to extract sequences from a list of coordinates (fetch-sequences from UCSC), novel programs dedicated to the analysis of regulatory variants from GWAS or population genomics (retrieve-variation-seq and variation-scan), and a program to cluster motifs and visualize the similarities as trees (matrix-clustering). To deal with the drastic increase in sequenced genomes, RSAT public sites have been reorganized into taxon-specific servers. The suite is well documented with tutorials and published protocols. The software suite is available through Web sites, SOAP/WSDL Web services, virtual machines and stand-alone programs at http://www.rsat.eu/. PMID:25904632

  14. Interoperative fundus image and report sharing in compliance with integrating the healthcare enterprise conformance and web access to digital imaging and communication in medicine persistent object protocol.

    PubMed

    Wu, Hui-Qun; Lv, Zheng-Min; Geng, Xing-Yun; Jiang, Kui; Tang, Le-Min; Zhou, Guo-Min; Dong, Jian-Cheng

    2013-01-01

    To address interoperability issues between different fundus image systems, we proposed a web eye picture archiving and communication system (PACS) framework in conformance with the digital imaging and communication in medicine (DICOM) and health level 7 (HL7) protocols, to realize fundus image and report sharing and communication through the Internet. Firstly, a telemedicine-based eye care workflow was established based on the Integrating the Healthcare Enterprise (IHE) Eye Care technical framework. Then, a three-tier eye-PACS system with a browser/server architecture was established in conformance with the web access to DICOM persistent object (WADO) protocol. From any client system with a web browser installed, clinicians can log in to the eye-PACS to observe fundus images and reports. A structured report saved as PDF/HTML (its multipurpose Internet mail extensions, MIME, type), with a reference link to the relevant fundus image using the WADO syntax, can provide enough information for clinicians. Functions provided by the open-source Oviyam viewer can be used to query, zoom, move, measure, and view DICOM fundus images. Such a web eye-PACS in compliance with the WADO protocol can be used to store and communicate fundus images and reports, and is therefore of great significance for teleophthalmology.
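
    As an illustration of the WADO-URI syntax the framework relies on, the sketch below builds a retrieval URL for a single fundus image. The server address and UIDs are placeholders; the requestType/contentType parameters follow the WADO-URI convention.

    ```python
    # Sketch: retrieve one DICOM fundus image via a WADO-URI request.
    # Server address and UIDs are placeholders.
    import urllib.parse
    import urllib.request

    params = {
        "requestType": "WADO",
        "studyUID": "1.2.840.113619.2.1",       # placeholder study instance UID
        "seriesUID": "1.2.840.113619.2.1.1",    # placeholder series instance UID
        "objectUID": "1.2.840.113619.2.1.1.1",  # placeholder SOP instance UID
        "contentType": "image/jpeg",            # application/dicom for raw objects
    }
    url = "https://eyepacs.example.org/wado?" + urllib.parse.urlencode(params)
    with urllib.request.urlopen(url) as resp:
        with open("fundus.jpg", "wb") as f:
            f.write(resp.read())
    ```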

  15. Interoperability at ESA Heliophysics Science Archives: IVOA, HAPI and other implementations

    NASA Astrophysics Data System (ADS)

    Martinez-Garcia, B.; Cook, J. P.; Perez, H.; Fernandez, M.; De Teodoro, P.; Osuna, P.; Arnaud, M.; Arviset, C.

    2017-12-01

    The data of ESA heliophysics science missions are preserved at the ESAC Science Data Centre (ESDC). The ESDC aims for the long-term preservation of those data, covering missions such as Ulysses, SOHO, Proba-2, Cluster, Double Star and, in the future, Solar Orbiter. Scientists have access to these data through web services, command line and graphical user interfaces for each of the corresponding science mission archives. The International Virtual Observatory Alliance (IVOA) provides technical standards that allow interoperability among the different systems that implement them. By adopting some IVOA standards, the ESA heliophysics archives are able to share their data with tools and services that are VO-compatible. Implementations of those standards can be found in the existing archives: the Ulysses Final Archive (UFA) and the SOHO Science Archive (SSA) already make use of the VOTable format definition and the Simple Application Messaging Protocol (SAMP). For re-engineered or new archives, the implementation of services through the Table Access Protocol (TAP) or the Universal Worker Service (UWS) will leverage this interoperability. This will be the case for the Proba-2 Science Archive (P2SA) and the Solar Orbiter Archive (SOAR). We present here the IVOA standards already used by the ESA heliophysics archives and the work on-going.
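
    As a sketch of what TAP-based access to such archives looks like from the client side, the snippet below uses pyvo. The service URL and table name are assumptions, since the P2SA and SOAR endpoints were still being defined at the time of this abstract.

    ```python
    # Sketch: query a heliophysics archive through an IVOA TAP service.
    # Service URL and table name are assumptions.
    import pyvo

    tap = pyvo.dal.TAPService("https://soar.example.esa.int/tap")
    result = tap.search("SELECT TOP 10 * FROM observations")  # ADQL query
    print(result.to_table())
    ```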

  16. Interoperability science cases with the CDPP tools

    NASA Astrophysics Data System (ADS)

    Nathanaël, J.; Cecconi, B.; André, N.; Bouchemit, M.; Gangloff, M.; Budnik, E.; Jacquey, C.; Pitout, F.; Durand, J.; Rouillard, A.; Lavraud, B.; Genot, V. N.; Popescu, D.; Beigbeder, L.; Toniutti, J. P.; Caussarieu, S.

    2017-12-01

    Data exchange protocols are never as efficient as when they are invisible to the end user, who is then able to discover data, to cross-compare observations and modeled data, and finally to perform in-depth analysis. Over the years these protocols, including SAMP from the IVOA and EPN-TAP from the Europlanet 2020 RI community, backed by standard web services, have been deployed in tools designed by the French Centre de Données de la Physique des Plasmas (CDPP), including AMDA, the Propagation Tool, 3DView, ... . This presentation will focus on science cases which show the capability of interoperability in the planetary and heliophysics contexts, involving both CDPP and companion tools. Europlanet 2020 RI has received funding from the European Union's Horizon 2020 research and innovation programme under grant agreement No 654208.

  17. Web processing service for climate impact and extreme weather event analyses. Flyingpigeon (Version 1.0)

    NASA Astrophysics Data System (ADS)

    Hempelmann, Nils; Ehbrecht, Carsten; Alvarez-Castro, Carmen; Brockmann, Patrick; Falk, Wolfgang; Hoffmann, Jörg; Kindermann, Stephan; Koziol, Ben; Nangini, Cathy; Radanovics, Sabine; Vautard, Robert; Yiou, Pascal

    2018-01-01

    Analyses of extreme weather events and their impacts often require big data processing of ensembles of climate model simulations. Researchers generally proceed by downloading the data from the providers and processing the data files "at home" with their own analysis processes. However, the growing amount of available climate model and observation data makes this procedure quite awkward. In addition, data processing knowledge is kept local, instead of being consolidated into a common resource of reusable code. These drawbacks can be mitigated by using a web processing service (WPS). A WPS hosts services such as data analysis processes that are accessible over the web and can be installed close to the data archives. We developed a WPS named 'flyingpigeon' that communicates over an HTTP network protocol based on standards defined by the Open Geospatial Consortium (OGC), to be used by climatologists and impact modelers as a tool for analyzing large datasets remotely. Here, we present the processes currently available in flyingpigeon, covering commonly used operations (preprocessing steps, spatial subsets at continent, country or region level, and climate indices) as well as methods for specific climate data analysis (weather regimes, analogues of circulation, segetal flora distribution, and species distribution models). We also developed a novel, browser-based interactive data visualization for circulation analogues, illustrating the flexibility of WPS in designing custom outputs. Bringing the software to the data, instead of transferring the data to the code, is becoming increasingly necessary, especially with the upcoming massive climate datasets.
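
    From the client side, talking to a WPS such as flyingpigeon follows the standard OGC request cycle. The hedged OWSLib sketch below lists the available processes and executes one; the service URL, process identifier, and input names are invented for illustration, since real deployments publish theirs via GetCapabilities and DescribeProcess.

    ```python
    # Sketch: discover and execute a process on an OGC Web Processing Service.
    # Service URL, process identifier, and input names are assumptions.
    from owslib.wps import WebProcessingService

    wps = WebProcessingService("https://wps.example.org/wps")
    wps.getcapabilities()
    for process in wps.processes:
        print(process.identifier, "-", process.title)

    # Execute a hypothetical country-level subsetting process.
    execution = wps.execute(
        "subset_countries",
        inputs=[("region", "DEU"),
                ("resource", "https://data.example.org/tas_EUR-11.nc")])
    print(execution.status)
    ```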

  18. The VO-Dance web application at the IA2 data center

    NASA Astrophysics Data System (ADS)

    Molinaro, Marco; Knapic, Cristina; Smareglia, Riccardo

    2012-09-01

    The Italian center for Astronomical Archives (IA2, http://ia2.oats.inaf.it) is a national infrastructure project of the Italian National Institute for Astrophysics (Istituto Nazionale di AstroFisica, INAF) that provides services for the astronomical community. Besides data hosting for the Large Binocular Telescope (LBT) Corporation, the Galileo National Telescope (Telescopio Nazionale Galileo, TNG) Consortium and other telescopes and instruments, IA2 offers proprietary and public data access through user portals (both developed and mirrored) and deploys resources complying with the Virtual Observatory (VO) standards. Archiving systems and web interfaces are developed to be extremely flexible about adding new instruments from other telescopes. VO resource publishing, along with the data access portals, implements the International Virtual Observatory Alliance (IVOA) protocols, providing astronomers with new ways of analyzing data. Given the large variety of data flavours and IVOA standards, the need arises for tools that easily accomplish data ingestion and data publishing. This paper describes the VO-Dance tool, which IA2 started developing to address VO resource publishing in a dynamical way from already existent database tables or views. The tool consists of a Java web application, potentially DBMS and platform independent, that stores the services' metadata and information internally, exposes RESTful endpoints to accept VO queries for these services, and dynamically translates calls to these endpoints into SQL queries coherent with the published table or view. In response to the call, VO-Dance translates the database answer back in a VO-compliant way.

  19. Personalization of Rule-based Web Services.

    PubMed

    Choi, Okkyung; Han, Sang Yong

    2008-04-04

    Nowadays, Web users have clearly expressed their wish to receive personalized services directly. Personalization is the way to tailor services directly to the immediate requirements of the user. However, the current Web Services System does not provide features supporting this, such as personalization of services and intelligent matchmaking. In this research, a flexible, personalized Rule-based Web Services System is proposed to address these problems and to enable efficient search, discovery and construction across general Web documents and Semantic Web documents. This system performs matchmaking among service requesters', service providers' and users' preferences using a Rule-based Search Method, and subsequently ranks the search results. A prototype of efficient Web Services search and construction for the suggested system is developed based on the current work.

  20. BioData: a national aquatic bioassessment database

    USGS Publications Warehouse

    MacCoy, Dorene

    2011-01-01

    BioData is a U.S. Geological Survey (USGS) web-enabled database that for the first time provides for the capture, curation, integration, and delivery of bioassessment data collected by local, regional, and national USGS projects. BioData offers field biologists advanced capabilities for entering, editing, and reviewing the macroinvertebrate, algae, fish, and supporting habitat data from rivers and streams. It offers data archival and curation capabilities that protect and maintain data for the long term. BioData provides the Federal, State, and local governments, as well as the scientific community, resource managers, the private sector, and the public with easy access to tens of thousands of samples collected nationwide from thousands of stream and river sites. BioData also provides the USGS with centralized data storage for delivering data to other systems and applications through automated web services. BioData allows users to combine data sets of known quality from different projects in various locations over time. It provides a nationally aggregated database that lets users leverage data from many independent projects in a way that, until now, was not feasible at this scale. For example, from 1991 to 2011, the USGS Idaho Water Science Center collected more than 816 bioassessment samples from 63 sites for the National Water Quality Assessment (NAWQA) Program and more than 477 samples from 39 sites for a cooperative USGS and State of Idaho Statewide Water Quality Network (fig. 1). Using BioData, 20 years of samples collected for both of these projects can be combined for analysis. BioData delivers all of the data using current taxonomic nomenclature, thus relieving users of the difficult and time-consuming task of harmonizing taxonomy among samples collected during different time periods. Fish data are reported using Integrated Taxonomic Information System (ITIS) Taxonomic Serial Numbers (TSNs). A simple web-data input interface and a self-guided, public data-retrieval website provide access to bioassessment data. BioData currently accepts data collected using two national protocols: (1) NAWQA and (2) the U.S. Environmental Protection Agency (USEPA) National Rivers and Streams Assessment (NRSA). Additional collection protocols are planned for future versions.

  1. Distributed spatial information integration based on web service

    NASA Astrophysics Data System (ADS)

    Tong, Hengjian; Zhang, Yun; Shao, Zhenfeng

    2008-10-01

    Spatial information systems and spatial information in different geographic locations usually belong to different organizations. They are distributed and often heterogeneous and independent from each other. As a result, many isolated spatial information islands are formed, reducing the efficiency of information utilization. In order to address this issue, we present a method for effective spatial information integration based on web services. The method applies asynchronous invocation and dynamic invocation of web services to implement distributed, parallel execution of web map services. All isolated information islands are connected by the web service dispatcher and its registration database to form a uniform collaborative system. According to the web service registration database, the dispatcher can dynamically invoke each web map service through an asynchronous delegating mechanism, so all of the web map services can be executed at the same time. When each web map service is done, an image is returned to the dispatcher. After all of the web services are done, all images are transparently overlaid together in the dispatcher. Thus, users can browse and analyze the integrated spatial information. Experiments demonstrate that the utilization rate of spatial information resources is significantly raised through the proposed method of distributed spatial information integration.
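
    A minimal sketch of the dispatcher pattern described above, using asyncio to invoke several web map services concurrently and Pillow to overlay the returned images. The GetMap URLs are placeholders, and a real overlay also assumes the layers share size and projection.

    ```python
    # Sketch: invoke several web map services concurrently, then overlay the
    # returned images. The GetMap URLs are placeholders; layers must share
    # the same size and projection for the overlay to be meaningful.
    import asyncio
    import io

    import aiohttp
    from PIL import Image

    URLS = [
        "https://maps-a.example.org/wms?REQUEST=GetMap&FORMAT=image/png&TRANSPARENT=TRUE",
        "https://maps-b.example.org/wms?REQUEST=GetMap&FORMAT=image/png&TRANSPARENT=TRUE",
    ]

    async def fetch(session: aiohttp.ClientSession, url: str) -> bytes:
        async with session.get(url) as resp:
            return await resp.read()

    async def main() -> None:
        async with aiohttp.ClientSession() as session:
            blobs = await asyncio.gather(*(fetch(session, u) for u in URLS))
        layers = [Image.open(io.BytesIO(b)).convert("RGBA") for b in blobs]
        combined = layers[0]
        for layer in layers[1:]:
            combined = Image.alpha_composite(combined, layer)  # transparent overlay
        combined.save("integrated_map.png")

    asyncio.run(main())
    ```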

  3. Cervical Spine Clearance in Pediatric Trauma Centers: The Need for Standardization and an Evidence-based Protocol.

    PubMed

    Pannu, Gurpal S; Shah, Mitesh P; Herman, Marty J

    Cervical spine clearance in the pediatric trauma patient represents a particularly challenging task. Unfortunately, standardized clearance protocols for pediatric cervical clearance are poorly reported in the literature, and imaging recommendations demonstrate considerable variability. With the use of a web-based survey, this study aims to define the methods utilized by pediatric trauma centers throughout North America. Specific attention was given to the identification of personnel responsible for cervical spine care, the diagnostic imaging modalities used, and the presence or absence of a written pediatric cervical spine clearance protocol. A 10-question electronic survey was given to members of the newly formed Pediatric Cervical Spine Study Group, all of whom are active POSNA members. The survey was submitted via the online service SurveyMonkey (https://www.surveymonkey.com/r/7NVVQZR). The survey assessed the respondent's institution demographics, such as trauma level and the services primarily responsible for consultation and operative management of cervical spine injuries. In addition, respondents were asked to identify the protocols and primary imaging modality used for cervical spine clearance. Finally, respondents were asked if their institution had a documented cervical spine clearance protocol. Of the 25 separate institutions evaluated, 21 were designated as level 1 trauma centers. Considerable variation was reported with regard to the primary service responsible for cervical spine clearance. General Surgery/Trauma (44%) is most commonly the primary service, followed by a rotating schedule (33%), Neurosurgery (11%), and Orthopaedic Surgery (8%). Spine consults tend to be seen most commonly by a rotating schedule of Orthopaedic Surgery and Neurosurgery. A plurality of responding institutions (46%) use computed tomographic imaging as the primary imaging modality, whereas 42% of hospitals primarily use x-ray. The remaining institutions reported using a combination of x-ray and computed tomographic imaging. Only 46% of institutions utilize a written, standardized pediatric cervical spine clearance protocol. This study demonstrates a striking variability in the personnel, imaging modalities and, most importantly, standardized protocols used in the evaluation of the pediatric trauma patient with a potential cervical spine injury. Cervical spine clearance protocols have been shown to decrease the incidence of missed injuries, minimize excessive radiation exposure, decrease the time to collar removal, and lower overall associated costs. It is our opinion that development of a task force or multicenter research protocol that incorporates existing evidence-based literature is the next best step in improving the care of children with cervical spine injuries. Level of evidence: Level 4 (economic and decision analyses).

  4. Providing Multi-Page Data Extraction Services with XWRAPComposer

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Liu, Ling; Zhang, Jianjun; Han, Wei

    2008-04-30

    Dynamic Web data sources – sometimes known collectively as the Deep Web – increase the utility of the Web by providing intuitive access to data repositories anywhere that Web access is available. Deep Web services provide access to real-time information, like entertainment event listings, or present a Web interface to large databases or other data repositories. Recent studies suggest that the size and growth rate of the dynamic Web greatly exceed that of the static Web, yet dynamic content is often ignored by existing search engine indexers owing to the technical challenges that arise when attempting to search the Deep Web. To address these challenges, we present DYNABOT, a service-centric crawler for discovering and clustering Deep Web sources offering dynamic content. DYNABOT has three unique characteristics. First, DYNABOT utilizes a service class model of the Web implemented through the construction of service class descriptions (SCDs). Second, DYNABOT employs a modular, self-tuning system architecture for focused crawling of the Deep Web using service class descriptions. Third, DYNABOT incorporates methods and algorithms for efficient probing of the Deep Web and for discovering and clustering Deep Web sources and services through SCD-based service matching analysis. Our experimental results demonstrate the effectiveness of the service class discovery, probing, and matching algorithms and suggest techniques for efficiently managing service discovery in the face of the immense scale of the Deep Web.

   5. An Automated End-To-End Multi-Agent QoS Based Architecture for Selection of Geospatial Web Services

    NASA Astrophysics Data System (ADS)

    Shah, M.; Verma, Y.; Nandakumar, R.

    2012-07-01

    Over the past decade, Service-Oriented Architecture (SOA) and Web services have gained wide popularity and acceptance from researchers and industries all over the world. SOA makes it easy to build business applications with common services, and it provides benefits such as reduced integration expense, better asset reuse, higher business agility, and reduced business risk. Building a framework for acquiring useful geospatial information for potential users is a crucial problem faced by the GIS domain, and geospatial Web services solve this problem. With the help of web service technology, geospatial web services can provide useful geospatial information to potential users in a better way than a traditional geographic information system (GIS). A geospatial Web service is a modular application designed to enable the discovery, access, and chaining of geospatial information and services across the web, which are often both computation- and data-intensive and involve diverse sources of data and complex processing functions. With the proliferation of web services published over the internet, multiple web services may provide similar functionality, but with different non-functional properties. Thus, Quality of Service (QoS) offers a metric to differentiate the services and their service providers. In a quality-driven selection of web services, it is important to consider the non-functional properties of the web service so as to satisfy the constraints or requirements of the end users. The main intent of this paper is to build an automated end-to-end multi-agent based solution that provides the best-fit web service to the service requester based on QoS.
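
    A toy sketch of the QoS-driven selection step: each candidate's attributes are min-max normalized and combined into a weighted score, with lower-is-better attributes given negative weights. The attribute names, weights, and values are invented for illustration, not taken from the paper.

    ```python
    # Toy sketch: rank functionally equivalent services by weighted QoS score.
    # Attribute names, weights, and values are invented.
    candidates = {
        "wms_a": {"response_ms": 120, "availability": 0.999, "cost": 0.02},
        "wms_b": {"response_ms": 480, "availability": 0.990, "cost": 0.00},
        "wms_c": {"response_ms": 200, "availability": 0.995, "cost": 0.01},
    }
    # Negative weight = lower is better; positive weight = higher is better.
    weights = {"response_ms": -0.5, "availability": 0.4, "cost": -0.1}

    def normalized(attr: str, value: float) -> float:
        vals = [qos[attr] for qos in candidates.values()]
        lo, hi = min(vals), max(vals)
        return 0.0 if hi == lo else (value - lo) / (hi - lo)

    def score(qos: dict) -> float:
        return sum(w * normalized(attr, qos[attr]) for attr, w in weights.items())

    best = max(candidates, key=lambda name: score(candidates[name]))
    print("best-fit service:", best)
    ```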

  6. QBCov: A Linked Data interface for Discrete Global Grid Systems, a new approach to delivering coverage data on the web

    NASA Astrophysics Data System (ADS)

    Zhang, Z.; Toyer, S.; Brizhinev, D.; Ledger, M.; Taylor, K.; Purss, M. B. J.

    2016-12-01

    We are witnessing a rapid proliferation of geoscientific and geospatial data from an increasing variety of sensors and sensor networks. These data present great opportunities to resolve cross-disciplinary problems. However, working with them often requires an understanding of file formats and protocols seldom used outside of scientific computing, potentially limiting the data's value to other disciplines. In this paper, we present a new approach to serving satellite coverage data on the web, which improves ease of access using the principles of linked data. Linked data adapts the concepts and protocols of the human-readable web to machine-readable data; the number of developers familiar with web technologies makes linked data a natural choice for bringing coverages to a wider audience. Our approach to using linked data also makes it possible to efficiently service high-level SPARQL queries: for example, "Retrieve all Landsat ETM+ observations of San Francisco between July and August 2016" can easily be encoded in a single query. We validate the new approach, which we call QBCov, with a reference implementation of the entire stack, including a simple web-based client for interacting with Landsat observations. In addition to demonstrating the utility of linked data for publishing coverages, we investigate the heretofore unexplored relationship between Discrete Global Grid Systems (DGGS) and linked data. Our conclusions are informed by the aforementioned reference implementation of QBCov, which is backed by a hierarchical file format designed around the rHEALPix DGGS. Not only does the choice of a DGGS-based representation provide an efficient mechanism for accessing large coverages at multiple scales, but the ability of DGGS to produce persistent, unique identifiers for spatial regions is especially valuable in a linked data context. This suggests that DGGS has an important role to play in creating sustainable and scalable linked data infrastructures. QBCov is being developed as a contribution to the Spatial Data on the Web working group, a joint activity of the Open Geospatial Consortium and the World Wide Web Consortium.
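
    The Landsat example above translates naturally into SPARQL. The hedged sketch below shows one possible encoding using SPARQLWrapper; the endpoint URL and the eo: vocabulary are invented for illustration, since QBCov's actual ontology is not reproduced here.

    ```python
    # Sketch: one possible SPARQL encoding of the Landsat query above.
    # Endpoint URL and the eo: vocabulary are invented.
    from SPARQLWrapper import SPARQLWrapper, JSON

    sparql = SPARQLWrapper("https://qbcov.example.org/sparql")
    sparql.setQuery("""
    PREFIX xsd: <http://www.w3.org/2001/XMLSchema#>
    PREFIX eo:  <http://example.org/ontology/eo#>
    SELECT ?obs WHERE {
      ?obs eo:sensor      "Landsat ETM+" ;
           eo:coversPlace "San Francisco" ;
           eo:acquiredAt  ?t .
      FILTER (?t >= "2016-07-01T00:00:00Z"^^xsd:dateTime &&
              ?t <  "2016-09-01T00:00:00Z"^^xsd:dateTime)
    }
    """)
    sparql.setReturnFormat(JSON)
    for row in sparql.query().convert()["results"]["bindings"]:
        print(row["obs"]["value"])
    ```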

  7. Concept of a spatial data infrastructure for web-mapping, processing and service provision for geo-hazards

    NASA Astrophysics Data System (ADS)

    Weinke, Elisabeth; Hölbling, Daniel; Albrecht, Florian; Friedl, Barbara

    2017-04-01

    Geo-hazards and their effects are distributed geographically over wide regions. Effective mapping and monitoring are essential for hazard assessment and mitigation, and are often best achieved using satellite imagery and new object-based image analysis approaches to identify and delineate geo-hazard objects (landslides, floods, forest fires, storm damage, etc.). At the moment, several local and national databases and platforms provide and publish data on different types of geo-hazards, as well as web-based risk maps and decision support systems. The European Commission also implemented the Copernicus Emergency Management Service (EMS) in 2015, which publishes information about natural and man-made disasters and risks. Currently, however, no platform for landslides or geo-hazards as such exists that enables the integration of the user in the mapping and monitoring process. In this study we introduce the concept of a spatial data infrastructure for object delineation, web processing and service provision of landslide information, with the focus on user interaction in all processes. A first prototype for the processing and mapping of landslides in Austria and Italy has been developed within the project Land@Slide, funded by the Austrian Research Promotion Agency FFG in the Austrian Space Applications Program ASAP. The spatial data infrastructure and its services for the mapping, processing and analysis of landslides can be extended to other regions and to all types of geo-hazards for analysis and delineation based on Earth Observation (EO) data. The architecture of the first prototypical spatial data infrastructure comprises four main areas of technical components. The data tier consists of a file storage system and the spatial data catalogue for the management of EO data and other geospatial data on geo-hazards, as well as descriptions and protocols for data processing and analysis. An interface extending data integration to external sources (e.g. Sentinel-2 data) is planned to enable rapid mapping. The server tier consists of Java-based web and GIS servers. Sub-services and main services form the service tier. Sub-services include, for example, map services, feature editing services, geometry services, geoprocessing services and metadata services. For (meta)data provision and to support data interoperability, OGC web standards and a REST interface are used. Four central main services have been designed and developed: (1) a mapping service (including image segmentation and classification approaches), (2) a monitoring service to monitor changes over time, (3) a validation service to analyze landslide delineations from different sources, and (4) an infrastructure service to identify affected infrastructure. The main services use and combine parts of the sub-services. Furthermore, a series of client applications based on new technology standards makes use of the data and services offered by the spatial data infrastructure. Next steps include extending the current spatial data infrastructure to other areas and geo-hazard types, in order to develop a spatial data infrastructure that can assist targeted mapping and monitoring of geo-hazards in a global context.

  8. MedlinePlus Connect: Web Service

    MedlinePlus

    ... https://medlineplus.gov/connect/service.html MedlinePlus Connect: Web Service To use the sharing features on this ... if you implement MedlinePlus Connect by contacting us . Web Service Overview The parameters for the Web service ...

  9. Focused Crawling of the Deep Web Using Service Class Descriptions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rocco, D; Liu, L; Critchlow, T

    2004-06-21

    Dynamic Web data sources--sometimes known collectively as the Deep Web--increase the utility of the Web by providing intuitive access to data repositories anywhere that Web access is available. Deep Web services provide access to real-time information, like entertainment event listings, or present a Web interface to large databases or other data repositories. Recent studies suggest that the size and growth rate of the dynamic Web greatly exceed that of the static Web, yet dynamic content is often ignored by existing search engine indexers owing to the technical challenges that arise when attempting to search the Deep Web. To address these challenges, we present DynaBot, a service-centric crawler for discovering and clustering Deep Web sources offering dynamic content. DynaBot has three unique characteristics. First, DynaBot utilizes a service class model of the Web implemented through the construction of service class descriptions (SCDs). Second, DynaBot employs a modular, self-tuning system architecture for focused crawling of the Deep Web using service class descriptions. Third, DynaBot incorporates methods and algorithms for efficient probing of the Deep Web and for discovering and clustering Deep Web sources and services through SCD-based service matching analysis. Our experimental results demonstrate the effectiveness of the service class discovery, probing, and matching algorithms and suggest techniques for efficiently managing service discovery in the face of the immense scale of the Deep Web.

  10. Provenance-Based Approaches to Semantic Web Service Discovery and Usage

    ERIC Educational Resources Information Center

    Narock, Thomas William

    2012-01-01

    The World Wide Web Consortium defines a Web Service as "a software system designed to support interoperable machine-to-machine interaction over a network." Web Services have become increasingly important both within and across organizational boundaries. With the recent advent of the Semantic Web, web services have evolved into semantic…

  11. A web-based intervention to promote applications for rehabilitation: a study protocol for a randomized controlled trial.

    PubMed

    Spanier, Katja; Streibelt, Marco; Ünalan, Firat; Bethge, Matthias

    2015-09-29

    The German welfare system follows the principle of "rehabilitation rather than pension," but more than half of all disability pensioners did not utilize medical rehabilitation before their early retirement. A major barrier is the application procedure. Lack of information about the opportunity to utilize rehabilitation services restricts the chance to improve work ability and to prevent health-related early retirement through rehabilitation programs. The establishment of new access paths to medical rehabilitation services was, therefore, identified as a major challenge for rehabilitation research in a recent expert report. Thus, a web-based information guide was developed to support the application for a medical rehabilitation program. The development of the web-based information guide was based on the health action process approach. Four modules were established. Three modules support forming an intention by strengthening risk perception (module 1), positive outcome expectancies (module 2) and self-efficacy (module 3). A fourth module aims at the realization of actual behavior by offering instructions on how to plan and push the application process. The study on the effectiveness of the web-based information guide will be performed as a randomized controlled trial. Persons aged 40 to 59 years with prior sick leave benefits during the preceding year will be included. A sample of 16,000 persons will be randomly drawn from the registers of 3 pension insurance agencies. These persons will receive a questionnaire to determine baseline characteristics. Respondents to this first survey will be randomly allocated either to the intervention or the control group. Both study groups will then receive letters with general information about rehabilitation. The intervention group will additionally receive a link to the web-based information guide. After 1 year, a second survey will be conducted. Additionally, administrative data will be used to determine whether participants apply for rehabilitation and finally start a rehabilitation program. The primary outcomes are the proportions of applied-for and utilized medical rehabilitation services. Secondary outcomes are cognitions on rehabilitation, self-rated work ability, health-related quality of life and perceived disability, as well as days with sick leave benefits and days of regular employment. The randomized controlled trial will provide the highest-ranked evidence to clarify whether theory-driven web-based information supports access to rehabilitation services for people with prior sickness benefits. German Clinical Trials Register (Identifier: DRKS00005658, 16 January 2014).

  12. OpenFlyData: an exemplar data web integrating gene expression data on the fruit fly Drosophila melanogaster.

    PubMed

    Miles, Alistair; Zhao, Jun; Klyne, Graham; White-Cooper, Helen; Shotton, David

    2010-10-01

    Integrating heterogeneous data across distributed sources is a major requirement for in silico bioinformatics supporting translational research. For example, genome-scale data on patterns of gene expression in the fruit fly Drosophila melanogaster are widely used in functional genomic studies in many organisms to inform candidate gene selection and validate experimental results. However, current data integration solutions tend to be heavyweight and require significant initial and ongoing investment of effort. Development of a common Web-based data integration infrastructure (a.k.a. data web), using Semantic Web standards, promises to alleviate these difficulties, but little is known about the feasibility, costs, risks or practical means of migrating to such an infrastructure. We describe the development of OpenFlyData, a proof-of-concept system integrating gene expression data on D. melanogaster, combining Semantic Web standards with lightweight approaches to Web programming based on Web 2.0 design patterns. To support researchers designing and validating functional genomic studies, OpenFlyData includes user-facing search applications providing intuitive access to and comparison of gene expression data from FlyAtlas, the BDGP in situ database, and FlyTED, using data from FlyBase to expand and disambiguate gene names. OpenFlyData's services are also openly accessible and available for reuse by other bioinformaticians and application developers. Semi-automated methods and tools were developed to support the labour- and knowledge-intensive tasks involved in deploying SPARQL services. These include methods for generating ontologies and relational-to-RDF mappings for relational databases, which we illustrate using the FlyBase Chado database schema, and methods for mapping gene identifiers between databases. The advantages of using Semantic Web standards for biomedical data integration are discussed, as are open issues. In particular, although the performance of open-source SPARQL implementations is sufficient to query gene expression data directly from user-facing applications such as Web-based data fusions (a.k.a. mashups), we found open SPARQL endpoints to be vulnerable to denial-of-service-type problems, which must be mitigated to ensure the reliability of services based on this standard. These results are relevant to data integration activities in translational bioinformatics. The gene expression search applications and SPARQL endpoints developed for OpenFlyData are deployed at http://openflydata.org. FlyUI, a library of JavaScript widgets providing re-usable user-interface components for Drosophila gene expression data, is available at http://flyui.googlecode.com. Software and ontologies to support transformation of data from FlyBase, FlyAtlas, BDGP and FlyTED to RDF are available at http://openflydata.googlecode.com. SPARQLite, an implementation of the SPARQL protocol, is available at http://sparqlite.googlecode.com. All software is provided under the GPL version 3 open source license.

  13. WEB-IS2: Next Generation Web Services Using Amira Visualization Package

    NASA Astrophysics Data System (ADS)

    Yang, X.; Wang, Y.; Bollig, E. F.; Kadlec, B. J.; Garbow, Z. A.; Yuen, D. A.; Erlebacher, G.

    2003-12-01

    Amira (www.amiravis.com) is a powerful 3-D visualization package that has been employed recently by the science and engineering communities to gain insight into their data. We present a new web-based interface to Amira, packaged in a Java applet. We have developed a module called WEB-IS/Amira (WEB-IS2), which provides web-based access to Amira. This tool allows earth scientists to manipulate Amira controls remotely and to analyze, render and view large datasets over the internet, without regard for time or location. This could have important ramifications for GRID computing. The design of our implementation will soon allow multiple users to visually collaborate by manipulating a single dataset through a variety of client devices. These clients will only require a browser capable of displaying Java applets. As the deluge of data continues, innovative solutions that maximize ease of use without sacrificing efficiency or flexibility will continue to gain in importance, particularly in the Earth sciences. Major initiatives, such as Earthscope (http://www.earthscope.org), which will generate at least a terabyte of data daily, stand to profit enormously from a system such as WEB-IS/Amira (WEB-IS2). We discuss our use of SOAP (Livingston, D., Advanced SOAP for Web development, Prentice Hall, 2002), a novel 2-way communication protocol, as a means of providing remote commands and efficient point-to-point transfer of binary image data. We will present our initial experiences with the use of Naradabrokering (www.naradabrokering.org) as a means to decouple clients and servers. Information is submitted to the system as a published item, while it is retrieved through a subscription mechanism, via what is known as "topics". These topic headers, their contents, and the list of subscribers are automatically tracked by Naradabrokering. This novel approach promises a high degree of fault tolerance, flexibility with respect to client diversity, and language independence for the services (Erlebacher, G., Yuen, D.A., and F. Dubuffet, Current trends and demands in visualization in the geosciences, Electron. Geosciences, 4, 2001).

  14. Virtual Solar Observatory Distributed Query Construction

    NASA Technical Reports Server (NTRS)

    Gurman, J. B.; Dimitoglou, G.; Bogart, R.; Davey, A.; Hill, F.; Martens, P.

    2003-01-01

    Through a prototype implementation (Tian et al., this meeting) the VSO has already demonstrated the capability of unifying geographically distributed data sources following the Web Services paradigm and utilizing mechanisms such as the Simple Object Access Protocol (SOAP). So far, four participating sites (Stanford, Montana State University, National Solar Observatory and the Solar Data Analysis Center) permit Web-accessible, time-based searches that allow browse access to a number of diverse data sets. Our latest work includes the extension of the simple, time-based queries to include numerous other searchable observation parameters. For VSO users, this extended functionality enables more refined searches. For the VSO, it is a proof of concept that more complex, distributed queries can be effectively constructed and that results from heterogeneous, remote sources can be synthesized and presented to users as a single, virtual data product.

  15. Developing Federated Services within Seismology: IRIS' involvement in the CoopEUS Project

    NASA Astrophysics Data System (ADS)

    Ahern, T. K.; Trabant, C. M.; Stults, M.

    2014-12-01

    As a founding member of the CoopEUS initiative, IRIS Data Services has partnered with five data centers in Europe and UC Berkeley (NCEDC) in the US to implement internationally standardized web services for accessing seismological data using identical methodologies. The International Federation of Digital Seismograph Networks (FDSN) holds commission status within IASPEI/IUGG and as such is the international body that governs data exchange formats and access protocols within seismology. The CoopEUS project involves IRIS and UNAVCO as part of the EarthScope project, and the European collaborators are all members of the European Plate Observing System (EPOS). CoopEUS includes one work package that aims to coordinate data access between EarthScope and EPOS facilities. IRIS has worked with its partners in the FDSN to develop and adopt three key international service standards within seismology: 1) fdsn-dataselect, a service that returns time series data in a variety of standard formats; 2) fdsn-station, a service that returns related metadata about a seismic station in StationXML format; and 3) fdsn-event, a service that returns information about earthquakes and other seismic events in QuakeML format. Currently, the five European data centers supporting these services are the ORFEUS Data Centre in the Netherlands, the GFZ German Research Centre for Geosciences in Potsdam, Germany, ETH Zurich in Switzerland, INGV in Rome, Italy, and the RESIF Data Centre in Grenoble, France. These seven centers can all be accessed using standardized web services with identical service calls, returning results in standardized ways. IRIS is developing an IRIS Federator that will allow a client to seamlessly access information across the federated centers. Details and current status of the IRIS Federator will be presented.
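
    Because the three FDSN services are standardized, a single client library can address any federated center with identical calls. The sketch below uses ObsPy, one widely used client (not mandated by the FDSN), against the IRIS node; swapping the node key is the only change needed to address another center.

    ```python
    # Sketch: identical FDSN web service calls work at any federated center.
    # ObsPy is one widely used client; swap "IRIS" for "ORFEUS", "ETH", etc.
    from obspy import UTCDateTime
    from obspy.clients.fdsn import Client

    client = Client("IRIS")
    t0 = UTCDateTime("2014-01-01T00:00:00")

    # fdsn-dataselect: one hour of waveform data.
    stream = client.get_waveforms(network="IU", station="ANMO", location="00",
                                  channel="BHZ", starttime=t0, endtime=t0 + 3600)
    print(stream)

    # fdsn-event: QuakeML catalog of large events on that day.
    catalog = client.get_events(starttime=t0, endtime=t0 + 86400, minmagnitude=6.0)
    print(catalog)
    ```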

  16. The NOAO Data Lab PHAT Photometry Database

    NASA Astrophysics Data System (ADS)

    Olsen, Knut; Williams, Ben; Fitzpatrick, Michael; PHAT Team

    2018-01-01

    We present a database containing both the combined photometric object catalog and the single epoch measurements from the Panchromatic Hubble Andromeda Treasury (PHAT). This database is hosted by the NOAO Data Lab (http://datalab.noao.edu), and as such exposes a number of data services to the PHAT photometry, including access through a Table Access Protocol (TAP) service, direct PostgreSQL queries, web-based and programmatic query interfaces, remote storage space for personal database tables and files, and a JupyterHub-based Notebook analysis environment, as well as image access through a Simple Image Access (SIA) service. We show how the Data Lab database and Jupyter Notebook environment allow for straightforward and efficient analyses of PHAT catalog data, including maps of object density, depth, and color, extraction of light curves of variable objects, and proper motion exploration.

  17. A Role for Semantic Web Technologies in Patient Record Data Collection

    NASA Astrophysics Data System (ADS)

    Ogbuji, Chimezie

    Business Process Management Systems (BPMS) are a component of the stack of Web standards that comprise Service Oriented Architecture (SOA). Such systems are representative of the architectural framework of modern information systems built in an enterprise intranet and are in contrast to systems built for deployment on the larger World Wide Web. The REST architectural style is an emerging style for building loosely coupled systems based purely on the native HTTP protocol. It is a coordinated set of architectural constraints with a goal to minimize latency, maximize the independence and scalability of distributed components, and facilitate the use of intermediary processors. Within the development community for distributed, Web-based systems, there has been a debate regarding the merits of both approaches. In some cases, there are legitimate concerns about the differences in both architectural styles. In other cases, the contention seems to be based on concerns that are marginal at best. In this chapter, we will attempt to contribute to this debate by focusing on a specific, deployed use case that emphasizes the role of the Semantic Web, a simple Web application architecture that leverages the use of declarative XML processing, and the needs of a workflow system. The use case involves orchestrating a work process associated with the data entry of structured patient record content into a research registry at the Cleveland Clinic's Clinical Investigation department in the Heart and Vascular Institute.

  18. WHO Mental Health Gap Action Programme (mhGAP) Intervention Guide: a systematic review of evidence from low and middle-income countries.

    PubMed

    Keynejad, Roxanne C; Dua, Tarun; Barbui, Corrado; Thornicroft, Graham

    2018-02-01

    Despite mental, neurological and substance use (MNS) disorders being highly prevalent, there is a worldwide gap between service need and provision. WHO launched its Mental Health Gap Action Programme (mhGAP) in 2008, and the Intervention Guide (mhGAP-IG) in 2010. mhGAP-IG provides evidence-based guidance and tools for assessment and integrated management of priority MNS disorders in low and middle-income countries (LMICs), using clinical decision-making protocols. It targets a non-specialised primary healthcare audience, but has also been used by ministries, non-governmental organisations and academics for mental health service scale-up in 90 countries. This systematic review aimed to identify the evidence to date for mhGAP-IG implementation and evaluation in LMICs. We searched the MEDLINE, Embase, PsycINFO, Web of Knowledge/Web of Science, Scopus, CINAHL, LILACS, SciELO/Web of Science, Cochrane and PubMed databases and Google Scholar for studies reporting evidence, experience or evaluation of mhGAP-IG in LMICs, in any language. Data were extracted from included papers, but heterogeneity prevented meta-analysis. Thirty-three included studies reported 15 training courses, 9 clinical implementations, 3 country contextualisations, 3 economic models, 2 uses as control interventions and 1 use to develop a rating scale. Our review identified the importance of detailed reports of contextual challenges in the field, alongside detailed protocols, qualitative studies and randomised controlled trials. The mhGAP-IG literature is substantial relative to other published evaluations of clinical practice guidelines: an important contribution to a neglected field.

  19. Process model-based atomic service discovery and composition of composite semantic web services using web ontology language for services (OWL-S)

    NASA Astrophysics Data System (ADS)

    Paulraj, D.; Swamynathan, S.; Madhaiyan, M.

    2012-11-01

    Web Service composition has become indispensable, as a single web service cannot satisfy complex functional requirements. Composition of services has received much interest as a means to support business-to-business (B2B) or enterprise application integration. An important component of service composition is the discovery of relevant services. In Semantic Web Services (SWS), service discovery is generally achieved by using the service profile of the Web Ontology Language for Services (OWL-S). The profile of the service is a derived and concise description, but not a functional part of the service. The information contained in the service profile is sufficient for atomic service discovery, but it is not sufficient for the discovery of composite semantic web services (CSWS). The purpose of this article is two-fold: first, to prove that the process model is a better choice than the service profile for service discovery; and second, to facilitate the composition of inter-organisational CSWS by proposing a new composition method which uses a process ontology. The proposed service composition approach uses an algorithm which performs a fine-grained match at the level of the atomic process, rather than at the level of the entire service, in a composite semantic web service. Many works carried out in this area have proposed solutions only for the composition of atomic services; this article proposes a solution for the composition of composite semantic web services.

  20. Automatic geospatial information Web service composition based on ontology interface matching

    NASA Astrophysics Data System (ADS)

    Xu, Xianbin; Wu, Qunyong; Wang, Qinmin

    2008-10-01

    With Web services technology, the functions of WebGIS can be presented as a kind of geospatial information service, helping to overcome the limitations of the information-isolated situation in the geospatial information sharing field. Geospatial information Web service composition, which combines outsourced services working in tandem to offer a value-added service, thus plays the key role in fully taking advantage of geospatial information services. This paper proposes an automatic geospatial information web service composition algorithm that employs the ontology dictionary WordNet to analyze semantic distances among service interfaces. By matching input/output parameters and the semantic meaning of pairs of service interfaces, a geospatial information web service chain can be created from a number of candidate services. A practical application of the algorithm is also presented, and its results show the feasibility of the algorithm and great promise for the emerging demand for geospatial information web service composition.
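
    The core measure here is a semantic distance between interface terms. A minimal sketch of WordNet-based matching follows, using NLTK rather than the authors' own implementation; the parameter names are invented for illustration.

    ```python
    # Minimal sketch: WordNet-based semantic closeness of interface parameters.
    # Requires NLTK plus a one-time nltk.download("wordnet").
    from nltk.corpus import wordnet as wn

    def closeness(word_a: str, word_b: str) -> float:
        """Best path similarity over all noun senses (0..1, higher = closer)."""
        scores = [s1.path_similarity(s2)
                  for s1 in wn.synsets(word_a, pos=wn.NOUN)
                  for s2 in wn.synsets(word_b, pos=wn.NOUN)]
        scores = [s for s in scores if s is not None]
        return max(scores, default=0.0)

    # Could service A's output "map" feed service B's input "chart"?
    print(closeness("map", "chart"))        # relatively close
    print(closeness("map", "temperature"))  # distant: weak chain candidate
    ```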

  1. A New Look at Data Usage by Using Metadata Attributes as Indicators of Data Quality

    NASA Astrophysics Data System (ADS)

    Won, Y. I.; Wanchoo, L.; Behnke, J.

    2016-12-01

    NASA's Earth Observing System Data and Information System (EOSDIS) stores and distributes data from EOS satellites, as well as ancillary, airborne, in-situ, and socio-economic data. Twelve EOSDIS data centers support different scientific disciplines by providing products and services tailored to specific science communities. Although discipline oriented, these data centers provide the common data management functions of ingest, archive and distribution, as well as documentation of their data and services on their websites. The Earth Science Data and Information System (ESDIS) Project collects metrics from the EOSDIS data centers on a daily basis through a tool called the ESDIS Metrics System (EMS); these metrics are used in this study. The implementation of the Earthdata Login, formerly known as the User Registration System (URS), across the various NASA data centers provides the EMS with additional information about users obtaining data products from EOSDIS data centers. These additional user attributes collected by the Earthdata Login, such as the user's primary area of study, can augment the understanding of data usage, which in turn can help the EOSDIS program better understand users' needs. This study will review the key metrics (users, distributed volume, and files) in multiple ways to gain an understanding of the significance of the metadata. Characterizing the usability of data by key metadata elements such as discipline and study area will assist in understanding how the users have evolved over time. The data usage pattern based on version numbers may also provide some insight into the level of data quality. In addition, the data metrics for various services such as the Open-source Project for a Network Data Access Protocol (OPeNDAP), Web Map Service (WMS), Web Coverage Service (WCS), and subsets will address how these services have extended the usage of data. Overall, this study will present the usage of data and metadata through metrics analyses and will assist data centers in better supporting the needs of the users.

  2. How Public Is the Web?: Robots, Access, and Scholarly Communication.

    ERIC Educational Resources Information Center

    Snyder, Herbert; Rosenbaum, Howard

    1998-01-01

    Examines the use of Robot Exclusion Protocol (REP) to restrict the access of search engine robots to 10 major United States university Web sites. An analysis of Web site searching and interviews with Web server administrators shows that the decision to use this procedure is largely technical and is typically made by the Web server administrator.…
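
    For reference, checking a site's REP directives needs nothing beyond the Python standard library; the sketch below does so, with the university host and path invented for illustration.

    ```python
    # Sketch: check Robot Exclusion Protocol directives with the standard library.
    # The university host and path are placeholders.
    import urllib.robotparser

    rp = urllib.robotparser.RobotFileParser()
    rp.set_url("https://www.example.edu/robots.txt")
    rp.read()

    for agent in ("*", "Googlebot"):
        allowed = rp.can_fetch(agent, "https://www.example.edu/library/catalog/")
        print(f"{agent!r} may fetch /library/catalog/: {allowed}")
    ```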

  3. Graph-Based Semantic Web Service Composition for Healthcare Data Integration.

    PubMed

    Arch-Int, Ngamnij; Arch-Int, Somjit; Sonsilphong, Suphachoke; Wanchai, Paweena

    2017-01-01

    Within the numerous and heterogeneous web services offered through different sources, automatic web services composition is the most convenient method for building complex business processes that permit invocation of multiple existing atomic services. The current solutions in functional web services composition lack autonomous queries of semantic matches within the parameters of web services, which are necessary in the composition of large-scale related services. In this paper, we propose a graph-based Semantic Web Services composition system consisting of two subsystems: management time and run time. The management-time subsystem is responsible for dependency graph preparation in which a dependency graph of related services is generated automatically according to the proposed semantic matchmaking rules. The run-time subsystem is responsible for discovering the potential web services and nonredundant web services composition of a user's query using a graph-based searching algorithm. The proposed approach was applied to healthcare data integration in different health organizations and was evaluated according to two aspects: execution time measurement and correctness measurement.

  4. Graph-Based Semantic Web Service Composition for Healthcare Data Integration

    PubMed Central

    2017-01-01

    Among the numerous and heterogeneous web services offered by different sources, automatic web service composition is the most convenient method for building complex business processes that invoke multiple existing atomic services. Current solutions for functional web service composition lack automated semantic matching over web service parameters, which is necessary when composing large sets of related services. In this paper, we propose a graph-based Semantic Web Service composition system consisting of two subsystems: management time and run time. The management-time subsystem is responsible for dependency graph preparation, in which a dependency graph of related services is generated automatically according to the proposed semantic matchmaking rules. The run-time subsystem is responsible for discovering potential web services and producing a nonredundant web service composition for a user's query using a graph-based search algorithm. The proposed approach was applied to healthcare data integration across different health organizations and was evaluated according to two aspects: execution time measurement and correctness measurement. PMID:29065602

  5. Design and development of an IoT-based web application for an intelligent remote SCADA system

    NASA Astrophysics Data System (ADS)

    Kao, Kuang-Chi; Chieng, Wei-Hua; Jeng, Shyr-Long

    2018-03-01

    This paper presents the design of an intelligent remote electrical power supervisory control and data acquisition (SCADA) system based on the Internet of Things (IoT), with Internet Information Services (IIS) for setting up web servers, an ASP.NET model-view-controller (MVC) for establishing a remote electrical power monitoring and control system using responsive web design (RWD), and a Microsoft SQL Server as the database. With the web browser connected to the Internet, the sensing data are sent to the client over the TCP/IP protocol, and the responsive design supports mobile devices with different screen sizes. Users can issue instructions immediately without being on site to check conditions, which considerably reduces labor and time costs. The developed system incorporates a remote measuring function using a wireless sensor network and utilizes a visual interface to make the human-machine interface (HMI) more intuitive. Moreover, it contains analog input/output and basic digital input/output that can be applied to a motor driver and an inverter for integration with a remote SCADA system based on IoT, and thus achieve efficient power management.

  6. Reliable Execution Based on CPN and Skyline Optimization for Web Service Composition

    PubMed Central

    Ha, Weitao; Zhang, Guojun

    2013-01-01

    With the development of SOA, complex problems can be solved by combining available individual services and ordering them to best suit the user's requirements. Web service composition is widely used in business environments. Because component web services are inherently autonomous and heterogeneous, it is difficult to predict the behavior of the overall composite service. Therefore, transactional properties and nonfunctional quality of service (QoS) properties are crucial for selecting the web services to take part in the composition. Transactional properties ensure the reliability of the composite Web service, and QoS properties identify the best candidate web services from a set of functionally equivalent services. In this paper we define a Colored Petri Net (CPN) model that incorporates the transactional properties of web services into the composition process. To ensure reliable and correct execution, unfolding processes of the CPN are followed. The execution of a transactional composite Web service (TCWS) is formalized by CPN properties. To identify the best services by QoS properties from the candidate service sets formed in the TCWS-CPN, we use skyline computation to retrieve the dominant Web services. This overcomes the significant information loss incurred when individual scores are reduced to a single overall similarity. We evaluate our approach experimentally using both real and synthetically generated datasets. PMID:23935431
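
    Skyline computation keeps exactly the candidates that are not Pareto-dominated on their QoS vectors. A minimal sketch with invented QoS values, where response time and cost are minimized and reliability is maximized:

        # Candidate services as (response_time_ms, cost, reliability).
        candidates = {
            "s1": (120, 0.05, 0.99),
            "s2": (300, 0.01, 0.95),
            "s3": (150, 0.08, 0.90),
        }

        def dominates(a, b):
            # a dominates b if it is no worse on every dimension
            # and strictly better on at least one.
            no_worse = a[0] <= b[0] and a[1] <= b[1] and a[2] >= b[2]
            strictly = a[0] < b[0] or a[1] < b[1] or a[2] > b[2]
            return no_worse and strictly

        skyline = [n for n, q in candidates.items()
                   if not any(dominates(p, q)
                              for m, p in candidates.items() if m != n)]
        print(skyline)  # s3 is dominated by s1, so ['s1', 's2'] survive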

  7. Reliable execution based on CPN and skyline optimization for Web service composition.

    PubMed

    Chen, Liping; Ha, Weitao; Zhang, Guojun

    2013-01-01

    With the development of SOA, complex problems can be solved by combining available individual services and ordering them to best suit the user's requirements. Web service composition is widely used in business environments. Because component web services are inherently autonomous and heterogeneous, it is difficult to predict the behavior of the overall composite service. Therefore, transactional properties and nonfunctional quality of service (QoS) properties are crucial for selecting the web services to take part in the composition. Transactional properties ensure the reliability of the composite Web service, and QoS properties identify the best candidate web services from a set of functionally equivalent services. In this paper we define a Colored Petri Net (CPN) model that incorporates the transactional properties of web services into the composition process. To ensure reliable and correct execution, unfolding processes of the CPN are followed. The execution of a transactional composite Web service (TCWS) is formalized by CPN properties. To identify the best services by QoS properties from the candidate service sets formed in the TCWS-CPN, we use skyline computation to retrieve the dominant Web services. This overcomes the significant information loss incurred when individual scores are reduced to a single overall similarity. We evaluate our approach experimentally using both real and synthetically generated datasets.

  8. The Use of RESTful Web Services in Medical Informatics and Clinical Research and Its Implementation in Europe.

    PubMed

    Aerts, Jozef

    2017-01-01

    RESTful web services are nowadays the state of the art for business transactions over the Internet. They are, however, not much used in medical informatics and clinical research, especially not in Europe. The objectives were to make an inventory of RESTful web services that can be used in medical informatics and clinical research, including those that can help in patient empowerment in the DACH region and in Europe, and to develop new RESTful web services for use in clinical research and regulatory review. A literature search on available RESTful web services was performed, and new RESTful web services were developed on an application server using the Java language. Most of the web services found originate from institutes and organizations in the USA, whereas no similar web services made available by European organizations could be found. New RESTful web services were developed for LOINC code lookup, for UCUM conversions, and for use with CDISC standards. A comparison is made between "top down" and "bottom up" web services, the latter meant to answer concrete questions immediately. The lack of RESTful web services made available by European organizations in healthcare and medical informatics is striking. RESTful web services may in the near future play a major role in medical informatics and, when localized for German and other European languages, can considerably facilitate patient empowerment. This, however, requires an EU equivalent of the US National Library of Medicine.
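
    The pattern such services follow is a plain HTTP GET returning JSON. The endpoint and parameters below are hypothetical stand-ins, since the abstract does not publish its URLs:

        import requests

        # Hypothetical LOINC-lookup endpoint in the style described above.
        BASE = "https://example.org/rest/loinc"

        resp = requests.get(f"{BASE}/codes", params={"q": "2345-7"}, timeout=10)
        resp.raise_for_status()
        print(resp.json())  # e.g. the name and units for the requested code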

  9. Extending OPeNDAP's Data-Access Protocol to Include Enhanced Pre-Retrieval Operations

    NASA Astrophysics Data System (ADS)

    Fulker, D. W.

    2013-12-01

    We describe plans to extend OPeNDAP's Web-services protocol as a Building Block for NSF's EarthCube initiative. Though some data-access services have offered forms of subset selection for decades, other pre-retrieval operations have been unavailable, in part because their benefits (over equivalent post-retrieval actions) are only now becoming fully evident. This is due in part to rapid growth in the volumes of data that are pertinent to the geosciences, exacerbated by limitations such as Internet speeds and latencies as well as pressures toward data usage on ever-smaller devices. In this context, as recipients of a "Building Blocks" award from the most recent round of EarthCube funding, we are launching the specification and prototype implementation of a new Open Data Services Invocation Protocol (ODSIP), by which clients may invoke a newly rich set of data-acquisition services, ranging from statistical summarization and criteria-driven subsetting to re-gridding/resampling. ODSIP will be an extension to DAP4, the latest version of OPeNDAP's widely used data-access protocol, which underpins a number of open-source, multilingual, client-server systems (offering data access as a Web service), including THREDDS, PyDAP, GrADS, ERDDAP, and FERRET, as well as OPeNDAP's own Hyrax servers. We are motivated by the idea that key parts of EarthCube can be built effectively around clients and servers that employ a common and conceptually rich protocol for data acquisition. This concept extends 'data provision' to include pre-retrieval operations that, even when invoked by remote clients, exhibit the efficiencies of data-proximate computation. Our aim for ODSIP is to embed a largely domain-neutral algebra of server functions that, despite being deliberately compact, can fulfill a broad range of user needs for pre-retrieval operations. To that end, our approach builds upon languages and tools that have proven effective in multi-domain contexts, and we will employ a user-centered design process built around three science scenarios: 1) accelerated visualization/analysis of model outputs on non-rectangular meshes (over coastal North Carolina); 2) dynamic downscaling of climate predictions for regional utility (over Hawaii); and 3) feature-oriented retrievals of satellite imagery (focusing on satellite-derived sea-surface-temperature fronts). These scenarios will test important aspects of the server-function algebra:
    * The Hawaii climate study requires coping with issues of scale on rectangular grids, placing strong emphasis on statistical functions.
    * The east-coast storm-surge study requires irregular grids, thus exploring mathematical challenges that have been addressed in many domains via the GridFields library, which we will employ. We think important classes of geoscience problems in multiple domains, for example those involving discontinuities, are essentially intractable without polygonal meshes.
    * The sea-surface fronts study integrates vector-style features with array-style coverages, thus touching on the kinds of mathematics that arise when mixing Eulerian and Lagrangian frameworks.
    Our presentation will sketch the context for ODSIP, our process for user-centered design, and our hopes for how ODSIP, as an emerging cyberinfrastructure concept for the Geosciences, may serve as a fundamental building block for EarthCube.
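
    In DAP terms, pre-retrieval operations ride along in the request URL as constraint expressions, so subsetting happens server-side. A sketch against a hypothetical Hyrax endpoint, with the URL and variable name invented:

        import requests

        # DAP2-style constraint expression: ask the server for a small
        # hyperslab of the 'sst' array and return it as ASCII.
        url = ("https://example.org/opendap/sst.nc.ascii"
               "?sst[0:1:9][100:1:109][200:1:209]")

        resp = requests.get(url, timeout=30)
        resp.raise_for_status()
        print(resp.text[:400])  # only the requested subset crossed the network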

  10. First Prototype of a Web Map Interface for ESA's Planetary Science Archive (PSA)

    NASA Astrophysics Data System (ADS)

    Manaud, N.; Gonzalez, J.

    2014-04-01

    We present a first prototype of a Web Map Interface that will serve as a proof of concept and design for ESA's future fully web-based Planetary Science Archive (PSA) User Interface. The PSA is ESA's planetary science archiving authority and central repository for all scientific and engineering data returned by ESA's Solar System missions [1]. All data are compliant with NASA's Planetary Data System (PDS) Standards and are accessible through several interfaces [2]: in addition to serving all public data via FTP and the Planetary Data Access Protocol (PDAP), a Java-based User Interface provides advanced search, preview, download, notification and delivery-basket functionality. It allows the user to query and visualise instrument observation footprints using a map-based interface (currently available only for the Mars Express HRSC and OMEGA instruments). During the last decade, the planetary mapping science community has increasingly been adopting Geographic Information System (GIS) tools and standards, originally developed for and used in Earth science. There is an ongoing effort to produce and share cartographic products through Open Geospatial Consortium (OGC) Web Services, or as standalone data sets, so that they can be readily used in existing GIS applications [3,4,5]. Previous studies conducted at ESAC [6,7] have helped identify the needs of Planetary GIS users, and define key areas of improvement for the future Web PSA User Interface. Its web map interface shall provide access to the full geospatial content of the PSA, including (1) observation geometry footprints of all remote sensing instruments, and (2) all georeferenced cartographic products, such as HRSC map-projected data or OMEGA global maps from Mars Express. It shall aim to provide a rich user experience for search and visualisation of this content using modern and interactive web mapping technology. A comprehensive set of built-in context maps from external sources, such as MOLA topography, TES infrared maps or planetary surface nomenclature, provided in both simple cylindrical and polar stereographic projections, shall enhance this user experience. In addition, users should be able to import and export data in commonly used open GIS formats. It is also intended to serve all PSA geospatial data through OGC-compliant Web Services so that they can be captured, visualised and analysed directly from GIS software, along with data from other sources. The following figure illustrates how the PSA web map interface and services shall fit into a typical Planetary GIS user's working environment.
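
    Serving the archive through OGC services means a standard GIS client, or a few lines of scripting, can pull maps directly. A sketch using the owslib package; the endpoint and layer name are invented placeholders for a future PSA service:

        from owslib.wms import WebMapService

        # Hypothetical PSA-style WMS endpoint; any OGC-compliant client
        # could issue the same GetMap request.
        wms = WebMapService("https://example.org/psa/wms", version="1.3.0")

        img = wms.getmap(layers=["mex_hrsc_mosaic"],   # assumed layer name
                         srs="EPSG:4326",
                         bbox=(-180, -90, 180, 90),
                         size=(800, 400),
                         format="image/png")
        with open("mars_map.png", "wb") as f:
            f.write(img.read())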

  11. Web service module for access to g-Lite

    NASA Astrophysics Data System (ADS)

    Goranova, R.; Goranov, G.

    2012-10-01

    G-Lite is a lightweight grid middleware for grid computing installed on all clusters of the European Grid Infrastructure (EGI). The middleware is partially service-oriented and does not provide well-defined Web services for job management. The existing Web services in the environment cannot be directly used by grid users for building service compositions in the EGI. In this article we present a module of well-defined Web services for job management in the EGI. We describe the architecture of the module and the design of the developed Web services. The presented Web services are composable and can participate in service compositions (workflows). An example of usage of the module with tools for service compositions in g-Lite is shown.

  12. Wireless access to a pharmaceutical database: a demonstrator for data driven Wireless Application Protocol (WAP) applications in medical information processing.

    PubMed

    Schacht Hansen, M; Dørup, J

    2001-01-01

    The Wireless Application Protocol technology implemented in newer mobile phones has built-in facilities for handling much of the information processing needed in clinical work. To test a practical approach we ported a relational database of the Danish pharmaceutical catalogue to Wireless Application Protocol using open source freeware at all steps. We used Apache 1.3 web software on a Linux server. Data containing the Danish pharmaceutical catalogue were imported from an ASCII file into a MySQL 3.22.32 database using a Practical Extraction and Report Language script for easy update of the database. Data were distributed in 35 interrelated tables. Each pharmaceutical brand name was given its own card with links to general information about the drug, active substances, contraindications etc. Access was available through 1) browsing therapeutic groups and 2) searching for a brand name. The database interface was programmed in the server-side scripting language PHP3. A free, open source Wireless Application Protocol gateway to a pharmaceutical catalogue was established to allow dial-in access independent of commercial Wireless Application Protocol service providers. The application was tested on the Nokia 7110 and Ericsson R320s cellular phones. We have demonstrated that Wireless Application Protocol-based access to a dynamic clinical database can be established using open source freeware. The project opens perspectives for a further integration of Wireless Application Protocol phone functions in clinical information processing: Global System for Mobile communication telephony for bilateral communication, asynchronous unilateral communication via e-mail and Short Message Service, built-in calculator, calendar, personal organizer, phone number catalogue and Dictaphone function via answering machine technology. An independent Wireless Application Protocol gateway may be placed within hospital firewalls, which may be an advantage with respect to security. However, if Wireless Application Protocol phones are to become effective tools for physicians, special attention must be paid to the limitations of the devices. Input tools of Wireless Application Protocol phones should be improved, for instance by increased use of speech control.

  13. Wireless access to a pharmaceutical database: A demonstrator for data driven Wireless Application Protocol applications in medical information processing

    PubMed Central

    Hansen, Michael Schacht

    2001-01-01

    Background The Wireless Application Protocol technology implemented in newer mobile phones has built-in facilities for handling much of the information processing needed in clinical work. Objectives To test a practical approach we ported a relational database of the Danish pharmaceutical catalogue to Wireless Application Protocol using open source freeware at all steps. Methods We used Apache 1.3 web software on a Linux server. Data containing the Danish pharmaceutical catalogue were imported from an ASCII file into a MySQL 3.22.32 database using a Practical Extraction and Report Language script for easy update of the database. Data were distributed in 35 interrelated tables. Each pharmaceutical brand name was given its own card with links to general information about the drug, active substances, contraindications etc. Access was available through 1) browsing therapeutic groups and 2) searching for a brand name. The database interface was programmed in the server-side scripting language PHP3. Results A free, open source Wireless Application Protocol gateway to a pharmaceutical catalogue was established to allow dial-in access independent of commercial Wireless Application Protocol service providers. The application was tested on the Nokia 7110 and Ericsson R320s cellular phones. Conclusions We have demonstrated that Wireless Application Protocol-based access to a dynamic clinical database can be established using open source freeware. The project opens perspectives for a further integration of Wireless Application Protocol phone functions in clinical information processing: Global System for Mobile communication telephony for bilateral communication, asynchronous unilateral communication via e-mail and Short Message Service, built-in calculator, calendar, personal organizer, phone number catalogue and Dictaphone function via answering machine technology. An independent Wireless Application Protocol gateway may be placed within hospital firewalls, which may be an advantage with respect to security. However, if Wireless Application Protocol phones are to become effective tools for physicians, special attention must be paid to the limitations of the devices. Input tools of Wireless Application Protocol phones should be improved, for instance by increased use of speech control. PMID:11720946

  14. A web-based intervention for health professionals and patients to decrease cardiovascular risk attributable to physical inactivity: development process.

    PubMed

    Sassen, Barbara; Kok, Gerjo; Mesters, Ilse; Crutzen, Rik; Cremers, Anita; Vanhees, Luc

    2012-12-14

    Patients with cardiovascular risk factors can reduce their risk of cardiovascular disease by increasing their physical activity and their physical fitness. According to the guidelines for cardiovascular risk management, health professionals should encourage their patients to engage in physical activity. In this paper, we provide insight into the systematic development of a Web-based intervention for both health professionals and patients with cardiovascular risk factors using the development method Intervention Mapping. The different steps of Intervention Mapping are described to open up the "black box" of Web-based intervention development and to support future development efforts. The development of the Professional and Patient Intention and Behavior Intervention (PIB2 intervention) was initiated with a needs assessment for both health professionals (ie, physiotherapy and nursing) and their patients. We formulated performance and change objectives and, subsequently, theory- and evidence-based intervention methods and strategies were selected that were thought to affect the intention and behavior of health professionals and patients. The rationale of the intervention was based on different behavioral change methods that allowed us to describe the scope and sequence of the intervention and produced the Web-based intervention components. The Web-based intervention consisted of 5 modules, including individualized messages and self-completion forms, and charts and tables. The systematic and planned development of the PIB2 intervention resulted in an Internet-delivered behavior change intervention. The intervention was not developed as a substitute for face-to-face contact between professionals and patients, but as an application to complement and optimize health services. The focus of the Web-based intervention was to extend the professional behavior of health care professionals, as well as to improve the risk-reduction behavior of patients with cardiovascular risk factors. The Intervention Mapping protocol provided a systematic method for developing the intervention, and each intervention design choice was carefully thought out and justified. Although it was not a rapid or an easy method for developing an intervention, the protocol guided and directed the development process. The application of evidence-based behavior change methods used in our intervention offers insight regarding how an intervention may change intention and health behavior. The Web-based intervention appeared feasible and was implemented. Further research will test the effectiveness of the PIB2 intervention. Dutch Trial Register, Trial ID: ECP-92.

  15. BOWS (bioinformatics open web services) to centralize bioinformatics tools in web services.

    PubMed

    Velloso, Henrique; Vialle, Ricardo A; Ortega, J Miguel

    2015-06-02

    Bioinformaticians face a range of difficulties in getting locally installed tools running and producing results; they would greatly benefit from a system that could centralize most of the tools, with an easy interface for input and output. Web services, due to their universal nature and widely known interface, are a very good option to achieve this goal. Bioinformatics open web services (BOWS) is a system based on generic web services produced to allow programmatic access to applications running on high-performance computing (HPC) clusters. BOWS mediates access to registered tools by providing front-end and back-end web services. Programmers can install applications on HPC clusters in any programming language and use the back-end service to check for new jobs and their parameters, and then send the results to BOWS. Programs running on ordinary computers consume the BOWS front-end service to submit new processes and read results. BOWS compiles Java clients, which encapsulate the front-end web service requests, and automatically creates a web page that lists the registered applications and clients. Applications registered in BOWS can be accessed from virtually any programming language through web services, or using the standard Java clients. The back-end can run on HPC clusters, allowing bioinformaticians to remotely run high-processing-demand applications directly from their machines.
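
    The submit-then-poll pattern of such a front-end service might look as follows; the endpoints, payloads, and field names are invented to match the description, as the abstract does not specify them:

        import time
        import requests

        # Hypothetical front-end endpoints modeled on the BOWS description.
        FRONT = "https://example.org/bows/front"

        # Submit a new job for a registered tool.
        job = requests.post(f"{FRONT}/submit",
                            json={"tool": "blast", "params": {"db": "nr"}},
                            timeout=10).json()

        # Poll until the back-end (running on the HPC cluster) reports completion.
        while True:
            status = requests.get(f"{FRONT}/status/{job['id']}", timeout=10).json()
            if status["state"] in ("done", "failed"):
                break
            time.sleep(30)

        print(requests.get(f"{FRONT}/result/{job['id']}", timeout=10).text)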

  16. Global Ocean Currents Database

    NASA Astrophysics Data System (ADS)

    Boyer, T.; Sun, L.

    2016-02-01

    NOAA's National Centers for Environmental Information (NCEI) has released an ocean currents database portal that aims 1) to integrate global ocean current observations from a variety of instruments, with different resolution, accuracy, and response to spatial and temporal variability, into a uniform network common data form (NetCDF) format, and 2) to provide dedicated online data discovery and access to NCEI-hosted and distributed sources of ocean currents data. The portal provides a tailored web application that allows users to search for ocean currents data by platform type and by spatial/temporal ranges of interest. The dedicated web application is available at http://www.nodc.noaa.gov/gocd/index.html. The NetCDF format supports widely used data access protocols and catalog services such as OPeNDAP (Open-source Project for a Network Data Access Protocol) and THREDDS (Thematic Real-time Environmental Distributed Data Services), so GOCD users can work with the data files in their favorite analysis and visualization client software without downloading them to a local machine. The potential users of the ocean currents database include, but are not limited to, 1) ocean modelers for model skill assessment, 2) scientists and researchers studying the impact of ocean circulation on climate variability, 3) the ocean shipping industry for safe navigation and for finding optimal routes for ship fuel efficiency, 4) ocean resource managers planning optimal sites for waste and sewage dumping and for renewable hydro-kinetic energy, and 5) state and federal governments, which can use historical (analyzed) ocean circulations as an aid for search and rescue.
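
    Because the holdings are NetCDF behind OPeNDAP/THREDDS, a client library can open them remotely and fetch only the slices it needs. The URL and variable name below are illustrative, not a real GOCD address:

        from netCDF4 import Dataset

        # Open a remote dataset over OPeNDAP; no file download is needed.
        ds = Dataset("https://example.org/thredds/dodsC/gocd/currents.nc")

        # Only the requested hyperslab of the (assumed) eastward velocity
        # variable is transferred from the server.
        u = ds.variables["u"][0, :10, :10]
        print(u.shape)
        ds.close()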

  17. Sensor Management for Applied Research Technologies (SMART)-On Demand Modeling (ODM) Project

    NASA Technical Reports Server (NTRS)

    Goodman, M.; Blakeslee, R.; Hood, R.; Jedlovec, G.; Botts, M.; Li, X.

    2006-01-01

    NASA requires timely on-demand data and analysis capabilities to enable practical benefits of Earth science observations. However, a significant challenge exists in accessing and integrating data from multiple sensors or platforms to address Earth science problems because of the large data volumes, varying sensor scan characteristics, unique orbital coverage, and the steep learning curve associated with each sensor and data type. The development of sensor web capabilities to autonomously process these data streams (whether real-time or archived) provides an opportunity to overcome these obstacles and facilitate the integration and synthesis of Earth science data and weather model output. A three-year project, entitled Sensor Management for Applied Research Technologies (SMART) - On Demand Modeling (ODM), will develop and demonstrate the readiness of Open Geospatial Consortium (OGC) Sensor Web Enablement (SWE) capabilities that integrate both Earth observations and forecast model output into new data acquisition and assimilation strategies. The advancement of SWE-enabled systems (i.e., use of SensorML, sensor planning services [SPS], sensor observation services [SOS], sensor alert services [SAS], and common observation model protocols) will have practical and efficient uses in the Earth science community for enhanced data set generation, real-time data assimilation with operational applications, and autonomous sensor tasking for unique data collection.
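
    An SOS is queried with a small set of standard key-value parameters. A sketch of a GetObservation request, where the service URL, offering, and observed property are invented:

        import requests

        # KVP-style GetObservation request per the SOS standard; the host,
        # offering, and property names are placeholders.
        params = {
            "service": "SOS",
            "version": "2.0.0",
            "request": "GetObservation",
            "offering": "temperature_readings",
            "observedProperty": "air_temperature",
        }
        resp = requests.get("https://example.org/sos", params=params, timeout=30)
        resp.raise_for_status()
        print(resp.text[:300])  # XML-encoded observations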

  18. Online Tobacco Cessation Training and Competency Assessment for Complementary and Alternative Medicine (CAM) Practitioners: Protocol for the CAM Reach Web Study

    PubMed Central

    Howerter, Amy; Eaves, Emery R; Hall, John R; Buller, David B; Gordon, Judith S

    2016-01-01

    Background Complementary and alternative medicine (CAM) practitioners, such as chiropractors, acupuncturists, and massage therapists, are a growing presence in the US health care landscape and already provide health and wellness care to significant numbers of patients who use tobacco. For decades, conventional biomedical practitioners have received training to provide evidence-based tobacco cessation brief interventions (BIs) and referrals to cessation services as part of routine clinical care, whereas CAM practitioners have been largely overlooked for BI training. Web-based training has clear potential to meet large-scale training dissemination needs. However, despite the exploding use of Web-based training for health professionals, Web-based evaluation of clinical skills competency remains underdeveloped. Objective In pursuit of a long-term goal of helping CAM practitioners integrate evidence-based practices from US Public Health Service Tobacco Dependence Treatment Guideline into routine clinical care, this pilot protocol aims to develop and test a Web-based tobacco cessation training program tailored for CAM practitioners. Methods In preparation for a larger trial to examine the effect of training on CAM practitioner clinical practice behaviors around tobacco cessation, this developmental study will (1) adapt an existing in-person tobacco cessation BI training program that is specifically tailored for CAM therapists for delivery via the Internet; (2) develop a novel, Web-based tool to assess CAM practitioner competence in tobacco cessation BI skills, and conduct a pilot validation study comparing the competency assessment tool to live video role plays with a standardized patient; (3) pilot test the Web-based training with 120 CAM practitioners (40 acupuncturists, 40 chiropractors, 40 massage therapists) for usability, accessibility, acceptability, and effects on practitioner knowledge, self-efficacy, and competency with tobacco cessation; and (4) conduct qualitative and quantitative formative research on factors influencing practitioner tobacco cessation clinical behaviors (eg, practice environment, peer social influence, and insurance reimbursement). Results Development of the Web-based training and competency assessment tool is complete, and study enrollment and training activities have concluded (N=203 practitioners enrolled). Training completion rates were lower than expected (36.9%, 75/203), necessitating over-enrollment to ensure a sufficient number of training completers. Follow-up data collection is in progress. Data analysis will begin immediately after data collection is complete. Conclusions To realize CAM practitioners’ potential to promote tobacco cessation and use of evidence-based treatments, there is a need to know more about the facilitative and inhibitory factors influencing CAM practitioner tobacco intervention behaviors (eg, social influence and insurance reimbursement). Given marked differences between conventional and CAM practitioners, extant knowledge about factors influencing conventional practitioner adoption of tobacco cessation behaviors cannot be confidently extrapolated to CAM practitioners. The potential impact of this study is to expand tobacco cessation and health promotion infrastructure in a new group of health practitioners who can help combat the continuing epidemic of tobacco use. PMID:26740468

  19. Web Services--A Buzz Word with Potentials

    Treesearch

    János T. Füstös

    2006-01-01

    The simplest definition of a web service is an application that provides a web API. The web API exposes the functionality of the solution to other applications. The web API relies on other Internet-based technologies to manage communications. The resulting web services are pervasive, vendor-independent, language-neutral, and very low-cost. The main purpose of a web API...
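
    As a concrete illustration of "an application that provides a web API", here is a minimal HTTP endpoint in Python; Flask is chosen only for brevity, and the route and logic are invented:

        from flask import Flask, jsonify, request

        app = Flask(__name__)

        # One piece of functionality exposed over HTTP: any language-neutral
        # client can call it with a plain GET and read the JSON reply.
        @app.route("/api/convert")
        def convert():
            celsius = float(request.args.get("celsius", "0"))
            return jsonify(fahrenheit=celsius * 9 / 5 + 32)

        if __name__ == "__main__":
            app.run(port=8080)  # GET /api/convert?celsius=21 -> {"fahrenheit": 69.8}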

  20. Research on the development and preliminary application of Beijing agricultural sci-tech service hotline WebApp in agricultural consulting services

    NASA Astrophysics Data System (ADS)

    Yu, Weishui; Luo, Changshou; Zheng, Yaming; Wei, Qingfeng; Cao, Chengzhong

    2017-09-01

    To address the “last kilometer” problem in agricultural science and technology information services, we analyzed the feasibility, necessity, and advantages of WebApps for agricultural information services, and discussed the modes in which WebApps can be used for such services based on a requirements analysis and on WebApp functionality. To overcome existing apps' defects of difficult installation and weak compatibility across mobile operating systems, the Beijing Agricultural Sci-tech Service Hotline WebApp was developed based on HTML and Java technology. The WebApp has greater compatibility and simpler operation than a native app; moreover, it can be linked to the WeChat public platform, which lets it spread easily and run directly without an installation step. The WebApp was used to provide agricultural expert consulting services and to push agricultural information, with good preliminary results. Finally, we summarize the creative application of WebApps in agricultural consulting services and discuss the future development of WebApps in agricultural information services.

  1. Baby Steps: Starting Out on the World Wide Web.

    ERIC Educational Resources Information Center

    Simpson, Carol; McElmeel, Sharron L.

    1997-01-01

    While the Internet is the physical medium used to transport data, the World Wide Web is the collection of protocols and standards used to access the information. This article provides a basic explanation of what the Web is and describes common browser commands. Discusses graphic Web browsers; universal resource locators (URLs); file, message,…

  2. A performance study of WebDav access to storages within the Belle II collaboration

    NASA Astrophysics Data System (ADS)

    Pardi, S.; Russo, G.

    2017-10-01

    WebDav and HTTP are becoming popular protocols for data access in the High Energy Physics community. The most widely used Grid and Cloud storage solutions provide such interfaces; in this scenario, tuning and performance evaluation become crucial to promoting the adoption of these protocols within the Belle II community. In this work, we present the results of a large-scale test activity carried out to evaluate the performance and reliability of the WebDav protocol and to study its possible adoption for user analysis. More specifically, we considered a pilot infrastructure composed of a set of storage elements configured with the WebDav interface, hosted at the Belle II sites. The performance tests include a comparison with xrootd and gridftp. As reference tests we used a set of analysis jobs running under the Belle II software framework, accessing the input data with the ROOT I/O library, in order to simulate realistic user activity as closely as possible. The final analysis shows the possibility of achieving promising performance with WebDav on different storage systems, and provides useful feedback for the Belle II community and for other high energy physics experiments.
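
    WebDav extends HTTP with methods such as PROPFIND, so ordinary HTTP tooling can exercise a storage endpoint. A sketch with an invented endpoint and credentials, not Belle II infrastructure:

        import requests

        # PROPFIND with Depth: 1 lists the immediate contents of a collection.
        resp = requests.request(
            "PROPFIND",
            "https://example.org/dav/belle2/data/",  # placeholder endpoint
            headers={"Depth": "1"},
            auth=("user", "pass"),                   # placeholder credentials
            timeout=30,
        )
        print(resp.status_code)   # 207 Multi-Status on success
        print(resp.text[:300])    # XML listing of the collection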

  3. IMPlementation of A Relatives' Toolkit (IMPART study): an iterative case study to identify key factors impacting on the implementation of a web-based supported self-management intervention for relatives of people with psychosis or bipolar experiences in a National Health Service: a study protocol.

    PubMed

    Lobban, Fiona; Appleton, Victoria; Appelbe, Duncan; Barraclough, Johanna; Bowland, Julie; Fisher, Naomi R; Foster, Sheena; Johnson, Sonia; Lewis, Elizabeth; Mateus, Céu; Mezes, Barbara; Murray, Elizabeth; O'Hanlon, Puffin; Pinfold, Vanessa; Rycroft-Malone, Jo; Siddle, Ron; Smith, Jo; Sutton, Chris J; Walker, Andrew; Jones, Steven H

    2017-12-28

    Web-based interventions to support people in managing long-term health conditions are available and effective but rarely used in clinical services. The aim of this study is to identify critical factors impacting on the implementation of an online supported self-management intervention for relatives of people with recent-onset psychosis or bipolar disorder into routine clinical care, and to use this information to develop an implementation plan that facilitates widespread use and informs wider implementation of digital health interventions. A multiple case study design within six early intervention in psychosis (EIP) services in England will be used to test and refine theory-driven hypotheses about factors impacting on implementation of the Relatives' Education And Coping Toolkit (REACT). Qualitative data (behavioural observation, document analysis, and in-depth interviews) collected in the first two EIP services (wave 1) and analysed using framework analysis, combined with quantitative data describing levels of use by staff and relatives and the impact on relatives' distress and wellbeing, will be used to identify factors impacting on implementation. Consultation via stakeholder workshops with staff and relatives, co-facilitated by relatives in the research team, will inform development of an implementation plan to address these factors, which will be evaluated and refined in the four subsequent EIP services in waves 2 and 3. Transferability of the implementation plan to non-participating services will be explored. Observing implementation in a real-world clinical setting, across carefully sampled services, in real time provides a unique opportunity to understand factors impacting on implementation that are likely to be generalizable to other web-based interventions, as well as informing further development of implementation theories. However, there are inherent challenges in investigating implementation without influencing the process under observation. We outline our strategies to ensure our design is transparent, flexible, and responsive to the timescales and activities happening within each service whilst also meeting the aims of the project. ISRCTN 16267685 (09/03/2016).

  4. Enabling Secure XMPP Communications in Federated IoT Clouds Through XEP 0027 and SAML/SASL SSO

    PubMed Central

    Celesti, Antonio; Fazio, Maria; Villari, Massimo

    2017-01-01

    Nowadays, in the panorama of the Internet of Things (IoT), finding the right compromise between interactivity and security is not trivial at all. Currently, most pervasive communication technologies are designed to work locally. As a consequence, the development of large-scale Internet services and applications is not easy for IoT Cloud providers. The main issue is that both IoT architectures and services started out simple but are becoming more and more complex. Consequently, the web service technology is often inappropriate. Recently, many operators in both academia and industry have been considering the possibility of adopting the eXtensible Messaging and Presence Protocol (XMPP) for the implementation of IoT Cloud communication systems. In fact, XMPP offers many advantages in terms of real-time capabilities, efficient data distribution, service discovery, and inter-domain communication compared to other technologies. Nevertheless, the protocol lacks native security, data confidentiality, and trustworthy federation features. In this paper, considering an XMPP-based IoT Cloud architectural model, we discuss how message signing/encryption and Single Sign-On (SSO) authentication can be enforced for secure inter-module and inter-domain communications, respectively, in a federated environment. Experiments prove that the security mechanisms introduce an acceptable overhead, considering the obvious advantages achieved in terms of data trustworthiness and privacy. PMID:28178214
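
    For flavor, a minimal XMPP client using Python's slixmpp that delivers a single message between two modules; the JIDs and credentials are placeholders, and the XEP-0027 signing layer discussed in the paper is not shown:

        from slixmpp import ClientXMPP

        class SendOne(ClientXMPP):
            def __init__(self, jid, password, to, body):
                super().__init__(jid, password)
                self.to, self.body = to, body
                self.add_event_handler("session_start", self.start)

            async def start(self, event):
                # Announce presence, then deliver one chat message and leave.
                self.send_presence()
                await self.get_roster()
                self.send_message(mto=self.to, mbody=self.body)
                self.disconnect()

        xmpp = SendOne("gw@example.org", "secret", "hub@example.org", "ping")
        xmpp.connect()
        xmpp.process(forever=False)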

  5. Enabling Secure XMPP Communications in Federated IoT Clouds Through XEP 0027 and SAML/SASL SSO.

    PubMed

    Celesti, Antonio; Fazio, Maria; Villari, Massimo

    2017-02-07

    Nowadays, in the panorama of the Internet of Things (IoT), finding the right compromise between interactivity and security is not trivial at all. Currently, most pervasive communication technologies are designed to work locally. As a consequence, the development of large-scale Internet services and applications is not easy for IoT Cloud providers. The main issue is that both IoT architectures and services started out simple but are becoming more and more complex. Consequently, the web service technology is often inappropriate. Recently, many operators in both academia and industry have been considering the possibility of adopting the eXtensible Messaging and Presence Protocol (XMPP) for the implementation of IoT Cloud communication systems. In fact, XMPP offers many advantages in terms of real-time capabilities, efficient data distribution, service discovery, and inter-domain communication compared to other technologies. Nevertheless, the protocol lacks native security, data confidentiality, and trustworthy federation features. In this paper, considering an XMPP-based IoT Cloud architectural model, we discuss how message signing/encryption and Single Sign-On (SSO) authentication can be enforced for secure inter-module and inter-domain communications, respectively, in a federated environment. Experiments prove that the security mechanisms introduce an acceptable overhead, considering the obvious advantages achieved in terms of data trustworthiness and privacy.

  6. Supporting the Delivery of Total Knee Replacements Care for Both Patients and Their Clinicians With a Mobile App and Web-Based Tool: Randomized Controlled Trial Protocol.

    PubMed

    Hussain, M Sazzad; Li, Jane; Brindal, Emily; van Kasteren, Yasmin; Varnfield, Marlien; Reeson, Andrew; Berkovsky, Shlomo; Freyne, Jill

    2017-03-01

    Total knee replacement (TKR) surgeries have increased in recent years. Exercise programs and other interventions following surgery can facilitate the recovery process. With limited clinician contact time, patients with TKR have a substantial burden of self-management and limited communication with their care team, and thus often fail to implement an effective rehabilitation plan. We have developed a digital orthopedic rehabilitation platform that comprises a mobile phone app, wearable activity tracker, and clinical Web portal in order to engage patients with self-management tasks for surgical preparation and recovery, thus addressing the challenges of adherence to and completion of TKR rehabilitation. The study will determine the efficacy of the TKR platform in delivering information and assistance to patients preparing for and recovering from TKR surgery, and of a Web portal through which clinician care teams (ie, surgeons and physiotherapists) remotely support and monitor patient progress. The study will evaluate the TKR platform through a randomized controlled trial conducted at multiple sites (N=5) in a number of states in Australia with 320 patients undergoing TKR surgery; the trial will run for 13 months for each patient. Participants will be randomized to either a control group or an intervention group, both receiving usual care as provided by their hospital. The intervention group will receive the app and wearable activity tracker. Participants will be assessed at 4 different time points: 4 weeks before surgery, immediately before surgery, 12 weeks after surgery, and 52 weeks after surgery. The primary outcome measure is the Oxford Knee Score. Secondary outcome measures include quality of life (Short-Form Health Survey); depression, anxiety, and stress (Depression, Anxiety, and Stress Scales); self-motivation; self-determination; self-efficacy; and the level of satisfaction with the knee surgery and care delivery. The study will also collect quantitative usage data related to all components (app, activity tracker, and Web portal) of the TKR platform and qualitative data on the perceptions of the platform as a tool for patients, carers, and clinicians. Finally, an economic evaluation of the impact of the platform will be conducted. The TKR platform has been completed and deployed for the trial. The research protocol is approved by 2 human research ethics committees in Australia. A total of 5 hospitals in Australia (2 in New South Wales, 2 in Queensland, and 1 in South Australia) are expected to participate in the trial. The TKR platform is designed to provide flexibility in care delivery and increased engagement with rehabilitation services. This trial will investigate the clinical and behavioral efficacy of the app and the impact of the TKR platform in terms of service satisfaction, acceptance, and economic benefits of the provision of digital services. Australian New Zealand Clinical Trials Registry (ANZCTR) ACTRN12616000504415; https://www.anzctr.org.au/Trial/Registration/TrialReview.aspx?id=370536 (Archived by WebCite at http://www.webcitation.org/6oKES0Gp1).

  7. Information Retrieval System for Japanese Standard Disease-Code Master Using XML Web Service

    PubMed Central

    Hatano, Kenji; Ohe, Kazuhiko

    2003-01-01

    An information retrieval system for the Japanese Standard Disease-Code Master using XML Web Services has been developed. XML Web Services are a new distributed-processing technology built on standard Internet technologies. With the seamless remote method invocation that XML Web Services provide, users can get the latest disease-code master information from their rich desktop applications or from Internet web sites that refer to this service. PMID:14728364
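
    SOAP-style XML Web Services of this kind are typically consumed from a WSDL description. A hedged sketch with the zeep package; the WSDL location and operation name are invented, as the record does not give them:

        from zeep import Client

        # Load the (hypothetical) WSDL and call an assumed lookup operation.
        client = Client("https://example.org/disease-master/service?wsdl")
        result = client.service.SearchDiseaseByName("diabetes mellitus")
        print(result)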

  8. Implementation and Evaluation of Four Interoperable Open Standards for the Internet of Things.

    PubMed

    Jazayeri, Mohammad Ali; Liang, Steve H L; Huang, Chih-Yuan

    2015-09-22

    Recently, researchers have been focusing on a new use of the Internet called the Internet of Things (IoT), in which enabled electronic devices can be remotely accessed over the Internet. As the realization of the IoT concept is still in its early stages, manufacturers of Internet-connected devices and IoT web service providers are defining their own proprietary protocols based on their targeted applications. Consequently, the IoT is heterogeneous in terms of hardware capabilities and communication protocols. Addressing these heterogeneities by following open standards is a necessary step toward communicating with various IoT devices. In this research, we assess the feasibility of applying existing open standards to resource-constrained IoT devices. The standard protocols developed in this research are OGC PUCK over Bluetooth, TinySOS, SOS over CoAP, and the OGC SensorThings API. We believe that by hosting open standard protocols on IoT devices, not only do the devices become self-describable, self-contained, and interoperable, but innovative applications can also be easily developed with standardized interfaces. In addition, we use memory consumption, request message size, response message size, and response latency to benchmark the efficiency of the implemented protocols. In all, this research presents and evaluates standards-based solutions to better understand the feasibility of applying existing standards to the IoT vision.
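
    The SensorThings API is REST/JSON with OData-style query options, so a constrained device's entities can be read with plain HTTP. The base URL below is illustrative; entity paths such as /v1.0/Things come from the standard itself:

        import requests

        # Placeholder host exposing a SensorThings service.
        BASE = "https://example.org/sensorthings/v1.0"

        # List the first three Things registered on the service.
        resp = requests.get(f"{BASE}/Things", params={"$top": 3}, timeout=10)
        resp.raise_for_status()
        for thing in resp.json()["value"]:
            print(thing["name"])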

  9. River Basin Standards Interoperability Pilot

    NASA Astrophysics Data System (ADS)

    Pesquer, Lluís; Masó, Joan; Stasch, Christoph

    2016-04-01

    There is a lot of water information and many tools in Europe applicable to river basin management, but fragmentation and a lack of coordination between countries still exist. The European Commission and the member states have financed several research and innovation projects in support of the Water Framework Directive. Only a few of them use the recently emerging hydrological standards, such as OGC WaterML 2.0. WaterInnEU is a Horizon 2020 project focused on creating a marketplace to enhance the exploitation of EU-funded ICT models, tools, protocols, and policy briefs related to water, and to establish suitable conditions for new market opportunities based on these offerings. One of WaterInnEU's main goals is to assess the level of standardization and interoperability of these outcomes as a mechanism to integrate ICT-based tools, incorporate open data platforms, and generate a palette of interchangeable components able to use the water data emerging from the recently proposed open data sharing processes and data models stimulated by initiatives such as the INSPIRE directive. As part of the standardization and interoperability activities in the project, the authors are designing an experiment (RIBASE, the present work) to demonstrate how current ICT-based tools and water data can work in combination with geospatial web services in the Scheldt river basin. The main structure of this experiment, which is the core of the present work, comprises the following steps:
    - Extraction of information from river gauge data in OGC WaterML 2.0 format using SOS services (preferably compliant with the OGC SOS 2.0 Hydrology Profile Best Practice).
    - Flood modelling using WPS 2.0, with WaterML 2.0 data and weather forecast models as input.
    - Evaluation of the applicability of Sensor Notification Services in water emergencies.
    - Open distribution of the input and output data as OGC web services (WaterML, WCS, WFS) and with visualization utilities (WMS).
    The architecture tests the combination of gauge data in a WPS that is triggered by a meteorological alert. The data are translated into the OGC WaterML 2.0 time-series format and ingested into an SOS 2.0 service, whose contents are visualized in an SOS client able to handle time series. The meteorological forecast data, together with the WaterML 2.0 time series and terrain data, are input to a flood-modelling algorithm (with an operator supervising via the WPS user interface). The WPS produces flooding datasets in the form of coverages that are offered to clients via a WCS 2.0 service or a WMS 1.3 service, and downloaded and visualized by the respective clients. The WPS triggers a notification or an alert that is monitored by an emergency control response service. Acronyms: AS: Alert Service; ES: Event Service; ICT: Information and Communication Technology; NS: Notification Service; OGC: Open Geospatial Consortium; RIBASE: River Basin Standards Interoperability Pilot; SOS: Sensor Observation Service; WaterML: Water Markup Language; WCS: Web Coverage Service; WMS: Web Map Service; WPS: Web Processing Service
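
    The flood-modelling step corresponds to a WPS Execute call. A hedged sketch with the owslib package, where the service URL, process identifier, and input name are invented stand-ins for the pilot's actual configuration:

        from owslib.wps import WebProcessingService, monitorExecution

        # Hypothetical WPS endpoint hosting a flood model.
        wps = WebProcessingService("https://example.org/wps")

        execution = wps.execute(
            "flood_model",  # assumed process identifier
            inputs=[("gauge_series", "https://example.org/sos?request=GetObservation")],
        )
        monitorExecution(execution)  # poll until the run finishes
        print(execution.status)      # e.g. ProcessSucceeded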

  10. Uncertainty in exposure to air pollution

    NASA Astrophysics Data System (ADS)

    Pebesma, Edzer; Helle, Kristina; Christoph, Stasch; Rasouli, Soora; Timmermans, Harry; Walker, Sam-Erik; Denby, Bruce

    2013-04-01

    To assess exposure to air pollution for a person or a group of people, one needs to know where the person or group is as a function of time, and what the air pollution is at those times and locations. In this study we used the Albatross activity-based model to assess the whereabouts of people, and the uncertainties in them, and a probabilistic air quality system based on TAPM/EPISODE to assess air quality probabilistically. The outcomes of the two models were combined to assess exposure to air pollution and the errors in it. We used the area around Rotterdam (Netherlands) as a case study. As the outcomes of both models come as Monte Carlo realizations, it was relatively easy to cancel one of the sources of uncertainty (movement of persons, air pollution) in order to identify their respective contributions, and also to compare evaluations for individuals with averages for a population of persons. As the output is probabilistic, and in addition spatially and temporally varying, the visual analysis of the complete results poses some challenges. This case study was one of the test cases in the UncertWeb project, which has built concepts and tools to realize the uncertainty-enabled model web. Some of the tools and protocols will be shown and evaluated in this presentation. For the uncertainty of exposure, the uncertainty of air quality was more important than the uncertainty of people's locations. This difference was stronger for PM10 than for NO2. The workflow was implemented as generic Web services in UncertWeb that also allow for inputs other than the simulated activity schedules, and for air quality at other resolutions. However, due to this flexibility, the Web services require standardized formats, and the overlay algorithm is not optimized for the specific use case, resulting in data and processing overhead. Hence, for this specific case we implemented the full analysis in parallel in R, as the model web solution had difficulties with massive data.
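
    The "cancel one source at a time" idea can be sketched numerically. The distributions below are invented toy stand-ins for the two Monte Carlo inputs, reduced to scalars for brevity:

        import numpy as np

        rng = np.random.default_rng(42)
        n = 1000  # number of Monte Carlo realizations

        air = rng.normal(30.0, 5.0, n)   # simulated concentration realizations
        loc = rng.normal(1.0, 0.1, n)    # location-driven exposure factor
        exposure = air * loc

        # Cancel one uncertainty source at a time by fixing it at its mean.
        fix_loc = air * loc.mean()       # only air-quality uncertainty remains
        fix_air = air.mean() * loc       # only location uncertainty remains

        print("total var:   ", exposure.var())
        print("air-only var:", fix_loc.var())
        print("loc-only var:", fix_air.var())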

  11. Improving investigational drug service operations through development of an innovative computer system.

    PubMed

    Sweet, Burgunda V; Tamer, Helen R; Siden, Rivka; McCreadie, Scott R; McGregory, Michael E; Benner, Todd; Tankanow, Roberta M

    2008-05-15

    The development of a computerized system for protocol management, dispensing, inventory accountability, and billing by the investigational drug service (IDS) of a university health system is described. After an unsuccessful search for a commercial system that would accommodate the variation among investigational protocols and meet regulatory requirements, the IDS worked with the health-system pharmacy's information technology staff and informatics pharmacists to develop its own system. The informatics pharmacists observed work-flow and information capture in the IDS and identified opportunities for improved efficiency with an automated system. An iterative build-test-design process was used to provide the flexibility needed for individual protocols. The intent was to design a system that would support most IDS processes, using components that would allow automated backup and redundancies. A browser-based system was chosen to allow remote access. Servers, bar-code scanners, and printers were integrated into the final system design. Initial implementation involved 10 investigational protocols chosen on the basis of dispensing volume and complexity of study design. Other protocols were added over a two-year period; all studies whose drugs were dispensed from the IDS were added, followed by those for which the drugs were dispensed from decentralized pharmacy areas. The IDS briefly used temporary staff to free pharmacist and technician time for system implementation. Decentralized pharmacy areas that rarely dispense investigational drugs continue to use manual processes, with subsequent data transcription into the system. Through the university's technology transfer division, the system was licensed by an external company for sale to other IDSs. The WebIDS system has improved daily operations, enhanced safety and efficiency, and helped meet regulatory requirements for investigational drugs.

  12. Persistence and availability of Web services in computational biology.

    PubMed

    Schultheiss, Sebastian J; Münch, Marc-Christian; Andreeva, Gergana D; Rätsch, Gunnar

    2011-01-01

    We have conducted a study on the long-term availability of bioinformatics Web services: an observation of 927 Web services published in the annual Nucleic Acids Research Web Server Issues between 2003 and 2009. We found that 72% of Web sites are still available at the published addresses, and only 9% of services are completely unavailable. Older addresses often redirect to new pages. We checked the functionality of all available services: for 33%, we could not test functionality because example data were missing or a related problem occurred; 13% were truly no longer working as expected; we could positively confirm functionality for only 45% of all services. Additionally, we conducted a survey among 872 Web Server Issue corresponding authors; 274 replied. Of all respondents, 78% indicated their services had been developed solely by students and researchers without a permanent position. Consequently, these services are in danger of falling into disrepair after the original developers move to another institution, and indeed, for 24% of services there is no plan for maintenance, according to the respondents. We introduce a Web service quality scoring system that correlates with the number of citations: services with a high score are cited 1.8 times more often than low-scoring services. We have identified key characteristics that are predictive of a service's survival, providing reviewers, editors, and Web service developers with the means to assess or improve Web services. A Web service conforming to these criteria receives more citations and provides more reliable service for its users. The most effective way of ensuring continued access to a service is a persistent Web address, offered either by the publishing journal or created on the authors' own initiative, for example at http://bioweb.me. The community would benefit the most from a policy requiring any source code needed to reproduce results to be deposited in a public repository.

  13. Persistence and Availability of Web Services in Computational Biology

    PubMed Central

    Schultheiss, Sebastian J.; Münch, Marc-Christian; Andreeva, Gergana D.; Rätsch, Gunnar

    2011-01-01

    We have conducted a study on the long-term availability of bioinformatics Web services: an observation of 927 Web services published in the annual Nucleic Acids Research Web Server Issues between 2003 and 2009. We found that 72% of Web sites are still available at the published addresses, while only 9% of services are completely unavailable. Older addresses often redirect to new pages. We checked the functionality of all available services: for 33%, we could not test functionality because no example data were available or a related problem arose; 13% were truly no longer working as expected; we could positively confirm functionality for only 45% of all services. Additionally, we conducted a survey among 872 Web Server Issue corresponding authors; 274 replied. 78% of all respondents indicated that their services had been developed solely by students and researchers without a permanent position. Consequently, these services are in danger of falling into disrepair after the original developers move to another institution, and indeed, for 24% of services there is no plan for maintenance, according to the respondents. We introduce a Web service quality scoring system that correlates with the number of citations: services with a high score are cited 1.8 times more often than low-scoring services. We have identified key characteristics that are predictive of a service's survival, providing reviewers, editors, and Web service developers with the means to assess or improve Web services. A Web service conforming to these criteria receives more citations and provides more reliable service for its users. The most effective way of ensuring continued access to a service is a persistent Web address, offered either by the publishing journal or created on the authors' own initiative, for example at http://bioweb.me. The community would benefit the most from a policy requiring any source code needed to reproduce results to be deposited in a public repository. PMID:21966383

  14. RSAT 2015: Regulatory Sequence Analysis Tools.

    PubMed

    Medina-Rivera, Alejandra; Defrance, Matthieu; Sand, Olivier; Herrmann, Carl; Castro-Mondragon, Jaime A; Delerce, Jeremy; Jaeger, Sébastien; Blanchet, Christophe; Vincens, Pierre; Caron, Christophe; Staines, Daniel M; Contreras-Moreira, Bruno; Artufel, Marie; Charbonnier-Khamvongsa, Lucie; Hernandez, Céline; Thieffry, Denis; Thomas-Chollier, Morgane; van Helden, Jacques

    2015-07-01

    RSAT (Regulatory Sequence Analysis Tools) is a modular software suite for the analysis of cis-regulatory elements in genome sequences. Its main applications are (i) motif discovery, appropriate to genome-wide data sets like ChIP-seq, (ii) transcription factor binding motif analysis (quality assessment, comparisons and clustering), (iii) comparative genomics and (iv) analysis of regulatory variations. Nine new programs have been added to the 43 described in the 2011 NAR Web Software Issue, including a tool to extract sequences from a list of coordinates (fetch-sequences from UCSC), novel programs dedicated to the analysis of regulatory variants from GWAS or population genomics (retrieve-variation-seq and variation-scan), and a program to cluster motifs and visualize the similarities as trees (matrix-clustering). To deal with the drastic increase of sequenced genomes, RSAT public sites have been reorganized into taxon-specific servers. The suite is well documented with tutorials and published protocols. The software suite is available through Web sites, SOAP/WSDL Web services, virtual machines and stand-alone programs at http://www.rsat.eu/.

  15. SCEAPI: A unified Restful Web API for High-Performance Computing

    NASA Astrophysics Data System (ADS)

    Rongqiang, Cao; Haili, Xiao; Shasha, Lu; Yining, Zhao; Xiaoning, Wang; Xuebin, Chi

    2017-10-01

    The development of scientific computing is increasingly moving to collaborative web and mobile applications. All these applications need high-quality programming interfaces for accessing heterogeneous computing resources consisting of clusters, grid computing or cloud computing. In this paper, we introduce our high-performance computing environment that integrates computing resources from 16 HPC centers across China. Then we present a bundle of web services called SCEAPI and describe how it can be used to access HPC resources over the HTTP or HTTPS protocols. We discuss SCEAPI from several aspects, including architecture, implementation and security, and address specific challenges in designing compatible interfaces and protecting sensitive data. We describe the functions of SCEAPI, including authentication, file transfer and job management for creating, submitting and monitoring jobs, and how SCEAPI can be used in a straightforward way. Finally, we discuss how to exploit more HPC resources quickly for the ATLAS experiment by implementing a custom ARC compute element based on SCEAPI, and our work shows that SCEAPI is an easy-to-use and effective solution to extend opportunistic HPC resources.
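
    As a rough illustration of the RESTful job-submission flow such an API exposes, the sketch below walks through authentication, file upload, and job submission over HTTPS. The base URL, endpoint paths, payload fields, and token scheme are all assumptions for illustration, not the actual SCEAPI interface.

    ```python
    # A minimal sketch of a RESTful HPC job-submission flow; all endpoints
    # and field names are hypothetical stand-ins for a SCEAPI-like service.
    import requests

    BASE = "https://sceapi.example.org/v1"  # hypothetical base URL

    # 1) Authenticate and obtain a bearer token (assumed response shape).
    token = requests.post(f"{BASE}/auth/tokens",
                          json={"username": "alice", "password": "secret"}).json()["token"]
    headers = {"Authorization": f"Bearer {token}"}

    # 2) Stage an input file to the service (hypothetical endpoint).
    with open("input.dat", "rb") as f:
        requests.put(f"{BASE}/files/input.dat", data=f, headers=headers)

    # 3) Create a job, then poll its status by ID.
    job = requests.post(f"{BASE}/jobs", headers=headers, json={
        "application": "namd",           # hypothetical registered application
        "arguments": ["input.dat"],
        "resources": {"cores": 64},
    }).json()
    status = requests.get(f"{BASE}/jobs/{job['id']}", headers=headers).json()["status"]
    print(job["id"], status)
    ```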

  16. Space Physics Data Facility Web Services

    NASA Technical Reports Server (NTRS)

    Candey, Robert M.; Harris, Bernard T.; Chimiak, Reine A.

    2005-01-01

    The Space Physics Data Facility (SPDF) Web services provide a distributed programming interface to a portion of the SPDF software. (A general description of Web services is available at http://www.w3.org/ and in many current software-engineering texts and articles focused on distributed programming.) The SPDF Web services distributed programming interface enables additional collaboration and integration of the SPDF software system with other software systems, in furtherance of the SPDF mission to lead collaborative efforts in the collection and utilization of space physics data and mathematical models. This programming interface conforms to all applicable Web services specifications of the World Wide Web Consortium. The interface is specified by a Web Services Description Language (WSDL) file. The SPDF Web services software consists of the following components: 1) a server program for implementation of the Web services; and 2) a software developer's kit that consists of a WSDL file, a less formal description of the interface, a Java class library (which further eases development of Java-based client software), and Java source code for an example client program that illustrates the use of the interface.
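
    Although the developer kit described above is Java-based, any SOAP client can consume a WSDL-described interface. A minimal sketch with the Python zeep library, assuming a hypothetical WSDL URL and operation name:

    ```python
    # Consume a WSDL-described SOAP service generically with zeep.
    # The WSDL URL and the invoked operation are placeholders; the real
    # SPDF developer kit documents the actual interface.
    from zeep import Client

    client = Client("https://example.nasa.gov/spdf/services.wsdl")  # hypothetical WSDL

    # zeep builds Python callables from the WSDL; list what the service offers.
    for service in client.wsdl.services.values():
        for port in service.ports.values():
            print(sorted(port.binding._operations))

    # Invoke an operation by name (hypothetical operation).
    result = client.service.getObservatories()
    print(result)
    ```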

  17. The EMBRACE web service collection

    PubMed Central

    Pettifer, Steve; Ison, Jon; Kalaš, Matúš; Thorne, Dave; McDermott, Philip; Jonassen, Inge; Liaquat, Ali; Fernández, José M.; Rodriguez, Jose M.; INB Partners; Pisano, David G.; Blanchet, Christophe; Uludag, Mahmut; Rice, Peter; Bartaseviciute, Edita; Rapacki, Kristoffer; Hekkelman, Maarten; Sand, Olivier; Stockinger, Heinz; Clegg, Andrew B.; Bongcam-Rudloff, Erik; Salzemann, Jean; Breton, Vincent; Attwood, Teresa K.; Cameron, Graham; Vriend, Gert

    2010-01-01

    The EMBRACE (European Model for Bioinformatics Research and Community Education) web service collection is the culmination of a 5-year project that set out to investigate issues involved in developing and deploying web services for use in the life sciences. The project concluded that in order for web services to achieve widespread adoption, standards must be defined for the choice of web service technology and for semantically annotating both service function and the data exchanged, and a mechanism for discovering services must be provided. Building on this, the project developed: EDAM, an ontology for describing life science web services; BioXSD, a schema for exchanging data between services; and a centralized registry (http://www.embraceregistry.net) that collects together around 1000 services developed by the consortium partners. This article presents the current status of the collection and its associated recommendations and standards definitions. PMID:20462862

  18. Study protocol for a pragmatic randomised controlled trial evaluating efficacy of a smoking cessation e-'Tabac Info Service': ee-TIS trial.

    PubMed

    Cambon, L; Bergman, P; Le Faou, Al; Vincent, I; Le Maitre, B; Pasquereau, A; Arwidson, P; Thomas, D; Alla, F

    2017-02-24

    A French national smoking cessation service, Tabac Info Service, has been developed to provide an adapted quitline and a web and mobile application involving personalised contacts (eg, questionnaires, advice, activities, messages) to support smoking cessation. This paper presents the study protocol of the evaluation of the application (e-intervention Tabac Info Service (e-TIS)). The primary objective is to assess the efficacy of e-TIS. The secondary objectives are to (1) describe efficacy variations with regard to users' characteristics and (2) analyse the mechanisms and contextual conditions of e-TIS efficacy. The study design is a two-arm pragmatic randomised controlled trial including a process evaluation, with at least 3000 participants randomised to the intervention or to the control arm (current practices). Inclusion criteria are: aged 18 years or over, current smoker, having completed the online consent forms, possessing a mobile phone with an Android or Apple system and using mobile applications, and wanting to stop smoking sooner or later. The primary outcome is 7-day point prevalence abstinence at 6 months. Data will be analysed in intention-to-treat (primary) and per-protocol analyses. A logistic regression will be carried out to estimate an OR (95% CI) for efficacy. A multivariate multilevel analysis will explore the influence on the results of patients' characteristics (sex, age, education and socioprofessional levels, dependency, motivation, quit experiences) as well as contextual factors, conditions of use, and behaviour change techniques. The study protocol was reviewed by the ethical and deontological institutional review board of the French Institute for Public Health Surveillance on 18 April 2016. The findings of this study will allow us to characterise the efficacy of e-TIS and the conditions of its efficacy. These findings will be disseminated through peer-reviewed articles. NCT02841683; Pre-results.

  19. Federated or cached searches: Providing expected performance from multiple invasive species databases

    NASA Astrophysics Data System (ADS)

    Graham, Jim; Jarnevich, Catherine S.; Simpson, Annie; Newman, Gregory J.; Stohlgren, Thomas J.

    2011-06-01

    Invasive species are a universal global problem, but the information to identify them, manage them, and prevent invasions is stored around the globe in a variety of formats. The Global Invasive Species Information Network is a consortium of organizations working toward providing seamless access to these disparate databases via the Internet. A distributed network of databases can be created using the Internet and a standard web service protocol. There are two options to provide this integration. First, federated searches are being proposed to allow users to search "deep" web documents such as databases for invasive species. A second method is to create a cache of data from the databases for searching. We compare these two methods, and show that federated searches will not provide the performance and flexibility required by users and that a central cache of the data is required to improve performance.

  20. Federated or cached searches: providing expected performance from multiple invasive species databases

    USGS Publications Warehouse

    Graham, Jim; Jarnevich, Catherine S.; Simpson, Annie; Newman, Gregory J.; Stohlgren, Thomas J.

    2011-01-01

    Invasive species are a universal global problem, but the information to identify them, manage them, and prevent invasions is stored around the globe in a variety of formats. The Global Invasive Species Information Network is a consortium of organizations working toward providing seamless access to these disparate databases via the Internet. A distributed network of databases can be created using the Internet and a standard web service protocol. There are two options to provide this integration. First, federated searches are being proposed to allow users to search “deep” web documents such as databases for invasive species. A second method is to create a cache of data from the databases for searching. We compare these two methods, and show that federated searches will not provide the performance and flexibility required by users and that a central cache of the data is required to improve performance.
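
    A toy sketch of the trade-off measured in the two entries above: a federated search pays every remote database's latency on each query, while a cached (pre-harvested) search answers from a local index at the cost of staleness. All names and latencies are invented.

    ```python
    # Contrast a federated fan-out query with a locally cached lookup.
    import time

    REMOTE_DATABASES = {"db_a": 0.8, "db_b": 1.5, "db_c": 0.3}  # simulated latency, seconds

    def federated_search(term):
        results = []
        for name, latency in REMOTE_DATABASES.items():
            time.sleep(latency)              # network round trip to each provider
            results.append((name, term))     # stand-in for the provider's answer
        return results                       # total time ~= sum of all latencies

    LOCAL_CACHE = {"kudzu": ["db_a", "db_c"]}  # built by periodic harvesting

    def cached_search(term):
        return LOCAL_CACHE.get(term, [])     # answers in local-lookup time

    # The cache answers immediately but can be stale between harvests; the
    # federated search is always current but only as fast as its slowest source.
    ```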

  1. Access to the NCAR Research Data Archive via the Globus Data Transfer Service

    NASA Astrophysics Data System (ADS)

    Cram, T.; Schuster, D.; Ji, Z.; Worley, S. J.

    2014-12-01

    The NCAR Research Data Archive (RDA; http://rda.ucar.edu) contains a large and diverse collection of meteorological and oceanographic observations, operational and reanalysis outputs, and remote sensing datasets to support atmospheric and geoscience research. The RDA contains more than 600 dataset collections which support the varying needs of a diverse user community. The number of RDA users is increasing annually, and the most popular method used to access the RDA data holdings is through web-based protocols, such as wget- and cURL-based scripts. In the year 2013, 10,000 unique users downloaded more than 820 terabytes of data from the RDA, and customized data products were prepared for more than 29,000 user-driven requests. In order to further support this increase in web download usage, the RDA is implementing the Globus data transfer service (www.globus.org) to provide a GridFTP data transfer option for the user community. The Globus service is broadly scalable, has an easy-to-install client, is sustainably supported, and provides a robust, efficient, and reliable data transfer option for RDA users. This paper highlights the main functionality and usefulness of the Globus data transfer service for accessing the RDA holdings. The Globus data transfer service, developed and supported by the Computation Institute at The University of Chicago and Argonne National Laboratory, uses GridFTP as a fast, secure, and reliable method for transferring data between two endpoints. A Globus user account is required to use this service, and data transfer endpoints are defined on the Globus web interface. In the RDA use cases, the access endpoint is created on the RDA data server at NCAR. The data user defines the receiving endpoint for the data transfer, which can be the main file system at a host institution, a personal workstation, or a laptop. Once initiated, the data transfer runs as an unattended background process, and Globus ensures that the transfer is accurately fulfilled. Users can monitor the data transfer progress on the Globus web interface and optionally receive an email notification once it is complete. Globus also provides a command-line interface to support scripted transfers, which can be useful when embedded in data processing workflows.
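
    A minimal sketch of scripting such a transfer with the Globus Python SDK (globus-sdk). The client ID, endpoint UUIDs, and paths are placeholders, and the OAuth2 login flow is reduced to its essential calls:

    ```python
    # Submit an unattended endpoint-to-endpoint transfer via the Globus SDK.
    import globus_sdk

    CLIENT_ID = "your-native-app-client-id"  # placeholder
    auth = globus_sdk.NativeAppAuthClient(CLIENT_ID)
    auth.oauth2_start_flow()
    print("Login at:", auth.oauth2_get_authorize_url())
    tokens = auth.oauth2_exchange_code_for_tokens(input("Auth code: "))

    transfer_tokens = tokens.by_resource_server["transfer.api.globus.org"]
    tc = globus_sdk.TransferClient(
        authorizer=globus_sdk.AccessTokenAuthorizer(transfer_tokens["access_token"]))

    SRC = "rda-endpoint-uuid"   # hypothetical RDA server endpoint
    DST = "my-laptop-uuid"      # hypothetical personal endpoint
    tdata = globus_sdk.TransferData(tc, SRC, DST, label="RDA pull")
    tdata.add_item("/ds633.0/file.grib2", "/data/file.grib2")  # illustrative paths

    task = tc.submit_transfer(tdata)  # Globus runs and retries it unattended
    print("Task ID:", task["task_id"])
    ```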

  2. A research-based child welfare employee selection protocol: strengthening retention of the workforce.

    PubMed

    Ellett, Alberta J; Ellett, Chad D; Ellis, Jacquelyn; Lerner, Betsy

    2009-01-01

    This article describes the development and initial implementation of a new employee selection protocol (ESP) for child welfare grounded in the results of recent large-scale employee retention studies and a set of research-based, minimally essential knowledge, skills, abilities, and values. The complete ESP consists of a sequenced set of Web- and site-based assessment processes and procedures for potential applicants. Using the ESP, applicants and employers make informed decisions about the goodness of fit between the applicant and the demands of a career in child welfare. To date, the new ESP has been piloted in three Georgia Division of Family and Children Services (DFCS) regions and implemented by all nine colleges and universities participating in IV-E child welfare education programs. Evaluation data collected from students and new employees in one DFCS region strongly support the value of the ESP Web-based activities in helping applicants make a more informed decision about whether to apply for the IV-E stipends and child welfare positions. Feedback from trained ESP assessors supports the value of various ESP activities. A major goal of implementing the ESP is to select more professionally committed and highly qualified applicants to strengthen employee retention and outcomes for children and families.

  3. Enhancing UCSF Chimera through web services

    PubMed Central

    Huang, Conrad C.; Meng, Elaine C.; Morris, John H.; Pettersen, Eric F.; Ferrin, Thomas E.

    2014-01-01

    Integrating access to web services with desktop applications allows for an expanded set of application features, including performing computationally intensive tasks and convenient searches of databases. We describe how we have enhanced UCSF Chimera (http://www.rbvi.ucsf.edu/chimera/), a program for the interactive visualization and analysis of molecular structures and related data, through the addition of several web services (http://www.rbvi.ucsf.edu/chimera/docs/webservices.html). By streamlining access to web services, including the entire job submission, monitoring and retrieval process, Chimera makes it simpler for users to focus on their science projects rather than data manipulation. Chimera uses Opal, a toolkit for wrapping scientific applications as web services, to provide scalable and transparent access to several popular software packages. We illustrate Chimera's use of web services with an example workflow that interleaves use of these services with interactive manipulation of molecular sequences and structures, and we provide an example Python program to demonstrate how easily Opal-based web services can be accessed from within an application. Web server availability: http://webservices.rbvi.ucsf.edu/opal2/dashboard?command=serviceList. PMID:24861624
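
    The submit/poll/retrieve pattern that Chimera automates can be sketched against an Opal-wrapped service with a generic SOAP client. The tool URL, operation names, and status codes below follow the Opal toolkit's general pattern but should be treated as assumptions:

    ```python
    # Submit a job to an Opal-wrapped tool, poll it, then fetch outputs.
    # Operation names, signatures, and status codes are assumed here;
    # consult the service's WSDL (see the dashboard URL above) for specifics.
    from zeep import Client
    import time

    svc = Client("http://webservices.rbvi.ucsf.edu/opal2/services/SomeTool?wsdl").service  # hypothetical tool

    job = svc.launchJob(argList="input.fa")        # assumed operation and signature
    TERMINAL = {4, 8}                              # assumed GRAM-style codes: 4=failed, 8=done
    while svc.queryStatus(job.jobID).code not in TERMINAL:
        time.sleep(10)                             # poll until a terminal state
    print(svc.getOutputs(job.jobID))               # URLs of the job's output files
    ```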

  4. Multicast for savings in cache-based video distribution

    NASA Astrophysics Data System (ADS)

    Griwodz, Carsten; Zink, Michael; Liepert, Michael; On, Giwon; Steinmetz, Ralf

    1999-12-01

    Internet video-on-demand (VoD) today streams videos directly from server to clients, because re-distribution is not established yet. Intranet solutions exist but are typically managed centrally. Caching may overcome these management needs; however, existing web caching strategies are not applicable because they were designed for different conditions. We propose movie distribution by means of caching, and study its feasibility from the service providers' point of view. We introduce the combination of our reliable multicast protocol LCRTP for caching hierarchies with our enhancement to the patching technique for bandwidth-friendly true VoD, without depending on network resource guarantees.

  5. DelPhi web server v2: incorporating atomic-style geometrical figures into the computational protocol.

    PubMed

    Smith, Nicholas; Witham, Shawn; Sarkar, Subhra; Zhang, Jie; Li, Lin; Li, Chuan; Alexov, Emil

    2012-06-15

    A new edition of the DelPhi web server, DelPhi web server v2, has been released to include atomic-style presentation of geometrical figures. These geometrical objects can be used to model nano-size objects together with real biological macromolecules. The position and size of an object can be manipulated by the user in real time until the desired results are achieved. The server fixes structural defects, adds hydrogen atoms and calculates electrostatic energies and the corresponding electrostatic potential and ionic distributions. The web server follows a client-server architecture built on PHP and HTML and utilizes the DelPhi software. The computation is carried out on a supercomputer cluster, and results are returned to the user via the HTTP protocol, including the ability to visualize the structure and corresponding electrostatic potential via a Jmol implementation. The DelPhi web server is available from http://compbio.clemson.edu/delphi_webserver.

  6. Real-time GIS data model and sensor web service platform for environmental data management.

    PubMed

    Gong, Jianya; Geng, Jing; Chen, Zeqiang

    2015-01-09

    Effective environmental data management is meaningful for human health. In the past, environmental data management involved developing a specific environmental data management system, but this method often lacked real-time data retrieval and sharing/interoperating capability. With the development of information technology, a Geospatial Service Web method is proposed that can be employed for environmental data management. The purpose of this study is to determine a method to realize environmental data management under the Geospatial Service Web framework. To this end, a real-time GIS (Geographic Information System) data model and a Sensor Web service platform are proposed: the real-time GIS data model manages real-time data, and the Sensor Web service platform, built on Sensor Web technologies, is implemented to support the realization of the model. Real-time environmental data, such as meteorological data, air quality data, soil moisture data, soil temperature data, and landslide data, are managed in the Sensor Web service platform. In addition, two use cases of real-time air quality monitoring and real-time soil moisture monitoring based on the real-time GIS data model in the Sensor Web service platform are realized and demonstrated, with total processing times of 3.7 s and 9.2 s, respectively. The experimental results show that integrating a real-time GIS data model with a Sensor Web service platform is an effective way to manage environmental data under the Geospatial Service Web framework.
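
    A minimal sketch of the kind of request such a Sensor Web platform serves: an OGC Sensor Observation Service GetObservation call in KVP (key-value pair) form. The service URL, offering, and observed property are placeholders:

    ```python
    # Fetch observations from an OGC SOS endpoint via a KVP GET request.
    import requests

    params = {
        "service": "SOS",
        "version": "2.0.0",
        "request": "GetObservation",
        "offering": "air_quality_station_01",       # hypothetical offering
        "observedProperty": "pm2_5_concentration",   # hypothetical property
        "temporalFilter": "om:phenomenonTime,2015-01-01T00:00:00Z/2015-01-02T00:00:00Z",
    }
    resp = requests.get("http://sensorweb.example.org/sos", params=params)
    print(resp.status_code, resp.headers.get("Content-Type"))
    # The response is an O&M (Observations & Measurements) XML document that
    # the client parses into time-stamped values.
    ```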

  7. WebTag: Web browsing into sensor tags over NFC.

    PubMed

    Echevarria, Juan Jose; Ruiz-de-Garibay, Jonathan; Legarda, Jon; Alvarez, Maite; Ayerbe, Ana; Vazquez, Juan Ignacio

    2012-01-01

    Information and Communication Technologies (ICTs) continue to overcome many of the challenges related to wireless sensor monitoring, such as the design of smarter embedded processors, the improvement of network architectures, the development of efficient communication protocols and the maximization of life-cycle autonomy. This work aims to improve the communication link used for data transmission in wireless sensor monitoring. The upstream communication link is usually based on standard IP technologies, but the downstream side is always masked by the proprietary protocols used for the wireless link (such as ZigBee, Bluetooth or RFID). This work presents a novel solution (WebTag) for direct IP-based access to a sensor tag over the Near Field Communication (NFC) technology for secure applications. WebTag allows direct web access to the sensor tag by means of a standard web browser to read the sensor data, configure the sampling rate and implement IP-based security policies. It is a further step towards the evolution of the Internet of Things paradigm.

  8. WebTag: Web Browsing into Sensor Tags over NFC

    PubMed Central

    Echevarria, Juan Jose; Ruiz-de-Garibay, Jonathan; Legarda, Jon; Álvarez, Maite; Ayerbe, Ana; Vazquez, Juan Ignacio

    2012-01-01

    Information and Communication Technologies (ICTs) continue to overcome many of the challenges related to wireless sensor monitoring, such as the design of smarter embedded processors, the improvement of network architectures, the development of efficient communication protocols and the maximization of life-cycle autonomy. This work aims to improve the communication link used for data transmission in wireless sensor monitoring. The upstream communication link is usually based on standard IP technologies, but the downstream side is always masked by the proprietary protocols used for the wireless link (such as ZigBee, Bluetooth or RFID). This work presents a novel solution (WebTag) for direct IP-based access to a sensor tag over the Near Field Communication (NFC) technology for secure applications. WebTag allows direct web access to the sensor tag by means of a standard web browser to read the sensor data, configure the sampling rate and implement IP-based security policies. It is a further step towards the evolution of the Internet of Things paradigm. PMID:23012511

  9. Integrating DXplain into a clinical information system using the World Wide Web.

    PubMed

    Elhanan, G; Socratous, S A; Cimino, J J

    1996-01-01

    The World Wide Web (WWW) offers a cross-platform environment and standard protocols that enable integration of various applications available on the Internet. The authors use the Web to facilitate interaction between their Web-based Clinical Information System and a decision-support system, DXplain, at the Massachusetts General Hospital, using local architecture and Common Gateway Interface programs. The current application translates patients' laboratory test results into DXplain's terms to generate diagnostic hypotheses. Two different access methods are utilized for this model: Hypertext Transfer Protocol (HTTP) and TCP/IP function calls. While clinical aspects cannot be evaluated as yet, the model demonstrates the potential of Web-based applications for interaction and integration and how local architecture, with a controlled vocabulary server, can further facilitate such integration. This model also serves to demonstrate some of the limitations of the current WWW technology and identifies control over Web resources and their utilization, as well as liability, as possible obstacles to further integration.

  10. Similarity Based Semantic Web Service Match

    NASA Astrophysics Data System (ADS)

    Peng, Hui; Niu, Wenjia; Huang, Ronghuai

    Semantic web service discovery aims at returning the most closely matching advertised services to the service requester by comparing the semantics of the request with those of each advertised service. The semantics of a web service are described in terms of inputs, outputs, preconditions and results in the Ontology Web Language for Services (OWL-S), formalized by the W3C. In this paper we propose an algorithm that calculates the semantic similarity of two services as a weighted average of their input and output similarities. A case study and applications show the effectiveness of our algorithm in service matching.
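
    A toy rendition of the proposed matching idea, with a stub concept-similarity function standing in for the paper's OWL-S ontology-based measure:

    ```python
    # Score a candidate service against a request as a weighted average of
    # input-similarity and output-similarity. concept_similarity() is a stub;
    # the paper derives it from ontology relations. All names are invented.
    def concept_similarity(a: str, b: str) -> float:
        if a == b:
            return 1.0
        return 0.5 if a.split(".")[0] == b.split(".")[0] else 0.0

    def best_pairing_score(request_terms, service_terms):
        # Greedy average: each requested concept takes its closest counterpart.
        if not request_terms:
            return 1.0
        if not service_terms:
            return 0.0
        return sum(max(concept_similarity(r, s) for s in service_terms)
                   for r in request_terms) / len(request_terms)

    def service_match(request, service, w_in=0.5, w_out=0.5):
        sim_in = best_pairing_score(request["inputs"], service["inputs"])
        sim_out = best_pairing_score(request["outputs"], service["outputs"])
        return w_in * sim_in + w_out * sim_out   # weighted average of the two

    request = {"inputs": ["geo.City"], "outputs": ["weather.Forecast"]}
    service = {"inputs": ["geo.City", "geo.Country"], "outputs": ["weather.Forecast"]}
    print(service_match(request, service))   # 1.0 for this toy ontology
    ```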

  11. Boverhof's App Earns Honorable Mention in Amazon's Web Services

    Science.gov Websites

    Boverhof's app earned an honorable mention in the Amazon Web Services (AWS) EC2 Spotathon; Amazon officially announced the competition winners on Monday.

  12. Linked Data: what does it offer Earth Sciences?

    NASA Astrophysics Data System (ADS)

    Cox, Simon; Schade, Sven

    2010-05-01

    'Linked Data' is a current buzz-phrase promoting access to various forms of data on the internet. It starts from the two principles that have underpinned the architecture and scalability of the World Wide Web: 1. Uniform Resource Identifiers (URIs) - using the HTTP protocol, which is supported by the DNS system. 2. Hypertext - in which URIs of related resources are embedded within a document. Browsing is the key mode of interaction, with traversal of links between resources under control of the client. Linked Data also adds, or re-emphasizes: • Content negotiation - whereby the client uses HTTP headers to tell the service what representation of a resource is acceptable, • Semantic Web principles - formal semantics for links, following the RDF data model and encoding, and • The 'mashup' effect - in which original and unexpected value may emerge from reuse of data, even if published in raw or unpolished form. Linked Data promotes typed links to all kinds of data, so it is where the semantic web meets the 'deep web', i.e. resources which may be accessed using web protocols but are in representations not indexed by search engines. Earth sciences are data rich, but with a strong legacy of specialized formats managed and processed by disconnected applications. However, most contemporary research problems require a cross-disciplinary approach, in which the heterogeneity resulting from that legacy is a significant challenge. In this context, Linked Data clearly has much to offer the earth sciences. But there are some important questions to answer.

    What is a resource? Most earth science data is organized in arrays and databases. A subset useful for a particular study is usually identified by a parameterized query. The Linked Data paradigm emerged from the world of documents, and will often only resolve data-sets. It is impractical to create even nested navigation resources containing links to all potentially useful objects or subsets. From the viewpoint of human user interfaces, the browse metaphor, which has been such an important part of the success of the web, must be augmented with other interaction mechanisms, including query.

    What are the impacts on search and metadata? Hypertext provides links selected by the page provider. However, science should endeavor to be exhaustive in its use of data. Resource discovery through links must be supplemented by more systematic data discovery through search. Conversely, the crawlers that generate search indexes must be fed by resource providers (a) serving navigation pages with links to every dataset and (b) adding enough 'metadata' (semantics) on each link to effectively populate the indexes. Linked Data makes this easier due to its integration with semantic web technologies, including structured vocabularies.

    What is the relation between structured data and Linked Data? Linked Data has focused on web pages (primarily HTML) for human browsing, and RDF for semantics, assuming that other representations are opaque. However, this overlooks the wealth of XML data on the web, some of which is structured according to XML Schemas that provide semantics. Technical applications can use content negotiation to get a structured representation, and exploit its semantics. Particularly relevant for earth sciences are data representations based on OGC Geography Markup Language (GML), such as GeoSciML, O&M and MOLES. GML was strongly influenced by RDF, and typed links are intrinsic: xlink:href plays the role that rdf:resource does in RDF representations. Services which expose GML-formatted resources (such as the OGC Web Feature Service) are a prototype of Linked Data.

    Giving credit where it is due. Organizations investing in data collection may be reluctant to publish the raw data prior to completing an initial analysis. To encourage early data publication the system must provide suitable incentives, and citation analysis must recognize the increasing diversity of publication routes and forms. Linked Data makes it easier to include rich citation information when data is both published and used.
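
    The content-negotiation step emphasized above can be illustrated in a few lines: the same URI yields an HTML page for browsing or an RDF serialization for machine clients depending on the Accept header. DBpedia is used here only as a well-known Linked Data endpoint:

    ```python
    # Retrieve two representations of one Linked Data resource by varying
    # the Accept header; the server redirects to the matching document.
    import requests

    uri = "http://dbpedia.org/resource/Geology"
    html = requests.get(uri, headers={"Accept": "text/html"})
    turtle = requests.get(uri, headers={"Accept": "text/turtle"})
    print(html.headers.get("Content-Type"))    # an HTML description page
    print(turtle.headers.get("Content-Type"))  # an RDF (Turtle) representation
    ```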

  13. Biological Web Service Repositories Review

    PubMed Central

    Urdidiales-Nieto, David; Navas-Delgado, Ismael

    2016-01-01

    Web services play a key role in bioinformatics, enabling the integration of database access and analysis algorithms. However, Web service repositories do not usually publish information on the changes made to their registered Web services. Dynamism is directly related to changes in the repositories (services registered or unregistered) and at the service level (annotation changes). Thus, users, software clients or workflow-based approaches lack enough relevant information to decide when they should review or re-execute a Web service or workflow to get updated or improved results. The dynamism of a repository could serve as a cue for workflow developers to re-check service availability and annotation changes in the services of interest to them. This paper presents a review of the most well-known Web service repositories in the life sciences, including an analysis of their dynamism. Freshness, introduced in this paper, is used as the measure of the dynamism of these repositories. PMID:27783459

  14. A Method for Transforming Existing Web Service Descriptions into an Enhanced Semantic Web Service Framework

    NASA Astrophysics Data System (ADS)

    Du, Xiaofeng; Song, William; Munro, Malcolm

    Web Services, as a new distributed system technology, have been widely adopted by industry in areas such as enterprise application integration (EAI), business process management (BPM), and virtual organisation (VO). However, the lack of semantics in the current Web Service standards has been a major barrier to service discovery and composition. In this chapter, we propose an enhanced context-based semantic service description framework (CbSSDF+) that tackles the problem and improves the flexibility of service discovery and the correctness of generated composite services. We also provide an agile transformation method to demonstrate how the various formats of Web Service descriptions on the Web can be managed and renovated step by step into CbSSDF+-based service descriptions without a large amount of engineering work. At the end of the chapter, we evaluate the applicability of the transformation method and the effectiveness of CbSSDF+ through a series of experiments.

  15. Development of a Web-Accessible Population Pharmacokinetic Service-Hemophilia (WAPPS-Hemo): Study Protocol.

    PubMed

    Iorio, Alfonso; Keepanasseril, Arun; Foster, Gary; Navarro-Ruan, Tamara; McEneny-King, Alanna; Edginton, Andrea N; Thabane, Lehana

    2016-12-15

    Individual pharmacokinetic assessment is a critical component of tailored prophylaxis for hemophilia patients. Population pharmacokinetics allows the use of sparse individual data, thus simplifying individual pharmacokinetic studies. Implementing population pharmacokinetics capacity for the hemophilia community is beyond individual reach and requires a system effort. The Web-Accessible Population Pharmacokinetic Service-Hemophilia (WAPPS-Hemo) project aims to assemble a database of patient pharmacokinetic data for all existing factor concentrates, develop and validate population pharmacokinetics models, and integrate these models within a Web-based calculator for individualized pharmacokinetic estimation in patients at participating treatment centers. Individual pharmacokinetic studies on factor VIII and IX concentrates will be sourced from pharmaceutical companies and independent investigators. All factor concentrate manufacturers, hemophilia treatment centers (HTCs), and independent investigators (identified via a systematic review of the literature) that have pharmacokinetic data on file and are willing to contribute full or sparse pharmacokinetic data will be eligible for participation. Multicompartmental modeling will be performed using a mixed-model approach for derivation and Bayesian forecasting for estimation from individual sparse data. NONMEM (ICON Development Solutions) will be used as the modeling software. The WAPPS-Hemo research network has been launched and is currently joined by 30 HTCs from across the world. We have gathered dense individual pharmacokinetic data on 878 subjects, including several replicates, on 21 different molecules from 17 different sources. We have collected sparse individual pharmacokinetic data on 289 subjects from the participating centers through the testing phase of the WAPPS-Hemo Web interface. We have developed prototypal population pharmacokinetics models for 11 molecules. The WAPPS-Hemo website (available at www.wapps-hemo.org, version 2.4), with core functionalities allowing hemophilia treaters to obtain individual pharmacokinetic estimates on sparse data points after 1 or more infusions of a factor concentrate, was launched for use within the research network in July 2015. The WAPPS-Hemo project and research network aim to make it easier to perform individual pharmacokinetic assessments on a reduced number of plasma samples by adopting a population pharmacokinetics approach. The project will also gather data to substantially enhance the current knowledge about factor concentrate pharmacokinetics and sources of its variability in target populations. ClinicalTrials.gov NCT02061072; https://clinicaltrials.gov/ct2/show/NCT02061072 (Archived by WebCite at http://www.webcitation.org/6mRK9bKP6).

  16. Web Transfer Over Satellites Being Improved

    NASA Technical Reports Server (NTRS)

    Allman, Mark

    1999-01-01

    Extensive research conducted by NASA Lewis Research Center's Satellite Networks and Architectures Branch and Ohio University has demonstrated performance improvements in World Wide Web transfers over satellite-based networks. The use of a new version of the Hypertext Transfer Protocol (HTTP) reduced the time required to load web pages over a single Transmission Control Protocol (TCP) connection traversing a satellite channel. However, an older technique of simultaneously making multiple requests of a given server has been shown to provide even faster transfer times. Unfortunately, the use of multiple simultaneous requests has been shown to be harmful to the network in general. Therefore, we are developing new mechanisms for the HTTP protocol that may allow a single request at any given time to perform as well as, or better than, multiple simultaneous requests. In the course of this study, we also demonstrated that the time for web pages to load is at least as short via a satellite link as it is via a standard 28.8-kbps dialup modem channel. This demonstrates that satellites are a viable means of accessing the Internet.
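
    The core idea behind these HTTP mechanisms can be sketched with the Python standard library: reusing one TCP connection for several requests avoids repeated connection setup, a penalty that grows with the link's round-trip time. The host and paths are placeholders:

    ```python
    # Issue several HTTP requests over a single persistent TCP connection.
    import http.client

    conn = http.client.HTTPConnection("www.example.org")  # one TCP connection
    for path in ("/", "/style.css", "/logo.png"):
        conn.request("GET", path)            # requests share the connection
        resp = conn.getresponse()
        body = resp.read()                   # must drain before the next request
        print(path, resp.status, len(body))
    conn.close()
    # Opening a new TCP connection per request would instead pay a three-way
    # handshake plus TCP slow start each time, which is especially costly on
    # long-delay satellite links.
    ```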

  17. The GeoDataPortal: A Standards-based Environmental Modeling Data Access and Manipulation Toolkit

    NASA Astrophysics Data System (ADS)

    Blodgett, D. L.; Kunicki, T.; Booth, N.; Suftin, I.; Zoerb, R.; Walker, J.

    2010-12-01

    Environmental modelers from fields of study such as climatology, hydrology, geology, and ecology rely on many data sources and processing methods that are common across these disciplines. Interest in inter-disciplinary, loosely coupled modeling and data sharing is increasing among scientists from the USGS, other agencies, and academia. For example, hydrologic modelers need downscaled climate change scenarios and land cover data summarized for the watersheds they are modeling. Subsequently, ecological modelers are interested in soil moisture information for a particular habitat type as predicted by the hydrologic modeler. The USGS Center for Integrated Data Analytics Geo Data Portal (GDP) project seeks to facilitate this loose model coupling and data sharing through broadly applicable open-source web processing services. These services simplify and streamline the time-consuming and resource-intensive tasks that are barriers to inter-disciplinary collaboration. The GDP framework includes a catalog describing projects, models, data, processes, and how they relate. Using newly introduced data, or sources already known to the catalog, the GDP facilitates access to sub-sets and common derivatives of data in numerous formats on disparate web servers. The GDP performs many of the critical functions needed to summarize data sources into modeling units regardless of scale or volume. A user can specify their analysis zones or modeling units as an Open Geospatial Consortium (OGC) standard Web Feature Service (WFS). Utilities to cache Shapefiles and other common GIS input formats have been developed to aid in making the geometry available for processing via WFS. Dataset access in the GDP relies primarily on the Unidata NetCDF-Java library’s common data model. Data transfer relies on methods provided by Unidata’s Thematic Real-time Environmental Data Distribution System Data Server (TDS). TDS services of interest include the Open-source Project for a Network Data Access Protocol (OPeNDAP) standard for gridded time series, the OGC’s Web Coverage Service for high-density static gridded data, and Unidata’s CDM-remote for point time series. OGC WFS and Sensor Observation Service (SOS) are being explored as mechanisms to serve and access static or time series data attributed to vector geometry. A set of standardized XML-based output formats allows easy transformation into a wide variety of “model-ready” formats. Interested users will have the option of submitting custom transformations to the GDP or transforming the XML output as a post-process. The GDP project aims to support simple, rapid development of thin user interfaces (like web portals) to commonly needed environmental modeling-related data access and manipulation tools. Standalone, service-oriented components of the GDP framework provide the metadata cataloging, data subset access, and spatial-statistics calculations needed to support interdisciplinary environmental modeling.
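
    A brief sketch of the style of subset access the GDP builds on: through OPeNDAP, a client opens a remote gridded dataset lazily and pulls only the slice it needs. The dataset URL and the variable and coordinate names are assumptions:

    ```python
    # Subset a remote gridded dataset over OPeNDAP with xarray
    # (requires the netCDF4 backend for DAP URLs).
    import xarray as xr

    url = "http://thredds.example.gov/thredds/dodsC/precip_daily.nc"  # hypothetical TDS endpoint
    ds = xr.open_dataset(url)                    # reads metadata only, not the data

    subset = ds["precipitation"].sel(            # assumed variable/coordinate names
        lat=slice(40, 45), lon=slice(-95, -90),
        time=slice("2010-06-01", "2010-06-30"),
    )
    values = subset.load()                       # only this subset crosses the network
    print(values.shape)
    ```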

  18. Enhancing the AliEn Web Service Authentication

    NASA Astrophysics Data System (ADS)

    Zhu, Jianlin; Saiz, Pablo; Carminati, Federico; Betev, Latchezar; Zhou, Daicui; Mendez Lorenzo, Patricia; Grigoras, Alina Gabriela; Grigoras, Costin; Furano, Fabrizio; Schreiner, Steffen; Vladimirovna Datskova, Olga; Sankar Banerjee, Subho; Zhang, Guoping

    2011-12-01

    Web Services are an XML-based technology that allows applications to communicate with each other across disparate systems. Web Services are becoming the de facto standard enabling interoperability between heterogeneous processes and systems. AliEn2 is a grid environment based on web services. The AliEn2 services can be divided into three categories: central services, deployed once per organization; site services, deployed at each of the participating centers; and Job Agents, running automatically on the worker nodes. A security model to protect these services is essential for the whole system. Current web server implementations, such as Apache, are not directly suitable for use within the grid environment: Apache with mod_ssl and OpenSSL supports only standard X.509 certificates, whereas in the grid environment the common credential is the proxy certificate, which provides restricted rights and delegation. An authentication framework was developed for the AliEn2 web services to give the Apache Web Server the ability to accept both X.509 certificates and proxy certificates from the client side. The authentication framework also allows the generation of access-control policies to limit access to the AliEn2 web services.
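
    The client side of certificate-based authentication like AliEn2's can be sketched with the Python requests library; the server-side acceptance of proxy certificates is the part the paper actually addresses. The URL and file paths are placeholders:

    ```python
    # Present an X.509 (or proxy) credential to an HTTPS service and verify
    # the server against a trusted CA directory.
    import requests

    resp = requests.get(
        "https://alien.example.org/service",       # hypothetical service URL
        cert="/tmp/x509up_u1000",                  # combined PEM: certificate + key
        verify="/etc/grid-security/certificates",  # hashed CA directory for the server chain
    )
    print(resp.status_code)
    # Accepting *proxy* certificates on the server side is the hard part the
    # paper addresses: stock Apache/OpenSSL setups validate only ordinary
    # end-entity X.509 chains.
    ```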

  19. WEB Services Networks and Technological Hybrids — The Integration Challenges of WAN Distributed Computing for ASP Providers

    NASA Astrophysics Data System (ADS)

    Mroczkiewicz, Pawel

    The need to integrate the information systems and office software that exist in organizations has a long history. Such solutions reach back to the older generation of data-interchange standards, EDI (Electronic Data Interchange) and EDIFACT, initiated in 1988 and evolving dynamically ever since (S. Michalski, M. Suskiewicz, 1995). EDI was typically used to convert documents into the native formats processed by applications. This caused problems with binary files, and the communication mechanisms had to be modified each time new documents or applications were added. Compared with earlier communication mechanisms, however, EDI was a great step forward: it was the first large-scale attempt to define standards of data interchange between applications in business transactions (V. Leyland, 1995, p. 47).

  20. Integrating sequence and structural biology with DAS

    PubMed Central

    Prlić, Andreas; Down, Thomas A; Kulesha, Eugene; Finn, Robert D; Kähäri, Andreas; Hubbard, Tim JP

    2007-01-01

    Background The Distributed Annotation System (DAS) is a network protocol for exchanging biological data. It is frequently used to share annotations of genomes and protein sequences. Results Here we present several extensions to the current DAS 1.5 protocol. These provide new commands to share alignments and three-dimensional molecular structure data, add the possibility of registration and discovery of DAS servers, and provide a convention for serving different types of data plots. We present examples of web sites and applications that use the new extensions. We operate a public registry of DAS sources, which now includes entries for more than 250 distinct sources. Conclusion Our DAS extensions are essential for the management of the growing number of services and the exchange of diverse biological data sets. In addition, the extensions allow new types of applications to be developed and scientific questions to be addressed. The registry of DAS sources is available at . PMID:17850653
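
    A minimal sketch of a DAS 1.5-style request, in which annotations for a genomic region are fetched over plain HTTP and returned as XML. The server URL and data-source name are placeholders:

    ```python
    # Query a DAS features command and walk the returned DASGFF XML.
    import requests
    import xml.etree.ElementTree as ET

    url = "http://das.example.org/das/hg18/features"      # hypothetical DAS source
    resp = requests.get(url, params={"segment": "1:100000,200000"})
    root = ET.fromstring(resp.content)                    # DASGFF XML document
    for feature in root.iter("FEATURE"):
        print(feature.get("id"), feature.get("label"))    # annotation per feature
    ```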

  1. A New Cloud Architecture of Virtual Trusted Platform Modules

    NASA Astrophysics Data System (ADS)

    Liu, Dongxi; Lee, Jack; Jang, Julian; Nepal, Surya; Zic, John

    We propose and implement a cloud architecture of virtual Trusted Platform Modules (TPMs) to improve the usability of TPMs. In this architecture, virtual TPMs can be obtained from the TPM cloud on demand. Hence, TPM functionality is available to applications that do not have physical TPMs in their local platforms. Moreover, the TPM cloud allows users to access their keys and data in the same virtual TPM even if they move to untrusted platforms. The TPM cloud is easy to access for applications in different languages since cloud computing delivers services over standard protocols. The functionality of the TPM cloud is demonstrated by applying it to implement the Needham-Schroeder public-key protocol for web authentication, such that the strong security provided by TPMs is integrated into high-level applications. The chain of trust based on the TPM cloud is discussed, and the security properties of the virtual TPMs in the cloud are analyzed.
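
    For reference, the Needham-Schroeder public-key exchange mentioned above, sketched as a message flow with a stub encrypt() in place of real TPM-backed RSA operations; the responder identity in message 2 is Lowe's well-known fix to the original protocol:

    ```python
    # The three-message Needham-Schroeder(-Lowe) public-key flow, as data
    # structures only; a real deployment would use RSA/OAEP under TPM keys.
    import os

    def encrypt(public_key, payload):      # stub standing in for real encryption
        return ("enc", public_key, payload)

    na, nb = os.urandom(16), os.urandom(16)      # fresh nonces

    msg1 = encrypt("pk_B", (na, "Alice"))        # A -> B: nonce Na plus A's identity
    msg2 = encrypt("pk_A", (na, nb, "Bob"))      # B -> A: echo Na, add Nb and B (Lowe's fix)
    msg3 = encrypt("pk_B", (nb,))                # A -> B: echo Nb, proving A decrypted msg2

    # After step 3 each side has seen its own nonce returned under the peer's
    # key, so both believe they share (na, nb) with the intended party.
    ```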

  2. The impact of web services at the IRIS DMC

    NASA Astrophysics Data System (ADS)

    Weekly, R. T.; Trabant, C. M.; Ahern, T. K.; Stults, M.; Suleiman, Y. Y.; Van Fossen, M.; Weertman, B.

    2015-12-01

    The IRIS Data Management Center (DMC) has served the seismological community for nearly 25 years. In that time we have offered data and information from our archive using a variety of mechanisms, ranging from email-based to desktop applications to web applications and web services. Of these, web services have quickly become the primary method for data extraction at the DMC. In 2011, the first full year of operation, web services accounted for over 40% of the data shipped from the DMC. In 2014, ~450 TB of data was delivered directly to users through web services, representing nearly 70% of all shipments from the DMC that year. In addition to handling requests directly from users, the DMC switched all data extraction methods to use web services in 2014. On average the DMC now handles between 10 and 20 million requests per day submitted to web service interfaces. The rapid adoption of web services is attributed to the many advantages they bring. For users, they provide on-demand data using an interface technology, HTTP, that is widely supported in nearly every computing environment and language. These characteristics, combined with human-readable documentation and existing tools, make integration of data access into existing workflows relatively easy. For the DMC, the web services provide an abstraction layer to internal repositories, allowing for concentrated optimization of extraction workflows and easier evolution of those repositories. Lending further support to the DMC's push in this direction, the core web services for station metadata, time series data and event parameters were adopted as standards by the International Federation of Digital Seismograph Networks (FDSN). We expect to continue enhancing existing services and building new capabilities for this platform. For example, the DMC has created a federation system and tools allowing researchers to discover and collect seismic data from data centers running the FDSN-standardized services. A future capability will leverage the DMC's MUSTANG project to select data based on data quality measurements. Within five years, the DMC's web services have proven to be a robust and flexible platform that enables continued growth for the DMC. We expect continued enhancements and adoption of web services.
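
    As an example of the FDSN-standardized services in action, the ObsPy client fetches waveforms and events from the DMC over plain HTTP. The network, station, and query parameters below are illustrative:

    ```python
    # Pull waveform and event data from FDSN web services with ObsPy.
    from obspy import UTCDateTime
    from obspy.clients.fdsn import Client

    client = Client("IRIS")                       # resolves to IRIS DMC service endpoints
    t0 = UTCDateTime("2014-01-01T00:00:00")
    stream = client.get_waveforms(network="IU", station="ANMO",
                                  location="00", channel="BHZ",
                                  starttime=t0, endtime=t0 + 3600)
    print(stream)                                 # one hour of broadband data

    catalog = client.get_events(starttime=t0, endtime=t0 + 86400, minmagnitude=6)
    print(catalog)                                # results from the event service
    ```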

  3. Sensor Alerting Capability

    NASA Astrophysics Data System (ADS)

    Henriksson, Jakob; Bermudez, Luis; Satapathy, Goutam

    2013-04-01

    There is a large amount of sensor data generated today by various sensors, from in-situ buoys to mobile underwater gliders. Providing sensor data to users through standardized services, language and data models is the promise of OGC's Sensor Web Enablement (SWE) initiative. As the amount of data grows, it is becoming difficult for data providers, planners and managers to ensure the reliability of data and services and to monitor critical data changes. Intelligent Automation Inc. (IAI) is developing a net-centric alerting capability to address these issues. The capability is built on Sensor Observation Services (SOSs), which are used to collect and monitor sensor data. Alerts can be configured at the service level and at the sensor data level. For example, the system can alert on irregular data-delivery events or on a geo-temporal statistic of sensor data crossing a preset threshold. The capability provides multiple delivery mechanisms and protocols, including traditional techniques such as email and RSS. With this capability decision makers can monitor their assets and data streams, correct failures or be alerted about an approaching phenomenon.

  4. caCORE: a common infrastructure for cancer informatics.

    PubMed

    Covitz, Peter A; Hartel, Frank; Schaefer, Carl; De Coronado, Sherri; Fragoso, Gilberto; Sahni, Himanso; Gustafson, Scott; Buetow, Kenneth H

    2003-12-12

    Sites with substantive bioinformatics operations are challenged to build data processing and delivery infrastructure that provides reliable access and enables data integration. Locally generated data must be processed and stored such that relationships to external data sources can be presented. Consistency and comparability across data sets require annotation with controlled vocabularies and, further, metadata standards for data representation. Programmatic access to the processed data should be supported to ensure the maximum possible value is extracted. Confronted with these challenges at the National Cancer Institute Center for Bioinformatics, we decided to develop a robust infrastructure for data management and integration that supports advanced biomedical applications. We have developed an interconnected set of software and services called caCORE. Enterprise Vocabulary Services (EVS) provide controlled vocabulary, dictionary and thesaurus services. The Cancer Data Standards Repository (caDSR) provides a metadata registry for common data elements. Cancer Bioinformatics Infrastructure Objects (caBIO) implements an object-oriented model of the biomedical domain and provides Java, Simple Object Access Protocol and HTTP-XML application programming interfaces. caCORE has been used to develop scientific applications that bring together data from distinct genomic and clinical science sources. caCORE downloads and web interfaces can be accessed from links on the caCORE web site (http://ncicb.nci.nih.gov/core). caBIO software is distributed under an open source license that permits unrestricted academic and commercial use. Vocabulary and metadata content in the EVS and caDSR, respectively, is similarly unrestricted, and is available through web applications and FTP downloads. http://ncicb.nci.nih.gov/core/publications contains links to the caBIO 1.0 class diagram and the caCORE 1.0 Technical Guide, which provide detailed information on the present caCORE architecture, data sources and APIs. Updated information appears on a regular basis on the caCORE web site (http://ncicb.nci.nih.gov/core).

  5. The Firegoose: two-way integration of diverse data from different bioinformatics web resources with desktop applications

    PubMed Central

    Bare, J Christopher; Shannon, Paul T; Schmid, Amy K; Baliga, Nitin S

    2007-01-01

    Background Information resources on the World Wide Web play an indispensable role in modern biology. But integrating data from multiple sources is often encumbered by the need to reformat data files, convert between naming systems, or perform ongoing maintenance of local copies of public databases. Opportunities for new ways of combining and re-using data are arising as a result of the increasing use of web protocols to transmit structured data. Results The Firegoose, an extension to the Mozilla Firefox web browser, enables data transfer between web sites and desktop tools. As a component of the Gaggle integration framework, Firegoose can also exchange data with Cytoscape, the R statistical package, Multiexperiment Viewer (MeV), and several other popular desktop software tools. Firegoose adds the capability to easily use local data to query KEGG, EMBL STRING, DAVID, and other widely-used bioinformatics web sites. Query results from these web sites can be transferred to desktop tools for further analysis with a few clicks. Firegoose acquires data from the web by screen scraping, microformats, embedded XML, or web services. We define a microformat, which allows structured information compatible with the Gaggle to be embedded in HTML documents. We demonstrate the capabilities of this software by performing an analysis of the genes activated in the microbe Halobacterium salinarum NRC-1 in response to anaerobic environments. Starting with microarray data, we explore functions of differentially expressed genes by combining data from several public web resources and construct an integrated view of the cellular processes involved. Conclusion The Firegoose incorporates Mozilla Firefox into the Gaggle environment and enables interactive sharing of data between diverse web resources and desktop software tools without maintaining local copies. Additional web sites can be incorporated easily into the framework using the scripting platform of the Firefox browser. Performing data integration in the browser allows the excellent search and navigation capabilities of the browser to be used in combination with powerful desktop tools. PMID:18021453

  6. The Firegoose: two-way integration of diverse data from different bioinformatics web resources with desktop applications.

    PubMed

    Bare, J Christopher; Shannon, Paul T; Schmid, Amy K; Baliga, Nitin S

    2007-11-19

    Information resources on the World Wide Web play an indispensable role in modern biology. But integrating data from multiple sources is often encumbered by the need to reformat data files, convert between naming systems, or perform ongoing maintenance of local copies of public databases. Opportunities for new ways of combining and re-using data are arising as a result of the increasing use of web protocols to transmit structured data. The Firegoose, an extension to the Mozilla Firefox web browser, enables data transfer between web sites and desktop tools. As a component of the Gaggle integration framework, Firegoose can also exchange data with Cytoscape, the R statistical package, Multiexperiment Viewer (MeV), and several other popular desktop software tools. Firegoose adds the capability to easily use local data to query KEGG, EMBL STRING, DAVID, and other widely-used bioinformatics web sites. Query results from these web sites can be transferred to desktop tools for further analysis with a few clicks. Firegoose acquires data from the web by screen scraping, microformats, embedded XML, or web services. We define a microformat, which allows structured information compatible with the Gaggle to be embedded in HTML documents. We demonstrate the capabilities of this software by performing an analysis of the genes activated in the microbe Halobacterium salinarum NRC-1 in response to anaerobic environments. Starting with microarray data, we explore functions of differentially expressed genes by combining data from several public web resources and construct an integrated view of the cellular processes involved. The Firegoose incorporates Mozilla Firefox into the Gaggle environment and enables interactive sharing of data between diverse web resources and desktop software tools without maintaining local copies. Additional web sites can be incorporated easily into the framework using the scripting platform of the Firefox browser. Performing data integration in the browser allows the excellent search and navigation capabilities of the browser to be used in combination with powerful desktop tools.

  7. Web service discovery among large service pools utilising semantic similarity and clustering

    NASA Astrophysics Data System (ADS)

    Chen, Fuzan; Li, Minqiang; Wu, Harris; Xie, Lingli

    2017-03-01

    With the rapid development of electronic business, Web services have attracted much attention in recent years. Enterprises can combine individual Web services to provide new value-added services. An emerging challenge is the timely discovery of close matches to service requests among large service pools. In this study, we first define a new semantic similarity measure combining functional similarity and process similarity. We then present a service discovery mechanism that utilises the new semantic similarity measure for service matching. All published Web services are pre-grouped into functional clusters prior to the matching process. For a user's service request, the discovery mechanism first identifies matching service clusters and then identifies the best matching Web services within those clusters. Experimental results show that the proposed semantic discovery mechanism performs better than a conventional lexical similarity-based mechanism.
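
    As a toy illustration of the cluster-first matching flow described above, the following Python sketch pre-groups services into functional clusters and ranks only the members of the best-matching cluster. The weighting scheme, similarity scores, and service names are invented; the paper's actual measure is computed from service descriptions.

      def combined_similarity(func_sim, proc_sim, alpha=0.6):
          # assumed weighted blend of functional and process similarity
          return alpha * func_sim + (1 - alpha) * proc_sim

      # pre-clustered service pool: cluster -> {service: (func_sim, proc_sim)}
      clusters = {
          "payments": {"PayFast": (0.9, 0.7), "QuickPay": (0.8, 0.9)},
          "shipping": {"ShipIt": (0.2, 0.3)},
      }

      def discover(top_k=2):
          # 1) pick the cluster whose best member matches the request most closely
          def cluster_score(members):
              return max(combined_similarity(f, p) for f, p in members.values())
          best = max(clusters.values(), key=cluster_score)
          # 2) rank services only within that matching cluster
          ranked = sorted(best.items(),
                          key=lambda kv: combined_similarity(*kv[1]),
                          reverse=True)
          return ranked[:top_k]

      print(discover())  # [('QuickPay', (0.8, 0.9)), ('PayFast', (0.9, 0.7))]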

  8. A verification strategy for web services composition using enhanced stacked automata model.

    PubMed

    Nagamouttou, Danapaquiame; Egambaram, Ilavarasan; Krishnan, Muthumanickam; Narasingam, Poonkuzhali

    2015-01-01

    Currently, Service-Oriented Architecture (SOA) is becoming the most popular software architecture for contemporary enterprise applications, and one crucial technique for its implementation is web services. An individual service offered by a service provider may represent limited business functionality; however, by composing individual services from different service providers, a composite service describing the intact business process of an enterprise can be made. Many new standards have been defined to address the web service composition problem, notably the Business Process Execution Language (BPEL). BPEL provides an Extensible Markup Language (XML) specification language for defining and implementing business process workflows for web services. A key problem with most realistic approaches to service composition is the verification of the composed web services; formal verification methods are required to ensure the correctness of the composed services. Few research works in the literature address the verification of web services for deterministic systems, and the existing models do not address verification properties such as dead transitions, deadlock, reachability, and safety. In this paper, a new model to verify composed web services using the Enhanced Stacked Automata Model (ESAM) is proposed. The correctness properties of the non-deterministic system are evaluated based on properties such as dead transitions, deadlock, safety, liveness, and reachability. Initially, web services are composed using the Business Process Execution Language for Web Services (BPEL4WS); the composition is converted into an ESAM (a combination of Muller Automata (MA) and Push Down Automata (PDA)) and then transformed into Promela, the input language of the Simple ProMeLa Interpreter (SPIN) tool. The model is verified using the SPIN tool, and the results show better performance in finding dead transitions and deadlocks than existing models.
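
    The properties named above (reachability, deadlock, dead transitions) are checked by exhaustive state exploration, which is what SPIN performs over a Promela model. The following Python sketch shows the idea on an invented composed-service state machine; it illustrates the check itself, not ESAM.

      from collections import deque

      # invented state machine for a composed service
      transitions = {
          "start":    ["invoke_A"],
          "invoke_A": ["invoke_B", "fail"],
          "invoke_B": ["done"],
          "fail":     [],          # dead end that is not a final state
          "done":     [],
      }
      final_states = {"done"}

      def explore(initial="start"):
          seen, queue, deadlocks = set(), deque([initial]), []
          while queue:
              state = queue.popleft()
              if state in seen:
                  continue
              seen.add(state)
              successors = transitions.get(state, [])
              if not successors and state not in final_states:
                  deadlocks.append(state)  # stuck in a non-final state
              queue.extend(successors)
          return seen, deadlocks

      reachable, deadlocks = explore()
      print("reachable:", sorted(reachable))  # all five states are reachable
      print("deadlocks:", deadlocks)          # ['fail']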

  9. Pragmatic Computing - A Semiotic Perspective to Web Services

    NASA Astrophysics Data System (ADS)

    Liu, Kecheng

    The web seems to have evolved from a syntactic web through a semantic web to a pragmatic web. This evolution conforms to the study of information and technology from the theory of semiotics. Pragmatics, concerned with the use of information in relation to context and intended purposes, is extremely important in web services and applications. Much research in pragmatics has been carried out, but at the same time, attempts and solutions have led to further questions. After reviewing the current work on the pragmatic web, the paper presents a semiotic approach to web services, particularly for request decomposition and service aggregation.

  10. U-Science (Invited)

    NASA Astrophysics Data System (ADS)

    Borne, K. D.

    2009-12-01

    The emergence of e-Science over the past decade as a paradigm for Internet-based science was an inevitable evolution of science that built upon the web protocols and access patterns that were prevalent at that time, including Web Services, XML-based information exchange, machine-to-machine communication, service registries, the Grid, and distributed data. We now see a major shift in web behavior patterns to social networks, user-provided content (e.g., tags and annotations), ubiquitous devices, user-centric experiences, and user-led activities. The inevitable accrual of these social networking patterns and protocols by scientists and science projects leads to U-Science as a new paradigm for online scientific research (i.e., ubiquitous, user-led, untethered, You-centered science). U-Science applications include components from semantic e-science (ontologies, taxonomies, folksonomies, tagging, annotations, and classification systems), which is much more than Web 2.0-based science (Wikis, blogs, and online environments like Second Life). Among the best examples of U-Science are Citizen Science projects, including Galaxy Zoo, Stardust@Home, Project Budburst, Volksdata, CoCoRaHS (the Community Collaborative Rain, Hail and Snow network), and projects utilizing Volunteer Geographic Information (VGI). There are also scientist-led projects for scientists that engage a wider community in building knowledge through user-provided content. Among the semantic-based U-Science projects for scientists are those that specifically enable user-based annotation of scientific results in databases. These include the Heliophysics Knowledgebase, BioDAS, WikiProteins, The Entity Describer, and eventually AstroDAS. Such collaborative tagging of scientific data addresses several petascale data challenges for scientists: how to find the most relevant data, how to reuse those data, how to integrate data from multiple sources, how to mine and discover new knowledge in large databases, how to represent and encode the new knowledge, and how to curate the discovered knowledge. This talk will address the emergence of U-Science as a type of Semantic e-Science, and will explore challenges, implementations, and results. Semantic e-Science and U-Science applications and concepts will be discussed within the context of one particular implementation (AstroDAS: Astronomy Distributed Annotation System) and its applicability to petascale science projects such as the LSST (Large Synoptic Survey Telescope), coming online within the next few years.

  11. The NCAR Research Data Archive's Hybrid Approach for Data Discovery and Access

    NASA Astrophysics Data System (ADS)

    Schuster, D.; Worley, S. J.

    2013-12-01

    The NCAR Research Data Archive (RDA http://rda.ucar.edu) maintains a variety of data discovery and access capabilities for its 600+ dataset collections to support the varying needs of a diverse user community. In-house developed and standards-based community tools offer services to more than 10,000 users annually. By number of users, the largest group is external and accesses the RDA through web-based protocols; internal NCAR HPC users are fewer in number but typically access a greater data volume. This paper will detail the data discovery and access services maintained by the RDA to support both user groups, and show metrics that illustrate how the community is using the services. The distributed search capability enabled by standards-based community tools, such as Geoportal and an OAI-PMH access point that serves multiple metadata standards, provides pathways for external users to initially discover RDA holdings. From here, in-house developed web interfaces leverage primary discovery-level metadata databases that support keyword and faceted searches. Internal NCAR HPC users, or those familiar with the RDA, may go directly to the dataset collection of interest and refine their search based on rich file collection metadata. Multiple levels of metadata have proven invaluable for discovery within terabyte-sized archives composed of many atmospheric or oceanic levels, hundreds of parameters, and often numerous grid and time resolutions. Once users find the data they want, their access needs may vary as well. A THREDDS data server running on targeted dataset collections enables remote file access through OPeNDAP and other web-based protocols, primarily for external users. In-house developed tools give all users the capability to submit data subset extraction and format conversion requests through scalable, HPC-based delayed-mode batch processing. Users can monitor their RDA-based data processing progress and receive instructions on how to access the data when it is ready. External users are provided with RDA server-generated scripts to download the resulting request output. Similarly, they can download native dataset collection files or partial files using Wget- or cURL-based scripts supplied by the RDA server. Internal users can access the resulting request output or native dataset collection files directly from centralized file systems.
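
    The OAI-PMH access point mentioned above follows a standard harvesting protocol, so a client can pull discovery metadata with plain HTTP requests. The sketch below uses the standard ListRecords verb and the oai_dc metadata prefix; the endpoint path is illustrative, not the RDA's documented URL.

      import xml.etree.ElementTree as ET
      import requests

      OAI_ENDPOINT = "https://rda.ucar.edu/oai"  # hypothetical endpoint path

      resp = requests.get(OAI_ENDPOINT, params={
          "verb": "ListRecords",
          "metadataPrefix": "oai_dc",  # Dublin Core, widely supported
      }, timeout=30)
      resp.raise_for_status()

      root = ET.fromstring(resp.content)
      ns = {"oai": "http://www.openarchives.org/OAI/2.0/",
            "dc": "http://purl.org/dc/elements/1.1/"}
      for title in root.iterfind(".//dc:title", ns):
          print(title.text)  # one line per harvested dataset title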

  12. A lightweight Web of Things Open Platform to facilitate context data management and personalized healthcare services creation.

    PubMed

    Corredor, Iván; Metola, Eduardo; Bernardos, Ana M; Tarrío, Paula; Casar, José R

    2014-04-29

    In the last few years, many health monitoring systems have been designed to fulfill the needs of a large range of scenarios. Although many of those systems provide good ad hoc solutions, most of them lack mechanisms that allow them to be easily reused. This paper is focused on describing an open platform, the micro Web of Things Open Platform (µWoTOP), which has been conceived to improve the connectivity and reusability of context data to deliver different kinds of health, wellness, and ambient home care services. µWoTOP is based on a resource-oriented architecture which may be embedded in mobile and resource-constrained devices, enabling access to biometric, ambient, or activity sensors and actuator resources through uniform interfaces defined in a RESTful fashion. Additionally, µWoTOP manages two communication modes which allow user context information to be delivered by different methods, depending on the requirements of the consumer application. It also generates alert messages based on standards related to health care and risk management, such as the Common Alerting Protocol, in order to make its outputs compatible with existing systems.
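
    A resource-oriented architecture with uniform RESTful interfaces, as described above, implies that each sensor or actuator is addressable by URI and manipulated with standard HTTP verbs. The sketch below illustrates that access pattern; the gateway address, paths, and JSON fields are hypothetical, not µWoTOP's actual interface.

      import requests

      GATEWAY = "http://gateway.local:8080"  # assumed device address

      # each sensor is a resource addressed by a URI and read with GET
      r = requests.get(f"{GATEWAY}/sensors/heart-rate", timeout=5)
      r.raise_for_status()
      print(r.json())  # e.g. {"bpm": 72, "ts": "2014-04-29T10:00:00Z"}

      # actuators follow the same uniform pattern, written with PUT
      requests.put(f"{GATEWAY}/actuators/room-light",
                   json={"state": "on"}, timeout=5)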

  13. Using ZFIN: Data Types, Organization, and Retrieval.

    PubMed

    Van Slyke, Ceri E; Bradford, Yvonne M; Howe, Douglas G; Fashena, David S; Ramachandran, Sridhar; Ruzicka, Leyla

    2018-01-01

    The Zebrafish Model Organism Database (ZFIN; zfin.org) was established in 1994 as the primary genetic and genomic resource for the zebrafish research community. Some of the earliest records in ZFIN were for people and laboratories. Since that time, the services and data types provided by ZFIN have grown considerably. Today, ZFIN provides the official nomenclature for zebrafish genes, mutants, and transgenics and curates many data types including gene expression, phenotypes, Gene Ontology, models of human disease, orthology, knockdown reagents, transgenic constructs, and antibodies. Ontologies are used throughout ZFIN to structure these expertly curated data. An integrated genome browser provides genomic context for genes, transgenics, mutants, and knockdown reagents. ZFIN also supports a community wiki where the research community can post new antibody records and research protocols. Data in ZFIN are accessible via web pages, download files, and the ZebrafishMine (zebrafishmine.org), an installation of the InterMine data warehousing software. Searching for data at ZFIN utilizes both parameterized search forms and a single-box search for quickly searching or browsing data. This chapter aims to describe the primary ZFIN data and services, and to provide insight into how to use and interpret ZFIN searches, data, and web pages.

  14. A Lightweight Web of Things Open Platform to Facilitate Context Data Management and Personalized Healthcare Services Creation

    PubMed Central

    Corredor, Iván; Metola, Eduardo; Bernardos, Ana M.; Tarrío, Paula; Casar, José R.

    2014-01-01

    In the last few years, many health monitoring systems have been designed to fulfill the needs of a large range of scenarios. Although many of those systems provide good ad hoc solutions, most of them lack mechanisms that allow them to be easily reused. This paper is focused on describing an open platform, the micro Web of Things Open Platform (µWoTOP), which has been conceived to improve the connectivity and reusability of context data to deliver different kinds of health, wellness, and ambient home care services. µWoTOP is based on a resource-oriented architecture which may be embedded in mobile and resource-constrained devices, enabling access to biometric, ambient, or activity sensors and actuator resources through uniform interfaces defined in a RESTful fashion. Additionally, µWoTOP manages two communication modes which allow user context information to be delivered by different methods, depending on the requirements of the consumer application. It also generates alert messages based on standards related to health care and risk management, such as the Common Alerting Protocol, in order to make its outputs compatible with existing systems. PMID:24785542

  15. Electronic Clinical Trial Protocol Distribution via the World-Wide Web

    PubMed Central

    Afrin, Lawrence B.; Kuppuswamy, Valarmathi; Slater, Barbara; Stuart, Robert K.

    1997-01-01

    Clinical trials today typically are inefficient, paper-based operations. Poor community physician awareness of available trials and difficult referral mechanisms also contribute to poor accrual. The Physicians Research Network (PRN) web was developed for more efficient trial protocol distribution and eligibility inquiries. The Medical University of South Carolina's Hollings Cancer Center trials program and two community oncology practices served as a testbed. In 581 man-hours over 18 months, 147 protocols were loaded into PRN. The trials program eliminated all protocol hardcopies except the masters, reduced photocopier use 59%, and saved 1.0 full-time equivalents (FTE), but 1.0 FTE was needed to manage PRN. There were no known security breaches, downtime, or content-related problems. Therefore, PRN is a paperless, user-preferred, reliable, secure method for distributing protocols and reducing distribution errors and delays because only a single copy of each protocol is maintained. Furthermore, PRN is being extended to serve other aspects of trial operations. PMID:8988471

  16. QoS measurement of workflow-based web service compositions using Colored Petri net.

    PubMed

    Nematzadeh, Hossein; Motameni, Homayun; Mohamad, Radziah; Nematzadeh, Zahra

    2014-01-01

    Workflow-based web service compositions (WB-WSCs) form one of the main composition categories in service-oriented architecture (SOA). Eflow, the polymorphic process model (PPM), and the Business Process Execution Language (BPEL) are the main techniques in the category of WB-WSCs. With the maturity of web services, measuring the quality of composite web services developed with different techniques has become one of the most important challenges in today's web environments. Businesses should try to provide good quality, with respect to customers' requirements, in a composed web service. Thus, quality of service (QoS), which refers to nonfunctional parameters, is important to measure so that the quality degree of a given web service composition can be assessed. This paper attempts to find a deterministic analytical method for dependability and performance measurement using Colored Petri nets (CPNs) with explicit routing constructs and the application of probability theory. A computer tool called WSET was also developed for modeling and supporting QoS measurement through simulation.

  17. Enhancing UCSF Chimera through web services.

    PubMed

    Huang, Conrad C; Meng, Elaine C; Morris, John H; Pettersen, Eric F; Ferrin, Thomas E

    2014-07-01

    Integrating access to web services with desktop applications allows for an expanded set of application features, including performing computationally intensive tasks and convenient searches of databases. We describe how we have enhanced UCSF Chimera (http://www.rbvi.ucsf.edu/chimera/), a program for the interactive visualization and analysis of molecular structures and related data, through the addition of several web services (http://www.rbvi.ucsf.edu/chimera/docs/webservices.html). By streamlining access to web services, including the entire job submission, monitoring and retrieval process, Chimera makes it simpler for users to focus on their science projects rather than data manipulation. Chimera uses Opal, a toolkit for wrapping scientific applications as web services, to provide scalable and transparent access to several popular software packages. We illustrate Chimera's use of web services with an example workflow that interleaves use of these services with interactive manipulation of molecular sequences and structures, and we provide an example Python program to demonstrate how easily Opal-based web services can be accessed from within an application. Web server availability: http://webservices.rbvi.ucsf.edu/opal2/dashboard?command=serviceList. © The Author(s) 2014. Published by Oxford University Press on behalf of Nucleic Acids Research.
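
    Because the dashboard URL above is a plain HTTP interface, the list of deployed Opal services can be fetched programmatically. A minimal sketch, assuming only that the endpoint answers a GET with a service listing:

      import requests

      url = "http://webservices.rbvi.ucsf.edu/opal2/dashboard"
      resp = requests.get(url, params={"command": "serviceList"}, timeout=30)
      resp.raise_for_status()
      print(resp.headers.get("Content-Type"))
      print(len(resp.text), "bytes of service listing")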

  18. The SOOS Data Portal, providing access to Southern Oceans data

    NASA Astrophysics Data System (ADS)

    Proctor, Roger; Finney, Kim; Blain, Peter; Taylor, Fiona; Newman, Louise; Meredith, Mike; Schofield, Oscar

    2013-04-01

    The Southern Ocean Observing System (SOOS) is an international initiative to enhance, coordinate, and expand the strategic observations of the Southern Ocean that are required to address key scientific and societal challenges. A key component of SOOS will be the creation and maintenance of a Southern Ocean Data Portal to provide improved access to historical and ongoing data (Schofield et al., 2012, Eos, Vol. 93, No. 26, pp 241-243). The scale of this effort will require strong leveraging of existing data centres, new cyberinfrastructure development efforts, and defined data collection, quality control, and archiving procedures across the international community. The task of assembling the SOOS data portal is assigned to the SOOS Data Management Sub-Committee. The information infrastructure chosen for the SOOS data portal is based on the Australian Ocean Data Network (AODN, http://portal.aodn.org.au). The AODN infrastructure is built on open-source tools, and the use of international standards ensures efficiency of data exchange and interoperability between contributing systems. OGC standard web service protocols are used for serving data via the internet. These include the Web Map Service (WMS) for visualisation, the Web Feature Service (WFS) for data download, and the Catalogue Service for the Web (CSW) for catalogue exchange. The portal offers a number of tools to access and visualize data: a Search link to the metadata catalogue enables search and discovery by simple text search, geographic area, temporal extent, keyword, parameter, organisation, or any combination of these, allowing users to gain access to further information and/or the data for download (searches can also be restricted to items which have data to download, attached map layers, or both); a Map interface for discovery and display of data allows users to change the style and opacity of layers, add additional data layers via OGC Web Map Services, and view animated time-series data streams; and data can be easily accessed and downloaded, including directly from OPeNDAP/THREDDS servers. The SOOS data portal (http://soos.aodn.org.au/soos) aims to make access to Southern Ocean data a simple process, and the initial layout classifies data into six themes - Heat and Freshwater; Circulation; Ice-sheets and Sea level; Carbon; Sea-ice; and Ecosystems - with the ability to integrate layers between themes. The portal is in its infancy (pilot launched January 2013) with a limited number of datasets available; however, the number of datasets is expected to grow rapidly as the international community becomes fully engaged.
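
    The WMS visualisation path described above reduces to a single parameterized HTTP request. The sketch below builds a standard WMS 1.3.0 GetMap URL; the endpoint and layer name are illustrative, and note that for EPSG:4326 in WMS 1.3.0 the bounding-box axis order is latitude first.

      from urllib.parse import urlencode

      endpoint = "https://portal.example.org/geoserver/wms"  # hypothetical
      params = {
          "service": "WMS",
          "version": "1.3.0",
          "request": "GetMap",
          "layers": "soos:sea_ice_extent",  # hypothetical layer name
          "crs": "EPSG:4326",
          "bbox": "-90,-180,-30,180",       # lat/lon order: Southern Ocean
          "width": 1024,
          "height": 512,
          "format": "image/png",
      }
      print(f"{endpoint}?{urlencode(params)}")  # fetch with any HTTP client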

  19. MR efficiency using automated MRI-desktop eProtocol

    NASA Astrophysics Data System (ADS)

    Gao, Fei; Xu, Yanzhe; Panda, Anshuman; Zhang, Min; Hanson, James; Su, Congzhe; Wu, Teresa; Pavlicek, William; James, Judy R.

    2017-03-01

    MRI protocols are instruction sheets that radiology technologists use in routine clinical practice for guidance (e.g., slice position, acquisition parameters, etc.). At Mayo Clinic Arizona (MCA), there are over 900 MR protocols (ranging across neuro, body, cardiac, breast, etc.), which makes maintaining and updating the protocol instructions a labor-intensive effort. The task is even more challenging given different vendors (Siemens, GE, etc.). This is a universal problem faced by hospitals and medical research institutions. To increase the efficiency of MR practice, we designed and implemented a web-based platform (eProtocol) to automate the management of MRI protocols. It is built upon a database that automatically extracts protocol information from DICOM-compliant images and provides a user-friendly interface for technologists to create, edit, and update the protocols. Advanced operations such as protocol migration from scanner to scanner and the capability to upload multimedia content were also implemented. To the best of our knowledge, eProtocol is the first automated MR protocol management tool used clinically. It is expected that this platform will significantly improve radiology operations efficiency, including better image quality and exam consistency, fewer repeat examinations, and fewer acquisition errors. These protocol instructions will be readily available to technologists during scans. In addition, this web-based platform can be extended to other imaging modalities, such as CT, mammography, and interventional radiology, and to different vendors for imaging protocol management.

  20. User Needs of Digital Service Web Portals: A Case Study

    ERIC Educational Resources Information Center

    Heo, Misook; Song, Jung-Sook; Seol, Moon-Won

    2013-01-01

    The authors examined the needs of digital information service web portal users. More specifically, the needs of Korean cultural portal users were examined as a case study. The conceptual framework of a web-based portal is that it is a complex, web-based service application with characteristics of information systems and service agents. In…

  1. Use Cases for Combining Web Services with ArcPython Tools for Enabling Quality Control of Land Remote Sensing Data Products.

    NASA Astrophysics Data System (ADS)

    Krehbiel, C.; Maiersperger, T.; Friesz, A.; Harriman, L.; Quenzer, R.; Impecoven, K.

    2016-12-01

    Three major obstacles facing big Earth data users include data storage, management, and analysis. As the amount of satellite remote sensing data increases, so does the need for better data storage and management strategies to exploit the plethora of data now available. Standard GIS tools can help big Earth data users who interact with and analyze increasingly large and diverse datasets. In this presentation, we highlight how NASA's Land Processes Distributed Active Archive Center (LP DAAC) is tackling these big Earth data challenges. We provide a real-life use case to describe three tools and services provided by the LP DAAC to more efficiently exploit big Earth data in a GIS environment. First, we describe the Open-source Project for a Network Data Access Protocol (OPeNDAP), which requests only the specific data needed, minimizing the amount of data that a user downloads and improving the efficiency of data downloading and processing. Next, we cover the LP DAAC's Application for Extracting and Exploring Analysis Ready Samples (AppEEARS), a web application interface for extracting and analyzing land remote sensing data. From there, we review an ArcPython toolbox that was developed to provide quality control services for land remote sensing data products. Locating and extracting specific subsets of larger big Earth datasets improves data storage and management efficiency for the end user, and quality control services provide a straightforward interpretation of big Earth data. These tools and services are beneficial to the GIS user community in terms of standardizing workflows and improving data storage, management, and analysis tactics.
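
    The OPeNDAP behaviour described above (transferring only the data actually requested) is driven by a constraint expression appended to the dataset URL. A minimal sketch, with a hypothetical dataset URL and variable name; the [start:stride:stop] hyperslab syntax is standard OPeNDAP.

      import requests

      base = "https://opendap.example.org/MOD13Q1/granule.hdf"  # hypothetical
      # request an ASCII slice of one variable instead of the whole file
      constraint = "NDVI[0:1:10][0:1:10]"  # hyperslab: start:stride:stop
      resp = requests.get(f"{base}.ascii?{constraint}", timeout=60)
      resp.raise_for_status()
      print(resp.text[:200])  # first lines of the subsetted values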

  2. A snapshot of 3649 Web-based services published between 1994 and 2017 shows a decrease in availability after 2 years.

    PubMed

    Osz, Ágnes; Pongor, Lorinc Sándor; Szirmai, Danuta; Gyorffy, Balázs

    2017-12-08

    The long-term availability of online Web services is of utmost importance to ensure the reproducibility of analytical results. However, because of a lack of maintenance following acceptance, many servers become unavailable after a short period of time. Our aim was to monitor the accessibility and the decay rate of published Web services, as well as to determine the factors underlying trend changes. We searched PubMed to identify publications containing Web server-related terms published between 1994 and 2017. Automatic and manual screening was used to check the status of each Web service. Kruskal-Wallis, Mann-Whitney, and Chi-square tests were used to evaluate various parameters, including availability, accessibility, platform, origin of authors, citation, journal impact factor, and publication year. We identified 3649 publications in 375 journals, of which 2522 (69%) were currently active. Over 95% of sites were running in the first 2 years, but this rate dropped to 84% in the third year and declined gradually afterwards (P < 1e-16). The mean half-life of Web services is 10.39 years. Working Web services were published in journals with higher impact factors (P = 4.8e-04). Services published before the year 2000 received minimal attention. Offline services were cited less than online ones (P = 0.022). The majority of Web services provide analytical tools, and the proportion of databases is slowly decreasing. Almost one-third of the Web services published to date have gone out of service. We recommend continued support of Web-based services to increase the reproducibility of published results. © The Author 2017. Published by Oxford University Press.

  3. On Representative Spaceflight Instrument and Associated Instrument Sensor Web Framework

    NASA Technical Reports Server (NTRS)

    Kizhner, Semion; Patel, Umeshkumar; Vootukuru, Meg

    2007-01-01

    Sensor Web-based adaptation and sharing of space flight mission resources, including those of the Space-Ground and Control-User communication segment, could greatly benefit from utilization of heritage Internet Protocols and devices applied for spaceflight (SpaceIP). This has been successfully demonstrated by a few recent spaceflight experiments. However, while terrestrial applications of Internet protocols are well developed and understood (mostly due to billions of dollars in investments by the military and industry), the spaceflight application of Internet protocols is still in its infancy. Progress in the development of SpaceIP-enabled instrument components will largely determine the SpaceIP utilization of those investments and its acceptance in years to come. As with SpaceIP, commercial real-time, instrument-colocated computational resources, data compression, and storage can be enabled on board a spacecraft and, in turn, support a powerful application of Sensor Web-based design to a spaceflight instrument. Sensor Web-enabled reconfiguration and adaptation of structures for hardware resources and information systems will put Field Programmable Gate Arrays (FPGAs) and other aerospace programmable logic devices to the use for which this technology was intended. These are a few obvious potential benefits of Sensor Web technologies for spaceflight applications; however, they are still waiting to be explored. This is because a new approach to spaceflight instrumentation is needed in order to make these mature sensor web technologies applicable to spaceflight. In this paper we present an approach to developing related and enabling spaceflight instrument-level technologies based on the new concept of a representative spaceflight Instrument Sensor Web (ISW).

  4. Analysis Tool Web Services from the EMBL-EBI.

    PubMed

    McWilliam, Hamish; Li, Weizhong; Uludag, Mahmut; Squizzato, Silvano; Park, Young Mi; Buso, Nicola; Cowley, Andrew Peter; Lopez, Rodrigo

    2013-07-01

    Since 2004 the European Bioinformatics Institute (EMBL-EBI) has provided access to a wide range of databases and analysis tools via Web Services interfaces. This comprises services to search across the databases available from the EMBL-EBI and to explore the network of cross-references present in the data (e.g. EB-eye), services to retrieve entry data in various data formats and to access the data in specific fields (e.g. dbfetch), and analysis tool services, for example, sequence similarity search (e.g. FASTA and NCBI BLAST), multiple sequence alignment (e.g. Clustal Omega and MUSCLE), pairwise sequence alignment and protein functional analysis (e.g. InterProScan and Phobius). The REST/SOAP Web Services (http://www.ebi.ac.uk/Tools/webservices/) interfaces to these databases and tools allow their integration into other tools, applications, web sites, pipeline processes and analytical workflows. To get users started using the Web Services, sample clients are provided covering a range of programming languages and popular Web Service tool kits, and a brief guide to Web Services technologies, including a set of tutorials, is available for those wishing to learn more and develop their own clients. Users of the Web Services are informed of improvements and updates via a range of methods.
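
    The job submission, monitoring, and retrieval cycle these REST interfaces expose can be driven from a few lines of Python. The sketch below follows the run/status/result URL pattern documented for the EBI tool services; the parameter values are illustrative, and the exact parameters for each tool should be taken from the service documentation.

      import time
      import requests

      BASE = "https://www.ebi.ac.uk/Tools/services/rest/ncbiblast"

      # submit a job; the service returns a job identifier as plain text
      job_id = requests.post(f"{BASE}/run", data={
          "email": "you@example.org",  # required by the service's usage policy
          "program": "blastp",
          "stype": "protein",
          "database": "uniprotkb_swissprot",
          "sequence": "MKTAYIAKQRQISFVKSHFSRQLEERLGLIEVQ",
      }, timeout=30).text

      # poll until the job leaves the RUNNING state
      while requests.get(f"{BASE}/status/{job_id}", timeout=30).text == "RUNNING":
          time.sleep(5)

      # fetch the default text output
      result = requests.get(f"{BASE}/result/{job_id}/out", timeout=30).text
      print(result[:200])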

  5. Analysis Tool Web Services from the EMBL-EBI

    PubMed Central

    McWilliam, Hamish; Li, Weizhong; Uludag, Mahmut; Squizzato, Silvano; Park, Young Mi; Buso, Nicola; Cowley, Andrew Peter; Lopez, Rodrigo

    2013-01-01

    Since 2004 the European Bioinformatics Institute (EMBL-EBI) has provided access to a wide range of databases and analysis tools via Web Services interfaces. This comprises services to search across the databases available from the EMBL-EBI and to explore the network of cross-references present in the data (e.g. EB-eye), services to retrieve entry data in various data formats and to access the data in specific fields (e.g. dbfetch), and analysis tool services, for example, sequence similarity search (e.g. FASTA and NCBI BLAST), multiple sequence alignment (e.g. Clustal Omega and MUSCLE), pairwise sequence alignment and protein functional analysis (e.g. InterProScan and Phobius). The REST/SOAP Web Services (http://www.ebi.ac.uk/Tools/webservices/) interfaces to these databases and tools allow their integration into other tools, applications, web sites, pipeline processes and analytical workflows. To get users started using the Web Services, sample clients are provided covering a range of programming languages and popular Web Service tool kits, and a brief guide to Web Services technologies, including a set of tutorials, is available for those wishing to learn more and develop their own clients. Users of the Web Services are informed of improvements and updates via a range of methods. PMID:23671338

  6. Biological Web Service Repositories Review.

    PubMed

    Urdidiales-Nieto, David; Navas-Delgado, Ismael; Aldana-Montes, José F

    2017-05-01

    Web services play a key role in bioinformatics, enabling the integration of database access and analysis algorithms. However, Web service repositories do not usually publish information on the changes made to their registered Web services. Dynamism is directly related to changes at the repository level (services registered or unregistered) and at the service level (annotation changes). Thus, users, software clients, or workflow-based approaches lack enough relevant information to decide when they should review or re-execute a Web service or workflow to get updated or improved results. The dynamism of a repository could be a measure for workflow developers to re-check service availability and annotation changes in the services of interest to them. This paper presents a review of the most well-known Web service repositories in the life sciences, including an analysis of their dynamism. Freshness is introduced in this paper and used as the measure of the dynamism of these repositories. © 2017 The Authors. Published by Wiley-VCH Verlag GmbH & Co. KGaA.

  7. The value of Web-based library services at Cedars-Sinai Health System.

    PubMed

    Halub, L P

    1999-07-01

    Cedars-Sinai Medical Library/Information Center has maintained Web-based services since 1995 on the Cedars-Sinai Health System network. In that time, the librarians have found the provision of Web-based services to be a very worthwhile endeavor. Library users value the services that they access from their desktops because the services save time. They also appreciate being able to access services at their convenience, without restriction by the library's hours of operation. The library values its Web site because it brings increased visibility within the health system, and it enables library staff to expand services when budget restrictions have forced reduced hours of operation. In creating and maintaining the information center Web site, the librarians have learned the following lessons: consider the design carefully; offer what services you can, but weigh the advantages of providing the services against the time required to maintain them; make the content as accessible as possible; promote your Web site; and make friends in other departments, especially information services.

  8. The value of Web-based library services at Cedars-Sinai Health System.

    PubMed Central

    Halub, L P

    1999-01-01

    Cedars-Sinai Medical Library/Information Center has maintained Web-based services since 1995 on the Cedars-Sinai Health System network. In that time, the librarians have found the provision of Web-based services to be a very worthwhile endeavor. Library users value the services that they access from their desktops because the services save time. They also appreciate being able to access services at their convenience, without restriction by the library's hours of operation. The library values its Web site because it brings increased visibility within the health system, and it enables library staff to expand services when budget restrictions have forced reduced hours of operation. In creating and maintaining the information center Web site, the librarians have learned the following lessons: consider the design carefully; offer what services you can, but weigh the advantages of providing the services against the time required to maintain them; make the content as accessible as possible; promote your Web site; and make friends in other departments, especially information services. PMID:10427423

  9. MedlinePlus Connect: How it Works

    MedlinePlus

    ... it looks depends on how it is implemented. Web Application The Web application returns a formatted response ... for more examples of Web Application response pages. Web Service The MedlinePlus Connect REST-based Web service ...

  10. Earth Observation oriented teaching materials development based on OGC Web services and Bashyt generated reports

    NASA Astrophysics Data System (ADS)

    Stefanut, T.; Gorgan, D.; Giuliani, G.; Cau, P.

    2012-04-01

    Creating e-Learning materials in the Earth Observation domain is a difficult task, especially for non-technical specialists who have to deal with distributed repositories, large amounts of information, and intensive processing requirements. Furthermore, due to the lack of specialized applications for developing teaching resources, technical knowledge is also required for defining data presentation structures or for developing and customizing user interaction techniques for better teaching results. In response to these issues, during the GiSHEO FP7 project [1] and later in the EnviroGRIDS FP7 project [2], we have developed the eGLE e-Learning Platform [3], a tool-based application that provides dedicated functionalities to Earth Observation specialists for developing teaching materials. The proposed architecture is built around a client-server design that provides the core functionalities (e.g., user management, tools integration, teaching material settings, etc.) and has been extended with a distributed component implemented through the tools that are integrated into the platform, as described further. Our approach to dealing with multiple transfer protocol types, heterogeneous data formats, or various user interaction techniques involves the development and integration of very specialized elements (tools) that can be customized by the trainers in a visual manner through simple user interfaces. In our concept, each tool is dedicated to a specific data type, implementing optimized mechanisms for searching, retrieving, visualizing, and interacting with it. At the same time, any number of tools can be integrated into each learning resource through drag-and-drop interaction, allowing the teacher to retrieve pieces of data of various types (e.g., images, charts, tables, text, videos, etc.) from different sources (e.g., OGC web services, charts created through the Bashyt application, etc.) through different protocols (e.g., WMS, Bashyt API, FTP, HTTP, etc.) and to display them all together in a unitary manner using the same visual structure [4]. Addressing the high-performance computing requirements that arise when processing environmental data, our platform can be easily extended through tools that connect to Grid infrastructures, WCS web services, the Bashyt API (for creating specialized hydrological reports), or any other specialized services (e.g., graphics cluster visualization) that can be reached over the Internet. At run time, each tool is launched on the trainee's computer in an asynchronous running mode and connects to the data source that has been established by the teacher, retrieving and displaying the information to the user. The data transfer is accomplished directly between the trainee's computer and the corresponding services (e.g., OGC, Bashyt API, etc.) without passing through the core server platform. In this manner, the eGLE application can provide better and more responsive connections to a large number of users.

  11. Unifying Access to National Hydrologic Data Repositories via Web Services

    NASA Astrophysics Data System (ADS)

    Valentine, D. W.; Jennings, B.; Zaslavsky, I.; Maidment, D. R.

    2006-12-01

    The CUAHSI hydrologic information system (HIS) is designed to be a live, multiscale web portal system for accessing, querying, visualizing, and publishing distributed hydrologic observation data and models for any location or region in the United States. The HIS design follows the principles of open service-oriented architecture, i.e., system components are represented as web services with well-defined standard service APIs. WaterOneFlow web services are the main component of the design. The currently available services have been completely rewritten compared to the previous version and provide programmatic access to USGS NWIS (stream flow, groundwater, and water quality repositories), DAYMET daily observations, NASA MODIS, and Unidata NAM streams, with several additional web service wrappers being added (EPA STORET, NCDC, and others). Different repositories of hydrologic data use different vocabularies and support different types of query access. Resolving semantic and structural heterogeneities across different hydrologic observation archives and distilling a generic set of service signatures is one of the main scalability challenges in this project, and a requirement in our web service design. To accomplish the uniformity of the web services API, data repositories are modeled following the CUAHSI Observation Data Model. The web service responses are document-based and use an XML schema to express the semantics in a standard format. Access to station metadata is provided via the web service methods GetSites, GetSiteInfo, and GetVariableInfo. These methods form the foundation of the CUAHSI HIS discovery interface and may execute over locally stored metadata or request the information from remote repositories directly. Observation values are retrieved via a generic GetValues method, which is executed against national data repositories. The service is implemented in ASP.Net, and other providers are implementing WaterOneFlow services in Java. A reference implementation of WaterOneFlow web services is available. More information about the ongoing development of CUAHSI HIS is available from http://www.cuahsi.org/his/.
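
    Since WaterOneFlow is a SOAP API described by a WSDL, a generic SOAP client can call the methods named above directly. A hedged sketch using the third-party zeep library; the WSDL URL, site code, and variable code are illustrative, and the authoritative argument names and order come from the service's WSDL itself.

      from zeep import Client  # pip install zeep

      wsdl = "https://hydroportal.example.org/nwisuv/cuahsi_1_1.asmx?WSDL"  # hypothetical
      client = Client(wsdl)

      # discover stations, then pull a time series for one site/variable pair
      sites_xml = client.service.GetSites("")
      values_xml = client.service.GetValues(
          "NWISUV:09380000",   # example site code format
          "NWISUV:00060",      # example variable code (discharge)
          "2024-01-01",        # start date
          "2024-01-07",        # end date
          "",                  # auth token, empty for public data
      )
      print(values_xml[:200])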

  12. A Privacy Access Control Framework for Web Services Collaboration with Role Mechanisms

    NASA Astrophysics Data System (ADS)

    Liu, Linyuan; Huang, Zhiqiu; Zhu, Haibin

    With the popularity of Internet technology, web services are becoming the most promising paradigm for distributed computing. This increased use of web services has meant that more and more personal information of consumers is being shared with web service providers, leading to the need to guarantee consumers' privacy. This paper proposes a role-based privacy access control framework for Web services collaboration. It utilizes roles to specify the privacy privileges of services and factors the historic experience of services in playing roles into their reputation degree. Compared with traditional privacy access control approaches, this framework can make fine-grained authorization decisions, thus efficiently protecting consumers' privacy.

  13. An integrated healthcare information system for end-to-end standardized exchange and homogeneous management of digital ECG formats.

    PubMed

    Trigo, Jesús Daniel; Martínez, Ignacio; Alesanco, Alvaro; Kollmann, Alexander; Escayola, Javier; Hayn, Dieter; Schreier, Günter; García, José

    2012-07-01

    This paper investigates the application of the enterprise information system (EIS) paradigm to standardized cardiovascular condition monitoring. There are many specifications in cardiology, particularly in the ECG standardization arena. The existence of ECG formats, however, does not guarantee the implementation of homogeneous, standardized solutions for ECG management. In fact, hospital management services need to cope with various ECG formats and, moreover, several different visualization applications. This heterogeneity hampers the normalization of integrated, standardized healthcare information systems, hence the need to find an appropriate combination of ECG formats and a suitable EIS-based software architecture that enables standardized exchange and homogeneous management of ECG formats. Determining such a combination is one objective of this paper. The second aim is to design and develop the integrated healthcare information system that satisfies the requirements posed by that determination. The ECG formats selected include ISO/IEEE 11073, the Standard Communications Protocol for Computer-Assisted Electrocardiography, and an ECG ontology. The EIS-enabling techniques and technologies selected include web services, the Simple Object Access Protocol, the Extensible Markup Language, and the Business Process Execution Language. Such a selection ensures the standardized exchange of ECGs within, or across, healthcare information systems while providing modularity and accessibility.

  14. SIDECACHE: Information access, management and dissemination framework for web services.

    PubMed

    Doderer, Mark S; Burkhardt, Cory; Robbins, Kay A

    2011-06-14

    Many bioinformatics algorithms and data sets are deployed using web services so that the results can be explored via the Internet and easily integrated into other tools and services. These services often include data from other sites that is accessed either dynamically or through file downloads. Developers of these services face several problems because of the dynamic nature of the information from the upstream services. Many publicly available repositories of bioinformatics data frequently update their information. When such an update occurs, the developers of the downstream service may also need to update. For file downloads, this process is typically performed manually, followed by a web service restart. Requests for information obtained by dynamic access to upstream sources are sometimes subject to rate restrictions. SideCache provides a framework for deploying web services that integrate information extracted from other databases and from web sources that are periodically updated. This situation occurs frequently in biotechnology, where new information is being continuously generated and the latest information is important. SideCache provides several types of services, including proxy access and rate control, local caching, and automatic web service updating. We have used the SideCache framework to automate the deployment and updating of a number of bioinformatics web services and tools that extract information from remote primary sources such as NCBI, NCIBI, and Ensembl. The SideCache framework has also been used to share research results through the use of a SideCache-derived web service.
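
    Two of the SideCache services named above, local caching and rate control, can be illustrated in a few lines. This is a minimal sketch of the general pattern, not SideCache's actual code; the cache policy and the one-second limit are invented.

      import time
      import requests

      CACHE: dict[str, str] = {}
      MIN_INTERVAL = 1.0  # assumed seconds between upstream requests
      _last_call = 0.0

      def cached_fetch(url: str) -> str:
          global _last_call
          if url in CACHE:              # serve locally, sparing the upstream
              return CACHE[url]
          wait = MIN_INTERVAL - (time.time() - _last_call)
          if wait > 0:                  # rate control before dynamic access
              time.sleep(wait)
          _last_call = time.time()
          resp = requests.get(url, timeout=30)
          resp.raise_for_status()
          CACHE[url] = resp.text        # cache for subsequent callers
          return resp.text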

  15. Design for Connecting Spatial Data Infrastructures with Sensor Web (SENSDI)

    NASA Astrophysics Data System (ADS)

    Bhattacharya, D.; M., M.

    2016-06-01

    Integrating Sensor Web with Spatial Data Infrastructures (SENSDI) aims to extend SDIs with sensor web enablement, converging geospatial and built infrastructure, and to implement test cases with sensor data and SDI. It is about research to harness the sensed environment by utilizing domain-specific sensor data to create a generalized sensor web framework. The challenges are semantic enablement for Spatial Data Infrastructures and connecting the interfaces of SDI with the interfaces of the Sensor Web. The proposed research plan is to identify sensor data sources, set up an open-source SDI, match the APIs and functions between Sensor Web and SDI, and carry out case studies such as hazard and urban applications. We take up cooperative development of SDI best practices to enable a new realm of a location-enabled and semantically enriched World Wide Web - the "Geospatial Web" or "Geosemantic Web" - by setting up a one-to-one correspondence between WMS, WFS, WCS, and metadata on the one hand and the 'Sensor Observation Service' (SOS), the 'Sensor Planning Service' (SPS), the 'Sensor Alert Service' (SAS), and the 'Web Notification Service' (WNS), a service that facilitates asynchronous message interchange between users and services and between two OGC-SWE services, on the other. In conclusion, integrating SDI with the Sensor Web is of importance to geospatial studies. The integration can be done by merging the common OGC interfaces of SDI and Sensor Web. Multi-usability studies to validate the integration have to be undertaken as future research.

  16. Description and testing of the Geo Data Portal: Data integration framework and Web processing services for environmental science collaboration

    USGS Publications Warehouse

    Blodgett, David L.; Booth, Nathaniel L.; Kunicki, Thomas C.; Walker, Jordan I.; Viger, Roland J.

    2011-01-01

    Interest in sharing interdisciplinary environmental modeling results and related data is increasing among scientists. The U.S. Geological Survey Geo Data Portal project enables data sharing by assembling open-standard Web services into an integrated data retrieval and analysis Web application design methodology that streamlines time-consuming and resource-intensive data management tasks. Data-serving Web services allow Web-based processing services to access Internet-available data sources. The Web processing services developed for the project create commonly needed derivatives of data in numerous formats. Coordinate reference system manipulation and spatial statistics calculation components implemented for the Web processing services were confirmed using ArcGIS 9.3.1, a geographic information system software package. Outcomes of the Geo Data Portal project support the rapid development of user interfaces for accessing and manipulating environmental data.

  17. Dynamic Generation of Reduced Ontologies to Support Resource Constraints of Mobile Devices

    ERIC Educational Resources Information Center

    Schrimpsher, Dan

    2011-01-01

    As Web Services and the Semantic Web become more important, enabling technologies such as web service ontologies will grow larger. At the same time, use of mobile devices to access web services has doubled in the last year. The ability of these resource constrained devices to download and reason across these ontologies to support service discovery…

  18. A Smart Modeling Framework for Integrating BMI-enabled Models as Web Services

    NASA Astrophysics Data System (ADS)

    Jiang, P.; Elag, M.; Kumar, P.; Peckham, S. D.; Liu, R.; Marini, L.; Hsu, L.

    2015-12-01

    Service-oriented computing provides an opportunity to couple web service models using semantic web technology. Through this approach, models that are exposed as web services can be maintained in their own local environment, making it easy for modelers to maintain and update them. In integrated modeling, the service-oriented loose-coupling approach requires (1) a set of models exposed as web services, (2) model metadata describing the external features of a model (e.g., variable name, unit, computational grid, etc.), and (3) a model integration framework. We present an architecture for coupling web service models that are self-describing by utilizing a smart modeling framework. We expose models that are encapsulated with CSDMS (Community Surface Dynamics Modeling System) Basic Model Interfaces (BMI) as web services. The BMI-enabled models are self-describing, uncovering their metadata through BMI functions. After a BMI-enabled model is exposed as a service, a client can initialize, execute, and retrieve the meta-information of the model by calling its BMI functions over the web. Furthermore, a revised version of EMELI (Peckham, 2015), an Experimental Modeling Environment for Linking and Interoperability, is chosen as the framework for coupling BMI-enabled web service models. EMELI allows users to combine a set of component models into a complex model by standardizing the model interface using BMI, as well as providing a set of utilities that smooth the integration process (e.g., temporal interpolation). We modify the original EMELI so that the revised modeling framework is able to initialize, execute, and find the dependencies of the BMI-enabled web service models. Using the revised EMELI, an example will be presented on integrating a set of TopoFlow model components that are BMI-enabled and exposed as web services. Reference: Peckham, S.D. (2014) EMELI 1.0: An experimental smart modeling framework for automatic coupling of self-describing models, Proceedings of HIC 2014, 11th International Conf. on Hydroinformatics, New York, NY.
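
    The BMI functions that make a model self-describing and uniformly drivable are a small, fixed set. The toy Python sketch below shows the shape of that interface; the "model" inside is a stand-in, and real BMI components expose further grid and metadata functions beyond those shown.

      class ToyBmiModel:
          """A stand-in model exposing a BMI-like control interface."""

          def initialize(self, config_file: str) -> None:
              # a real model would parse its configuration file here
              self.time, self.dt, self.temperature = 0.0, 1.0, 20.0

          def update(self) -> None:
              self.temperature += 0.1 * self.dt  # placeholder dynamics
              self.time += self.dt

          def get_value(self, name: str) -> float:
              return {"temperature": self.temperature}[name]

          def finalize(self) -> None:
              pass  # release resources

      # a framework like EMELI needs only the interface, not model internals
      model = ToyBmiModel()
      model.initialize("config.cfg")
      for _ in range(10):
          model.update()
      print(model.get_value("temperature"))  # 21.0 after ten steps
      model.finalize()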

  19. Challenges in Visualizing Satellite Level 2 Atmospheric Data with GIS approach

    NASA Astrophysics Data System (ADS)

    Wei, J. C.; Yang, W.; Zhao, P.; Pham, L.; Meyer, D. J.

    2017-12-01

    Satellite data products are important for a wide variety of applications that can bring far-reaching benefits to the science community and the broader society. These benefits can best be achieved if the satellite data are well utilized and interpreted. Unfortunately, this is not always the case, despite the abundance and relative maturity of numerous satellite data products provided by NASA and other organizations. One way to help users better understand satellite data is to provide the data along with 'Images', including accurate pixel coverage area delineation and science-team-recommended quality screening for individual geophysical parameters. However, there are challenges in visualizing remotely sensed non-gridded products: (1) different geodetics of space-borne instruments; (2) data often arranged in "along-track" and "across-track" axes; (3) spatially and temporally continuous data chunked into granule files: data for a portion (or all) of a satellite orbit; (4) no general rule for resampling or interpolation to a grid; (5) geophysical retrieval based only on pixel center location, without shape information. In this presentation, we will unravel a new Goddard Earth Sciences Data and Information Services Center (GES DISC) Level 2 (L2) visualization on-demand service. The service's front end provides various visualization and data access capabilities, such as overlay and swipe of multiple variables and subsetting and downloading of data in different formats. The backend of the service consists of Open Geospatial Consortium (OGC) standard-compliant Web Map Service (WMS) and Web Coverage Service (WCS) components. The infrastructure allows inclusion of outside data sources served via OGC-compliant protocols and allows other interoperable clients, such as ArcGIS clients, to connect to our L2 WCS/WMS.

  20. Challenges in Obtaining and Visualizing Satellite Level 2 Data in GIS

    NASA Technical Reports Server (NTRS)

    Wei, Jennifer C.; Yang, Wenli; Zhao, Peisheng; Pham, Long; Meyer, David J.

    2017-01-01

    Satellite data products are important for a wide variety of applications that can bring far-reaching benefits to the science community and the broader society. These benefits can best be achieved if the satellite data are well utilized and interpreted. Unfortunately, this is not always the case, despite the abundance and relative maturity of numerous satellite data products provided by NASA and other organizations. One way to help users better understand satellite data is to provide the data along with 'Images', including accurate pixel coverage area delineation and science-team-recommended quality screening for individual geophysical parameters. However, there are challenges in visualizing remotely sensed non-gridded products: (1) different geodetics of space-borne instruments; (2) data often arranged in "along-track" and "across-track" axes; (3) spatially and temporally continuous data chunked into granule files: data for a portion (or all) of a satellite orbit; (4) no general rule for resampling or interpolation to a grid; (5) geophysical retrieval based only on pixel center location, without shape information. In this presentation, we will unravel a new Goddard Earth Sciences Data and Information Services Center (GES DISC) Level 2 (L2) visualization on-demand service. The service's front end provides various visualization and data access capabilities, such as overlay and swipe of multiple variables and subsetting and downloading of data in different formats. The backend of the service consists of Open Geospatial Consortium (OGC) standard-compliant Web Map Service (WMS) and Web Coverage Service (WCS) components. The infrastructure allows inclusion of outside data sources served via OGC-compliant protocols and allows other interoperable clients, such as ArcGIS clients, to connect to our L2 WCS/WMS.

  1. A Web-Based Intervention for Health Professionals and Patients to Decrease Cardiovascular Risk Attributable to Physical Inactivity: Development Process

    PubMed Central

    2012-01-01

    Background: Patients with cardiovascular risk factors can reduce their risk of cardiovascular disease by increasing their physical activity and their physical fitness. According to the guidelines for cardiovascular risk management, health professionals should encourage their patients to engage in physical activity. Objective: In this paper, we provide insight regarding the systematic development of a Web-based intervention for both health professionals and patients with cardiovascular risk factors using the development method Intervention Mapping. The different steps of Intervention Mapping are described to open up the “black box” of Web-based intervention development and to support future Web-based intervention development. Methods: The development of the Professional and Patient Intention and Behavior Intervention (PIB2 intervention) was initiated with a needs assessment for both health professionals (i.e., physiotherapy and nursing) and their patients. We formulated performance and change objectives and, subsequently, theory- and evidence-based intervention methods and strategies were selected that were thought to affect the intention and behavior of health professionals and patients. The rationale of the intervention was based on different behavioral change methods that allowed us to describe the scope and sequence of the intervention and produced the Web-based intervention components. The Web-based intervention consisted of 5 modules, including individualized messages and self-completion forms, and charts and tables. Results: The systematic and planned development of the PIB2 intervention resulted in an Internet-delivered behavior change intervention. The intervention was not developed as a substitute for face-to-face contact between professionals and patients, but as an application to complement and optimize health services. The focus of the Web-based intervention was to extend the professional behavior of health care professionals, as well as to improve the risk-reduction behavior of patients with cardiovascular risk factors. Conclusions: The Intervention Mapping protocol provided a systematic method for developing the intervention, and each intervention design choice was carefully thought out and justified. Although it was not a rapid or an easy method for developing an intervention, the protocol guided and directed the development process. The application of evidence-based behavior change methods used in our intervention offers insight regarding how an intervention may change intention and health behavior. The Web-based intervention appeared feasible and was implemented. Further research will test the effectiveness of the PIB2 intervention. Trial Registration: Dutch Trial Register, Trial ID: ECP-92. PMID:23612470

  2. Development of a Coordinated National Soil Moisture Network: A Pilot Study

    NASA Astrophysics Data System (ADS)

    Lucido, J. M.; Quiring, S. M.; Verdin, J. P.; Pulwarty, R. S.; Baker, B.; Cosgrove, B.; Escobar, V. M.; Strobel, M.

    2014-12-01

    Soil moisture data are critical for accurate drought prediction, flood forecasting, climate modeling, prediction of crop yields, and water budgeting. However, soil moisture data are collected by many agencies and organizations in the United States using a variety of instruments and methods for varying applications. These data are often distributed and represented in disparate formats, posing significant challenges for use. In recognition of these challenges, the President's Climate Action Plan articulated the need for a coordinated national soil moisture network. In response to this action plan, a team led by the National Integrated Drought Information System has begun to develop a framework for this network and has instituted a proof-of-concept pilot study. This pilot is located in the south-central plains of the US, and will serve as a reference architecture for the requisite data systems and inform the design of the national network. The pilot comprises both in-situ and modeled soil moisture datasets (historical and real-time) and will serve the following use cases: operational drought monitoring, experimental land surface modeling, and operational hydrological modeling. The pilot will be implemented using a distributed network design in order to serve dispersed data in real time directly from data providers. Standard service protocols will be used to enable future integration with external clients. The pilot network will additionally contain a catalog of datasets and web service endpoints, which will be used to broker web service calls. A mediation and aggregation service will then intelligently request, compile, and transform the distributed datasets from their native formats into a standardized output, as sketched below. This mediation framework allows data to be hosted and maintained locally by the data owners while simplifying access through a single service interface. These data services will then be used to create visualizations, for example, views of the current soil moisture conditions compared to historical baselines via a map-based web application. This talk will comprise an overview of the pilot design and implementation, a discussion of strategies for integrating in-situ and modeled soil moisture datasets, as well as lessons learned during the course of the pilot.
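    The mediation-and-aggregation step can be pictured with a small sketch: fetch each provider's native payload and map it onto one standardized schema. The endpoints and field names below are invented placeholders, not the pilot's actual data systems.

    ```python
    # Toy mediation service: normalize heterogeneous provider payloads
    # into one common soil moisture record format. All URLs/fields assumed.
    import requests

    PROVIDERS = {
        "mesonet": "https://example.org/mesonet/soilmoisture.json",
        "scan": "https://example.org/scan/soilmoisture.json",
    }

    def fetch_and_normalize(network, url):
        """Request a provider's native payload and map it to a common schema."""
        payload = requests.get(url, timeout=30).json()
        # Field names below are assumptions about each provider's native format.
        return [
            {
                "network": network,
                "station_id": rec.get("id"),
                "depth_cm": rec.get("depth"),
                "volumetric_sm": rec.get("value"),
                "timestamp": rec.get("time"),
            }
            for rec in payload.get("observations", [])
        ]

    aggregated = []
    for network, url in PROVIDERS.items():
        aggregated.extend(fetch_and_normalize(network, url))
    ```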

  3. Managing the Web-Enhanced Geographic Information Service.

    ERIC Educational Resources Information Center

    Stephens, Denise

    1997-01-01

    Examines key management issues involved in delivering geographic information services on the World Wide Web, using the Geographic Information Center (GIC) program at the University of Virginia Library as a reference. Highlights include integrating the Web into services; building collections for Web delivery; and evaluating spatial information…

  4. Implementation and Evaluation of Four Interoperable Open Standards for the Internet of Things

    PubMed Central

    Jazayeri, Mohammad Ali; Liang, Steve H. L.; Huang, Chih-Yuan

    2015-01-01

    Recently, researchers have been focusing on a new use of the Internet called the Internet of Things (IoT), in which network-enabled electronic devices can be remotely accessed over the Internet. As the realization of the IoT concept is still in its early stages, manufacturers of Internet-connected devices and IoT web service providers are defining their proprietary protocols based on their targeted applications. Consequently, the IoT becomes heterogeneous in terms of hardware capabilities and communication protocols. Addressing these heterogeneities by following open standards is a necessary step toward communicating with various IoT devices. In this research, we assess the feasibility of applying existing open standards on resource-constrained IoT devices. The standard protocols developed in this research are OGC PUCK over Bluetooth, TinySOS, SOS over CoAP, and the OGC SensorThings API. We believe that by hosting open standard protocols on IoT devices, not only do the devices become self-describable, self-contained, and interoperable, but innovative applications can also be easily developed with standardized interfaces. In addition, we use memory consumption, request message size, response message size, and response latency to benchmark the efficiency of the implemented protocols. In all, this research presents and evaluates standards-based solutions to better understand the feasibility of applying existing standards to the IoT vision. PMID:26402683
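    Of the four standards evaluated, the OGC SensorThings API is REST/JSON-based, which makes a small client easy to sketch. The root URL below is a hypothetical deployment; the resource paths and the $expand/$top/$orderby query options and @iot.id field follow the SensorThings specification.

    ```python
    # Sketch: read the latest observation per datastream from a SensorThings API.
    import requests

    ROOT = "https://example.org/sensorthings/v1.0"   # hypothetical endpoint

    things = requests.get(
        f"{ROOT}/Things",
        params={"$expand": "Datastreams", "$top": 5},
        timeout=30,
    ).json()

    for thing in things.get("value", []):
        for ds in thing.get("Datastreams", []):
            obs = requests.get(
                f"{ROOT}/Datastreams({ds['@iot.id']})/Observations",
                params={"$top": 1, "$orderby": "phenomenonTime desc"},
                timeout=30,
            ).json()
            for o in obs.get("value", []):
                print(thing["name"], ds["name"], o["result"], o["phenomenonTime"])
    ```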

  5. Automated geospatial Web Services composition based on geodata quality requirements

    NASA Astrophysics Data System (ADS)

    Cruz, Sérgio A. B.; Monteiro, Antonio M. V.; Santos, Rafael

    2012-10-01

    Service-Oriented Architecture and Web Services technologies improve the performance of activities involved in geospatial analysis with a distributed computing architecture. However, the design of the geospatial analysis process on this platform, by combining component Web Services, presents some open issues. The automated construction of these compositions represents an important research topic. Some approaches to solving this problem are based on AI planning methods coupled with semantic service descriptions. This work presents a new approach using AI planning methods to improve the robustness of the produced geospatial Web Services composition. For this purpose, we use semantic descriptions of geospatial data quality requirements in a rule-based form. These rules allow the semantic annotation of geospatial data and, coupled with the conditional planning method, represent more precisely the geodata quality nonconformities that may occur during the execution of the Web Service composition. The service compositions produced by this method are more robust, thus improving process reliability when working with a composition of chained geospatial Web Services.
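    As a toy illustration of the idea (not the authors' planner), the sketch below encodes one geodata quality requirement as a rule and lets the plan branch to a repair step when a nonconformity is detected, in the spirit of conditional planning. All names and thresholds are invented.

    ```python
    # Toy conditional plan: a quality rule annotates a dataset, and the
    # service chain branches to a repair service when the rule fails.
    def completeness_rule(dataset):
        """Quality requirement: at least 95% of cells must carry valid values."""
        valid = sum(1 for v in dataset["cells"] if v is not None)
        return valid / len(dataset["cells"]) >= 0.95

    def plan(dataset):
        steps = ["reproject"]
        if not completeness_rule(dataset):   # nonconformity detected
            steps.append("gap_fill")         # conditional branch: repair service
        steps.append("overlay_analysis")
        return steps

    sample = {"cells": [1.0, None, 2.5, 3.1, 4.0, None, 5.2, 6.3, 7.7, 8.1]}
    print(plan(sample))  # ['reproject', 'gap_fill', 'overlay_analysis']
    ```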

  6. Study protocol for a pragmatic randomised controlled trial evaluating efficacy of a smoking cessation e-‘Tabac Info Service’: ee-TIS trial

    PubMed Central

    Cambon, L; Bergman, P; Le Faou, Al; Vincent, I; Le Maitre, B; Pasquereau, A; Arwidson, P; Thomas, D; Alla, F

    2017-01-01

    Introduction A French national smoking cessation service, Tabac Info Service, has been developed to provide an adapted quitline and a web and mobile application involving personalised contacts (eg, questionnaires, advice, activities, messages) to support smoking cessation. This paper presents the study protocol of the evaluation of the application (e-intervention Tabac Info Service (e-TIS)). The primary objective is to assess the efficacy of e-TIS. The secondary objectives are to (1) describe efficacy variations with regard to users' characteristics, and (2) analyse the mechanisms and contextual conditions of e-TIS efficacy. Methods and analyses The study design is a two-arm pragmatic randomised controlled trial including a process evaluation, with at least 3000 participants randomised to the intervention or to the control arm (current practices). Inclusion criteria are: aged 18 years or over, current smoker, having completed the online consent forms, possessing a mobile phone with an Android or Apple system and using mobile applications, and wanting to stop smoking sooner or later. The primary outcome is 7-day point prevalence abstinence at 6 months. Data will be analysed in intention-to-treat (primary) and per-protocol analyses. A logistic regression will be carried out to estimate an OR (95% CI) for efficacy. A multivariate multilevel analysis will explore the influence on results of patients' characteristics (sex, age, education and socioprofessional levels, dependency, motivation, quit experiences) and of contextual factors, conditions of use, and behaviour change techniques. Ethics and dissemination The study protocol was reviewed by the ethical and deontological institutional review board of the French Institute for Public Health Surveillance on 18 April 2016. The findings of this study will allow us to characterise the efficacy of e-TIS and the conditions of its efficacy. These findings will be disseminated through peer-reviewed articles. Trial registration number NCT02841683; Pre-results. PMID:28237958

  7. BioCatalogue: a universal catalogue of web services for the life sciences

    PubMed Central

    Bhagat, Jiten; Tanoh, Franck; Nzuobontane, Eric; Laurent, Thomas; Orlowski, Jerzy; Roos, Marco; Wolstencroft, Katy; Aleksejevs, Sergejs; Stevens, Robert; Pettifer, Steve; Lopez, Rodrigo; Goble, Carole A.

    2010-01-01

    The use of Web Services to enable programmatic access to on-line bioinformatics is becoming increasingly important in the Life Sciences. However, their number, distribution and the variable quality of their documentation can make their discovery and subsequent use difficult. A Web Services registry with information on available services will help to bring together service providers and their users. The BioCatalogue (http://www.biocatalogue.org/) provides a common interface for registering, browsing and annotating Web Services to the Life Science community. Services in the BioCatalogue can be described and searched in multiple ways based upon their technical types, bioinformatics categories, user tags, service providers or data inputs and outputs. They are also subject to constant monitoring, allowing the identification of service problems and changes and the filtering-out of unavailable or unreliable resources. The system is accessible via a human-readable ‘Web 2.0’-style interface and a programmatic Web Service interface. The BioCatalogue follows a community approach in which all services can be registered, browsed and incrementally documented with annotations by any member of the scientific community. PMID:20484378

  8. BioCatalogue: a universal catalogue of web services for the life sciences.

    PubMed

    Bhagat, Jiten; Tanoh, Franck; Nzuobontane, Eric; Laurent, Thomas; Orlowski, Jerzy; Roos, Marco; Wolstencroft, Katy; Aleksejevs, Sergejs; Stevens, Robert; Pettifer, Steve; Lopez, Rodrigo; Goble, Carole A

    2010-07-01

    The use of Web Services to enable programmatic access to on-line bioinformatics is becoming increasingly important in the Life Sciences. However, their number, distribution and the variable quality of their documentation can make their discovery and subsequent use difficult. A Web Services registry with information on available services will help to bring together service providers and their users. The BioCatalogue (http://www.biocatalogue.org/) provides a common interface for registering, browsing and annotating Web Services to the Life Science community. Services in the BioCatalogue can be described and searched in multiple ways based upon their technical types, bioinformatics categories, user tags, service providers or data inputs and outputs. They are also subject to constant monitoring, allowing the identification of service problems and changes and the filtering-out of unavailable or unreliable resources. The system is accessible via a human-readable 'Web 2.0'-style interface and a programmatic Web Service interface. The BioCatalogue follows a community approach in which all services can be registered, browsed and incrementally documented with annotations by any member of the scientific community.

  9. Customer Decision Making in Web Services with an Integrated P6 Model

    NASA Astrophysics Data System (ADS)

    Sun, Zhaohao; Sun, Junqing; Meredith, Grant

    Customer decision making (CDM) is an indispensable factor for web services. This article examines CDM in web services with a novel P6 model, which consists of the 6 Ps: privacy, perception, propensity, preference, personalization and promised experience. This model integrates the existing 6 P elements of the marketing mix as the system environment of CDM in web services. The new integrated P6 model deals with the inner world of the customer and incorporates what the customer thinks during the decision-making process. The proposed approach will facilitate the research and development of web services and decision support systems.

  10. Measuring Spontaneous and Instructed Evaluation Processes during Web Search: Integrating Concurrent Thinking-Aloud Protocols and Eye-Tracking Data

    ERIC Educational Resources Information Center

    Gerjets, Peter; Kammerer, Yvonne; Werner, Benita

    2011-01-01

    Web searching for complex information requires searchers to appropriately evaluate diverse sources of information. Information science studies have identified different criteria applied by searchers to evaluate Web information. However, the explicit evaluation instructions used in these studies might have resulted in a distortion of spontaneous evaluation…

  11. Ubiquitous Computing Services Discovery and Execution Using a Novel Intelligent Web Services Algorithm

    PubMed Central

    Choi, Okkyung; Han, SangYong

    2007-01-01

    Ubiquitous Computing makes it possible to determine in real time the location and situations of service requesters in a web service environment as it enables access to computers at any time and in any place. Though research on various aspects of ubiquitous commerce is progressing at enterprises and research centers, both domestically and overseas, analysis of a customer's personal preferences based on the semantic web and rule-based services using semantics is not currently being conducted. This paper proposes a Ubiquitous Computing Services System that enables rule-based as well as semantics-based search, supporting the combination of the electronic and physical spaces into one and thereby making possible the real-time search for web services and the construction of efficient web services.

  12. Some Programs Should Not Run on Laptops - Providing Programmatic Access to Applications Via Web Services

    NASA Astrophysics Data System (ADS)

    Gupta, V.; Gupta, N.; Gupta, S.; Field, E.; Maechling, P.

    2003-12-01

    Modern laptop computers, and personal computers, can provide capabilities that are, in many ways, comparable to workstations or departmental servers. However, this doesn't mean we should run all computations on our local computers. We have identified several situations in which it is preferable to implement our seismological application programs in a distributed, server-based, computing model. In this model, application programs on the user's laptop, or local computer, invoke programs that run on an organizational server, and the results are returned to the invoking system. Situations in which a server-based architecture may be preferred include: (a) a program is written in a language, or written for an operating environment, that is unsupported on the local computer, (b) software libraries or utilities required to execute a program are not available on the user's computer, (c) a computational program is physically too large, or computationally too expensive, to run on a user's computer, (d) a user community wants to enforce a consistent method of performing a computation by standardizing on a single implementation of a program, and (e) the computational program may require current information that is not available to all client computers. Until recently, distributed, server-based, computational capabilities were implemented using client/server architectures. In these architectures, client programs were often written in the same language, and they executed in the same computing environment, as the servers. Recently, a new distributed computational model, called Web Services, has been developed. Web Services are based on Internet standards such as XML, SOAP, WSDL, and UDDI. Web Services offer the promise of platform- and language-independent distributed computing. To investigate this new computational model, and to provide useful services to the SCEC Community, we have implemented several computational and utility programs using a Web Service architecture. We have hosted these Web Services as a part of the SCEC Community Modeling Environment (SCEC/CME) ITR Project (http://www.scec.org/cme). We have implemented Web Services for several of the reasons cited previously. For example, we implemented a FORTRAN-based Earthquake Rupture Forecast (ERF) as a Web Service for use by client computers that don't support a FORTRAN runtime environment. We implemented a Generic Mapping Tool (GMT) Web Service for use by systems that don't have local access to GMT. We implemented a Hazard Map Calculator Web Service to execute hazard calculations that are too computationally intensive to run on a local system. We implemented a Coordinate Conversion Web Service to enforce a standard and consistent method for converting between UTM and Lat/Lon. Our experience developing these services indicates both strengths and weaknesses in current Web Service technology. Client programs that utilize Web Services typically need network access, a significant disadvantage at times. Programs with simple input and output parameters were the easiest to implement as Web Services, while programs with complex parameter types required a significant amount of additional development. We also noted that Web Services are very data-oriented, and adapting object-oriented software into the Web Service model proved problematic. Also, the Web Service approach of converting data types into XML format for network transmission has significant inefficiencies for some data sets.
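    Since the abstract names SOAP and WSDL as the underpinnings, a bare-bones client call is worth sketching. The endpoint URL, XML namespace, and operation name below are hypothetical stand-ins, not the actual SCEC/CME interfaces; a real client would typically generate stubs from the service's WSDL instead of hand-building the envelope.

    ```python
    # Sketch: invoke a SOAP 1.1 web service by POSTing a hand-built envelope.
    import requests

    ENDPOINT = "https://example.org/services/CoordinateConversion"  # hypothetical

    envelope = """<?xml version="1.0" encoding="UTF-8"?>
    <soap:Envelope xmlns:soap="http://schemas.xmlsoap.org/soap/envelope/">
      <soap:Body>
        <LatLonToUTM xmlns="urn:example:coords">
          <latitude>34.05</latitude>
          <longitude>-118.25</longitude>
        </LatLonToUTM>
      </soap:Body>
    </soap:Envelope>"""

    resp = requests.post(
        ENDPOINT,
        data=envelope.encode("utf-8"),
        headers={"Content-Type": "text/xml; charset=utf-8",
                 "SOAPAction": "urn:example:coords#LatLonToUTM"},
        timeout=60,
    )
    print(resp.status_code)
    print(resp.text)  # SOAP response envelope carrying the UTM coordinates
    ```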

  13. The Organizational Role of Web Services

    ERIC Educational Resources Information Center

    Mitchell, Erik

    2011-01-01

    The workload of Web librarians is already split between Web-related and other library tasks. But today's technological environment has created new implications for existing services and new demands for staff time. It is time to reconsider how libraries can best allocate resources to provide effective Web services. Delivering high-quality services…

  14. 78 FR 60303 - Agency Information Collection Activities: Online Survey of Web Services Employers; New...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-10-01

    ...-NEW] Agency Information Collection Activities: Online Survey of Web Services Employers; New... Web site at http://www.Regulations.gov under e-Docket ID number USCIS-2013- 0003. When submitting... information collection. (2) Title of the Form/Collection: Online Survey of Web Services Employers. (3) Agency...

  15. 78 FR 42537 - Agency Information Collection Activities: Online Survey of Web Services Employers; New...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-07-16

    ...-NEW] Agency Information Collection Activities: Online Survey of Web Services Employers; New... Information Collection: New information collection. (2) Title of the Form/Collection: Online Survey of Web... sector. It is necessary that USCIS obtains data on the E-Verify Program Web Services. Gaining an...

  16. Protecting Database Centric Web Services against SQL/XPath Injection Attacks

    NASA Astrophysics Data System (ADS)

    Laranjeiro, Nuno; Vieira, Marco; Madeira, Henrique

    Web services represent a powerful interface for back-end database systems and are increasingly being used in business critical applications. However, field studies show that a large number of web services are deployed with security flaws (e.g., having SQL Injection vulnerabilities). Although several techniques for the identification of security vulnerabilities have been proposed, developing non-vulnerable web services is still a difficult task. In fact, security-related concerns are hard to apply as they involve adding complexity to already complex code. This paper proposes an approach to secure web services against SQL and XPath Injection attacks by transparently detecting and aborting service invocations that try to take advantage of potential vulnerabilities. Our mechanism was applied to secure several web services specified by the TPC-App benchmark, proving to be 100% effective in stopping attacks, non-intrusive, and very easy to use.
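    The paper's mechanism works by intercepting service invocations; as a complementary illustration of the vulnerability class itself, the sketch below contrasts string-concatenated SQL with the standard parameterized-query defence, using Python's sqlite3 module and a hypothetical "items" table.

    ```python
    # Demonstration of an SQL injection payload and the parameterized-query fix.
    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE items (id INTEGER, name TEXT)")
    conn.execute("INSERT INTO items VALUES (1, 'widget')")

    user_input = "widget' OR '1'='1"  # a classic SQL injection payload

    # VULNERABLE: string concatenation lets the payload rewrite the query.
    query = "SELECT * FROM items WHERE name = '" + user_input + "'"
    print(conn.execute(query).fetchall())  # returns every row

    # SAFE: the driver binds the value as data, never as SQL text.
    safe = conn.execute("SELECT * FROM items WHERE name = ?", (user_input,))
    print(safe.fetchall())  # returns no rows
    ```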

  17. Optimising text messaging to improve adherence to web-based smoking cessation treatment: a randomised control trial protocol.

    PubMed

    Graham, Amanda L; Jacobs, Megan A; Cohn, Amy M; Cha, Sarah; Abroms, Lorien C; Papandonatos, George D; Whittaker, Robyn

    2016-03-30

    Millions of smokers use the Internet for smoking cessation assistance each year; however, most smokers engage minimally with even the best designed websites. The ubiquity of mobile devices and their effectiveness in promoting adherence in other areas of health behaviour change make them a promising tool to address adherence in Internet smoking cessation interventions. Text messaging is used by most adults, and messages can proactively encourage use of a web-based intervention. Text messaging can also be integrated with an Internet intervention to facilitate the use of core Internet intervention components. We identified four aspects of a text message intervention that may enhance its effectiveness in promoting adherence to a web-based smoking cessation programme: personalisation, integration, dynamic tailoring and message intensity. Phase I will use a two-level full factorial design to test the impact of these four experimental features on adherence to a web-based intervention. The primary outcome is a composite metric of adherence that incorporates general utilisation metrics (eg, logins, page views) and specific feature utilisation shown to predict abstinence. Participants will be N=860 adult smokers who register on an established Internet cessation programme and enrol in its text message programme. Phase II will be a two-arm randomised trial to compare the efficacy of the web-based cessation programme alone and in conjunction with the optimised text messaging intervention on 30-day point prevalence abstinence at 9 months. Phase II participants will be N=600 adult smokers who register to use an established Internet cessation programme and enrol in text messaging. Secondary analyses will explore whether adherence mediates the effect of treatment condition on outcome. This protocol was approved by Chesapeake IRB. We will disseminate study results through peer-reviewed manuscripts and conference presentations related to the methods and design, outcomes and exploratory analyses. NCT02585206. Published by the BMJ Publishing Group Limited. For permission to use (where not already granted under a licence) please go to http://www.bmj.com/company/products-services/rights-and-licensing/

  18. A Web service substitution method based on service cluster nets

    NASA Astrophysics Data System (ADS)

    Du, YuYue; Gai, JunJing; Zhou, MengChu

    2017-11-01

    Service substitution is an important research topic in the fields of Web services and service-oriented computing. This work presents a novel method to analyse and substitute Web services. A new concept, called a Service Cluster Net Unit, is proposed based on Web service clusters. A service cluster is converted into a Service Cluster Net Unit. Then it is used to analyse whether the services in the cluster can satisfy some service requests. Meanwhile, the substitution methods of an atomic service and a composite service are proposed. The correctness of the proposed method is proved, and the effectiveness is shown and compared with the state-of-the-art method via an experiment. It can be readily applied to e-commerce service substitution to meet the business automation needs.

  19. Network and User-Perceived Performance of Web Page Retrievals

    NASA Technical Reports Server (NTRS)

    Kruse, Hans; Allman, Mark; Mallasch, Paul

    1998-01-01

    The development of the HTTP protocol has been driven by the need to improve the network performance of the protocol by allowing the efficient retrieval of multiple parts of a web page without the need for multiple simultaneous TCP connections between a client and a server. We suggest that the retrieval of multiple page elements sequentially over a single TCP connection may result in a degradation of the perceived performance experienced by the user. We attempt to quantify this perceived degradation through the use of a model which combines a web retrieval simulation and an analytical model of TCP operation. Starting with the current HTTP/1.1 specification, we first suggest a client-side heuristic to improve the perceived transfer performance. We show that the perceived speed of the page retrieval can be increased without sacrificing data transfer efficiency. We then propose a new client/server extension to the HTTP/1.1 protocol to allow for the interleaving of page element retrievals. We finally address the issue of the display of advertisements on web pages, and in particular suggest a number of mechanisms which can make efficient use of IP multicast to send advertisements to a number of clients within the same network.
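    The behaviour the paper models, sequential retrieval of page elements over one persistent HTTP/1.1 connection, can be sketched with the standard library; the host and paths below are placeholders.

    ```python
    # Sketch: fetch several page elements back-to-back on one persistent
    # HTTP/1.1 connection; each response completes before the next request.
    import http.client

    conn = http.client.HTTPConnection("example.com")  # one TCP connection

    for path in ["/index.html", "/style.css", "/logo.png"]:
        conn.request("GET", path)
        resp = conn.getresponse()
        body = resp.read()  # must drain the body before reusing the connection
        print(path, resp.status, len(body), "bytes")

    conn.close()
    ```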

  20. Computer Based Collaborative Problem Solving for Introductory Courses in Physics

    NASA Astrophysics Data System (ADS)

    Ilie, Carolina; Lee, Kevin

    2010-03-01

    We discuss collaborative problem solving in a computer-based recitation style. The course was designed by Lee [1], and the idea was proposed earlier by Christian, Belloni and Titus [2,3]. The students find the problems on a web page containing simulations (physlets) and write the solutions on an accompanying worksheet after discussing them with a classmate. Physlets have the advantage of being much more like real-world problems than textbook problems. We also compare two protocols for web-based instruction using simulations in an introductory physics class [1]. The inquiry protocol allowed students to control input parameters while the worked example protocol did not. We will discuss which of the two methods is more efficient in relation to Scientific Discovery Learning and Cognitive Load Theory. 1. Lee, Kevin M., Nicoll, Gayle and Brooks, Dave W. (2004). "A Comparison of Inquiry and Worked Example Web-Based Instruction Using Physlets", Journal of Science Education and Technology 13, No. 1: 81-88. 2. Christian, W., and Belloni, M. (2001). Physlets: Teaching Physics With Interactive Curricular Material, Prentice Hall, Englewood Cliffs, NJ. 3. Christian, W., and Titus, A. (1998). "Developing web-based curricula using Java Physlets." Computers in Physics 12: 227-232.

  1. Web Services as Public Services: Are We Supporting Our Busiest Service Point?

    ERIC Educational Resources Information Center

    Riley-Huff, Debra A.

    2009-01-01

    This article is an analysis of academic library organizational culture, patterns, and processes as they relate to Web services. Data gathered in a research survey is examined in an attempt to reveal current departmental and administrative attitudes, practices, and support for Web services in the library research environment. (Contains 10 tables.)

  2. Virtualization of open-source secure web services to support data exchange in a pediatric critical care research network

    PubMed Central

    Sward, Katherine A; Newth, Christopher JL; Khemani, Robinder G; Cryer, Martin E; Thelen, Julie L; Enriquez, Rene; Shaoyu, Su; Pollack, Murray M; Harrison, Rick E; Meert, Kathleen L; Berg, Robert A; Wessel, David L; Shanley, Thomas P; Dalton, Heidi; Carcillo, Joseph; Jenkins, Tammara L; Dean, J Michael

    2015-01-01

    Objectives To examine the feasibility of deploying a virtual web service for sharing data within a research network, and to evaluate the impact on data consistency and quality. Material and Methods Virtual machines (VMs) encapsulated an open-source, semantically and syntactically interoperable secure web service infrastructure along with a shadow database. The VMs were deployed to 8 Collaborative Pediatric Critical Care Research Network Clinical Centers. Results Virtual web services could be deployed in hours. The interoperability of the web services reduced format misalignment from 56% to 1% and demonstrated that 99% of the data transferred consistently using the data dictionary, while 1% needed human curation. Conclusions Use of virtualized open-source secure web service technology could enable direct electronic abstraction of data from hospital databases for research purposes. PMID:25796596

  3. Comparison: Mediation Solutions of WSMOLX and WebML/WebRatio

    NASA Astrophysics Data System (ADS)

    Zaremba, Maciej; Zaharia, Raluca; Turati, Andrea; Brambilla, Marco; Vitvar, Tomas; Ceri, Stefano

    In this chapter we compare the WSMO/WSML/WSMX and WebML/WebRatio approaches to the SWS-Challenge workshop mediation scenario in terms of the underlying technologies utilized and the solutions delivered. In the mediation scenario one partner uses RosettaNet to define its B2B protocol while the other one operates on a proprietary solution. Both teams showed how these partners could be semantically integrated.

  4. Enhancing Self-Efficacy for Help-Seeking Among Transition-Aged Youth in Postsecondary Settings With Mental Health and/or Substance Use Concerns, Using Crowd-Sourced Online and Mobile Technologies: The Thought Spot Protocol

    PubMed Central

    Abi-Jaoude, Alexxa; Johnson, Andrew; Ferguson, Genevieve; Sanches, Marcos; Levinson, Andrea; Robb, Janine; Heffernan, Olivia; Herzog, Tyson; Chaim, Gloria; Cleverley, Kristin; Eysenbach, Gunther; Henderson, Joanna; S Hoch, Jeffrey; Hollenberg, Elisa; Jiang, Huan; Isaranuwatchai, Wanrudee; Law, Marcus; Sharpe, Sarah; Tripp, Tim; Voineskos, Aristotle

    2016-01-01

    Background Seventy percent of lifetime cases of mental illness emerge prior to age 24. While early detection and intervention can address approximately 70% of child and youth cases of mental health concerns, the majority of youth with mental health concerns do not receive the services they need. Objective The objective of this paper is to describe the protocol for optimizing and evaluating Thought Spot, a Web- and mobile-based platform cocreated with end users that is designed to improve the ability of students to access mental health and substance use services. Methods This project will be conducted in 2 distinct phases, which will aim to (1) optimize the existing Thought Spot electronic health/mobile health intervention through youth engagement, and (2) evaluate the impact of Thought Spot on self-efficacy for mental health help-seeking and health literacy among university and college students. Phase 1 will utilize participatory action research and participatory design research to cocreate and coproduce solutions with members of our target audience. Phase 2 will consist of a randomized controlled trial to test the hypothesis that the Thought Spot intervention will show improvements in intentions for, and self-efficacy in, help-seeking for mental health concerns. Results We anticipate that enhancements will include (1) user analytics and feedback mechanisms, (2) peer mentorship and/or coaching functionality, (3) crowd-sourcing and data hygiene, and (4) integration of evidence-based consumer health and research information. Conclusions This protocol outlines the important next steps in understanding the impact of the Thought Spot platform on the behavior of postsecondary, transition-aged youth students when they seek information and services related to mental health and substance use. PMID:27815232

  5. Practices Regarding Rape-related Pregnancy in U.S. Abortion Care Settings.

    PubMed

    Perry, Rachel; Murphy, Molly; Rankin, Kristin M; Cowett, Allison; Harwood, Bryna

    2016-01-01

    We aimed to explore current practices regarding screening for rape and response to disclosure of rape-related pregnancy in the abortion care setting. We performed a cross-sectional, nonprobability survey of U.S. abortion providers. Individuals were recruited in person and via emailed invitations to professional organization member lists. Questions in this web-based survey pertained to providers' practice setting, how they identify rape-related pregnancy, the availability of support services, and their experiences with law enforcement. Providers were asked their perceptions of barriers to care for women who report rape-related pregnancy. Surveys were completed by 279 providers (21% response rate). Most respondents were female (93.1%), and the majority were physicians in a clinical role (69.4%). One-half (49.8%) reported their practice screens for pregnancy resulting from rape, although fewer (34.8%) reported that screening is the method through which most patients with this history are identified. Most (80.6%) refer women with rape-related pregnancy to support services such as rape crisis centers. Relatively few (19.7%) have a specific protocol for care of women who report rape-related pregnancy. Clinics that screen were 79% more likely to have a protocol for care than centers that do not screen. Although the majority (67.4%) reported barriers to identification of women with rape-related pregnancy, fewer (33.3%) reported barriers to connecting them to support services. Practices for identifying and providing care to women with rape-related pregnancy in the abortion care setting are variable. Further research should address barriers to care provision, as well as identifying protocols for care. Copyright © 2016 Jacobs Institute of Women's Health. Published by Elsevier Inc. All rights reserved.

  6. Enhanced Multi-Modal Access to Planetary Exploration

    NASA Technical Reports Server (NTRS)

    Lamarra, Norm; Doyle, Richard; Wyatt, Jay

    2003-01-01

    Tomorrow's Interplanetary Network (IPN) will evolve from JPL's Deep-Space Network (DSN) and provide key capabilities to future investigators, such as simplified acquisition of higher-quality science at remote sites and enriched access to these sites. These capabilities could also be used to foster public interest, e.g., by making it possible for students to explore these environments personally, eventually perhaps interacting with a virtual world whose models could be populated by data obtained continuously from the IPN. Our paper looks at JPL's approach to making this evolution happen, starting from improved communications. Evolving space protocols (e.g., today's CCSDS proximity and file-transfer protocols) will provide the underpinning of such communications in the coming decades, just as today's rich web was enabled by progress in Internet protocols starting from the early 1970s (ARPAnet research). A key architectural thrust of this effort is to deploy persistent infrastructure incrementally, using a layered service model, where later higher-layer capabilities (such as adaptive science planning) are enabled by earlier lower-layer services (such as automated routing of object-based messages). In practice, a shift in mindset is also needed, from an engineering culture raised on point-to-point single-function communications (command uplink, telemetry downlink) to one in which assets are only indirectly accessed, via well-defined interfaces. We are aiming to foster a 'community of access' both among space assets and the humans who control them. This enables appropriate (perhaps eventually optimized) sharing of services and resources to the greater benefit of all participants. We envision such usage to be as automated in the future as using a cell phone is today, with all the steps in creating the real-time link being automated.

  7. An Architecture for Autonomic Web Service Process Planning

    NASA Astrophysics Data System (ADS)

    Moore, Colm; Xue Wang, Ming; Pahl, Claus

    Web service composition is a technology that has received considerable attention in the last number of years. Languages and tools to aid in the process of creating composite Web services have received particular attention. Web service composition is the process of linking single Web services together in order to accomplish more complex tasks. One area of Web service composition that has not received as much attention is dynamic error handling and re-planning, enabling autonomic composition. Given a repository of service descriptions and a task to complete, it is possible for AI planners to automatically create a plan that will achieve this goal. If, however, a service in the plan is unavailable or erroneous, the plan will fail. Motivated by this problem, this paper suggests autonomous re-planning as a means to overcome dynamic problems. Our solution involves automatically recovering from faults and creating a context-dependent alternate plan. We present an architecture that serves as a basis for the central activities of autonomous composition, monitoring, and fault handling.

  8. Genevar: a database and Java application for the analysis and visualization of SNP-gene associations in eQTL studies.

    PubMed

    Yang, Tsun-Po; Beazley, Claude; Montgomery, Stephen B; Dimas, Antigone S; Gutierrez-Arcelus, Maria; Stranger, Barbara E; Deloukas, Panos; Dermitzakis, Emmanouil T

    2010-10-01

    Genevar (GENe Expression VARiation) is a database and Java tool designed to integrate multiple datasets, and provides analysis and visualization of associations between sequence variation and gene expression. Genevar allows researchers to investigate expression quantitative trait loci (eQTL) associations within a gene locus of interest in real time. The database and application can be installed on a standard computer in database mode and, in addition, on a server to share discoveries among affiliations or the broader community over the Internet via web services protocols. http://www.sanger.ac.uk/resources/software/genevar.

  9. Testing the efficacy of web-based cognitive behavioural therapy for adult patients with chronic fatigue syndrome (CBIT): study protocol for a randomized controlled trial.

    PubMed

    Janse, Anthonie; Worm-Smeitink, Margreet; Bussel-Lagarde, José; Bleijenberg, Gijs; Nikolaus, Stephanie; Knoop, Hans

    2015-08-12

    Cognitive behavioural therapy (CBT) is an effective treatment for fatigue and disabilities in patients with chronic fatigue syndrome (CFS). However, treatment capacity is limited. Providing web-based CBT and tailoring the amount of contact with the therapist to the individual needs of the patient may increase the efficiency of the intervention. Web-based CBT for adolescents with CFS has proven to be effective in reducing fatigue and increasing school attendance. In the proposed study, the efficacy of a web-based CBT intervention for adult patients with CFS will be explored. Two different formats of web-based CBT will be tested. In the first format, named protocol-driven feedback, patients report on their progress and receive feedback from a therapist according to a preset schedule. In the second format, named support on demand, feedback and support from the therapist are only given when patients ask for it. The primary objective of the study is to determine the efficacy of a web-based CBT intervention on fatigue severity. A randomized clinical trial will be conducted. Two hundred forty adults who have been diagnosed with CFS according to the US Centers for Disease Control and Prevention (CDC) consensus criteria will be recruited and randomized to one of three conditions: web-based CBT with protocol-driven feedback, web-based CBT with support on demand, or wait list. Feedback will be delivered by therapists specialized in CBT for CFS. Each of the web-based CBT interventions will be compared to a wait-list condition with respect to its effect on the primary outcome measure: fatigue severity. Secondary outcome measures are level of disability, physical functioning, psychological distress, and the proportion of patients with clinically significant improvement in fatigue severity. Outcomes will be assessed at baseline and six months post randomization. The web-based CBT formats will be compared with respect to the time therapists need to deliver the intervention. As far as we know, this is the first randomized controlled trial (RCT) that evaluates the efficacy of a web-based CBT intervention for adult patients with CFS. NTR4013.

  10. Domain-specific Web Service Discovery with Service Class Descriptions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rocco, D; Caverlee, J; Liu, L

    2005-02-14

    This paper presents DynaBot, a domain-specific web service discovery system. The core idea of the DynaBot service discovery system is to use domain-specific service class descriptions powered by an intelligent Deep Web crawler. In contrast to current registry-based service discovery systems, like the several available UDDI registries, DynaBot promotes focused crawling of the Deep Web of services and discovers candidate services that are relevant to the domain of interest. It uses intelligent filtering algorithms to match services found by focused crawling with the domain-specific service class descriptions. We demonstrate the capability of DynaBot through the BLAST service discovery scenario and describe our initial experience with DynaBot.

  11. Study design to develop and pilot-test a web intervention for partners of military service members with alcohol misuse.

    PubMed

    Osilla, Karen Chan; Pedersen, Eric R; Gore, Kristie; Trail, Thomas; Howard, Stefanie Stern

    2014-09-02

    Alcohol misuse among military service members from the recent conflicts in Iraq and Afghanistan is over two times higher than misuse in the civilian population. Unfortunately, in addition to experiencing personal consequences from alcohol misuse, partners and family members of alcohol-misusing service members also suffer in negative ways from their loved one's drinking. These family members represent important catalysts for helping their loved ones identify problem drinking and overcome the barriers to seeking care. This paper describes the protocol for a pilot study evaluating a 4-session, web-based intervention (WBI) for concerned partners (CPs) of service members with alcohol misuse. The WBI will be adapted from the Community Reinforcement and Family Training (CRAFT) intervention. In the first phase, we will develop and beta-test the WBI with 15-20 CPs. In the second phase, we will randomize CPs to the WBI (n = 50) or to a delayed WBI (n = 50) and evaluate the impact of the WBI on CPs' perceptions of service member help-seeking and drinking, as well as the CPs' well-being and relationship satisfaction 3 months after the intervention. In the third phase, we will recruit 15-20 service members whose partners have completed the study. We will interview the service members to learn how the CP-focused WBI affected them and to assess whether they would be receptive to a follow-on WBI module to help them. This project has the potential to benefit a large population of military service members who may be disproportionately affected by recent conflicts and whose drinking misuse would otherwise go undetected and untreated. It also develops a new prevention model that does not rely on service members or partners attending a hospital or clinical facility to access care. NCT02073825.

  12. DIgital Alcohol Management ON Demand (DIAMOND) feasibility randomised controlled trial of a web-based intervention to reduce alcohol consumption in people with hazardous and harmful use versus a face-to-face intervention: protocol.

    PubMed

    Hamilton, Fiona L; Hornby, Jo; Sheringham, Jessica; Kerry, Sally; Linke, Stuart; Solmi, Francesca; Ashton, Charlotte; Moore, Kevin; Murray, Elizabeth

    2015-01-01

    "Hazardous and harmful" drinkers make up approximately 23 % of the adult population in England. However, only around 10 % of these people access specialist care, such as face-to-face extended brief treatment in community alcohol services. This may be due to stigma, difficulty accessing services during working hours, a shortage of trained counsellors and limited provision of services in many places. Web-based alcohol treatment programmes may overcome these barriers and may better suit people who are reluctant or unable to attend face-to-face services, but there is a gap in the evidence base for the acceptability, effectiveness and cost-effectiveness of these programmes compared with treatment as usual (TAU) in community alcohol services. This study aims investigate the feasibility of all parts of a randomised controlled trial (RCT) of a psychologically informed web-based alcohol treatment programme called Healthy Living for People who use Alcohol (HeLP-Alcohol) versus TAU in community alcohol services, e.g. recruitment and retention, online data collection methods, and the use and acceptability of the intervention to participants. A feasibility RCT delivered in north London community alcohol services, comparing HeLP-Alcohol with TAU. Potential participants are aged ≥18 years referred or self-referred for hazardous and harmful use of alcohol, without co-morbidities or other complex problems. The main purpose of this study is to demonstrate the feasibility of recruiting participants to the study and will test online methods for collecting baseline demographic and outcome questionnaire data, randomising participants and collecting 3-month follow-up data. The acceptability of this intervention will be measured by recruitment and retention rates, automated log-in data collection and an online service satisfaction questionnaire. The feasibility of using tailored text message, email or phone prompt to maintain engagement with the intervention will also be explored. Results of the study will inform a definitive Phase 3 RCT. Recruitment started on 26 September 2014 and will run for 1 year. The proposed trial will provide data to inform a fully powered non-inferiority effectiveness and cost-effectiveness RCT comparing HeLP-Alcohol with TAU. ISRCTN31789096.

  13. Available, intuitive and free! Building e-learning modules using web 2.0 services.

    PubMed

    Tam, Chun Wah Michael; Eastwood, Anne

    2012-01-01

    E-learning is part of the mainstream in medical education and often provides the most efficient and effective means of engaging learners in a particular topic. However, translating design and content ideas into a useable product can be technically challenging, especially in the absence of information technology (IT) support. There is little published literature on the use of web 2.0 services to build e-learning activities. To describe the web 2.0 tools and solutions employed to build the GP Synergy evidence-based medicine and critical appraisal online course. We used and integrated a number of free web 2.0 services including: Prezi, a web-based presentation platform; YouTube, a video sharing service; Google Docs, an online document platform; Tiny.cc, a URL shortening service; and Wordpress, a blogging platform. The course, consisting of five multimedia-rich, tutorial-like modules, was built without IT specialist assistance or specialised software. The web 2.0 services used were free. The course can be accessed with a modern web browser. Modern web 2.0 services remove many of the technical barriers to creating and sharing content on the internet. When used synergistically, these services can be a flexible and low-cost platform for building e-learning activities. They were a pragmatic solution in our context.

  14. 76 FR 28439 - Submission for OMB Review; Comment Request; NCI Cancer Genetics Services Directory Web-Based...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-05-17

    ...; Comment Request; NCI Cancer Genetics Services Directory Web-Based Application Form and Update Mailer... currently valid OMB control number. Proposed Collection: Title: NCI Cancer Genetics Services Directory Web... application form and the Web-based update mailer is to collect information about genetics professionals to be...

  15. General Practitioners' Attitudes Toward a Web-Based Mental Health Service for Adolescents: Implications for Service Design and Delivery.

    PubMed

    Subotic-Kerry, Mirjana; King, Catherine; O'Moore, Kathleen; Achilles, Melinda; O'Dea, Bridianne

    2018-03-23

    Anxiety disorders and depression are prevalent among youth. General practitioners (GPs) are often the first point of professional contact for treating health problems in young people. A Web-based mental health service delivered in partnership with schools may facilitate increased access to psychological care among adolescents. However, for such a model to be implemented successfully, GPs' views need to be measured. This study aimed to examine the needs and attitudes of GPs toward a Web-based mental health service for adolescents, and to identify the factors that may affect the provision of this type of service and likelihood of integration. Findings will inform the content and overall service design. GPs were interviewed individually about the proposed Web-based service. Qualitative analysis of transcripts was performed using thematic coding. A short follow-up questionnaire was delivered to assess background characteristics, level of acceptability, and likelihood of integration of the Web-based mental health service. A total of 13 GPs participated in the interview and 11 completed a follow-up online questionnaire. Findings suggest strong support for the proposed Web-based mental health service. A wide range of factors were found to influence the likelihood of GPs integrating a Web-based service into their clinical practice. Coordinated collaboration with parents, students, school counselors, and other mental health care professionals were considered important by nearly all GPs. Confidence in Web-based care, noncompliance of adolescents and GPs, accessibility, privacy, and confidentiality were identified as potential barriers to adopting the proposed Web-based service. GPs were open to a proposed Web-based service for the monitoring and management of anxiety and depression in adolescents, provided that a collaborative approach to care is used, the feedback regarding the client is clear, and privacy and security provisions are assured. ©Mirjana Subotic-Kerry, Catherine King, Kathleen O'Moore, Melinda Achilles, Bridianne O'Dea. Originally published in JMIR Human Factors (http://humanfactors.jmir.org), 23.03.2018.

  16. PaaS for web applications with OpenShift Origin

    NASA Astrophysics Data System (ADS)

    Lossent, A.; Rodriguez Peon, A.; Wagner, A.

    2017-10-01

    The CERN Web Frameworks team has deployed OpenShift Origin to facilitate the deployment of web applications and to improve efficiency in terms of computing resource usage. OpenShift leverages Docker containers and Kubernetes orchestration to provide a Platform-as-a-Service solution oriented toward web applications. We will review use cases and how OpenShift was integrated with other services such as source control, web site management and authentication services.

  17. Automated Web-Based Request Mechanism for Workflow Enhancement in an Academic Customer-Focused Biorepository.

    PubMed

    McDonald, Sandra A; Ryan, Benjamin J; Brink, Amy; Holtschlag, Victoria L

    2012-02-01

    Informatics systems, particularly those that provide capabilities for data storage, specimen tracking, retrieval, and order fulfillment, are critical to the success of biorepositories and other laboratories engaged in translational medical research. A crucial item, one easily overlooked, is an efficient way to receive and process investigator-initiated requests. A successful electronic ordering system should allow request processing in a maximally efficient manner, while also allowing streamlined tracking and mining of request data such as turnaround times and numerical categorizations (user groups, funding sources, protocols, and so on). Ideally, an electronic ordering system also facilitates the initial contact between the laboratory and customers, while still allowing for downstream communications and other steps toward scientific partnerships. We describe here the recently established Web-based ordering system for the biorepository at Washington University Medical Center, along with its benefits for workflow, tracking, and customer service. Because of the system's numerous value-added impacts, we think our experience can serve as a good model for other customer-focused biorepositories, especially those currently using manual or non-Web-based request systems. Our lessons learned also apply to the informatics developers who serve such biobanks.

  18. A Web-based telemedicine system for diabetic retinopathy screening using digital fundus photography.

    PubMed

    Wei, Jack C; Valentino, Daniel J; Bell, Douglas S; Baker, Richard S

    2006-02-01

    The purpose was to design and implement a Web-based telemedicine system for diabetic retinopathy screening using digital fundus cameras and to make the software publicly available through Open Source release. The process of retinal imaging and case reviewing was modeled to optimize workflow and implement use of the computer system. The Web-based system was built on Java Servlet and JavaServer Pages (JSP) technologies. Apache Tomcat was chosen as the JSP engine, while MySQL was used as the main database and the Laboratory of Neuro Imaging (LONI) Image Storage Architecture, from LONI at UCLA, as the platform for image storage. For security, all data transmissions were carried over encrypted Internet connections such as Secure Socket Layer (SSL) and HyperText Transfer Protocol over SSL (HTTPS). User logins were required and access to patient data was logged for auditing. The system was deployed at Hubert H. Humphrey Comprehensive Health Center and Martin Luther King/Drew Medical Center of the Los Angeles County Department of Health Services. Within 4 months, 1500 images of more than 650 patients were taken at Humphrey's Eye Clinic and successfully transferred to King/Drew's Department of Ophthalmology. This study demonstrates an effective architecture for remote diabetic retinopathy screening.

  19. MAPI: a software framework for distributed biomedical applications

    PubMed Central

    2013-01-01

    Background The amount of web-based resources (databases, tools etc.) in biomedicine has increased, but the integrated usage of those resources is complex due to differences in access protocols and data formats. However, distributed data processing is becoming inevitable in several domains, in particular in biomedicine, where researchers face rapidly increasing data sizes. This big data is difficult to process locally because of the large processing, memory and storage capacity required. Results This manuscript describes a framework, called MAPI, which provides a uniform representation of resources available over the Internet, in particular for Web Services. The framework enhances their interoperability and collaborative use by enabling a uniform and remote access. The framework functionality is organized in modules that can be combined and configured in different ways to fulfil concrete development requirements. Conclusions The framework has been tested in the biomedical application domain where it has been a base for developing several clients that are able to integrate different web resources. The MAPI binaries and documentation are freely available at http://www.bitlab-es.com/mapi under the Creative Commons Attribution-No Derivative Works 2.5 Spain License. The MAPI source code is available by request (GPL v3 license). PMID:23311574

  20. Development of a web database portfolio system with PACS connectivity for undergraduate health education and continuing professional development.

    PubMed

    Ng, Curtise K C; White, Peter; McKay, Janice C

    2009-04-01

    Increasingly, the use of web database portfolio systems is noted in medical and health education, and for continuing professional development (CPD). However, the functions of existing systems are not always aligned with the corresponding pedagogy, and hence reflection is often lost. This paper presents the development of a tailored web database portfolio system with Picture Archiving and Communication System (PACS) connectivity, which is based on portfolio pedagogy. Following a pre-determined portfolio framework, a system model is proposed with the components of web, database and mail servers, server-side scripts, and a Query/Retrieve (Q/R) broker for conversion between Hypertext Transfer Protocol (HTTP) requests and the Q/R service class of the Digital Imaging and Communications in Medicine (DICOM) standard. The system was piloted with seventy-seven volunteers. A tailored web database portfolio system (http://radep.hti.polyu.edu.hk) was developed. Technological arrangements for reinforcing portfolio pedagogy include popup windows (reminders) with guidelines and probing questions to 'collect', 'select' and 'reflect' on evidence of development/experience, a limit on the number of files (evidence) to be uploaded, the 'Evidence Insertion' functionality to link the individual uploaded artifacts with reflective writing, the capability to accommodate diverse contents, and convenient interfaces for reviewing portfolios and communication. Evidence to date suggests the system supports users in building their portfolios with sound hypertext reflection under a facilitator's guidance, while allowing reviewers to monitor students' progress and provide feedback and comments online across the programme.
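    On the DICOM side of such a Q/R broker, a query could look roughly like the sketch below, which uses the pynetdicom library to issue a Study-level C-FIND. The host, port, AE titles, and PatientID value are placeholders standing in for what a broker would take from its configuration and the incoming HTTP request.

    ```python
    # Sketch: the DICOM Query/Retrieve leg of an HTTP-to-DICOM broker.
    from pydicom.dataset import Dataset
    from pynetdicom import AE
    from pynetdicom.sop_class import StudyRootQueryRetrieveInformationModelFind

    ae = AE(ae_title="PORTFOLIO")
    ae.add_requested_context(StudyRootQueryRetrieveInformationModelFind)

    query = Dataset()
    query.QueryRetrieveLevel = "STUDY"
    query.PatientID = "12345"        # taken from the incoming HTTP request
    query.StudyInstanceUID = ""      # empty attribute: ask PACS to return it

    assoc = ae.associate("pacs.example.org", 104, ae_title="PACS")
    if assoc.is_established:
        for status, identifier in assoc.send_c_find(
                query, StudyRootQueryRetrieveInformationModelFind):
            if status and status.Status in (0xFF00, 0xFF01) and identifier:
                print(identifier.StudyInstanceUID)   # pending match returned
        assoc.release()
    ```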

  1. Using Semantic Web technologies for the generation of domain-specific templates to support clinical study metadata standards.

    PubMed

    Jiang, Guoqian; Evans, Julie; Endle, Cory M; Solbrig, Harold R; Chute, Christopher G

    2016-01-01

    The Biomedical Research Integrated Domain Group (BRIDG) model is a formal domain analysis model for protocol-driven biomedical research, and serves as a semantic foundation for application and message development in the standards developing organizations (SDOs). The increasing sophistication and complexity of the BRIDG model requires new approaches to the management and utilization of the underlying semantics to harmonize domain-specific standards. The objective of this study is to develop and evaluate a Semantic Web-based approach that integrates the BRIDG model with ISO 21090 data types to generate domain-specific templates to support clinical study metadata standards development. We developed a template generation and visualization system based on an open source Resource Description Framework (RDF) store backend, a SmartGWT-based web user interface, and a "mind map" based tool for the visualization of generated domain-specific templates. We also developed a RESTful Web Service informed by the Clinical Information Modeling Initiative (CIMI) reference model for access to the generated domain-specific templates. A preliminary usability study was performed; all reviewers (n = 3) responded very positively to the evaluation questions on usability and on the system's capability to meet the requirements (average score 4.6). Semantic Web technologies provide a scalable infrastructure and have great potential to enable computable semantic interoperability of models at the intersection of health care and clinical research.
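
    To make the RDF-store backend concrete, here is a minimal sketch using rdflib (a stand-in; the paper does not say which RDF store it used) that loads a toy fragment of a template description and queries it with SPARQL. All names in the Turtle snippet are illustrative only:

    ```python
    from rdflib import Graph

    # A toy fragment of a domain-specific template, in Turtle (names are invented).
    TURTLE = """
    @prefix ex: <http://example.org/bridg-template#> .
    ex:StudySubjectTemplate ex:hasField ex:birthDate ;
                            ex:hasField ex:administrativeGender .
    ex:birthDate ex:datatype "ISO21090:TS" .
    ex:administrativeGender ex:datatype "ISO21090:CD" .
    """

    g = Graph()
    g.parse(data=TURTLE, format="turtle")

    # List each template field together with its ISO 21090 data type.
    QUERY = """
    PREFIX ex: <http://example.org/bridg-template#>
    SELECT ?field ?dtype WHERE {
      ?template ex:hasField ?field .
      ?field ex:datatype ?dtype .
    }"""
    for field, dtype in g.query(QUERY):
        print(field, dtype)
    ```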

  2. 76 FR 14034 - Proposed Collection; Comment Request; NCI Cancer Genetics Services Directory Web-Based...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-03-15

    ... Request; NCI Cancer Genetics Services Directory Web-Based Application Form and Update Mailer Summary: In... Cancer Genetics Services Directory Web-based Application Form and Update Mailer.

  3. Building asynchronous geospatial processing workflows with web services

    NASA Astrophysics Data System (ADS)

    Zhao, Peisheng; Di, Liping; Yu, Genong

    2012-02-01

    Geoscience research and applications often involve a geospatial processing workflow. This workflow includes a sequence of operations that use a variety of tools to collect, translate, and analyze distributed heterogeneous geospatial data. Asynchronous mechanisms, by which clients initiate a request and then resume their processing without waiting for a response, are very useful for complicated workflows that take a long time to run. Geospatial contents and capabilities are increasingly becoming available online as interoperable Web services. This online availability significantly enhances the ability to use Web service chains to build distributed geospatial processing workflows. This paper focuses on how to orchestrate Web services for implementing asynchronous geospatial processing workflows. The theoretical bases for asynchronous Web services and workflows, including asynchrony patterns and message transmission, are examined to explore different asynchronous approaches and workflow architectures that support asynchronous behavior. A sample geospatial processing workflow, from the Open Geospatial Consortium (OGC) Web Services, Phase 6 (OWS-6) testbed, is provided to illustrate the implementation of asynchronous geospatial processing workflows and the challenges in using Web Services Business Process Execution Language (WS-BPEL) to develop them.
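
    The asynchrony pattern discussed here — submit a request, receive a status resource, poll or be called back later — can be sketched as follows. The endpoint names and JSON fields are hypothetical, not taken from OWS-6:

    ```python
    import time
    import requests

    # Submit a long-running geoprocessing request; the server replies immediately
    # with a status URL instead of blocking until the result is ready.
    submit = requests.post("https://example.org/wps/execute",
                           json={"process": "watershed-delineation", "dem": "srtm_tile_42"},
                           timeout=30)
    submit.raise_for_status()
    status_url = submit.json()["statusUrl"]

    # Poll until the workflow step completes, then fetch the result reference.
    while True:
        status = requests.get(status_url, timeout=30).json()
        if status["state"] in ("Succeeded", "Failed"):
            break
        time.sleep(10)  # the client is free to do other work between polls

    print(status.get("resultUrl") or status["state"])
    ```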

  4. OneGeology Web Services and Portal as a global geological SDI - latest standards and technology

    NASA Astrophysics Data System (ADS)

    Duffy, Tim; Tellez-Arenas, Agnes

    2014-05-01

    The global coverage of OneGeology Web Services (www.onegeology.org and portal.onegeology.org) achieved since 2007 from the 120 participating geological surveys will be reviewed and issues arising discussed. Recent enhancements to the OneGeology Web Services capabilities will be covered, including a new up-to-5-star service accreditation scheme utilising version 1.3 of the ISO/OGC Web Map Service standard, core ISO 19115 metadata additions, and Version 2.0 Web Feature Services (WFS) serving the new IUGS-CGI GeoSciML V3.2 geological web data exchange language standard (http://www.geosciml.org/) with its 30+ associated IUGS-CGI vocabularies (http://resource.geosciml.org/ and http://srvgeosciml.brgm.fr/eXist2010/brgm/client.html). Use of the CGI SimpleLithology and timescale dictionaries now allows those who wish to do so to offer data harmonisation when querying their GeoSciML 3.2 based Web Feature Services and their GeoSciML_Portrayal V2.0.1 (http://www.geosciml.org/) Web Map Services in the OneGeology portal (http://portal.onegeology.org). Contributing to OneGeology involves offering to serve geological data, ideally at 1:1,000,000 scale (in practice any scale is now warmly welcomed), as an OGC (Open Geospatial Consortium) standards-based WMS (Web Map Service) from an available WWW server. This may be hosted either within the geological survey itself or at a neighbouring, regional or other institution that offers to serve the data on its behalf, i.e., acts as a 'buddy' by providing the web-serving IT infrastructure. OneGeology is a standards-focussed Spatial Data Infrastructure (SDI) and works to ensure that these standards work together; it is now possible for European geological surveys to register their INSPIRE web services within the OneGeology SDI (e.g. see http://www.geosciml.org/geosciml/3.2/documentation/cookbook/INSPIRE_GeoSciML_Cookbook%20_1.0.pdf). The OneGeology portal (http://portal.onegeology.org) is the first port of call for anyone wishing to discover the availability of global geological web services, and has new functionality to view and use such services, including multiple projection support. Keywords: OneGeology; GeoSciML V3.2; Data exchange; Portal; INSPIRE; Standards; OGC; Interoperability; GeoScience information; WMS; WFS; Cookbook.
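
    For readers unfamiliar with consuming such services, a minimal OWSLib sketch of a WMS 1.3.0 GetMap call against a OneGeology-style endpoint follows. The URL and layer name are placeholders; real ones are discoverable through the portal:

    ```python
    from owslib.wms import WebMapService

    # Placeholder endpoint and layer; actual services are listed at portal.onegeology.org.
    wms = WebMapService("https://example-survey.org/geoserver/wms", version="1.3.0")

    img = wms.getmap(layers=["GBR_BGS_625k_BLT"],   # hypothetical bedrock lithology layer
                     styles=[""],
                     srs="EPSG:4326",
                     bbox=(-8.0, 49.0, 2.0, 61.0),  # lon/lat extent
                     size=(600, 720),
                     format="image/png",
                     transparent=True)

    with open("geology.png", "wb") as f:
        f.write(img.read())
    ```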

  5. A Worldwide Community of Primary and Secondary Students and Their Teachers Engage in and Contribute to Geoscience Research

    NASA Astrophysics Data System (ADS)

    Sparrow, E. B.; Kopplin, M. R.; Yule, S.

    2009-12-01

    The GLOBE (Global Learning and Observations to Benefit the Environment) program is among the most successful long-term citizen-scientist programs, engaging K-12 students, in-service and pre-service teachers, as well as community members in different areas of geoscience investigations: atmosphere/weather, land cover biology, soils, hydrology, and vegetation phenology. What sustains this multi-nation project is the interest and collaboration among scientists, educators, students and the GLOBE Partnerships that are mostly self-supporting and function in the United States and in a hundred other countries. The GLOBE Program Office in the United States continues to offer an overall coordinating and leadership function, a website, an infrastructure, management and support for web data entry and access, as well as visualizations, and a much-used help desk. In Alaska, GLOBE research and activities are maintained through professional development workshops for educators, continued year-long support for teachers and their students (classroom visits, email, mail and newsletters) including program assessments, funded through federal grants to the University of Alaska Fairbanks. The current earth system science Seasons and Biomes project uses GLOBE protocols as well as newly developed ones to fit the needs of the locale, such as ice freeze-up and break-up seasonality protocols for rivers and lakes in tundra, taiga and other northern biomes, and mosquito phenology protocols for tropical and sub-tropical moist broadleaf forests and other biomes in Asia and Africa, invasive plant species for Africa, and modified plant phenology protocols for temperate deciduous forests in Australia. Students contribute data and use archived data as needed when they conduct geoscience research individually, in small groups or as a class and/or collaboratively with others in schools in other parts of the country and the world.

  6. Comparing Two Web/Mail Mixed-Mode Contact Protocols to a Unimode Mail Survey

    ERIC Educational Resources Information Center

    Newberry, Milton G., III; Israel, Glenn D.

    2017-01-01

    Recent research has shown mixed-mode surveys are advantageous for organizations to use in collecting data. Previous research explored web/mail mode effects for four-contact waves. This study explores the effect of web/mail mixed-mode systems over a series of contacts on the customer satisfaction data from the Florida Cooperative Extension Service…

  7. Research of three level match method about semantic web service based on ontology

    NASA Astrophysics Data System (ADS)

    Xiao, Jie; Cai, Fang

    2011-10-01

    An important step in Web service application is the discovery of useful services. Keywords are used for service discovery in traditional technologies like UDDI and WSDL, with the disadvantages of required user intervention, lack of semantic description, and low accuracy. To cope with these problems, OWL-S is introduced and extended with QoS attributes to describe the attributes and functions of Web Services. A three-level service matching algorithm based on ontology and QoS is proposed in this paper. Our algorithm can match web services by utilizing the service profile and QoS parameters together with the input and output of the service. Simulation results show that it greatly enhances the speed of service matching while also guaranteeing high accuracy.
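
    A minimal sketch of the three-level idea follows. The paper does not publish its similarity measures, so the weights and scoring below are purely illustrative:

    ```python
    def match_score(requested, candidate, weights=(0.3, 0.3, 0.4)):
        """Illustrative three-level match: profile keywords, QoS, then I/O signatures.
        Weights and measures are placeholders, not the paper's actual parameters."""
        w_profile, w_qos, w_io = weights

        # Level 1: profile similarity via keyword overlap (Jaccard).
        req_kw, cand_kw = set(requested["keywords"]), set(candidate["keywords"])
        profile = len(req_kw & cand_kw) / len(req_kw | cand_kw) if req_kw | cand_kw else 0.0

        # Level 2: QoS -- candidate must meet or beat each requested threshold.
        qos = sum(candidate["qos"].get(k, 0) >= v for k, v in requested["qos"].items())
        qos /= max(len(requested["qos"]), 1)

        # Level 3: input/output type compatibility.
        matched = sum(t in candidate["inputs"] for t in requested["inputs"])
        matched += sum(t in candidate["outputs"] for t in requested["outputs"])
        total = len(requested["inputs"]) + len(requested["outputs"])
        io = matched / total if total else 0.0

        return w_profile * profile + w_qos * qos + w_io * io

    req = {"keywords": {"geocode"}, "qos": {"reliability": 0.9},
           "inputs": ["Address"], "outputs": ["Coordinates"]}
    cand = {"keywords": {"geocode", "maps"}, "qos": {"reliability": 0.95},
            "inputs": ["Address"], "outputs": ["Coordinates"]}
    print(match_score(req, cand))  # 0.85 with the illustrative weights
    ```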

  8. Interoperability And Value Added To Earth Observation Data

    NASA Astrophysics Data System (ADS)

    Gasperi, J.

    2012-04-01

    Geospatial web services technology has provided a new means for geospatial data interoperability. Open Geospatial Consortium (OGC) services such as Web Map Service (WMS) to request maps on the Internet, Web Feature Service (WFS) to exchange vectors or Catalog Service for the Web (CSW) to search for geospatialized data have been widely adopted in the Geosciences community in general and in the remote sensing community in particular. These services make Earth Observation data available to a wider range of public users than ever before. The mapshup web client offers an innovative and efficient user interface that takes advantage of the power of interoperability. This presentation will demonstrate how mapshup can be effectively used in the context of natural disasters management.
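
    As an illustration of the CSW discovery step mentioned above, here is a short OWSLib sketch; the catalogue URL is a placeholder, and any OGC CSW 2.0.2 endpoint would do:

    ```python
    from owslib.csw import CatalogueServiceWeb
    from owslib.fes import PropertyIsLike

    # Placeholder catalogue endpoint.
    csw = CatalogueServiceWeb("https://example.org/csw")

    # Full-text search for Earth Observation records mentioning "flood".
    csw.getrecords2(constraints=[PropertyIsLike("csw:AnyText", "%flood%")],
                    maxrecords=10)

    for record_id, record in csw.records.items():
        print(record_id, "-", record.title)
    ```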

  9. 39 CFR 3001.12 - Service of documents.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... or presiding officer has determined is unable to receive service through the Commission's Web site... presiding officer has determined is unable to receive service through the Commission Web site shall be by... service list for each current proceeding will be available on the Commission's Web site http://www.prc.gov...

  10. A Mobile Phone App to Support Young People in Making Shared Decisions in Therapy (Power Up): Study Protocol.

    PubMed

    Chapman, Louise; Edbrooke-Childs, Julian; Martin, Kate; Webber, Helen; Craven, Michael P; Hollis, Chris; Deighton, Jessica; Law, Roslyn; Fonagy, Peter; Wolpert, Miranda

    2017-10-30

    Evidence suggests that young people want to be active participants in their care and involved in decisions about their treatment. However, there is a lack of digital shared decision-making tools available to support young people in child and adolescent mental health services (CAMHS). The primary aim of this paper is to present the protocol of a feasibility trial for Power Up, a mobile phone app to empower young people in CAMHS to make their voices heard and participate in decisions around their care. In the development phase, 30 young people, parents, and clinicians will take part in interviews and focus groups to elicit opinions on an early version of the app. In the feasibility testing phase, 60 young people from across 7 to 10 London CAMHS sites will take part in a trial looking at the feasibility and acceptability of measuring the impact of Power Up on shared decision making. Data collection for the development phase ended in December 2016. Data collection for the feasibility testing phase will end in December 2017. Findings will inform the planning of a cluster controlled trial and contribute to the development and implementation of a shared decision-making app to be integrated into CAMHS. ISRCTN77194423; http://www.isrctn.com/ISRCTN77194423 (Archived by WebCite at http://www.webcitation.org/6td6MINP0). ClinicalTrials.gov NCT02987608; https://clinicaltrials.gov/ct2/show/NCT02987608 (Archived by WebCite at http://www.webcitation.org/6td6PNBZM). ©Louise Chapman, Julian Edbrooke-Childs, Kate Martin, Helen Webber, Michael P Craven, Chris Hollis, Jessica Deighton, Roslyn Law, Peter Fonagy, Miranda Wolpert. Originally published in JMIR Research Protocols (http://www.researchprotocols.org), 30.10.2017.

  11. ChemCalc: a building block for tomorrow's chemical infrastructure.

    PubMed

    Patiny, Luc; Borel, Alain

    2013-05-24

    Web services, as an aspect of cloud computing, are becoming an important part of the general IT infrastructure, and scientific computing is no exception to this trend. We propose a simple approach to develop chemical Web services, through which servers could expose the essential data manipulation functionality that students and researchers need for chemical calculations. These services return their results as JSON (JavaScript Object Notation) objects, which facilitates their use for Web applications. The ChemCalc project http://www.chemcalc.org demonstrates this approach: we present three Web services related with mass spectrometry, namely isotopic distribution simulation, peptide fragmentation simulation, and molecular formula determination. We also developed a complete Web application based on these three Web services, taking advantage of modern HTML5 and JavaScript libraries (ChemDoodle and jQuery).
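
    In the spirit of the JSON services described, a client call might look like the sketch below; the endpoint path and parameter name are illustrative, since the exact API is documented on the ChemCalc site itself:

    ```python
    import requests

    # Illustrative request to a ChemCalc-style molecular-formula service.
    # The real endpoints and parameters are documented at http://www.chemcalc.org.
    resp = requests.get("http://www.chemcalc.org/chemcalc/mf",
                        params={"mf": "C6H12O6"},  # glucose
                        timeout=30)
    resp.raise_for_status()

    data = resp.json()  # JSON objects make the result trivially usable in web apps
    print(data)
    ```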

  12. Air Quality uFIND: User-oriented Tool Set for Air Quality Data Discovery and Access

    NASA Astrophysics Data System (ADS)

    Hoijarvi, K.; Robinson, E. M.; Husar, R. B.; Falke, S. R.; Schultz, M. G.; Keating, T. J.

    2012-12-01

    Historically, there have been major impediments to seamless and effective data usage encountered by both data providers and users. Over the last five years, the international Air Quality (AQ) Community has worked through forums such as the Group on Earth Observations AQ Community of Practice, the ESIP AQ Working Group, and the Task Force on Hemispheric Transport of Air Pollution to converge on data format standards (e.g., netCDF), data access standards (e.g., Open Geospatial Consortium Web Coverage Services), metadata standards (e.g., ISO 19115), as well as other conventions (e.g., CF Naming Convention) in order to build an Air Quality Data Network. The centerpiece of the AQ Data Network is the web service-based tool set uFIND: user-oriented Filtering and Identification of Networked Data. The purpose of uFIND is to provide rich and powerful facilities for the user to: a) discover and choose a desired dataset by navigation through the multi-dimensional metadata space using faceted search, b) seamlessly access and browse datasets, and c) use uFIND's facilities as a web service for mashups with other AQ applications and portals. In a user-centric information system such as uFIND, the user experience is improved by metadata that includes the general fields for discovery as well as community-specific metadata to narrow the search beyond space, time and generic keyword searches. However, even with the community-specific additions, the ISO 19115 records were formed in compliance with the standard, so that other standards-based search interfaces could leverage this additional information. To identify the fields necessary for metadata discovery we started with the ISO 19115 Core Metadata fields and fields that were needed for a Catalog Service for the Web (CSW) Record. This fulfilled two goals - one to create valid ISO 19115 records and the other to be able to retrieve the records through a Catalog Service for the Web query. Beyond the required set of fields, the AQ Community added additional fields using a combination of keywords and ISO 19115 fields. These extensions allow discovery by measurement platform or observed phenomena. Beyond discovery metadata, the AQ records include service identification objects that allow standards-based clients, such as some brokers, to access the data found via OGC WCS or WMS data access protocols. uFIND is one such smart client; the combination of discovery and access metadata allows the user to preview each registered dataset through spatial and temporal views, observe data access and usage patterns, and find links to dataset-specific metadata directly in uFIND. The AQ data providers also benefit from this architecture since their data products are easier to find and re-use, enhancing the relevance and importance of their products. Finally, the earth science community at large benefits from the Service Oriented Architecture of uFIND, since it is a service itself and allows service-based interfacing with providers and users of the metadata, allowing uFIND facets to be further refined for a particular AQ application or completely repurposed for other Earth Science domains that use the same set of data access and metadata standards.
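
    The faceted-search idea at the heart of uFIND can be sketched as filtering a metadata collection by facet selections. The record fields below are invented stand-ins for the ISO 19115 and community-specific fields described:

    ```python
    # Toy metadata records with generic (ISO 19115-like) and community-specific fields.
    DATASETS = [
        {"title": "Surface ozone, EU",   "platform": "ground station", "phenomenon": "ozone"},
        {"title": "MODIS AOD",           "platform": "satellite",      "phenomenon": "aerosol"},
        {"title": "Aircraft CO profile", "platform": "aircraft",       "phenomenon": "carbon monoxide"},
    ]

    def faceted_search(records, **facets):
        """Keep only records whose fields match every selected facet value."""
        return [r for r in records
                if all(r.get(field) == value for field, value in facets.items())]

    # Narrow the multi-dimensional metadata space one facet at a time.
    print(faceted_search(DATASETS, platform="satellite"))
    print(faceted_search(DATASETS, platform="ground station", phenomenon="ozone"))
    ```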

  13. Can They Plan to Teach with Web 2.0? Future Teachers' Potential Use of the Emerging Web

    ERIC Educational Resources Information Center

    Kale, Ugur

    2014-01-01

    This study examined pre-service teachers' potential use of Web 2.0 technologies for teaching. A coding scheme incorporating the Technological Pedagogical Content Knowledge (TPACK) framework guided the analysis of pre-service teachers' Web 2.0-enhanced learning activity descriptions. The results indicated that while pre-service teachers were able…

  14. Guide to the Internet. The world wide web.

    PubMed Central

    Pallen, M.

    1995-01-01

    The world wide web provides a uniform, user friendly interface to the Internet. Web pages can contain text and pictures and are interconnected by hypertext links. The addresses of web pages are recorded as uniform resource locators (URLs), transmitted by hypertext transfer protocol (HTTP), and written in hypertext markup language (HTML). Programs that allow you to use the web are available for most operating systems. Powerful on line search engines make it relatively easy to find information on the web. Browsing through the web--"net surfing"--is both easy and enjoyable. Contributing to the web is not difficult, and the web opens up new possibilities for electronic publishing and electronic journals. PMID:8520402

  15. Adopting and adapting a commercial view of web services for the Navy

    NASA Astrophysics Data System (ADS)

    Warner, Elizabeth; Ladner, Roy; Katikaneni, Uday; Petry, Fred

    2005-05-01

    Web Services are being adopted as the enabling technology to provide net-centric capabilities for many Department of Defense operations. The Navy Enterprise Portal, for example, is Web Services-based, and the Department of the Navy is promulgating guidance for developing Web Services. Web Services, however, only constitute a baseline specification that provides the foundation on which users, under current approaches, write specialized applications in order to retrieve data over the Internet. Application development may increase dramatically as the number of different available Web Services increases. Reasons for specialized application development include XML schema versioning differences, adoption/use of diverse business rules, security access issues, and time/parameter naming constraints, among others. We are currently developing for the US Navy a system which will improve delivery of timely and relevant meteorological and oceanographic (MetOc) data to the warfighter. Our objective is to develop an Advanced MetOc Broker (AMB) that leverages Web Services technology to identify, retrieve and integrate relevant MetOc data in an automated manner. The AMB will utilize a Mediator, which will be developed by applying ontological research and schema matching techniques to MetOc forms of data. The AMB, using the Mediator, will support a new, advanced approach to the use of Web Services; namely, the automated identification, retrieval and integration of MetOc data. Systems based on this approach will then not require extensive end-user application development for each Web Service from which data can be retrieved. Users anywhere on the globe will be able to receive timely environmental data that fits their particular needs.

  16. Discovery Mechanisms for the Sensor Web

    PubMed Central

    Jirka, Simon; Bröring, Arne; Stasch, Christoph

    2009-01-01

    This paper addresses the discovery of sensors within the OGC Sensor Web Enablement framework. Whereas services like the OGC Web Map Service or Web Coverage Service are already well supported through catalogue services, the field of sensor networks and the corresponding discovery mechanisms remains a challenge. The focus within this article will be on the use of existing OGC Sensor Web components for realizing a discovery solution. After discussing the requirements for a Sensor Web discovery mechanism, an approach will be presented that was developed within the EU funded project “OSIRIS”. This solution offers mechanisms to search for sensors, exploit basic semantic relationships, harvest sensor metadata and integrate sensor discovery into already existing catalogues. PMID:22574038

  17. AdaFF: Adaptive Failure-Handling Framework for Composite Web Services

    NASA Astrophysics Data System (ADS)

    Kim, Yuna; Lee, Wan Yeon; Kim, Kyong Hoon; Kim, Jong

    In this paper, we propose a novel Web service composition framework which dynamically accommodates various failure recovery requirements. In the proposed framework, called Adaptive Failure-handling Framework (AdaFF), failure-handling submodules are prepared during the design of a composite service, and some of them are systematically selected and automatically combined with the composite Web service at service instantiation in accordance with the requirements of individual users. In contrast, existing frameworks cannot adapt their failure-handling behaviors to users' requirements. AdaFF rapidly delivers a composite service supporting requirement-matched failure handling without manual development, and contributes to flexible composite Web service design in that service architects need not concern themselves with failure handling or the variable requirements of users. As a proof of concept, we implement a prototype system of AdaFF, which automatically generates a composite service instance with Web Services Business Process Execution Language (WS-BPEL) according to user requirements specified in XML format and executes the generated instance on the ActiveBPEL engine.
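
    The selection-at-instantiation idea can be sketched as a strategy pattern: failure-handling submodules prepared at design time, one of which is attached when the composite service is instantiated for a given user. The class and requirement names below are invented for illustration:

    ```python
    # Hypothetical failure-handling submodules, prepared at design time.
    class Retry:
        def handle(self, op):                      # retry the failed operation a few times
            for _ in range(3):
                try:
                    return op()
                except RuntimeError:
                    continue
            raise RuntimeError("retries exhausted")

    class Compensate:
        def handle(self, op):                      # give up and run a compensating action
            try:
                return op()
            except RuntimeError:
                return "compensated"

    HANDLERS = {"best-effort": Retry, "transactional": Compensate}

    class CompositeService:
        def __init__(self, user_requirement):
            # AdaFF-style: the matching submodule is combined with the composite
            # service at instantiation, per the individual user's requirement.
            self.failure_handler = HANDLERS[user_requirement]()

        def invoke(self, op):
            return self.failure_handler.handle(op)

    def flaky_partner():
        raise RuntimeError("partner down")

    svc = CompositeService("transactional")
    print(svc.invoke(flaky_partner))   # -> "compensated"
    ```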

  18. A New Approach for Semantic Web Matching

    NASA Astrophysics Data System (ADS)

    Zamanifar, Kamran; Heidary, Golsa; Nematbakhsh, Naser; Mardukhi, Farhad

    In this work we propose a new approach to semantic web matching to improve the performance of Web Service replacement. Because automatic systems should ensure self-healing, self-configuration, self-optimization and self-management, all services should always be available, and if one of them crashes, it should be replaced with the most similar one. Candidate services are advertised in Universal Description, Discovery and Integration (UDDI), all described in the Web Ontology Language (OWL). With the help of a bipartite graph, we perform the matching between the crashed service and each candidate. We then choose the best service, the one with the maximum matching rate. In effect, we compare two services' functionalities and capabilities to see how closely they match. We found that the best way to match two web services is to compare their functionalities.
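
    As a sketch of the bipartite-matching step (the paper's similarity measure is not specified, so the scores below are invented), SciPy's assignment solver finds the capability pairing with the maximum total match:

    ```python
    import numpy as np
    from scipy.optimize import linear_sum_assignment

    # Rows: capabilities of the crashed service; columns: capabilities of a candidate.
    # Entries are invented pairwise similarity scores in [0, 1].
    similarity = np.array([
        [0.9, 0.1, 0.3],
        [0.2, 0.8, 0.4],
        [0.1, 0.3, 0.7],
    ])

    # Maximum-weight matching on the bipartite similarity graph.
    rows, cols = linear_sum_assignment(similarity, maximize=True)
    rate = similarity[rows, cols].sum() / len(rows)  # overall matching rate

    print(list(zip(rows, cols)), f"matching rate = {rate:.2f}")
    ```

    The candidate whose matching rate is highest across all advertised services would then be chosen as the replacement.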

  19. Climatological Data Option in My Weather Impacts Decision Aid (MyWIDA) Overview

    DTIC Science & Technology

    2017-07-18

    rules. It consists of 2 databases (including ClimoDB), a data service server (with data requestor, data decoder, post processor, and job scheduler components), a collection of web services (including an impact overlay web service), and web applications with a graphical user interface that show weather impacts on selected...

  20. AWS-Glacier As A Storage Foundation For AWS-EC2 Hosted Scientific Data Services

    NASA Astrophysics Data System (ADS)

    Gallagher, J. H. R.; Potter, N.

    2016-12-01

    Using AWS Glacier as a base-level data store for a scientific data service presents new challenges for web-accessible data services, along with their software clients and human operators. All meaningful Glacier transactions take at least 4 hours to complete. This is in contrast to the various web APIs for data such as WMS, WFS, WCS, DAP2, and the NetCDF tools, which were all written on the premise that the response will be (nearly) immediate. Only DAP4 and WPS contain an explicit asynchronous component in their respective protocols which allows for "return later" behaviors. We were able to put Hyrax (a DAP4 server) in front of Glacier-held resources, but there were significant issues. Any kind of probing of the datasets happens at the cost of the Glacier retrieval period, 4 hours. A couple of crucial things fall out of this: The first is that the service must cache metadata, including coordinate map arrays, so that a client can have enough information available in the "immediate" time frame to make decisions about what to ask for from the dataset. This type of request planning is important because a data access request will take 4 hours to complete unless the data resource has been cached. The second thing is that clients need to change their behavior when accessing datasets in an asynchronous system, even if the metadata is cached. Commonly, client applications will request a number of data components from a DAP2 service in the course of "discovering" the dataset. This may not be a well-supported model of interaction with Glacier or any other high-latency data store.
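
    A hedged boto3 sketch of the 4-hour interaction pattern follows; the vault and archive identifiers are placeholders, and a production service would persist the job ID and answer clients from cached metadata rather than poll in-process:

    ```python
    import time
    import boto3

    glacier = boto3.client("glacier")

    # Kick off an archive retrieval; Glacier will take roughly 4 hours to stage the data.
    job = glacier.initiate_job(
        vaultName="science-archive",                        # placeholder vault
        jobParameters={"Type": "archive-retrieval",
                       "ArchiveId": "EXAMPLE-ARCHIVE-ID"})  # placeholder archive
    job_id = job["jobId"]

    # An asynchronous data service would store job_id and keep serving cached
    # metadata in the meantime; here we simply poll until Glacier finishes.
    while not glacier.describe_job(vaultName="science-archive", jobId=job_id)["Completed"]:
        time.sleep(900)  # check every 15 minutes

    output = glacier.get_job_output(vaultName="science-archive", jobId=job_id)
    data = output["body"].read()
    ```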

  1. Processing biological literature with customizable Web services supporting interoperable formats.

    PubMed

    Rak, Rafal; Batista-Navarro, Riza Theresa; Carter, Jacob; Rowley, Andrew; Ananiadou, Sophia

    2014-01-01

    Web services have become a popular means of interconnecting solutions for processing a body of scientific literature. This has fuelled research on high-level data exchange formats suitable for a given domain and ensuring the interoperability of Web services. In this article, we focus on the biological domain and consider four interoperability formats, BioC, BioNLP, XMI and RDF, that represent domain-specific and generic representations and include well-established as well as emerging specifications. We use the formats in the context of customizable Web services created in our Web-based, text-mining workbench Argo that features an ever-growing library of elementary analytics and capabilities to build and deploy Web services straight from a convenient graphical user interface. We demonstrate a 2-fold customization of Web services: by building task-specific processing pipelines from a repository of available analytics, and by configuring services to accept and produce a combination of input and output data interchange formats. We provide qualitative evaluation of the formats as well as quantitative evaluation of automatic analytics. The latter was carried out as part of our participation in the fourth edition of the BioCreative challenge. Our analytics built into Web services for recognizing biochemical concepts in BioC collections achieved the highest combined scores out of 10 participating teams. Database URL: http://argo.nactem.ac.uk. © The Author(s) 2014. Published by Oxford University Press.

  2. Data partitioning enables the use of standard SOAP Web Services in genome-scale workflows.

    PubMed

    Sztromwasser, Pawel; Puntervoll, Pål; Petersen, Kjell

    2011-07-26

    Biological databases and computational biology tools are provided by research groups around the world, and made accessible on the Web. Combining these resources is a common practice in bioinformatics, but integration of heterogeneous and often distributed tools and datasets can be challenging. To date, this challenge has been commonly addressed in a pragmatic way, by tedious and error-prone scripting. Recently, however, a more reliable technique has been identified and proposed as the platform to tie together bioinformatics resources, namely Web Services. In the last decade Web Services have spread widely in bioinformatics and earned the status of a recommended technology. However, in the era of high-throughput experimentation, a major concern regarding Web Services is their ability to handle large-scale data traffic. We propose a stream-like communication pattern for standard SOAP Web Services that enables an efficient flow of large data traffic between a workflow orchestrator and Web Services. We evaluated the data-partitioning strategy by comparing it with typical communication patterns on an example pipeline for genomic sequence annotation. The results show that data partitioning lowers the resource demands of services and increases their throughput, which in turn makes it possible to execute in silico experiments at genome scale using standard SOAP Web Services and workflows. As a proof of principle we annotated an RNA-seq dataset using a plain BPEL workflow engine.
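
    The data-partitioning pattern can be sketched as a generator that slices a large dataset into SOAP-sized chunks. The zeep client, the WSDL URL and the annotateChunk operation are hypothetical stand-ins for the services in the authors' annotation pipeline:

    ```python
    from zeep import Client  # generic SOAP client; WSDL and operation are hypothetical

    def partition(records, chunk_size):
        """Yield fixed-size slices so no single SOAP message carries the whole dataset."""
        for start in range(0, len(records), chunk_size):
            yield records[start:start + chunk_size]

    client = Client("https://example.org/annotation?wsdl")  # hypothetical WSDL

    sequences = [f">seq{i}\nACGT..." for i in range(10_000)]
    annotated = []
    for chunk in partition(sequences, chunk_size=100):
        # Each call stays small, keeping the service's memory demands flat
        # while the orchestrator streams chunks through the pipeline.
        annotated.extend(client.service.annotateChunk(sequences=chunk))
    ```

    The chunk size trades per-message overhead against the peak memory load each call places on the service.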

  3. Processing biological literature with customizable Web services supporting interoperable formats

    PubMed Central

    Rak, Rafal; Batista-Navarro, Riza Theresa; Carter, Jacob; Rowley, Andrew; Ananiadou, Sophia

    2014-01-01

    Web services have become a popular means of interconnecting solutions for processing a body of scientific literature. This has fuelled research on high-level data exchange formats suitable for a given domain and ensuring the interoperability of Web services. In this article, we focus on the biological domain and consider four interoperability formats, BioC, BioNLP, XMI and RDF, that represent domain-specific and generic representations and include well-established as well as emerging specifications. We use the formats in the context of customizable Web services created in our Web-based, text-mining workbench Argo that features an ever-growing library of elementary analytics and capabilities to build and deploy Web services straight from a convenient graphical user interface. We demonstrate a 2-fold customization of Web services: by building task-specific processing pipelines from a repository of available analytics, and by configuring services to accept and produce a combination of input and output data interchange formats. We provide qualitative evaluation of the formats as well as quantitative evaluation of automatic analytics. The latter was carried out as part of our participation in the fourth edition of the BioCreative challenge. Our analytics built into Web services for recognizing biochemical concepts in BioC collections achieved the highest combined scores out of 10 participating teams. Database URL: http://argo.nactem.ac.uk. PMID:25006225

  4. 31 CFR 515.578 - Exportation of certain services incident to Internet-based communications.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... Internet, such as instant messaging, chat and email, social networking, sharing of photos and movies, web... direct or indirect exportation of web-hosting services that are for purposes other than personal communications (e.g., web-hosting services for commercial endeavors) or of domain name registration services. (4...

  5. 31 CFR 515.578 - Exportation of certain services incident to Internet-based communications.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... Internet, such as instant messaging, chat and email, social networking, sharing of photos and movies, web... direct or indirect exportation of web-hosting services that are for purposes other than personal communications (e.g., web-hosting services for commercial endeavors) or of domain name registration services. (4...

  6. REMORA: a pilot in the ocean of BioMoby web-services.

    PubMed

    Carrere, Sébastien; Gouzy, Jérôme

    2006-04-01

    Emerging web-services technology allows interoperability between multiple distributed architectures. Here, we present REMORA, a web server implemented according to the BioMoby web-service specifications, providing life science researchers with an easy-to-use workflow generator and launcher, a repository of predefined workflows and a survey system. Jerome.Gouzy@toulouse.inra.fr The REMORA web server is freely available at http://bioinfo.genopole-toulouse.prd.fr/remora, sources are available upon request from the authors.

  7. National Centers for Environmental Prediction

    Science.gov Websites

    ... Government's official Web portal to all Federal, state and local government Web resources and services. NOAA / National Weather Service, National Centers for Environmental Prediction.

  8. Component, Context, and Manufacturing Model Library (C2M2L)

    DTIC Science & Technology

    2012-11-01

    MML Population and Web Service Interface... Relevant Questions with Associated Web Services... the models, and implementing web services that provide semantically aware programmatic access to the models, including implementing the MS&T

  9. The anti-human trafficking collaboration model and serving victims: Providers' perspectives on the impact and experience.

    PubMed

    Kim, Hea-Won; Park, Taekyung; Quiring, Stephanie; Barrett, Diana

    2018-01-01

    A coalition model is often used to serve victims of human trafficking, but little is known about whether the model adequately meets the needs of the victims. The purpose of this study was to examine the anti-human trafficking collaboration model in terms of its impact and the collaborative experience, including challenges and lessons learned from the service providers' perspective. A mixed-methods study was conducted to evaluate the impact of a citywide anti-trafficking coalition model from the providers' perspectives. A Web-based survey was administered to service providers (n = 32) and focus groups were conducted with Core Group members (n = 10). Providers reported the coalition model has made important impacts in the community by increasing coordination among the key agencies, law enforcement, and service providers and improving the quality of service provision. Providers identified the improved and expanded partnerships among coalition members as the key contributing factor to the success of the coalition model. Several key strategies were suggested to improve the coalition model: improved referral tracking, key partner and protocol development, and information sharing.

  10. The Semantic Automated Discovery and Integration (SADI) Web service Design-Pattern, API and Reference Implementation

    PubMed Central

    2011-01-01

    Background The complexity and inter-related nature of biological data poses a difficult challenge for data and tool integration. There has been a proliferation of interoperability standards and projects over the past decade, none of which has been widely adopted by the bioinformatics community. Recent attempts have focused on the use of semantics to assist integration, and Semantic Web technologies are being welcomed by this community. Description SADI - Semantic Automated Discovery and Integration - is a lightweight set of fully standards-compliant Semantic Web service design patterns that simplify the publication of services of the type commonly found in bioinformatics and other scientific domains. Using Semantic Web technologies at every level of the Web services "stack", SADI services consume and produce instances of OWL Classes following a small number of very straightforward best-practices. In addition, we provide codebases that support these best-practices, and plug-in tools to popular developer and client software that dramatically simplify deployment of services by providers, and the discovery and utilization of those services by their consumers. Conclusions SADI Services are fully compliant with, and utilize only foundational Web standards; are simple to create and maintain for service providers; and can be discovered and utilized in a very intuitive way by biologist end-users. In addition, the SADI design patterns significantly improve the ability of software to automatically discover appropriate services based on user-needs, and automatically chain these into complex analytical workflows. We show that, when resources are exposed through SADI, data compliant with a given ontological model can be automatically gathered, or generated, from these distributed, non-coordinating resources - a behaviour we have not observed in any other Semantic system. Finally, we show that, using SADI, data dynamically generated from Web services can be explored in a manner very similar to data housed in static triple-stores, thus facilitating the intersection of Web services and Semantic Web technologies. PMID:22024447

  11. The Semantic Automated Discovery and Integration (SADI) Web service Design-Pattern, API and Reference Implementation.

    PubMed

    Wilkinson, Mark D; Vandervalk, Benjamin; McCarthy, Luke

    2011-10-24

    The complexity and inter-related nature of biological data poses a difficult challenge for data and tool integration. There has been a proliferation of interoperability standards and projects over the past decade, none of which has been widely adopted by the bioinformatics community. Recent attempts have focused on the use of semantics to assist integration, and Semantic Web technologies are being welcomed by this community. SADI - Semantic Automated Discovery and Integration - is a lightweight set of fully standards-compliant Semantic Web service design patterns that simplify the publication of services of the type commonly found in bioinformatics and other scientific domains. Using Semantic Web technologies at every level of the Web services "stack", SADI services consume and produce instances of OWL Classes following a small number of very straightforward best-practices. In addition, we provide codebases that support these best-practices, and plug-in tools to popular developer and client software that dramatically simplify deployment of services by providers, and the discovery and utilization of those services by their consumers. SADI Services are fully compliant with, and utilize only foundational Web standards; are simple to create and maintain for service providers; and can be discovered and utilized in a very intuitive way by biologist end-users. In addition, the SADI design patterns significantly improve the ability of software to automatically discover appropriate services based on user-needs, and automatically chain these into complex analytical workflows. We show that, when resources are exposed through SADI, data compliant with a given ontological model can be automatically gathered, or generated, from these distributed, non-coordinating resources - a behaviour we have not observed in any other Semantic system. Finally, we show that, using SADI, data dynamically generated from Web services can be explored in a manner very similar to data housed in static triple-stores, thus facilitating the intersection of Web services and Semantic Web technologies.

  12. Providing Internet Access to High-Resolution Lunar Images

    NASA Technical Reports Server (NTRS)

    Plesea, Lucian

    2008-01-01

    The OnMoon server is a computer program that provides Internet access to high-resolution Lunar images, maps, and elevation data, all suitable for use in geographical information system (GIS) software for generating images, maps, and computational models of the Moon. The OnMoon server implements the Open Geospatial Consortium (OGC) Web Map Service (WMS) server protocol and supports Moon-specific extensions. Unlike other Internet map servers that provide Lunar data using an Earth coordinate system, the OnMoon server supports encoding of data in Moon-specific coordinate systems. The OnMoon server offers access to most of the available high-resolution Lunar image and elevation data. This server can generate image and map files in the tagged image file format (TIFF) or the Joint Photographic Experts Group (JPEG), 8- or 16-bit Portable Network Graphics (PNG), or Keyhole Markup Language (KML) format. Image control is provided by use of the OGC Style Layer Descriptor (SLD) protocol. Full-precision spectral arithmetic processing is also available, by use of a custom SLD extension. This server can dynamically add shaded relief based on the Lunar elevation to any image layer. This server also implements tiled WMS protocol and super-overlay KML for high-performance client application programs.
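
    At the protocol level, a client request to a WMS server of this kind reduces to a parameterized HTTP GET. The host and layer name below are placeholders, but the parameters are the standard WMS 1.1.1 ones:

    ```python
    import requests

    # Standard WMS GetMap parameters; host and layer are placeholders.
    params = {
        "SERVICE": "WMS", "REQUEST": "GetMap", "VERSION": "1.1.1",
        "LAYERS": "lunar_clementine_mosaic",   # invented layer name
        "SRS": "EPSG:4326",                    # a Moon-specific CRS could be used instead
        "BBOX": "-180,-90,180,90",
        "WIDTH": "1024", "HEIGHT": "512",
        "FORMAT": "image/jpeg",
    }
    resp = requests.get("https://onmoon.example.gov/wms", params=params, timeout=60)
    resp.raise_for_status()

    with open("moon.jpg", "wb") as f:
        f.write(resp.content)
    ```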

  13. Preparedness and response to bioterrorism.

    PubMed

    Spencer, R C; Lightfoot, N F

    2001-08-01

    As we enter the 21st century the threats of biological warfare and bioterrorism (so called asymmetric threats) appear to be more real than ever before. Historical evidence suggests that biological weapons have been used, with varying degrees of success, for many centuries. Despite the international agreements to ban such weapons, namely the 1925 Geneva Protocol and the 1975 Biological and Toxin Weapons Convention, there is no effective international mechanism for challenging either the development of biological weapons or their use. Advances in technology and the rise of fundamentalist terror groups combine to present a significant threat to western democracies. A timely and definitive response to this threat will require co-operation between governments on a scale never seen before. There is a need for proper planning, good communication between various health, home office, defence and intelligence agencies and sufficient financial support for a realistic state of preparedness. The Department of Health has produced guidelines for responding to real or suspected incidents and the Public Health Laboratory Service (PHLS) has produced detailed protocols to inform the actions required by microbiologists and consultants in communicable disease control. These protocols will be published on the Department of Health and PHLS web sites. Copyright 2001 The British Infection Society.

  14. HotJava: Sun's Animated Interactive World Wide Web Browser for the Internet.

    ERIC Educational Resources Information Center

    Machovec, George S., Ed.

    1995-01-01

    Examines HotJava and Java, World Wide Web technology for use on the Internet. HotJava, an interactive, animated Web browser based on the object-oriented Java programming language, is different from HTML-based browsers such as Netscape. Its client/server design does not understand Internet protocols but can dynamically find what it needs to know…

  15. DelPhiPKa web server: predicting pKa of proteins, RNAs and DNAs.

    PubMed

    Wang, Lin; Zhang, Min; Alexov, Emil

    2016-02-15

    A new pKa prediction web server is released, which implements the DelPhi Gaussian dielectric function to calculate electrostatic potentials generated by the charges of biomolecules. Topology parameters are extended to include atomic information for the nucleotides of RNA and DNA, which extends the capability of pKa calculations beyond proteins. The web server allows the end-user to protonate the biomolecule at a particular pH based on calculated pKa values and provides the downloadable file in PQR format. Several tests are performed to benchmark the accuracy and speed of the protocol. The web server follows a client-server architecture built on PHP and HTML and utilizes the DelPhiPKa program. The computation is performed on the Palmetto supercomputer cluster and results/download links are returned to the end-user via the HTTP protocol. The web server takes advantage of the MPI parallel implementation in DelPhiPKa and can run a single job on up to 24 CPUs. The DelPhiPKa web server is available at http://compbio.clemson.edu/pka_webserver. © The Author 2015. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.

  16. Semantic Document Model to Enhance Data and Knowledge Interoperability

    NASA Astrophysics Data System (ADS)

    Nešić, Saša

    To enable document data and knowledge to be efficiently shared and reused across application, enterprise, and community boundaries, desktop documents should be completely open and queryable resources, whose data and knowledge are represented in a form understandable to both humans and machines. At the same time, these are the requirements that desktop documents need to satisfy in order to contribute to the visions of the Semantic Web. With the aim of achieving this goal, we have developed the Semantic Document Model (SDM), which turns desktop documents into Semantic Documents as uniquely identified and semantically annotated composite resources, that can be instantiated into human-readable (HR) and machine-processable (MP) forms. In this paper, we present the SDM along with an RDF and ontology-based solution for the MP document instance. Moreover, on top of the proposed model, we have built the Semantic Document Management System (SDMS), which provides a set of services that exploit the model. As an application example that takes advantage of SDMS services, we have extended MS Office with a set of tools that enables users to transform MS Office documents (e.g., MS Word and MS PowerPoint) into Semantic Documents, and to search local and distant semantic document repositories for document content units (CUs) over Semantic Web protocols.

  17. A Knowledge Portal and Collaboration Environment for the Earth Sciences

    NASA Astrophysics Data System (ADS)

    D'Agnese, F. A.

    2008-12-01

    Earth Knowledge is developing a web-based 'Knowledge Portal and Collaboration Environment' that will serve as the information-technology-based foundation of a modular Internet-based Earth-Systems Monitoring, Analysis, and Management Tool. This 'Knowledge Portal' is essentially a 'mash-up' of web-based and client-based tools and services that support on-line collaboration, community discussion, and broad public dissemination of earth and environmental science information in a wide-area distributed network. In contrast to specialized knowledge-management or geographic-information systems developed for long-term and incremental scientific analysis, this system will exploit familiar software tools using industry standard protocols, formats, and APIs to discover, process, fuse, and visualize existing environmental datasets using Google Earth and Google Maps. An early form of these tools and services is being used by Earth Knowledge to facilitate the investigations and conversations of scientists, resource managers, and citizen-stakeholders addressing water resource sustainability issues in the Great Basin region of the desert southwestern United States. These ongoing projects will serve as use cases for the further development of this information-technology infrastructure. This 'Knowledge Portal' will accelerate the deployment of Earth-system data and information into an operational knowledge management system that may be used by decision-makers concerned with stewardship of water resources in the American Desert Southwest.

  18. Semantic Web Services Challenge, Results from the First Year. Series: Semantic Web And Beyond, Volume 8.

    NASA Astrophysics Data System (ADS)

    Petrie, C.; Margaria, T.; Lausen, H.; Zaremba, M.

    Explores trade-offs among existing approaches. Reveals strengths and weaknesses of proposed approaches, as well as which aspects of the problem are not yet covered. Introduces software engineering approach to evaluating semantic web services. Service-Oriented Computing is one of the most promising software engineering trends because of the potential to reduce the programming effort for future distributed industrial systems. However, only a small part of this potential rests on the standardization of tools offered by the web services stack. The larger part of this potential rests upon the development of sufficient semantics to automate service orchestration. Currently there are many different approaches to semantic web service descriptions and many frameworks built around them. A common understanding, evaluation scheme, and test bed to compare and classify these frameworks in terms of their capabilities and shortcomings, is necessary to make progress in developing the full potential of Service-Oriented Computing. The Semantic Web Services Challenge is an open source initiative that provides a public evaluation and certification of multiple frameworks on common industrially-relevant problem sets. This edited volume reports on the first results in developing common understanding of the various technologies intended to facilitate the automation of mediation, choreography and discovery for Web Services using semantic annotations. Semantic Web Services Challenge: Results from the First Year is designed for a professional audience composed of practitioners and researchers in industry. Professionals can use this book to evaluate SWS technology for their potential practical use. The book is also suitable for advanced-level students in computer science.

  19. Web-services-based spatial decision support system to facilitate nuclear waste siting

    NASA Astrophysics Data System (ADS)

    Huang, L. Xinglai; Sheng, Grant

    2006-10-01

    The availability of spatial web services enables data sharing among managers, decision and policy makers and other stakeholders in much simpler ways than before and subsequently has created completely new opportunities in the process of spatial decision making. Though generally designed for a certain problem domain, web-services-based spatial decision support systems (WSDSS) can provide a flexible problem-solving environment to explore the decision problem, understand and refine problem definition, and generate and evaluate multiple alternatives for decision. This paper presents a new framework for the development of a web-services-based spatial decision support system. The WSDSS is comprised of distributed web services that either have their own functions or provide different geospatial data and may reside in different computers and locations. WSDSS includes six key components, namely: database management system, catalog, analysis functions and models, GIS viewers and editors, report generators, and graphical user interfaces. In this study, the architecture of a web-services-based spatial decision support system to facilitate nuclear waste siting is described as an example. The theoretical, conceptual and methodological challenges and issues associated with developing web services-based spatial decision support system are described.

  20. Availability of the OGC geoprocessing standard: March 2011 reality check

    NASA Astrophysics Data System (ADS)

    Lopez-Pellicer, Francisco J.; Rentería-Agualimpia, Walter; Béjar, Rubén; Muro-Medrano, Pedro R.; Zarazaga-Soria, F. Javier

    2012-10-01

    This paper presents an investigation about the servers available in March 2011 conforming to the Web Processing Service interface specification published by the geospatial standards organization Open Geospatial Consortium (OGC) in 2007. This interface specification gives support to standard Web-based geoprocessing. The data used in this research were collected using a focused crawler configured for finding OGC Web services. The research goals are (i) to provide a reality check of the availability of Web Processing Service servers, (ii) to provide quantitative data about the use of different features defined in the standard that are relevant for a scalable Geoprocessing Web (e.g. long-running processes, Web-accessible data outputs), and (iii) to test if the advances in the use of search engines and focused crawlers for finding Web services can be applied for finding geoscience processing systems. Research results show the feasibility of the discovery approach and provide data about the implementation of the Web Processing Service specification. These results also show extensive use of features related to scalability, except for those related to technical and semantic interoperability.

  1. Maintenance and Exchange of Learning Objects in a Web Services Based e-Learning System

    ERIC Educational Resources Information Center

    Vossen, Gottfried; Westerkamp, Peter

    2004-01-01

    "Web services" enable partners to exploit applications via the Internet. Individual services can be composed to build new and more complex ones with additional and more comprehensive functionality. In this paper, we apply the Web service paradigm to electronic learning, and show how to exchange and maintain learning objects is a…

  2. Developer Network

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    2012-08-21

    NREL's Developer Network, developer.nrel.gov, provides data that users can feed into their own analyses and mobile and web applications. Developers can retrieve the data through a Web services API (application programming interface). The Developer Network handles the overhead of serving web services, such as key management, authentication, analytics, reporting, documentation standards, and throttling, in a common architecture, while allowing web services and APIs to be maintained and managed independently.
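
    A typical call into such an API gateway passes the issued key as a request parameter. The endpoint below is one public NREL example, though its exact parameters may have changed and should be checked against developer.nrel.gov:

    ```python
    import requests

    # Example call to an NREL Developer Network API; DEMO_KEY is the public trial key.
    resp = requests.get(
        "https://developer.nrel.gov/api/alt-fuel-stations/v1.json",
        params={"api_key": "DEMO_KEY", "fuel_type": "ELEC", "state": "CO", "limit": 5},
        timeout=30,
    )
    resp.raise_for_status()

    for station in resp.json().get("fuel_stations", []):
        print(station["station_name"])
    ```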

  3. API REST Web service and backend system Of Lecturer’s Assessment Information System on Politeknik Negeri Bali

    NASA Astrophysics Data System (ADS)

    Manuaba, I. B. P.; Rudiastini, E.

    2018-01-01

    Assessment of lecturers is a tool used to measure lecturer performance. Lecturer performance can be measured from three aspects: teaching activities, research, and community service. Measuring lecturer performance across such broad aspects requires a dedicated framework, so that the system can be developed in a sustainable manner. The aim of this research is to create an API web service data layer, so that the lecturer assessment system can be developed in various frameworks. The service was implemented in the PHP programming language, with JSON as the output data format. The conclusion of this research is that an API web service application can support development on several platforms, such as web and mobile applications.
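
    Although the system described is written in PHP, the shape of such a JSON-returning assessment endpoint can be sketched in a few lines; Flask is used here purely for illustration, and the route and field names are invented:

    ```python
    from flask import Flask, jsonify

    app = Flask(__name__)

    # Invented sample data standing in for the assessment database.
    SCORES = {1: {"teaching": 85, "research": 78, "community_service": 90}}

    @app.route("/api/lecturers/<int:lecturer_id>/assessment")
    def assessment(lecturer_id):
        # Return the three assessment aspects as JSON, so web and mobile
        # front ends built on any framework can consume the same API.
        scores = SCORES.get(lecturer_id)
        if scores is None:
            return jsonify({"error": "lecturer not found"}), 404
        return jsonify({"lecturer_id": lecturer_id, "scores": scores})

    if __name__ == "__main__":
        app.run()
    ```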

  4. Attachment Style and Internet Addiction: An Online Survey

    PubMed Central

    Schott, Markus; Decker, Oliver; Sindelar, Brigitte

    2017-01-01

    Background One of the clinically relevant problems of Internet use is the phenomenon of Internet addiction. Considering the fact that there is ample evidence for the relationship between attachment style and substance abuse, it stands to reason that attachment theory can also make an important contribution to the understanding of the pathogenesis of Internet addiction. Objective The aim of this study was to examine people’s tendency toward pathological Internet usage in relation to their attachment style. Methods An online survey was conducted. Sociodemographic data, attachment style (Bielefeld questionnaire partnership expectations), symptoms of Internet addiction (scale for online addiction for adults), used Web-based services, and online relationship motives (Cyber Relationship Motive Scale, CRMS-D) were assessed. In order to confirm the findings, a study using the Rorschach test was also conducted. Results In total, 245 subjects were recruited. Participants with insecure attachment style showed a higher tendency to pathological Internet usage compared with securely attached participants. An ambivalent attachment style was particularly associated with pathological Internet usage. Escapist and social-compensatory motives played an important role for insecurely attached subjects. However, there were no significant effects with respect to Web-based services and apps used. Results of the analysis of the Rorschach protocol with 16 subjects corroborated these results. Users with pathological Internet use frequently showed signs of infantile relationship structures in the context of social groups. This refers to the results of the Web-based survey, in which interpersonal relationships were the result of an insecure attachment style. Conclusions Pathological Internet use was a function of insecure attachment and limited interpersonal relationships. PMID:28526662

  5. Development of a Real-Time Repeated-Measures Assessment Protocol to Capture Change over the Course of a Drinking Episode

    PubMed Central

    Luczak, Susan E.; Rosen, I. Gary; Wall, Tamara L.

    2015-01-01

    Aims: We report on the development of a real-time assessment protocol that allows researchers to assess change in BrAC, alcohol responses, behaviors, and contexts over the course of a drinking event. Method: We designed a web application that uses timed text messages (adjusted based on consumption pattern) containing links to our website to obtain real-time participant reports; camera and location features were also incorporated into the protocol. We used a transdermal alcohol sensor device along with software we designed to convert transdermal data into estimated BrAC. Thirty-two college students completed a laboratory session followed by a 2-week field trial. Results: Results for the web application indicated we were able to create an effective tool for obtaining repeated measures real-time drinking data. Participants were willing to monitor their drinking behavior with the web application, and this did not appear to strongly affect drinking behavior during, or 6 weeks following, the field trial. Results for the transdermal device highlighted the willingness of participants to wear the device despite some discomfort, but technical difficulties resulted in limited valid data. Conclusion: The development of this protocol makes it possible to capture detailed assessment of change over the course of naturalistic drinking episodes. PMID:25568142

  6. Utilization of services in a randomized trial testing phone- and web-based interventions for smoking cessation.

    PubMed

    Zbikowski, Susan M; Jack, Lisa M; McClure, Jennifer B; Deprey, Mona; Javitz, Harold S; McAfee, Timothy A; Catz, Sheryl L; Richards, Julie; Bush, Terry; Swan, Gary E

    2011-05-01

    Phone counseling has become standard for behavioral smoking cessation treatment. Newer options include Web and integrated phone-Web treatment. No prior research, to our knowledge, has systematically compared the effectiveness of these three treatment modalities in a randomized trial. Understanding how utilization varies by mode, the impact of utilization on outcomes, and predictors of utilization across each mode could lead to improved treatments. One thousand two hundred and two participants were randomized to phone, Web, or combined phone-Web cessation treatment. Services varied by modality and were tracked using automated systems. All participants received 12 weeks of varenicline, printed guides, an orientation call, and access to a phone supportline. Self-report data were collected at baseline and 6-month follow-up. Overall, participants utilized phone services more often than the Web-based services. Among treatment groups with Web access, a significant proportion logged in only once (37% phone-Web, 41% Web), and those in the phone-Web group logged in less often than those in the Web group (mean = 2.4 vs. 3.7, p = .0001). Use of the phone also was correlated with increased use of the Web. In multivariate analyses, greater use of the phone- or Web-based services was associated with higher cessation rates. Finally, older age and the belief that certain treatments could improve success were consistent predictors of greater utilization across groups. Other predictors varied by treatment group. Opportunities for enhancing treatment utilization exist, particularly for Web-based programs. Increasing utilization more broadly could result in better overall treatment effectiveness for all intervention modalities.

  7. Implementation of Sensor Twitter Feed Web Service Server and Client

    DTIC Science & Technology

    2016-12-01

    ARL-TN-0807 ● DEC 2016 ● US Army Research Laboratory. Implementation of Sensor Twitter Feed Web Service Server and Client, by Bhagyashree V Kulkarni (University of Maryland) and Michael H Lee (Computational...

  8. IVOA Credential Delegation Protocol Version 1.0

    NASA Astrophysics Data System (ADS)

    Plante, Raymond; Graham, Matthew; Rixon, Guy; Taffoni, Giuliano; Plante, Raymond; Graham, Matthew

    2010-02-01

    The credential delegation protocol allows a client program to delegate a user's credentials to a service such that that service may make requests of other services in the name of that user. The protocol defines a REST service that works alongside other IVO services to enable such a delegation in a secure manner. In addition to defining the specifics of the service protocol, this document describes how a delegation service is registered in an IVOA registry along with the services it supports. The specification also explains how one can determine from a service registration that it requires the use of a supporting delegation service.
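
    Purely to illustrate the delegation pattern the abstract describes (a client stores a short-lived proxy credential with a delegation service so that service A can act on the user's behalf toward service B), here is a hypothetical REST interaction; none of the paths, parameters, or payloads below are taken from the IVOA specification:

        # Hypothetical delegation flow; endpoints and payloads are illustrative
        # inventions, not the resource names defined by the IVOA standard.
        import urllib.parse
        import urllib.request

        DELEGATION = "https://vo.example.org/delegation"   # assumed service

        # 1. Ask the delegation service to store a proxy credential and return
        #    the URI at which cooperating services can retrieve it.
        req = urllib.request.Request(f"{DELEGATION}/credentials",
                                     data=b"", method="POST")
        with urllib.request.urlopen(req) as resp:
            proxy_uri = resp.headers["Location"]

        # 2. Invoke service A, pointing it at the proxy credential so that it
        #    can call service B in the user's name.
        query = urllib.parse.urlencode({"credential": proxy_uri})
        with urllib.request.urlopen(
                f"https://vo.example.org/serviceA/run?{query}") as resp:
            print(resp.status)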

  9. AWSCS-A System to Evaluate Different Approaches for the Automatic Composition and Execution of Web Services Flows

    PubMed Central

    Tardiole Kuehne, Bruno; Estrella, Julio Cezar; Nunes, Luiz Henrique; Martins de Oliveira, Edvard; Hideo Nakamura, Luis; Gomes Ferreira, Carlos Henrique; Carlucci Santana, Regina Helena; Reiff-Marganiec, Stephan; Santana, Marcos José

    2015-01-01

    This paper proposes a system named AWSCS (Automatic Web Service Composition System) to evaluate different approaches for the automatic composition of Web services, based on QoS parameters measured at execution time. AWSCS implements different approaches for automatic composition of Web services and also executes the resulting flows. To demonstrate the results of this paper, a scenario was developed in which empirical flows were built to show the operation of AWSCS, since algorithms for automatic composition are not readily available for testing. The results allow us to study the behaviour of running composite Web services when flows with the same functionality but different problem-solving strategies are compared. Furthermore, we observed that the load applied to the running system, as well as the type of load submitted, is an important factor in determining which approach to Web service composition achieves the best performance in production. PMID:26068216

  10. Virtualization of open-source secure web services to support data exchange in a pediatric critical care research network.

    PubMed

    Frey, Lewis J; Sward, Katherine A; Newth, Christopher J L; Khemani, Robinder G; Cryer, Martin E; Thelen, Julie L; Enriquez, Rene; Shaoyu, Su; Pollack, Murray M; Harrison, Rick E; Meert, Kathleen L; Berg, Robert A; Wessel, David L; Shanley, Thomas P; Dalton, Heidi; Carcillo, Joseph; Jenkins, Tammara L; Dean, J Michael

    2015-11-01

    To examine the feasibility of deploying a virtual web service for sharing data within a research network, and to evaluate the impact on data consistency and quality. Virtual machines (VMs) encapsulated an open-source, semantically and syntactically interoperable secure web service infrastructure along with a shadow database. The VMs were deployed to 8 Collaborative Pediatric Critical Care Research Network Clinical Centers. Virtual web services could be deployed in hours. The interoperability of the web services reduced format misalignment from 56% to 1%: 99% of the data transferred consistently using the data dictionary, and 1% needed human curation. Use of virtualized open-source secure web service technology could enable direct electronic abstraction of data from hospital databases for research purposes. © The Author 2015. Published by Oxford University Press on behalf of the American Medical Informatics Association. All rights reserved. For Permissions, please email: journals.permissions@oup.com.

  11. AWSCS-A System to Evaluate Different Approaches for the Automatic Composition and Execution of Web Services Flows.

    PubMed

    Tardiole Kuehne, Bruno; Estrella, Julio Cezar; Nunes, Luiz Henrique; Martins de Oliveira, Edvard; Hideo Nakamura, Luis; Gomes Ferreira, Carlos Henrique; Carlucci Santana, Regina Helena; Reiff-Marganiec, Stephan; Santana, Marcos José

    2015-01-01

    This paper proposes a system named AWSCS (Automatic Web Service Composition System) to evaluate different approaches for the automatic composition of Web services, based on QoS parameters measured at execution time. AWSCS implements different approaches for automatic composition of Web services and also executes the resulting flows. To demonstrate the results of this paper, a scenario was developed in which empirical flows were built to show the operation of AWSCS, since algorithms for automatic composition are not readily available for testing. The results allow us to study the behaviour of running composite Web services when flows with the same functionality but different problem-solving strategies are compared. Furthermore, we observed that the load applied to the running system, as well as the type of load submitted, is an important factor in determining which approach to Web service composition achieves the best performance in production.

  12. Web services as applications' integration tool: QikProp case study.

    PubMed

    Laoui, Abdel; Polyakov, Valery R

    2011-07-15

    Web services are a new technology that makes it possible to integrate applications running on different platforms, using primarily XML to enable communication among different computers over the Internet. A large number of applications were designed as stand-alone systems before the concept of Web services was introduced, and it is a challenge to integrate them into larger computational networks. A generally applicable method of wrapping stand-alone applications into Web services was developed and is described. To test the technology, it was applied to QikProp for DOS (Windows). Although the performance of the application did not change when it was delivered as a Web service, this form of deployment offered several advantages, such as simplified and centralized maintenance, a smaller number of licenses, and practically no training for the end user. Because the described approach allows almost any legacy application to be wrapped as a Web service, this form of delivery may be recommended as a global alternative to traditional deployment solutions. Copyright © 2011 Wiley Periodicals, Inc.
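
    The wrapping pattern described (exposing a legacy command-line program through a web endpoint) can be sketched as follows; this is not the authors' implementation, and the program name and route are placeholders:

        # Sketch: wrapping a legacy command-line tool as a web service.
        # "legacy_tool" and the POST route are placeholders, not the authors' code.
        import subprocess
        from http.server import BaseHTTPRequestHandler, HTTPServer

        class WrapperHandler(BaseHTTPRequestHandler):
            def do_POST(self):
                length = int(self.headers.get("Content-Length", 0))
                payload = self.rfile.read(length)          # input file sent by the client
                with open("input.dat", "wb") as f:
                    f.write(payload)
                # Run the stand-alone program exactly as it would run on a desktop.
                result = subprocess.run(["legacy_tool", "input.dat"],
                                        capture_output=True, timeout=300)
                self.send_response(200)
                self.send_header("Content-Type", "text/plain")
                self.end_headers()
                self.wfile.write(result.stdout)            # return its output verbatim

        if __name__ == "__main__":
            HTTPServer(("", 8080), WrapperHandler).serve_forever()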

  13. Measuring and Evaluating TCP Splitting for Cloud Services

    NASA Astrophysics Data System (ADS)

    Pathak, Abhinav; Wang, Y. Angela; Huang, Cheng; Greenberg, Albert; Hu, Y. Charlie; Kern, Randy; Li, Jin; Ross, Keith W.

    In this paper, we examine the benefits of split-TCP proxies, deployed in an operational world-wide network, for accelerating cloud services. We consider a fraction of a network consisting of a large number of satellite datacenters, which host split-TCP proxies, and a smaller number of mega datacenters, which ultimately perform computation or provide storage. Using web search as an exemplary case study, our detailed measurements reveal that a vanilla TCP splitting solution deployed at the satellite DCs reduces the 95th percentile of latency by as much as 43% when compared to serving queries directly from the mega DCs. Through careful dissection of the measurement results, we characterize how individual components, including proxy stacks, network protocols, packet losses and network load, can impact the latency. Finally, we shed light on further optimizations that can fully realize the potential of the TCP splitting solution.
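
    The core mechanism of a split-TCP proxy is to terminate the client's TCP connection at a nearby satellite datacenter and relay the bytes over a second, long-haul connection to the mega datacenter. A toy relay illustrating the splitting itself (real deployments also keep warm upstream connections, which this sketch omits):

        # Toy split-TCP relay: terminates client TCP locally, forwards upstream.
        # The upstream host is a placeholder for a mega datacenter front end.
        import socket
        import threading

        UPSTREAM = ("mega-dc.example.org", 80)   # hypothetical mega datacenter

        def pipe(src, dst):
            # Copy bytes one way; close the peer when the source half closes.
            while True:
                data = src.recv(4096)
                if not data:
                    break
                dst.sendall(data)
            dst.close()

        def handle(client):
            upstream = socket.create_connection(UPSTREAM)
            threading.Thread(target=pipe, args=(client, upstream), daemon=True).start()
            pipe(upstream, client)   # relay the response back to the client

        listener = socket.socket()
        listener.bind(("", 8080))
        listener.listen()
        while True:
            conn, _ = listener.accept()
            threading.Thread(target=handle, args=(conn,), daemon=True).start()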

  14. Using USNO's API to Obtain Data

    NASA Astrophysics Data System (ADS)

    Lesniak, Michael V.; Pozniak, Daniel; Punnoose, Tarun

    2015-01-01

    The U.S. Naval Observatory (USNO) is in the process of modernizing its publicly available web services into APIs (Application Programming Interfaces). Services configured as APIs offer greater flexibility to the user and allow greater usage. Depending on the particular service, users who implement our APIs will receive either a PNG (Portable Network Graphics) image or data in JSON (JavaScript Object Notation) format. This raw data can then be embedded in third-party web sites or in apps. Part of the USNO's mission is to provide astronomical and timing data to government agencies and the general public. To this end, the USNO provides accurate computations of astronomical phenomena such as dates of lunar phases, rise and set times of the Moon and Sun, and lunar and solar eclipse times. Users who navigate to our web site and select one of our 18 services are prompted to complete a web form, specifying parameters such as date, time, location, and object. Many of our services work for years between 1700 and 2100, meaning that past, present, and future events can be computed. Upon form submission, our web server processes the request, computes the data, and outputs it to the user. Over recent years, the use of the web by the general public has vastly changed. In response to this, the USNO is modernizing its web-based data services. This includes making our computed data easier to embed within third-party web sites as well as more easily queried from apps running on tablets and smart phones. To facilitate this, the USNO has begun converting its services into APIs. In addition to the existing web forms for the various services, users are able to make direct URL requests that return either an image or numerical data. To date, four of our web services have been configured to run with APIs. Two are image-producing services: "Apparent Disk of a Solar System Object" and "Day and Night Across the Earth." Two API data services are "Complete Sun and Moon Data for One Day" and "Dates of Primary Phases of the Moon." Instructions for how to use our API services as well as examples of their use can be found on one of our explanatory web pages and will be discussed here.
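
    A data API of this kind is queried with a plain URL request; the endpoint path and parameter names below are assumptions for illustration, not taken from USNO documentation:

        # Sketch of a direct URL request to a JSON data API.
        # The endpoint path and parameters are illustrative assumptions.
        import json
        import urllib.parse
        import urllib.request

        base = "https://api.example.org/rstt/oneday"   # hypothetical endpoint
        params = urllib.parse.urlencode({
            "date": "1/1/2015",
            "coords": "38.92N,77.07W",    # observer location
        })
        with urllib.request.urlopen(f"{base}?{params}") as resp:
            data = json.load(resp)        # e.g. sun/moon rise, set, and phase data
        print(json.dumps(data, indent=2))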

  15. WebGIS based on semantic grid model and web services

    NASA Astrophysics Data System (ADS)

    Zhang, WangFei; Yue, CaiRong; Gao, JianGuo

    2009-10-01

    As the meeting point of network technology and GIS technology, WebGIS has developed rapidly in recent years. Constrained by the Web and by the characteristics of GIS, traditional WebGIS has some prominent problems: it cannot achieve interoperability of heterogeneous spatial databases, and it cannot provide cross-platform data access. With the appearance of Web Services and Grid technology, the field of WebGIS changed greatly. Web Services provide an interface that gives sites the ability to share data and intercommunicate. The goal of Grid technology is to turn the Internet into one large supercomputer that efficiently enables the overall sharing of computing resources, storage resources, data resources, information resources, knowledge resources, and expert resources. But for WebGIS, implementing only the physical connection of data and information is far from enough. Because of different understandings of the world, different professional regulations, different policies, and different habits, experts in different fields reach different conclusions when they observe the same geographic phenomenon, and semantic heterogeneity arises; the same concept can differ greatly between fields. If we use WebGIS without considering semantic heterogeneity, we will answer users' questions wrongly, or not at all. To solve this problem, this paper puts forward and tests an effective method of combining the semantic grid and Web Services technology to develop WebGIS. We studied how to construct ontologies and how to combine Grid technology with Web Services and, with a detailed analysis of the computing characteristics and application model of distributed data, we designed an ontology-driven WebGIS query system based on Grid technology and Web Services.

  16. Uptake of a web-based oncology protocol system: how do cancer clinicians use eviQ cancer treatments online?

    PubMed Central

    2013-01-01

    Background The use of computerized systems to support evidence-based practice is commonplace in contemporary medicine. Despite the prolific use of electronic support systems, there has been relatively little research on the uptake of web-based systems in the oncology setting. Our objective was to examine the uptake of a web-based oncology protocol system (http://www.eviq.org.au) by Australian cancer clinicians. Methods We used web logfiles and Google Analytics to examine the characteristics of eviQ registrants from October 2009 to December 2011 and patterns of use by cancer clinicians during a typical month. Results As of December 2011, there were 16,037 registrants, 85% of whom were Australian health care professionals. During a typical month, 87% of webhits occurred in standard clinical hours (08:00 to 18:00 weekdays). Raw webhits were proportional to the size of clinician groups: nurses (47% of Australian registrants), followed by doctors (20%), and pharmacists (14%). However, pharmacists had up to three times the webhit rate of other clinical groups. Clinicians spent five times longer viewing chemotherapy protocol pages than other content, and the protocols viewed reflect the most common cancers: lung, breast, and colorectal. Conclusions Our results demonstrate that eviQ is used by a range of health professionals involved in cancer treatment at the point of care. Continued monitoring of electronic decision support systems is vital to understanding how they are used in clinical practice and their impact on processes of care and patient outcomes. PMID:23497080

  17. Failure Analysis for Composition of Web Services Represented as Labeled Transition Systems

    NASA Astrophysics Data System (ADS)

    Nadkarni, Dinanath; Basu, Samik; Honavar, Vasant; Lutz, Robyn

    The Web service composition problem involves the creation of a choreographer that provides the interaction between a set of component services to realize a goal service. Several methods have been proposed and developed to address this problem. In this paper, we consider those scenarios where the composition process may fail due to incomplete specification of goal service requirements or due to the fact that the user is unaware of the functionality provided by the existing component services. In such cases, it is desirable to have a composition algorithm that can provide feedback to the user regarding the cause of failure in the composition process. Such feedback will help guide the user to re-formulate the goal service and iterate the composition process. We propose a failure analysis technique for composition algorithms that views Web service behavior as multiple sequences of input/output events. Our technique identifies the possible cause of composition failure and suggests possible recovery options to the user. We discuss our technique using a simple e-Library Web service in the context of the MoSCoE Web service composition framework.
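
    As a much-simplified illustration of the idea (not the MoSCoE algorithm itself), each service can be modeled as a transition relation over input/output events, and the first goal event that no component can serve is reported as the failure cause, together with a recovery hint:

        # Simplified failure analysis over services modeled as I/O event mappings.
        # An illustration of the idea only; not the MoSCoE composition algorithm.

        # Each component service: the output event it produces for a given input.
        components = {
            "catalog": {"?search": "!book_list"},
            "lending": {"?borrow": "!due_date"},
        }

        # Goal service: the input/output sequence the choreographer must realize.
        goal = [("?search", "!book_list"), ("?borrow", "!receipt")]

        def analyze(goal, components):
            for step, (inp, out) in enumerate(goal):
                providers = [name for name, trans in components.items()
                             if trans.get(inp) == out]
                if not providers:
                    # Failure cause: no component maps this input to this output.
                    near = [name for name, trans in components.items() if inp in trans]
                    hint = (f"'{near[0]}' accepts {inp} but produces "
                            f"{components[near[0]][inp]}") if near else "no candidates"
                    return f"composition fails at step {step} ({inp}/{out}): {hint}"
            return "composition succeeds"

        print(analyze(goal, components))   # pinpoints the ?borrow/!receipt mismatch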

  18. A resource-oriented architecture for a Geospatial Web

    NASA Astrophysics Data System (ADS)

    Mazzetti, Paolo; Nativi, Stefano

    2010-05-01

    In this presentation we discuss some architectural issues in the design of a Geospatial Web, that is, an information system for sharing geospatial resources according to the Web paradigm. The success of the Web in building a multi-purpose information space has raised questions about the possibility of adopting the same approach for systems dedicated to the sharing of more specific resources, such as geospatial information: information characterized by a spatial/temporal reference. To this aim, an investigation of the nature of the Web and of the validity of its paradigm for geospatial resources is required. The Web was born in the early 1990s to provide "a shared information space through which people and machines could communicate" [Berners-Lee 1996]. It was originally built around a small set of specifications (e.g. URI, HTTP, HTML, etc.); however, in the last two decades several other technologies and specifications have been introduced to extend its capabilities. Most of them (e.g. the SOAP family) actually aimed to transform the Web into a generic distributed computing infrastructure. While these efforts were definitely successful in enabling service-oriented approaches for machine-to-machine interactions supporting complex business processes (e.g. for e-Government and e-Business applications), they do not fit the original concept of the Web. In 2000, R. T. Fielding, one of the designers of the original Web specifications, proposed a new architectural style for distributed systems, called REST (Representational State Transfer), aiming to capture the fundamental characteristics of the Web as it was originally conceived [Fielding 2000]. In this view, the nature of the Web lies not so much in the technologies as in the way they are used. Keeping the Web architecture conformant to the REST style would then assure the scalability, extensibility, and low entry barrier of the original Web. By contrast, systems using the same Web technologies and specifications according to a different architectural style, despite their usefulness, should not be considered part of the Web. If the REST style captures the significant characteristics of the Web, then, in order to build a Geospatial Web, its architecture must satisfy all the REST constraints. One of them is of particular importance: the adoption of a uniform interface. It prescribes that all geospatial resources be accessed through the same interface; moreover, according to the REST style, this interface must satisfy four further constraints: a) identification of resources; b) manipulation of resources through representations; c) self-descriptive messages; and d) hypermedia as the engine of application state. In the Web, the uniform interface provides basic operations that are meaningful for generic resources. They typically implement the CRUD pattern (Create-Retrieve-Update-Delete), which has proved flexible and powerful in several general-purpose contexts (e.g. filesystem management, SQL for database management systems, etc.). Restricting the scope to a subset of resources, it would be possible to identify other generic actions that are meaningful for all of them. For geospatial resources, for example, subsetting, resampling, interpolation, and coordinate reference system transformations are candidates for a uniform interface.
However, an investigation is needed to clarify the semantics of those actions for different resources and, consequently, whether they can really ascend to the role of generic interface operations. Concerning point a) (identification of resources), it is required that every resource addressable in the Geospatial Web has its own identifier (e.g. a URI). This makes it possible to cite and re-use resources simply by providing the URI. OPeNDAP and KVP encodings of OGC data access service specifications might provide a basis for it. Concerning point b) (manipulation of resources through representations), the Geospatial Web poses several issues. While the Web mainly handles semi-structured information, in the Geospatial Web the information is typically structured according to several possible data models (e.g. point series, gridded coverages, trajectories, etc.) and encodings. A possibility would be to simplify the interchange formats, choosing to support a subset of data models and formats. This is actually what the Web designers did in choosing to define a common format for hypermedia (HTML), although the underlying protocol remains generic. Concerning point c) (self-descriptive messages), the exchanged messages should describe themselves and their content. This would not be a major issue, considering the effort put into geospatial metadata models and specifications in recent years. Point d) (hypermedia as the engine of application state) is where the Geospatial Web would differ most from existing geospatial information sharing systems. Existing systems typically adopt a service-oriented architecture, where applications are built as a single service or as a workflow of services. In the Geospatial Web, by contrast, applications should be built by following the paths between interconnected resources. The links between resources should be made explicit as hyperlinks. The adoption of Semantic Web solutions would make it possible to define not only the existence of a link between two resources but also the nature of that link. The implementation of a Geospatial Web would produce an information system with the same characteristics as the Web, sharing its strengths and weaknesses. The main advantages would be the following:
• The user would interact with the Geospatial Web according to the well-known Web navigation paradigm. This would lower the barrier to geospatial applications for non-specialists (consider the success of Google Maps and other Web mapping applications).
• Successful Web and Web 2.0 applications (search engines, feeds, social networks) could be integrated or replicated in the Geospatial Web.
The main drawbacks would be the following:
• The uniform interface simplifies the overall system architecture (e.g. no service registry or service descriptors are required) but moves the complexity to the data representation. Moreover, since the interface must stay generic, it remains very simple, and complex interactions would therefore require several transfers.
• In the geospatial domain, among the most valuable resources are processes (e.g. environmental models). How they can be modeled as resources accessed through the common interface is an open issue.
Taking into account advantages and drawbacks, it seems that a Geospatial Web would be useful, but its use would be limited to specific use cases, not covering all possible applications.
The Geospatial Web architecture could be partly based on existing specifications, while other aspects need investigation. References: [Berners-Lee 1996] T. Berners-Lee, "WWW: Past, present, and future," IEEE Computer, 29(10), Oct. 1996, pp. 69-77. [Fielding 2000] R. T. Fielding, "Architectural Styles and the Design of Network-based Software Architectures," PhD dissertation, Dept. of Information and Computer Science, University of California, Irvine, 2000.
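
To make the uniform-interface idea discussed above concrete: a resource-oriented Geospatial Web would expose each dataset as an identified resource whose representations are obtained through the same generic operations (subsetting, resampling, CRS transformation). A hypothetical URL scheme, with the host, path, and parameter names invented for illustration:

        # Hypothetical resource-oriented access to a geospatial coverage.
        # Host, path, and parameter names are invented; only the style is the point.
        import urllib.parse

        resource = "https://geoweb.example.org/coverages/sst-2010"  # resource URI

        # Generic operations (subsetting, resampling, CRS transformation)
        # expressed as representation parameters on a plain GET of the resource.
        params = urllib.parse.urlencode({
            "bbox": "-10,35,5,45",        # spatial subset (lon/lat)
            "time": "2010-05-01",         # temporal subset
            "resample": "0.5deg",         # resampling step
            "crs": "EPSG:4326",           # target coordinate reference system
        })
        print(f"{resource}?{params}")     # GET this URL; hyperlinks in the returned
                                          # representation would drive further steps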

  19. Development of spatial density maps based on geoprocessing web services: application to tuberculosis incidence in Barcelona, Spain.

    PubMed

    Dominkovics, Pau; Granell, Carlos; Pérez-Navarro, Antoni; Casals, Martí; Orcau, Angels; Caylà, Joan A

    2011-11-29

    Health professionals and authorities strive to cope with heterogeneous data, services, and statistical models to support decision making on public health. Sophisticated analysis and distributed processing capabilities over geocoded epidemiological data are seen as driving factors to speed up control and decision making in these health risk situations. In this context, recent Web technologies and standards-based web services deployed on geospatial information infrastructures have rapidly become an efficient way to access, share, process, and visualize geocoded health-related information. The data used in this study are based on tuberculosis (TB) cases registered in Barcelona city during 2009. Residential addresses are geocoded and loaded into a spatial database that acts as the backend database. The web-based application architecture and geoprocessing web services are designed according to Representational State Transfer (REST) principles. These web processing services produce spatial density maps against the backend database. The results focus on the use of the proposed web-based application for the analysis of TB cases in Barcelona. The application produces spatial density maps to ease the monitoring and decision-making process for health professionals. We also discuss how spatial density maps may be useful for health practitioners in such contexts. In this paper, we developed a web-based client application and a set of geoprocessing web services to support specific health-spatial requirements. Spatial density maps of TB incidence were generated to help health professionals in analysis and decision-making tasks. The combined use of geographic information tools, map viewers, and geoprocessing services leads to interesting possibilities for handling health data in a spatial manner. In particular, the use of spatial density maps has been effective in identifying the most affected areas and their spatial impact. This study is an attempt to demonstrate how web processing services, together with web-based mapping capabilities, suit the needs of health practitioners in epidemiological analysis scenarios.
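
    The kind of spatial density surface such geoprocessing services return can be illustrated with a simple Gaussian kernel density estimate over geocoded case locations; this generic sketch (with made-up coordinates) is not the authors' service code:

        # Generic kernel density sketch over geocoded case locations (illustrative).
        import numpy as np

        cases = np.array([[2.15, 41.39], [2.17, 41.40], [2.16, 41.38]])  # lon, lat
        bandwidth = 0.01               # degrees; tuning is application-specific

        # Evaluate a Gaussian kernel density on a small grid covering the city.
        lon = np.linspace(2.10, 2.25, 50)
        lat = np.linspace(41.35, 41.45, 50)
        grid_lon, grid_lat = np.meshgrid(lon, lat)
        density = np.zeros_like(grid_lon)
        for x, y in cases:
            d2 = (grid_lon - x) ** 2 + (grid_lat - y) ** 2
            density += np.exp(-d2 / (2 * bandwidth ** 2))

        print(density.max())           # peak indicates the most affected area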

  20. Development of spatial density maps based on geoprocessing web services: application to tuberculosis incidence in Barcelona, Spain

    PubMed Central

    2011-01-01

    Background Health professionals and authorities strive to cope with heterogeneous data, services, and statistical models to support decision making on public health. Sophisticated analysis and distributed processing capabilities over geocoded epidemiological data are seen as driving factors to speed up control and decision making in these health risk situations. In this context, recent Web technologies and standards-based web services deployed on geospatial information infrastructures have rapidly become an efficient way to access, share, process, and visualize geocoded health-related information. Methods The data used in this study are based on tuberculosis (TB) cases registered in Barcelona city during 2009. Residential addresses are geocoded and loaded into a spatial database that acts as the backend database. The web-based application architecture and geoprocessing web services are designed according to Representational State Transfer (REST) principles. These web processing services produce spatial density maps against the backend database. Results The results focus on the use of the proposed web-based application for the analysis of TB cases in Barcelona. The application produces spatial density maps to ease the monitoring and decision-making process for health professionals. We also discuss how spatial density maps may be useful for health practitioners in such contexts. Conclusions In this paper, we developed a web-based client application and a set of geoprocessing web services to support specific health-spatial requirements. Spatial density maps of TB incidence were generated to help health professionals in analysis and decision-making tasks. The combined use of geographic information tools, map viewers, and geoprocessing services leads to interesting possibilities for handling health data in a spatial manner. In particular, the use of spatial density maps has been effective in identifying the most affected areas and their spatial impact. This study is an attempt to demonstrate how web processing services, together with web-based mapping capabilities, suit the needs of health practitioners in epidemiological analysis scenarios. PMID:22126392

  1. A FPGA embedded web server for remote monitoring and control of smart sensors networks.

    PubMed

    Magdaleno, Eduardo; Rodríguez, Manuel; Pérez, Fernando; Hernández, David; García, Enrique

    2013-12-27

    This article describes the implementation of a web server using an embedded Altera NIOS II IP core, a general-purpose, configurable RISC processor embedded in a Cyclone FPGA. The processor runs the μCLinux operating system to support a Boa web server serving dynamic pages via the Common Gateway Interface (CGI). The FPGA is configured to act as the master node of a network, and to control and monitor a network of smart sensors or instruments. To make the system fully functional, the FPGA also includes an implementation of the time-triggered protocol (TTP/A). The implemented master node thus has two interfaces: the web server, which acts as the Internet interface, and a second interface that controls the sensor network. The TTP/A protocol is widely used to connect smart sensors, actuators, and microsystems in embedded real-time systems in different application domains (e.g., industrial, automotive, domotics), although it can easily be replaced by any other protocol because of the inherent characteristics of FPGA-based technology.

  2. A FPGA Embedded Web Server for Remote Monitoring and Control of Smart Sensors Networks

    PubMed Central

    Magdaleno, Eduardo; Rodríguez, Manuel; Pérez, Fernando; Hernández, David; García, Enrique

    2014-01-01

    This article describes the implementation of a web server using an embedded Altera NIOS II IP core, a general-purpose, configurable RISC processor embedded in a Cyclone FPGA. The processor runs the μCLinux operating system to support a Boa web server serving dynamic pages via the Common Gateway Interface (CGI). The FPGA is configured to act as the master node of a network, and to control and monitor a network of smart sensors or instruments. To make the system fully functional, the FPGA also includes an implementation of the time-triggered protocol (TTP/A). The implemented master node thus has two interfaces: the web server, which acts as the Internet interface, and a second interface that controls the sensor network. The TTP/A protocol is widely used to connect smart sensors, actuators, and microsystems in embedded real-time systems in different application domains (e.g., industrial, automotive, domotics), although it can easily be replaced by any other protocol because of the inherent characteristics of FPGA-based technology. PMID:24379047
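
    The dynamic pages served by Boa here are classic CGI: the server executes a program and the program's standard output becomes the HTTP response. A minimal CGI script of the kind such a server could invoke, sketched in Python with a placeholder in place of the real TTP/A sensor query:

        #!/usr/bin/env python3
        # Minimal CGI script: stdout becomes the HTTP response.
        # read_sensor() is a placeholder for the TTP/A network query.
        import os

        def read_sensor(node_id):
            return 23.5   # placeholder value; a real script would query the network

        node = os.environ.get("QUERY_STRING", "node=1").split("=")[-1]
        print("Content-Type: text/html")
        print()                            # blank line ends the CGI headers
        print(f"<html><body>Sensor {node}: {read_sensor(node)} °C</body></html>")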

  3. Technical Services and the World Wide Web.

    ERIC Educational Resources Information Center

    Scheschy, Virginia M.

    The World Wide Web and browsers such as Netscape and Mosaic have simplified access to electronic resources. Today, technical services librarians can share in the wealth of information available on the Web. One of the premier Web sites for acquisitions librarians is AcqWeb, a cousin of the AcqNet listserv. In addition to interesting news items,…

  4. Genevar: a database and Java application for the analysis and visualization of SNP-gene associations in eQTL studies

    PubMed Central

    Yang, Tsun-Po; Beazley, Claude; Montgomery, Stephen B.; Dimas, Antigone S.; Gutierrez-Arcelus, Maria; Stranger, Barbara E.; Deloukas, Panos; Dermitzakis, Emmanouil T.

    2010-01-01

    Summary: Genevar (GENe Expression VARiation) is a database and Java tool designed to integrate multiple datasets, and provides analysis and visualization of associations between sequence variation and gene expression. Genevar allows researchers to investigate expression quantitative trait loci (eQTL) associations within a gene locus of interest in real time. The database and application can be installed on a standard computer in database mode and, in addition, on a server to share discoveries among affiliations or the broader community over the Internet via web services protocols. Availability: http://www.sanger.ac.uk/resources/software/genevar Contact: emmanouil.dermitzakis@unige.ch PMID:20702402

  5. The Internet for Librarians

    NASA Astrophysics Data System (ADS)

    Grothkopf, U.

    Librarianship is currently undergoing major changes. New information sources, accessible via the "network of networks", the Internet, offer opportunities which were previously unknown, but which require continuous ongoing learning. The Internet seems to be organized badly or not at all. The poor appearance might lead to an underestimation of its value. In the following, an introduction to the main functions will be given in order to facilitate understanding and use of the Internet. E-Mail, FTP (File Transfer Protocol) and Telnet will be covered, as well as Mailing lists, Newsgroups and the tools Archie, Gopher, Veronica, WAIS (Wide Area Information Server) and the World Wide Web (WWW). Examples will be given to show possible applications for library services.

  6. Secure, Autonomous, Intelligent Controller for Integrating Distributed Sensor Webs

    NASA Technical Reports Server (NTRS)

    Ivancic, William D.

    2007-01-01

    This paper describes the infrastructure and protocols necessary to enable near-real-time commanding, access to space-based assets, and secure interoperation between sensor webs owned and controlled by various entities. Select terrestrial and aeronautics-based sensor webs will be used to demonstrate time-critical interoperability between integrated, intelligent sensor webs, both terrestrial and between terrestrial and space-based assets. For this work, a Secure, Autonomous, Intelligent Controller and knowledge generation unit is implemented using Virtual Mission Operation Center technology.

  7. Engineering Analysis Using a Web-based Protocol

    NASA Technical Reports Server (NTRS)

    Schoeffler, James D.; Claus, Russell W.

    2002-01-01

    This paper reviews the development of a web-based framework for engineering analysis. A one-dimensional, high-speed analysis code called LAPIN was used in this study, but the approach can be generalized to any engineering analysis tool. The web-based framework enables users to store, retrieve, and execute an engineering analysis from a standard web-browser. We review the encapsulation of the engineering data into the eXtensible Markup Language (XML) and various design considerations in the storage and retrieval of application data.
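
    Encapsulating engineering data in XML in the way the paper describes might look like the following sketch; the element names are invented, since the paper does not publish its schema:

        # Sketch of engineering data encapsulated in XML (element names invented).
        import xml.etree.ElementTree as ET

        case = ET.Element("AnalysisCase", name="inlet-test-01")
        inputs = ET.SubElement(case, "Inputs")
        ET.SubElement(inputs, "Parameter", name="mach", value="2.5")
        ET.SubElement(inputs, "Parameter", name="altitude_m", value="18000")

        xml_text = ET.tostring(case, encoding="unicode")
        print(xml_text)                   # store/retrieve this via the web framework

        # Round-trip: the browser-facing framework can parse it back the same way.
        restored = ET.fromstring(xml_text)
        print(restored.find("Inputs/Parameter[@name='mach']").get("value"))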

  8. Dumbing Down the Net

    NASA Astrophysics Data System (ADS)

    Jamison, Mark A.; Hauge, Janice A.

    It is commonplace for sellers of goods and services to enhance the value of their products by paying extra for premium delivery service. For example, package delivery services such as Federal Express and the US Postal Service offer shippers a variety of delivery speeds and insurance programs. Web content providers such as Yahoo! and MSN Live Earth can purchase web-enhancing services from companies such as Akamai to speed the delivery of their web content to customers.

  9. Determinants of Corporate Web Services Adoption: A Survey of Companies in Korea

    ERIC Educational Resources Information Center

    Kim, Daekil

    2010-01-01

    Despite the growing interest and attention from Information Technology researchers and practitioners, empirical research on factors that influence an organization's likelihood of adoption of Web Services has been limited. This study identified the factors influencing Web Services adoption from the perspective of 151 South Korean firms. The…

  10. Web 2.0 Strategy in Libraries and Information Services

    ERIC Educational Resources Information Center

    Byrne, Alex

    2008-01-01

    Web 2.0 challenges libraries to change from their predominantly centralised service models with integrated library management systems at the hub. Implementation of Web 2.0 technologies and the accompanying attitudinal shifts will demand reconceptualisation of the nature of library and information service around a dynamic, ever changing, networked,…

  11. WIWS: a protein structure bioinformatics Web service collection.

    PubMed

    Hekkelman, M L; Te Beek, T A H; Pettifer, S R; Thorne, D; Attwood, T K; Vriend, G

    2010-07-01

    The WHAT IF molecular-modelling and drug design program is widely distributed in the world of protein structure bioinformatics. Although originally designed as an interactive application, its highly modular design and inbuilt control language have recently enabled its deployment as a collection of programmatically accessible web services. We report here a collection of WHAT IF-based protein structure bioinformatics web services: these relate to structure quality, the use of symmetry in crystal structures, structure correction and optimization, adding hydrogens and optimizing hydrogen bonds, and a series of geometric calculations. The freely accessible web services are based on the industry-standard WS-I profile and the EMBRACE technical guidelines, and are available via both REST and SOAP paradigms. The web services run on a dedicated computational cluster; their function and availability are monitored daily.
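
    On the REST side, programmatic access of the kind described usually amounts to a single HTTP call per operation; the host and path below are placeholders, not the actual WIWS endpoints:

        # Illustrative REST call to a structure-bioinformatics service.
        # The host and path are placeholders, not the actual WIWS endpoints.
        import urllib.request

        url = "https://swift.example.org/rest/quality/1crn"  # hypothetical
        with urllib.request.urlopen(url) as resp:
            print(resp.read().decode())    # e.g. a per-residue quality report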

  12. Rationale and development of an on-line quality assurance programme for colposcopy in a population-based cervical screening setting in Italy.

    PubMed

    Bucchi, Lauro; Cristiani, Paolo; Costa, Silvano; Schincaglia, Patrizia; Garutti, Paola; Sassoli de Bianchi, Priscilla; Naldoni, Carlo; Olea, Oswaldo; Sideri, Mario

    2013-06-28

    Colposcopy, the key step in the management of women with abnormal Pap smear results, is a visual technique prone to observer variation, which implies the need for prolonged apprenticeship, continuous training, and quality assurance (QA) measures. Colposcopy QA programmes vary in level of responsibility of organizing subjects, geographic coverage, scope, model, and type of actions. The programmes addressing the clinical standards of colposcopy (quality of examination and appropriateness of clinical decisions) are more limited in space and less sustainable over time than those focused on the provision of the service (resources, accessibility, etc.). This article reports on the protocol of a QA programme targeting the clinical quality of colposcopy in a population-based cervical screening service in an administrative region of northern Italy. After a situation analysis of local colposcopy audit practices and previous QA initiatives, a permanent web-based QA programme was developed. The design places more emphasis on providing education and feedback to participants than on testing them. The technical core is a log-in web application accessible on the website of the regional Administration. The primary objectives are to provide (1) a practical opportunity for retraining of screening colposcopists, and (2) a platform for them to interact with colposcopists from other settings and regions through exchange and discussion of digital colposcopic images. The retraining function is based on repeated QA sessions in which the registered colposcopists log in, classify a posted set of colpophotographs, and receive online a set of personal feedback data. Each session ends with a plenary seminar featuring the presentation of overall results and an interactive review of the test set of colpophotographs. This is meant to be a forum for an open exchange of views that may lead to more knowledge and more diagnostic homogeneity. The protocol includes the criteria for selection of colpophotographs and the rationale for colposcopic gold standards. This programme is an ongoing initiative open to further developments, in particular in the area of basic training. It uses the infrastructure of the internet to give a novel solution to technical problems affecting colposcopy QA in population-based screening services.

  13. BioPortal: enhanced functionality via new Web services from the National Center for Biomedical Ontology to access and use ontologies in software applications.

    PubMed

    Whetzel, Patricia L; Noy, Natalya F; Shah, Nigam H; Alexander, Paul R; Nyulas, Csongor; Tudorache, Tania; Musen, Mark A

    2011-07-01

    The National Center for Biomedical Ontology (NCBO) is one of the National Centers for Biomedical Computing funded under the NIH Roadmap Initiative. Contributing to the national computing infrastructure, NCBO has developed BioPortal, a web portal that provides access to a library of biomedical ontologies and terminologies (http://bioportal.bioontology.org) via the NCBO Web services. BioPortal enables community participation in the evaluation and evolution of ontology content by providing features to add mappings between terms, to add comments linked to specific ontology terms and to provide ontology reviews. The NCBO Web services (http://www.bioontology.org/wiki/index.php/NCBO_REST_services) enable this functionality and provide a uniform mechanism to access ontologies from a variety of knowledge representation formats, such as Web Ontology Language (OWL) and Open Biological and Biomedical Ontologies (OBO) format. The Web services provide multi-layered access to the ontology content, from getting all terms in an ontology to retrieving metadata about a term. Users can easily incorporate the NCBO Web services into software applications to generate semantically aware applications and to facilitate structured data collection.
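
    A sketch of how term-level REST services like these are typically consumed; the endpoint shape and response fields below are assumptions for illustration, not the documented NCBO interface:

        # Sketch of consuming an ontology term service over REST (endpoint assumed).
        import json
        import urllib.parse
        import urllib.request

        BASE = "https://bioportal.example.org/search"      # hypothetical endpoint
        query = urllib.parse.urlencode({"q": "melanoma"})  # term to look up
        with urllib.request.urlopen(f"{BASE}?{query}") as resp:
            hits = json.load(resp)
        for hit in hits.get("collection", []):             # assumed response shape
            print(hit.get("prefLabel"), hit.get("@id"))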

  14. Evaluating a NoSQL Alternative for Chilean Virtual Observatory Services

    NASA Astrophysics Data System (ADS)

    Antognini, J.; Araya, M.; Solar, M.; Valenzuela, C.; Lira, F.

    2015-09-01

    Currently, the standards and protocols for data access in the Virtual Observatory architecture (DAL) are generally implemented with relational databases based on SQL. In particular, the Astronomical Data Query Language (ADQL), the language used by the IVOA to represent queries to VO services, was created to satisfy the different data access protocols, such as Simple Cone Search. ADQL is based on SQL92 and has extra functionality implemented using PgSphere. An emerging alternative to SQL is the family of so-called NoSQL databases, which can be classified into several categories, such as Column, Document, Key-Value, Graph, and Object, each recommended for different scenarios. Among their notable characteristics are schema-free design, easy replication support, simple APIs, and suitability for Big Data. The Chilean Virtual Observatory (ChiVO) is developing a functional prototype based on the IVOA architecture, with the following relevant factors: performance, scalability, flexibility, complexity, and functionality. Currently, it is very difficult to compare these factors due to a lack of alternatives. The objective of this paper is to compare NoSQL alternatives with SQL through the implementation of a REST Web API that satisfies ChiVO's needs: a SESAME-style name resolver for the data from ALMA. We therefore propose a test scenario by configuring a NoSQL database with data from different sources and evaluating the feasibility of creating a Simple Cone Search service and its performance. This comparison will help pave the way for the application of Big Data databases in the Virtual Observatory.
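
    The Simple Cone Search protocol mentioned here is itself a parameterized HTTP GET (right ascension, declination, and search radius in degrees), independent of whatever database sits behind it, which is what makes a NoSQL backend a drop-in possibility. A client-side sketch, with the service URL as a placeholder:

        # Simple Cone Search is a parameterized GET: RA, DEC, SR (degrees).
        # The service URL is a placeholder; the parameter names follow the protocol.
        import urllib.parse
        import urllib.request

        service = "https://vo.example.org/scs"    # hypothetical cone search service
        params = urllib.parse.urlencode({"RA": 83.82, "DEC": -5.39, "SR": 0.5})
        with urllib.request.urlopen(f"{service}?{params}") as resp:
            votable = resp.read()                 # response is a VOTable XML document
        print(len(votable), "bytes of VOTable")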

  15. 78 FR 14701 - Misuse of Internet Protocol (IP) Captioned Telephone Service; Telecommunications Relay Services...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-03-07

    ...] Misuse of Internet Protocol (IP) Captioned Telephone Service; Telecommunications Relay Services and..., the information collection associated with the Commission's Misuse of Internet Protocol (IP) Captioned... Registration and Documentation of Disability for Eligibility to Use IP Captioned Telephone Service, CG Docket...

  16. Utilization of Services in a Randomized Trial Testing Phone- and Web-Based Interventions for Smoking Cessation

    PubMed Central

    Jack, Lisa M.; McClure, Jennifer B.; Deprey, Mona; Javitz, Harold S.; McAfee, Timothy A.; Catz, Sheryl L.; Richards, Julie; Bush, Terry; Swan, Gary E.

    2011-01-01

    Introduction: Phone counseling has become standard for behavioral smoking cessation treatment. Newer options include Web and integrated phone–Web treatment. No prior research, to our knowledge, has systematically compared the effectiveness of these three treatment modalities in a randomized trial. Understanding how utilization varies by mode, the impact of utilization on outcomes, and predictors of utilization across each mode could lead to improved treatments. Methods: One thousand two hundred and two participants were randomized to phone, Web, or combined phone–Web cessation treatment. Services varied by modality and were tracked using automated systems. All participants received 12 weeks of varenicline, printed guides, an orientation call, and access to a phone supportline. Self-report data were collected at baseline and 6-month follow-up. Results: Overall, participants utilized phone services more often than the Web-based services. Among treatment groups with Web access, a significant proportion logged in only once (37% phone–Web, 41% Web), and those in the phone–Web group logged in less often than those in the Web group (mean = 2.4 vs. 3.7, p = .0001). Use of the phone also was correlated with increased use of the Web. In multivariate analyses, greater use of the phone- or Web-based services was associated with higher cessation rates. Finally, older age and the belief that certain treatments could improve success were consistent predictors of greater utilization across groups. Other predictors varied by treatment group. Conclusions: Opportunities for enhancing treatment utilization exist, particularly for Web-based programs. Increasing utilization more broadly could result in better overall treatment effectiveness for all intervention modalities. PMID:21330267

  17. Method of Performance-Aware Security of Unicast Communication in Hybrid Satellite Networks

    NASA Technical Reports Server (NTRS)

    Baras, John S. (Inventor); Roy-Chowdhury, Ayan (Inventor)

    2014-01-01

    A method and apparatus utilizes Layered IPSEC (LES) protocol as an alternative to IPSEC for network-layer security including a modification to the Internet Key Exchange protocol. For application-level security of web browsing with acceptable end-to-end delay, the Dual-mode SSL protocol (DSSL) is used instead of SSL. The LES and DSSL protocols achieve desired end-to-end communication security while allowing the TCP and HTTP proxy servers to function correctly.

  18. Integrating hydrologic modeling web services with online data sharing to prepare, store, and execute models in hydrology

    NASA Astrophysics Data System (ADS)

    Gan, T.; Tarboton, D. G.; Dash, P. K.; Gichamo, T.; Horsburgh, J. S.

    2017-12-01

    Web based apps, web services and online data and model sharing technology are becoming increasingly available to support research. This promises benefits in terms of collaboration, platform independence, transparency and reproducibility of modeling workflows and results. However, challenges still exist in real application of these capabilities and the programming skills researchers need to use them. In this research we combined hydrologic modeling web services with an online data and model sharing system to develop functionality to support reproducible hydrologic modeling work. We used HydroDS, a system that provides web services for input data preparation and execution of a snowmelt model, and HydroShare, a hydrologic information system that supports the sharing of hydrologic data, model and analysis tools. To make the web services easy to use, we developed a HydroShare app (based on the Tethys platform) to serve as a browser based user interface for HydroDS. In this integration, HydroDS receives web requests from the HydroShare app to process the data and execute the model. HydroShare supports storage and sharing of the results generated by HydroDS web services. The snowmelt modeling example served as a use case to test and evaluate this approach. We show that, after the integration, users can prepare model inputs or execute the model through the web user interface of the HydroShare app without writing program code. The model input/output files and metadata describing the model instance are stored and shared in HydroShare. These files include a Python script that is automatically generated by the HydroShare app to document and reproduce the model input preparation workflow. Once stored in HydroShare, inputs and results can be shared with other users, or published so that other users can directly discover, repeat or modify the modeling work. This approach provides a collaborative environment that integrates hydrologic web services with a data and model sharing system to enable model development and execution. The entire system comprised of the HydroShare app, HydroShare and HydroDS web services is open source and contributes to capability for web based modeling research.
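
    The chained pattern described (call a processing web service, then store the result in the sharing system so the workflow is reproducible) can be sketched as follows; both endpoints and all parameter names are hypothetical, not the actual HydroDS or HydroShare APIs:

        # Sketch of the chained pattern: call a processing web service, then
        # store the result in a sharing system. Both endpoints are hypothetical.
        import urllib.parse
        import urllib.request

        # 1. Ask the processing service (HydroDS-like) to prepare a model input.
        job = urllib.parse.urlencode({"watershed": "logan", "product": "dem"})
        with urllib.request.urlopen(f"https://hydrods.example.org/subsetdem?{job}") as r:
            dem = r.read()                         # returned model input file

        # 2. Upload the result to the sharing system (HydroShare-like) so the
        #    modeling workflow is stored, citable, and reproducible by others.
        req = urllib.request.Request("https://hydroshare.example.org/resources",
                                     data=dem, method="POST")
        with urllib.request.urlopen(req) as r:
            print(r.headers.get("Location"))       # URI of the new shared resource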

  19. Persistent identifiers for web service requests relying on a provenance ontology design pattern

    NASA Astrophysics Data System (ADS)

    Car, Nicholas; Wang, Jingbo; Wyborn, Lesley; Si, Wei

    2016-04-01

    Delivering provenance information for datasets produced from static inputs is relatively straightforward: we represent the processing actions and data flow using provenance ontologies and link to copies of the inputs stored in repositories. If appropriate detail is given, the provenance information can then describe what actions have occurred (transparency) and enable reproducibility. When web service-generated data is used by a process to create a dataset instead of static inputs, we need more sophisticated provenance representations of the web service request, as we can no longer just link to data stored in a repository. A graph-based provenance representation, such as the W3C's PROV standard, can be used to model the web service request both as a single conceptual dataset and as a small workflow with a number of components within the same provenance report. This dual representation does more than just allow simplified or detailed views of a dataset's production to be used where appropriate. It also allows persistent identifiers to be assigned to instances of web service requests, thus enabling one form of dynamic data citation, and allows those identifiers to resolve to whatever level of detail implementers think appropriate for that web service request to be reproduced. In this presentation we detail our reasoning in representing web service requests as small workflows. In outline, this stems from the idea that web service requests are perdurant things, and in order to most easily persist knowledge of them for provenance we should represent them as a nexus of relationships between endurant things, such as datasets and knowledge of particular system types, since these endurant things are far easier to persist. We also describe the ontology design pattern that we use to represent workflows in general and how we apply it to different types of web service requests. We give examples of specific web service request instances that were made by systems at Australia's National Computational Infrastructure and show how one can 'click' through provenance interfaces to see the dual representations of the requests using provenance management tooling we have built.
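
    A sketch of the dual representation using the W3C PROV data model, written here with the Python prov package; all identifiers are invented, and the modeling granularity is much coarser than the ontology design pattern the authors describe:

        # Dual PROV representation of a web service request (identifiers invented).
        # Requires the "prov" package (pip install prov).
        from prov.model import ProvDocument

        doc = ProvDocument()
        doc.add_namespace("ex", "http://example.org/")

        # View 1: the request's result as a single conceptual dataset.
        dataset = doc.entity("ex:dataset-42")

        # View 2: the same request as a small workflow with components.
        request = doc.activity("ex:ws-request-42")       # the perdurant event
        endpoint = doc.entity("ex:subset-service")       # endurant: service description
        query = doc.entity("ex:query-params-42")         # endurant: stored parameters
        doc.used(request, endpoint)
        doc.used(request, query)
        doc.wasGeneratedBy(dataset, request)             # links the two views

        print(doc.get_provn())    # serialize; a PID could resolve to this report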

  20. 78 FR 54201 - Misuse of Internet Protocol (IP) Captioned Telephone Service; Telecommunications Relay Services...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-09-03

    ...] Misuse of Internet Protocol (IP) Captioned Telephone Service; Telecommunications Relay Services and... further possible actions necessary to improve internet protocol captioned telephone relay service (IP CTS... for calculating the compensation rate paid to IP CTS providers. This action is necessary to ensure...

  1. Barriers to reporting child maltreatment: do emergency medical services professionals fully understand their role as mandatory reporters?

    PubMed

    Lynne, Ellen Grace; Gifford, Elizabeth J; Evans, Kelly E; Rosch, Joel B

    2015-01-01

    Child maltreatment is underreported in the United States and in North Carolina. In North Carolina and other states, mandatory reporting laws require various professionals to make reports, thereby helping to reduce underreporting of child maltreatment. This study aims to understand why emergency medical services (EMS) professionals may fail to report suspicions of maltreatment despite mandatory reporting policies. A web-based, anonymous, voluntary survey of EMS professionals in North Carolina was used to assess knowledge of their agency's written protocols and potential reasons for underreporting suspicion of maltreatment (n=444). Results were based on descriptive statistics. Responses of line staff and leadership personnel were compared using chi-square analysis. Thirty-eight percent of respondents were unaware of their agency's written protocols regarding reporting of child maltreatment. Additionally, 25% of EMS professionals who knew of their agency's protocol incorrectly believed that the report should be filed by someone other than the person with firsthand knowledge of the suspected maltreatment. Leadership personnel generally understood reporting requirements better than did line staff. Respondents indicated that peers may fail to report maltreatment for several reasons: they believe another authority would file the report, including the hospital (52.3%) or law enforcement (27.7%); they are uncertain whether they had witnessed abuse (47.7%); and they are uncertain about what should be reported (41.4%). This survey may not generalize to all EMS professionals in North Carolina. Training opportunities for EMS professionals that address proper identification and reporting of child maltreatment, as well as cross-agency information sharing, are warranted.

  2. End-to-end Cyberinfrastructure and Data Services for Earth System Science Education and Research: Unidata's Plans and Directions

    NASA Astrophysics Data System (ADS)

    Ramamurthy, M.

    2005-12-01

    A revolution is underway in the role played by cyberinfrastructure and data services in the conduct of research and education. We live in an era of unprecedented data volumes from diverse sources, multidisciplinary analysis and synthesis, and an emphasis on active, learner-centered education. For example, modern remote-sensing systems like hyperspectral satellite instruments generate terabytes of data each day. Environmental problems such as global change and the water cycle transcend disciplinary as well as geographic boundaries, and their solution requires integrated earth system science approaches. Contemporary education strategies recommend adopting an Earth system science approach for teaching the geosciences, employing new pedagogical techniques such as enquiry-based learning and hands-on activities. Needless to add, today's education and research enterprise depends heavily on robust, flexible and scalable cyberinfrastructure, especially on the ready availability of quality data and appropriate tools to manipulate and integrate those data. Fortuitously, rapid advances in computing and communication technologies have also revolutionized how data, tools and services are being incorporated into the teaching and scientific enterprise. The exponential growth in the use of the Internet in education and research, largely due to the advent of the World Wide Web, is by now well documented. On the other hand, how some of the other technological and community trends have shaped the use of cyberinfrastructure, especially data services, is less well understood. For example, the computing industry is converging on an approach called Web services that enables a standard and yet revolutionary way of building applications and methods to connect and exchange information over the Web. This new approach, based on XML - a widely accepted format for exchanging data and corresponding semantics over the Internet - enables applications, computer systems, and information processes to work together in a fundamentally different way. Likewise, the advent of digital libraries, grid computing platforms, interoperable frameworks, standards and protocols, open-source software, and community atmospheric models have been important drivers in shaping the use of a new generation of end-to-end cyberinfrastructure for solving some of the most challenging scientific and educational problems. In this talk, I will present an overview of the scientific, technological, and educational drivers and discuss recent developments in cyberinfrastructure and Unidata's role and directions in providing robust, end-to-end data services for solving geoscientific problems and advancing student learning.

  3. A flexible geospatial sensor observation service for diverse sensor data based on Web service

    NASA Astrophysics Data System (ADS)

    Chen, Nengcheng; Di, Liping; Yu, Genong; Min, Min

    Achieving a flexible and efficient geospatial Sensor Observation Service (SOS) is difficult, given the diversity of sensor networks, the heterogeneity of sensor data storage, and the differing requirements of users. This paper describes the development of a service-oriented multi-purpose SOS framework. The goal is to create a single method of access to the data by integrating the sensor observation service with other Open Geospatial Consortium (OGC) services — Catalogue Service for the Web (CSW), Transactional Web Feature Service (WFS-T) and Transactional Web Coverage Service (WCS-T). The framework includes an extensible sensor data adapter, an OGC-compliant geospatial SOS, a geospatial catalogue service, a WFS-T, and a WCS-T for the SOS, and a geospatial sensor client. The extensible sensor data adapter finds, stores, and manages sensor data from live sensors, sensor models, and simulation systems. Abstract factory design patterns are used during design and implementation. A sensor observation service compatible with the OGC Sensor Web Enablement (SWE) specifications is designed, following the OGC "core" and "transaction" specifications. It is implemented using Java servlet technology. It can be easily deployed in any Java servlet container and automatically exposed for discovery using Web Service Description Language (WSDL). Interaction sequences between a Sensor Web data consumer and an SOS, between a producer and an SOS, and between an SOS and a CSW are described in detail. The framework has been successfully demonstrated in application scenarios for EO-1 observations, weather observations, and water height gauge observations.
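
    For concreteness, a GetObservation call against such an SOS typically looks like the following key-value-pair request. This is a hedged sketch: the endpoint URL, offering name and observed property are invented placeholders, not part of the described framework.

    ```python
    # Illustrative only: a KVP GetObservation call against a hypothetical SOS
    # endpoint of the kind described above. Offering/property names are made up.
    import requests

    SOS_URL = "http://example.org/sos"  # hypothetical servlet deployment

    params = {
        "service": "SOS",
        "request": "GetObservation",
        "version": "1.0.0",
        "offering": "WEATHER_OBSERVATIONS",        # assumed offering name
        "observedProperty": "urn:ogc:def:property:air_temperature",
        "responseFormat": 'text/xml;subtype="om/1.0.0"',
    }

    response = requests.get(SOS_URL, params=params, timeout=30)
    response.raise_for_status()
    print(response.text[:500])  # start of the O&M XML observation document
    ```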

  4. Accessing the SEED genome databases via Web services API: tools for programmers.

    PubMed

    Disz, Terry; Akhter, Sajia; Cuevas, Daniel; Olson, Robert; Overbeek, Ross; Vonstein, Veronika; Stevens, Rick; Edwards, Robert A

    2010-06-14

    The SEED integrates many publicly available genome sequences into a single resource. The database contains accurate and up-to-date annotations based on the subsystems concept that leverages clustering between genomes and other clues to accurately and efficiently annotate microbial genomes. The backend is used as the foundation for many genome annotation tools, such as the Rapid Annotation using Subsystems Technology (RAST) server for whole genome annotation, the metagenomics RAST server for random community genome annotations, and the annotation clearinghouse for exchanging annotations from different resources. In addition to a web user interface, the SEED also provides a Web services-based API for programmatic access to the data in the SEED, allowing the development of third-party tools and mash-ups. The currently exposed Web services encompass over forty different methods for accessing data related to microbial genome annotations. The Web services provide comprehensive access to the database back end, allowing any programmer access to the most consistent and accurate genome annotations available. The Web services are deployed using a platform-independent service-oriented approach that allows the user to choose the most suitable programming platform for their application. Example code demonstrates that the Web services can be used to access the SEED using common bioinformatics programming languages such as Perl, Python, and Java. We present a novel approach to accessing the SEED database. Using Web services, a robust API for access to genomics data is provided, without requiring large volume downloads all at once. The API ensures timely access to the most current datasets available, including new genomes as soon as they come online.
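
    The abstract mentions example code in Perl, Python, and Java. As a generic illustration of the pattern (not the actual SEED interface), a SOAP-style service of this kind can be called from Python with the zeep library; the WSDL URL and the operation name below are placeholders.

    ```python
    # Generic sketch of programmatic access to a SOAP-style genome annotation
    # service like the one described. The WSDL URL and the operation name
    # "get_genome_annotations" are placeholders, NOT the real SEED interface.
    from zeep import Client

    client = Client("http://example.org/seed/services?wsdl")  # hypothetical WSDL

    # zeep generates Python callables from the operations declared in the WSDL;
    # dump() lists what the service actually exposes.
    client.wsdl.dump()

    # Invoke one (hypothetical) operation generated from the WSDL:
    result = client.service.get_genome_annotations(genome_id="83333.1")
    print(result)
    ```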

  5. The impact of a state-sponsored mass media campaign on use of telephone quitline and web-based cessation services.

    PubMed

    Duke, Jennifer C; Mann, Nathan; Davis, Kevin C; MacMonegle, Anna; Allen, Jane; Porter, Lauren

    2014-12-24

    Most US smokers do not use evidence-based interventions as part of their quit attempts. Quitlines and Web-based treatments may contribute to reductions in population-level tobacco use if successfully promoted. Currently, few states implement sustained media campaigns to promote services and increase adult smoking cessation. This study examines the effects of Florida's tobacco cessation media campaign and a nationally funded media campaign on telephone quitline and Web-based registrations for cessation services from November 2010 through September 2013. We conducted multivariable analyses of weekly media-market-level target rating points (TRPs) and weekly registrations for cessation services through the Florida Quitline (1-877-U-CAN-NOW) or its Web-based cessation service, Web Coach (www.quitnow.net/florida). During 35 months, 141,221 tobacco users registered for cessation services through the Florida Quitline, and 53,513 registered through Web Coach. An increase in 100 weekly TRPs was associated with an increase of 7 weekly Florida Quitline registrants (β = 6.8, P < .001) and 2 Web Coach registrants (β = 1.7, P = .003) in an average media market. An increase in TRPs affected registrants from multiple demographic subgroups similarly. When state and national media campaigns aired simultaneously, approximately one-fifth of Florida's Quitline registrants came from the nationally advertised portal (1-800-QUIT-NOW). Sustained, state-sponsored media can increase the number of registrants to telephone quitlines and Web-based cessation services. Federally funded media campaigns can further increase the reach of state-sponsored cessation services.

  6. The Impact of a State-Sponsored Mass Media Campaign on Use of Telephone Quitline and Web-Based Cessation Services

    PubMed Central

    Mann, Nathan; Davis, Kevin C.; MacMonegle, Anna; Allen, Jane; Porter, Lauren

    2014-01-01

    Introduction Most US smokers do not use evidence-based interventions as part of their quit attempts. Quitlines and Web-based treatments may contribute to reductions in population-level tobacco use if successfully promoted. Currently, few states implement sustained media campaigns to promote services and increase adult smoking cessation. This study examines the effects of Florida’s tobacco cessation media campaign and a nationally funded media campaign on telephone quitline and Web-based registrations for cessation services from November 2010 through September 2013. Methods We conducted multivariable analyses of weekly media-market–level target rating points (TRPs) and weekly registrations for cessation services through the Florida Quitline (1-877-U-CAN-NOW) or its Web-based cessation service, Web Coach (www.quitnow.net/florida). Results During 35 months, 141,221 tobacco users registered for cessation services through the Florida Quitline, and 53,513 registered through Web Coach. An increase in 100 weekly TRPs was associated with an increase of 7 weekly Florida Quitline registrants (β = 6.8, P < .001) and 2 Web Coach registrants (β = 1.7, P = .003) in an average media market. An increase in TRPs affected registrants from multiple demographic subgroups similarly. When state and national media campaigns aired simultaneously, approximately one-fifth of Florida’s Quitline registrants came from the nationally advertised portal (1-800-QUIT-NOW). Conclusion Sustained, state-sponsored media can increase the number of registrants to telephone quitlines and Web-based cessation services. Federally funded media campaigns can further increase the reach of state-sponsored cessation services. PMID:25539129

  7. Development of XML Schema for Broadband Digital Seismograms and Data Center Portal

    NASA Astrophysics Data System (ADS)

    Takeuchi, N.; Tsuboi, S.; Ishihara, Y.; Nagao, H.; Yamagishi, Y.; Watanabe, T.; Yanaka, H.; Yamaji, H.

    2008-12-01

    There are a number of data centers around the globe where digital broadband seismograms are open to researchers. Those centers use their own user interfaces, and there is no standard for accessing and retrieving seismograms from different data centers through a unified interface. One of the emergent technologies for realizing a unified user interface across data centers is the concept of the WebService and the WebService portal. Here we have developed a prototype data center portal for digital broadband seismograms. This WebService portal uses WSDL (Web Services Description Language) to accommodate differences among the data centers. By using WSDL, alteration and addition of data center user interfaces can be easily managed. This portal, called the NINJA Portal, assumes three WebServices: (1) a database query service, (2) a seismic event data request service, and (3) a seismic continuous data request service. The current system supports both the station search of the database query service and the seismic continuous data request service. The data centers supported by the NINJA portal will initially be the OHP data center at ERI and the Pacific21 data center at IFREE/JAMSTEC. We have developed a metadata standard for seismological data based on QuakeML for parametric data, which has been developed by ETH Zurich, and XML-SEED for waveform data, which was developed by IFREE/JAMSTEC. The prototype of the NINJA portal is now released through the IFREE web page (http://www.jamstec.go.jp/pacific21/).

  8. 75 FR 75170 - APHIS User Fee Web Site

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-12-02

    ...] APHIS User Fee Web Site AGENCY: Animal and Plant Health Inspection Service, USDA. ACTION: Notice... recover the costs of providing certain services. This notice announces the availability of a Web site that contains information about the Agency's user fees. ADDRESSES: The Agency's user fee Web site is located at...

  9. 47 CFR 9.3 - Definitions.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... FEDERAL COMMUNICATIONS COMMISSION GENERAL INTERCONNECTED VOICE OVER INTERNET PROTOCOL SERVICES § 9.3... VoIP service. An interconnected Voice over Internet protocol (VoIP) service is a service that: (1... location; (3) Requires Internet protocol-compatible customer premises equipment (CPE); and (4) Permits...

  10. 47 CFR 9.3 - Definitions.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... FEDERAL COMMUNICATIONS COMMISSION GENERAL INTERCONNECTED VOICE OVER INTERNET PROTOCOL SERVICES § 9.3... VoIP service. An interconnected Voice over Internet protocol (VoIP) service is a service that: (1... location; (3) Requires Internet protocol-compatible customer premises equipment (CPE); and (4) Permits...

  11. 47 CFR 9.3 - Definitions.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... FEDERAL COMMUNICATIONS COMMISSION GENERAL INTERCONNECTED VOICE OVER INTERNET PROTOCOL SERVICES § 9.3... VoIP service. An interconnected Voice over Internet protocol (VoIP) service is a service that: (1... location; (3) Requires Internet protocol-compatible customer premises equipment (CPE); and (4) Permits...

  12. Measurement issues related to data collection on the World Wide Web.

    PubMed

    Strickland, Ora L; Moloney, Margaret F; Dietrich, Alexa S; Myerburg, Stuart; Cotsonis, George A; Johnson, Robert V

    2003-01-01

    As the World Wide Web has become more prominent as a mode of communication, it has opened up new possibilities for research data collection. This article identifies measurement issues that occur with Internet data collection that are relevant to qualitative and quantitative research approaches as they occurred in a triangulated Internet study of perimenopausal women with migraine headaches. Issues associated with quantitative data collection over the Internet include (a) selecting and designing Internet data collection protocols that adequately address study aims while also taking advantage of the Internet, (b) ensuring the reliability and validity of Internet data collected, (c) adapting quantitative paper-and-pencil data collection protocols for the Internet, (d) making Internet data collection practical for respondents and researchers, and (e) ensuring the quality of quantitative data collected. Qualitative data collection over the Internet needs to remain true to the philosophical stance of the qualitative approach selected. Researcher expertise in qualitative data collection must be combined with expertise in computer technology and information services if data are to be of ultimate quality. The advantages and limitations of collecting qualitative data in real time or at a later time are explored, as well as approaches to enhance qualitative data collection over the Internet. It was concluded that like any research approach or method, Internet data collection requires considerable creativity, expertise, and planning to take advantage of the technology for the collection of reliable and valid research data.

  13. MedlinePlus Connect: Technical Information

    MedlinePlus

    MedlinePlus Connect offers two implementation options, a Web Application and a Web Service, each of which responds to requests; the technical information page includes examples of MedlinePlus Connect Web Application response pages.

  14. Developing Web Services for Technology Education. The Graphic Communication Electronic Publishing Project.

    ERIC Educational Resources Information Center

    Sanders, Mark

    1999-01-01

    Graphic Communication Electronic Publishing Project supports a Web site (http://TechEd.vt.edu/gcc/) for graphic communication teachers and students, providing links to Web materials, conversion of print materials to electronic formats, and electronic products and services including job listings, resume posting service, and a listserv. (SK)

  15. Publication, discovery and interoperability of Clinical Decision Support Systems: A Linked Data approach.

    PubMed

    Marco-Ruiz, Luis; Pedrinaci, Carlos; Maldonado, J A; Panziera, Luca; Chen, Rong; Bellika, J Gustav

    2016-08-01

    The high costs involved in the development of Clinical Decision Support Systems (CDSS) make it necessary to share their functionality across different systems and organizations. Service Oriented Architectures (SOA) have been proposed to allow reusing CDSS by encapsulating them in a Web service. However, strong barriers to sharing CDS functionality are still present as a consequence of the lack of expressiveness of services' interfaces. Linked Services are the evolution of the Semantic Web Services paradigm to process Linked Data. They aim to provide semantic descriptions over SOA implementations to overcome the limitations derived from the syntactic nature of Web services technologies. Our objective was to facilitate the publication, discovery and interoperability of CDS services by evolving them into Linked Services that expose their interfaces as Linked Data. We developed methods and models to enhance CDS SOA as Linked Services that define a rich semantic layer based on machine-interpretable ontologies that powers their interoperability and reuse. These ontologies provided unambiguous descriptions of CDS services' properties to expose them to the Web of Data. We developed models compliant with Linked Data principles to create a semantic representation of the components that compose CDS services. To evaluate our approach, we implemented a set of CDS Linked Services using a Web service definition ontology. The definitions of Web services were linked to the models developed in order to attach unambiguous semantics to the service components. All models were bound to SNOMED-CT and public ontologies (e.g. Dublin Core) in order to provide a lingua franca for exploring them. Discovery and analysis of CDS services based on machine-interpretable models was performed by reasoning over the ontologies built. Linked Services can be used effectively to expose CDS services to the Web of Data by building on current CDS standards. This allows building shared Linked Knowledge Bases that provide machine-interpretable semantics for CDS service descriptions, alleviating the challenges of interoperability and reuse. Linked Services allow for building 'digital libraries' of distributed CDS services that can be hosted and maintained in different organizations. Copyright © 2016 Elsevier Inc. All rights reserved.
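
    As a rough illustration of what exposing a CDS service description as Linked Data can look like, the following rdflib sketch attaches Dublin Core properties and a SNOMED CT concept to a service resource. All URIs and the concept code are assumed examples, not the authors' models.

    ```python
    # Illustrative sketch (not the authors' models): describing a CDS service
    # as Linked Data with rdflib. All URIs below are hypothetical examples.
    from rdflib import Graph, Literal, Namespace

    DCT = Namespace("http://purl.org/dc/terms/")
    SCT = Namespace("http://snomed.info/id/")       # SNOMED CT concept IRIs
    EX = Namespace("http://example.org/cds/")

    g = Graph()
    g.bind("dct", DCT)
    g.bind("sct", SCT)

    service = EX["warfarin-dosing-service"]
    g.add((service, DCT.title, Literal("Warfarin dosing advice")))
    g.add((service, DCT.subject, SCT["372756006"]))  # assumed concept code
    g.add((service, DCT.creator, Literal("Example CDS provider")))

    print(g.serialize(format="turtle"))
    ```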

  16. An Investigation into the Use of 3G Mobile Communications to Provide Telehealth Services in Rural KwaZulu-Natal

    PubMed Central

    Mars, Maurice

    2015-01-01

    Abstract Background: We investigated the use of third-generation (3G) mobile communications to provide telehealth services in remote health clinics in rural KwaZulu-Natal, South Africa. Materials and Methods: We specified a minimal set of services as our use case that would be representative of typical activity and to provide a baseline for analysis of network performance. Services included database access to manage chronic disease, local support and management of patients (to reduce unnecessary travel to the hospital), emergency care (up to 8 h for an ambulance to arrive), e-mail, access to up-to-date information (Web), and teleclinics. We made site measurements at a representative set of health clinics to determine the type of coverage (general packet radio service [GPRS]/3G), its capabilities to support videoconferencing (H323 and Skype™ [Microsoft, Redmond, WA]) and audio (Skype), and throughput for transmission control protocol (TCP) to gain a measure of application performance. Results: We found that none of the remote health clinics had 3G service. The GPRS service provided typical upload speed of 44 kilobits per second (Kbps) and download speed of 64 Kbps. This was not sufficient to support any form of videoconferencing. We also observed that GPRS had significant round trip time (RTT), in some cases in excess of 750 ms, and this led to slow start-up for TCP applications. Conclusions: We found audio was always so broken as to be unusable and further observed that many applications such as Web access would fail under conditions of very high RTT. We found some health clinics were so remote that they had no mobile service. 3G, where available, had measured upload speed of 331 Kbps and download speed of 446 Kbps and supported videoconferencing and audio at all sites, but we frequently experienced 3G changing to GPRS. We conclude that mobile communications currently provide insufficient coverage and capability to provide reliable clinical services and would advocate dedicated wireless services where reliable communication is essential and use of store and forward for mobile applications. PMID:24926731

  17. An investigation into the use of 3G mobile communications to provide telehealth services in rural KwaZulu-Natal.

    PubMed

    Clarke, Malcolm; Mars, Maurice

    2015-02-01

    We investigated the use of third-generation (3G) mobile communications to provide telehealth services in remote health clinics in rural KwaZulu-Natal, South Africa. We specified a minimal set of services as our use case that would be representative of typical activity and to provide a baseline for analysis of network performance. Services included database access to manage chronic disease, local support and management of patients (to reduce unnecessary travel to the hospital), emergency care (up to 8 h for an ambulance to arrive), e-mail, access to up-to-date information (Web), and teleclinics. We made site measurements at a representative set of health clinics to determine the type of coverage (general packet radio service [GPRS]/3G), its capabilities to support videoconferencing (H323 and Skype™ [Microsoft, Redmond, WA]) and audio (Skype), and throughput for transmission control protocol (TCP) to gain a measure of application performance. We found that none of the remote health clinics had 3G service. The GPRS service provided typical upload speed of 44 kilobits per second (Kbps) and download speed of 64 Kbps. This was not sufficient to support any form of videoconferencing. We also observed that GPRS had significant round trip time (RTT), in some cases in excess of 750 ms, and this led to slow start-up for TCP applications. We found audio was always so broken as to be unusable and further observed that many applications such as Web access would fail under conditions of very high RTT. We found some health clinics were so remote that they had no mobile service. 3G, where available, had measured upload speed of 331 Kbps and download speed of 446 Kbps and supported videoconferencing and audio at all sites, but we frequently experienced 3G changing to GPRS. We conclude that mobile communications currently provide insufficient coverage and capability to provide reliable clinical services and would advocate dedicated wireless services where reliable communication is essential and use of store and forward for mobile applications.
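
    A back-of-envelope calculation shows why the high round trip times reported above cripple TCP applications at GPRS speeds. The model below is deliberately simplified (one RTT per doubling of the congestion window, no loss); the RTT and bandwidth figures come from the abstract, while the page and segment sizes are assumptions.

    ```python
    # Back-of-envelope illustration of the abstract's point: at GPRS speeds
    # with ~750 ms RTT, TCP slow start dominates small transfers.
    import math

    rtt_s = 0.75               # round trip time observed on GPRS (from the text)
    download_bps = 64_000      # measured GPRS download speed (from the text)
    mss_bytes = 1460           # typical TCP segment payload (assumption)
    page_bytes = 100_000       # size of a modest web page (assumption)

    segments = math.ceil(page_bytes / mss_bytes)
    # Rounds of slow start before the congestion window covers all segments:
    rounds = math.ceil(math.log2(segments + 1))
    startup_s = rounds * rtt_s
    serialization_s = page_bytes * 8 / download_bps

    print(f"slow-start overhead ~ {startup_s:.1f} s, "
          f"raw transfer ~ {serialization_s:.1f} s")
    ```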

  18. Re-Framing the World Wide Web

    ERIC Educational Resources Information Center

    Black, August

    2011-01-01

    The research presented in this dissertation studies and describes how technical standards, protocols, and application programming interfaces (APIs) shape the aesthetic, functional, and affective nature of our most dominant mode of online communication, the World Wide Web (WWW). I examine the politically charged and contentious battle over browser…

  19. Intranets: Just Another Bandwagon?

    ERIC Educational Resources Information Center

    Lynch, Gary

    1997-01-01

    Discusses intranets--the deployment and use of Internet technologies such as the World Wide Web, electronic mail, and Transmission Control Protocol/Internet Protocol (TCP/IP) on a closed network. Considers the "hype," benefits, standards, implementation, and problems of intranets, and concludes that while intranets can be beneficial,…

  20. Integrating geo web services for a user driven exploratory analysis

    NASA Astrophysics Data System (ADS)

    Moncrieff, Simon; Turdukulov, Ulanbek; Gulland, Elizabeth-Kate

    2016-04-01

    In data exploration, several online data sources may need to be dynamically aggregated or summarised over a spatial region, time interval, or set of attributes. With respect to thematic data, web services are mainly used to present results, leading to a supplier-driven service model that limits the exploration of the data. In this paper we propose a user-need-driven service model based on geo web processing services. The aim of the framework is to provide a method for scalable and interactive access to various geographic data sources on the web. The architecture combines a data query, processing technique and visualisation methodology to rapidly integrate and visually summarise properties of a dataset. We illustrate the environment on a health-related use case that derives the Age Standardised Rate - a dynamic index that needs integration of the existing interoperable web services of demographic data in conjunction with standalone non-spatial secure database servers used in health research. Although the example is specific to the health field, the architecture and the proposed approach are relevant and applicable to other fields that require integration and visualisation of geo datasets from various web services and thus, we believe, the approach is generic.
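
    In its common direct-standardisation form, the Age Standardised Rate mentioned above is a weighted average of age-specific rates, weighted by a standard population. A minimal sketch with illustrative numbers:

    ```python
    # Sketch of the directly age-standardised rate (ASR) the use case derives:
    # a weighted average of age-specific rates, weighted by a standard
    # population. All numbers below are illustrative only.
    age_specific_rates = [0.2, 1.5, 12.0]           # cases per 100,000, per age band
    standard_population = [30_000, 50_000, 20_000]  # standard weights per band

    asr = sum(r * w for r, w in zip(age_specific_rates, standard_population)) \
          / sum(standard_population)
    print(f"ASR = {asr:.2f} per 100,000")
    ```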

  1. Spatial Data Services for Interdisciplinary Applications from the NASA Socioeconomic Data and Applications Center

    NASA Astrophysics Data System (ADS)

    Chen, R. S.; MacManus, K.; Vinay, S.; Yetman, G.

    2016-12-01

    The Socioeconomic Data and Applications Center (SEDAC), one of 12 Distributed Active Archive Centers (DAACs) in the NASA Earth Observing System Data and Information System (EOSDIS), has developed a variety of operational spatial data services aimed at providing online access, visualization, and analytic functions for geospatial socioeconomic and environmental data. These services include: open web services that implement Open Geospatial Consortium (OGC) specifications such as Web Map Service (WMS), Web Feature Service (WFS), and Web Coverage Service (WCS); spatial query services that support Web Processing Service (WPS) and Representational State Transfer (REST); and web map clients and a mobile app that utilize SEDAC and other open web services. These services may be accessed from a variety of external map clients and visualization tools such as NASA's WorldView, NOAA's Climate Explorer, and ArcGIS Online. More than 200 data layers related to population, settlements, infrastructure, agriculture, environmental pollution, land use, health, hazards, climate change and other aspects of sustainable development are available through WMS, WFS, and/or WCS. Version 2 of the SEDAC Population Estimation Service (PES) supports spatial queries through WPS and REST in the form of a user-defined polygon or circle. The PES returns an estimate of the population residing in the defined area for a specific year (2000, 2005, 2010, 2015, or 2020) based on SEDAC's Gridded Population of the World version 4 (GPWv4) dataset, together with measures of accuracy. The SEDAC Hazards Mapper and the recently released HazPop iOS mobile app enable users to easily submit spatial queries to the PES and see the results. SEDAC has developed an operational virtualized backend infrastructure to manage these services and support their continual improvement as standards change, new data and services become available, and user needs evolve. An ongoing challenge is to improve the reliability and performance of the infrastructure, in conjunction with external services, to meet both research and operational needs.
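
    The spatial-query pattern described for the PES can be sketched as a REST call carrying a user-defined polygon. Note that the URL and parameter names below are assumptions for illustration, not SEDAC's documented interface.

    ```python
    # Illustrative only: the kind of REST spatial query the Population
    # Estimation Service supports. The URL and parameter names here are
    # assumptions, not SEDAC's documented interface.
    import json
    import requests

    PES_URL = "https://example.org/sedac/pes/v2/population"  # hypothetical

    polygon = {  # user-defined area as GeoJSON
        "type": "Polygon",
        "coordinates": [[[-74.1, 40.6], [-73.9, 40.6],
                         [-73.9, 40.9], [-74.1, 40.9], [-74.1, 40.6]]],
    }

    resp = requests.get(PES_URL,
                        params={"year": 2015, "polygon": json.dumps(polygon)},
                        timeout=60)
    resp.raise_for_status()
    print(resp.json())  # population estimate plus accuracy measures
    ```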

  2. An energy-efficient architecture for internet of things systems

    NASA Astrophysics Data System (ADS)

    De Rango, Floriano; Barletta, Domenico; Imbrogno, Alessandro

    2016-05-01

    In this paper some of the motivations for energy-efficient communications in wireless systems are described by highlighting emerging trends and identifying some challenges that need to be addressed to enable novel, scalable and energy-efficient communications. An architecture for Internet of Things systems is then presented, the purpose of which is to minimize energy consumption by communication devices, protocols, networks, end-user systems and data centers. Some electrical devices have been designed with multiple communication interfaces, such as RF or WiFi, using open-source technology, and they have been analyzed under different working conditions. Some devices are programmed to communicate directly with a web server, others to communicate only with a special device that acts as a bridge between some devices and the web server. Communication parameters and device status have been changed dynamically according to different scenarios in order to obtain the greatest benefits in terms of energy cost and battery lifetime. Thus, the way devices communicate with the web server or between each other, and the way they obtain the information they need to stay up to date, change dynamically in order to always guarantee the lowest energy consumption, a long-lasting battery lifetime, the fastest responses and feedback, and the best quality of service and communication for end users and the inner devices of the system.

  3. Design of Deformation Monitoring System for Volcano Mitigation

    NASA Astrophysics Data System (ADS)

    Islamy, M. R. F.; Salam, R. A.; Munir, M. M.; Irsyam, M.; Khairurrijal

    2016-08-01

    Indonesia has many active volcanoes that are potentially disastrous. Good mitigation systems are needed to reduce casualties from potential disasters caused by volcanic eruptions. Therefore, a system to monitor volcano deformation was built. This system employed telemetry combining Radio Frequency (RF) communication via XBEE and General Packet Radio Service (GPRS) communication via SIM900. There are two types of modules in this system: the coordinator, acting as a parent, and the node, acting as a child. Each node was connected to the coordinator, forming a Wireless Sensor Network (WSN) with a star topology, and each has an inclinometer-based sensor, a Global Positioning System (GPS), and an XBEE module. The coordinator collects data from each node, one at a time, to prevent data collisions between nodes, saves the data to an SD card, and transmits the data to a web server via GPRS. The inclinometer was calibrated with a self-built calibrator and tested in a high-temperature environment to check its durability. The GPS was tested by displaying its position on a web server via the Google Maps Application Programming Interface (API v.3). It was shown that the coordinator can receive data from every node and transmit it to the web server very well, and that the system works well in a high-temperature environment.
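
    The collision-free polling described above can be sketched as a simple coordinator loop that queries one node at a time. This is a hypothetical sketch only: the helper functions, node addresses and timings are all invented, not the system's firmware.

    ```python
    # Hypothetical sketch of the coordinator's polling loop described above:
    # nodes are queried one at a time over XBEE so replies never collide.
    # The helper callables and node addresses are made up for illustration.
    import time

    NODE_ADDRESSES = ["0x0001", "0x0002", "0x0003"]  # assumed XBEE addresses

    def poll_forever(xbee_send, xbee_receive, log_to_sd, upload_via_gprs):
        while True:
            for addr in NODE_ADDRESSES:
                xbee_send(addr, b"READ")           # request inclinometer + GPS
                frame = xbee_receive(timeout_s=2)  # only the polled node replies
                if frame is not None:
                    log_to_sd(frame)               # local backup on the SD card
                    upload_via_gprs(frame)         # push to the web server
            time.sleep(10)                         # sampling interval (assumed)
    ```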

  4. Operational Use of OGC Web Services at the Met Office

    NASA Astrophysics Data System (ADS)

    Wright, Bruce

    2010-05-01

    The Met Office has adopted the Service-Orientated Architecture paradigm to deliver services to a range of customers through Rich Internet Applications (RIAs). The approach uses standard Open Geospatial Consortium (OGC) web services to provide information to web-based applications through a range of generic data services. "Invent", the Met Office beta site, is used to showcase Met Office future plans for presenting web-based weather forecasts, products and information to the public. This currently hosts a freely accessible Weather Map Viewer, written in JavaScript, which accesses a Web Map Service (WMS) to deliver innovative web-based visualizations of weather and its potential impacts to the public. The intention is to engage the public in the development of new web-based services that more accurately meet their needs. As the service is intended for public use within the UK, it has been designed to support a user base of 5 million, the analysed level of UK web traffic reaching the Met Office's public weather information site. The required scalability has been realised through the use of multi-tier tile caching: - WMS requests are made for 256x256 tiles for fixed areas and zoom levels; - a Tile Cache, developed in house, efficiently serves tiles on demand, managing WMS requests for new tiles; - Edge Servers, externally hosted by Akamai, provide a highly scalable (UK-centric) service for pre-cached tiles, passing new requests to the Tile Cache; - the Invent Weather Map Viewer uses the Google Maps API to request tiles from Edge Servers. (We would expect to make use of the Web Map Tiling Service, when it becomes an OGC standard.) The Met Office delivers specialist commercial products to market sectors such as transport, utilities and defence, which exploit a Web Feature Service (WFS) for data relating forecasts and observations to specific geographic features, and a Web Coverage Service (WCS) for sub-selections of gridded data. These are locally rendered as maps or graphs, and combined with the WMS pre-rendered images and text, in a FLEX application, to provide a sophisticated, user-impact-based view of the weather. The OGC web services supporting these applications have been developed in collaboration with commercial companies. Visual Weather was originally a desktop application for forecasters, but IBL have developed it to expose the full range of forecast and observation data through standard web services (WCS and WMS). Forecasts and observations relating to specific locations and geographic features are held in an Oracle Database, and exposed as a WFS using Snowflake Software's GO-Publisher application. The Met Office has worked closely with both IBL and Snowflake Software to ensure that the web services provided strike a balance between conformance to the standards and performance in an operational environment. This has proved challenging in areas where the standards are rapidly evolving (e.g. WCS) or do not allow adequate description of the Met-Ocean domain (e.g. multiple time coordinates and parametric vertical coordinates). It has also become clear that careful selection of the features to expose, based on the way in which you expect users to query those features, is necessary in order to deliver adequate performance. These experiences are providing useful 'real-world' input into the recently launched OGC MetOcean Domain Working Group and World Meteorological Organisation (WMO) initiatives in this area.
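
    The fixed 256x256 tiles at fixed areas and zoom levels correspond to the standard web-map tile scheme also used by the Google Maps API. A cache like the one described keys tiles on (zoom, x, y) indices, computed from longitude and latitude with the standard Web Mercator formula:

    ```python
    # Standard "slippy map" tile indexing: the conversion a tile cache like
    # the one described would use to key its 256x256 tiles.
    import math

    def tile_indices(lon_deg: float, lat_deg: float, zoom: int) -> tuple[int, int]:
        n = 2 ** zoom                         # tiles per axis at this zoom level
        x = int((lon_deg + 180.0) / 360.0 * n)
        lat_rad = math.radians(lat_deg)
        y = int((1.0 - math.asinh(math.tan(lat_rad)) / math.pi) / 2.0 * n)
        return x, y

    print(tile_indices(-3.5, 50.73, 8))       # tile containing Exeter, UK
    ```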

  5. Using JavaScript and the FDSN web service to create an interactive earthquake information system

    NASA Astrophysics Data System (ADS)

    Fischer, Kasper D.

    2015-04-01

    The FDSN web service provides a web interface for accessing earthquake metadata (e.g. event or station information) and waveform data over the internet. Requests are sent to a server as URLs and the output is either XML or miniSEED. This makes it hard for humans to read but easy for software to process. Several data centers already support the FDSN web service, e.g. USGS, IRIS and ORFEUS. The FDSN web service is also part of the Seiscomp3 (http://www.seiscomp3.org) software. The Seismological Observatory of the Ruhr-University switched to Seiscomp3 as its standard software for the analysis of mining-induced earthquakes at the beginning of 2014. This made it necessary to create a new web-based earthquake information service for the publication of results to the general public. This has been done by processing the output of an FDSN web service query with JavaScript running in a standard browser. The result is an interactive web page presenting the observed events on a map and further event and station information as tables. In addition, the user can download event information, waveform data and station data in different formats like miniSEED, QuakeML or FDSNxml. The developed code and all used libraries are open source and freely available.
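
    A minimal example of the URL-based request pattern described: per the FDSN web service specification, query parameters select the events and the format parameter chooses XML (QuakeML) or plain text. The endpoint shown is IRIS's public FDSN event service.

    ```python
    # Sketch of the URL-based FDSN web service pattern described above:
    # parameters go in the query string; the reply is QuakeML or text.
    import requests

    FDSN_EVENT = "https://service.iris.edu/fdsnws/event/1/query"

    params = {
        "starttime": "2014-01-01",
        "endtime": "2014-01-31",
        "minmagnitude": 5.5,
        "format": "text",        # easier than QuakeML for a quick look
    }

    resp = requests.get(FDSN_EVENT, params=params, timeout=30)
    resp.raise_for_status()
    print(resp.text.splitlines()[0])   # header line of the event list
    ```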

  6. Factors that influence acceptance of web-based e-learning systems for the in-service education of junior high school teachers in Taiwan.

    PubMed

    Chen, Hong-Ren; Tseng, Hsiao-Fen

    2012-08-01

    Web-based e-learning is not restricted by time or place and can provide teachers with a learning environment that is flexible and convenient, enabling them to efficiently learn, quickly develop their professional expertise, and advance professionally. Many research reports on web-based e-learning have neglected the role of the teacher's perspective in the acceptance of using web-based e-learning systems for in-service education. We distributed questionnaires to 402 junior high school teachers in central Taiwan. This study used the Technology Acceptance Model (TAM) as our theoretical foundation and employed the Structure Equation Model (SEM) to examine factors that influenced intentions to use in-service training conducted through web-based e-learning. The results showed that motivation to use and Internet self-efficacy were significantly positively associated with behavioral intentions regarding the use of web-based e-learning for in-service training through the factors of perceived usefulness and perceived ease of use. The factor of computer anxiety had a significantly negative effect on behavioral intentions toward web-based e-learning in-service training through the factor of perceived ease of use. Perceived usefulness and motivation to use were the primary reasons for the acceptance by junior high school teachers of web-based e-learning systems for in-service training. Copyright © 2011 Elsevier Ltd. All rights reserved.

  7. Service-based analysis of biological pathways

    PubMed Central

    Zheng, George; Bouguettaya, Athman

    2009-01-01

    Background Computer-based pathway discovery is concerned with two important objectives: pathway identification and analysis. Conventional mining and modeling approaches aimed at pathway discovery are often effective at achieving either objective, but not both. Such limitations can be effectively tackled by leveraging a Web service-based modeling and mining approach. Results Inspired by molecular recognition and drug discovery processes, we developed a Web service mining tool, named PathExplorer, to discover potentially interesting biological pathways linking service models of biological processes. The tool uses an innovative approach to identify useful pathways based on graph-based hints and service-based simulation verifying the user's hypotheses. Conclusion Web service modeling of biological processes allows the easy access and invocation of these processes on the Web. Web service mining techniques described in this paper enable the discovery of biological pathways linking these process service models. Algorithms presented in this paper for automatically highlighting interesting subgraphs within an identified pathway network enable the user to formulate hypotheses, which can be tested using our simulation algorithm, which is also described in this paper. PMID:19796403

  8. Distributed data discovery, access and visualization services to Improve Data Interoperability across different data holdings

    NASA Astrophysics Data System (ADS)

    Palanisamy, G.; Krassovski, M.; Devarakonda, R.; Santhana Vannan, S.

    2012-12-01

    The current climate debate is highlighting the importance of free, open, and authoritative sources of high quality climate data that are available for peer review and for collaborative purposes. It is increasingly important to allow various organizations around the world to share climate data in an open manner, and to enable them to perform dynamic processing of climate data. This advanced access to data can be enabled via Web-based services, using common "community agreed" standards without having to change their internal structure used to describe the data. The modern scientific community has become diverse and increasingly complex in nature. To meet the demands of such a diverse user community, the modern data supplier has to provide data and other related information through a searchable, data- and process-oriented tool. This can be accomplished by setting up an on-line, Web-based system with a relational database as a back end. The following common features of web data access/search systems will be outlined in the proposed presentation: - Flexible data discovery - Data in commonly used formats (e.g., CSV, NetCDF) - Preparation of metadata in standard formats (FGDC, ISO 19115, EML, DIF, etc.) - Data subsetting capabilities and the ability to narrow down to individual data elements - Standards-based data access protocols and mechanisms (SOAP, REST, OpenDAP, OGC, etc.) - Integration of services across different data systems (discovery to access, visualization and subsetting) This presentation will also include specific examples of the integration of various data systems developed by Oak Ridge National Laboratory's Climate Change Science Institute, and their ability to communicate with each other to enable better data interoperability and data integration. References: [1] Devarakonda, Ranjeet, and Harold Shanafield. "Drupal: Collaborative framework for science research." Collaboration Technologies and Systems (CTS), 2011 International Conference on. IEEE, 2011. [2] Devarakonda, R., Shrestha, B., Palanisamy, G., Hook, L. A., Killeffer, T. S., Boden, T. A., ... & Lazer, K. (2014). The new online metadata editor for generating structured metadata. Oak Ridge National Laboratory (ORNL).

  9. Tobacco cessation among users of telephone and web-based interventions--four states, 2011-2012.

    PubMed

    Puckett, Mary; Neri, Antonio; Thompson, Trevor; Underwood, J Michael; Momin, Behnoosh; Kahende, Jennifer; Zhang, Lei; Stewart, Sherri L

    2015-01-02

    Smoking caused an average of 480,000 deaths per year in the United States from 2005 to 2009, and three in 10 cancer deaths in the United States are tobacco related. Tobacco cessation is a high public health priority, and all states offer some form of tobacco cessation service. Quitlines provide telephone-based counseling services and are an effective intervention for tobacco cessation. In addition to telephone services, 96% of all U.S. quitlines offer Web-based cessation services. Evidence is limited on the number of tobacco users who use more than one type of service, and studies report mixed results on whether combined telephone and Web-based counseling improves long-term cessation compared with telephone alone. CDC conducted a survey of users of telephone and Web-based cessation services in four states to determine the cessation success of users of these interventions. After adjusting for multiple variables, persons who used both telephone and Web-based services were more likely to report abstinence from smoking for 30 days at follow-up than telephone-only users (odds ratio = 1.3) and Web-only users (odds ratio = 1.5). These findings suggest that states might consider offering both types of cessation services to increase cessation success.

  10. CASAS: A tool for composing automatically and semantically astrophysical services

    NASA Astrophysics Data System (ADS)

    Louge, T.; Karray, M. H.; Archimède, B.; Knödlseder, J.

    2017-07-01

    Multiple astronomical datasets are available through the internet and the astrophysical Distributed Computing Infrastructure (DCI) called the Virtual Observatory (VO). Some scientific workflow technologies exist for retrieving and combining data from those sources. However, the selection of relevant services, the automation of workflow composition, and the lack of user-friendly platforms remain concerns. This paper presents CASAS, a tool for semantic web service composition in astrophysics. The tool proposes automatic composition of astrophysical web services and brings a semantics-based, automatic composition of workflows. It widens the choice of services and eases the use of heterogeneous services. Semantic web service composition relies on ontologies for elaborating the composition; this work is based on the Astrophysical Services ONtology (ASON). ASON's structure is mostly inherited from the capacities of VO services. Nevertheless, our approach is not limited to the VO and brings VO and non-VO services together without the need for premade recipes. CASAS is available for use through a simple web interface.

  11. Security and Efficiency Concerns With Distributed Collaborative Networking Environments

    DTIC Science & Technology

    2003-09-01

    have the ability to access Web communications services of the WebEx MediaTone Network from a single login. [24] WebEx provides a range of secure...Web. WebEx services enable secure data, voice and video communications through the browser and are supported by the WebEx MediaTone Network, a global...designed to host large-scale, structured events and conferences, featuring a Q&A Manager that allows multiple moderators to handle questions while

  12. Finding, Browsing and Getting Data Easily Using SPDF Web Services

    NASA Technical Reports Server (NTRS)

    Candey, R.; Chimiak, R.; Harris, B.; Johnson, R.; Kovalick, T.; Lal, N.; Leckner, H.; Liu, M.; McGuire, R.; Papitashvili, N.; hide

    2010-01-01

    The NASA GSFC Space Physics Data Facility (SPDF) provides heliophysics science-enabling information services for enhancing scientific research and enabling integration of these services into the Heliophysics Data Environment paradigm, via standards-based SOAP and Representational State Transfer (REST) web services, in addition to web browser, FTP, and OPeNDAP interfaces. We describe these interfaces and the philosophies behind these web services, and show how to call them from various languages, such as IDL and Perl. We are working towards a "one simple line to call" philosophy extolled in the recent VxO discussions. Combining data from many instruments and missions enables broad research analysis and correlation and coordination with other experiments and missions.

  13. Web Services Security - Implementation and Evaluation Issues

    NASA Astrophysics Data System (ADS)

    Pimenidis, Elias; Georgiadis, Christos K.; Bako, Peter; Zorkadis, Vassilis

    Web services development is a key theme in the utilization and commercial exploitation of the semantic web. Paramount to the development and offering of such services is the issue of security features and the way these are applied in instituting trust amongst participants and recipients of the service. Implementing such security features is a major challenge to developers as they need to balance these with performance and interoperability requirements. Being able to evaluate the level of security offered is a desirable feature for any prospective participant. The authors attempt to address the issues of security requirements and evaluation criteria, while they discuss the challenges of security implementation through a simple web service application case.

  14. 15 Years of Terra, 14 Years of Application Usage

    NASA Astrophysics Data System (ADS)

    Schmaltz, J. E.; Alarcon, C.; Boller, R. A.; Cechini, M. F.; Davies, D.; Fu, G.; Gunnoe, T.; Hall, J. R.; Huang, T.; Ilavajhala, S.; Jackson, M.; King, J.; McGann, M.; Murphy, K. J.; Roberts, J. T.; Thompson, C. K.; Ye, G.

    2014-12-01

    The instruments onboard the Terra spacecraft were designed for long-term Earth science research but not long after launch it became apparent that this data and imagery could be made available in near real-time for applications users. During the year 2000 fire season in the western United States, the US Forest Service approached NASA with a request to expedite MODIS fire detections. The Rapid Response system was created to generate fire detections as well as true color imagery in both swath and geo-referenced formats. This imagery was used by a wide variety of applications, such as NASA's AERONET program, the USDA Foreign Agricultural Service, Antarctic resupply shipping, flood mapping for relief agencies, Deepwater Horizon monitoring, volcanic ash monitoring, as well as print, televised, and Internet media. From 2004, the University of Maryland's Web Fire Mapper helped distribute fire detection information in a variety of formats. However, the applications community expressed the need for near-real time access to the underlying data. This requirement led to the development of the Land Atmosphere Near real-time Capability for EOS (Earth Observing System) (LANCE) in 2009. To achieve the latency requirements, many components of the EOS satellite operations, ground and science processing systems had to be made more efficient. In addition, products that require ancillary data were modified to use alternate inputs. Forty Terra MODIS data products are currently available from LANCE. LANCE also includes data from other instruments including AIRS, AMSR-E, MLS, and OMI. To help near-real time users navigate this large data offering, a new imagery service was begun in 2011 - Global Imagery Browse Services (GIBS). This service provides very responsive viewing using the Web Map Tile Service protocol. These programs will continue to support and expand the use of Terra data for near-real time applications well into the future.
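
    A GIBS tile can be fetched with a single WMTS REST request. The URL below follows the GIBS pattern and layer naming at the time of writing; treat the layer name, tile matrix set and indices as an example rather than a stable contract.

    ```python
    # Sketch of fetching one GIBS tile via the WMTS REST pattern described
    # above. Layer name and tile matrix set follow GIBS conventions at the
    # time of writing and may change.
    import requests

    tile_url = (
        "https://gibs.earthdata.nasa.gov/wmts/epsg4326/best/"
        "MODIS_Terra_CorrectedReflectance_TrueColor/default/"
        "2014-08-01/250m/3/2/4.jpg"   # {time}/{matrixset}/{zoom}/{row}/{col}
    )

    resp = requests.get(tile_url, timeout=30)
    resp.raise_for_status()
    with open("terra_tile.jpg", "wb") as f:
        f.write(resp.content)         # one JPEG map tile
    ```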

  15. Knowledge-driven enhancements for task composition in bioinformatics.

    PubMed

    Sutherland, Karen; McLeod, Kenneth; Ferguson, Gus; Burger, Albert

    2009-10-01

    A key application area of semantic technologies is the fast-developing field of bioinformatics. Sealife was a project within this field with the aim of creating semantics-based web browsing capabilities for the Life Sciences. This includes meaningfully linking significant terms from the text of a web page to executable web services. It also involves the semantic mark-up of biological terms, linking them to biomedical ontologies, then discovering and executing services based on terms that interest the user. A system was produced which allows a user to identify terms of interest on a web page and subsequently connects these to a choice of web services which can make use of these inputs. Elements of Artificial Intelligence Planning build on this to present a choice of higher level goals, which can then be broken down to construct a workflow. An Argumentation System was implemented to evaluate the results produced by three different gene expression databases. An evaluation of these modules was carried out on users from a variety of backgrounds. Users with little knowledge of web services were able to achieve tasks that used several services in much less time than they would have taken to do this manually. The Argumentation System was also considered a useful resource and feedback was collected on the best way to present results. Overall the system represents a move forward in helping users to both construct workflows and analyse results by incorporating specific domain knowledge into the software. It also provides a mechanism by which web pages can be linked to web services. However, this work covers a specific domain and much co-ordinated effort is needed to make all web services available for use in such a way, i.e. the integration of underlying knowledge is a difficult but essential task.

  16. Clever generation of rich SPARQL queries from annotated relational schema: application to Semantic Web Service creation for biological databases.

    PubMed

    Wollbrett, Julien; Larmande, Pierre; de Lamotte, Frédéric; Ruiz, Manuel

    2013-04-15

    In recent years, a large amount of "-omics" data have been produced. However, these data are stored in many different species-specific databases that are managed by different institutes and laboratories. Biologists often need to find and assemble data from disparate sources to perform certain analyses. Searching for these data and assembling them is a time-consuming task. The Semantic Web helps to facilitate interoperability across databases. A common approach involves the development of wrapper systems that map a relational database schema onto existing domain ontologies. However, few attempts have been made to automate the creation of such wrappers. We developed a framework, named BioSemantic, for the creation of Semantic Web Services that are applicable to relational biological databases. This framework makes use of both Semantic Web and Web Services technologies and can be divided into two main parts: (i) the generation and semi-automatic annotation of an RDF view; and (ii) the automatic generation of SPARQL queries and their integration into Semantic Web Services backbones. We have used our framework to integrate genomic data from different plant databases. BioSemantic is a framework that was designed to speed integration of relational databases. We present how it can be used to speed the development of Semantic Web Services for existing relational biological databases. Currently, it creates and annotates RDF views that enable the automatic generation of SPARQL queries. Web Services are also created and deployed automatically, and the semantic annotations of our Web Services are added automatically using SAWSDL attributes. BioSemantic is downloadable at http://southgreen.cirad.fr/?q=content/Biosemantic.

  17. Clever generation of rich SPARQL queries from annotated relational schema: application to Semantic Web Service creation for biological databases

    PubMed Central

    2013-01-01

    Background In recent years, a large amount of “-omics” data have been produced. However, these data are stored in many different species-specific databases that are managed by different institutes and laboratories. Biologists often need to find and assemble data from disparate sources to perform certain analyses. Searching for these data and assembling them is a time-consuming task. The Semantic Web helps to facilitate interoperability across databases. A common approach involves the development of wrapper systems that map a relational database schema onto existing domain ontologies. However, few attempts have been made to automate the creation of such wrappers. Results We developed a framework, named BioSemantic, for the creation of Semantic Web Services that are applicable to relational biological databases. This framework makes use of both Semantic Web and Web Services technologies and can be divided into two main parts: (i) the generation and semi-automatic annotation of an RDF view; and (ii) the automatic generation of SPARQL queries and their integration into Semantic Web Services backbones. We have used our framework to integrate genomic data from different plant databases. Conclusions BioSemantic is a framework that was designed to speed integration of relational databases. We present how it can be used to speed the development of Semantic Web Services for existing relational biological databases. Currently, it creates and annotates RDF views that enable the automatic generation of SPARQL queries. Web Services are also created and deployed automatically, and the semantic annotations of our Web Services are added automatically using SAWSDL attributes. BioSemantic is downloadable at http://southgreen.cirad.fr/?q=content/Biosemantic. PMID:23586394
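
    The kind of SPARQL query that BioSemantic generates over an annotated RDF view can be executed with the SPARQLWrapper library. In this sketch the endpoint URL and property IRIs are placeholders, not BioSemantic's actual output.

    ```python
    # Illustrative sketch: executing the kind of SPARQL query BioSemantic
    # generates against a (hypothetical) endpoint exposing an annotated RDF
    # view. The endpoint URL and property IRIs are placeholders.
    from SPARQLWrapper import SPARQLWrapper, JSON

    sparql = SPARQLWrapper("http://example.org/biosemantic/sparql")
    sparql.setQuery("""
        PREFIX ex: <http://example.org/plant#>
        SELECT ?gene ?annotation WHERE {
            ?gene ex:hasAnnotation ?annotation .
        } LIMIT 10
    """)
    sparql.setReturnFormat(JSON)

    for row in sparql.query().convert()["results"]["bindings"]:
        print(row["gene"]["value"], row["annotation"]["value"])
    ```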

  18. Selecting a Free Web-Hosted Survey Tool for Student Use

    ERIC Educational Resources Information Center

    Elbeck, Matt

    2014-01-01

    This study provides marketing educators a review of free web-based survey services and guidance for student use. A mixed methods approach started with online searches and metrics identifying 13 free web-hosted survey services, described as demonstration or project tools, and ranked using popularity and importance web-based metrics. For each…

  19. About NOAA's National Weather Service

    Science.gov Websites

    official web portal to all federal, state and local government web resources and services. Follow the Field Offices are located across the country. Links to web sites of these components of the National that affect small businesses have established Web sites, e-mail addresses, and toll free phone numbers

  20. Prediction of toxicity and comparison of alternatives using WebTEST (Web-services Toxicity Estimation Software Tool)

    EPA Science Inventory

    A Java-based web service is being developed within the US EPA’s Chemistry Dashboard to provide real time estimates of toxicity values and physical properties. WebTEST can generate toxicity predictions directly from a simple URL which includes the endpoint, QSAR method, and ...
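
    The record above is truncated, so only the shape of the "prediction from a simple URL" pattern can be sketched. The base path and parameter names below are hypothetical, not the actual WebTEST interface.

    ```python
    # Hypothetical shape of a prediction-from-URL service like the one
    # described: endpoint and QSAR method in the path, structure as a
    # parameter. None of these names are the real WebTEST interface.
    import requests

    BASE = "https://example.org/webtest/predict"   # hypothetical base URL

    resp = requests.get(f"{BASE}/LC50/consensus",  # endpoint + QSAR method
                        params={"smiles": "CCO"},  # structure to predict (ethanol)
                        timeout=30)
    resp.raise_for_status()
    print(resp.json())
    ```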
