Sample records for amazon web services

  1. Boverhof's App Earns Honorable Mention in Amazon's Web Services

    Science.gov Websites

    Boverhof's App Earns Honorable Mention in Amazon's Web Services Competition, run by Amazon Web Services (AWS). Amazon officially announced the winners of its EC2 Spotathon on Monday.

  2. MR-Tandem: parallel X!Tandem using Hadoop MapReduce on Amazon Web Services.

    PubMed

    Pratt, Brian; Howbert, J Jeffry; Tasman, Natalie I; Nilsson, Erik J

    2012-01-01

    MR-Tandem adapts the popular X!Tandem peptide search engine to work with Hadoop MapReduce for reliable parallel execution of large searches. MR-Tandem runs on any Hadoop cluster but offers special support for Amazon Web Services for creating inexpensive on-demand Hadoop clusters, enabling search volumes that might not otherwise be feasible with the compute resources a researcher has at hand. MR-Tandem is designed to drop in wherever X!Tandem is already in use and requires no modification to existing X!Tandem parameter files, and only minimal modification to X!Tandem-based workflows. MR-Tandem is implemented as a lightly modified X!Tandem C++ executable and a Python script that drives Hadoop clusters including Amazon Web Services (AWS) Elastic MapReduce (EMR), using the modified X!Tandem program as a Hadoop Streaming mapper and reducer. The modified X!Tandem C++ source code is Artistic licensed, supports pluggable scoring, and is available as part of the Sashimi project at http://sashimi.svn.sourceforge.net/viewvc/sashimi/trunk/trans_proteomic_pipeline/extern/xtandem/. The MR-Tandem Python script is Apache licensed and available as part of the Insilicos Cloud Army project at http://ica.svn.sourceforge.net/viewvc/ica/trunk/mr-tandem/. Full documentation and a Windows installer that configures MR-Tandem, Python and all necessary packages are available at this same URL. brian.pratt@insilicos.com
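
    The Hadoop Streaming contract that MR-Tandem plugs the modified X!Tandem executable into is simple to illustrate: mappers and reducers are plain executables that read stdin and write tab-separated key/value pairs to stdout. The sketch below is a generic Python mapper/reducer pair under that contract (a word-count-style aggregation, not MR-Tandem's actual search logic).

        #!/usr/bin/env python
        # streaming_demo.py -- generic Hadoop Streaming mapper/reducer pair.
        # Run as "streaming_demo.py map" or "streaming_demo.py reduce".
        import sys

        def mapper():
            # Emit one tab-separated (key, value) pair per input line;
            # this stdout format is the Hadoop Streaming contract.
            for line in sys.stdin:
                fields = line.strip().split()
                if fields:
                    print(f"{fields[0]}\t1")

        def reducer():
            # Hadoop delivers mapper output sorted by key; aggregate each run.
            current, count = None, 0
            for line in sys.stdin:
                key, value = line.rstrip("\n").split("\t")
                if key != current:
                    if current is not None:
                        print(f"{current}\t{count}")
                    current, count = key, 0
                count += int(value)
            if current is not None:
                print(f"{current}\t{count}")

        if __name__ == "__main__":
            mapper() if sys.argv[1] == "map" else reducer()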

  3. MR-Tandem: parallel X!Tandem using Hadoop MapReduce on Amazon Web Services

    PubMed Central

    Pratt, Brian; Howbert, J. Jeffry; Tasman, Natalie I.; Nilsson, Erik J.

    2012-01-01

    Summary: MR-Tandem adapts the popular X!Tandem peptide search engine to work with Hadoop MapReduce for reliable parallel execution of large searches. MR-Tandem runs on any Hadoop cluster but offers special support for Amazon Web Services for creating inexpensive on-demand Hadoop clusters, enabling search volumes that might not otherwise be feasible with the compute resources a researcher has at hand. MR-Tandem is designed to drop in wherever X!Tandem is already in use and requires no modification to existing X!Tandem parameter files, and only minimal modification to X!Tandem-based workflows. Availability and implementation: MR-Tandem is implemented as a lightly modified X!Tandem C++ executable and a Python script that drives Hadoop clusters including Amazon Web Services (AWS) Elastic MapReduce (EMR), using the modified X!Tandem program as a Hadoop Streaming mapper and reducer. The modified X!Tandem C++ source code is Artistic licensed, supports pluggable scoring, and is available as part of the Sashimi project at http://sashimi.svn.sourceforge.net/viewvc/sashimi/trunk/trans_proteomic_pipeline/extern/xtandem/. The MR-Tandem Python script is Apache licensed and available as part of the Insilicos Cloud Army project at http://ica.svn.sourceforge.net/viewvc/ica/trunk/mr-tandem/. Full documentation and a Windows installer that configures MR-Tandem, Python and all necessary packages are available at this same URL. Contact: brian.pratt@insilicos.com PMID:22072385

  4. Performance Management of High Performance Computing for Medical Image Processing in Amazon Web Services.

    PubMed

    Bao, Shunxing; Damon, Stephen M; Landman, Bennett A; Gokhale, Aniruddha

    2016-02-27

    Adopting high performance cloud computing for medical image processing is a popular trend given the pressing needs of large studies. Amazon Web Services (AWS) provide reliable, on-demand, and inexpensive cloud computing services. Our research objective is to implement an affordable, scalable and easy-to-use AWS framework for the Java Image Science Toolkit (JIST). JIST is a plugin for Medical-Image Processing, Analysis, and Visualization (MIPAV) that provides a graphical pipeline implementation allowing users to quickly test and develop pipelines. JIST is DRMAA-compliant allowing it to run on portable batch system grids. However, as new processing methods are implemented and developed, memory may often be a bottleneck for not only lab computers, but also possibly some local grids. Integrating JIST with the AWS cloud alleviates these possible restrictions and does not require users to have deep knowledge of programming in Java. Workflow definition/management and cloud configurations are two key challenges in this research. Using a simple unified control panel, users have the ability to set the numbers of nodes and select from a variety of pre-configured AWS EC2 nodes with different numbers of processors and memory storage. Intuitively, we configured Amazon S3 storage to be mounted by pay-for-use Amazon EC2 instances. Hence, S3 storage is recognized as a shared cloud resource. The Amazon EC2 instances provide pre-installs of all necessary packages to run JIST. This work presents an implementation that facilitates the integration of JIST with AWS. We describe the theoretical cost/benefit formulae to decide between local serial execution versus cloud computing and apply this analysis to an empirical diffusion tensor imaging pipeline.
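
    The local-versus-cloud decision the authors formalize can be sketched as simple arithmetic. The function below is a generic break-even comparison, not the paper's own cost/benefit formulae, and every number in the example is hypothetical.

        import math

        def cloud_vs_local(n_jobs, hours_per_job, n_nodes, price_per_node_hour,
                           overhead_hours=0.25):
            """Toy comparison of local serial execution vs. an on-demand
            cluster; a sketch, not the paper's derived formulae."""
            local_hours = n_jobs * hours_per_job                   # serial wall time
            cloud_hours = math.ceil(n_jobs / n_nodes) * hours_per_job + overhead_hours
            cloud_cost = n_nodes * math.ceil(cloud_hours) * price_per_node_hour
            return local_hours, cloud_hours, cloud_cost

        # Hypothetical: 500 pipeline jobs of 0.5 h each on 25 nodes at $0.10/node-hour.
        local_h, cloud_h, cost = cloud_vs_local(500, 0.5, 25, 0.10)
        print(f"local {local_h} h, cloud {cloud_h} h, ~${cost:.2f}")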

  5. Performance management of high performance computing for medical image processing in Amazon Web Services

    NASA Astrophysics Data System (ADS)

    Bao, Shunxing; Damon, Stephen M.; Landman, Bennett A.; Gokhale, Aniruddha

    2016-03-01

    Adopting high performance cloud computing for medical image processing is a popular trend given the pressing needs of large studies. Amazon Web Services (AWS) provide reliable, on-demand, and inexpensive cloud computing services. Our research objective is to implement an affordable, scalable and easy-to-use AWS framework for the Java Image Science Toolkit (JIST). JIST is a plugin for Medical-Image Processing, Analysis, and Visualization (MIPAV) that provides a graphical pipeline implementation allowing users to quickly test and develop pipelines. JIST is DRMAA-compliant allowing it to run on portable batch system grids. However, as new processing methods are implemented and developed, memory may often be a bottleneck for not only lab computers, but also possibly some local grids. Integrating JIST with the AWS cloud alleviates these possible restrictions and does not require users to have deep knowledge of programming in Java. Workflow definition/management and cloud configurations are two key challenges in this research. Using a simple unified control panel, users have the ability to set the numbers of nodes and select from a variety of pre-configured AWS EC2 nodes with different numbers of processors and memory storage. Intuitively, we configured Amazon S3 storage to be mounted by pay-for-use Amazon EC2 instances. Hence, S3 storage is recognized as a shared cloud resource. The Amazon EC2 instances provide pre-installs of all necessary packages to run JIST. This work presents an implementation that facilitates the integration of JIST with AWS. We describe the theoretical cost/benefit formulae to decide between local serial execution versus cloud computing and apply this analysis to an empirical diffusion tensor imaging pipeline.

  6. Performance Management of High Performance Computing for Medical Image Processing in Amazon Web Services

    PubMed Central

    Bao, Shunxing; Damon, Stephen M.; Landman, Bennett A.; Gokhale, Aniruddha

    2016-01-01

    Adopting high performance cloud computing for medical image processing is a popular trend given the pressing needs of large studies. Amazon Web Services (AWS) provide reliable, on-demand, and inexpensive cloud computing services. Our research objective is to implement an affordable, scalable and easy-to-use AWS framework for the Java Image Science Toolkit (JIST). JIST is a plugin for Medical-Image Processing, Analysis, and Visualization (MIPAV) that provides a graphical pipeline implementation allowing users to quickly test and develop pipelines. JIST is DRMAA-compliant allowing it to run on portable batch system grids. However, as new processing methods are implemented and developed, memory may often be a bottleneck for not only lab computers, but also possibly some local grids. Integrating JIST with the AWS cloud alleviates these possible restrictions and does not require users to have deep knowledge of programming in Java. Workflow definition/management and cloud configurations are two key challenges in this research. Using a simple unified control panel, users have the ability to set the numbers of nodes and select from a variety of pre-configured AWS EC2 nodes with different numbers of processors and memory storage. Intuitively, we configured Amazon S3 storage to be mounted by pay-for-use Amazon EC2 instances. Hence, S3 storage is recognized as a shared cloud resource. The Amazon EC2 instances provide pre-installs of all necessary packages to run JIST. This work presents an implementation that facilitates the integration of JIST with AWS. We describe the theoretical cost/benefit formulae to decide between local serial execution versus cloud computing and apply this analysis to an empirical diffusion tensor imaging pipeline. PMID:27127335

  7. HEPCloud, a New Paradigm for HEP Facilities: CMS Amazon Web Services Investigation

    DOE PAGES

    Holzman, Burt; Bauerdick, Lothar A. T.; Bockelman, Brian; ...

    2017-09-29

    Historically, high energy physics computing has been performed on large purpose-built computing systems. These began as single-site compute facilities, but have evolved into the distributed computing grids used today. Recently, there has been an exponential increase in the capacity and capability of commercial clouds. Cloud resources are highly virtualized and intended to be able to be flexibly deployed for a variety of computing tasks. There is a growing interest among the cloud providers to demonstrate the capability to perform large-scale scientific computing. In this paper, we discuss results from the CMS experiment using the Fermilab HEPCloud facility, which utilized both local Fermilab resources and virtual machines in the Amazon Web Services Elastic Compute Cloud. We discuss the planning, technical challenges, and lessons learned involved in performing physics workflows on a large-scale set of virtualized resources. Additionally, we will discuss the economics and operational efficiencies when executing workflows both in the cloud and on dedicated resources.

  8. HEPCloud, a New Paradigm for HEP Facilities: CMS Amazon Web Services Investigation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Holzman, Burt; Bauerdick, Lothar A. T.; Bockelman, Brian

    Historically, high energy physics computing has been performed on large purpose-built computing systems. These began as single-site compute facilities, but have evolved into the distributed computing grids used today. Recently, there has been an exponential increase in the capacity and capability of commercial clouds. Cloud resources are highly virtualized and intended to be able to be flexibly deployed for a variety of computing tasks. There is a growing interest among the cloud providers to demonstrate the capability to perform large-scale scientific computing. In this paper, we discuss results from the CMS experiment using the Fermilab HEPCloud facility, which utilized both local Fermilab resources and virtual machines in the Amazon Web Services Elastic Compute Cloud. We discuss the planning, technical challenges, and lessons learned involved in performing physics workflows on a large-scale set of virtualized resources. Additionally, we will discuss the economics and operational efficiencies when executing workflows both in the cloud and on dedicated resources.

  9. Giovanni in the Cloud: Earth Science Data Exploration in Amazon Web Services

    NASA Astrophysics Data System (ADS)

    Hegde, M.; Petrenko, M.; Smit, C.; Zhang, H.; Pilone, P.; Zasorin, A. A.; Pham, L.

    2017-12-01

    Giovanni (https://giovanni.gsfc.nasa.gov/giovanni/) is a popular online data exploration tool at the NASA Goddard Earth Sciences Data Information Services Center (GES DISC), providing 22 analysis and visualization services for over 1600 Earth Science data variables. Owing to its popularity, Giovanni has experienced a consistent growth in overall demand, with periodic usage spikes attributed to trainings by education organizations, extensive data analysis in response to natural disasters, preparations for science meetings, etc. Furthermore, the new generation of spaceborne sensors and high resolution models have resulted in an exponential growth in data volume with data distributed across the traditional boundaries of datacenters. Seamless exploration of data (without users having to worry about data center boundaries) has been a key recommendation of the GES DISC User Working Group. These factors have required new strategies for delivering acceptable performance. The cloud-based Giovanni, built on Amazon Web Services (AWS), evaluates (1) AWS native solutions to provide a scalable, serverless architecture; (2) open standards for data storage in the Cloud; (3) a cost model for operations; and (4) end-user performance. Our preliminary findings indicate that the use of serverless architecture has a potential to significantly reduce development and operational cost of Giovanni. The combination of using AWS managed services, storage of data in open standards, and schema-on-read data access strategy simplifies data access and analytics, in addition to making data more accessible to the end users of Giovanni through popular programming languages.
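
    A minimal sketch of the serverless, schema-on-read pattern evaluated here: an AWS Lambda handler that pulls an object out of S3 with boto3 and interprets its structure only at request time. The bucket, key, and payload layout are hypothetical, not Giovanni's actual storage schema.

        import json
        import boto3

        s3 = boto3.client("s3")

        def handler(event, context):
            # Bucket and key are hypothetical examples.
            bucket = event.get("bucket", "example-giovanni-data")
            key = event.get("key", "aerosol/2017/aod_monthly.json")
            obj = s3.get_object(Bucket=bucket, Key=key)
            record = json.loads(obj["Body"].read())
            # Schema-on-read: the structure is interpreted here, at access
            # time, rather than being fixed by a database schema up front.
            return {"statusCode": 200,
                    "body": json.dumps({"variable": record.get("variable"),
                                        "mean": record.get("mean")})}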

  10. Giovanni in the Cloud: Earth Science Data Exploration in Amazon Web Services

    NASA Technical Reports Server (NTRS)

    Petrenko, Maksym; Hegde, Mahabal; Smit, Christine; Zhang, Hailiang; Pilone, Paul; Zasorin, Andrey A.; Pham, Long

    2017-01-01

    Giovanni is an exploration tool at the NASA Goddard Earth Sciences Data Information Services Center (GES DISC), providing 22 analysis and visualization services for over 1600 Earth Science data variables. Owing to its popularity, Giovanni has experienced a consistent growth in overall demand, with periodic usage spikes attributed to trainings by education organizations, extensive data analysis in response to natural disasters, preparations for science meetings, etc. Furthermore, the new generation of spaceborne sensors and high resolution models have resulted in an exponential growth in data volume with data distributed across the traditional boundaries of data centers. Seamless exploration of data (without users having to worry about data center boundaries) has been a key recommendation of the GES DISC User Working Group. These factors have required new strategies for delivering acceptable performance. The cloud-based Giovanni, built on Amazon Web Services (AWS), evaluates (1) AWS native solutions to provide a scalable, serverless architecture; (2) open standards for data storage in the Cloud; (3) a cost model for operations; and (4) end-user performance. Our preliminary findings indicate that the use of serverless architecture has a potential to significantly reduce development and operational cost of Giovanni. The combination of using AWS managed services, storage of data in open standards, and schema-on-read data access strategy simplifies data access and analytics, in addition to making data more accessible to the end users of Giovanni through popular programming languages.

  11. Running Neuroimaging Applications on Amazon Web Services: How, When, and at What Cost?

    PubMed

    Madhyastha, Tara M; Koh, Natalie; Day, Trevor K M; Hernández-Fernández, Moises; Kelley, Austin; Peterson, Daniel J; Rajan, Sabreena; Woelfer, Karl A; Wolf, Jonathan; Grabowski, Thomas J

    2017-01-01

    The contribution of this paper is to identify and describe current best practices for using Amazon Web Services (AWS) to execute neuroimaging workflows "in the cloud." Neuroimaging offers a vast set of techniques by which to interrogate the structure and function of the living brain. However, many of the scientists for whom neuroimaging is an extremely important tool have limited training in parallel computation. At the same time, the field is experiencing a surge in computational demands, driven by a combination of data-sharing efforts, improvements in scanner technology that allow acquisition of images with higher image resolution, and by the desire to use statistical techniques that stress processing requirements. Most neuroimaging workflows can be executed as independent parallel jobs and are therefore excellent candidates for running on AWS, but the overhead of learning to do so and determining whether it is worth the cost can be prohibitive. In this paper we describe how to identify neuroimaging workloads that are appropriate for running on AWS, how to benchmark execution time, and how to estimate cost of running on AWS. By benchmarking common neuroimaging applications, we show that cloud computing can be a viable alternative to on-premises hardware. We present guidelines that neuroimaging labs can use to provide a cluster-on-demand type of service that should be familiar to users, and scripts to estimate cost and create such a cluster.
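
    The benchmark-then-estimate approach described above reduces to simple arithmetic once per-subject runtimes are measured. The sketch below assumes an embarrassingly parallel workload; the subject count, runtime, and $0.20/hour price are placeholders, not measured values from the paper.

        import math

        def estimate_aws_cost(n_subjects, hours_per_subject,
                              price_per_instance_hour, instances=1):
            """Cost and wall time for one-job-per-subject workloads."""
            compute_hours = n_subjects * hours_per_subject
            wall_hours = math.ceil(n_subjects / instances) * hours_per_subject
            return compute_hours * price_per_instance_hour, wall_hours

        # Hypothetical: 200 subjects, 2 h each, 50 instances at $0.20/hour.
        cost, wall = estimate_aws_cost(200, 2.0, 0.20, instances=50)
        print(f"~${cost:.2f} total, ~{wall:.1f} h wall time")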

  12. An Analysis of Cloud Computing with Amazon Web Services for the Atmospheric Science Data Center

    NASA Astrophysics Data System (ADS)

    Gleason, J. L.; Little, M. M.

    2013-12-01

    NASA science and engineering efforts rely heavily on compute and data handling systems. The nature of NASA science data is such that it is not restricted to NASA users; instead, it is widely shared across a globally distributed user community including scientists, educators, policy decision makers, and the public. Therefore, NASA science computing is a candidate use case for cloud computing, where compute resources are outsourced to an external vendor. Amazon Web Services (AWS) is a commercial cloud computing service developed to use excess computing capacity at Amazon, and potentially provides an alternative to costly and potentially underutilized dedicated acquisitions whenever NASA scientists or engineers require additional data processing. AWS aims to provide a simplified avenue for NASA scientists and researchers to share large, complex data sets with external partners and the public. AWS has been extensively used by JPL for a wide range of computing needs and was previously tested on a NASA Agency basis during the Nebula testing program. Its ability to support Langley Science Directorate needs must still be evaluated by integrating it with real-world operational needs across NASA, along with the maturity that such integration would bring. The strengths and weaknesses of this architecture and its ability to support general science and engineering applications have been demonstrated during the previous testing. The Langley Office of the Chief Information Officer, in partnership with the Atmospheric Sciences Data Center (ASDC), has established a pilot business interface to utilize AWS cloud computing resources on an organization- and project-level, pay-per-use model. This poster discusses an effort to evaluate the feasibility of the pilot business interface from a project-level perspective, specifically using a processing scenario involving the Clouds and Earth's Radiant Energy System (CERES) project.

  13. Running Neuroimaging Applications on Amazon Web Services: How, When, and at What Cost?

    PubMed Central

    Madhyastha, Tara M.; Koh, Natalie; Day, Trevor K. M.; Hernández-Fernández, Moises; Kelley, Austin; Peterson, Daniel J.; Rajan, Sabreena; Woelfer, Karl A.; Wolf, Jonathan; Grabowski, Thomas J.

    2017-01-01

    The contribution of this paper is to identify and describe current best practices for using Amazon Web Services (AWS) to execute neuroimaging workflows “in the cloud.” Neuroimaging offers a vast set of techniques by which to interrogate the structure and function of the living brain. However, many of the scientists for whom neuroimaging is an extremely important tool have limited training in parallel computation. At the same time, the field is experiencing a surge in computational demands, driven by a combination of data-sharing efforts, improvements in scanner technology that allow acquisition of images with higher image resolution, and by the desire to use statistical techniques that stress processing requirements. Most neuroimaging workflows can be executed as independent parallel jobs and are therefore excellent candidates for running on AWS, but the overhead of learning to do so and determining whether it is worth the cost can be prohibitive. In this paper we describe how to identify neuroimaging workloads that are appropriate for running on AWS, how to benchmark execution time, and how to estimate cost of running on AWS. By benchmarking common neuroimaging applications, we show that cloud computing can be a viable alternative to on-premises hardware. We present guidelines that neuroimaging labs can use to provide a cluster-on-demand type of service that should be familiar to users, and scripts to estimate cost and create such a cluster. PMID:29163119

  14. The BCube Crawler: Web Scale Data and Service Discovery for EarthCube.

    NASA Astrophysics Data System (ADS)

    Lopez, L. A.; Khalsa, S. J. S.; Duerr, R.; Tayachow, A.; Mingo, E.

    2014-12-01

    Web-crawling, a core component of the NSF-funded BCube project, is researching and applying the use of big data technologies to find and characterize different types of web services, catalog interfaces, and data feeds, such as ESIP OpenSearch, OGC W*S, THREDDS, and OAI-PMH, that describe or provide access to scientific datasets. Given the scale of the Internet, which challenges even large search providers such as Google, the BCube plan for discovering these web-accessible services is to subdivide the problem into three smaller, more tractable issues. The first is to discover likely sites where relevant data and data services might be found; the second is to deeply crawl the discovered sites to find any data and services present; the last is to leverage semantic technologies to characterize the services and data found, and to filter out everything but those relevant to the geosciences. To address the first two challenges, BCube uses an adapted version of Apache Nutch (the project from which Hadoop originated), a web-scale crawler, together with Amazon's Elastic MapReduce service for flexibility and cost effectiveness. For characterization of the services found, BCube is examining existing web service ontologies for their applicability to our needs and will re-use and/or extend these in order to query for services with specific well-defined characteristics in scientific datasets, such as the use of geospatial namespaces. The original proposal for the crawler won a grant from Amazon's academic program, which allowed us to become operational; we successfully tested the BCube Crawler at web scale, obtaining a corpus sizeable enough to enable work on characterizing the services and data found. There is still plenty of work to be done: performing "smart crawls" by managing the frontier, developing and enhancing our scoring algorithms, and fully implementing the semantic characterization technologies. We describe the current status of the project
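
    The "characterize what a site serves" step can be illustrated with a small probe that checks a handful of well-known paths for service signatures. This is an illustrative stand-in, not the project's Nutch-based crawler; the host, paths, and markers are simplified examples.

        import requests

        # Hypothetical probe paths and response markers for common service types.
        SIGNATURES = {
            "thredds": ("/thredds/catalog.xml", "catalog"),
            "opensearch": ("/opensearch/description.xml", "OpenSearchDescription"),
            "oai_pmh": ("/oai?verb=Identify", "OAI-PMH"),
        }

        def detect_services(base_url, timeout=10):
            found = []
            for name, (path, marker) in SIGNATURES.items():
                try:
                    r = requests.get(base_url.rstrip("/") + path, timeout=timeout)
                    if r.ok and marker in r.text:
                        found.append(name)
                except requests.RequestException:
                    pass  # unreachable hosts are simply skipped
            return found

        print(detect_services("http://data.example.org"))  # hypothetical host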

  15. Software architecture and design of the web services facilitating climate model diagnostic analysis

    NASA Astrophysics Data System (ADS)

    Pan, L.; Lee, S.; Zhang, J.; Tang, B.; Zhai, C.; Jiang, J. H.; Wang, W.; Bao, Q.; Qi, M.; Kubar, T. L.; Teixeira, J.

    2015-12-01

    Climate model diagnostic analysis is a computationally- and data-intensive task because it involves multiple numerical model outputs and satellite observation data that can both be high resolution. We have built an online tool that facilitates this process. The tool is called Climate Model Diagnostic Analyzer (CMDA). It employs web service technology and provides a web-based user interface. The benefits of these choices include: (1) no installation of any software other than a browser, hence platform independence; (2) co-location of computation and big data on the server side, with only small results and plots downloaded on the client side, hence high data efficiency; (3) a multi-threaded implementation to achieve parallel performance on multi-core servers; and (4) cloud deployment, so each user has a dedicated virtual machine. In this presentation, we will focus on the computer science aspects of this tool, namely the architectural design, the infrastructure of the web services, the implementation of the web-based user interface, the mechanism of provenance collection, the approach to virtualization, and the Amazon Cloud deployment. As an example, we will describe our methodology to transform an existing science application code into a web service using a Python wrapper interface and Python web service frameworks (i.e., Flask, Gunicorn, and Tornado). Another example is the use of Docker, a lightweight virtualization container, to distribute and deploy CMDA onto an Amazon EC2 instance. CMDA has been successfully used in the 2014 Summer School hosted by the JPL Center for Climate Science. Students gave positive feedback in general, and we will report their comments. An enhanced version of CMDA with several new features, some requested by the 2014 students, will be used in the 2015 Summer School soon.
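
    The abstract names the Python wrapper approach directly (Flask, Gunicorn, Tornado), so a minimal Flask sketch is a fair illustration: an existing science routine is exposed as an HTTP endpoint. The anomaly() function and route are hypothetical stand-ins for a real CMDA analysis code.

        from flask import Flask, jsonify, request

        app = Flask(__name__)

        def anomaly(values, baseline):
            """Placeholder for an existing science routine."""
            return [v - baseline for v in values]

        @app.route("/anomaly", methods=["POST"])
        def anomaly_service():
            # The Python wrapper: unpack the request, call the science code,
            # and return a small JSON result to the client.
            payload = request.get_json(force=True)
            result = anomaly(payload["values"], payload["baseline"])
            return jsonify({"anomaly": result})

        if __name__ == "__main__":
            # In production this would sit behind Gunicorn, as the abstract notes.
            app.run(host="0.0.0.0", port=8080)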

  16. Low cost, scalable proteomics data analysis using Amazon's cloud computing services and open source search algorithms.

    PubMed

    Halligan, Brian D; Geiger, Joey F; Vallejos, Andrew K; Greene, Andrew S; Twigger, Simon N

    2009-06-01

    One of the major difficulties for many laboratories setting up proteomics programs has been obtaining and maintaining the computational infrastructure required for the analysis of the large flow of proteomics data. We describe a system that combines distributed cloud computing and open source software to allow laboratories to set up scalable virtual proteomics analysis clusters without the investment in computational hardware or software licensing fees. Additionally, the pricing structure of distributed computing providers, such as Amazon Web Services, allows laboratories or even individuals to have large-scale computational resources at their disposal at a very low cost per run. We provide detailed step-by-step instructions on how to implement the virtual proteomics analysis clusters as well as a list of current available preconfigured Amazon machine images containing the OMSSA and X!Tandem search algorithms and sequence databases on the Medical College of Wisconsin Proteomics Center Web site ( http://proteomics.mcw.edu/vipdac ).
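
    Standing up a virtual analysis node from a preconfigured machine image is a few API calls today. The sketch below uses boto3 (which postdates this paper) with a placeholder AMI ID, key pair, and instance type rather than the actual ViPDAC images named above.

        import boto3

        ec2 = boto3.client("ec2", region_name="us-east-1")

        response = ec2.run_instances(
            ImageId="ami-0123456789abcdef0",   # hypothetical preconfigured AMI
            InstanceType="c5.xlarge",          # choose per workload and budget
            KeyName="my-keypair",              # hypothetical key pair
            MinCount=1,
            MaxCount=4,                        # a small search cluster
        )
        ids = [i["InstanceId"] for i in response["Instances"]]
        print("launched:", ids)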

  17. Climate Model Diagnostic Analyzer Web Service System

    NASA Astrophysics Data System (ADS)

    Lee, S.; Pan, L.; Zhai, C.; Tang, B.; Jiang, J. H.

    2014-12-01

    We have developed a cloud-enabled web-service system that empowers physics-based, multi-variable model performance evaluations and diagnoses through the comprehensive and synergistic use of multiple observational data, reanalysis data, and model outputs. We have developed a methodology to transform an existing science application code into a web service using a Python wrapper interface and Python web service frameworks. The web-service system, called Climate Model Diagnostic Analyzer (CMDA), currently supports (1) all the observational datasets from Obs4MIPs and a few ocean datasets from NOAA and Argo, which can serve as observation-based reference data for model evaluation, (2) many of CMIP5 model outputs covering a broad range of atmosphere, ocean, and land variables from the CMIP5 specific historical runs and AMIP runs, and (3) ECMWF reanalysis outputs for several environmental variables in order to supplement observational datasets. Analysis capabilities currently supported by CMDA are (1) the calculation of annual and seasonal means of physical variables, (2) the calculation of time evolution of the means in any specified geographical region, (3) the calculation of correlation between two variables, (4) the calculation of difference between two variables, and (5) the conditional sampling of one physical variable with respect to another variable. A web user interface is chosen for CMDA because it not only lowers the learning curve and removes the adoption barrier of the tool but also enables instantaneous use, avoiding the hassle of local software installation and environment incompatibility. CMDA will be used as an educational tool for the summer school organized by JPL's Center for Climate Science in 2014. In order to support 30+ simultaneous users during the school, we have deployed CMDA to the Amazon cloud environment. The cloud-enabled CMDA will provide each student with a virtual machine while the user interaction with the system will remain the same
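
    Several of the numbered analysis capabilities are one-liners over gridded arrays. The NumPy sketch below illustrates capabilities (1), (3), and (4) on synthetic monthly data standing in for model output and observations.

        import numpy as np

        rng = np.random.default_rng(0)
        monthly = rng.normal(size=120)                         # 10 years, monthly
        other = 0.7 * monthly + rng.normal(scale=0.5, size=120)

        annual_means = monthly.reshape(10, 12).mean(axis=1)    # capability (1)
        corr = np.corrcoef(monthly, other)[0, 1]               # capability (3)
        diff = monthly - other                                 # capability (4)

        print(annual_means[:3], round(corr, 3))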

  18. Web services for ecosystem services management and poverty alleviation

    NASA Astrophysics Data System (ADS)

    Buytaert, W.; Baez, S.; Veliz Rosas, C.

    2011-12-01

    Over the last decades, near real-time environmental observation, technical advances in computer power and cyber-infrastructure, and the development of environmental software algorithms have increased dramatically. The integration of these evolutions is one of the major challenges of the next decade for the environmental sciences. Worldwide, many coordinated activities are ongoing to make this integration a reality. However, far less attention is paid to the question of how these developments can benefit environmental services management in a poverty alleviation context. Such projects are typically faced with issues of large predictive uncertainties, limited resources, and limited local scientific capacity. At the same time, the complexity of the socio-economic contexts requires a strongly bottom-up and interdisciplinary approach to environmental data collection and processing. Here, we present the results of two projects on integrated environmental monitoring and scenario analysis aimed at poverty alleviation in the Peruvian Andes and Amazon. In the upper Andean highlands, farmers are monitoring the water cycle of headwater catchments to analyse the impact of land-use changes on stream flow and potential consequences for downstream irrigation. In the Amazon, local communities are monitoring the dynamics of turtle populations and their relations with river levels. In both cases, the use of online databases and web processing services enables real-time analysis of the data and scenario analysis. The system provides both physical and social indicators to assess the impact of land-use management options on local socio-economic development.

  19. Processing Shotgun Proteomics Data on the Amazon Cloud with the Trans-Proteomic Pipeline*

    PubMed Central

    Slagel, Joseph; Mendoza, Luis; Shteynberg, David; Deutsch, Eric W.; Moritz, Robert L.

    2015-01-01

    Cloud computing, where scalable, on-demand compute cycles and storage are available as a service, has the potential to accelerate mass spectrometry-based proteomics research by providing simple, expandable, and affordable large-scale computing to all laboratories regardless of location or information technology expertise. We present new cloud computing functionality for the Trans-Proteomic Pipeline, a free and open-source suite of tools for the processing and analysis of tandem mass spectrometry datasets. Enabled with Amazon Web Services cloud computing, the Trans-Proteomic Pipeline now accesses large scale computing resources, limited only by the available Amazon Web Services infrastructure, for all users. The Trans-Proteomic Pipeline runs in an environment fully hosted on Amazon Web Services, where all software and data reside on cloud resources to tackle large search studies. In addition, it can also be run on a local computer with computationally intensive tasks launched onto the Amazon Elastic Compute Cloud service to greatly decrease analysis times. We describe the new Trans-Proteomic Pipeline cloud service components, compare the relative performance and costs of various Elastic Compute Cloud service instance types, and present on-line tutorials that enable users to learn how to deploy cloud computing technology rapidly with the Trans-Proteomic Pipeline. We provide tools for estimating the necessary computing resources and costs given the scale of a job and demonstrate the use of cloud enabled Trans-Proteomic Pipeline by performing over 1100 tandem mass spectrometry files through four proteomic search engines in 9 h and at a very low cost. PMID:25418363

  20. Processing shotgun proteomics data on the Amazon cloud with the trans-proteomic pipeline.

    PubMed

    Slagel, Joseph; Mendoza, Luis; Shteynberg, David; Deutsch, Eric W; Moritz, Robert L

    2015-02-01

    Cloud computing, where scalable, on-demand compute cycles and storage are available as a service, has the potential to accelerate mass spectrometry-based proteomics research by providing simple, expandable, and affordable large-scale computing to all laboratories regardless of location or information technology expertise. We present new cloud computing functionality for the Trans-Proteomic Pipeline, a free and open-source suite of tools for the processing and analysis of tandem mass spectrometry datasets. Enabled with Amazon Web Services cloud computing, the Trans-Proteomic Pipeline now accesses large scale computing resources, limited only by the available Amazon Web Services infrastructure, for all users. The Trans-Proteomic Pipeline runs in an environment fully hosted on Amazon Web Services, where all software and data reside on cloud resources to tackle large search studies. In addition, it can also be run on a local computer with computationally intensive tasks launched onto the Amazon Elastic Compute Cloud service to greatly decrease analysis times. We describe the new Trans-Proteomic Pipeline cloud service components, compare the relative performance and costs of various Elastic Compute Cloud service instance types, and present on-line tutorials that enable users to learn how to deploy cloud computing technology rapidly with the Trans-Proteomic Pipeline. We provide tools for estimating the necessary computing resources and costs given the scale of a job and demonstrate the use of cloud enabled Trans-Proteomic Pipeline by performing over 1100 tandem mass spectrometry files through four proteomic search engines in 9 h and at a very low cost. © 2015 by The American Society for Biochemistry and Molecular Biology, Inc.

  1. An Automatic Web Service Composition Framework Using QoS-Based Web Service Ranking Algorithm.

    PubMed

    Mallayya, Deivamani; Ramachandran, Baskaran; Viswanathan, Suganya

    2015-01-01

    Web services have become the technology of choice for service-oriented computing to meet the interoperability demands of web applications. In the Internet era, the exponential addition of web services nominates "quality of service" as an essential parameter for discriminating among web services. In this paper, a user-preference-based web service ranking (UPWSR) algorithm is proposed to rank web services based on user preferences and the QoS aspects of the web service. When the user's request cannot be fulfilled by a single atomic service, several existing services should be composed and delivered as a composition. The proposed framework allows the user to specify local and global constraints for composite web services, which improves flexibility. The UPWSR algorithm identifies best-fit services for each task in the user request and, by choosing the number of candidate services for each task, reduces the time to generate the composition plans. To tackle the problem of web service composition, the QoS-aware automatic web service composition (QAWSC) algorithm proposed in this paper is based on the QoS aspects of the web services and user preferences. The proposed framework allows the user to provide feedback about the composite service, which improves the reputation of the services.
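
    The abstract does not spell out UPWSR's scoring, so the sketch below uses a plain weighted sum over normalized QoS attributes to show the shape of preference-based ranking; the service names, attributes, and weights are all hypothetical, with values assumed normalized to [0, 1], higher being better.

        def rank_services(services, weights):
            # Score each service by the preference-weighted sum of its QoS
            # attributes, then sort best-first.
            def score(qos):
                return sum(weights[k] * qos.get(k, 0.0) for k in weights)
            return sorted(services, key=lambda s: score(s["qos"]), reverse=True)

        candidates = [
            {"name": "svcA", "qos": {"availability": 0.99, "throughput": 0.6, "cost": 0.4}},
            {"name": "svcB", "qos": {"availability": 0.95, "throughput": 0.9, "cost": 0.7}},
        ]
        prefs = {"availability": 0.5, "throughput": 0.3, "cost": 0.2}  # user preferences
        print([s["name"] for s in rank_services(candidates, prefs)])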

  2. An Automatic Web Service Composition Framework Using QoS-Based Web Service Ranking Algorithm

    PubMed Central

    Mallayya, Deivamani; Ramachandran, Baskaran; Viswanathan, Suganya

    2015-01-01

    Web services have become the technology of choice for service-oriented computing to meet the interoperability demands of web applications. In the Internet era, the exponential addition of web services nominates "quality of service" as an essential parameter for discriminating among web services. In this paper, a user-preference-based web service ranking (UPWSR) algorithm is proposed to rank web services based on user preferences and the QoS aspects of the web service. When the user's request cannot be fulfilled by a single atomic service, several existing services should be composed and delivered as a composition. The proposed framework allows the user to specify local and global constraints for composite web services, which improves flexibility. The UPWSR algorithm identifies best-fit services for each task in the user request and, by choosing the number of candidate services for each task, reduces the time to generate the composition plans. To tackle the problem of web service composition, the QoS-aware automatic web service composition (QAWSC) algorithm proposed in this paper is based on the QoS aspects of the web services and user preferences. The proposed framework allows the user to provide feedback about the composite service, which improves the reputation of the services. PMID:26504894

  3. BOWS (bioinformatics open web services) to centralize bioinformatics tools in web services.

    PubMed

    Velloso, Henrique; Vialle, Ricardo A; Ortega, J Miguel

    2015-06-02

    Bioinformaticians face a range of difficulties in getting locally installed tools running and producing results; they would greatly benefit from a system that could centralize most of the tools behind an easy interface for input and output. Web services, due to their universal nature and widely known interface, constitute a very good option to achieve this goal. Bioinformatics open web services (BOWS) is a system based on generic web services produced to allow programmatic access to applications running on high-performance computing (HPC) clusters. BOWS mediates access to registered tools by providing front-end and back-end web services. Programmers can install applications on HPC clusters in any programming language and use the back-end service to check for new jobs and their parameters, and then send the results to BOWS. Programs running on ordinary computers consume the BOWS front-end service to submit new processes and read results. BOWS compiles Java clients, which encapsulate the front-end web service requests, and automatically creates a web page that lists the registered applications and clients. BOWS-registered applications can be accessed from virtually any programming language through web services, or using the standard Java clients. The back-end can run on HPC clusters, allowing bioinformaticians to remotely run high-processing-demand applications directly from their machines.
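
    The front-end/back-end split described above amounts to a polling worker on the cluster side. The sketch below shows that pattern with hypothetical endpoint URLs and JSON fields; it is not the actual BOWS API.

        import time
        import requests

        BOWS = "http://bows.example.org/api"   # hypothetical base URL

        def run_tool(params):
            """Placeholder for the locally installed bioinformatics tool."""
            return {"status": "ok", "params": params}

        while True:
            # Back-end pattern: poll for new jobs, run them, post results back.
            jobs = requests.get(f"{BOWS}/jobs?state=new", timeout=30).json()
            for job in jobs:
                result = run_tool(job["parameters"])
                requests.post(f"{BOWS}/jobs/{job['id']}/result", json=result,
                              timeout=30)
            time.sleep(10)   # poll interval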

  4. Low Cost, Scalable Proteomics Data Analysis Using Amazon's Cloud Computing Services and Open Source Search Algorithms

    PubMed Central

    Halligan, Brian D.; Geiger, Joey F.; Vallejos, Andrew K.; Greene, Andrew S.; Twigger, Simon N.

    2009-01-01

    One of the major difficulties for many laboratories setting up proteomics programs has been obtaining and maintaining the computational infrastructure required for the analysis of the large flow of proteomics data. We describe a system that combines distributed cloud computing and open source software to allow laboratories to set up scalable virtual proteomics analysis clusters without the investment in computational hardware or software licensing fees. Additionally, the pricing structure of distributed computing providers, such as Amazon Web Services, allows laboratories or even individuals to have large-scale computational resources at their disposal at a very low cost per run. We provide detailed step by step instructions on how to implement the virtual proteomics analysis clusters as well as a list of current available preconfigured Amazon machine images containing the OMSSA and X!Tandem search algorithms and sequence databases on the Medical College of Wisconsin Proteomics Center website (http://proteomics.mcw.edu/vipdac). PMID:19358578

  5. The economic value of the climate regulation ecosystem service provided by the Amazon rainforest

    NASA Astrophysics Data System (ADS)

    Heil Costa, Marcos; Pires, Gabrielle; Fontes, Vitor; Brumatti, Livia

    2017-04-01

    The rainy Amazon climate has allowed important activities to develop in the region, such as large rainfed agricultural areas and hydropower plants. The Amazon rainforest is an important source of moisture to the regional atmosphere and helps regulate the local climate. The replacement of forest by agricultural lands decreases the flux of water vapor into the atmosphere and changes precipitation patterns, which may severely affect such economic activities. Assigning an economic value to this ecosystem service may emphasize the importance of preserving the Amazon rainforest. In this work, we provide a first approximation of the quantification of the climate regulation ecosystem service provided by the Amazon rainforest using the marginal production method. We use climate scenarios derived from Amazon deforestation scenarios as input to crop and runoff models to assess how land use change would affect agriculture and hydropower generation. The effects of forest removal on soybean production and on cattle beef production can each be as high as US$16 per year per hectare deforested, and the effects on hydropower generation can be as high as US$8 per year per hectare deforested. We consider this a conservative estimate of a permanent service provided by the rainforest. Policy makers and other Amazon agriculture and energy businesses must be aware of these numbers and consider them while planning their activities.

  6. MedlinePlus Connect: Web Service

    MedlinePlus

    MedlinePlus Connect: Web Service (https://medlineplus.gov/connect/service.html). Web Service Overview: the parameters for the Web service ...

  7. Dynamic selection mechanism for quality of service aware web services

    NASA Astrophysics Data System (ADS)

    D'Mello, Demian Antony; Ananthanarayana, V. S.

    2010-02-01

    A web service is an interface of the software component that can be accessed by standard Internet protocols. The web service technology enables an application to application communication and interoperability. The increasing number of web service providers throughout the globe have produced numerous web services providing the same or similar functionality. This necessitates the use of tools and techniques to search the suitable services available over the Web. UDDI (universal description, discovery and integration) is the first initiative to find the suitable web services based on the requester's functional demands. However, the requester's requirements may also include non-functional aspects like quality of service (QoS). In this paper, the authors define a QoS model for QoS aware and business driven web service publishing and selection. The authors propose a QoS requirement format for the requesters, to specify their complex demands on QoS for the web service selection. The authors define a tree structure called quality constraint tree (QCT) to represent the requester's variety of requirements on QoS properties having varied preferences. The paper proposes a QoS broker based architecture for web service selection, which facilitates the requesters to specify their QoS requirements to select qualitatively optimal web service. A web service selection algorithm is presented, which ranks the functionally similar web services based on the degree of satisfaction of the requester's QoS requirements and preferences. The paper defines web service provider qualities to distinguish qualitatively competitive web services. The paper also presents the modelling and selection mechanism for the requester's alternative constraints defined on the QoS. The authors implement the QoS broker based system to prove the correctness of the proposed web service selection mechanism.

  8. Personalization of Rule-based Web Services.

    PubMed

    Choi, Okkyung; Han, Sang Yong

    2008-04-04

    Web users today have clearly expressed their wish to receive personalized services directly. Personalization is the way to tailor services to the immediate requirements of the user. However, the current Web Services System does not provide features supporting this, such as consideration of personalization of services and intelligent matchmaking. In this research, a flexible, personalized Rule-based Web Services System is proposed to address these problems and to enable efficient search, discovery, and construction across general Web documents and Semantic Web documents. This system performs matchmaking among service requesters', service providers', and users' preferences using a Rule-based Search Method, and subsequently ranks search results. A prototype of efficient Web Services search and construction for the suggested system is developed based on the current work.

  9. Task 28: Web Accessible APIs in the Cloud Trade Study

    NASA Technical Reports Server (NTRS)

    Gallagher, James; Habermann, Ted; Jelenak, Aleksandar; Lee, Joe; Potter, Nathan; Yang, Muqun

    2017-01-01

    This study explored three candidate architectures for serving NASA Earth Science Hierarchical Data Format Version 5 (HDF5) data via Hyrax running on Amazon Web Services (AWS). We studied the cost and performance of each architecture using several representative use cases. The objectives of the project were to: (1) conduct a trade study to identify one or more high-performance integrated solutions for storing and retrieving NASA HDF5 and Network Common Data Format Version 4 (netCDF4) data in a cloud (web object store) environment, the target environment being the Amazon Web Services (AWS) Simple Storage Service (S3); (2) conduct the level of software development needed to properly evaluate solutions in the trade study and to obtain the benchmarking metrics required as input to a government decision on potential follow-on prototyping; and (3) develop a cloud cost model for the preferred data storage solution (or solutions) that accounts for different granulation and aggregation schemes as well as cost and performance trades.
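
    Of the storage patterns studied, the simplest to show is pulling a whole HDF5 granule out of S3 for local reads; the sketch below does that with boto3 and h5py. The bucket, key, and dataset path are hypothetical, and this is client-side access, not the Hyrax service itself.

        import boto3
        import h5py

        s3 = boto3.client("s3")
        # Download the granule to local disk, then open it as ordinary HDF5.
        s3.download_file("example-nasa-hdf5", "granules/sample.h5", "/tmp/sample.h5")

        with h5py.File("/tmp/sample.h5", "r") as f:
            data = f["/science/temperature"][:]   # hypothetical dataset path
            print(data.shape, data.dtype)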

  10. The EMBRACE web service collection

    PubMed Central

    Pettifer, Steve; Ison, Jon; Kalaš, Matúš; Thorne, Dave; McDermott, Philip; Jonassen, Inge; Liaquat, Ali; Fernández, José M.; Rodriguez, Jose M.; INB-Partners; Pisano, David G.; Blanchet, Christophe; Uludag, Mahmut; Rice, Peter; Bartaseviciute, Edita; Rapacki, Kristoffer; Hekkelman, Maarten; Sand, Olivier; Stockinger, Heinz; Clegg, Andrew B.; Bongcam-Rudloff, Erik; Salzemann, Jean; Breton, Vincent; Attwood, Teresa K.; Cameron, Graham; Vriend, Gert

    2010-01-01

    The EMBRACE (European Model for Bioinformatics Research and Community Education) web service collection is the culmination of a 5-year project that set out to investigate issues involved in developing and deploying web services for use in the life sciences. The project concluded that in order for web services to achieve widespread adoption, standards must be defined for the choice of web service technology, for semantically annotating both service function and the data exchanged, and a mechanism for discovering services must be provided. Building on this, the project developed: EDAM, an ontology for describing life science web services; BioXSD, a schema for exchanging data between services; and a centralized registry (http://www.embraceregistry.net) that collects together around 1000 services developed by the consortium partners. This article presents the current status of the collection and its associated recommendations and standards definitions. PMID:20462862

  11. Biomedical cloud computing with Amazon Web Services.

    PubMed

    Fusaro, Vincent A; Patil, Prasad; Gafni, Erik; Wall, Dennis P; Tonellato, Peter J

    2011-08-01

    In this overview of biomedical computing in the cloud, we discussed two primary ways to use the cloud (a single instance or cluster), provided a detailed example using NGS mapping, and highlighted the associated costs. While many users new to the cloud may assume that entry is as straightforward as uploading an application and selecting an instance type and storage options, we illustrated that there is substantial up-front effort required before an application can make full use of the cloud's vast resources. Our intention was to provide a set of best practices and to illustrate how those apply to a typical application pipeline for biomedical informatics, but in a way general enough for extrapolation to other types of computational problems. Our mapping example was intended to illustrate how to develop a scalable project and not to compare and contrast alignment algorithms for read mapping and genome assembly. Indeed, with a newer aligner such as Bowtie, it is possible to map the entire African genome using one m2.2xlarge instance in 48 hours for a total cost of approximately $48 in computation time. In our example, we were not concerned with data transfer rates, which are heavily influenced by the amount of available bandwidth, connection latency, and network availability. When transferring large amounts of data to the cloud, bandwidth limitations can be a major bottleneck, and in some cases it is more efficient to simply mail a storage device containing the data to AWS (http://aws.amazon.com/importexport/). More information about cloud computing, detailed cost analysis, and security can be found in the references.
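
    The transfer-time point is worth a worked example: at a sustained 100 Mbps, a terabyte takes nearly a day to upload, which is why shipping a storage device can win. The numbers below are illustrative.

        def transfer_hours(size_tb, mbps):
            bits = size_tb * 1e12 * 8          # terabytes (decimal) -> bits
            return bits / (mbps * 1e6) / 3600  # seconds -> hours

        for size in (1, 10):                   # 1 TB and 10 TB datasets
            print(f"{size} TB at 100 Mbps: {transfer_hours(size, 100):.1f} h")
        # At roughly 22 h per TB over 100 Mbps, mailing a drive can beat the network.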

  12. Experiences Building Globus Genomics: A Next-Generation Sequencing Analysis Service using Galaxy, Globus, and Amazon Web Services.

    PubMed

    Madduri, Ravi K; Sulakhe, Dinanath; Lacinski, Lukasz; Liu, Bo; Rodriguez, Alex; Chard, Kyle; Dave, Utpal J; Foster, Ian T

    2014-09-10

    We describe Globus Genomics, a system that we have developed for rapid analysis of large quantities of next-generation sequencing (NGS) genomic data. This system achieves a high degree of end-to-end automation that encompasses every stage of data analysis including initial data retrieval from remote sequencing centers or storage (via the Globus file transfer system); specification, configuration, and reuse of multi-step processing pipelines (via the Galaxy workflow system); creation of custom Amazon Machine Images and on-demand resource acquisition via a specialized elastic provisioner (on Amazon EC2); and efficient scheduling of these pipelines over many processors (via the HTCondor scheduler). The system allows biomedical researchers to perform rapid analysis of large NGS datasets in a fully automated manner, without software installation or a need for any local computing infrastructure. We report performance and cost results for some representative workloads.

  13. Biological Web Service Repositories Review

    PubMed Central

    Urdidiales-Nieto, David; Navas-Delgado, Ismael

    2016-01-01

    Web services play a key role in bioinformatics enabling the integration of database access and analysis of algorithms. However, Web service repositories do not usually publish information on the changes made to their registered Web services. Dynamism is directly related to the changes in the repositories (services registered or unregistered) and at service level (annotation changes). Thus, users, software clients or workflow based approaches lack enough relevant information to decide when they should review or re-execute a Web service or workflow to get updated or improved results. The dynamism of the repository could be a measure for workflow developers to re-check service availability and annotation changes in the services of interest to them. This paper presents a review on the most well-known Web service repositories in the life sciences including an analysis of their dynamism. Freshness is introduced in this paper, and has been used as the measure for the dynamism of these repositories. PMID:27783459

  14. Assessing the Amazon Cloud Suitability for CLARREO's Computational Needs

    NASA Technical Reports Server (NTRS)

    Goldin, Daniel; Vakhnin, Andrei A.; Currey, Jon C.

    2015-01-01

    In this document we compare the performance of the Amazon Web Services (AWS), also known as Amazon Cloud, with the CLARREO (Climate Absolute Radiance and Refractivity Observatory) cluster and assess its suitability for the computational needs of the CLARREO mission. A benchmark executable to process one month and one year of PARASOL (Polarization and Anisotropy of Reflectances for Atmospheric Sciences coupled with Observations from a Lidar) data was used. With the optimal AWS configuration, adequate data-processing times, comparable to the CLARREO cluster, were found. The assessment of alternatives to the CLARREO cluster continues and several options, such as a NASA-based cluster, are being considered.

  15. Experiences Building Globus Genomics: A Next-Generation Sequencing Analysis Service using Galaxy, Globus, and Amazon Web Services

    PubMed Central

    Madduri, Ravi K.; Sulakhe, Dinanath; Lacinski, Lukasz; Liu, Bo; Rodriguez, Alex; Chard, Kyle; Dave, Utpal J.; Foster, Ian T.

    2014-01-01

    We describe Globus Genomics, a system that we have developed for rapid analysis of large quantities of next-generation sequencing (NGS) genomic data. This system achieves a high degree of end-to-end automation that encompasses every stage of data analysis including initial data retrieval from remote sequencing centers or storage (via the Globus file transfer system); specification, configuration, and reuse of multi-step processing pipelines (via the Galaxy workflow system); creation of custom Amazon Machine Images and on-demand resource acquisition via a specialized elastic provisioner (on Amazon EC2); and efficient scheduling of these pipelines over many processors (via the HTCondor scheduler). The system allows biomedical researchers to perform rapid analysis of large NGS datasets in a fully automated manner, without software installation or a need for any local computing infrastructure. We report performance and cost results for some representative workloads. PMID:25342933

  16. Biological Web Service Repositories Review.

    PubMed

    Urdidiales-Nieto, David; Navas-Delgado, Ismael; Aldana-Montes, José F

    2017-05-01

    Web services play a key role in bioinformatics enabling the integration of database access and analysis of algorithms. However, Web service repositories do not usually publish information on the changes made to their registered Web services. Dynamism is directly related to the changes in the repositories (services registered or unregistered) and at service level (annotation changes). Thus, users, software clients or workflow based approaches lack enough relevant information to decide when they should review or re-execute a Web service or workflow to get updated or improved results. The dynamism of the repository could be a measure for workflow developers to re-check service availability and annotation changes in the services of interest to them. This paper presents a review on the most well-known Web service repositories in the life sciences including an analysis of their dynamism. Freshness is introduced in this paper, and has been used as the measure for the dynamism of these repositories. © 2017 The Authors. Published by Wiley-VCH Verlag GmbH & Co. KGaA.

  17. Process model-based atomic service discovery and composition of composite semantic web services using web ontology language for services (OWL-S)

    NASA Astrophysics Data System (ADS)

    Paulraj, D.; Swamynathan, S.; Madhaiyan, M.

    2012-11-01

    Web Service composition has become indispensable, as a single web service cannot satisfy complex functional requirements. Composition of services has received much interest as a means to support business-to-business (B2B) and enterprise application integration. An important component of service composition is the discovery of relevant services. In Semantic Web Services (SWS), service discovery is generally achieved by using the service profile of the Ontology Web Language for Services (OWL-S). The profile is a derived and concise description, not a functional part of the service. The information contained in the service profile is sufficient for atomic service discovery, but not for the discovery of composite semantic web services (CSWS). The purpose of this article is twofold: first, to show that the process model is a better choice than the service profile for service discovery; second, to facilitate the composition of inter-organisational CSWS by proposing a new composition method that uses process ontology. The proposed composition approach uses an algorithm that performs a fine-grained match at the level of the atomic process rather than at the level of the entire service in a composite semantic web service. Most work in this area has proposed solutions only for the composition of atomic services, whereas this article proposes a solution for the composition of composite semantic web services.

  18. The spatial extent of change in tropical forest ecosystem services in the Amazon delta

    NASA Astrophysics Data System (ADS)

    de Araujo Barbosa, C. C.; Atkinson, P.; Dearing, J.

    2014-12-01

    Deltas hold major economic potential due to their strategic location close to seas and inland waterways, thereby supporting intense economic activity. The increasing pace of human development in coastal deltas over the past five decades has strained environmental resources and produced extensive economic and sociocultural impacts. The Amazon delta is located in the Amazon Basin, North Brazil, the largest river basin on Earth and also one of the least understood. A considerable segment of the population living in the Amazon delta depends directly on the local extraction of natural resources for its livelihood. Sparsely inhabited areas may be exploited with few negative consequences for the environment. However, increasing pressure on ecosystem services is amplified by large fluxes of immigrants from other parts of the country, especially from the semi-arid zone in Northeast Brazil to the lowland forests of the Amazon delta. Here we present partial results from a larger research project, focusing on an overview of the current state, and the extent of change, of forest-related ecosystem services in the Amazon delta over the last three decades. We aggregated datasets from a variety of sources: satellite imagery such as the Advanced Very High Resolution Radiometer (AVHRR), the Global Inventory Modelling and Mapping Studies (GIMMS) and the Moderate Resolution Imaging Spectroradiometer (MODIS); station-level climate datasets from the Brazilian National Institute of Meteorology (INMET); and social and economic statistics from the Brazilian Institute of Geography and Statistics (IBGE) and the Brazilian Institute of Applied Economic Research (IPEA). Through analysis of socioeconomic and satellite earth observation data we were able to produce spatially explicit information on the current state of and transitions in forest cover and its impacts to forest

  19. A Method for Transforming Existing Web Service Descriptions into an Enhanced Semantic Web Service Framework

    NASA Astrophysics Data System (ADS)

    Du, Xiaofeng; Song, William; Munro, Malcolm

    Web Services, as a new distributed systems technology, have been widely adopted by industry in areas such as enterprise application integration (EAI), business process management (BPM) and virtual organisation (VO). However, the lack of semantics in the current Web Service standards has been a major barrier to service discovery and composition. In this chapter, we propose an enhanced context-based semantic service description framework (CbSSDF+) that tackles this problem and improves the flexibility of service discovery and the correctness of generated composite services. We also provide an agile transformation method to demonstrate how the various formats of Web Service descriptions on the Web can be managed and renovated, step by step, into CbSSDF+-based service descriptions without a large amount of engineering work. At the end of the chapter, we evaluate the applicability of the transformation method and the effectiveness of CbSSDF+ through a series of experiments.

  20. Space Physics Data Facility Web Services

    NASA Technical Reports Server (NTRS)

    Candey, Robert M.; Harris, Bernard T.; Chimiak, Reine A.

    2005-01-01

    The Space Physics Data Facility (SPDF) Web services provide a distributed programming interface to a portion of the SPDF software. (A general description of Web services is available at http://www.w3.org/ and in many current software-engineering texts and articles focused on distributed programming.) The SPDF Web services distributed programming interface enables additional collaboration and integration of the SPDF software system with other software systems, in furtherance of the SPDF mission to lead collaborative efforts in the collection and utilization of space physics data and mathematical models. This programming interface conforms to all applicable Web services specifications of the World Wide Web Consortium. The interface is specified by a Web Services Description Language (WSDL) file. The SPDF Web services software consists of the following components: 1) a server program implementing the Web services; and 2) a software developer's kit that consists of a WSDL file, a less formal description of the interface, a Java class library (which further eases development of Java-based client software), and Java source code for an example client program that illustrates the use of the interface.

  1. Enhancing UCSF Chimera through web services

    PubMed Central

    Huang, Conrad C.; Meng, Elaine C.; Morris, John H.; Pettersen, Eric F.; Ferrin, Thomas E.

    2014-01-01

    Integrating access to web services with desktop applications allows for an expanded set of application features, including performing computationally intensive tasks and convenient searches of databases. We describe how we have enhanced UCSF Chimera (http://www.rbvi.ucsf.edu/chimera/), a program for the interactive visualization and analysis of molecular structures and related data, through the addition of several web services (http://www.rbvi.ucsf.edu/chimera/docs/webservices.html). By streamlining access to web services, including the entire job submission, monitoring and retrieval process, Chimera makes it simpler for users to focus on their science projects rather than data manipulation. Chimera uses Opal, a toolkit for wrapping scientific applications as web services, to provide scalable and transparent access to several popular software packages. We illustrate Chimera's use of web services with an example workflow that interleaves use of these services with interactive manipulation of molecular sequences and structures, and we provide an example Python program to demonstrate how easily Opal-based web services can be accessed from within an application. Web server availability: http://webservices.rbvi.ucsf.edu/opal2/dashboard?command=serviceList. PMID:24861624
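
    The article ships its own example Python program; for flavor, here is a hedged sketch of what calling an Opal-wrapped service over SOAP can look like using the zeep library. The operation names follow the Opal interface (launchJob, queryStatus), but the argument structure, the jobID field and the WSDL URL are assumptions to be checked against the real WSDL.

    ```python
    # Hedged sketch of calling an Opal-wrapped tool over SOAP with zeep.
    # The WSDL URL is a placeholder and the argument/field names are
    # assumptions based on the Opal interface; check the real WSDL.
    from zeep import Client

    WSDL = "http://webservices.rbvi.ucsf.edu/opal2/services/SomeToolService?wsdl"  # placeholder

    client = Client(WSDL)
    job = client.service.launchJob(argList="-in input.fa -out result.txt")  # assumed signature
    status = client.service.queryStatus(job.jobID)                          # assumed field name
    print(status)
    ```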

  2. Cloud-based Web Services for Near-Real-Time Web access to NPP Satellite Imagery and other Data

    NASA Astrophysics Data System (ADS)

    Evans, J. D.; Valente, E. G.

    2010-12-01

    We are building a scalable, cloud computing-based infrastructure for Web access to near-real-time data products synthesized from the U.S. National Polar-Orbiting Environmental Satellite System (NPOESS) Preparatory Project (NPP) and other geospatial and meteorological data. Given recent and ongoing changes in the NPP and NPOESS programs (now Joint Polar Satellite System), the need for timely delivery of NPP data is urgent. We propose an alternative to a traditional, centralized ground segment, using distributed Direct Broadcast facilities linked to industry-standard Web services by a streamlined processing chain running in a scalable cloud computing environment. Our processing chain, currently implemented on Amazon.com's Elastic Compute Cloud (EC2), retrieves raw data from NASA's Moderate Resolution Imaging Spectroradiometer (MODIS) and synthesizes data products such as Sea-Surface Temperature, Vegetation Indices, etc. The cloud computing approach lets us grow and shrink computing resources to meet large and rapid fluctuations (twice daily) in both end-user demand and data availability from polar-orbiting sensors. Early prototypes have delivered various data products to end-users with latencies between 6 and 32 minutes. We have begun to replicate machine instances in the cloud, so as to reduce latency and maintain near-real-time data access regardless of increased data input rates or user demand -- all at quite moderate monthly costs. Our service-based approach (in which users invoke software processes on a Web-accessible server) facilitates access into datasets of arbitrary size and resolution, and allows users to request and receive tailored and composite (e.g., false-color multiband) products on demand. To facilitate broad impact and adoption of our technology, we have emphasized open, industry-standard software interfaces and open source software. Through our work, we envision the widespread establishment of similar, derived, or interoperable systems for

  3. Similarity Based Semantic Web Service Match

    NASA Astrophysics Data System (ADS)

    Peng, Hui; Niu, Wenjia; Huang, Ronghuai

    Semantic web service discovery aims at returning the most closely matching advertised services to the service requester by comparing the semantics of the requested service with those of an advertised service. The semantics of a web service are described in terms of inputs, outputs, preconditions and results in the Ontology Web Language for Services (OWL-S), formalized by the W3C. In this paper we propose an algorithm that calculates the semantic similarity of two services by taking a weighted average of their input and output similarities. A case study and applications show the effectiveness of our algorithm in service matching.
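
    The weighted-averaging scheme is compact enough to state directly. A minimal sketch follows; it assumes per-parameter concept similarities have already been computed by an ontology matcher, and the equal weights are illustrative rather than taken from the paper.

    ```python
    # Sketch of the weighted average of input and output similarities between
    # a request and an advertised service (concept similarities assumed given).
    def service_similarity(input_sims, output_sims, w_in=0.5, w_out=0.5):
        """input_sims/output_sims: per-parameter similarity scores in [0, 1]."""
        avg = lambda xs: sum(xs) / len(xs) if xs else 0.0
        return w_in * avg(input_sims) + w_out * avg(output_sims)

    # Two inputs match well, the single output matches moderately:
    print(service_similarity([0.9, 0.8], [0.6]))  # -> 0.725
    ```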

  4. Enhancing UCSF Chimera through web services.

    PubMed

    Huang, Conrad C; Meng, Elaine C; Morris, John H; Pettersen, Eric F; Ferrin, Thomas E

    2014-07-01

    Integrating access to web services with desktop applications allows for an expanded set of application features, including performing computationally intensive tasks and convenient searches of databases. We describe how we have enhanced UCSF Chimera (http://www.rbvi.ucsf.edu/chimera/), a program for the interactive visualization and analysis of molecular structures and related data, through the addition of several web services (http://www.rbvi.ucsf.edu/chimera/docs/webservices.html). By streamlining access to web services, including the entire job submission, monitoring and retrieval process, Chimera makes it simpler for users to focus on their science projects rather than data manipulation. Chimera uses Opal, a toolkit for wrapping scientific applications as web services, to provide scalable and transparent access to several popular software packages. We illustrate Chimera's use of web services with an example workflow that interleaves use of these services with interactive manipulation of molecular sequences and structures, and we provide an example Python program to demonstrate how easily Opal-based web services can be accessed from within an application. Web server availability: http://webservices.rbvi.ucsf.edu/opal2/dashboard?command=serviceList. © The Author(s) 2014. Published by Oxford University Press on behalf of Nucleic Acids Research.

  5. Business as Usual: Amazon.com and the Academic Library

    ERIC Educational Resources Information Center

    Van Ullen, Mary K.; Germain, Carol Anne

    2002-01-01

    In 1999, Steve Coffman proposed that libraries form a single interlibrary loan based entity patterned after Amazon.com. This study examined the suitability of Amazon.com's Web interface and record enhancements for academic libraries. Amazon.com could not deliver circulating monographs in the University at Albany Libraries' collection quickly…

  6. Domain-specific Web Service Discovery with Service Class Descriptions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rocco, D; Caverlee, J; Liu, L

    2005-02-14

    This paper presents DynaBot, a domain-specific web service discovery system. The core idea of DynaBot is to use domain-specific service class descriptions powered by an intelligent Deep Web crawler. In contrast to current registry-based service discovery systems, such as the several available UDDI registries, DynaBot promotes focused crawling of the Deep Web of services and discovers candidate services that are relevant to the domain of interest. It uses intelligent filtering algorithms to match services found by focused crawling against the domain-specific service class descriptions. We demonstrate the capability of DynaBot through the BLAST service discovery scenario and describe our initial experience with DynaBot.

  7. WebGLORE: a web service for Grid LOgistic REgression.

    PubMed

    Jiang, Wenchao; Li, Pinghao; Wang, Shuang; Wu, Yuan; Xue, Meng; Ohno-Machado, Lucila; Jiang, Xiaoqian

    2013-12-15

    WebGLORE is a free web service that enables privacy-preserving construction of a global logistic regression model from distributed datasets that are sensitive. It only transfers aggregated local statistics (from participants) through Hypertext Transfer Protocol Secure to a trusted server, where the global model is synthesized. WebGLORE seamlessly integrates AJAX, JAVA Applet/Servlet and PHP technologies to provide an easy-to-use web service for biomedical researchers to break down policy barriers during information exchange. http://dbmi-engine.ucsd.edu/webglore3/. WebGLORE can be used under the terms of GNU general public license as published by the Free Software Foundation.
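
    WebGLORE builds on the GLORE idea of fitting a logistic model from aggregated statistics only. The sketch below shows that general scheme (a global Newton-Raphson step assembled from per-site gradients and Hessians over synthetic data); it is an illustration of the approach, not WebGLORE's actual code.

    ```python
    # Minimal sketch of grid logistic regression: each site computes a local
    # gradient and Hessian; only these aggregates (never raw records) are
    # summed at the server for a global Newton-Raphson update.
    import numpy as np

    def local_stats(X, y, beta):
        p = 1.0 / (1.0 + np.exp(-X @ beta))   # site-local predictions
        grad = X.T @ (y - p)                  # local gradient
        H = -(X.T * (p * (1 - p))) @ X        # local Hessian
        return grad, H

    rng = np.random.default_rng(0)
    sites = [(rng.normal(size=(50, 3)), rng.integers(0, 2, 50)) for _ in range(3)]
    beta = np.zeros(3)
    for _ in range(10):                       # global Newton iterations
        stats = [local_stats(X, y, beta) for X, y in sites]
        grad = sum(g for g, _ in stats)
        H = sum(h for _, h in stats)
        beta -= np.linalg.solve(H, grad)      # Newton step on aggregates
    print(beta)
    ```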

  8. Distributed spatial information integration based on web service

    NASA Astrophysics Data System (ADS)

    Tong, Hengjian; Zhang, Yun; Shao, Zhenfeng

    2008-10-01

    Spatial information systems and spatial information in different geographic locations usually belong to different organizations. They are distributed, often heterogeneous and independent of each other. As a result, many isolated spatial information islands are formed, reducing the efficiency of information utilization. To address this issue, we present a method for effective spatial information integration based on web services. The method applies asynchronous and dynamic invocation of web services to implement distributed, parallel execution of web map services. All isolated information islands are connected by the web service dispatcher and its registration database to form a uniform collaborative system. Using the web service registration database, the dispatcher can dynamically invoke each web map service through an asynchronous delegation mechanism, so all web map services execute at the same time. When each web map service finishes, it returns an image to the dispatcher; after all services finish, the images are transparently overlaid in the dispatcher. Users can then browse and analyze the integrated spatial information. Experiments demonstrate that the utilization rate of spatial information resources is significantly raised through the proposed method of distributed spatial information integration.
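
    The parallel invocation and transparent overlay described here can be approximated in a few lines of Python. The sketch below fetches several web map service URLs concurrently and alpha-composites the returned images; the endpoints are placeholders and the WMS query strings are elided.

    ```python
    # Sketch: invoke several web map services in parallel and overlay the
    # returned transparent images, in the spirit of the dispatcher described.
    # Endpoint URLs are placeholders; requires the requests and Pillow packages.
    import io
    from concurrent.futures import ThreadPoolExecutor

    import requests
    from PIL import Image

    WMS_URLS = [
        "http://gis.example.org/wms-a?SERVICE=WMS&REQUEST=GetMap&TRANSPARENT=TRUE",
        "http://gis.example.org/wms-b?SERVICE=WMS&REQUEST=GetMap&TRANSPARENT=TRUE",
    ]

    def fetch_layer(url):
        resp = requests.get(url, timeout=30)
        return Image.open(io.BytesIO(resp.content)).convert("RGBA")

    with ThreadPoolExecutor() as pool:        # all map services run at once
        layers = list(pool.map(fetch_layer, WMS_URLS))

    composite = layers[0]
    for layer in layers[1:]:                  # overlay transparently
        composite = Image.alpha_composite(composite, layer)
    composite.save("integrated_map.png")
    ```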

  9. Distributed spatial information integration based on web service

    NASA Astrophysics Data System (ADS)

    Tong, Hengjian; Zhang, Yun; Shao, Zhenfeng

    2009-10-01

    Spatial information systems and spatial information in different geographic locations usually belong to different organizations. They are distributed, often heterogeneous and independent of each other. As a result, many isolated spatial information islands are formed, reducing the efficiency of information utilization. To address this issue, we present a method for effective spatial information integration based on web services. The method applies asynchronous and dynamic invocation of web services to implement distributed, parallel execution of web map services. All isolated information islands are connected by the web service dispatcher and its registration database to form a uniform collaborative system. Using the web service registration database, the dispatcher can dynamically invoke each web map service through an asynchronous delegation mechanism, so all web map services execute at the same time. When each web map service finishes, it returns an image to the dispatcher; after all services finish, the images are transparently overlaid in the dispatcher. Users can then browse and analyze the integrated spatial information. Experiments demonstrate that the utilization rate of spatial information resources is significantly raised through the proposed method of distributed spatial information integration.

  10. WebGIS based on semantic grid model and web services

    NASA Astrophysics Data System (ADS)

    Zhang, WangFei; Yue, CaiRong; Gao, JianGuo

    2009-10-01

    As the meeting point of network technology and GIS technology, WebGIS has developed rapidly in recent years. However, constrained by the Web and the characteristics of GIS, traditional WebGIS has some prominent problems: it cannot achieve interoperability across heterogeneous spatial databases, nor cross-platform data access. With the appearance of Web Service and Grid technology, the field of WebGIS has changed greatly. Web Services provide an interface that gives sites the ability to share data and intercommunicate. The goal of Grid technology is to turn the Internet into one large supercomputer that efficiently implements the overall sharing of computing resources, storage resources, data resources, information resources, knowledge resources and expert resources. For WebGIS, however, physically connecting data and information is far from enough. Because of different understandings of the world, different professional regulations, policies and habits, experts in different fields reach different conclusions when observing the same geographic phenomenon, and semantic heterogeneity arises; the same concept can differ widely between fields. A WebGIS that ignores this semantic heterogeneity will answer users' questions wrongly, or not at all. To solve this problem, this paper puts forward, and reports experience with, an effective method of combining the semantic grid and Web Services technology to develop WebGIS. We studied methods for constructing ontologies and for combining Grid technology with Web Services, and, with a detailed analysis of the computing characteristics and application model of distributed data, we designed a WebGIS query system driven by

  11. WebGLORE: a Web service for Grid LOgistic REgression

    PubMed Central

    Jiang, Wenchao; Li, Pinghao; Wang, Shuang; Wu, Yuan; Xue, Meng; Ohno-Machado, Lucila; Jiang, Xiaoqian

    2013-01-01

    WebGLORE is a free web service that enables privacy-preserving construction of a global logistic regression model from distributed datasets that are sensitive. It only transfers aggregated local statistics (from participants) through Hypertext Transfer Protocol Secure to a trusted server, where the global model is synthesized. WebGLORE seamlessly integrates AJAX, JAVA Applet/Servlet and PHP technologies to provide an easy-to-use web service for biomedical researchers to break down policy barriers during information exchange. Availability and implementation: http://dbmi-engine.ucsd.edu/webglore3/. WebGLORE can be used under the terms of GNU general public license as published by the Free Software Foundation. Contact: x1jiang@ucsd.edu PMID:24072732

  12. Enhancing the AliEn Web Service Authentication

    NASA Astrophysics Data System (ADS)

    Zhu, Jianlin; Saiz, Pablo; Carminati, Federico; Betev, Latchezar; Zhou, Daicui; Mendez Lorenzo, Patricia; Grigoras, Alina Gabriela; Grigoras, Costin; Furano, Fabrizio; Schreiner, Steffen; Vladimirovna Datskova, Olga; Sankar Banerjee, Subho; Zhang, Guoping

    2011-12-01

    Web Services are an XML-based technology that allows applications to communicate with each other across disparate systems. Web Services are becoming the de facto standard enabling interoperability between heterogeneous processes and systems. AliEn2 is a grid environment based on web services. The AliEn2 services can be divided into three categories: central services, deployed once per organization; site services, deployed at each of the participating centers; and Job Agents, running automatically on the worker nodes. A security model to protect these services is essential for the whole system. Current web server implementations, such as Apache, are not suitable for use within the grid environment as-is: Apache with mod_ssl and OpenSSL supports only X.509 certificates, whereas in the grid environment the common credential is the proxy certificate, used to provide restricted proxying and delegation. An authentication framework was developed for the AliEn2 web services to give the Apache Web Server the ability to accept both X.509 certificates and proxy certificates from the client side. The framework also allows the generation of access control policies to limit access to the AliEn2 web services.

  13. Trade Study: Storing NASA HDF5/netCDF-4 Data in the Amazon Cloud and Retrieving Data via Hyrax Server

    NASA Technical Reports Server (NTRS)

    Habermann, Ted; Gallagher, James; Jelenak, Aleksandar; Potter, Nathan; Lee, Joe; Yang, Kent

    2017-01-01

    This study explored three candidate architectures, with different types of objects and access paths, for serving NASA Earth Science HDF5 data via Hyrax running on Amazon Web Services (AWS). We studied the cost and performance of each architecture using several representative use cases. The objectives of the study were to: conduct a trade study to identify one or more high-performance integrated solutions for storing and retrieving NASA HDF5 and netCDF-4 data in a cloud (web object store) environment, the target environment being the Amazon Web Services (AWS) Simple Storage Service (S3); conduct the level of software development needed to properly evaluate the solutions in the trade study and to obtain the benchmarking metrics required as input to a government decision on potential follow-on prototyping; and develop a cloud cost model for the preferred data storage solution (or solutions) that accounts for different granulation and aggregation schemes as well as cost and performance trades. We will describe the three architectures and the use cases, along with performance results and recommendations for further work.
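
    One access path such a study weighs is byte-range reads of HDF5 objects directly from S3. A hedged boto3 sketch follows; the bucket, key and byte range are placeholders.

    ```python
    # Sketch: read a byte range of an HDF5 granule stored as an S3 object,
    # the kind of partial-object access a Hyrax-over-S3 architecture relies on.
    # Bucket and key names are placeholders.
    import boto3

    s3 = boto3.client("s3")
    resp = s3.get_object(
        Bucket="nasa-granules-example",
        Key="MERRA2/2017/01/granule.h5",
        Range="bytes=0-8191",        # fetch only the first 8 KiB (e.g. superblock area)
    )
    header = resp["Body"].read()
    print(len(header), "bytes retrieved")
    ```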

  14. Flexible Web services integration: a novel personalised social approach

    NASA Astrophysics Data System (ADS)

    Metrouh, Abdelmalek; Mokhati, Farid

    2018-05-01

    Dynamic composition, or integration, remains one of the key objectives of Web services technology. This paper proposes an innovative approach to dynamic Web service composition based on functional and non-functional attributes and individual preferences. In this approach, social networks of Web services are used to maintain interactions between Web services in order to select and compose Web services that are more tightly related to the user's preferences. We use the concept of a Web service community in a social network of Web services to considerably reduce the search space. These communities are created by the direct involvement of Web service providers.

  15. A Web service substitution method based on service cluster nets

    NASA Astrophysics Data System (ADS)

    Du, YuYue; Gai, JunJing; Zhou, MengChu

    2017-11-01

    Service substitution is an important research topic in the fields of Web services and service-oriented computing. This work presents a novel method to analyse and substitute Web services. A new concept, called a Service Cluster Net Unit, is proposed based on Web service clusters. A service cluster is converted into a Service Cluster Net Unit, which is then used to analyse whether the services in the cluster can satisfy given service requests. Substitution methods for an atomic service and for a composite service are proposed. The correctness of the proposed method is proved, and its effectiveness is shown by comparison with a state-of-the-art method in an experiment. The method can be readily applied to e-commerce service substitution to meet business automation needs.

  16. Web Services--A Buzz Word with Potentials

    Treesearch

    János T. Füstös

    2006-01-01

    The simplest definition of a web service is an application that provides a web API. The web API exposes the functionality of the solution to other applications. The web API relies on other Internet-based technologies to manage communications. The resulting web services are pervasive, vendor-independent, language-neutral, and very low-cost. The main purpose of a web API...

  17. The Organizational Role of Web Services

    ERIC Educational Resources Information Center

    Mitchell, Erik

    2011-01-01

    The workload of Web librarians is already split between Web-related and other library tasks. But today's technological environment has created new implications for existing services and new demands for staff time. It is time to reconsider how libraries can best allocate resources to provide effective Web services. Delivering high-quality services…

  18. BioServices: a common Python package to access biological Web Services programmatically.

    PubMed

    Cokelaer, Thomas; Pultz, Dennis; Harder, Lea M; Serra-Musach, Jordi; Saez-Rodriguez, Julio

    2013-12-15

    Web interfaces provide access to numerous biological databases, and many can be accessed programmatically thanks to Web Services. Building applications that combine several of them would benefit from a single framework. BioServices is a comprehensive Python framework that provides programmatic access to major bioinformatics Web Services (e.g. KEGG, UniProt, BioModels, ChEMBLdb). Wrapping additional Web Services, based either on Representational State Transfer or Simple Object Access Protocol/Web Services Description Language technologies, is eased by the use of object-oriented programming. BioServices releases and documentation are available at http://pypi.python.org/pypi/bioservices under a GPL-v3 license.
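
    A flavor of the framework's uniform wrappers, as a hedged sketch; the method names follow BioServices' documented style, but exact signatures and return formats have varied between releases and upstream API changes.

    ```python
    # Sketch of programmatic access with BioServices; method signatures follow
    # the package's documented style but may vary between releases.
    from bioservices import KEGG, UniProt

    kegg = KEGG()
    print(kegg.get("hsa:7535"))        # retrieve a KEGG gene entry (ZAP70)

    u = UniProt()
    res = u.search("zap70+AND+taxonomy:9606", frmt="tab", limit=3)  # assumed format flag
    print(res)
    ```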

  19. BioSWR--semantic web services registry for bioinformatics.

    PubMed

    Repchevsky, Dmitry; Gelpi, Josep Ll

    2014-01-01

    Despite the variety of available Web services registries specifically aimed at the Life Sciences, their scope is usually restricted to a limited set of well-defined types of services. While dedicated registries are generally tied to a particular format, general-purpose ones adhere more closely to standards and usually rely on the Web Service Definition Language (WSDL). Although WSDL is flexible enough to support common Web service types, its lack of semantic expressiveness led to various initiatives to describe Web services via ontology languages. WSDL 2.0 descriptions have since gained a standard representation based on the Web Ontology Language (OWL). BioSWR is a novel Web services registry that provides standard Resource Description Framework (RDF) based Web service descriptions along with the traditional WSDL-based ones. The registry provides a Web-based interface for Web service registration, querying and annotation, and is also accessible programmatically via a Representational State Transfer (REST) API or using the SPARQL Protocol and RDF Query Language. The BioSWR server is located at http://inb.bsc.es/BioSWR/ and its code is available at https://sourceforge.net/projects/bioswr/ under the LGPL license.
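
    Programmatic SPARQL access of the kind the registry offers might look like the following; the endpoint path and the WSDL-RDF vocabulary choice are assumptions, so consult the BioSWR documentation for the real ones.

    ```python
    # Hedged sketch: query a semantic Web services registry over SPARQL.
    # The endpoint URL and the WSDL-RDF vocabulary used here are assumptions.
    from SPARQLWrapper import SPARQLWrapper, JSON

    sparql = SPARQLWrapper("http://inb.bsc.es/BioSWR/sparql")  # assumed endpoint path
    sparql.setQuery("""
        SELECT ?service WHERE { ?service a <http://www.w3.org/ns/wsdl-rdf#Service> }
        LIMIT 10
    """)
    sparql.setReturnFormat(JSON)
    for row in sparql.query().convert()["results"]["bindings"]:
        print(row["service"]["value"])
    ```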

  20. Web service module for access to g-Lite

    NASA Astrophysics Data System (ADS)

    Goranova, R.; Goranov, G.

    2012-10-01

    G-Lite is a lightweight middleware for grid computing installed on all clusters of the European Grid Infrastructure (EGI). The middleware is only partially service-oriented and does not provide well-defined Web services for job management. The existing Web services in the environment cannot be used directly by grid users for building service compositions in the EGI. In this article we present a module of well-defined Web services for job management in the EGI. We describe the architecture of the module and the design of the developed Web services. The presented Web services are composable and can participate in service compositions (workflows). An example of using the module with service composition tools in g-Lite is shown.

  1. SSWAP: A Simple Semantic Web Architecture and Protocol for semantic web services.

    PubMed

    Gessler, Damian D G; Schiltz, Gary S; May, Greg D; Avraham, Shulamit; Town, Christopher D; Grant, David; Nelson, Rex T

    2009-09-23

    SSWAP (Simple Semantic Web Architecture and Protocol; pronounced "swap") is an architecture, protocol, and platform for using reasoning to semantically integrate heterogeneous disparate data and services on the web. SSWAP was developed as a hybrid semantic web services technology to overcome limitations found in both pure web service technologies and pure semantic web technologies. There are currently over 2400 resources published in SSWAP. Approximately two dozen are custom-written services for QTL (Quantitative Trait Loci) and mapping data for legumes and grasses (grains). The remaining are wrappers to Nucleic Acids Research Database and Web Server entries. As an architecture, SSWAP establishes how clients (users of data, services, and ontologies), providers (suppliers of data, services, and ontologies), and discovery servers (semantic search engines) interact to allow for the description, querying, discovery, invocation, and response of semantic web services. As a protocol, SSWAP provides the vocabulary and semantics to allow clients, providers, and discovery servers to engage in semantic web services. The protocol is based on the W3C-sanctioned first-order description logic language OWL DL. As an open source platform, a discovery server running at http://sswap.info (as in to "swap info") uses the description logic reasoner Pellet to integrate semantic resources. The platform hosts an interactive guide to the protocol at http://sswap.info/protocol.jsp, developer tools at http://sswap.info/developer.jsp, and a portal to third-party ontologies at http://sswapmeet.sswap.info (a "swap meet"). SSWAP addresses the three basic requirements of a semantic web services architecture (i.e., a common syntax, shared semantic, and semantic discovery) while addressing three technology limitations common in distributed service systems: i.e., i) the fatal mutability of traditional interfaces, ii) the rigidity and fragility of static subsumption hierarchies, and iii) the

  2. SSWAP: A Simple Semantic Web Architecture and Protocol for semantic web services

    PubMed Central

    Gessler, Damian DG; Schiltz, Gary S; May, Greg D; Avraham, Shulamit; Town, Christopher D; Grant, David; Nelson, Rex T

    2009-01-01

    Background SSWAP (Simple Semantic Web Architecture and Protocol; pronounced "swap") is an architecture, protocol, and platform for using reasoning to semantically integrate heterogeneous disparate data and services on the web. SSWAP was developed as a hybrid semantic web services technology to overcome limitations found in both pure web service technologies and pure semantic web technologies. Results There are currently over 2400 resources published in SSWAP. Approximately two dozen are custom-written services for QTL (Quantitative Trait Loci) and mapping data for legumes and grasses (grains). The remaining are wrappers to Nucleic Acids Research Database and Web Server entries. As an architecture, SSWAP establishes how clients (users of data, services, and ontologies), providers (suppliers of data, services, and ontologies), and discovery servers (semantic search engines) interact to allow for the description, querying, discovery, invocation, and response of semantic web services. As a protocol, SSWAP provides the vocabulary and semantics to allow clients, providers, and discovery servers to engage in semantic web services. The protocol is based on the W3C-sanctioned first-order description logic language OWL DL. As an open source platform, a discovery server running at http://sswap.info (as in to "swap info") uses the description logic reasoner Pellet to integrate semantic resources. The platform hosts an interactive guide to the protocol at http://sswap.info/protocol.jsp, developer tools at http://sswap.info/developer.jsp, and a portal to third-party ontologies at http://sswapmeet.sswap.info (a "swap meet"). Conclusion SSWAP addresses the three basic requirements of a semantic web services architecture (i.e., a common syntax, shared semantic, and semantic discovery) while addressing three technology limitations common in distributed service systems: i.e., i) the fatal mutability of traditional interfaces, ii) the rigidity and fragility of static subsumption hierarchies, and iii) the confounding of content, structure, and presentation. SSWAP is novel by establishing

  3. Web service discovery among large service pools utilising semantic similarity and clustering

    NASA Astrophysics Data System (ADS)

    Chen, Fuzan; Li, Minqiang; Wu, Harris; Xie, Lingli

    2017-03-01

    With the rapid development of electronic business, Web services have attracted much attention in recent years. Enterprises can combine individual Web services to provide new value-added services. An emerging challenge is the timely discovery of close matches to service requests among large service pools. In this study, we first define a new semantic similarity measure combining functional similarity and process similarity. We then present a service discovery mechanism that utilises the new semantic similarity measure for service matching. All the published Web services are pre-grouped into functional clusters prior to the matching process. For a user's service request, the discovery mechanism first identifies matching services clusters and then identifies the best matching Web services within these matching clusters. Experimental results show that the proposed semantic discovery mechanism performs better than a conventional lexical similarity-based mechanism.
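
    The two-stage mechanism (pre-cluster the pool, then match only within the nearest cluster) can be sketched with generic vector representations; the random vectors and cosine similarity below stand in for the paper's semantic functional and process similarities.

    ```python
    # Sketch of two-stage discovery: pre-cluster service feature vectors, then
    # rank only the services in the request's nearest cluster. The vectors
    # stand in for the paper's semantic (functional + process) representations.
    import numpy as np
    from sklearn.cluster import KMeans
    from sklearn.metrics.pairwise import cosine_similarity

    service_vecs = np.random.rand(200, 16)        # one vector per published service
    km = KMeans(n_clusters=8, n_init=10).fit(service_vecs)

    request = np.random.rand(1, 16)               # the service request's vector
    cluster = km.predict(request)[0]              # stage 1: pick the matching cluster
    members = np.where(km.labels_ == cluster)[0]
    scores = cosine_similarity(request, service_vecs[members])[0]
    best = members[np.argsort(scores)[::-1][:5]]  # stage 2: rank within the cluster
    print("top matches:", best)
    ```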

  4. Analysis Tool Web Services from the EMBL-EBI.

    PubMed

    McWilliam, Hamish; Li, Weizhong; Uludag, Mahmut; Squizzato, Silvano; Park, Young Mi; Buso, Nicola; Cowley, Andrew Peter; Lopez, Rodrigo

    2013-07-01

    Since 2004 the European Bioinformatics Institute (EMBL-EBI) has provided access to a wide range of databases and analysis tools via Web Services interfaces. This comprises services to search across the databases available from the EMBL-EBI and to explore the network of cross-references present in the data (e.g. EB-eye), services to retrieve entry data in various data formats and to access the data in specific fields (e.g. dbfetch), and analysis tool services, for example, sequence similarity search (e.g. FASTA and NCBI BLAST), multiple sequence alignment (e.g. Clustal Omega and MUSCLE), pairwise sequence alignment and protein functional analysis (e.g. InterProScan and Phobius). The REST/SOAP Web Services (http://www.ebi.ac.uk/Tools/webservices/) interfaces to these databases and tools allow their integration into other tools, applications, web sites, pipeline processes and analytical workflows. To get users started using the Web Services, sample clients are provided covering a range of programming languages and popular Web Service tool kits, and a brief guide to Web Services technologies, including a set of tutorials, is available for those wishing to learn more and develop their own clients. Users of the Web Services are informed of improvements and updates via a range of methods.
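
    As one concrete flavor, the dbfetch service can be called with a plain HTTP request; the URL pattern and parameters below follow EMBL-EBI's published interface but may evolve.

    ```python
    # Sketch: retrieve a UniProtKB entry in FASTA format via the EMBL-EBI
    # dbfetch REST interface (URL pattern as published; subject to change).
    import requests

    resp = requests.get(
        "https://www.ebi.ac.uk/Tools/dbfetch/dbfetch",
        params={"db": "uniprotkb", "id": "P12345", "format": "fasta", "style": "raw"},
        timeout=30,
    )
    print(resp.text)
    ```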

  5. Analysis Tool Web Services from the EMBL-EBI

    PubMed Central

    McWilliam, Hamish; Li, Weizhong; Uludag, Mahmut; Squizzato, Silvano; Park, Young Mi; Buso, Nicola; Cowley, Andrew Peter; Lopez, Rodrigo

    2013-01-01

    Since 2004 the European Bioinformatics Institute (EMBL-EBI) has provided access to a wide range of databases and analysis tools via Web Services interfaces. This comprises services to search across the databases available from the EMBL-EBI and to explore the network of cross-references present in the data (e.g. EB-eye), services to retrieve entry data in various data formats and to access the data in specific fields (e.g. dbfetch), and analysis tool services, for example, sequence similarity search (e.g. FASTA and NCBI BLAST), multiple sequence alignment (e.g. Clustal Omega and MUSCLE), pairwise sequence alignment and protein functional analysis (e.g. InterProScan and Phobius). The REST/SOAP Web Services (http://www.ebi.ac.uk/Tools/webservices/) interfaces to these databases and tools allow their integration into other tools, applications, web sites, pipeline processes and analytical workflows. To get users started using the Web Services, sample clients are provided covering a range of programming languages and popular Web Service tool kits, and a brief guide to Web Services technologies, including a set of tutorials, is available for those wishing to learn more and develop their own clients. Users of the Web Services are informed of improvements and updates via a range of methods. PMID:23671338

  6. Data as a Service: A Seismic Web Service Pipeline

    NASA Astrophysics Data System (ADS)

    Martinez, E.

    2016-12-01

    Publishing data as a service pipeline provides an improved, dynamic approach over static data archives. A service pipeline is a collection of micro web services that each perform a specific task and expose the results of that task. Structured request/response formats allow micro web services to be chained together into a service pipeline that provides more complex results. The U.S. Geological Survey adopted service pipelines to publish seismic hazard and design data supporting both specialized and general audiences. The seismic web service pipeline starts at the source data and exposes probabilistic and deterministic hazard curves, response spectra, risk-targeted ground motions, and seismic design provision metadata. The pipeline supports public and private organizations as well as individual engineers and researchers. Publishing data as a service pipeline provides a variety of benefits. Exposing the component services enables advanced users to inspect or use the data at each processing step. Exposing a composite service gives new users quick access to published data with a very low barrier to entry. Advanced users may re-use micro web services by chaining them in new ways or by injecting new micro services into the pipeline, allowing them to test hypotheses and compare their results to published ones. Exposing data at each step in the pipeline enables users to review and validate the data and process more quickly and accurately. Making the source code open source, per USGS policy, further enables this transparency. Each micro service may be scaled independently of any other, which ensures data remain available and timely in a cost-effective manner regardless of load. Additionally, if a new or more efficient approach to processing the data is discovered, it may replace the old approach at any time, keeping the pipeline running without affecting other micro services.
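
    Chaining micro web services amounts to composing HTTP calls, with one service's structured response feeding the next. The sketch below pipes a hazard-curve response into a design-value request; both endpoints and payload shapes are invented for illustration, not actual USGS URLs.

    ```python
    # Sketch of chaining two micro web services into a pipeline: the first
    # service's structured response feeds the second. Both endpoints and
    # their request/response shapes are hypothetical.
    import requests

    BASE = "https://hazards.example.gov/ws"  # placeholder service root

    curve = requests.get(f"{BASE}/hazard-curve",
                         params={"lat": 34.05, "lon": -118.25}, timeout=30).json()

    design = requests.post(f"{BASE}/design-value",
                           json={"curve": curve}, timeout=30).json()
    print(design)
    ```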

  7. Managing the Web-Enhanced Geographic Information Service.

    ERIC Educational Resources Information Center

    Stephens, Denise

    1997-01-01

    Examines key management issues involved in delivering geographic information services on the World Wide Web, using the Geographic Information Center (GIC) program at the University of Virginia Library as a reference. Highlights include integrating the Web into services; building collections for Web delivery; and evaluating spatial information…

  8. Opal web services for biomedical applications.

    PubMed

    Ren, Jingyuan; Williams, Nadya; Clementi, Luca; Krishnan, Sriram; Li, Wilfred W

    2010-07-01

    Biomedical applications have become increasingly complex, and they often require large-scale high-performance computing resources with a large number of processors and memory. The complexity of application deployment and the advances in cluster, grid and cloud computing require new modes of support for biomedical research. Scientific Software as a Service (sSaaS) enables scalable and transparent access to biomedical applications through simple standards-based Web interfaces. Towards this end, we built a production web server (http://ws.nbcr.net) in August 2007 to support the bioinformatics application called MEME. The server has grown since to include docking analysis with AutoDock and AutoDock Vina, electrostatic calculations using PDB2PQR and APBS, and off-target analysis using SMAP. All the applications on the servers are powered by Opal, a toolkit that allows users to wrap scientific applications easily as web services without any modification to the scientific codes, by writing simple XML configuration files. Opal allows both web forms-based access and programmatic access of all our applications. The Opal toolkit currently supports SOAP-based Web service access to a number of popular applications from the National Biomedical Computation Resource (NBCR) and affiliated collaborative and service projects. In addition, Opal's programmatic access capability allows our applications to be accessed through many workflow tools, including Vision, Kepler, Nimrod/K and VisTrails. From mid-August 2007 to the end of 2009, we have successfully executed 239,814 jobs. The number of successfully executed jobs more than doubled from 205 to 411 per day between 2008 and 2009. The Opal-enabled service model is useful for a wide range of applications. It provides for interoperation with other applications with Web Service interfaces, and allows application developers to focus on the scientific tool and workflow development. Web server availability: http://ws.nbcr.net.

  9. jORCA: easily integrating bioinformatics Web Services.

    PubMed

    Martín-Requena, Victoria; Ríos, Javier; García, Maximiliano; Ramírez, Sergio; Trelles, Oswaldo

    2010-02-15

    Web services technology is becoming the option of choice to deploy bioinformatics tools that are universally available. One of the major strengths of this approach is that it supports machine-to-machine interoperability over a network. However, a weakness is that various Web Services differ in their definition and invocation protocols, as well as in their communication and data formats, and this presents a barrier to service interoperability. jORCA is a desktop client aimed at facilitating seamless integration of Web Services. It does so by making a uniform representation of the different web resources, supporting scalable service discovery, and automatic composition of workflows. Usability is at the top of the jORCA agenda; thus it is a highly customizable and extensible application that accommodates a broad range of user skills, featuring double-click invocation of services in conjunction with advanced execution control, on-the-fly data standardization, extensibility of viewer plug-ins, drag-and-drop editing capabilities, plus file-based browsing and organization of favourite tools. The integration of bioinformatics Web Services is thus made easier, supporting a wider range of users.

  10. Using EMBL-EBI services via Web interface and programmatically via Web Services

    PubMed Central

    Lopez, Rodrigo; Cowley, Andrew; Li, Weizhong; McWilliam, Hamish

    2015-01-01

    The European Bioinformatics Institute (EMBL-EBI) provides access to a wide range of databases and analysis tools that are of key importance in bioinformatics. As well as providing Web interfaces to these resources, Web Services are available using SOAP and REST protocols that enable programmatic access to our resources and allow their integration into other applications and analytical workflows. This unit describes the various options available to a typical researcher or bioinformatician who wishes to use our resources via Web interface or programmatically via a range of programming languages. PMID:25501941

  11. Using Amazon Web Services (AWS) to enable real-time, remote sensing of biophysical and anthropogenic conditions in green infrastructure systems in Philadelphia, an ultra-urban application of the Internet of Things (IoT)

    NASA Astrophysics Data System (ADS)

    Montalto, F. A.; Yu, Z.; Soldner, K.; Israel, A.; Fritch, M.; Kim, Y.; White, S.

    2017-12-01

    Urban stormwater utilities are increasingly using decentralized "green" infrastructure (GI) systems to capture stormwater and achieve compliance with regulations. Because environmental conditions and design vary by GI facility, monitoring of GI systems under a range of conditions is essential. Conventional monitoring efforts can be costly because in-field data logging requires high data transmission rates. The Internet of Things (IoT) can be used to collect, store, and publish GI monitoring data more cost-effectively. Using 3G mobile networks, a cloud-based database was built on an Amazon Web Services (AWS) EC2 virtual machine to store and publish data collected with environmental sensors deployed in the field. This database can store multi-dimensional time series data, as well as photos and other observations logged by citizen scientists through a public-engagement mobile app, via a new Application Programming Interface (API). Also on the AWS EC2 virtual machine, a real-time QAQC flagging algorithm was developed to validate the sensor data streams.
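
    A real-time QAQC pass of the kind described typically combines simple per-point tests applied as data arrive. The sketch below implements a range test and a spike test; the thresholds are illustrative and not those of the Philadelphia deployment.

    ```python
    # Illustrative real-time QAQC flagging for an incoming sensor time series:
    # a range test and a spike (rate-of-change) test; thresholds are examples.
    def qaqc_flags(values, lo=0.0, hi=50.0, max_step=5.0):
        flags = []
        for i, v in enumerate(values):
            if not (lo <= v <= hi):
                flags.append("RANGE")
            elif i > 0 and abs(v - values[i - 1]) > max_step:
                flags.append("SPIKE")
            else:
                flags.append("OK")
        return flags

    print(qaqc_flags([12.1, 12.3, 30.0, 12.4, -9999.0]))
    # -> ['OK', 'OK', 'SPIKE', 'SPIKE', 'RANGE']
    ```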

  12. Persistence and availability of Web services in computational biology.

    PubMed

    Schultheiss, Sebastian J; Münch, Marc-Christian; Andreeva, Gergana D; Rätsch, Gunnar

    2011-01-01

    We have conducted a study on the long-term availability of bioinformatics Web services: an observation of 927 Web services published in the annual Nucleic Acids Research Web Server Issues between 2003 and 2009. We found that 72% of Web sites are still available at the published addresses, while only 9% of services are completely unavailable. Older addresses often redirect to new pages. We checked the functionality of all available services: for 33%, we could not test functionality because there was no example data or a related problem; 13% were truly no longer working as expected; we could positively confirm functionality only for 45% of all services. Additionally, we conducted a survey among 872 Web Server Issue corresponding authors, of whom 274 replied. 78% of all respondents indicated their services had been developed solely by students and researchers without a permanent position. Consequently, these services are in danger of falling into disrepair after the original developers move to another institution, and indeed, for 24% of services there is no plan for maintenance, according to the respondents. We introduce a Web service quality scoring system that correlates with the number of citations: services with a high score are cited 1.8 times more often than low-scoring services. We have identified key characteristics that are predictive of a service's survival, providing reviewers, editors, and Web service developers with the means to assess or improve Web services. A Web service conforming to these criteria receives more citations and provides more reliable service for its users. The most effective way of ensuring continued access to a service is a persistent Web address, offered either by the publishing journal, or created on the authors' own initiative, for example at http://bioweb.me. The community would benefit the most from a policy requiring any source code needed to reproduce results to be deposited in a public repository.

  13. Persistence and Availability of Web Services in Computational Biology

    PubMed Central

    Schultheiss, Sebastian J.; Münch, Marc-Christian; Andreeva, Gergana D.; Rätsch, Gunnar

    2011-01-01

    We have conducted a study on the long-term availability of bioinformatics Web services: an observation of 927 Web services published in the annual Nucleic Acids Research Web Server Issues between 2003 and 2009. We found that 72% of Web sites are still available at the published addresses, while only 9% of services are completely unavailable. Older addresses often redirect to new pages. We checked the functionality of all available services: for 33%, we could not test functionality because there was no example data or a related problem; 13% were truly no longer working as expected; we could positively confirm functionality only for 45% of all services. Additionally, we conducted a survey among 872 Web Server Issue corresponding authors, of whom 274 replied. 78% of all respondents indicated their services had been developed solely by students and researchers without a permanent position. Consequently, these services are in danger of falling into disrepair after the original developers move to another institution, and indeed, for 24% of services there is no plan for maintenance, according to the respondents. We introduce a Web service quality scoring system that correlates with the number of citations: services with a high score are cited 1.8 times more often than low-scoring services. We have identified key characteristics that are predictive of a service's survival, providing reviewers, editors, and Web service developers with the means to assess or improve Web services. A Web service conforming to these criteria receives more citations and provides more reliable service for its users. The most effective way of ensuring continued access to a service is a persistent Web address, offered either by the publishing journal, or created on the authors' own initiative, for example at http://bioweb.me. The community would benefit the most from a policy requiring any source code needed to reproduce results to be deposited in a public repository. PMID:21966383

  14. Model My Watershed and BiG CZ Data Portal: Interactive geospatial analysis and hydrological modeling web applications that leverage the Amazon cloud for scientists, resource managers and students

    NASA Astrophysics Data System (ADS)

    Aufdenkampe, A. K.; Mayorga, E.; Tarboton, D. G.; Sazib, N. S.; Horsburgh, J. S.; Cheetham, R.

    2016-12-01

    The Model My Watershed Web app (http://wikiwatershed.org/model/) was designed to enable citizens, conservation practitioners, municipal decision-makers, educators, and students to interactively select any area of interest anywhere in the continental USA to: (1) analyze real land use and soil data for that area; (2) model stormwater runoff and water-quality outcomes; and (3) compare how different conservation or development scenarios could modify runoff and water quality. The BiG CZ Data Portal is a web application for scientists, providing intuitive, high-performance map-based discovery, visualization, access and publication of diverse earth and environmental science data through a map-based interface that simultaneously performs geospatial analysis of selected GIS and satellite raster data for a chosen area of interest. The two web applications share a common codebase (https://github.com/WikiWatershed and https://github.com/big-cz), a high-performance geospatial analysis engine (http://geotrellis.io/ and https://github.com/geotrellis) and deployment on the Amazon Web Services (AWS) cloud cyberinfrastructure. Users can perform "on-the-fly" rapid watershed delineation over the national elevation model to select their watershed or catchment of interest. The two web applications also share the goal of enabling scientists, resource managers and students alike to share data, analyses and model results. We will present these functioning web applications and their potential to substantially lower the bar for studying and understanding our water resources. We will also present work in progress, including a prototype system for enabling citizen scientists to register open-source sensor stations (http://envirodiy.org/mayfly/) to stream data into these systems, so that the data can be re-shared using WaterOneFlow web services.

  15. BioSWR – Semantic Web Services Registry for Bioinformatics

    PubMed Central

    Repchevsky, Dmitry; Gelpi, Josep Ll.

    2014-01-01

    Despite the variety of available Web services registries specifically aimed at the Life Sciences, their scope is usually restricted to a limited set of well-defined types of services. While dedicated registries are generally tied to a particular format, general-purpose ones adhere more closely to standards and usually rely on the Web Service Definition Language (WSDL). Although WSDL is flexible enough to support common Web service types, its lack of semantic expressiveness led to various initiatives to describe Web services via ontology languages. WSDL 2.0 descriptions have since gained a standard representation based on the Web Ontology Language (OWL). BioSWR is a novel Web services registry that provides standard Resource Description Framework (RDF) based Web service descriptions along with the traditional WSDL-based ones. The registry provides a Web-based interface for Web service registration, querying and annotation, and is also accessible programmatically via a Representational State Transfer (REST) API or using the SPARQL Protocol and RDF Query Language. The BioSWR server is located at http://inb.bsc.es/BioSWR/ and its code is available at https://sourceforge.net/projects/bioswr/ under the LGPL license. PMID:25233118

  16. Pragmatic Computing - A Semiotic Perspective to Web Services

    NASA Astrophysics Data System (ADS)

    Liu, Kecheng

    The web seems to have evolved from a syntactic web through a semantic web to a pragmatic web. This evolution conforms to the semiotic view of information and technology. Pragmatics, concerned with the use of information in relation to context and intended purposes, is extremely important in web services and applications. Much research in pragmatics has been carried out, but at the same time, attempts and solutions have raised further questions. After reviewing current work on the pragmatic web, the paper presents a semiotic approach to web services, particularly to request decomposition and service aggregation.

  17. Compression-based aggregation model for medical web services.

    PubMed

    Al-Shammary, Dhiah; Khalil, Ibrahim

    2010-01-01

    Many organizations, such as hospitals, have adopted Cloud Web services for their network services to avoid investing heavily in computing infrastructure. SOAP (Simple Object Access Protocol), an XML-based protocol, is the basic communication protocol of Cloud Web services. Web services often suffer congestion and bottlenecks as a result of high network traffic caused by large XML overhead, and the massive load on Cloud Web services, in terms of the large volume of client requests, produces the same problem. In this paper, two XML-aware aggregation techniques based on compression concepts are proposed to aggregate medical Web messages and achieve greater message size reduction.
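
    The intuition behind compression-based aggregation is that similar SOAP messages share most of their structure, so a batch compresses far better than the sum of individually compressed messages. A generic demonstration follows; it is not the paper's XML-aware techniques.

    ```python
    # Generic sketch of the aggregation intuition: similar XML/SOAP messages
    # compress far better as one batch than individually (zlib used here;
    # the paper's own XML-aware techniques are more sophisticated).
    import zlib

    messages = [
        f"<soap:Envelope><soap:Body><getRecord><id>{i}</id></getRecord>"
        f"</soap:Body></soap:Envelope>"
        for i in range(100)
    ]

    individual = sum(len(zlib.compress(m.encode())) for m in messages)
    batched = len(zlib.compress("".join(messages).encode()))
    print(f"individual: {individual} bytes, aggregated: {batched} bytes")
    ```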

  18. Grid Enabled Geospatial Catalogue Web Service

    NASA Technical Reports Server (NTRS)

    Chen, Ai-Jun; Di, Li-Ping; Wei, Ya-Xing; Liu, Yang; Bui, Yu-Qi; Hu, Chau-Min; Mehrotra, Piyush

    2004-01-01

    Geospatial Catalogue Web Service is a vital service for sharing and interoperating volumes of distributed heterogeneous geospatial resources, such as data, services, applications, and their replicas over the web. Based on Grid technology and the Open Geospatial Consortium (OGC)'s Catalogue Service - Web Information Model, this paper proposes a new information model for the Geospatial Catalogue Web Service, named GCWS, which securely provides Grid-based publishing, managing and querying of geospatial data and services, and transparent access to replica data and related services under the Grid environment. This information model integrates the information model of the Grid Replica Location Service (RLS)/Monitoring & Discovery Service (MDS) with the information model of the OGC Catalogue Service (CSW), and draws on the geospatial data metadata standards from ISO 19115, FGDC and the NASA EOS Core System, and the service metadata standards from ISO 19119, to extend itself for expressing geospatial resources. Using GCWS, any valid geospatial user who belongs to an authorized Virtual Organization (VO) can securely publish and manage geospatial resources, especially query on-demand data in the virtual community and get it back through the data-related services, which provide functions such as subsetting, reformatting, reprojection, etc. This work facilitates geospatial resource sharing and interoperation under the Grid environment, making geospatial resources Grid-enabled and Grid technologies geospatial-enabled. It also lets researchers focus on science rather than on issues of computing capacity, data location, processing and management. GCWS is also a key component for workflow-based virtual geospatial data production.

  19. Designing Crop Simulation Web Service with Service Oriented Architecture Principle

    NASA Astrophysics Data System (ADS)

    Chinnachodteeranun, R.; Hung, N. D.; Honda, K.

    2015-12-01

    Crop simulation models are efficient tools for simulating crop growth processes and yield. Running crop models requires data from various sources as well as time-consuming data processing, such as data quality checking and data formatting, before those data can be input to the model. This limits the use of crop modeling to crop modelers. We aim to make running crop models convenient for a variety of users so that the utilization of crop models will expand, directly improving agricultural applications. As a first step, we developed a prototype that runs DSSAT on the Web, called Tomorrow's Rice (v. 1). It predicts rice yields based on a planting date, rice variety and soil characteristics using the DSSAT crop model. A user only needs to select a planting location on the Web GUI; the system then queries historical weather data from available sources and returns the expected yield. Currently, we are working on weather data connection via the Sensor Observation Service (SOS) interface defined by the Open Geospatial Consortium (OGC). Weather data can be automatically connected to a weather generator for generating weather scenarios for running the crop model. In order to expand these services further, we are designing a web service framework consisting of layers of web services to support composition and execution of crop simulations. This framework allows a third-party application to call and cascade each service as needed for data preparation and for running the DSSAT model using a dynamic web service mechanism. The framework has a module to manage data format conversion, which means users do not need to spend their time curating the data inputs. Dynamic linking of data sources and services is implemented using the Service Component Architecture (SCA). This agriculture web service platform demonstrates interoperability of weather data using the SOS interface, convenient connections between weather data sources and weather generator, and connecting

  20. Web Services as Public Services: Are We Supporting Our Busiest Service Point?

    ERIC Educational Resources Information Center

    Riley-Huff, Debra A.

    2009-01-01

    This article is an analysis of academic library organizational culture, patterns, and processes as they relate to Web services. Data gathered in a research survey is examined in an attempt to reveal current departmental and administrative attitudes, practices, and support for Web services in the library research environment. (Contains 10 tables.)

  1. Using ESO Reflex with Web Services

    NASA Astrophysics Data System (ADS)

    Järveläinen, P.; Savolainen, V.; Oittinen, T.; Maisala, S.; Ullgrén, M.; Hook, R.

    2008-08-01

    ESO Reflex is a prototype graphical workflow system, based on Taverna, and primarily intended to be a flexible way of running ESO data reduction recipes along with other legacy applications and user-written tools. ESO Reflex can also readily use the Taverna Web Services features that are based on the Apache Axis SOAP implementation. Taverna is a general-purpose Web Service client, and requires no programming to use such services. However, Taverna also has some restrictions: for example, it has no numerical types such as integers. In addition, its preferred binding style is document/literal wrapped, but most astronomical services publish the Axis default WSDL using the RPC/encoded style. Despite these minor limitations we have created a simple but very promising test VO workflow using the Sesame name resolver service at CDS Strasbourg, the Hubble SIAP server at the Multi-Mission Archive at Space Telescope (MAST) and the WESIX image cataloguing and catalogue cross-referencing service at the University of Pittsburgh. ESO Reflex can also pass files and URIs via the PLASTIC protocol to visualisation tools and has its own viewer for VOTables. We picked these three Web Services to try to set up a realistic and useful ESO Reflex workflow. They also demonstrate ESO Reflex's ability to use many kinds of Web Services, because each of them requires a different interface. We describe each of these services in turn and comment on how it was used.

  2. Using EMBL-EBI Services via Web Interface and Programmatically via Web Services.

    PubMed

    Lopez, Rodrigo; Cowley, Andrew; Li, Weizhong; McWilliam, Hamish

    2014-12-12

    The European Bioinformatics Institute (EMBL-EBI) provides access to a wide range of databases and analysis tools that are of key importance in bioinformatics. As well as providing Web interfaces to these resources, Web Services are available using SOAP and REST protocols that enable programmatic access to our resources and allow their integration into other applications and analytical workflows. This unit describes the various options available to a typical researcher or bioinformatician who wishes to use our resources via Web interface or programmatically via a range of programming languages. Copyright © 2014 John Wiley & Sons, Inc.
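
    A minimal Python sketch of the programmatic route, following the submit/poll/retrieve pattern of EMBL-EBI's REST services; the Clustal Omega paths, form fields and result type shown here are assumptions to be checked against the current service documentation.

        import time
        import requests

        BASE = "https://www.ebi.ac.uk/Tools/services/rest/clustalo"  # assumed

        # Submit a toy alignment job; field names per the Job Dispatcher
        # documentation (verify before use).
        job_id = requests.post(f"{BASE}/run", data={
            "email": "user@example.org",        # the service requires an email
            "sequence": ">a\nMKV\n>b\nMKI\n",   # minimal FASTA input
        }).text

        # Poll until the job leaves the RUNNING state, then fetch a result.
        while requests.get(f"{BASE}/status/{job_id}").text == "RUNNING":
            time.sleep(5)
        print(requests.get(f"{BASE}/result/{job_id}/aln-clustal_num").text)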

  3. Focused Crawling of the Deep Web Using Service Class Descriptions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rocco, D; Liu, L; Critchlow, T

    2004-06-21

    Dynamic Web data sources--sometimes known collectively as the Deep Web--increase the utility of the Web by providing intuitive access to data repositories anywhere that Web access is available. Deep Web services provide access to real-time information, like entertainment event listings, or present a Web interface to large databases or other data repositories. Recent studies suggest that the size and growth rate of the dynamic Web greatly exceed that of the static Web, yet dynamic content is often ignored by existing search engine indexers owing to the technical challenges that arise when attempting to search the Deep Web. To address these challenges, we present DynaBot, a service-centric crawler for discovering and clustering Deep Web sources offering dynamic content. DynaBot has three unique characteristics. First, DynaBot utilizes a service class model of the Web implemented through the construction of service class descriptions (SCDs). Second, DynaBot employs a modular, self-tuning system architecture for focused crawling of the Deep Web using service class descriptions. Third, DynaBot incorporates methods and algorithms for efficient probing of the Deep Web and for discovering and clustering Deep Web sources and services through SCD-based service matching analysis. Our experimental results demonstrate the effectiveness of the service class discovery, probing, and matching algorithms and suggest techniques for efficiently managing service discovery in the face of the immense scale of the Deep Web.

  4. Efficiently Selecting the Best Web Services

    NASA Astrophysics Data System (ADS)

    Goncalves, Marlene; Vidal, Maria-Esther; Regalado, Alfredo; Yacoubi Ayadi, Nadia

    Emerging technologies and linked data initiatives have motivated the publication of a large number of datasets, and provide the basis for publishing Web services and tools to manage the available data. This wealth of resources opens a world of possibilities to satisfy user requests. However, Web services may have similar functionality but exhibit different performance; it is therefore necessary to identify, among the Web services that satisfy a user request, the ones with the best quality. In this paper we propose a hybrid approach that combines reasoning tasks with ranking techniques, aimed at selecting the Web services that best implement a user request. Web service functionalities are described in terms of input and output attributes annotated with existing ontologies, non-functional properties are represented as Quality of Service (QoS) parameters, and user requests correspond to conjunctive queries whose sub-goals impose restrictions on the functionality and quality of the services to be selected. The ontology annotations are used in different reasoning tasks to infer implicit service properties and to augment the size of the service search space. Furthermore, QoS parameters are considered by a ranking metric to classify the services according to how well they meet a user's non-functional condition. We assume that all the QoS parameters of the non-functional condition are equally important, and apply the Top-k Skyline approach to select the k services that best meet this condition. Our proposal relies on a two-fold solution: a deductive-based engine that performs different reasoning tasks to discover the services that satisfy the requested functionality, and an efficient implementation of the Top-k Skyline approach to compute the top-k services that meet the majority of the QoS constraints. Our Top-k Skyline solution exploits the properties of the Skyline Frequency metric and identifies the top-k services by just analyzing a subset of the services that
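
    The dominance test at the heart of skyline selection is easy to sketch. The Python example below computes the skyline of a toy QoS table and ranks its members by how many services they dominate; the paper's Skyline Frequency metric is more involved, so this is only an illustration of the idea, with invented values.

        # QoS vectors normalized so that higher is better on every attribute,
        # e.g. (availability, throughput, 1 - latency). Values are invented.
        services = {
            "svc_a": (0.9, 0.7, 0.8),
            "svc_b": (0.8, 0.9, 0.6),
            "svc_c": (0.7, 0.6, 0.5),   # dominated by svc_a
            "svc_d": (0.9, 0.9, 0.4),
        }

        def dominates(p, q):
            """p dominates q: at least as good everywhere, better somewhere."""
            return (all(a >= b for a, b in zip(p, q))
                    and any(a > b for a, b in zip(p, q)))

        skyline = [s for s, p in services.items()
                   if not any(dominates(q, p)
                              for t, q in services.items() if t != s)]

        # Rank skyline members by how many services each dominates; keep top-k.
        k = 2
        ranked = sorted(skyline, reverse=True,
                        key=lambda s: sum(dominates(services[s], q)
                                          for q in services.values()))
        print(ranked[:k])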

  5. Web server for priority ordered multimedia services

    NASA Astrophysics Data System (ADS)

    Celenk, Mehmet; Godavari, Rakesh K.; Vetnes, Vermund

    2001-10-01

    In this work, our aim is to provide finer priority levels in the design of a general-purpose Web multimedia server with provisions for continuous media (CM) services. The types of services provided include reading/writing a web page, downloading/uploading an audio/video stream, navigating the Web through browsing, and interactive video teleconferencing. The selected priority encoding levels for such operations follow the order of admin read/write, hot-page CM and Web multicasting, CM read, Web read, CM write and Web write. Hot pages are the most requested CM streams (e.g., the newest movies, video clips, and HDTV channels) and Web pages (e.g., portal pages of the commercial Internet search engines). Maintaining a list of these hot Web pages and CM streams in a content-addressable buffer enables a server to multicast hot streams with lower latency and higher system throughput. Cold Web pages and CM streams are treated as regular Web and CM requests. Interactive CM operations such as pause (P), resume (R), fast-forward (FF), and rewind (RW) have to be executed without allocation of extra resources. The proposed multimedia server model is part of a distributed network with load-balancing schedulers. The SM is connected to an integrated disk scheduler (IDS), which supervises an allocated disk manager. The IDS follows the same priority handling as the SM, and implements a SCAN disk-scheduling method for improved disk access and higher throughput. Different disks are used for the Web and CM services in order to meet the QoS requirements of CM services. The IDS output is forwarded to an Integrated Transmission Scheduler (ITS). The ITS creates a priority-ordered buffering of the retrieved Web pages and CM data streams, which are fed into an autoregressive moving average (ARMA) based traffic-shaping circuit before being transmitted through the network.

  6. The impact of web services at the IRIS DMC

    NASA Astrophysics Data System (ADS)

    Weekly, R. T.; Trabant, C. M.; Ahern, T. K.; Stults, M.; Suleiman, Y. Y.; Van Fossen, M.; Weertman, B.

    2015-12-01

    The IRIS Data Management Center (DMC) has served the seismological community for nearly 25 years. In that time we have offered data and information from our archive using a variety of mechanisms, ranging from email-based access to desktop applications, web applications and web services. Of these, web services have quickly become the primary method for data extraction at the DMC. In 2011, the first full year of operation, web services accounted for over 40% of the data shipped from the DMC. In 2014, ~450 TB of data were delivered directly to users through web services, representing nearly 70% of all shipments from the DMC that year. In addition to handling requests directly from users, the DMC switched all data extraction methods to use web services in 2014. On average the DMC now handles between 10 and 20 million requests per day submitted to web service interfaces. The rapid adoption of web services is attributed to the many advantages they bring. For users, they provide on-demand data using an interface technology, HTTP, that is widely supported in nearly every computing environment and language. These characteristics, combined with human-readable documentation and existing tools, make integration of data access into existing workflows relatively easy. For the DMC, web services provide an abstraction layer to internal repositories, allowing concentrated optimization of the extraction workflow and easier evolution of those repositories. Lending further support to the DMC's push in this direction, the core web services for station metadata, time series data and event parameters were adopted as standards by the International Federation of Digital Seismograph Networks (FDSN). We expect to continue enhancing existing services and building new capabilities for this platform. For example, the DMC has created a federation system and tools allowing researchers to discover and collect seismic data from data centers running the FDSN-standardized services. A future capability
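
    For illustration, the Python snippet below retrieves a day of waveform data through the FDSN-standardized dataselect service at the DMC; the station and channel codes are arbitrary examples.

        import requests

        # fdsnws-dataselect query; network/station/channel codes are
        # illustrative examples only.
        params = {
            "net": "IU", "sta": "ANMO", "loc": "00", "cha": "BHZ",
            "starttime": "2015-01-01T00:00:00",
            "endtime": "2015-01-02T00:00:00",
        }
        r = requests.get("http://service.iris.edu/fdsnws/dataselect/1/query",
                         params=params, timeout=120)
        r.raise_for_status()
        with open("anmo.mseed", "wb") as f:
            f.write(r.content)   # miniSEED bytes, ready for ObsPy or similar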

  7. Building a Better Web Site: A Practical Guide to Interactivity for Libraries.

    ERIC Educational Resources Information Center

    Braun, Linda W.

    1998-01-01

    Describes selected commercial and academic Web sites providing interactive services (Amazon; Jones Library, Amherst, MA; Pine Crest Lower School, Ft. Lauderdale, FL; Barnes & Noble; Cal State's Information Literacy Tutorials; PBS's techknow site; K.I.D.S. Report), and argues that libraries that stop at links and policy statements miss…

  8. An Architecture for Autonomic Web Service Process Planning

    NASA Astrophysics Data System (ADS)

    Moore, Colm; Xue Wang, Ming; Pahl, Claus

    Web service composition is a technology that has received considerable attention in the last number of years. Languages and tools to aid in the process of creating composite Web services have received specific attention. Web service composition is the process of linking single Web services together in order to accomplish more complex tasks. One area of Web service composition that has not received as much attention is dynamic error handling and re-planning, enabling autonomic composition. Given a repository of service descriptions and a task to complete, it is possible for AI planners to automatically create a plan that will achieve this goal. If, however, a service in the plan is unavailable or erroneous, the plan will fail. Motivated by this problem, this paper suggests autonomous re-planning as a means to overcome dynamic problems. Our solution involves automatically recovering from faults and creating a context-dependent alternate plan. We present an architecture that serves as a basis for the central activities of autonomous composition, monitoring and fault handling.

  9. Graph-Based Semantic Web Service Composition for Healthcare Data Integration.

    PubMed

    Arch-Int, Ngamnij; Arch-Int, Somjit; Sonsilphong, Suphachoke; Wanchai, Paweena

    2017-01-01

    Within the numerous and heterogeneous web services offered through different sources, automatic web services composition is the most convenient method for building complex business processes that permit invocation of multiple existing atomic services. The current solutions in functional web services composition lack autonomous queries of semantic matches within the parameters of web services, which are necessary in the composition of large-scale related services. In this paper, we propose a graph-based Semantic Web Services composition system consisting of two subsystems: management time and run time. The management-time subsystem is responsible for dependency graph preparation in which a dependency graph of related services is generated automatically according to the proposed semantic matchmaking rules. The run-time subsystem is responsible for discovering the potential web services and nonredundant web services composition of a user's query using a graph-based searching algorithm. The proposed approach was applied to healthcare data integration in different health organizations and was evaluated according to two aspects: execution time measurement and correctness measurement.
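
    A toy Python sketch of the run-time idea: given a dependency graph of services (names and concepts invented here), breadth-first search finds a non-redundant chain that produces the requested output. The paper's matchmaking rules and search algorithm are richer than this illustration.

        from collections import deque

        # Each service maps a set of input concepts to output concepts.
        services = {
            "PatientLookup":  ({"patient_name"}, {"patient_id"}),
            "LabHistory":     ({"patient_id"}, {"lab_results"}),
            "RiskAssessment": ({"lab_results"}, {"risk_score"}),
        }

        def compose(provided, goal):
            """Breadth-first search for a service chain that produces `goal`."""
            queue = deque([(frozenset(provided), [])])
            seen = {frozenset(provided)}
            while queue:
                known, plan = queue.popleft()
                if goal in known:
                    return plan
                for name, (inputs, outputs) in services.items():
                    if inputs <= known and not outputs <= known:
                        nxt = frozenset(known | outputs)
                        if nxt not in seen:
                            seen.add(nxt)
                            queue.append((nxt, plan + [name]))
            return None

        print(compose({"patient_name"}, "risk_score"))
        # -> ['PatientLookup', 'LabHistory', 'RiskAssessment']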

  10. Building asynchronous geospatial processing workflows with web services

    NASA Astrophysics Data System (ADS)

    Zhao, Peisheng; Di, Liping; Yu, Genong

    2012-02-01

    Geoscience research and applications often involve a geospatial processing workflow. This workflow includes a sequence of operations that use a variety of tools to collect, translate, and analyze distributed heterogeneous geospatial data. Asynchronous mechanisms, by which clients initiate a request and then resume their processing without waiting for a response, are very useful for complicated workflows that take a long time to run. Geospatial contents and capabilities are increasingly becoming available online as interoperable Web services. This online availability significantly enhances the ability to use Web service chains to build distributed geospatial processing workflows. This paper focuses on how to orchestrate Web services for implementing asynchronous geospatial processing workflows. The theoretical bases for asynchronous Web services and workflows, including asynchrony patterns and message transmission, are examined to explore different asynchronous approaches to and architecture of workflow code for the support of asynchronous behavior. A sample geospatial processing workflow, issued by the Open Geospatial Consortium (OGC) Web Service, Phase 6 (OWS-6), is provided to illustrate the implementation of asynchronous geospatial processing workflows and the challenges in using Web Services Business Process Execution Language (WS-BPEL) to develop them.
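
    The submit-then-poll pattern that underlies most asynchronous web service clients can be sketched in a few lines of Python; the WPS-style endpoint and response field names below are hypothetical.

        import time
        import requests

        # Hypothetical async geoprocessing endpoint and response fields.
        resp = requests.post("https://example.org/wps/execute",
                             json={"process": "watershed_delineation",
                                   "mode": "async"}).json()
        status_url = resp["statusLocation"]      # assumed field name

        while True:
            status = requests.get(status_url).json()
            if status["state"] in ("SUCCEEDED", "FAILED"):
                break
            time.sleep(10)        # the client is free to do other work here

        if status["state"] == "SUCCEEDED":
            result = requests.get(status["outputUrl"]).content  # assumed field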

  11. Proposal for a Web Encoding Service (WES) for Spatial Data Transactions

    NASA Astrophysics Data System (ADS)

    Siew, C. B.; Peters, S.; Rahman, A. A.

    2015-10-01

    Web service utilization in Spatial Data Infrastructure (SDI) has been well established and standardized by the Open Geospatial Consortium (OGC). Similar web services for 3D SDI have also been established in recent years, with extended capabilities to handle 3D spatial data. The increasing popularity of using the City Geography Markup Language (CityGML) for 3D city modelling applications leads to the need to handle large spatial datasets for data delivery. This paper revisits the available web services in OGC Web Services (OWS), and proposes the background concepts and requirements for encoding spatial data via a Web Encoding Service (WES). Furthermore, the paper discusses the data flow of the encoder within the web service, e.g. possible integration with the Web Processing Service (WPS) or Web 3D Service (W3DS). The integration could be extended to other available web services for efficient handling of spatial data, especially 3D spatial data.

  12. The ViennaRNA web services.

    PubMed

    Gruber, Andreas R; Bernhart, Stephan H; Lorenz, Ronny

    2015-01-01

    The ViennaRNA package is a widely used collection of programs for thermodynamic RNA secondary structure prediction. Over the years, many additional tools have been developed building on the core programs of the package to also address issues related to noncoding RNA detection, RNA folding kinetics, or efficient sequence design considering RNA-RNA hybridizations. The ViennaRNA web services provide easy and user-friendly web access to these tools. This chapter describes how to use this online platform to perform tasks such as prediction of minimum free energy structures, prediction of RNA-RNA hybrids, or noncoding RNA detection. The ViennaRNA web services can be used free of charge and can be accessed via http://rna.tbi.univie.ac.at.

  13. Provenance-Based Approaches to Semantic Web Service Discovery and Usage

    ERIC Educational Resources Information Center

    Narock, Thomas William

    2012-01-01

    The World Wide Web Consortium defines a Web Service as "a software system designed to support interoperable machine-to-machine interaction over a network." Web Services have become increasingly important both within and across organizational boundaries. With the recent advent of the Semantic Web, web services have evolved into semantic…

  14. Web Services in the U.S. Geological Survey StreamStats Web Application

    USGS Publications Warehouse

    Guthrie, J.D.; Dartiguenave, C.; Ries, Kernell G.

    2009-01-01

    StreamStats is a U.S. Geological Survey Web-based GIS application developed as a tool for water-resources planning and management, engineering design, and other applications. StreamStats' primary functionality allows users to obtain drainage-basin boundaries, basin characteristics, and streamflow statistics for gaged and ungaged sites. Recently, Web services have been developed that provide remote users and applications with access to the comprehensive GIS tools available in StreamStats, including delineating drainage-basin boundaries, computing basin characteristics, estimating streamflow statistics for user-selected locations, and determining point features that coincide with a National Hydrography Dataset (NHD) reach address. For the state of Kentucky, a web service also has been developed that provides users the ability to estimate daily time series of drainage-basin average values of daily precipitation and temperature. The use of web services allows the user to take full advantage of the datasets and processes behind the StreamStats application without having to develop and maintain them. © 2009 IEEE.

  15. Graph-Based Semantic Web Service Composition for Healthcare Data Integration

    PubMed Central

    2017-01-01

    Within the numerous and heterogeneous web services offered through different sources, automatic web services composition is the most convenient method for building complex business processes that permit invocation of multiple existing atomic services. The current solutions in functional web services composition lack autonomous queries of semantic matches within the parameters of web services, which are necessary in the composition of large-scale related services. In this paper, we propose a graph-based Semantic Web Services composition system consisting of two subsystems: management time and run time. The management-time subsystem is responsible for dependency graph preparation in which a dependency graph of related services is generated automatically according to the proposed semantic matchmaking rules. The run-time subsystem is responsible for discovering the potential web services and nonredundant web services composition of a user's query using a graph-based searching algorithm. The proposed approach was applied to healthcare data integration in different health organizations and was evaluated according to two aspects: execution time measurement and correctness measurement. PMID:29065602

  16. Web Services Security - Implementation and Evaluation Issues

    NASA Astrophysics Data System (ADS)

    Pimenidis, Elias; Georgiadis, Christos K.; Bako, Peter; Zorkadis, Vassilis

    Web services development is a key theme in the utilization and commercial exploitation of the semantic web. Paramount to the development and offering of such services is the issue of security features and the way these are applied in instituting trust amongst participants and recipients of the service. Implementing such security features is a major challenge to developers as they need to balance these with performance and interoperability requirements. Being able to evaluate the level of security offered is a desirable feature for any prospective participant. The authors attempt to address the issues of security requirements and evaluation criteria, and discuss the challenges of security implementation through a simple web service application case.

  17. New Interfaces to Web Documents and Services

    NASA Technical Reports Server (NTRS)

    Carlisle, W. H.

    1996-01-01

    This paper reports on investigations into how to extend the capabilities of the Virtual Research Center (VRC) for NASA's Advanced Concepts Office. The work was performed as part of NASA's 1996 Summer Faculty Fellowship program, and involved research into and prototype development of software components that provide documents and services for the World Wide Web (WWW). The WWW has become a de facto standard for sharing resources over the internet, primarily because web browsers are freely available for the most common hardware platforms and their operating systems. As a consequence of the popularity of the internet, tools and techniques associated with web browsers are changing rapidly. New capabilities are offered by companies that support web browsers in order to achieve or remain a dominant participant in internet services. Because a goal of the VRC is to build an environment for NASA centers, universities, and industrial partners to share information associated with Advanced Concepts Office activities, the VRC tracks new techniques and services associated with the web in order to determine their usefulness for distributed and collaborative engineering research activities. Most recently, Java has emerged as a new tool for providing internet services. Because the major web browser providers have decided to include Java in their software, investigations into Java were conducted this summer.

  18. Pragmatic service development and customisation with the CEDA OGC Web Services framework

    NASA Astrophysics Data System (ADS)

    Pascoe, Stephen; Stephens, Ag; Lowe, Dominic

    2010-05-01

    The CEDA OGC Web Services framework (COWS) emphasises rapid service development by providing a lightweight layer of OGC web service logic on top of Pylons, a mature web application framework for the Python language. This approach gives developers a flexible web service development environment without compromising access to the full range of web application tools and patterns: Model-View-Controller paradigm, XML templating, Object-Relational-Mapper integration and authentication/authorization. We have found this approach useful for exploring evolving standards and implementing protocol extensions to meet the requirements of operational deployments. This paper outlines how COWS is being used to implement customised WMS, WCS, WFS and WPS services in a variety of web applications from experimental prototypes to load-balanced cluster deployments serving 10-100 simultaneous users. In particular we will cover 1) The use of Climate Science Modeling Language (CSML) in complex-feature aware WMS, WCS and WFS services, 2) Extending WMS to support applications with features specific to earth system science and 3) A cluster-enabled Web Processing Service (WPS) supporting asynchronous data processing. The COWS WPS underpins all backend services in the UK Climate Projections User Interface where users can extract, plot and further process outputs from a multi-dimensional probabilistic climate model dataset. The COWS WPS supports cluster job execution, result caching, execution time estimation and user management. The COWS WMS and WCS components drive the project-specific NCEO and QESDI portals developed by the British Atmospheric Data Centre. These portals use CSML as a backend description format and implement features such as multiple WMS layer dimensions and climatology axes that are beyond the scope of general purpose GIS tools and yet vital for atmospheric science applications.

  19. Ubiquitous Computing Services Discovery and Execution Using a Novel Intelligent Web Services Algorithm

    PubMed Central

    Choi, Okkyung; Han, SangYong

    2007-01-01

    Ubiquitous Computing makes it possible to determine in real time the location and situation of service requesters in a web service environment, as it enables access to computers at any time and in any place. Though research on various aspects of ubiquitous commerce is progressing at enterprises and research centers, both domestically and overseas, analysis of a customer's personal preferences based on the semantic web and rule-based services using semantics is not currently being conducted. This paper proposes a Ubiquitous Computing Services System that enables rule-based as well as semantics-based search, so that the electronic space and the physical space can be combined into one, making real-time search for web services and the construction of efficient web services possible.

  20. NASA Enterprise Managed Cloud Computing (EMCC): Delivering an Initial Operating Capability (IOC) for NASA use of Commercial Infrastructure-as-a-Service (IaaS)

    NASA Technical Reports Server (NTRS)

    O'Brien, Raymond

    2017-01-01

    In 2016, Ames supported the NASA CIO in delivering an initial operating capability for Agency use of commercial cloud computing. This presentation provides an overview of the project, the services approach followed, and the major components of the capability that was delivered. The presentation is being given at the request of Amazon Web Services to a contingent representing the Brazilian Federal Government and Defense Organization that is interested in the use of Amazon Web Services (AWS). NASA is currently a customer of AWS and delivered the Initial Operating Capability using AWS as its first commercial cloud provider. The IOC, however, was designed to also support other cloud providers in the future.

  1. Web servicing the biological office.

    PubMed

    Szugat, Martin; Güttler, Daniel; Fundel, Katrin; Sohler, Florian; Zimmer, Ralf

    2005-09-01

    Biologists routinely use Microsoft Office applications for standard analysis tasks. Despite ubiquitous internet resources, information needed for everyday work is often not directly and seamlessly available. Here we describe a very simple and easily extendable mechanism using Web Services to enrich standard MS Office applications with internet resources. We demonstrate its capabilities by providing a Web-based thesaurus for biological objects, which maps names to database identifiers and vice versa via an appropriate synonym list. The client application ProTag makes these features available in MS Office applications using Smart Tags and Add-Ins. http://services.bio.ifi.lmu.de/prothesaurus/

  2. Earth Science Mining Web Services

    NASA Astrophysics Data System (ADS)

    Pham, L. B.; Lynnes, C. S.; Hegde, M.; Graves, S.; Ramachandran, R.; Maskey, M.; Keiser, K.

    2008-12-01

    To allow scientists further capabilities in the area of data mining and web services, the Goddard Earth Sciences Data and Information Services Center (GES DISC) and researchers at the University of Alabama in Huntsville (UAH) have developed a system to mine data at the source without the need for network transfers. The system has been constructed by linking together several pre-existing technologies: the Simple Scalable Script-based Science Processor for Measurements (S4PM), a processing engine at the GES DISC; the Algorithm Development and Mining (ADaM) system, a data mining toolkit from UAH that can be configured in a variety of ways to create customized mining processes; ActiveBPEL, a workflow execution engine based on BPEL (Business Process Execution Language); XBaya, a graphical workflow composer; and the EOS Clearinghouse (ECHO). XBaya is used to construct an analysis workflow at UAH using ADaM components, which are also installed remotely at the GES DISC, wrapped as Web Services. The S4PM processing engine searches ECHO for data using space-time criteria, staging them to cache, allowing the ActiveBPEL engine to remotely orchestrate the processing workflow within S4PM. As mining is completed, the output is placed in an FTP holding area for the end user. The goals are to give users control over the data they want to process, while mining data at the data source using the server's resources rather than transferring the full volume over the internet. These diverse technologies have been infused into a functioning, distributed system with only minor changes to the underlying technologies. The key to this infusion is the loosely coupled, Web-Services-based architecture: all of the participating components are accessible (one way or another) through SOAP (Simple Object Access Protocol)-based Web Services.

  3. Earth Science Mining Web Services

    NASA Technical Reports Server (NTRS)

    Pham, Long; Lynnes, Christopher; Hegde, Mahabaleshwa; Graves, Sara; Ramachandran, Rahul; Maskey, Manil; Keiser, Ken

    2008-01-01

    To allow scientists further capabilities in the area of data mining and web services, the Goddard Earth Sciences Data and Information Services Center (GES DISC) and researchers at the University of Alabama in Huntsville (UAH) have developed a system to mine data at the source without the need for network transfers. The system has been constructed by linking together several pre-existing technologies: the Simple Scalable Script-based Science Processor for Measurements (S4PM), a processing engine at the GES DISC; the Algorithm Development and Mining (ADaM) system, a data mining toolkit from UAH that can be configured in a variety of ways to create customized mining processes; ActiveBPEL, a workflow execution engine based on BPEL (Business Process Execution Language); XBaya, a graphical workflow composer; and the EOS Clearinghouse (ECHO). XBaya is used to construct an analysis workflow at UAH using ADaM components, which are also installed remotely at the GES DISC, wrapped as Web Services. The S4PM processing engine searches ECHO for data using space-time criteria, staging them to cache, allowing the ActiveBPEL engine to remotely orchestrate the processing workflow within S4PM. As mining is completed, the output is placed in an FTP holding area for the end user. The goals are to give users control over the data they want to process, while mining data at the data source using the server's resources rather than transferring the full volume over the internet. These diverse technologies have been infused into a functioning, distributed system with only minor changes to the underlying technologies. The key to this infusion is the loosely coupled, Web-Services-based architecture: all of the participating components are accessible (one way or another) through SOAP (Simple Object Access Protocol)-based Web Services.

  4. WIWS: a protein structure bioinformatics Web service collection.

    PubMed

    Hekkelman, M L; Te Beek, T A H; Pettifer, S R; Thorne, D; Attwood, T K; Vriend, G

    2010-07-01

    The WHAT IF molecular-modelling and drug design program is widely distributed in the world of protein structure bioinformatics. Although originally designed as an interactive application, its highly modular design and inbuilt control language have recently enabled its deployment as a collection of programmatically accessible web services. We report here a collection of WHAT IF-based protein structure bioinformatics web services: these relate to structure quality, the use of symmetry in crystal structures, structure correction and optimization, adding hydrogens and optimizing hydrogen bonds and a series of geometric calculations. The freely accessible web services are based on the industry standard WS-I profile and the EMBRACE technical guidelines, and are available via both REST and SOAP paradigms. The web services run on a dedicated computational cluster; their function and availability is monitored daily.

  5. Mobile Cloud Computing with SOAP and REST Web Services

    NASA Astrophysics Data System (ADS)

    Ali, Mushtaq; Fadli Zolkipli, Mohamad; Mohamad Zain, Jasni; Anwar, Shahid

    2018-05-01

    Mobile computing in conjunction with mobile web services offers a strong approach by which the limitations of mobile devices may be tackled. Mobile Web Services are based on two technologies, SOAP and REST, which work with existing protocols to develop Web services. Both approaches have their own distinct features; keeping the resource constraints of mobile devices in mind, the better of the two is the one that minimizes computation and transmission overhead while offloading. Transferring load from a mobile device to remote servers for execution is called computational offloading. There are numerous approaches that make computational offloading a viable solution for alleviating the resource constraints of mobile devices, yet a dynamic method of computational offloading is always required for a smooth and simple migration of complex tasks. The intention of this work is to present a distinctive approach that does not engage the mobile device's resources for longer than necessary. The concept of web services is utilized in our work to delegate computationally intensive tasks for remote execution. We tested both the SOAP and REST Web services approaches for mobile computing. Two parameters were considered in our lab experiments: execution time and energy consumption. The results show that RESTful Web service execution is far better than executing the same application via the SOAP Web services approach, in terms of both execution time and energy consumption. In experiments with the developed prototype matrix multiplication app, REST execution time is about 200% better than the SOAP execution approach. In the case of energy consumption, REST execution is about 250% better than the SOAP execution approach.
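
    A rough Python sketch of the execution-time half of such an experiment: time a REST offload of a matrix multiplication to a hypothetical remote endpoint. Energy measurement requires device-side instrumentation and is not reproduced here.

        import time
        import requests

        matrix = [[1, 2], [3, 4]]

        # Hypothetical offloading endpoint; round-trip time stands in for
        # the paper's execution-time measurement.
        start = time.perf_counter()
        r = requests.post("https://example.org/offload/matmul",
                          json={"a": matrix, "b": matrix}, timeout=60)
        elapsed = time.perf_counter() - start
        print(f"REST offload took {elapsed:.3f}s, result: {r.json()}")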

  6. Technical Services and the World Wide Web.

    ERIC Educational Resources Information Center

    Scheschy, Virginia M.

    The World Wide Web and browsers such as Netscape and Mosaic have simplified access to electronic resources. Today, technical services librarians can share in the wealth of information available on the Web. One of the premier Web sites for acquisitions librarians is AcqWeb, a cousin of the AcqNet listserv. In addition to interesting news items,…

  7. A Generic Evaluation Model for Semantic Web Services

    NASA Astrophysics Data System (ADS)

    Shafiq, Omair

    Semantic Web Services research has gained momentum over the last few years and by now several realizations exist. They are being used in a number of industrial use-cases. Soon software developers will be expected to use this infrastructure to build their B2B applications requiring dynamic integration. However, there is still a lack of guidelines for the evaluation of tools developed to realize Semantic Web Services and applications built on top of them. In normal software engineering practice such guidelines can already be found for traditional component-based systems. Some efforts are also being made to build performance models for service-based systems. Drawing on these related efforts in component-oriented and service-based systems, we identified the need for a generic evaluation model for Semantic Web Services applicable to any realization. The generic evaluation model will help users and customers to orient their systems and solutions towards using Semantic Web Services. In this chapter, we present the requirements for the generic evaluation model for Semantic Web Services and discuss the initial steps that we took to sketch such a model. Finally, we discuss related activities for evaluating semantic technologies.

  8. SIDECACHE: Information access, management and dissemination framework for web services.

    PubMed

    Doderer, Mark S; Burkhardt, Cory; Robbins, Kay A

    2011-06-14

    Many bioinformatics algorithms and data sets are deployed using web services so that the results can be explored via the Internet and easily integrated into other tools and services. These services often include data from other sites that is accessed either dynamically or through file downloads. Developers of these services face several problems because of the dynamic nature of the information from the upstream services. Many publicly available repositories of bioinformatics data frequently update their information. When such an update occurs, the developers of the downstream service may also need to update. For file downloads, this process is typically performed manually, followed by a web service restart. Requests for information obtained by dynamic access to upstream sources are sometimes subject to rate restrictions. SideCache provides a framework for deploying web services that integrate information extracted from other databases and from web sources that are periodically updated. This situation occurs frequently in biotechnology, where new information is being continuously generated and the latest information is important. SideCache provides several types of services, including proxy access and rate control, local caching, and automatic web service updating. We have used the SideCache framework to automate the deployment and updating of a number of bioinformatics web services and tools that extract information from remote primary sources such as NCBI, NCIBI, and Ensembl. The SideCache framework has also been used to share research results through the use of a SideCache-derived web service.
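
    Two of the behaviors described, local caching and rate control, can be sketched as a thin Python wrapper around an upstream fetch; the TTL and rate limit are arbitrary choices for the sketch, and this is not the SideCache implementation itself.

        import time
        import requests

        _cache = {}           # url -> (fetched_at, body)
        _last_request = 0.0
        TTL = 24 * 3600       # re-fetch upstream data daily (arbitrary)
        MIN_INTERVAL = 1.0    # assumed upstream rate restriction

        def fetch(url):
            """Serve from the local cache when fresh; rate-limit otherwise."""
            global _last_request
            now = time.time()
            if url in _cache and now - _cache[url][0] < TTL:
                return _cache[url][1]
            wait = MIN_INTERVAL - (now - _last_request)
            if wait > 0:
                time.sleep(wait)              # honor the upstream rate limit
            body = requests.get(url, timeout=30).text
            _last_request = time.time()
            _cache[url] = (_last_request, body)
            return body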

  9. A Service Value Model for Continued Use of Online Services: Conceptual Development and Empirical Examination

    ERIC Educational Resources Information Center

    Hu, Tao

    2009-01-01

    Online services (OLS) provide billions of Internet users with a variety of opportunities to exchange goods, share information, and develop or maintain relationships. Popular examples of OLS web sites include eBay.com, Amazon.com, Dell.com, Craigslist.com, MSN.com, Yahoo.com, LinkedIn.com, Zillow.com, Facebook.com, Wikipedia.com, and Twitter.com.…

  10. EnviroAtlas National Layers Master Web Service

    EPA Pesticide Factsheets

    This EnviroAtlas web service supports research and online mapping activities related to EnviroAtlas (https://www.epa.gov/enviroatlas). Produced by the US EPA, it includes layers depicting EnviroAtlas national metrics mapped at the 12-digit HUC within the conterminous United States. EnviroAtlas allows the user to interact with a web-based, easy-to-use mapping application to view and analyze multiple ecosystem services for the contiguous United States. The dataset is available as downloadable data (https://edg.epa.gov/data/Public/ORD/EnviroAtlas) or as an EnviroAtlas map service. Additional descriptive information about each attribute in this dataset can be found in its associated EnviroAtlas Fact Sheet (https://www.epa.gov/enviroatlas/enviroatlas-fact-sheets).

  11. Amazon Forest maintenance as a source of environmental services.

    PubMed

    Fearnside, Philip M

    2008-03-01

    Amazonian forest produces environmental services such as maintenance of biodiversity, water cycling and carbon stocks. These services have a much greater value to human society than do the timber, beef and other products that are obtained by destroying the forest. Yet institutional mechanisms are still lacking to transform the value of the standing forest into the foundation of an economy based on maintaining rather than destroying this ecosystem. Forest management for commodities such as timber and non-timber forest products faces severe limitations and inherent contradictions unless income is supplemented based on environmental services. Amazon forest is threatened by deforestation, logging, forest fires and climate change. Measures to avoid deforestation include repression through command and control, creation of protected areas, and reformulation of infrastructure decisions and development policies. An economy primarily based on the value of environmental services is essential for long-term maintenance of the forest. Much progress has been made in the decades since I first proposed such a transition, but many issues also remain unresolved. These include theoretical issues regarding accounting procedures, improved quantification of the services and of the benefits of different policy options, and effective uses of the funds generated in ways that maintain both the forest and the human population.

  12. Automatic geospatial information Web service composition based on ontology interface matching

    NASA Astrophysics Data System (ADS)

    Xu, Xianbin; Wu, Qunyong; Wang, Qinmin

    2008-10-01

    With Web services technology, the functions of WebGIS can be presented as geospatial information services, helping to overcome information isolation in the geospatial information sharing field. Geospatial information Web service composition, which conglomerates outsourced services working in tandem to offer value-added services, thus plays a key role in fully exploiting geospatial information services. This paper proposes an automatic geospatial information web service composition algorithm that employs the ontology dictionary WordNet to analyze semantic distances among service interfaces. By matching input/output parameters and the semantic meaning of pairs of service interfaces, a geospatial information web service chain can be created from a number of candidate services. A practical application of the algorithm is also presented, and its results show the feasibility of the algorithm and its great promise for the emerging demand for geospatial information web service composition.
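
    The interface-matching step can be illustrated with WordNet path similarity via the nltk library (after downloading its WordNet corpus); the parameter names are invented, and the paper's actual semantic-distance measure may differ from this sketch.

        # Requires: pip install nltk; then nltk.download("wordnet") once.
        from nltk.corpus import wordnet as wn

        def best_similarity(word_a, word_b):
            """Maximum WordNet path similarity over all synset pairs (0..1)."""
            scores = [s1.path_similarity(s2) or 0.0
                      for s1 in wn.synsets(word_a)
                      for s2 in wn.synsets(word_b)]
            return max(scores, default=0.0)

        # Does an "elevation" output plausibly feed a "height" input?
        print(best_similarity("elevation", "height"))    # high -> chain services
        print(best_similarity("elevation", "rainfall"))  # low  -> reject link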

  13. Unifying Access to National Hydrologic Data Repositories via Web Services

    NASA Astrophysics Data System (ADS)

    Valentine, D. W.; Jennings, B.; Zaslavsky, I.; Maidment, D. R.

    2006-12-01

    The CUAHSI hydrologic information system (HIS) is designed to be a live, multiscale web portal system for accessing, querying, visualizing, and publishing distributed hydrologic observation data and models for any location or region in the United States. The HIS design follows the principles of open service-oriented architecture, i.e. system components are represented as web services with well-defined standard service APIs. WaterOneFlow web services are the main component of the design. The currently available services have been completely re-written compared with the previous version, and provide programmatic access to USGS NWIS (stream flow, groundwater and water quality repositories), DAYMET daily observations, NASA MODIS, and Unidata NAM streams, with several additional web service wrappers being added (EPA STORET, NCDC and others). Different repositories of hydrologic data use different vocabularies and support different types of query access. Resolving semantic and structural heterogeneities across different hydrologic observation archives and distilling a generic set of service signatures is one of the main scalability challenges in this project, and a requirement in our web service design. To accomplish the uniformity of the web services API, data repositories are modeled following the CUAHSI Observation Data Model. The web service responses are document-based, and use an XML schema to express the semantics in a standard format. Access to station metadata is provided via the web service methods GetSites, GetSiteInfo and GetVariableInfo. These methods form the foundation of the CUAHSI HIS discovery interface and may execute over locally stored metadata or request the information from remote repositories directly. Observation values are retrieved via a generic GetValues method which is executed against national data repositories. The service is implemented in ASP.Net, and other providers are implementing WaterOneFlow services in Java. Reference implementation of
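
    A Python sketch of consuming a WaterOneFlow service with the zeep SOAP client; the WSDL URL is hypothetical, and the argument order of GetValues follows the WaterOneFlow convention as commonly described but should be verified against the actual service WSDL.

        from zeep import Client

        # Hypothetical WaterOneFlow WSDL; real endpoints are discovered
        # through the CUAHSI HIS registry.
        client = Client("http://example.org/cuahsi_1_1.asmx?WSDL")

        # Argument order assumed: (location, variable, startDate, endDate,
        # authToken); the site and variable codes are placeholders.
        values = client.service.GetValues(
            "NWIS:01646500", "NWIS:00060",
            "2006-01-01", "2006-01-31", "")
        print(values)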

  14. Web-based health services and clinical decision support.

    PubMed

    Jegelevicius, Darius; Marozas, Vaidotas; Lukosevicius, Arunas; Patasius, Martynas

    2004-01-01

    The purpose of this study was the development of a Web-based e-health service for comprehensive assistance and clinical decision support. The service structure consists of a Web server, a PHP-based Web interface linked to a clinical SQL database, Java applets for interactive manipulation and visualization of signals, and a Matlab server linked with signal and data processing algorithms implemented as Matlab programs. The service ensures clinical decision support based on diagnostic signal and image analysis. Using the discussed methodology, a pilot service for pathology specialists for automatic calculation of the proliferation index has been developed. Physicians use a simple Web interface to upload the pictures under investigation to the server; subsequently a Java applet interface is used for outlining the region of interest and, after processing on the server, the requested proliferation index value is calculated. There is also an "expert corner", where experts can submit their index estimates and comments on particular images, which is especially important for system developers. These expert evaluations are used for optimization and verification of the automatic analysis algorithms. Decision support trials have been conducted for ECG and for ophthalmology ultrasonic investigations of intraocular tumor differentiation. Data mining algorithms have been applied and decision support trees constructed. These services are also being implemented in a Web-based system. The study has shown that the Web-based structure ensures more effective, flexible and accessible services compared with standalone programs and is very convenient for biomedical engineers and physicians, especially in the development phase.

  15. No Free Lunch - Trading Away Ecosystem Services from Agriculture in the Brazilian Amazon

    NASA Astrophysics Data System (ADS)

    Zaks, D.; Foley, J.

    2008-12-01

    In the age of globalization, many crops and animal products are transported across long distances for consumption elsewhere. The alteration of water, soil and climate systems by agricultural practices can be attributed to both exporting and importing countries. Quantities of water, carbon and nutrients (e.g. nitrogen and phosphorus) can be tracked throughout the production process and aggregated from field to table. The synthesis of these data can be used to inform markets to appropriately price the most ecologically efficient production. While agricultural land is undergoing changes around the world, the Brazilian Amazon has seen a dramatic conversion of forest and grassland due to the expanding agricultural frontier, and intense growth in the region has been predicted for the future. As a proof of concept, I plan to study the flow of ecosystem services from the Amazon rainforest basin to the world market. Cattle and soybeans are the two main agricultural products of the region and are produced for both internal consumption and export. This work quantifies agricultural production and its associated ecosystem services using socio-economic and commodity trade data, numerical ecosystem models and remote sensing products.

  16. Protecting Database Centric Web Services against SQL/XPath Injection Attacks

    NASA Astrophysics Data System (ADS)

    Laranjeiro, Nuno; Vieira, Marco; Madeira, Henrique

    Web services represent a powerful interface to back-end database systems and are increasingly being used in business-critical applications. However, field studies show that a large number of web services are deployed with security flaws (e.g., SQL Injection vulnerabilities). Although several techniques for the identification of security vulnerabilities have been proposed, developing non-vulnerable web services is still a difficult task. In fact, security-related concerns are hard to apply as they add complexity to already complex code. This paper proposes an approach to secure web services against SQL and XPath Injection attacks by transparently detecting and aborting service invocations that try to take advantage of potential vulnerabilities. Our mechanism was applied to secure several web services specified by the TPC-App benchmark, proving to be 100% effective in stopping attacks, non-intrusive and very easy to use.
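
    The detect-and-abort idea can be caricatured in a few lines of Python: learn skeletons of legitimate queries by stripping literals, then refuse any invocation whose normalized query deviates. The paper's mechanism is far more thorough; this sketch only conveys the principle.

        import re

        # Toy version of the detect-and-abort idea: learn the structure of
        # legitimate queries by replacing literals with placeholders, then
        # refuse any invocation whose normalized query deviates from the
        # learned set. The normalization here is deliberately simplistic.
        def skeleton(query):
            query = re.sub(r"'[^']*'", "?", query)   # string literals
            query = re.sub(r"\b\d+\b", "?", query)   # numeric literals
            return re.sub(r"\s+", " ", query).strip().lower()

        learned = {skeleton("SELECT * FROM meds WHERE patient_id = 42")}

        def check(query):
            if skeleton(query) not in learned:
                raise PermissionError("query structure deviates; aborting call")

        check("SELECT * FROM meds WHERE patient_id = 7")             # passes
        try:
            check("SELECT * FROM meds WHERE patient_id = 7 OR 1=1")  # injection
        except PermissionError as exc:
            print("blocked:", exc)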

  17. User Needs of Digital Service Web Portals: A Case Study

    ERIC Educational Resources Information Center

    Heo, Misook; Song, Jung-Sook; Seol, Moon-Won

    2013-01-01

    The authors examined the needs of digital information service web portal users. More specifically, the needs of Korean cultural portal users were examined as a case study. The conceptual framework of a web-based portal is that it is a complex, web-based service application with characteristics of information systems and service agents. In…

  18. OntoGene web services for biomedical text mining.

    PubMed

    Rinaldi, Fabio; Clematide, Simon; Marques, Hernani; Ellendorff, Tilia; Romacker, Martin; Rodriguez-Esteban, Raul

    2014-01-01

    Text mining services are rapidly becoming a crucial component of various knowledge management pipelines, for example in the process of database curation, or for exploration and enrichment of biomedical data within the pharmaceutical industry. Traditional architectures, based on monolithic applications, do not offer sufficient flexibility for a wide range of use case scenarios, and therefore open architectures, as provided by web services, are attracting increased interest. We present an approach towards providing advanced text mining capabilities through web services, using a recently proposed standard for textual data interchange (BioC). The web services leverage a state-of-the-art platform for text mining (OntoGene) which has been tested in several community-organized evaluation challenges, with top-ranked results in several of them.

  19. OntoGene web services for biomedical text mining

    PubMed Central

    2014-01-01

    Text mining services are rapidly becoming a crucial component of various knowledge management pipelines, for example in the process of database curation, or for exploration and enrichment of biomedical data within the pharmaceutical industry. Traditional architectures, based on monolithic applications, do not offer sufficient flexibility for a wide range of use case scenarios, and therefore open architectures, as provided by web services, are attracting increased interest. We present an approach towards providing advanced text mining capabilities through web services, using a recently proposed standard for textual data interchange (BioC). The web services leverage a state-of-the-art platform for text mining (OntoGene) which has been tested in several community-organized evaluation challenges, with top ranked results in several of them. PMID:25472638

  20. Automated geospatial Web Services composition based on geodata quality requirements

    NASA Astrophysics Data System (ADS)

    Cruz, Sérgio A. B.; Monteiro, Antonio M. V.; Santos, Rafael

    2012-10-01

    Service-Oriented Architecture and Web Services technologies improve the performance of activities involved in geospatial analysis with a distributed computing architecture. However, the design of the geospatial analysis process on this platform, by combining component Web Services, presents some open issues. The automated construction of these compositions represents an important research topic. Some approaches to solving this problem are based on AI planning methods coupled with semantic service descriptions. This work presents a new approach using AI planning methods to improve the robustness of the produced geospatial Web Services composition. For this purpose, we use semantic descriptions of geospatial data quality requirements in a rule-based form. These rules allow the semantic annotation of geospatial data and, coupled with the conditional planning method, this approach represents more precisely the situations of nonconformities with geodata quality that may occur during the execution of the Web Service composition. The service compositions produced by this method are more robust, thus improving process reliability when working with a composition of chained geospatial Web Services.

  1. Unipept web services for metaproteomics analysis.

    PubMed

    Mesuere, Bart; Willems, Toon; Van der Jeugt, Felix; Devreese, Bart; Vandamme, Peter; Dawyndt, Peter

    2016-06-01

    Unipept is an open source web application that is designed for metaproteomics analysis with a focus on interactive datavisualization. It is underpinned by a fast index built from UniProtKB and the NCBI taxonomy that enables quick retrieval of all UniProt entries in which a given tryptic peptide occurs. Unipept version 2.4 introduced web services that provide programmatic access to the metaproteomics analysis features. This enables integration of Unipept functionality in custom applications and data processing pipelines. The web services are freely available at http://api.unipept.ugent.be and are open sourced under the MIT license. Unipept@ugent.be Supplementary data are available at Bioinformatics online. © The Author 2016. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.

  2. Reliable Execution Based on CPN and Skyline Optimization for Web Service Composition

    PubMed Central

    Ha, Weitao; Zhang, Guojun

    2013-01-01

    With development of SOA, the complex problem can be solved by combining available individual services and ordering them to best suit user's requirements. Web services composition is widely used in business environment. With the features of inherent autonomy and heterogeneity for component web services, it is difficult to predict the behavior of the overall composite service. Therefore, transactional properties and nonfunctional quality of service (QoS) properties are crucial for selecting the web services to take part in the composition. Transactional properties ensure reliability of composite Web service, and QoS properties can identify the best candidate web services from a set of functionally equivalent services. In this paper we define a Colored Petri Net (CPN) model which involves transactional properties of web services in the composition process. To ensure reliable and correct execution, unfolding processes of the CPN are followed. The execution of transactional composition Web service (TCWS) is formalized by CPN properties. To identify the best services of QoS properties from candidate service sets formed in the TCSW-CPN, we use skyline computation to retrieve dominant Web service. It can overcome that the reduction of individual scores to an overall similarity leads to significant information loss. We evaluate our approach experimentally using both real and synthetically generated datasets. PMID:23935431

  3. Reliable execution based on CPN and skyline optimization for Web service composition.

    PubMed

    Chen, Liping; Ha, Weitao; Zhang, Guojun

    2013-01-01

    With development of SOA, the complex problem can be solved by combining available individual services and ordering them to best suit user's requirements. Web services composition is widely used in business environment. With the features of inherent autonomy and heterogeneity for component web services, it is difficult to predict the behavior of the overall composite service. Therefore, transactional properties and nonfunctional quality of service (QoS) properties are crucial for selecting the web services to take part in the composition. Transactional properties ensure reliability of composite Web service, and QoS properties can identify the best candidate web services from a set of functionally equivalent services. In this paper we define a Colored Petri Net (CPN) model which involves transactional properties of web services in the composition process. To ensure reliable and correct execution, unfolding processes of the CPN are followed. The execution of transactional composition Web service (TCWS) is formalized by CPN properties. To identify the best services of QoS properties from candidate service sets formed in the TCSW-CPN, we use skyline computation to retrieve dominant Web service. It can overcome that the reduction of individual scores to an overall similarity leads to significant information loss. We evaluate our approach experimentally using both real and synthetically generated datasets.

  4. A flexible geospatial sensor observation service for diverse sensor data based on Web service

    NASA Astrophysics Data System (ADS)

    Chen, Nengcheng; Di, Liping; Yu, Genong; Min, Min

    Achieving a flexible and efficient geospatial Sensor Observation Service (SOS) is difficult, given the diversity of sensor networks, the heterogeneity of sensor data storage, and the differing requirements of users. This paper describes development of a service-oriented multi-purpose SOS framework. The goal is to create a single method of access to the data by integrating the sensor observation service with other Open Geospatial Consortium (OGC) services — Catalogue Service for the Web (CSW), Transactional Web Feature Service (WFS-T) and Transactional Web Coverage Service (WCS-T). The framework includes an extensible sensor data adapter, an OGC-compliant geospatial SOS, a geospatial catalogue service, a WFS-T, and a WCS-T for the SOS, and a geospatial sensor client. The extensible sensor data adapter finds, stores, and manages sensor data from live sensors, sensor models, and simulation systems. Abstract factory design patterns are used during design and implementation. A sensor observation service compatible with the SWE is designed, following the OGC "core" and "transaction" specifications. It is implemented using Java servlet technology. It can be easily deployed in any Java servlet container and automatically exposed for discovery using Web Service Description Language (WSDL). Interaction sequences between a Sensor Web data consumer and an SOS, between a producer and an SOS, and between an SOS and a CSW are described in detail. The framework has been successfully demonstrated in application scenarios for EO-1 observations, weather observations, and water height gauge observations.

  5. Processing biological literature with customizable Web services supporting interoperable formats

    PubMed Central

    Rak, Rafal; Batista-Navarro, Riza Theresa; Carter, Jacob; Rowley, Andrew; Ananiadou, Sophia

    2014-01-01

    Web services have become a popular means of interconnecting solutions for processing a body of scientific literature. This has fuelled research on high-level data exchange formats suitable for a given domain and ensuring the interoperability of Web services. In this article, we focus on the biological domain and consider four interoperability formats, BioC, BioNLP, XMI and RDF, that represent domain-specific and generic representations and include well-established as well as emerging specifications. We use the formats in the context of customizable Web services created in our Web-based, text-mining workbench Argo that features an ever-growing library of elementary analytics and capabilities to build and deploy Web services straight from a convenient graphical user interface. We demonstrate a 2-fold customization of Web services: by building task-specific processing pipelines from a repository of available analytics, and by configuring services to accept and produce a combination of input and output data interchange formats. We provide qualitative evaluation of the formats as well as quantitative evaluation of automatic analytics. The latter was carried out as part of our participation in the fourth edition of the BioCreative challenge. Our analytics built into Web services for recognizing biochemical concepts in BioC collections achieved the highest combined scores out of 10 participating teams. Database URL: http://argo.nactem.ac.uk. PMID:25006225

  6. Semantic Web Services Challenge, Results from the First Year. Series: Semantic Web And Beyond, Volume 8.

    NASA Astrophysics Data System (ADS)

    Petrie, C.; Margaria, T.; Lausen, H.; Zaremba, M.

    Explores trade-offs among existing approaches. Reveals strengths and weaknesses of proposed approaches, as well as which aspects of the problem are not yet covered. Introduces software engineering approach to evaluating semantic web services. Service-Oriented Computing is one of the most promising software engineering trends because of the potential to reduce the programming effort for future distributed industrial systems. However, only a small part of this potential rests on the standardization of tools offered by the web services stack. The larger part of this potential rests upon the development of sufficient semantics to automate service orchestration. Currently there are many different approaches to semantic web service descriptions and many frameworks built around them. A common understanding, evaluation scheme, and test bed to compare and classify these frameworks in terms of their capabilities and shortcomings, is necessary to make progress in developing the full potential of Service-Oriented Computing. The Semantic Web Services Challenge is an open source initiative that provides a public evaluation and certification of multiple frameworks on common industrially-relevant problem sets. This edited volume reports on the first results in developing common understanding of the various technologies intended to facilitate the automation of mediation, choreography and discovery for Web Services using semantic annotations. Semantic Web Services Challenge: Results from the First Year is designed for a professional audience composed of practitioners and researchers in industry. Professionals can use this book to evaluate SWS technology for their potential practical use. The book is also suitable for advanced-level students in computer science.

  7. Cloud services for the Fermilab scientific stakeholders

    DOE PAGES

    Timm, S.; Garzoglio, G.; Mhashilkar, P.; ...

    2015-12-23

    As part of the Fermilab/KISTI cooperative research project, Fermilab has successfully run an experimental simulation workflow at scale on a federation of Amazon Web Services (AWS), FermiCloud, and local FermiGrid resources. We used the CernVM-FS (CVMFS) file system to deliver the application software. We established Squid caching servers in AWS as well, using the Shoal system to let each individual virtual machine find the closest squid server. We also developed an automatic virtual machine conversion system so that we could transition virtual machines made on FermiCloud to Amazon Web Services. We used this system to successfully run a cosmic raymore » simulation of the NOvA detector at Fermilab, making use of both AWS spot pricing and network bandwidth discounts to minimize the cost. On FermiCloud we also were able to run the workflow at the scale of 1000 virtual machines, using a private network routable inside of Fermilab. As a result, we present in detail the technological improvements that were used to make this work a reality.« less

  8. Cloud services for the Fermilab scientific stakeholders

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Timm, S.; Garzoglio, G.; Mhashilkar, P.

    As part of the Fermilab/KISTI cooperative research project, Fermilab has successfully run an experimental simulation workflow at scale on a federation of Amazon Web Services (AWS), FermiCloud, and local FermiGrid resources. We used the CernVM-FS (CVMFS) file system to deliver the application software. We established Squid caching servers in AWS as well, using the Shoal system to let each individual virtual machine find the closest squid server. We also developed an automatic virtual machine conversion system so that we could transition virtual machines made on FermiCloud to Amazon Web Services. We used this system to successfully run a cosmic raymore » simulation of the NOvA detector at Fermilab, making use of both AWS spot pricing and network bandwidth discounts to minimize the cost. On FermiCloud we also were able to run the workflow at the scale of 1000 virtual machines, using a private network routable inside of Fermilab. As a result, we present in detail the technological improvements that were used to make this work a reality.« less

  9. AdaFF: Adaptive Failure-Handling Framework for Composite Web Services

    NASA Astrophysics Data System (ADS)

    Kim, Yuna; Lee, Wan Yeon; Kim, Kyong Hoon; Kim, Jong

    In this paper, we propose a novel Web service composition framework which dynamically accommodates various failure recovery requirements. In the proposed framework called Adaptive Failure-handling Framework (AdaFF), failure-handling submodules are prepared during the design of a composite service, and some of them are systematically selected and automatically combined with the composite Web service at service instantiation in accordance with the requirement of individual users. In contrast, existing frameworks cannot adapt the failure-handling behaviors to user's requirements. AdaFF rapidly delivers a composite service supporting the requirement-matched failure handling without manual development, and contributes to a flexible composite Web service design in that service architects never care about failure handling or variable requirements of users. For proof of concept, we implement a prototype system of the AdaFF, which automatically generates a composite service instance with Web Services Business Process Execution Language (WS-BPEL) according to the users' requirement specified in XML format and executes the generated instance on the ActiveBPEL engine.

  10. SSWAP: A Simple Semantic Web Architecture and Protocol for Semantic Web Services

    USDA-ARS?s Scientific Manuscript database

    SSWAP (Simple Semantic Web Architecture and Protocol) is an architecture, protocol, and platform for using reasoning to semantically integrate heterogeneous disparate data and services on the web. SSWAP is the driving technology behind the Virtual Plant Information Network, an NSF-funded semantic w...

  11. A resource oriented webs service for environmental modeling

    NASA Astrophysics Data System (ADS)

    Ferencik, Ioan

    2013-04-01

    Environmental modeling is a largely adopted practice in the study of natural phenomena. Environmental models can be difficult to build and use and thus sharing them within the community is an important aspect. The most common approach to share a model is to expose it as a web service. In practice the interaction with this web service is cumbersome due to lack of standardized contract and the complexity of the model being exposed. In this work we investigate the use of a resource oriented approach in exposing environmental models as web services. We view a model as a layered resource build atop the object concept from Object Oriented Programming, augmented with persistence capabilities provided by an embedded object database to keep track of its state and implementing the four basic principles of resource oriented architectures: addressability, statelessness, representation and uniform interface. For implementation we use exclusively open source software: Django framework, dyBase object oriented database and Python programming language. We developed a generic framework of resources structured into a hierarchy of types and consequently extended this typology with recurses specific to the domain of environmental modeling. To test our web service we used cURL, a robust command-line based web client.

  12. A demanding web-based PACS supported by web services technology

    NASA Astrophysics Data System (ADS)

    Costa, Carlos M. A.; Silva, Augusto; Oliveira, José L.; Ribeiro, Vasco G.; Ribeiro, José

    2006-03-01

    During the last years, the ubiquity of web interfaces have pushed practically all PACS suppliers to develop client applications in which clinical practitioners can receive and analyze medical images, using conventional personal computers and Web browsers. However, due to security and performance issues, the utilization of these software packages has been restricted to Intranets. Paradigmatically, one of the most important advantages of digital image systems is to simplify the widespread sharing and remote access of medical data between healthcare institutions. This paper analyses the traditional PACS drawbacks that contribute to their reduced usage in the Internet and describes a PACS based on Web Services technology that supports a customized DICOM encoding syntax and a specific compression scheme providing all historical patient data in a unique Web interface.

  13. Processing biological literature with customizable Web services supporting interoperable formats.

    PubMed

    Rak, Rafal; Batista-Navarro, Riza Theresa; Carter, Jacob; Rowley, Andrew; Ananiadou, Sophia

    2014-01-01

    Web services have become a popular means of interconnecting solutions for processing a body of scientific literature. This has fuelled research on high-level data exchange formats suitable for a given domain and ensuring the interoperability of Web services. In this article, we focus on the biological domain and consider four interoperability formats, BioC, BioNLP, XMI and RDF, that represent domain-specific and generic representations and include well-established as well as emerging specifications. We use the formats in the context of customizable Web services created in our Web-based, text-mining workbench Argo that features an ever-growing library of elementary analytics and capabilities to build and deploy Web services straight from a convenient graphical user interface. We demonstrate a 2-fold customization of Web services: by building task-specific processing pipelines from a repository of available analytics, and by configuring services to accept and produce a combination of input and output data interchange formats. We provide qualitative evaluation of the formats as well as quantitative evaluation of automatic analytics. The latter was carried out as part of our participation in the fourth edition of the BioCreative challenge. Our analytics built into Web services for recognizing biochemical concepts in BioC collections achieved the highest combined scores out of 10 participating teams. Database URL: http://argo.nactem.ac.uk. © The Author(s) 2014. Published by Oxford University Press.

  14. Data Mining Web Services for Science Data Repositories

    NASA Astrophysics Data System (ADS)

    Graves, S.; Ramachandran, R.; Keiser, K.; Maskey, M.; Lynnes, C.; Pham, L.

    2006-12-01

    The maturation of web services standards and technologies sets the stage for a distributed "Service-Oriented Architecture" (SOA) for NASA's next generation science data processing. This architecture will allow members of the scientific community to create and combine persistent distributed data processing services and make them available to other users over the Internet. NASA has initiated a project to create a suite of specialized data mining web services designed specifically for science data. The project leverages the Algorithm Development and Mining (ADaM) toolkit as its basis. The ADaM toolkit is a robust, mature and freely available science data mining toolkit that is being used by several research organizations and educational institutions worldwide. These mining services will give the scientific community a powerful and versatile data mining capability that can be used to create higher order products such as thematic maps from current and future NASA satellite data records with methods that are not currently available. The package of mining and related services are being developed using Web Services standards so that community-based measurement processing systems can access and interoperate with them. These standards-based services allow users different options for utilizing them, from direct remote invocation by a client application to deployment of a Business Process Execution Language (BPEL) solutions package where a complex data mining workflow is exposed to others as a single service. The ability to deploy and operate these services at a data archive allows the data mining algorithms to be run where the data are stored, a more efficient scenario than moving large amounts of data over the network. This will be demonstrated in a scenario in which a user uses a remote Web-Service-enabled clustering algorithm to create cloud masks from satellite imagery at the Goddard Earth Sciences Data and Information Services Center (GES DISC).

  15. Carbon uptake by mature Amazon forests has mitigated Amazon nations' carbon emissions.

    PubMed

    Phillips, Oliver L; Brienen, Roel J W

    2017-12-01

    Several independent lines of evidence suggest that Amazon forests have provided a significant carbon sink service, and also that the Amazon carbon sink in intact, mature forests may now be threatened as a result of different processes. There has however been no work done to quantify non-land-use-change forest carbon fluxes on a national basis within Amazonia, or to place these national fluxes and their possible changes in the context of the major anthropogenic carbon fluxes in the region. Here we present a first attempt to interpret results from ground-based monitoring of mature forest carbon fluxes in a biogeographically, politically, and temporally differentiated way. Specifically, using results from a large long-term network of forest plots, we estimate the Amazon biomass carbon balance over the last three decades for the different regions and nine nations of Amazonia, and evaluate the magnitude and trajectory of these differentiated balances in relation to major national anthropogenic carbon emissions. The sink of carbon into mature forests has been remarkably geographically ubiquitous across Amazonia, being substantial and persistent in each of the five biogeographic regions within Amazonia. Between 1980 and 2010, it has more than mitigated the fossil fuel emissions of every single national economy, except that of Venezuela. For most nations (Bolivia, Colombia, Ecuador, French Guiana, Guyana, Peru, Suriname) the sink has probably additionally mitigated all anthropogenic carbon emissions due to Amazon deforestation and other land use change. While the sink has weakened in some regions since 2000, our analysis suggests that Amazon nations which are able to conserve large areas of natural and semi-natural landscape still contribute globally-significant carbon sequestration. Mature forests across all of Amazonia have contributed significantly to mitigating climate change for decades. Yet Amazon nations have not directly benefited from providing this global scale

  16. Web-based Altimeter Service

    NASA Astrophysics Data System (ADS)

    Callahan, P. S.; Wilson, B. D.; Xing, Z.; Raskin, R. G.

    2010-12-01

    We have developed a web-based system to allow updating and subsetting of TOPEX data. The Altimeter Service will be operated by PODAAC along with their other provision of oceanographic data. The Service could be easily expanded to other mission data. An Altimeter Service is crucial to the improvement and expanded use of altimeter data. A service is necessary for altimetry because the result of most interest - sea surface height anomaly (SSHA) - is composed of several components that are updated individually and irregularly by specialized experts. This makes it difficult for projects to provide the most up-to-date products. Some components are the subject of ongoing research, so the ability for investigators to make products for comparison or sharing is important. The service will allow investigators/producers to get their component models or processing into widespread use much more quickly. For coastal altimetry, the ability to subset the data to the area of interest and insert specialized models (e.g., tides) or data processing results is crucial. A key part of the Altimeter Service is having data producers provide updated or local models and data. In order for this to succeed, producers need to register their products with the Altimeter Service and to provide the product in a form consistent with the service update methods. We will describe the capabilities of the web service and the methods for providing new components. Currently the Service is providing TOPEX GDRs with Retracking (RGDRs) in netCDF format that has been coordinated with Jason data. Users can add new orbits, tide models, gridded geophysical fields such as mean sea surface, and along-track corrections as they become available and are installed by PODAAC. The updated fields are inserted into the netCDF files while the previous values are retained for comparison. The Service will also generate SSH and SSHA. In addition, the Service showcases a feature that plots any variable from files in netCDF. The

  17. Library Services through the World Wide Web.

    ERIC Educational Resources Information Center

    Xiao, Daniel; Mosley, Pixey Anne; Cornish, Alan

    1997-01-01

    Provides an overview of the services offered by Texas A&M University's Sterling C. Evans Library via the World Wide Web. Included are public relations, instruction, searching capabilities, enhanced communications, and exhibit options. Future applications of the Web in academic libraries are also addressed. (AEF)

  18. Operational Use of OGC Web Services at the Met Office

    NASA Astrophysics Data System (ADS)

    Wright, Bruce

    2010-05-01

    The Met Office has adopted the Service-Orientated Architecture paradigm to deliver services to a range of customers through Rich Internet Applications (RIAs). The approach uses standard Open Geospatial Consortium (OGC) web services to provide information to web-based applications through a range of generic data services. "Invent", the Met Office beta site, is used to showcase Met Office future plans for presenting web-based weather forecasts, product and information to the public. This currently hosts a freely accessible Weather Map Viewer, written in JavaScript, which accesses a Web Map Service (WMS), to deliver innovative web-based visualizations of weather and its potential impacts to the public. The intention is to engage the public in the development of new web-based services that more accurately meet their needs. As the service is intended for public use within the UK, it has been designed to support a user base of 5 million, the analysed level of UK web traffic reaching the Met Office's public weather information site. The required scalability has been realised through the use of multi-tier tile caching: - WMS requests are made for 256x256 tiles for fixed areas and zoom levels; - a Tile Cache, developed in house, efficiently serves tiles on demand, managing WMS request for the new tiles; - Edge Servers, externally hosted by Akamai, provide a highly scalable (UK-centric) service for pre-cached tiles, passing new requests to the Tile Cache; - the Invent Weather Map Viewer uses the Google Maps API to request tiles from Edge Servers. (We would expect to make use of the Web Map Tiling Service, when it becomes an OGC standard.) The Met Office delivers specialist commercial products to market sectors such as transport, utilities and defence, which exploit a Web Feature Service (WFS) for data relating forecasts and observations to specific geographic features, and a Web Coverage Service (WCS) for sub-selections of gridded data. These are locally rendered as maps or

  19. SWS: accessing SRS sites contents through Web Services.

    PubMed

    Romano, Paolo; Marra, Domenico

    2008-03-26

    Web Services and Workflow Management Systems can support creation and deployment of network systems, able to automate data analysis and retrieval processes in biomedical research. Web Services have been implemented at bioinformatics centres and workflow systems have been proposed for biological data analysis. New databanks are often developed by taking into account these technologies, but many existing databases do not allow a programmatic access. Only a fraction of available databanks can thus be queried through programmatic interfaces. SRS is a well know indexing and search engine for biomedical databanks offering public access to many databanks and analysis tools. Unfortunately, these data are not easily and efficiently accessible through Web Services. We have developed 'SRS by WS' (SWS), a tool that makes information available in SRS sites accessible through Web Services. Information on known sites is maintained in a database, srsdb. SWS consists in a suite of WS that can query both srsdb, for information on sites and databases, and SRS sites. SWS returns results in a text-only format and can be accessed through a WSDL compliant client. SWS enables interoperability between workflow systems and SRS implementations, by also managing access to alternative sites, in order to cope with network and maintenance problems, and selecting the most up-to-date among available systems. Development and implementation of Web Services, allowing to make a programmatic access to an exhaustive set of biomedical databases can significantly improve automation of in-silico analysis. SWS supports this activity by making biological databanks that are managed in public SRS sites available through a programmatic interface.

  20. Integrating geo web services for a user driven exploratory analysis

    NASA Astrophysics Data System (ADS)

    Moncrieff, Simon; Turdukulov, Ulanbek; Gulland, Elizabeth-Kate

    2016-04-01

    In data exploration, several online data sources may need to be dynamically aggregated or summarised over spatial region, time interval, or set of attributes. With respect to thematic data, web services are mainly used to present results leading to a supplier driven service model limiting the exploration of the data. In this paper we propose a user need driven service model based on geo web processing services. The aim of the framework is to provide a method for the scalable and interactive access to various geographic data sources on the web. The architecture combines a data query, processing technique and visualisation methodology to rapidly integrate and visually summarise properties of a dataset. We illustrate the environment on a health related use case that derives Age Standardised Rate - a dynamic index that needs integration of the existing interoperable web services of demographic data in conjunction with standalone non-spatial secure database servers used in health research. Although the example is specific to the health field, the architecture and the proposed approach are relevant and applicable to other fields that require integration and visualisation of geo datasets from various web services and thus, we believe is generic in its approach.

  1. Web services as applications' integration tool: QikProp case study.

    PubMed

    Laoui, Abdel; Polyakov, Valery R

    2011-07-15

    Web services are a new technology that enables to integrate applications running on different platforms by using primarily XML to enable communication among different computers over the Internet. Large number of applications was designed as stand alone systems before the concept of Web services was introduced and it is a challenge to integrate them into larger computational networks. A generally applicable method of wrapping stand alone applications into Web services was developed and is described. To test the technology, it was applied to the QikProp for DOS (Windows). Although performance of the application did not change when it was delivered as a Web service, this form of deployment had offered several advantages like simplified and centralized maintenance, smaller number of licenses, and practically no training for the end user. Because by using the described approach almost any legacy application can be wrapped as a Web service, this form of delivery may be recommended as a global alternative to traditional deployment solutions. Copyright © 2011 Wiley Periodicals, Inc.

  2. Web Services Provide Access to SCEC Scientific Research Application Software

    NASA Astrophysics Data System (ADS)

    Gupta, N.; Gupta, V.; Okaya, D.; Kamb, L.; Maechling, P.

    2003-12-01

    Web services offer scientific communities a new paradigm for sharing research codes and communicating results. While there are formal technical definitions of what constitutes a web service, for a user community such as the Southern California Earthquake Center (SCEC), we may conceptually consider a web service to be functionality provided on-demand by an application which is run on a remote computer located elsewhere on the Internet. The value of a web service is that it can (1) run a scientific code without the user needing to install and learn the intricacies of running the code; (2) provide the technical framework which allows a user's computer to talk to the remote computer which performs the service; (3) provide the computational resources to run the code; and (4) bundle several analysis steps and provide the end results in digital or (post-processed) graphical form. Within an NSF-sponsored ITR project coordinated by SCEC, we are constructing web services using architectural protocols and programming languages (e.g., Java). However, because the SCEC community has a rich pool of scientific research software (written in traditional languages such as C and FORTRAN), we also emphasize making existing scientific codes available by constructing web service frameworks which wrap around and directly run these codes. In doing so we attempt to broaden community usage of these codes. Web service wrapping of a scientific code can be done using a "web servlet" construction or by using a SOAP/WSDL-based framework. This latter approach is widely adopted in IT circles although it is subject to rapid evolution. Our wrapping framework attempts to "honor" the original codes with as little modification as is possible. For versatility we identify three methods of user access: (A) a web-based GUI (written in HTML and/or Java applets); (B) a Linux/OSX/UNIX command line "initiator" utility (shell-scriptable); and (C) direct access from within any Java application (and with the

  3. A verification strategy for web services composition using enhanced stacked automata model.

    PubMed

    Nagamouttou, Danapaquiame; Egambaram, Ilavarasan; Krishnan, Muthumanickam; Narasingam, Poonkuzhali

    2015-01-01

    Currently, Service-Oriented Architecture (SOA) is becoming the most popular software architecture of contemporary enterprise applications, and one crucial technique of its implementation is web services. Individual service offered by some service providers may symbolize limited business functionality; however, by composing individual services from different service providers, a composite service describing the intact business process of an enterprise can be made. Many new standards have been defined to decipher web service composition problem namely Business Process Execution Language (BPEL). BPEL provides an initial work for forming an Extended Markup Language (XML) specification language for defining and implementing business practice workflows for web services. The problems with most realistic approaches to service composition are the verification of composed web services. It has to depend on formal verification method to ensure the correctness of composed services. A few research works has been carried out in the literature survey for verification of web services for deterministic system. Moreover the existing models did not address the verification properties like dead transition, deadlock, reachability and safetyness. In this paper, a new model to verify the composed web services using Enhanced Stacked Automata Model (ESAM) has been proposed. The correctness properties of the non-deterministic system have been evaluated based on the properties like dead transition, deadlock, safetyness, liveness and reachability. Initially web services are composed using Business Process Execution Language for Web Service (BPEL4WS) and it is converted into ESAM (combination of Muller Automata (MA) and Push Down Automata (PDA)) and it is transformed into Promela language, an input language for Simple ProMeLa Interpreter (SPIN) tool. The model is verified using SPIN tool and the results revealed better recital in terms of finding dead transition and deadlock in contrast to the

  4. Introducing the PRIDE Archive RESTful web services.

    PubMed

    Reisinger, Florian; del-Toro, Noemi; Ternent, Tobias; Hermjakob, Henning; Vizcaíno, Juan Antonio

    2015-07-01

    The PRIDE (PRoteomics IDEntifications) database is one of the world-leading public repositories of mass spectrometry (MS)-based proteomics data and it is a founding member of the ProteomeXchange Consortium of proteomics resources. In the original PRIDE database system, users could access data programmatically by accessing the web services provided by the PRIDE BioMart interface. New REST (REpresentational State Transfer) web services have been developed to serve the most popular functionality provided by BioMart (now discontinued due to data scalability issues) and address the data access requirements of the newly developed PRIDE Archive. Using the API (Application Programming Interface) it is now possible to programmatically query for and retrieve peptide and protein identifications, project and assay metadata and the originally submitted files. Searching and filtering is also possible by metadata information, such as sample details (e.g. species and tissues), instrumentation (mass spectrometer), keywords and other provided annotations. The PRIDE Archive web services were first made available in April 2014. The API has already been adopted by a few applications and standalone tools such as PeptideShaker, PRIDE Inspector, the Unipept web application and the Python-based BioServices package. This application is free and open to all users with no login requirement and can be accessed at http://www.ebi.ac.uk/pride/ws/archive/. © The Author(s) 2015. Published by Oxford University Press on behalf of Nucleic Acids Research.

  5. Online Information Services. Caught in the Web?

    ERIC Educational Resources Information Center

    Green, Tim

    1995-01-01

    Provides brief reviews of the sites for several online services of the World Wide Web; the Web as a marketing tool and other aspects of interest to information professionals are highlighted. A sidebar presents information on accessing Internet locations, graphics, online forms, Telnet, saving, printing, mailing, and searching. (AEF)

  6. Web 2.0 Strategy in Libraries and Information Services

    ERIC Educational Resources Information Center

    Byrne, Alex

    2008-01-01

    Web 2.0 challenges libraries to change from their predominantly centralised service models with integrated library management systems at the hub. Implementation of Web 2.0 technologies and the accompanying attitudinal shifts will demand reconceptualisation of the nature of library and information service around a dynamic, ever changing, networked,…

  7. BPELPower—A BPEL execution engine for geospatial web services

    NASA Astrophysics Data System (ADS)

    Yu, Genong (Eugene); Zhao, Peisheng; Di, Liping; Chen, Aijun; Deng, Meixia; Bai, Yuqi

    2012-10-01

    The Business Process Execution Language (BPEL) has become a popular choice for orchestrating and executing workflows in the Web environment. As one special kind of scientific workflow, geospatial Web processing workflows are data-intensive, deal with complex structures in data and geographic features, and execute automatically with limited human intervention. To enable the proper execution and coordination of geospatial workflows, a specially enhanced BPEL execution engine is required. BPELPower was designed, developed, and implemented as a generic BPEL execution engine with enhancements for executing geospatial workflows. The enhancements are especially in its capabilities in handling Geography Markup Language (GML) and standard geospatial Web services, such as the Web Processing Service (WPS) and the Web Feature Service (WFS). BPELPower has been used in several demonstrations over the decade. Two scenarios were discussed in detail to demonstrate the capabilities of BPELPower. That study showed a standard-compliant, Web-based approach for properly supporting geospatial processing, with the only enhancement at the implementation level. Pattern-based evaluation and performance improvement of the engine are discussed: BPELPower directly supports 22 workflow control patterns and 17 workflow data patterns. In the future, the engine will be enhanced with high performance parallel processing and broad Web paradigms.

  8. The value of Web-based library services at Cedars-Sinai Health System.

    PubMed

    Halub, L P

    1999-07-01

    Cedars-Sinai Medical Library/Information Center has maintained Web-based services since 1995 on the Cedars-Sinai Health System network. In that time, the librarians have found the provision of Web-based services to be a very worthwhile endeavor. Library users value the services that they access from their desktops because the services save time. They also appreciate being able to access services at their convenience, without restriction by the library's hours of operation. The library values its Web site because it brings increased visibility within the health system, and it enables library staff to expand services when budget restrictions have forced reduced hours of operation. In creating and maintaining the information center Web site, the librarians have learned the following lessons: consider the design carefully; offer what services you can, but weigh the advantages of providing the services against the time required to maintain them; make the content as accessible as possible; promote your Web site; and make friends in other departments, especially information services.

  9. Web-based services for drug design and discovery.

    PubMed

    Frey, Jeremy G; Bird, Colin L

    2011-09-01

    Reviews of the development of drug discovery through the 20(th) century recognised the importance of chemistry and increasingly bioinformatics, but had relatively little to say about the importance of computing and networked computing in particular. However, the design and discovery of new drugs is arguably the most significant single application of bioinformatics and cheminformatics to have benefitted from the increases in the range and power of the computational techniques since the emergence of the World Wide Web, commonly now referred to as simply 'the Web'. Web services have enabled researchers to access shared resources and to deploy standardized calculations in their search for new drugs. This article first considers the fundamental principles of Web services and workflows, and then explores the facilities and resources that have evolved to meet the specific needs of chem- and bio-informatics. This strategy leads to a more detailed examination of the basic components that characterise molecules and the essential predictive techniques, followed by a discussion of the emerging networked services that transcend the basic provisions, and the growing trend towards embracing modern techniques, in particular the Semantic Web. In the opinion of the authors, the issues that require community action are: increasing the amount of chemical data available for open access; validating the data as provided; and developing more efficient links between the worlds of cheminformatics and bioinformatics. The goal is to create ever better drug design services.

  10. SolarSoft Web Services

    NASA Astrophysics Data System (ADS)

    Freeland, S.; Hurlburt, N.

    2005-12-01

    The SolarSoft system (SSW) is a set of integrated software libraries, databases, and system utilities which provide a common programming and data analysis environment for solar physics. The system includes contributions from a large community base, representing the efforts of many NASA PI team MO&DA teams,spanning many years and multiple NASA and international orbital and ground based missions. The SSW general use libraries include Many hundreds of utilities which are instrument and mission independent. A large subset are also SOLAR independent, such as time conversions, digital detector cleanup, time series analysis, mathematics, image display, WWW server communications and the like. PI teams may draw on these general purpose libraries for analysis and application development while concentrating efforts on instrument specific calibration issues rather than reinvention of general use software. By the same token, PI teams are encouraged to contribute new applications or enhancements to existing utilities which may have more general interest. Recent areas of intense evolution include space weather applications, automated distributed data access and analysis, interfaces with the ongoing Virtual Solar Observatory efforts, and externalization of SolarSoft power through Web Services. We will discuss the current status of SSW web services and demonstrate how this facilitates accessing the underlying power of SolarSoft in more abstract terms. In this context, we will describe the use of SSW services within the Collaborative Sun Earth Connector environment.

  11. A web service for service composition to aid geospatial modelers

    NASA Astrophysics Data System (ADS)

    Bigagli, L.; Santoro, M.; Roncella, R.; Mazzetti, P.

    2012-04-01

    The identification of appropriate mechanisms for process reuse, chaining and composition is considered a key enabler for the effective uptake of a global Earth Observation infrastructure, currently pursued by the international geospatial research community. In the Earth and Space Sciences, such a facility could primarily enable integrated and interoperable modeling, for what several approaches have been proposed and developed, over the last years. In fact, GEOSS is specifically tasked with the development of the so-called "Model Web". At increasing levels of abstraction and generalization, the initial stove-pipe software tools have evolved to community-wide modeling frameworks, to Component-Based Architecture solution, and, more recently, started to embrace Service-Oriented Architectures technologies, such as the OGC WPS specification and the WS-* stack of W3C standards for service composition. However, so far, the level of abstraction seems too low for implementing the Model Web vision, and far too complex technological aspects must still be addressed by both providers and users, resulting in limited usability and, eventually, difficult uptake. As by the recent ICT trend of resource virtualization, it has been suggested that users in need of a particular processing capability, required by a given modeling workflow, may benefit from outsourcing the composition activities into an external first-class service, according to the Composition as a Service (CaaS) approach. A CaaS system provides the necessary interoperability service framework for adaptation, reuse and complementation of existing processing resources (including models and geospatial services in general) in the form of executable workflows. This work introduces the architecture of a CaaS system, as a distributed information system for creating, validating, editing, storing, publishing, and executing geospatial workflows. This way, the users can be freed from the need of a composition infrastructure and

  12. 3D medical volume reconstruction using web services.

    PubMed

    Kooper, Rob; Shirk, Andrew; Lee, Sang-Chul; Lin, Amy; Folberg, Robert; Bajcsy, Peter

    2008-04-01

    We address the problem of 3D medical volume reconstruction using web services. The use of proposed web services is motivated by the fact that the problem of 3D medical volume reconstruction requires significant computer resources and human expertise in medical and computer science areas. Web services are implemented as an additional layer to a dataflow framework called data to knowledge. In the collaboration between UIC and NCSA, pre-processed input images at NCSA are made accessible to medical collaborators for registration. Every time UIC medical collaborators inspected images and selected corresponding features for registration, the web service at NCSA is contacted and the registration processing query is executed using the image to knowledge library of registration methods. Co-registered frames are returned for verification by medical collaborators in a new window. In this paper, we present 3D volume reconstruction problem requirements and the architecture of the developed prototype system at http://isda.ncsa.uiuc.edu/MedVolume. We also explain the tradeoffs of our system design and provide experimental data to support our system implementation. The prototype system has been used for multiple 3D volume reconstructions of blood vessels and vasculogenic mimicry patterns in histological sections of uveal melanoma studied by fluorescent confocal laser scanning microscope.

  13. BioCatalogue: a universal catalogue of web services for the life sciences.

    PubMed

    Bhagat, Jiten; Tanoh, Franck; Nzuobontane, Eric; Laurent, Thomas; Orlowski, Jerzy; Roos, Marco; Wolstencroft, Katy; Aleksejevs, Sergejs; Stevens, Robert; Pettifer, Steve; Lopez, Rodrigo; Goble, Carole A

    2010-07-01

    The use of Web Services to enable programmatic access to on-line bioinformatics is becoming increasingly important in the Life Sciences. However, their number, distribution and the variable quality of their documentation can make their discovery and subsequent use difficult. A Web Services registry with information on available services will help to bring together service providers and their users. The BioCatalogue (http://www.biocatalogue.org/) provides a common interface for registering, browsing and annotating Web Services to the Life Science community. Services in the BioCatalogue can be described and searched in multiple ways based upon their technical types, bioinformatics categories, user tags, service providers or data inputs and outputs. They are also subject to constant monitoring, allowing the identification of service problems and changes and the filtering-out of unavailable or unreliable resources. The system is accessible via a human-readable 'Web 2.0'-style interface and a programmatic Web Service interface. The BioCatalogue follows a community approach in which all services can be registered, browsed and incrementally documented with annotations by any member of the scientific community.

  14. A Smart Modeling Framework for Integrating BMI-enabled Models as Web Services

    NASA Astrophysics Data System (ADS)

    Jiang, P.; Elag, M.; Kumar, P.; Peckham, S. D.; Liu, R.; Marini, L.; Hsu, L.

    2015-12-01

    Serviced-oriented computing provides an opportunity to couple web service models using semantic web technology. Through this approach, models that are exposed as web services can be conserved in their own local environment, thus making it easy for modelers to maintain and update the models. In integrated modeling, the serviced-oriented loose-coupling approach requires (1) a set of models as web services, (2) the model metadata describing the external features of a model (e.g., variable name, unit, computational grid, etc.) and (3) a model integration framework. We present the architecture of coupling web service models that are self-describing by utilizing a smart modeling framework. We expose models that are encapsulated with CSDMS (Community Surface Dynamics Modeling System) Basic Model Interfaces (BMI) as web services. The BMI-enabled models are self-describing by uncovering models' metadata through BMI functions. After a BMI-enabled model is serviced, a client can initialize, execute and retrieve the meta-information of the model by calling its BMI functions over the web. Furthermore, a revised version of EMELI (Peckham, 2015), an Experimental Modeling Environment for Linking and Interoperability, is chosen as the framework for coupling BMI-enabled web service models. EMELI allows users to combine a set of component models into a complex model by standardizing model interface using BMI as well as providing a set of utilities smoothing the integration process (e.g., temporal interpolation). We modify the original EMELI so that the revised modeling framework is able to initialize, execute and find the dependencies of the BMI-enabled web service models. By using the revised EMELI, an example will be presented on integrating a set of topoflow model components that are BMI-enabled and exposed as web services. Reference: Peckham, S.D. (2014) EMELI 1.0: An experimental smart modeling framework for automatic coupling of self-describing models, Proceedings of HIC 2014

  15. MAPI: towards the integrated exploitation of bioinformatics Web Services.

    PubMed

    Ramirez, Sergio; Karlsson, Johan; Trelles, Oswaldo

    2011-10-27

    Bioinformatics is commonly featured as a well assorted list of available web resources. Although diversity of services is positive in general, the proliferation of tools, their dispersion and heterogeneity complicate the integrated exploitation of such data processing capacity. To facilitate the construction of software clients and make integrated use of this variety of tools, we present a modular programmatic application interface (MAPI) that provides the necessary functionality for uniform representation of Web Services metadata descriptors including their management and invocation protocols of the services which they represent. This document describes the main functionality of the framework and how it can be used to facilitate the deployment of new software under a unified structure of bioinformatics Web Services. A notable feature of MAPI is the modular organization of the functionality into different modules associated with specific tasks. This means that only the modules needed for the client have to be installed, and that the module functionality can be extended without the need for re-writing the software client. The potential utility and versatility of the software library has been demonstrated by the implementation of several currently available clients that cover different aspects of integrated data processing, ranging from service discovery to service invocation with advanced features such as workflows composition and asynchronous services calls to multiple types of Web Services including those registered in repositories (e.g. GRID-based, SOAP, BioMOBY, R-bioconductor, and others).

  16. Information Retrieval System for Japanese Standard Disease-Code Master Using XML Web Service

    PubMed Central

    Hatano, Kenji; Ohe, Kazuhiko

    2003-01-01

    Information retrieval system of Japanese Standard Disease-Code Master Using XML Web Service is developed. XML Web Service is a new distributed processing system by standard internet technologies. With seamless remote method invocation of XML Web Service, users are able to get the latest disease code master information from their rich desktop applications or internet web sites, which refer to this service. PMID:14728364

  17. WebGIS based community services architecture by griddization managements and crowdsourcing services

    NASA Astrophysics Data System (ADS)

    Wang, Haiyin; Wan, Jianhua; Zeng, Zhe; Zhou, Shengchuan

    2016-11-01

    Along with the fast economic development of cities, rapid urbanization, population surge, in China, the social community service mechanisms need to be rationalized and the policy standards need to be unified, which results in various types of conflicts and challenges for community services of government. Based on the WebGIS technology, the article provides a community service architecture by gridding management and crowdsourcing service. The WEBGIS service architecture includes two parts: the cloud part and the mobile part. The cloud part refers to community service centres, which can instantaneously response the emergency, visualize the scene of the emergency, and analyse the data from the emergency. The mobile part refers to the mobile terminal, which can call the centre, report the event, collect data and verify the feedback. This WebGIS based community service systems for Huangdao District of Qingdao, were awarded the “2015’ national innovation of social governance case of typical cases”.

  18. A Privacy Access Control Framework for Web Services Collaboration with Role Mechanisms

    NASA Astrophysics Data System (ADS)

    Liu, Linyuan; Huang, Zhiqiu; Zhu, Haibin

    With the popularity of Internet technology, web services are becoming the most promising paradigm for distributed computing. This increased use of web services has meant that more and more personal information of consumers is being shared with web service providers, leading to the need to guarantee the privacy of consumers. This paper proposes a role-based privacy access control framework for Web services collaboration, it utilizes roles to specify the privacy privileges of services, and considers the impact on the reputation degree of the historic experience of services in playing roles. Comparing to the traditional privacy access control approaches, this framework can make the fine-grained authorization decision, thus efficiently protecting consumers' privacy.

  19. The value of Web-based library services at Cedars-Sinai Health System.

    PubMed Central

    Halub, L P

    1999-01-01

    Cedars-Sinai Medical Library/Information Center has maintained Web-based services since 1995 on the Cedars-Sinai Health System network. In that time, the librarians have found the provision of Web-based services to be a very worthwhile endeavor. Library users value the services that they access from their desktops because the services save time. They also appreciate being able to access services at their convenience, without restriction by the library's hours of operation. The library values its Web site because it brings increased visibility within the health system, and it enables library staff to expand services when budget restrictions have forced reduced hours of operation. In creating and maintaining the information center Web site, the librarians have learned the following lessons: consider the design carefully; offer what services you can, but weigh the advantages of providing the services against the time required to maintain them; make the content as accessible as possible; promote your Web site; and make friends in other departments, especially information services. PMID:10427423

  20. Security and Dependability Solutions for Web Services and Workflows

    NASA Astrophysics Data System (ADS)

    Kokolakis, Spyros; Rizomiliotis, Panagiotis; Benameur, Azzedine; Sinha, Smriti Kumar

    In this chapter we present an innovative approach towards the design and application of Security and Dependability (S&D) solutions for Web services and service-based workflows. Recently, several standards have been published that prescribe S&D solutions for Web services, e.g. OASIS WS-Security. However,the application of these solutions in specific contexts has been proven problematic. We propose a new framework for the application of such solutions based on the SERENITY S&D Pattern concept. An S&D Pattern comprises all the necessary information for the implementation, verification, deployment, and active monitoring of an S&D Solution. Thus, system developers may rely on proven solutions that are dynamically deployed and monitored by the Serenity Runtime Framework. Finally, we further extend this approach to cover the case of executable workflows which are realised through the orchestration of Web services.

  1. Implementation of Sensor Twitter Feed Web Service Server and Client

    DTIC Science & Technology

    2016-12-01

    ARL-TN-0807 ● DEC 2016 US Army Research Laboratory Implementation of Sensor Twitter Feed Web Service Server and Client by...Implementation of Sensor Twitter Feed Web Service Server and Client by Bhagyashree V Kulkarni University of Maryland Michael H Lee Computational...

  2. Evaluating Amazon's Mechanical Turk as a Tool for Experimental Behavioral Research

    PubMed Central

    Crump, Matthew J. C.; McDonnell, John V.; Gureckis, Todd M.

    2013-01-01

    Amazon Mechanical Turk (AMT) is an online crowdsourcing service where anonymous online workers complete web-based tasks for small sums of money. The service has attracted attention from experimental psychologists interested in gathering human subject data more efficiently. However, relative to traditional laboratory studies, many aspects of the testing environment are not under the experimenter's control. In this paper, we attempt to empirically evaluate the fidelity of the AMT system for use in cognitive behavioral experiments. These types of experiment differ from simple surveys in that they require multiple trials, sustained attention from participants, comprehension of complex instructions, and millisecond accuracy for response recording and stimulus presentation. We replicate a diverse body of tasks from experimental psychology including the Stroop, Switching, Flanker, Simon, Posner Cuing, attentional blink, subliminal priming, and category learning tasks using participants recruited using AMT. While most of replications were qualitatively successful and validated the approach of collecting data anonymously online using a web-browser, others revealed disparity between laboratory results and online results. A number of important lessons were encountered in the process of conducting these replications that should be of value to other researchers. PMID:23516406

  3. Next generation of weather generators on web service framework

    NASA Astrophysics Data System (ADS)

    Chinnachodteeranun, R.; Hung, N. D.; Honda, K.; Ines, A. V. M.

    2016-12-01

    Weather generator is a statistical model that synthesizes possible realization of long-term historical weather in future. It generates several tens to hundreds of realizations stochastically based on statistical analysis. Realization is essential information as a crop modeling's input for simulating crop growth and yield. Moreover, they can be contributed to analyzing uncertainty of weather to crop development stage and to decision support system on e.g. water management and fertilizer management. Performing crop modeling requires multidisciplinary skills which limit the usage of weather generator only in a research group who developed it as well as a barrier for newcomers. To improve the procedures of performing weather generators as well as the methodology to acquire the realization in a standard way, we implemented a framework for providing weather generators as web services, which support service interoperability. Legacy weather generator programs were wrapped in the web service framework. The service interfaces were implemented based on an international standard that was Sensor Observation Service (SOS) defined by Open Geospatial Consortium (OGC). Clients can request realizations generated by the model through SOS Web service. Hierarchical data preparation processes required for weather generator are also implemented as web services and seamlessly wired. Analysts and applications can invoke services over a network easily. The services facilitate the development of agricultural applications and also reduce the workload of analysts on iterative data preparation and handle legacy weather generator program. This architectural design and implementation can be a prototype for constructing further services on top of interoperable sensor network system. This framework opens an opportunity for other sectors such as application developers and scientists in other fields to utilize weather generators.

  4. ERDDAP - RESTful Web Services

    Science.gov Websites

    , graphs, or information about datasets). A RESTful web service (external link) - a URL that computer to get the same information in a more computer-program-friendly format like JSON (external link .jsonlKVP, where column names are on every row): Each column has a column name and one type of information

  5. Finding, Browsing and Getting Data Easily Using SPDF Web Services

    NASA Technical Reports Server (NTRS)

    Candey, R.; Chimiak, R.; Harris, B.; Johnson, R.; Kovalick, T.; Lal, N.; Leckner, H.; Liu, M.; McGuire, R.; Papitashvili, N.; hide

    2010-01-01

    The NASA GSFC Space Physics Data Facility (5PDF) provides heliophysics science-enabling information services for enhancing scientific research and enabling integration of these services into the Heliophysics Data Environment paradigm, via standards-based approach (SOAP) and Representational State Transfer (REST) web services in addition to web browser, FTP, and OPeNDAP interfaces. We describe these interfaces and the philosophies behind these web services, and show how to call them from various languages, such as IDL and Perl. We are working towards a "one simple line to call" philosophy extolled in the recent VxO discussions. Combining data from many instruments and missions enables broad research analysis and correlation and coordination with other experiments and missions.

  6. Research on the development and preliminary application of Beijing agricultural sci-tech service hotline WebApp in agricultural consulting services

    NASA Astrophysics Data System (ADS)

    Yu, Weishui; Luo, Changshou; Zheng, Yaming; Wei, Qingfeng; Cao, Chengzhong

    2017-09-01

    To deal with the “last kilometer” problem during the agricultural science and technology information service, we analyzed the feasibility, necessity and advantages of WebApp applied to agricultural information service and discussed the modes of WebApp used in agricultural information service based on the requirements analysis and the function of WebApp. To overcome the existing App’s defects of difficult installation and weak compatibility between the mobile operating systems, the Beijing Agricultural Sci-tech Service Hotline WebApp was developed based on the HTML and JAVA technology. The WebApp has greater compatibility and simpler operation than the Native App, what’s more, it can be linked to the WeChat public platform making it spread easily and run directly without setup process. The WebApp was used to provide agricultural expert consulting services and agriculture information push, obtained a good preliminary application achievement. Finally, we concluded the creative application of WebApp in agricultural consulting services and prospected the development of WebApp in agricultural information service.

  7. BioCatalogue: a universal catalogue of web services for the life sciences

    PubMed Central

    Bhagat, Jiten; Tanoh, Franck; Nzuobontane, Eric; Laurent, Thomas; Orlowski, Jerzy; Roos, Marco; Wolstencroft, Katy; Aleksejevs, Sergejs; Stevens, Robert; Pettifer, Steve; Lopez, Rodrigo; Goble, Carole A.

    2010-01-01

    The use of Web Services to enable programmatic access to on-line bioinformatics is becoming increasingly important in the Life Sciences. However, their number, distribution and the variable quality of their documentation can make their discovery and subsequent use difficult. A Web Services registry with information on available services will help to bring together service providers and their users. The BioCatalogue (http://www.biocatalogue.org/) provides a common interface for registering, browsing and annotating Web Services to the Life Science community. Services in the BioCatalogue can be described and searched in multiple ways based upon their technical types, bioinformatics categories, user tags, service providers or data inputs and outputs. They are also subject to constant monitoring, allowing the identification of service problems and changes and the filtering-out of unavailable or unreliable resources. The system is accessible via a human-readable ‘Web 2.0’-style interface and a programmatic Web Service interface. The BioCatalogue follows a community approach in which all services can be registered, browsed and incrementally documented with annotations by any member of the scientific community. PMID:20484378

  8. Adopting and adapting a commercial view of web services for the Navy

    NASA Astrophysics Data System (ADS)

    Warner, Elizabeth; Ladner, Roy; Katikaneni, Uday; Petry, Fred

    2005-05-01

    Web Services are being adopted as the enabling technology to provide net-centric capabilities for many Department of Defense operations. The Navy Enterprise Portal, for example, is Web Services-based, and the Department of the Navy is promulgating guidance for developing Web Services. Web Services, however, only constitute a baseline specification that provides the foundation on which users, under current approaches, write specialized applications in order to retrieve data over the Internet. Application development may increase dramatically as the number of different available Web Services increases. Reasons for specialized application development include XML schema versioning differences, adoption/use of diverse business rules, security access issues, and time/parameter naming constraints, among others. We are currently developing for the US Navy a system which will improve delivery of timely and relevant meteorological and oceanographic (MetOc) data to the warfighter. Our objective is to develop an Advanced MetOc Broker (AMB) that leverages Web Services technology to identify, retrieve and integrate relevant MetOc data in an automated manner. The AMB will utilize a Mediator, which will be developed by applying ontological research and schema matching techniques to MetOc forms of data. The AMB, using the Mediator, will support a new, advanced approach to the use of Web Services; namely, the automated identification, retrieval and integration of MetOc data. Systems based on this approach will then not require extensive end-user application development for each Web Service from which data can be retrieved. Users anywhere on the globe will be able to receive timely environmental data that fits their particular needs.

  9. KBWS: an EMBOSS associated package for accessing bioinformatics web services.

    PubMed

    Oshita, Kazuki; Arakawa, Kazuharu; Tomita, Masaru

    2011-04-29

    The availability of bioinformatics web-based services is rapidly proliferating, for their interoperability and ease of use. The next challenge is in the integration of these services in the form of workflows, and several projects are already underway, standardizing the syntax, semantics, and user interfaces. In order to deploy the advantages of web services with locally installed tools, here we describe a collection of proxy client tools for 42 major bioinformatics web services in the form of European Molecular Biology Open Software Suite (EMBOSS) UNIX command-line tools. EMBOSS provides sophisticated means for discoverability and interoperability for hundreds of tools, and our package, named the Keio Bioinformatics Web Service (KBWS), adds functionalities of local and multiple alignment of sequences, phylogenetic analyses, and prediction of cellular localization of proteins and RNA secondary structures. This software implemented in C is available under GPL from http://www.g-language.org/kbws/ and GitHub repository http://github.com/cory-ko/KBWS. Users can utilize the SOAP services implemented in Perl directly via WSDL file at http://soap.g-language.org/kbws.wsdl (RPC Encoded) and http://soap.g-language.org/kbws_dl.wsdl (Document/literal).

  10. Web Services and Other Enhancements at the Northern California Earthquake Data Center

    NASA Astrophysics Data System (ADS)

    Neuhauser, D. S.; Zuzlewski, S.; Allen, R. M.

    2012-12-01

    The Northern California Earthquake Data Center (NCEDC) provides data archive and distribution services for seismological and geophysical data sets that encompass northern California. The NCEDC is enhancing its ability to deliver rapid information through Web Services. NCEDC Web Services use well-established web server and client protocols and REST software architecture to allow users to easily make queries using web browsers or simple program interfaces and to receive the requested data in real-time rather than through batch or email-based requests. Data are returned to the user in the appropriate format such as XML, RESP, or MiniSEED depending on the service, and are compatible with the equivalent IRIS DMC web services. The NCEDC is currently providing the following Web Services: (1) Station inventory and channel response information delivered in StationXML format, (2) Channel response information delivered in RESP format, (3) Time series availability delivered in text and XML formats, (4) Single channel and bulk data request delivered in MiniSEED format. The NCEDC is also developing a rich Earthquake Catalog Web Service to allow users to query earthquake catalogs based on selection parameters such as time, location or geographic region, magnitude, depth, azimuthal gap, and rms. It will return (in QuakeML format) user-specified results that can include simple earthquake parameters, as well as observations such as phase arrivals, codas, amplitudes, and computed parameters such as first motion mechanisms, moment tensors, and rupture length. The NCEDC will work with both IRIS and the International Federation of Digital Seismograph Networks (FDSN) to define a uniform set of web service specifications that can be implemented by multiple data centers to provide users with a common data interface across data centers. The NCEDC now hosts earthquake catalogs and waveforms from the US Department of Energy (DOE) Enhanced Geothermal Systems (EGS) monitoring networks. These

  11. QoS measurement of workflow-based web service compositions using Colored Petri net.

    PubMed

    Nematzadeh, Hossein; Motameni, Homayun; Mohamad, Radziah; Nematzadeh, Zahra

    2014-01-01

    Workflow-based web service compositions (WB-WSCs) is one of the main composition categories in service oriented architecture (SOA). Eflow, polymorphic process model (PPM), and business process execution language (BPEL) are the main techniques of the category of WB-WSCs. Due to maturity of web services, measuring the quality of composite web services being developed by different techniques becomes one of the most important challenges in today's web environments. Business should try to provide good quality regarding the customers' requirements to a composed web service. Thus, quality of service (QoS) which refers to nonfunctional parameters is important to be measured since the quality degree of a certain web service composition could be achieved. This paper tried to find a deterministic analytical method for dependability and performance measurement using Colored Petri net (CPN) with explicit routing constructs and application of theory of probability. A computer tool called WSET was also developed for modeling and supporting QoS measurement through simulation.

  12. Customer Decision Making in Web Services with an Integrated P6 Model

    NASA Astrophysics Data System (ADS)

    Sun, Zhaohao; Sun, Junqing; Meredith, Grant

    Customer decision making (CDM) is an indispensable factor for web services. This article examines CDM in web services with a novel P6 model, which consists of the 6 Ps: privacy, perception, propensity, preference, personalization and promised experience. This model integrates the existing 6 P elements of marketing mix as the system environment of CDM in web services. The new integrated P6 model deals with the inner world of the customer and incorporates what the customer think during the DM process. The proposed approach will facilitate the research and development of web services and decision support systems.

  13. An Automated End-To Multi-Agent Qos Based Architecture for Selection of Geospatial Web Services

    NASA Astrophysics Data System (ADS)

    Shah, M.; Verma, Y.; Nandakumar, R.

    2012-07-01

    Over the past decade, Service-Oriented Architecture (SOA) and Web services have gained wide popularity and acceptance from researchers and industries all over the world. SOA makes it easy to build business applications with common services, and it provides like: reduced integration expense, better asset reuse, higher business agility, and reduction of business risk. Building of framework for acquiring useful geospatial information for potential users is a crucial problem faced by the GIS domain. Geospatial Web services solve this problem. With the help of web service technology, geospatial web services can provide useful geospatial information to potential users in a better way than traditional geographic information system (GIS). A geospatial Web service is a modular application designed to enable the discovery, access, and chaining of geospatial information and services across the web that are often both computation and data-intensive that involve diverse sources of data and complex processing functions. With the proliferation of web services published over the internet, multiple web services may provide similar functionality, but with different non-functional properties. Thus, Quality of Service (QoS) offers a metric to differentiate the services and their service providers. In a quality-driven selection of web services, it is important to consider non-functional properties of the web service so as to satisfy the constraints or requirements of the end users. The main intent of this paper is to build an automated end-to-end multi-agent based solution to provide the best-fit web service to service requester based on QoS.

  14. Deploying and sharing U-Compare workflows as web services.

    PubMed

    Kontonatsios, Georgios; Korkontzelos, Ioannis; Kolluru, Balakrishna; Thompson, Paul; Ananiadou, Sophia

    2013-02-18

    U-Compare is a text mining platform that allows the construction, evaluation and comparison of text mining workflows. U-Compare contains a large library of components that are tuned to the biomedical domain. Users can rapidly develop biomedical text mining workflows by mixing and matching U-Compare's components. Workflows developed using U-Compare can be exported and sent to other users who, in turn, can import and re-use them. However, the resulting workflows are standalone applications, i.e., software tools that run and are accessible only via a local machine, and that can only be run with the U-Compare platform. We address the above issues by extending U-Compare to convert standalone workflows into web services automatically, via a two-click process. The resulting web services can be registered on a central server and made publicly available. Alternatively, users can make web services available on their own servers, after installing the web application framework, which is part of the extension to U-Compare. We have performed a user-oriented evaluation of the proposed extension, by asking users who have tested the enhanced functionality of U-Compare to complete questionnaires that assess its functionality, reliability, usability, efficiency and maintainability. The results obtained reveal that the new functionality is well received by users. The web services produced by U-Compare are built on top of open standards, i.e., REST and SOAP protocols, and therefore, they are decoupled from the underlying platform. Exported workflows can be integrated with any application that supports these open standards. We demonstrate how the newly extended U-Compare enhances the cross-platform interoperability of workflows, by seamlessly importing a number of text mining workflow web services exported from U-Compare into Taverna, i.e., a generic scientific workflow construction platform.

  15. Deploying and sharing U-Compare workflows as web services

    PubMed Central

    2013-01-01

    Background U-Compare is a text mining platform that allows the construction, evaluation and comparison of text mining workflows. U-Compare contains a large library of components that are tuned to the biomedical domain. Users can rapidly develop biomedical text mining workflows by mixing and matching U-Compare’s components. Workflows developed using U-Compare can be exported and sent to other users who, in turn, can import and re-use them. However, the resulting workflows are standalone applications, i.e., software tools that run and are accessible only via a local machine, and that can only be run with the U-Compare platform. Results We address the above issues by extending U-Compare to convert standalone workflows into web services automatically, via a two-click process. The resulting web services can be registered on a central server and made publicly available. Alternatively, users can make web services available on their own servers, after installing the web application framework, which is part of the extension to U-Compare. We have performed a user-oriented evaluation of the proposed extension, by asking users who have tested the enhanced functionality of U-Compare to complete questionnaires that assess its functionality, reliability, usability, efficiency and maintainability. The results obtained reveal that the new functionality is well received by users. Conclusions The web services produced by U-Compare are built on top of open standards, i.e., REST and SOAP protocols, and therefore, they are decoupled from the underlying platform. Exported workflows can be integrated with any application that supports these open standards. We demonstrate how the newly extended U-Compare enhances the cross-platform interoperability of workflows, by seamlessly importing a number of text mining workflow web services exported from U-Compare into Taverna, i.e., a generic scientific workflow construction platform. PMID:23419017

  16. General Practitioners' Attitudes Toward a Web-Based Mental Health Service for Adolescents: Implications for Service Design and Delivery.

    PubMed

    Subotic-Kerry, Mirjana; King, Catherine; O'Moore, Kathleen; Achilles, Melinda; O'Dea, Bridianne

    2018-03-23

    Anxiety disorders and depression are prevalent among youth. General practitioners (GPs) are often the first point of professional contact for treating health problems in young people. A Web-based mental health service delivered in partnership with schools may facilitate increased access to psychological care among adolescents. However, for such a model to be implemented successfully, GPs' views need to be measured. This study aimed to examine the needs and attitudes of GPs toward a Web-based mental health service for adolescents, and to identify the factors that may affect the provision of this type of service and likelihood of integration. Findings will inform the content and overall service design. GPs were interviewed individually about the proposed Web-based service. Qualitative analysis of transcripts was performed using thematic coding. A short follow-up questionnaire was delivered to assess background characteristics, level of acceptability, and likelihood of integration of the Web-based mental health service. A total of 13 GPs participated in the interview and 11 completed a follow-up online questionnaire. Findings suggest strong support for the proposed Web-based mental health service. A wide range of factors were found to influence the likelihood of GPs integrating a Web-based service into their clinical practice. Coordinated collaboration with parents, students, school counselors, and other mental health care professionals were considered important by nearly all GPs. Confidence in Web-based care, noncompliance of adolescents and GPs, accessibility, privacy, and confidentiality were identified as potential barriers to adopting the proposed Web-based service. GPs were open to a proposed Web-based service for the monitoring and management of anxiety and depression in adolescents, provided that a collaborative approach to care is used, the feedback regarding the client is clear, and privacy and security provisions are assured. ©Mirjana Subotic

  17. Web quality control for lectures: Supercourse and Amazon.com.

    PubMed

    Linkov, Faina; LaPorte, Ronald; Lovalekar, Mita; Dodani, Sunita

    2005-12-01

    Peer review has been at the corner stone of quality control of the biomedical journals in the past 300 years. With the emergency of the Internet, new models of quality control and peer review are emerging. However, such models are poorly investigated. We would argue that the popular system of quality control used in Amazon.com offers a way to ensure continuous quality improvement in the area of research communications on the Internet. Such system is providing an interesting alternative to the traditional peer review approaches used in the biomedical journals and challenges the traditional paradigms of scientific publishing. This idea is being explored in the context of Supercourse, a library of 2,350 prevention lectures, shared for free by faculty members from over 150 countries. Supercourse is successfully utilizing quality control approaches that are similar to Amazon.com model. Clearly, the existing approaches and emerging alternatives for quality control in scientific communications needs to be assessed scientifically. Rapid explosion of internet technologies could be leveraged to produce better, more cost effective systems for quality control in the biomedical publications and across all sciences.

  18. Semantic Search of Web Services

    ERIC Educational Resources Information Center

    Hao, Ke

    2013-01-01

    This dissertation addresses semantic search of Web services using natural language processing. We first survey various existing approaches, focusing on the fact that the expensive costs of current semantic annotation frameworks result in limited use of semantic search for large scale applications. We then propose a vector space model based service…

  19. Scalable web services for the PSIPRED Protein Analysis Workbench.

    PubMed

    Buchan, Daniel W A; Minneci, Federico; Nugent, Tim C O; Bryson, Kevin; Jones, David T

    2013-07-01

    Here, we present the new UCL Bioinformatics Group's PSIPRED Protein Analysis Workbench. The Workbench unites all of our previously available analysis methods into a single web-based framework. The new web portal provides a greatly streamlined user interface with a number of new features to allow users to better explore their results. We offer a number of additional services to enable computationally scalable execution of our prediction methods; these include SOAP and XML-RPC web server access and new HADOOP packages. All software and services are available via the UCL Bioinformatics Group website at http://bioinf.cs.ucl.ac.uk/.

  20. Going, Going, Still There: Using the WebCite Service to Permanently Archive Cited Web Pages

    PubMed Central

    Trudel, Mathieu

    2005-01-01

    Scholars are increasingly citing electronic “web references” which are not preserved in libraries or full text archives. WebCite is a new standard for citing web references. To “webcite” a document involves archiving the cited Web page through www.webcitation.org and citing the WebCite permalink instead of (or in addition to) the unstable live Web page. This journal has amended its “instructions for authors” accordingly, asking authors to archive cited Web pages before submitting a manuscript. Almost 200 other journals are already using the system. We discuss the rationale for WebCite, its technology, and how scholars, editors, and publishers can benefit from the service. Citing scholars initiate an archiving process of all cited Web references, ideally before they submit a manuscript. Authors of online documents and websites which are expected to be cited by others can ensure that their work is permanently available by creating an archived copy using WebCite and providing the citation information including the WebCite link on their Web document(s). Editors should ask their authors to cache all cited Web addresses (Uniform Resource Locators, or URLs) “prospectively” before submitting their manuscripts to their journal. Editors and publishers should also instruct their copyeditors to cache cited Web material if the author has not done so already. Finally, WebCite can process publisher submitted “citing articles” (submitted for example as eXtensible Markup Language [XML] documents) to automatically archive all cited Web pages shortly before or on publication. Finally, WebCite can act as a focussed crawler, caching retrospectively references of already published articles. Copyright issues are addressed by honouring respective Internet standards (robot exclusion files, no-cache and no-archive tags). Long-term preservation is ensured by agreements with libraries and digital preservation organizations. The resulting WebCite Index may also have

  1. Going, going, still there: using the WebCite service to permanently archive cited web pages.

    PubMed

    Eysenbach, Gunther; Trudel, Mathieu

    2005-12-30

    Scholars are increasingly citing electronic "web references" which are not preserved in libraries or full text archives. WebCite is a new standard for citing web references. To "webcite" a document involves archiving the cited Web page through www.webcitation.org and citing the WebCite permalink instead of (or in addition to) the unstable live Web page. This journal has amended its "instructions for authors" accordingly, asking authors to archive cited Web pages before submitting a manuscript. Almost 200 other journals are already using the system. We discuss the rationale for WebCite, its technology, and how scholars, editors, and publishers can benefit from the service. Citing scholars initiate an archiving process of all cited Web references, ideally before they submit a manuscript. Authors of online documents and websites which are expected to be cited by others can ensure that their work is permanently available by creating an archived copy using WebCite and providing the citation information including the WebCite link on their Web document(s). Editors should ask their authors to cache all cited Web addresses (Uniform Resource Locators, or URLs) "prospectively" before submitting their manuscripts to their journal. Editors and publishers should also instruct their copyeditors to cache cited Web material if the author has not done so already. Finally, WebCite can process publisher submitted "citing articles" (submitted for example as eXtensible Markup Language [XML] documents) to automatically archive all cited Web pages shortly before or on publication. Finally, WebCite can act as a focussed crawler, caching retrospectively references of already published articles. Copyright issues are addressed by honouring respective Internet standards (robot exclusion files, no-cache and no-archive tags). Long-term preservation is ensured by agreements with libraries and digital preservation organizations. The resulting WebCite Index may also have applications for research

  2. REMORA: a pilot in the ocean of BioMoby web-services.

    PubMed

    Carrere, Sébastien; Gouzy, Jérôme

    2006-04-01

    Emerging web-services technology allows interoperability between multiple distributed architectures. Here, we present REMORA, a web server implemented according to the BioMoby web-service specifications, providing life science researchers with an easy-to-use workflow generator and launcher, a repository of predefined workflows and a survey system. Jerome.Gouzy@toulouse.inra.fr The REMORA web server is freely available at http://bioinfo.genopole-toulouse.prd.fr/remora, sources are available upon request from the authors.

  3. Establishing Transportation Framework Services Using the Open Geospatial Consortium Web Feature Service Specification

    NASA Astrophysics Data System (ADS)

    Yang, C.; Wong, D. W.; Phillips, T.; Wright, R. A.; Lindsey, S.; Kafatos, M.

    2005-12-01

    As a teamed partnership of the Center for Earth Observing and Space Research (CEOSR) at George Mason University (GMU), Virginia Department of Transportation (VDOT), Bureau of Transportation Statistics at the Department of Transportation (BTS/DOT), and Intergraph, we established Transportation Framework Data Services using Open Geospatial Consortium (OGC)'s Web Feature Service (WFS) Specification to enable the sharing of transportation data among the federal level with data from BTS/DOT, the state level through VDOT, the industries through Intergraph. CEOSR develops WFS solutions using Intergraph software. Relevant technical documents are also developed and disseminated through the partners. The WFS is integrated with operational geospatial systems at CEOSR and VDOT. CEOSR works with Intergraph on developing WFS solutions and technical documents. GeoMedia WebMap WFS toolkit is used with software and technical support from Intergraph. ESRI ArcIMS WFS connector is used with GMU's campus license of ESRI products. Tested solutions are integrated with framework data service operational systems, including 1) CEOSR's interoperable geospatial information services, FGDC clearinghouse Node, Geospatial One Stop (GOS) portal, and WMS services, 2) VDOT's state transportation data and GIS infrastructure, and 3)BTS/DOT's national transportation data. The project presents: 1) develop and deploy an operational OGC WFS 1.1 interfaces at CEOSR for registering with FGDC/GOS Portal and responding to Web ``POST'' requests for transportation Framework data as listed in Table 1; 2) build the WFS service that can return the data that conform to the drafted ANSI/INCITS L1 Standard (when available) for each identified theme in the format given by OGC Geography Markup Language (GML) Version 3.0 or higher; 3) integrate the OGC WFS with CEOSR's clearinghouse nodes, 4) establish a formal partnership to develop and share WFS-based geospatial interoperability technology among GMU, VDOT, BTS

  4. Web-services-based spatial decision support system to facilitate nuclear waste siting

    NASA Astrophysics Data System (ADS)

    Huang, L. Xinglai; Sheng, Grant

    2006-10-01

    The availability of spatial web services enables data sharing among managers, decision and policy makers and other stakeholders in much simpler ways than before and subsequently has created completely new opportunities in the process of spatial decision making. Though generally designed for a certain problem domain, web-services-based spatial decision support systems (WSDSS) can provide a flexible problem-solving environment to explore the decision problem, understand and refine problem definition, and generate and evaluate multiple alternatives for decision. This paper presents a new framework for the development of a web-services-based spatial decision support system. The WSDSS is comprised of distributed web services that either have their own functions or provide different geospatial data and may reside in different computers and locations. WSDSS includes six key components, namely: database management system, catalog, analysis functions and models, GIS viewers and editors, report generators, and graphical user interfaces. In this study, the architecture of a web-services-based spatial decision support system to facilitate nuclear waste siting is described as an example. The theoretical, conceptual and methodological challenges and issues associated with developing web services-based spatial decision support system are described.

  5. Reinforcement Learning Based Web Service Compositions for Mobile Business

    NASA Astrophysics Data System (ADS)

    Zhou, Juan; Chen, Shouming

    In this paper, we propose a new solution to Reactive Web Service Composition, via molding with Reinforcement Learning, and introducing modified (alterable) QoS variables into the model as elements in the Markov Decision Process tuple. Moreover, we give an example of Reactive-WSC-based mobile banking, to demonstrate the intrinsic capability of the solution in question of obtaining the optimized service composition, characterized by (alterable) target QoS variable sets with optimized values. Consequently, we come to the conclusion that the solution has decent potentials in boosting customer experiences and qualities of services in Web Services, and those in applications in the whole electronic commerce and business sector.

  6. A SOAP Web Service for accessing MODIS land product subsets

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    SanthanaVannan, Suresh K; Cook, Robert B; Pan, Jerry Yun

    2011-01-01

    Remote sensing data from satellites have provided valuable information on the state of the earth for several decades. Since March 2000, the Moderate Resolution Imaging Spectroradiometer (MODIS) sensor on board NASA s Terra and Aqua satellites have been providing estimates of several land parameters useful in understanding earth system processes at global, continental, and regional scales. However, the HDF-EOS file format, specialized software needed to process the HDF-EOS files, data volume, and the high spatial and temporal resolution of MODIS data make it difficult for users wanting to extract small but valuable amounts of information from the MODIS record. Tomore » overcome this usability issue, the NASA-funded Distributed Active Archive Center (DAAC) for Biogeochemical Dynamics at Oak Ridge National Laboratory (ORNL) developed a Web service that provides subsets of MODIS land products using Simple Object Access Protocol (SOAP). The ORNL DAAC MODIS subsetting Web service is a unique way of serving satellite data that exploits a fairly established and popular Internet protocol to allow users access to massive amounts of remote sensing data. The Web service provides MODIS land product subsets up to 201 x 201 km in a non-proprietary comma delimited text file format. Users can programmatically query the Web service to extract MODIS land parameters for real time data integration into models, decision support tools or connect to workflow software. Information regarding the MODIS SOAP subsetting Web service is available on the World Wide Web (WWW) at http://daac.ornl.gov/modiswebservice.« less

  7. Real-time GIS data model and sensor web service platform for environmental data management.

    PubMed

    Gong, Jianya; Geng, Jing; Chen, Zeqiang

    2015-01-09

    Effective environmental data management is meaningful for human health. In the past, environmental data management involved developing a specific environmental data management system, but this method often lacks real-time data retrieving and sharing/interoperating capability. With the development of information technology, a Geospatial Service Web method is proposed that can be employed for environmental data management. The purpose of this study is to determine a method to realize environmental data management under the Geospatial Service Web framework. A real-time GIS (Geographic Information System) data model and a Sensor Web service platform to realize environmental data management under the Geospatial Service Web framework are proposed in this study. The real-time GIS data model manages real-time data. The Sensor Web service platform is applied to support the realization of the real-time GIS data model based on the Sensor Web technologies. To support the realization of the proposed real-time GIS data model, a Sensor Web service platform is implemented. Real-time environmental data, such as meteorological data, air quality data, soil moisture data, soil temperature data, and landslide data, are managed in the Sensor Web service platform. In addition, two use cases of real-time air quality monitoring and real-time soil moisture monitoring based on the real-time GIS data model in the Sensor Web service platform are realized and demonstrated. The total time efficiency of the two experiments is 3.7 s and 9.2 s. The experimental results show that the method integrating real-time GIS data model and Sensor Web Service Platform is an effective way to manage environmental data under the Geospatial Service Web framework.

  8. Architecture-Based Reliability Analysis of Web Services

    ERIC Educational Resources Information Center

    Rahmani, Cobra Mariam

    2012-01-01

    In a Service Oriented Architecture (SOA), the hierarchical complexity of Web Services (WS) and their interactions with the underlying Application Server (AS) create new challenges in providing a realistic estimate of WS performance and reliability. The current approaches often treat the entire WS environment as a black-box. Thus, the sensitivity…

  9. Business Systems Branch Abilities, Capabilities, and Services Web Page

    NASA Technical Reports Server (NTRS)

    Cortes-Pena, Aida Yoguely

    2009-01-01

    During the INSPIRE summer internship I acted as the Business Systems Branch Capability Owner for the Kennedy Web-based Initiative for Communicating Capabilities System (KWICC), with the responsibility of creating a portal that describes the services provided by this Branch. This project will help others achieve a clear view ofthe services that the Business System Branch provides to NASA and the Kennedy Space Center. After collecting the data through the interviews with subject matter experts and the literature in Business World and other web sites I identified discrepancies, made the necessary corrections to the sites and placed the information from the report into the KWICC web page.

  10. Available, intuitive and free! Building e-learning modules using web 2.0 services.

    PubMed

    Tam, Chun Wah Michael; Eastwood, Anne

    2012-01-01

    E-learning is part of the mainstream in medical education and often provides the most efficient and effective means of engaging learners in a particular topic. However, translating design and content ideas into a useable product can be technically challenging, especially in the absence of information technology (IT) support. There is little published literature on the use of web 2.0 services to build e-learning activities. To describe the web 2.0 tools and solutions employed to build the GP Synergy evidence-based medicine and critical appraisal online course. We used and integrated a number of free web 2.0 services including: Prezi, a web-based presentation platform; YouTube, a video sharing service; Google Docs, a online document platform; Tiny.cc, a URL shortening service; and Wordpress, a blogging platform. The course consisting of five multimedia-rich, tutorial-like modules was built without IT specialist assistance or specialised software. The web 2.0 services used were free. The course can be accessed with a modern web browser. Modern web 2.0 services remove many of the technical barriers for creating and sharing content on the internet. When used synergistically, these services can be a flexible and low-cost platform for building e-learning activities. They were a pragmatic solution in our context.

  11. Failure Analysis for Composition of Web Services Represented as Labeled Transition Systems

    NASA Astrophysics Data System (ADS)

    Nadkarni, Dinanath; Basu, Samik; Honavar, Vasant; Lutz, Robyn

    The Web service composition problem involves the creation of a choreographer that provides the interaction between a set of component services to realize a goal service. Several methods have been proposed and developed to address this problem. In this paper, we consider those scenarios where the composition process may fail due to incomplete specification of goal service requirements or due to the fact that the user is unaware of the functionality provided by the existing component services. In such cases, it is desirable to have a composition algorithm that can provide feedback to the user regarding the cause of failure in the composition process. Such feedback will help guide the user to re-formulate the goal service and iterate the composition process. We propose a failure analysis technique for composition algorithms that views Web service behavior as multiple sequences of input/output events. Our technique identifies the possible cause of composition failure and suggests possible recovery options to the user. We discuss our technique using a simple e-Library Web service in the context of the MoSCoE Web service composition framework.

  12. Web processing service for landslide hazard assessment

    NASA Astrophysics Data System (ADS)

    Sandric, I.; Ursaru, P.; Chitu, D.; Mihai, B.; Savulescu, I.

    2012-04-01

    Hazard analysis requires heavy computation and specialized software. Web processing services can offer complex solutions that can be accessed through a light client (web or desktop). This paper presents a web processing service (both WPS and Esri Geoprocessing Service) for landslides hazard assessment. The web processing service was build with Esri ArcGIS Server solution and Python, developed using ArcPy, GDAL Python and NumPy. A complex model for landslide hazard analysis using both predisposing and triggering factors combined into a Bayesian temporal network with uncertainty propagation was build and published as WPS and Geoprocessing service using ArcGIS Standard Enterprise 10.1. The model uses as predisposing factors the first and second derivatives from DEM, the effective precipitations, runoff, lithology and land use. All these parameters can be served by the client from other WFS services or by uploading and processing the data on the server. The user can select the option of creating the first and second derivatives from the DEM automatically on the server or to upload the data already calculated. One of the main dynamic factors from the landslide analysis model is leaf area index. The LAI offers the advantage of modelling not just the changes from different time periods expressed in years, but also the seasonal changes in land use throughout a year. The LAI index can be derived from various satellite images or downloaded as a product. The upload of such data (time series) is possible using a NetCDF file format. The model is run in a monthly time step and for each time step all the parameters values, a-priory, conditional and posterior probability are obtained and stored in a log file. The validation process uses landslides that have occurred during the period up to the active time step and checks the records of the probabilities and parameters values for those times steps with the values of the active time step. Each time a landslide has been positive

  13. High-performance web services for querying gene and variant annotation.

    PubMed

    Xin, Jiwen; Mark, Adam; Afrasiabi, Cyrus; Tsueng, Ginger; Juchler, Moritz; Gopal, Nikhil; Stupp, Gregory S; Putman, Timothy E; Ainscough, Benjamin J; Griffith, Obi L; Torkamani, Ali; Whetzel, Patricia L; Mungall, Christopher J; Mooney, Sean D; Su, Andrew I; Wu, Chunlei

    2016-05-06

    Efficient tools for data management and integration are essential for many aspects of high-throughput biology. In particular, annotations of genes and human genetic variants are commonly used but highly fragmented across many resources. Here, we describe MyGene.info and MyVariant.info, high-performance web services for querying gene and variant annotation information. These web services are currently accessed more than three million times permonth. They also demonstrate a generalizable cloud-based model for organizing and querying biological annotation information. MyGene.info and MyVariant.info are provided as high-performance web services, accessible at http://mygene.info and http://myvariant.info . Both are offered free of charge to the research community.

  14. Web Service Architecture Framework for Embedded Devices

    ERIC Educational Resources Information Center

    Yanzick, Paul David

    2009-01-01

    The use of Service Oriented Architectures, namely web services, has become a widely adopted method for transfer of data between systems across the Internet as well as the Enterprise. Adopting a similar approach to embedded devices is also starting to emerge as personal devices and sensor networks are becoming more common in the industry. This…

  15. Bioinformatics data distribution and integration via Web Services and XML.

    PubMed

    Li, Xiao; Zhang, Yizheng

    2003-11-01

    It is widely recognized that exchange, distribution, and integration of biological data are the keys to improve bioinformatics and genome biology in post-genomic era. However, the problem of exchanging and integrating biology data is not solved satisfactorily. The eXtensible Markup Language (XML) is rapidly spreading as an emerging standard for structuring documents to exchange and integrate data on the World Wide Web (WWW). Web service is the next generation of WWW and is founded upon the open standards of W3C (World Wide Web Consortium) and IETF (Internet Engineering Task Force). This paper presents XML and Web Services technologies and their use for an appropriate solution to the problem of bioinformatics data exchange and integration.

  16. Pathosphere.org: Pathogen Detection and Characterization Through a Web-based, Open-source Informatics Platform

    DTIC Science & Technology

    2015-12-29

    human), Homo sapiens chromosome (human), Mus_musculus ( rodent ), Sus scrofa (pig), mitochondrion genome, and Xenopus laevis (frog) . The taxonomy... Amazon Web Services. PLoS Comput Biol 2011, 7:e1002147. 10. Briese T, Paweska JT, McMullan LK, Hutchison SK, Street C, Palacios G, Khristova ML...human enterovirus C genotypes found in respiratory samples from Peru . J Gen Virol 2013, 94(Pt 1):120–7. 54. Jacob ST, Crozier I, Schieffelin JS

  17. SOAP based web services and their future role in VO projects

    NASA Astrophysics Data System (ADS)

    Topf, F.; Jacquey, C.; Génot, V.; Cecconi, B.; André, N.; Zhang, T. L.; Kallio, E.; Lammer, H.; Facsko, G.; Stöckler, R.; Khodachenko, M.

    2011-10-01

    Modern state-of-the-art web services are from crucial importance for the interoperability of different VO tools existing in the planetary community. SOAP based web services assure the interconnectability between different data sources and tools by providing a common protocol for communication. This paper will point out a best practice approach with the Automated Multi-Dataset Analysis Tool (AMDA) developed by CDPP, Toulouse and the provision of VEX/MAG data from a remote database located at IWF, Graz. Furthermore a new FP7 project IMPEx will be introduced with a potential usage example of AMDA web services in conjunction with simulation models.

  18. A Highly Scalable Data Service (HSDS) using Cloud-based Storage Technologies for Earth Science Data

    NASA Astrophysics Data System (ADS)

    Michaelis, A.; Readey, J.; Votava, P.; Henderson, J.; Willmore, F.

    2017-12-01

    Cloud based infrastructure may offer several key benefits of scalability, built in redundancy, security mechanisms and reduced total cost of ownership as compared with a traditional data center approach. However, most of the tools and legacy software systems developed for online data repositories within the federal government were not developed with a cloud based infrastructure in mind and do not fully take advantage of commonly available cloud-based technologies. Moreover, services bases on object storage are well established and provided through all the leading cloud service providers (Amazon Web Service, Microsoft Azure, Google Cloud, etc…) of which can often provide unmatched "scale-out" capabilities and data availability to a large and growing consumer base at a price point unachievable from in-house solutions. We describe a system that utilizes object storage rather than traditional file system based storage to vend earth science data. The system described is not only cost effective, but shows a performance advantage for running many different analytics tasks in the cloud. To enable compatibility with existing tools and applications, we outline client libraries that are API compatible with existing libraries for HDF5 and NetCDF4. Performance of the system is demonstrated using clouds services running on Amazon Web Services.

  19. A New User Interface for On-Demand Customizable Data Products for Sensors in a SensorWeb

    NASA Technical Reports Server (NTRS)

    Mandl, Daniel; Cappelaere, Pat; Frye, Stuart; Sohlberg, Rob; Ly, Vuong; Chien, Steve; Sullivan, Don

    2011-01-01

    A SensorWeb is a set of sensors, which can consist of ground, airborne and space-based sensors, interoperating in an automated or autonomous collaborative manner. The NASA SensorWeb toolbox, developed at NASA/GSFC in collaboration with NASA/JPL, NASA/Ames and other partners, is a set of software and standards that (1) enables users to create virtual private networks of sensors over open networks; (2) provides the capability to orchestrate their actions; (3) provides the capability to customize the output data products; and (4) enables automated delivery of the data products to the user's desktop. A recent addition to the SensorWeb Toolbox is a new user interface, together with web services co-resident with the sensors, to enable rapid creation, loading and execution of new algorithms for processing sensor data. The web service, along with the user interface, follows the Open Geospatial Consortium (OGC) standard called Web Coverage Processing Service (WCPS). This presentation will detail the prototype that was built and how the WCPS was tested against a HyspIRI flight testbed and an elastic computation cloud on the ground with EO-1 data. HyspIRI is a future NASA decadal mission. The elastic computation cloud stores EO-1 data and runs software similar to that behind Amazon's online shopping.
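
    The WCPS interface named above accepts declarative queries over coverages; the sketch below posts an illustrative query with Python's requests library, with the endpoint URL and coverage name invented for the example.

        # Sketch: submitting a WCPS query over HTTP; endpoint and coverage name are hypothetical.
        import requests

        wcps_query = """
        for c in (EO1_HYPERION_SCENE)
        return encode(c[Lat(34.0:34.5), Long(-118.5:-118.0)], "image/tiff")
        """

        resp = requests.post("http://example.org/sensorweb/wcps",  # hypothetical endpoint
                             data={"query": wcps_query})
        resp.raise_for_status()
        with open("subset.tif", "wb") as fh:
            fh.write(resp.content)         # the encoded coverage subset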

  20. Fragmentation of Andes-to-Amazon connectivity by hydropower dams

    PubMed Central

    Anderson, Elizabeth P.; Jenkins, Clinton N.; Heilpern, Sebastian; Maldonado-Ocampo, Javier A.; Carvajal-Vallejos, Fernando M.; Encalada, Andrea C.; Rivadeneira, Juan Francisco; Hidalgo, Max; Cañas, Carlos M.; Ortega, Hernan; Salcedo, Norma; Maldonado, Mabel; Tedesco, Pablo A.

    2018-01-01

    Andes-to-Amazon river connectivity controls numerous natural and human systems in the greater Amazon. However, it is being rapidly altered by a wave of new hydropower development, the impacts of which have been previously underestimated. We document 142 dams existing or under construction and 160 proposed dams for rivers draining the Andean headwaters of the Amazon. Existing dams have fragmented the tributary networks of six of eight major Andean Amazon river basins. Proposed dams could result in significant losses in river connectivity in river mainstems of five of eight major systems—the Napo, Marañón, Ucayali, Beni, and Mamoré. With a newly reported 671 freshwater fish species inhabiting the Andean headwaters of the Amazon (>500 m), dams threaten previously unrecognized biodiversity, particularly among endemic and migratory species. Because Andean rivers contribute most of the sediment in the mainstem Amazon, losses in river connectivity translate to drastic alteration of river channel and floodplain geomorphology and associated ecosystem services. PMID:29399629

  1. Fragmentation of Andes-to-Amazon connectivity by hydropower dams.

    PubMed

    Anderson, Elizabeth P; Jenkins, Clinton N; Heilpern, Sebastian; Maldonado-Ocampo, Javier A; Carvajal-Vallejos, Fernando M; Encalada, Andrea C; Rivadeneira, Juan Francisco; Hidalgo, Max; Cañas, Carlos M; Ortega, Hernan; Salcedo, Norma; Maldonado, Mabel; Tedesco, Pablo A

    2018-01-01

    Andes-to-Amazon river connectivity controls numerous natural and human systems in the greater Amazon. However, it is being rapidly altered by a wave of new hydropower development, the impacts of which have been previously underestimated. We document 142 dams existing or under construction and 160 proposed dams for rivers draining the Andean headwaters of the Amazon. Existing dams have fragmented the tributary networks of six of eight major Andean Amazon river basins. Proposed dams could result in significant losses in river connectivity in river mainstems of five of eight major systems: the Napo, Marañón, Ucayali, Beni, and Mamoré. With a newly reported 671 freshwater fish species inhabiting the Andean headwaters of the Amazon (>500 m), dams threaten previously unrecognized biodiversity, particularly among endemic and migratory species. Because Andean rivers contribute most of the sediment in the mainstem Amazon, losses in river connectivity translate to drastic alteration of river channel and floodplain geomorphology and associated ecosystem services.

  2. Research of three level match method about semantic web service based on ontology

    NASA Astrophysics Data System (ADS)

    Xiao, Jie; Cai, Fang

    2011-10-01

    An important step in Web service application is the discovery of useful services. Traditional technologies such as UDDI and WSDL use keywords for service discovery, with the disadvantages of required user intervention, lack of semantic description, and low accuracy. To cope with these problems, OWL-S is introduced and extended with QoS attributes to describe the attributes and functions of Web services. A three-level service matching algorithm based on ontology and QoS is proposed in this paper. Our algorithm can match web services by utilizing the service profile and QoS parameters together with the inputs and outputs of the service. Simulation results show that it greatly enhances the speed of service matching while also guaranteeing high accuracy.
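
    A toy sketch of the three-level idea: score candidates on profile keywords, then on QoS attributes, then on input/output signatures, and rank by a weighted sum. All names, values and weights here are invented; the paper's actual algorithm operates over extended OWL-S descriptions.

        # Toy three-level matcher: profile, QoS and I/O signature scores (values invented).
        def match_score(request, candidate, weights=(0.3, 0.3, 0.4)):
            # Level 1: fraction of requested profile keywords found in the candidate.
            profile = len(request["keywords"] & candidate["keywords"]) / len(request["keywords"])
            # Level 2: fraction of QoS requirements the candidate satisfies.
            qos = sum(candidate["qos"].get(k, 0) >= v
                      for k, v in request["qos"].items()) / len(request["qos"])
            # Level 3: exact match of input/output types, the strictest level.
            io = 1.0 if (candidate["inputs"] == request["inputs"]
                         and candidate["outputs"] == request["outputs"]) else 0.0
            return sum(w * s for w, s in zip(weights, (profile, qos, io)))

        request = {"keywords": {"weather", "forecast"}, "qos": {"reliability": 0.9},
                   "inputs": ("city",), "outputs": ("forecast",)}
        candidate = {"keywords": {"weather", "forecast", "radar"}, "qos": {"reliability": 0.95},
                     "inputs": ("city",), "outputs": ("forecast",)}
        print(match_score(request, candidate))   # 1.0: all three levels match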

  3. Accessing the SEED genome databases via Web services API: tools for programmers.

    PubMed

    Disz, Terry; Akhter, Sajia; Cuevas, Daniel; Olson, Robert; Overbeek, Ross; Vonstein, Veronika; Stevens, Rick; Edwards, Robert A

    2010-06-14

    The SEED integrates many publicly available genome sequences into a single resource. The database contains accurate and up-to-date annotations based on the subsystems concept, which leverages clustering between genomes and other clues to accurately and efficiently annotate microbial genomes. The backend is used as the foundation for many genome annotation tools, such as the Rapid Annotation using Subsystems Technology (RAST) server for whole genome annotation, the metagenomics RAST server for random community genome annotations, and the annotation clearinghouse for exchanging annotations from different resources. In addition to a web user interface, the SEED also provides a Web-services-based API for programmatic access to the data in the SEED, allowing the development of third-party tools and mash-ups. The currently exposed Web services encompass over forty different methods for accessing data related to microbial genome annotations. The Web services provide comprehensive access to the database back end, allowing any programmer access to the most consistent and accurate genome annotations available. The Web services are deployed using a platform-independent, service-oriented approach that allows the user to choose the most suitable programming platform for their application. Example code demonstrates that the Web services can be used to access the SEED using common bioinformatics programming languages such as Perl, Python, and Java. We present a novel approach to accessing the SEED database. Using Web services, a robust API for access to genomics data is provided without requiring large-volume downloads all at once. The API ensures timely access to the most current datasets available, including new genomes as soon as they come online.
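
    A hedged sketch of programmatic access in the spirit described above; the URL, path and parameters below are hypothetical placeholders rather than the SEED's documented method names, which are defined by the service itself.

        # Sketch only: the URL, path and parameters are hypothetical, not the SEED's real API.
        import requests

        resp = requests.get(
            "http://servers.example.org/seed/genome_annotations",  # hypothetical address
            params={"genome_id": "83333.1", "format": "json"},     # hypothetical parameters
        )
        resp.raise_for_status()
        for feature in resp.json().get("features", []):
            print(feature.get("id"), feature.get("function"))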

  4. CMR Catalog Service for the Web

    NASA Technical Reports Server (NTRS)

    Newman, Doug; Mitchell, Andrew

    2016-01-01

    With the impending retirement of the Global Change Master Directory (GCMD) Application Programming Interfaces (APIs), the Common Metadata Repository (CMR) was charged with providing a collection-level Catalog Service for the Web (CSW) that provided the same level of functionality as GCMD. This talk describes the capabilities of the CMR CSW API, with particular reference to the support of the Committee on Earth Observation Satellites (CEOS) Working Group on Information Systems and Services (WGISS) Integrated Catalog (CWIC).
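
    A sketch of a collection-level CSW search using the OWSLib client library; the endpoint URL below is an assumption for illustration, and the exact address should be taken from the CMR documentation.

        # Sketch: querying a CSW catalog with OWSLib; the endpoint URL is an assumption.
        from owslib.csw import CatalogueServiceWeb

        csw = CatalogueServiceWeb("https://cmr.example.gov/csw/collections")  # assumed URL
        csw.getrecords2(maxrecords=5)        # fetch the first few collection records
        for rec_id, rec in csw.records.items():
            print(rec_id, "-", rec.title)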

  5. Web Services and Data Enhancements at the Northern California Earthquake Data Center

    NASA Astrophysics Data System (ADS)

    Neuhauser, D. S.; Zuzlewski, S.; Lombard, P. N.; Allen, R. M.

    2013-12-01

    The Northern California Earthquake Data Center (NCEDC) provides data archive and distribution services for seismological and geophysical data sets that encompass northern California. The NCEDC is enhancing its ability to deliver rapid information through Web Services. NCEDC Web Services use well-established web server and client protocols and REST software architecture to allow users to easily make queries using web browsers or simple program interfaces and to receive the requested data in real time rather than through batch or email-based requests. Data are returned to the user in the appropriate format such as XML, RESP, simple text, or MiniSEED depending on the service and selected output format. The NCEDC offers the following web services that are compliant with the International Federation of Digital Seismograph Networks (FDSN) web services specifications: (1) fdsn-dataselect: time series data delivered in MiniSEED format, (2) fdsn-station: station and channel metadata and time series availability delivered in StationXML format, (3) fdsn-event: earthquake event information delivered in QuakeML format. In addition, the NCEDC offers the following IRIS-compatible web services: (1) sacpz: provides channel gains, poles, and zeros in SAC format, (2) resp: provides channel response information in RESP format, (3) dataless: provides station and channel metadata in Dataless SEED format. The NCEDC is also developing a web service to deliver time series from pre-assembled event waveform gathers. The NCEDC has waveform gathers for ~750,000 northern and central California events from 1984 to the present, many of which were created by the USGS NCSN prior to the establishment of the joint NCSS (Northern California Seismic System). We are currently adding waveforms to these older event gathers with time series from the UCB networks and other networks with waveforms archived at the NCEDC, and ensuring that the waveforms for each channel in the event gathers have the highest
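
    Because the FDSN web service interfaces are standardized, a waveform request reduces to a single HTTP query. The sketch below calls the NCEDC fdsn-dataselect service with Python's requests library; the station and time window are chosen only for illustration.

        # Sketch: fetching MiniSEED from an FDSN dataselect service.
        import requests

        params = {
            "net": "BK", "sta": "CMB", "loc": "--", "cha": "BHZ",  # network/station/location/channel
            "starttime": "2013-01-01T00:00:00",
            "endtime": "2013-01-01T00:10:00",
        }
        resp = requests.get("https://service.ncedc.org/fdsnws/dataselect/1/query", params=params)
        resp.raise_for_status()
        with open("cmb_bhz.mseed", "wb") as fh:
            fh.write(resp.content)           # MiniSEED bytes, per the fdsn-dataselect spec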

  6. Providing web-based mental health services to at-risk women.

    PubMed

    Lipman, Ellen L; Kenny, Meghan; Marziali, Elsa

    2011-08-19

    We examined the feasibility of providing web-based mental health services, including synchronous internet video conferencing of an evidence-based support/education group, to at-risk women, specifically poor lone mothers. The objectives of this study were to: (i) adapt a face-to-face support/education group intervention to a web-based format for lone mothers, and (ii) evaluate lone mothers' response to web-based services, including an online video conferencing group intervention program. Participating mothers were recruited through advertisements. To adapt the face-to-face intervention to a web-based format, we evaluated participant motivation through focus group/key informant interviews (n = 7), adapted the intervention training manual for a web-based environment and provided a computer training manual. To evaluate response to web-based services, we provided the intervention to two groups of lone mothers (n = 15). Pre-post quantitative evaluation of mood, self-esteem, social support and parenting was done. Post-intervention follow-up interviews explored responses to the group and to using technology to access a health service. Participants received $20 per occasion of data collection. Interviews were taped and transcribed, and content analysis was used to code and interpret the data. Adherence to the intervention protocol was evaluated. Mothers participating in this project experienced multiple difficulties, including financial and mood problems. We adapted the intervention training manual for use in a web-based group environment and ensured adherence to the intervention protocol based on viewing videoconferencing group sessions and discussion with the leaders. Participant responses to the group intervention included decreased isolation, and increased knowledge and confidence in themselves and their parenting; the responses closely matched those of mothers who obtained the same service in face-to-face groups. Pre- and post-group quantitative evaluations did not show

  7. A Security Architecture for Grid-enabling OGC Web Services

    NASA Astrophysics Data System (ADS)

    Angelini, Valerio; Petronzio, Luca

    2010-05-01

    In the proposed presentation we describe an architectural solution for enabling secure access to Grids, and possibly other large-scale on-demand processing infrastructures, through OGC (Open Geospatial Consortium) Web Services (OWS). This work has been carried out in the context of the security thread of the G-OWS Working Group. G-OWS (gLite enablement of OGC Web Services) is an international open initiative started in 2008 by the European CYCLOPS, GENESI-DR, and DORII Project Consortia in order to collect and coordinate experiences in the enablement of OWSs on top of the gLite Grid middleware. G-OWS investigates the problem of the development of Spatial Data and Information Infrastructures (SDI and SII) based on Grid/Cloud capacity in order to enable Earth Science applications and tools. Concerning security issues, the integration of OWS-compliant infrastructures and gLite Grids needs to address relevant challenges, due to their respective design principles. In fact, OWSs are part of a Web-based architecture that delegates security aspects to other specifications, whereas the gLite middleware implements the Grid paradigm with a strong security model (the gLite Grid Security Infrastructure: GSI). In our work we propose a Security Architectural Framework allowing the seamless use of Grid-enabled OGC Web Services through the federation of existing security systems (mostly web-based) with the gLite GSI. This is made possible by mediating between different security realms, whose mutual trust is established in advance during the deployment of the system itself. Our architecture is composed of three different security tiers: the user's security system, a specific G-OWS security system, and the gLite Grid Security Infrastructure. Applying the separation-of-concerns principle, each of these tiers is responsible for controlling access to a well-defined resource set, respectively: the user's organization resources, the geospatial resources and services, and the Grid

  8. Pharmaceutical services for endemic situations in the Brazilian Amazon: organization of services and prescribing practices for Plasmodium vivax and Plasmodium falciparum non-complicated malaria in high-risk municipalities

    PubMed Central

    2011-01-01

    Background In spite of the fact that pharmaceutical services are an essential component of all malaria programmes, the quality of these services has been little explored in the literature. This study presents the first results of the application of an evaluation model of pharmaceutical services in high-risk municipalities of the Amazon region, focusing on indicators regarding the organization of services and prescribing according to national guidelines. Methods A theoretical framework of pharmaceutical services for non-complicated malaria was built based on the Rapid Evaluation Method (WHO). The framework included the organization of services and prescribing, among other activities. The study was carried out in 15 primary health facilities in six high-risk municipalities of the Brazilian Amazon. Individuals with malaria aged ≥ 15 years were approached and data were collected using specific instruments. Data were checked by independent reviewers and fed into a database through double entry. Descriptive variables were analyzed. Results A copy of the official treatment guideline was found in 80% of the facilities; 67% presented an environment for receiving and prescribing patients. Re-supply of stocks followed different timelines; no facilities adhered to forecasting methods for stock management. No shortages or expired anti-malarials were observed, but overstock was a common finding. In 86.7% of facilities, the average score for good storage practices was 48%. The time between diagnosis and treatment was zero days. Of 601 patients interviewed, 453 were diagnosed with Plasmodium vivax; of these, 99.3% were prescribed the first-line scheme. Different therapeutic schemes were given to Plasmodium falciparum patients. Twenty-eight (4.6%) out of 601 were prescribed regimens not listed in the national guideline. Only 5.7% of individuals received a prescription or a written instruction of any kind. Conclusions The results show that while the diagnostic procedure is well established and functioning in

  9. Quality and Business Offer Driven Selection of Web Services for Compositions

    NASA Astrophysics Data System (ADS)

    D'Mello, Demian Antony; Ananthanarayana, V. S.

    Service composition makes use of existing services to produce a new value-added service to execute a complex business process. Service discovery finds the suitable services (candidates) for the various tasks of the composition based on their functionality. Service selection in composition assigns the best candidate to each task of the pre-structured composition plan based on non-functional properties. In this paper, we propose a broker-based architecture for QoS- and business-offer-aware Web service compositions. The broker architecture facilitates the registration of a new composite service into three different registries. The broker publishes service information into the service registry and QoS into the QoS registry. The business offers of the composite Web service are published into a separate repository called the business offer (BO) registry. The broker employs a mechanism for the optimal assignment of Web services to the individual tasks of the composition. The assignment is based on the composite service provider's (CSP) variety of requirements defined on the QoS and business offers. The broker also computes the QoS of the resulting composition and provides useful information for the CSP to publish their business offers.
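
    A toy sketch of the broker's assignment step: for each task, choose the functionally suitable candidate that maximizes a combined QoS-and-business-offer utility. The utility function and registry contents are invented stand-ins for the paper's three-registry design.

        # Toy broker assignment: one best candidate per task by a combined utility (values invented).
        def utility(candidate, qos_weight=0.7, offer_weight=0.3):
            # Higher reliability per unit cost is better; larger discount offers are better.
            qos = candidate["reliability"] / max(candidate["cost"], 0.01)
            return qos_weight * qos + offer_weight * candidate["discount"]

        registry = {                         # stand-in for the service/QoS/BO registries
            "payment": [{"name": "PayA", "reliability": 0.99, "cost": 2.0, "discount": 0.1},
                        {"name": "PayB", "reliability": 0.95, "cost": 1.0, "discount": 0.0}],
            "shipping": [{"name": "ShipX", "reliability": 0.90, "cost": 1.5, "discount": 0.2}],
        }

        assignment = {task: max(candidates, key=utility)["name"]
                      for task, candidates in registry.items()}
        print(assignment)                    # {'payment': 'PayB', 'shipping': 'ShipX'}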

  10. Automatically exposing OpenLifeData via SADI semantic Web Services.

    PubMed

    González, Alejandro Rodríguez; Callahan, Alison; Cruz-Toledo, José; Garcia, Adrian; Egaña Aranguren, Mikel; Dumontier, Michel; Wilkinson, Mark D

    2014-01-01

    Two distinct trends are emerging with respect to how data is shared, collected, and analyzed within the bioinformatics community. First, Linked Data, exposed as SPARQL endpoints, promises to make data easier to collect and integrate by moving towards the harmonization of data syntax, descriptive vocabularies, and identifiers, as well as providing a standardized mechanism for data access. Second, Web Services, often linked together into workflows, normalize data access and create transparent, reproducible scientific methodologies that can, in principle, be re-used and customized to suit new scientific questions. Constructing queries that traverse semantically rich Linked Data requires substantial expertise, yet traditional RESTful or SOAP Web Services cannot adequately describe the content of a SPARQL endpoint. We propose that content-driven Semantic Web Services can enable facile discovery of Linked Data, independent of their location. We use a well-curated Linked Dataset, OpenLifeData, and utilize its descriptive metadata to automatically configure a series of more than 22,000 Semantic Web Services that expose all of its content via the SADI set of design principles. The OpenLifeData SADI services are discoverable via queries to the SHARE registry and easy to integrate into new or existing bioinformatics workflows and analytical pipelines. We demonstrate the utility of this system through a comparison of Web Service-mediated data access with traditional SPARQL, and note that this approach not only simplifies data retrieval but simultaneously provides protection against resource-intensive queries. We show, through a variety of different clients and examples of varying complexity, that data from the myriad OpenLifeData services can be recovered without any need for prior knowledge of the content or structure of the SPARQL endpoints. We also demonstrate that, via clients such as SHARE, the complexity of federated SPARQL queries is dramatically reduced.

  11. What We Can Learn from Amazon for Clinical Decision Support Systems.

    PubMed

    Abid, Sidra; Keshavjee, Karim; Karim, Arsalan; Guergachi, Aziz

    2017-01-01

    Health care continues to lag behind other industries, such as retail and financial services, in the use of decision-support-like tools. Amazon is particularly prolific in its use of advanced predictive and prescriptive analytics to assist its customers to purchase more, while increasing satisfaction, retention, repeat purchases and loyalty. How can we do the same in health care? In this paper, we explore various elements of the Amazon website and Amazon's data science and big data practices to gather inspiration for re-designing clinical decision support in the health care sector. For each Amazon element we identified, we present one or more clinical applications to help us better understand where Amazon's.

  12. SBMLmod: a Python-based web application and web service for efficient data integration and model simulation.

    PubMed

    Schäuble, Sascha; Stavrum, Anne-Kristin; Bockwoldt, Mathias; Puntervoll, Pål; Heiland, Ines

    2017-06-24

    Systems Biology Markup Language (SBML) is the standard model representation and description language in systems biology. Enriching and analysing systems biology models by integrating the multitude of available data increases the predictive power of these models. This may be a daunting task, which commonly requires bioinformatic competence and scripting. We present SBMLmod, a Python-based web application and service that automates the integration of high-throughput data into SBML models. Subsequent steady-state analysis is readily accessible via the web service COPASIWS. We illustrate the utility of SBMLmod by integrating gene expression data from different healthy tissues as well as from a cancer dataset into a previously published model of mammalian tryptophan metabolism. SBMLmod is a user-friendly platform for model modification and simulation. The web application is available at http://sbmlmod.uit.no, whereas the WSDL definition file for the web service is accessible via http://sbmlmod.uit.no/SBMLmod.wsdl. Furthermore, the entire package can be downloaded from https://github.com/MolecularBioinformatics/sbml-mod-ws. We envision that SBMLmod will make automated model modification and simulation available to a broader research community.
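
    Given the WSDL noted above, the service can in principle be driven from any SOAP client. The sketch below uses the zeep library to load the published WSDL and list the operations it defines, without assuming any particular operation name.

        # Sketch: connecting to the SBMLmod SOAP service via its published WSDL.
        from zeep import Client

        client = Client("http://sbmlmod.uit.no/SBMLmod.wsdl")
        client.wsdl.dump()   # prints the services, ports and operations the WSDL defines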

  13. The Business Information Services: Old-Line Online Moves to the Web.

    ERIC Educational Resources Information Center

    O'Leary, Mick

    1997-01-01

    Although the availability of free information on the World Wide Web has placed traditional, fee-based proprietary online services on the defensive, most major online business services are now on the Web. Highlights several business information providers: Profound, NewsNet and ProQuest Direct, Dow Jones and Wall Street Journal Interactive Edition,…

  14. A new reference implementation of the PSICQUIC web service.

    PubMed

    del-Toro, Noemi; Dumousseau, Marine; Orchard, Sandra; Jimenez, Rafael C; Galeota, Eugenia; Launay, Guillaume; Goll, Johannes; Breuer, Karin; Ono, Keiichiro; Salwinski, Lukasz; Hermjakob, Henning

    2013-07-01

    The Proteomics Standard Initiative Common QUery InterfaCe (PSICQUIC) specification was created by the Human Proteome Organization Proteomics Standards Initiative (HUPO-PSI) to enable computational access to molecular-interaction data resources by means of a standard Web Service and query language. Currently providing >150 million binary interaction evidences from 28 servers globally, the PSICQUIC interface allows the concurrent search of multiple molecular-interaction information resources using a single query. Here, we present an extension of the PSICQUIC specification (version 1.3), which has been released to be compliant with the enhanced standards in molecular interactions. The new release also includes a new reference implementation of the PSICQUIC server available to data providers. It offers augmented web service capabilities and improves the user experience. PSICQUIC has been running for almost 5 years, with a user base growing from only 4 data providers to 28 (April 2013), allowing access to 151 310 109 binary interactions. The power of this web service is shown in the PSICQUIC View web application, an example of how to simultaneously query, browse and download results from the different PSICQUIC servers. This application is free and open to all users with no login requirement (http://www.ebi.ac.uk/Tools/webservices/psicquic/view/main.xhtml).
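
    In PSICQUIC's REST flavor, a query is a simple URL carrying a MIQL expression. The sketch below searches one provider for interactions involving BRCA2; the base URL follows the common PSICQUIC pattern but should be confirmed against the PSICQUIC registry.

        # Sketch: a MIQL query against a PSICQUIC REST endpoint; verify the base URL via the registry.
        import requests

        base = "https://www.ebi.ac.uk/Tools/webservices/psicquic/intact/webservices/current/search"
        resp = requests.get(f"{base}/query/brca2", params={"format": "tab25"})
        resp.raise_for_status()
        for line in resp.text.splitlines()[:5]:   # PSI-MI TAB 2.5: one interaction per line
            print(line.split("\t")[:2])           # identifiers of interactors A and B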

  15. Climate Model Diagnostic Analyzer Web Service System

    NASA Astrophysics Data System (ADS)

    Lee, S.; Pan, L.; Zhai, C.; Tang, B.; Kubar, T. L.; Li, J.; Zhang, J.; Wang, W.

    2015-12-01

    Both the National Research Council Decadal Survey and the latest Intergovernmental Panel on Climate Change Assessment Report stressed the need for comprehensive and innovative evaluation of climate models with the synergistic use of global satellite observations in order to improve our weather and climate simulation and prediction capabilities. The abundance of satellite observations for fundamental climate parameters and the availability of coordinated model outputs from CMIP5 for the same parameters offer a great opportunity to understand and diagnose model biases in climate models. In addition, the Obs4MIPs efforts have created several key global observational datasets that are readily usable for model evaluations. However, a model diagnostic evaluation process requires physics-based multi-variable comparisons that typically involve large-volume and heterogeneous datasets, making them both computationally- and data-intensive. In response, we have developed a novel methodology to diagnose model biases in contemporary climate models and have implemented the methodology as a web-service-based, cloud-enabled, provenance-supported climate-model evaluation system. The evaluation system is named Climate Model Diagnostic Analyzer (CMDA), and is the product of the research and technology development investments of several current and past NASA ROSES programs. The current technologies and infrastructure of CMDA are designed and selected to address several technical challenges that the Earth science modeling and model analysis community faces in evaluating and diagnosing climate models. In particular, we have three key technology components: (1) diagnostic analysis methodology; (2) web-service-based, cloud-enabled technology; (3) provenance-supported technology. The diagnostic analysis methodology includes random forest feature importance ranking, conditional probability distribution functions, conditional sampling, and time-lagged correlation maps. We have implemented the

  16. Extravagance in the commons: Resource exploitation and the frontiers of ecosystem service depletion in the Amazon estuary.

    PubMed

    de Araujo Barbosa, Caio C; Atkinson, Peter M; Dearing, John A

    2016-04-15

    Estuaries hold major economic potential due to their strategic location close to seas and inland waterways, thereby supporting intense economic activity. The increasing pace of human development in coastal deltas over the past five decades has also strained local resources and produced extensive changes across both social and ecological systems. The Amazon estuary is located in the Amazon Basin, North Brazil, the largest river basin on Earth and also one of the least understood. A considerable segment of the population living in the estuary is directly dependent on the local extraction of natural resources for their livelihood. Sparsely inhabited areas may be exploited with few negative consequences for the environment. However, recent and increasing pressure on ecosystem services is magnified by a combination of factors such as governance, currency exchange rates, and exports of beef and forest products. Here we present a cross-methodological approach to identifying the political frontiers of forest cover change in the estuary, with consequences for ecosystem service loss. We used a combination of data from earth observation satellites, the ecosystem service literature, and official government statistics to produce spatially explicit relationships linking Green Vegetation Cover to the availability of ecosystem services provided by forests in the estuary. Our results show that continuous changes in land use/cover and in the state of the economy have contributed significantly to changes in key ecosystem services, such as carbon sequestration, climate regulation, and the availability of timber, over the last thirty years.

  17. Design and implementation of CUAHSI WaterML and WaterOneFlow Web Services

    NASA Astrophysics Data System (ADS)

    Valentine, D. W.; Zaslavsky, I.; Whitenack, T.; Maidment, D.

    2007-12-01

    WaterOneFlow is a term for a group of web services created by and for the Consortium of Universities for the Advancement of Hydrologic Science, Inc. (CUAHSI) community. CUAHSI web services facilitate the retrieval of hydrologic observations information from online data sources using the SOAP protocol. CUAHSI Water Markup Language (below referred to as WaterML) is an XML schema defining the format of messages returned by the WaterOneFlow web services.

  18. Prediction of toxicity and comparison of alternatives using WebTEST (Web-services Toxicity Estimation Software Tool)

    EPA Science Inventory

    A Java-based web service is being developed within the US EPA’s Chemistry Dashboard to provide real time estimates of toxicity values and physical properties. WebTEST can generate toxicity predictions directly from a simple URL which includes the endpoint, QSAR method, and ...

  19. Design of an Integrated Web Services Brokering System

    DTIC Science & Technology

    2009-01-01

    When a new Web service is corralled by the IWB, its service description is broken into lexemes and matched to terms in the ontology. The ontology is manually... such data for the atmosphere and ocean. NOAA, in particular, provides a wide range of data including weather information and ocean data on reefs and tides.

  20. Some Programs Should Not Run on Laptops - Providing Programmatic Access to Applications Via Web Services

    NASA Astrophysics Data System (ADS)

    Gupta, V.; Gupta, N.; Gupta, S.; Field, E.; Maechling, P.

    2003-12-01

    Modern laptop computers, and personal computers, can provide capabilities that are, in many ways, comparable to workstations or departmental servers. However, this doesn't mean we should run all computations on our local computers. We have identified several situations in which it is preferable to implement our seismological application programs in a distributed, server-based computing model. In this model, application programs on the user's laptop, or local computer, invoke programs that run on an organizational server, and the results are returned to the invoking system. Situations in which a server-based architecture may be preferred include: (a) a program is written in a language, or written for an operating environment, that is unsupported on the local computer; (b) software libraries or utilities required to execute a program are not available on the user's computer; (c) a computational program is physically too large, or computationally too expensive, to run on a user's computer; (d) a user community wants to enforce a consistent method of performing a computation by standardizing on a single implementation of a program; and (e) the computational program may require current information that is not available to all client computers. Until recently, distributed, server-based computational capabilities were implemented using client/server architectures. In these architectures, client programs were often written in the same language, and executed in the same computing environment, as the servers. Recently, a new distributed computational model, called Web Services, has been developed. Web Services are based on Internet standards such as XML, SOAP, WSDL, and UDDI. Web Services offer the promise of platform- and language-independent distributed computing. To investigate this new computational model, and to provide useful services to the SCEC community, we have implemented several computational and utility programs using a Web Service architecture. We have

  1. Persistent identifiers for web service requests relying on a provenance ontology design pattern

    NASA Astrophysics Data System (ADS)

    Car, Nicholas; Wang, Jingbo; Wyborn, Lesley; Si, Wei

    2016-04-01

    Delivering provenance information for datasets produced from static inputs is relatively straightforward: we represent the processing actions and data flow using provenance ontologies and link to copies of the inputs stored in repositories. If appropriate detail is given, the provenance information can then describe what actions have occurred (transparency) and enable reproducibility. When web service-generated data is used by a process to create a dataset instead of static inputs, we need to use more sophisticated provenance representations of the web service request, as we can no longer simply link to data stored in a repository. A graph-based provenance representation, such as the W3C's PROV standard, can be used to model the web service request both as a single conceptual dataset and as a small workflow with a number of components within the same provenance report. This dual representation does more than just allow simplified or detailed views of a dataset's production to be used where appropriate. It also allows persistent identifiers to be assigned to instances of web service requests, thus enabling one form of dynamic data citation, and allows those identifiers to resolve to whatever level of detail implementers think appropriate in order for that web service request to be reproduced. In this presentation we detail our reasoning in representing web service requests as small workflows. In outline, this stems from the idea that web service requests are perdurant things and, in order to most easily persist knowledge of them for provenance, we should represent them as a nexus of relationships between endurant things, such as datasets and knowledge of particular system types, as these endurant things are far easier to persist. We also describe the ontology design pattern that we use to represent workflows in general and how we apply it to different types of web service requests. We give examples of specific web service request instances that were made by systems
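
    A minimal sketch of the dual representation idea using the community prov package: the same web service request is modelled both as a single activity that generates a dataset and, via sub-activities, as a small workflow. All identifiers are invented for illustration.

        # Sketch with the 'prov' package: a web service request as both one activity
        # and a two-step workflow; all identifiers are invented.
        from prov.model import ProvDocument

        doc = ProvDocument()
        doc.add_namespace("ex", "http://example.org/")

        dataset = doc.entity("ex:dataset-123")        # the generated dataset, with a PID
        request = doc.activity("ex:wfs-request-42")   # the request as one conceptual activity
        doc.wasGeneratedBy(dataset, request)

        # The same request expanded as a small workflow of component steps.
        query = doc.activity("ex:wfs-request-42-query")
        encode = doc.activity("ex:wfs-request-42-encode")
        doc.wasInformedBy(encode, query)              # the encode step follows the query step

        print(doc.get_provn())                        # PROV-N serialization of the graph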

  2. Data partitioning enables the use of standard SOAP Web Services in genome-scale workflows.

    PubMed

    Sztromwasser, Pawel; Puntervoll, Pål; Petersen, Kjell

    2011-07-26

    Biological databases and computational biology tools are provided by research groups around the world and made accessible on the Web. Combining these resources is a common practice in bioinformatics, but integration of heterogeneous and often distributed tools and datasets can be challenging. To date, this challenge has commonly been addressed in a pragmatic way, by tedious and error-prone scripting. Recently, however, a more reliable technique has been identified and proposed as the platform that would tie together bioinformatics resources: Web Services. In the last decade, Web Services have spread widely in bioinformatics and earned the status of a recommended technology. However, in the era of high-throughput experimentation, a major concern regarding Web Services is their ability to handle large-scale data traffic. We propose a stream-like communication pattern for standard SOAP Web Services that enables an efficient flow of large data traffic between a workflow orchestrator and Web Services. We evaluated the data-partitioning strategy by comparing it with typical communication patterns on an example pipeline for genomic sequence annotation. The results show that data partitioning lowers the resource demands of services and increases their throughput, which in consequence makes it possible to execute in silico experiments at genome scale using standard SOAP Web Services and workflows. As a proof of principle, we annotated an RNA-seq dataset using a plain BPEL workflow engine.
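
    The data-partitioning pattern can be sketched generically: split a large input into chunks sized for the service, call the service once per chunk, and merge the partial results. The chunking helper below is plain Python, and annotate() is a stand-in for a SOAP operation with a bounded message size.

        # Generic data-partitioning sketch: stream a large input through a size-limited service.
        def partition(sequence, chunk_size):
            """Yield successive fixed-size chunks of the input."""
            for start in range(0, len(sequence), chunk_size):
                yield sequence[start:start + chunk_size]

        def annotate(chunk):
            """Stand-in for a size-limited SOAP operation."""
            return [c.lower() for c in chunk]    # placeholder 'annotation'

        genome = "ATGGCGTTTAACGGA" * 1000        # toy stand-in for genome-scale input
        results = []
        for chunk in partition(genome, chunk_size=500):
            results.extend(annotate(chunk))      # one service call per partition
        print(len(results))                      # all partial results merged: 15000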

  3. The EarthServer Geology Service: web coverage services for geosciences

    NASA Astrophysics Data System (ADS)

    Laxton, John; Sen, Marcus; Passmore, James

    2014-05-01

    The EarthServer FP7 project is implementing web coverage services using the OGC WCS and WCPS standards for a range of earth science domains: cryospheric, atmospheric, oceanographic, planetary, and geological. BGS is providing the geological service (http://earthserver.bgs.ac.uk/). Geoscience has used remotely sensed data from satellites and aircraft for a considerable time, but other areas of the geosciences are less familiar with the use of coverage data. This is rapidly changing with the development of new sensor networks and the move from geological maps to geological spatial models. The BGS geology service is designed initially to address two coverage-data use cases and three levels of data access restriction. Databases of remotely sensed data are typically very large and commonly held offline, making it time-consuming for users to assess and then download data. The service is designed to allow the spatial selection, editing and display of Landsat and aerial photographic imagery, including band selection and contrast stretching. This enables users to rapidly view data, assess its usefulness for their purposes, and then enhance and download it if it is suitable. At present the service contains six-band Landsat 7 (Blue, Green, Red, NIR 1, NIR 2, MIR) and three-band false-colour aerial photography (NIR, green, blue), totalling around 1 TB. Increasingly, 3D spatial models are being produced in place of traditional geological maps. Models make explicit the spatial information that is implicit on maps and are thus seen as a better way of delivering geoscience information to non-geoscientists. However, web delivery of models, including the provision of suitable visualisation clients, has proved more challenging than delivering maps. The EarthServer geology service is delivering 35 surfaces as coverages, comprising the modelled superficial deposits of the Glasgow area. These can be viewed using a 3D web client developed in the EarthServer project by Fraunhofer. As well as remote sensed

  4. Providing web-based mental health services to at-risk women

    PubMed Central

    2011-01-01

    Background We examined the feasibility of providing web-based mental health services, including synchronous internet video conferencing of an evidence-based support/education group, to at-risk women, specifically poor lone mothers. The objectives of this study were to: (i) adapt a face-to-face support/education group intervention to a web-based format for lone mothers, and (ii) evaluate lone mothers' response to web-based services, including an online video conferencing group intervention program. Methods Participating mothers were recruited through advertisements. To adapt the face-to-face intervention to a web-based format, we evaluated participant motivation through focus group/key informant interviews (n = 7), adapted the intervention training manual for a web-based environment and provided a computer training manual. To evaluate response to web-based services, we provided the intervention to two groups of lone mothers (n = 15). Pre-post quantitative evaluation of mood, self-esteem, social support and parenting was done. Post-intervention follow-up interviews explored responses to the group and to using technology to access a health service. Participants received $20 per occasion of data collection. Interviews were taped and transcribed, and content analysis was used to code and interpret the data. Adherence to the intervention protocol was evaluated. Results Mothers participating in this project experienced multiple difficulties, including financial and mood problems. We adapted the intervention training manual for use in a web-based group environment and ensured adherence to the intervention protocol based on viewing videoconferencing group sessions and discussion with the leaders. Participant responses to the group intervention included decreased isolation, and increased knowledge and confidence in themselves and their parenting; the responses closely matched those of mothers who obtained the same service in face-to-face groups. Pre- and post-group quantitative

  5. Advancing Your Library's Web-Based Services. ERIC Digest.

    ERIC Educational Resources Information Center

    Feldman, Sari; Strobel, Tracy

    This digest discusses the development of World Wide Web-based services for libraries and provides examples from the Cleveland Public Library (CPL). The first section highlights the importance of developing such services, steps to be followed for a successful project, and the importance of having the goal of replicating and enhancing traditional…

  6. Towards Web Service-Based Educational Systems

    ERIC Educational Resources Information Center

    Sampson, Demetrios G.

    2005-01-01

    The need for designing the next generation of web service-based educational systems with the ability of integrating components from different tools and platforms is now recognised as the major challenge in advanced learning technologies. In this paper, we discuss this issue and we present the conceptual design of such environment, referred to as…

  7. The Use of RESTful Web Services in Medical Informatics and Clinical Research and Its Implementation in Europe.

    PubMed

    Aerts, Jozef

    2017-01-01

    RESTful web services are nowadays state-of-the-art for business transactions over the internet. They are, however, not widely used in medical informatics and clinical research, especially not in Europe. Our objectives were to make an inventory of RESTful web services that can be used in medical informatics and clinical research, including those that can help in patient empowerment in the DACH region and in Europe, and to develop some new RESTful web services for use in clinical research and regulatory review. A literature search on available RESTful web services was performed, and new RESTful web services were developed on an application server using the Java language. Most of the web services found originate from institutes and organizations in the USA, whereas no similar web services could be found that are made available by European organizations. New RESTful web services were developed for LOINC code lookup, for UCUM conversions and for use with CDISC standards. A comparison is made between "top down" and "bottom up" web services, the latter meant to answer concrete questions immediately. The lack of RESTful web services made available by European organizations in healthcare and medical informatics is striking. RESTful web services may in the near future play a major role in medical informatics and, when localized for the German language and other European languages, can help to considerably facilitate patient empowerment. This, however, requires an EU equivalent of the US National Library of Medicine.
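
    In the spirit of the LOINC and UCUM services described above, the sketch below shows what a REST-style unit conversion call could look like; the endpoint and parameter names are hypothetical, chosen only to illustrate the request/response shape.

        # Hypothetical REST unit-conversion call; endpoint and parameters are invented.
        import requests

        resp = requests.get(
            "https://example.org/rest/ucum/convert",   # hypothetical service URL
            params={"value": 5.4, "from": "mmol/L", "to": "mg/dL", "analyte": "glucose"},
        )
        resp.raise_for_status()
        print(resp.json())   # e.g. {"value": 97.3, "unit": "mg/dL"} under the assumed schema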

  8. Determinants of Corporate Web Services Adoption: A Survey of Companies in Korea

    ERIC Educational Resources Information Center

    Kim, Daekil

    2010-01-01

    Despite the growing interest and attention from Information Technology researchers and practitioners, empirical research on factors that influence an organization's likelihood of adoption of Web Services has been limited. This study identified the factors influencing Web Services adoption from the perspective of 151 South Korean firms. The…

  9. US Geoscience Information Network, Web Services for Geoscience Information Discovery and Access

    NASA Astrophysics Data System (ADS)

    Richard, S.; Allison, L.; Clark, R.; Coleman, C.; Chen, G.

    2012-04-01

    The US Geoscience Information Network has developed metadata profiles for interoperable catalog services based on ISO 19139 and OGC CSW 2.0.2. Currently, data services are being deployed for the US Dept. of Energy-funded National Geothermal Data System. These services utilize OGC Web Map Services, Web Feature Services, and THREDDS-served NetCDF for gridded datasets. Services and underlying datasets (along with a wide variety of other information and non-information resources) are registered in the catalog system. Metadata for registration is produced by various workflows, including harvesting from OGC capabilities documents, Drupal-based web applications, and transformation from tabular compilations. Catalog search is implemented using the ESRI Geoportal open-source server. We are pursuing various client applications to demonstrate discovery and utilization of the data services. Currently operational applications allow catalog search and data acquisition from map services via an ESRI ArcMap extension and a catalog browse and search application built on OpenLayers and Django. We are developing use cases and requirements for other applications to utilize geothermal data services for resource exploration and evaluation.

  10. 78 FR 60303 - Agency Information Collection Activities: Online Survey of Web Services Employers; New...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-10-01

    ...-NEW] Agency Information Collection Activities: Online Survey of Web Services Employers; New... Web site at http://www.Regulations.gov under e-Docket ID number USCIS-2013- 0003. When submitting... information collection. (2) Title of the Form/Collection: Online Survey of Web Services Employers. (3) Agency...

  11. 78 FR 42537 - Agency Information Collection Activities: Online Survey of Web Services Employers; New...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-07-16

    ...-NEW] Agency Information Collection Activities: Online Survey of Web Services Employers; New... Information Collection: New information collection. (2) Title of the Form/Collection: Online Survey of Web... sector. It is necessary that USCIS obtains data on the E-Verify Program Web Services. Gaining an...

  12. Web Map Services (WMS) Global Mosaic

    NASA Technical Reports Server (NTRS)

    Percivall, George; Plesea, Lucian

    2003-01-01

    The WMS Global Mosaic provides access to imagery of the global landmass using an open standard for web mapping. The seamless image is a mosaic of Landsat 7 scenes, geographically accurate with 30 and 15 meter resolutions. By using the OpenGIS Web Map Service (WMS) interface, any organization can use the global mosaic as a layer in their geospatial applications. Based on a trade study, an implementation approach was chosen that extends a previously developed WMS hosting a Landsat 5 CONUS mosaic developed by JPL. The WMS Global Mosaic supports the NASA Geospatial Interoperability Office goal of providing an integrated digital representation of the Earth, widely accessible for humanity's critical decisions.
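
    Because the mosaic is exposed through the standard WMS interface, any generic client can request map images. The sketch below uses the OWSLib client library, with the service URL and layer name as placeholders to be read from the server's capabilities document.

        # Sketch: a WMS GetMap request with OWSLib; URL and layer name are placeholders.
        from owslib.wms import WebMapService

        wms = WebMapService("http://example.org/wms", version="1.1.1")  # hypothetical server
        print(list(wms.contents))              # layer names come from the capabilities document

        img = wms.getmap(layers=["global_mosaic"],           # placeholder layer name
                         srs="EPSG:4326",
                         bbox=(-120.0, 30.0, -110.0, 40.0),  # lon/lat bounding box
                         size=(512, 512),
                         format="image/png")
        with open("mosaic.png", "wb") as fh:
            fh.write(img.read())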

  13. Enhanced reproducibility of SADI web service workflows with Galaxy and Docker.

    PubMed

    Aranguren, Mikel Egaña; Wilkinson, Mark D

    2015-01-01

    Semantic Web technologies have been widely applied in the life sciences, for example by data providers such as OpenLifeData and through web services frameworks such as SADI. The recently reported OpenLifeData2SADI project offers access to the vast OpenLifeData data store through SADI services. This article describes how to merge data retrieved from OpenLifeData2SADI with other SADI services using the Galaxy bioinformatics analysis platform, thus making this semantic data more amenable to complex analyses. This is demonstrated using a working example, which is made distributable and reproducible through a Docker image that includes SADI tools, along with the data and workflows that constitute the demonstration. The combination of Galaxy and Docker offers a solution for faithfully reproducing and sharing complex data retrieval and analysis workflows based on the SADI Semantic web service design patterns.

  14. Geovisualization in the HydroProg web map service

    NASA Astrophysics Data System (ADS)

    Spallek, Waldemar; Wieczorek, Malgorzata; Szymanowski, Mariusz; Niedzielski, Tomasz; Swierczynska, Malgorzata

    2016-04-01

    The HydroProg system, built at the University of Wroclaw (Poland) in the frame of research project no. 2011/01/D/ST10/04171 financed by the National Science Centre of Poland, has been designed for computing predictions of river stages in real time on the basis of multimodelling. This experimental system works on the upper Nysa Klodzka basin (SW Poland) above the gauge in the town of Bardo, with a catchment area of 1744 square kilometres. The system operates in association with the Local System for Flood Monitoring of Klodzko County (LSOP) and produces hydrograph prognoses as well as inundation predictions. For presenting the up-to-date predictions and their statistics in online mode, a dedicated real-time web map service has been designed. Geovisualisation in the HydroProg map service covers interactive maps of the study area, interactive spaghetti hydrographs of water-level forecasts along with observed river stages, and animated images of inundation. The LSOP network offers high spatial and temporal resolution of observations, as the sampling interval is 15 minutes long. The main environmental elements related to hydrological modelling are shown on the main map. This includes elevation data (hillshading and hypsometric tints), rivers and reservoirs, and catchment boundaries. Furthermore, we added main towns, roads, and political and administrative boundaries for better map understanding. The web map was designed as a multi-scale representation, with levels of detail and zooming according to the scales 1:100 000, 1:250 000 and 1:500 000. Observations of water level in LSOP are shown on interactive hydrographs for each gauge. Additionally, predictions and some of their statistical characteristics (such as prediction errors and Nash-Sutcliffe efficiency) are shown for selected gauges. Finally, predictions of inundation are presented on animated maps, which have been added for four experimental sites. The HydroProg system is a strictly

  15. Developing Web Services for Technology Education. The Graphic Communication Electronic Publishing Project.

    ERIC Educational Resources Information Center

    Sanders, Mark

    1999-01-01

    Graphic Communication Electronic Publishing Project supports a Web site (http://TechEd.vt.edu/gcc/) for graphic communication teachers and students, providing links to Web materials, conversion of print materials to electronic formats, and electronic products and services including job listings, resume posting service, and a listserv. (SK)

  16. Cloud Surprises in Moving NASA EOSDIS Applications into Amazon Web Services

    NASA Technical Reports Server (NTRS)

    Mclaughlin, Brett

    2017-01-01

    NASA ESDIS has been moving a variety of data ingest, distribution, and science data processing applications into a cloud environment over the last 2 years. As expected, there have been a number of challenges in migrating primarily on-premises applications into a cloud-based environment, related to architecture and taking advantage of cloud-based services. What was not expected was a number of issues that went beyond purely technical application re-architecture. We ran into surprising network policy limitations, billing challenges in a government-based cost model, and difficulty in obtaining certificates in a NASA-security-compliant manner. On the other hand, this approach has allowed us to move a number of applications from local hosting to the cloud in a matter of hours (yes, hours!!), and our CMR application now services 95% of granule searches and an astonishing 99% of all collection searches in under a second. And most surprising of all, well, you'll just have to wait and see the realization that caught our entire team off guard!

  17. Maximizing Amazonia's Ecosystem Services: Juggling the potential for carbon storage, agricultural yield and biodiversity in the Amazon

    NASA Astrophysics Data System (ADS)

    O'Connell, C. S.; Foley, J. A.; Gerber, J. S.; Polasky, S.

    2011-12-01

    The Amazon is not only an exceptionally biodiverse and carbon-rich tract of tropical forest, it is also a case study in land use change. Over the next forty years it will continue to experience pressure from an urbanizing and increasingly affluent populace: under a business-as-usual scenario, global cropland, pasture and biofuel systems will carry on expanding, while the Amazon's carbon storage potential will likely become another viable revenue source under REDD+. Balancing those competing land use pressures ought also to take into account Amazonia's high, but heterogeneous, biodiversity. Knowing where Amazonia has opportunities to make efficient or optimal trade-offs between carbon storage, agricultural production and biodiversity can allow policymakers to direct or influence the drivers of land use change. This analysis uses a spatially explicit model that takes climate and management into account to quantify the potential agricultural yield of the Amazon's most important agricultural commodities (sugar, soy and maize) as well as several that will come into increasing prominence, including palm oil. In addition, it maps the potential for carbon storage in forest biomass and relative species richness across Amazonia. We then compare carbon storage, agricultural yield and species richness and identify areas where efficient trade-offs occur between food, carbon, and biodiversity, three critical ecosystem goods and services provided by the world's largest tropical forest.

  18. Climate Model Diagnostic Analyzer Web Service System

    NASA Astrophysics Data System (ADS)

    Lee, S.; Pan, L.; Zhai, C.; Tang, B.; Jiang, J. H.

    2013-12-01

    The latest Intergovernmental Panel on Climate Change (IPCC) Fourth Assessment Report stressed the need for the comprehensive and innovative evaluation of climate models with newly available global observations. The traditional approach to climate model evaluation, which compares a single parameter at a time, identifies symptomatic model biases and errors but fails to diagnose the model problems. The model diagnosis process requires physics-based multi-variable comparisons that typically involve large-volume and heterogeneous datasets, making them both computationally- and data-intensive. To address these challenges, we are developing a parallel, distributed web-service system that enables the physics-based multi-variable model performance evaluations and diagnoses through the comprehensive and synergistic use of multiple observational data, reanalysis data, and model outputs. We have developed a methodology to transform an existing science application code into a web service using a Python wrapper interface and Python web service frameworks (i.e., Flask, Gunicorn, and Tornado). The web-service system, called Climate Model Diagnostic Analyzer (CMDA), currently supports (1) all the datasets from Obs4MIPs and a few ocean datasets from NOAA and Argo, which can serve as observation-based reference data for model evaluation and (2) many of CMIP5 model outputs covering a broad range of atmosphere, ocean, and land variables from the CMIP5 specific historical runs and AMIP runs. Analysis capabilities currently supported by CMDA are (1) the calculation of annual and seasonal means of physical variables, (2) the calculation of time evolution of the means in any specified geographical region, (3) the calculation of correlation between two variables, and (4) the calculation of difference between two variables. A web user interface is chosen for CMDA because it not only lowers the learning curve and removes the adoption barrier of the tool but also enables instantaneous use
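
    The Python-wrapper approach described above can be sketched minimally with Flask: an existing analysis function is exposed as an HTTP endpoint. The analysis function and route here are invented placeholders, not CMDA's actual interface.

        # Minimal sketch of wrapping a science function as a web service with Flask.
        # The analysis function and route are placeholders, not CMDA's actual API.
        from flask import Flask, jsonify, request

        app = Flask(__name__)

        def seasonal_mean(values):
            """Placeholder for an existing science application function."""
            return sum(values) / len(values)

        @app.route("/analyze", methods=["POST"])
        def analyze():
            data = request.get_json()          # e.g. {"values": [1.0, 2.0, 3.0]}
            return jsonify(mean=seasonal_mean(data["values"]))

        if __name__ == "__main__":
            app.run(port=8080)   # production deployments would sit behind Gunicorn or Tornado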

  19. AMBIT RESTful web services: an implementation of the OpenTox application programming interface.

    PubMed

    Jeliazkova, Nina; Jeliazkov, Vedrin

    2011-05-16

    The AMBIT web services package is one of several existing independent implementations of the OpenTox Application Programming Interface and is built according to the principles of the Representational State Transfer (REST) architecture. The Open Source Predictive Toxicology Framework, developed by the partners in the EC FP7 OpenTox project, aims at providing unified access to toxicity data and predictive models, as well as validation procedures. This is achieved by i) an information model, based on a common OWL-DL ontology; ii) links to related ontologies; iii) data and algorithms, available through a standardized REST web services interface, where every compound, data set or predictive method has a unique web address, used to retrieve its Resource Description Framework (RDF) representation or initiate the associated calculations. The AMBIT web services package has been developed as an extension of AMBIT modules, adding the ability to create (Quantitative) Structure-Activity Relationship (QSAR) models and providing an OpenTox API compliant interface. The representation of data and processing resources in the W3C Resource Description Framework facilitates integrating the resources as Linked Data. By uploading datasets with chemical structures and an arbitrary set of properties, users make them automatically available online in several formats. The services provide unified interfaces to several descriptor calculation, machine learning and similarity searching algorithms, as well as to applicability domain and toxicity prediction models. All Toxtree modules for predicting the toxicological hazard of chemical compounds are also integrated within this package. The complexity and diversity of the processing is reduced to the simple paradigm "read data from a web address, perform processing, write to a web address". The online service allows users to easily run predictions, without installing any software, as well as to share datasets and models online. The downloadable web application

  20. AMBIT RESTful web services: an implementation of the OpenTox application programming interface

    PubMed Central

    2011-01-01

    The AMBIT web services package is one of several existing independent implementations of the OpenTox Application Programming Interface and is built according to the principles of the Representational State Transfer (REST) architecture. The Open Source Predictive Toxicology Framework, developed by the partners in the EC FP7 OpenTox project, aims at providing unified access to toxicity data and predictive models, as well as validation procedures. This is achieved by i) an information model, based on a common OWL-DL ontology; ii) links to related ontologies; iii) data and algorithms, available through a standardized REST web services interface, where every compound, data set or predictive method has a unique web address, used to retrieve its Resource Description Framework (RDF) representation or initiate the associated calculations. The AMBIT web services package has been developed as an extension of AMBIT modules, adding the ability to create (Quantitative) Structure-Activity Relationship (QSAR) models and providing an OpenTox API compliant interface. The representation of data and processing resources in the W3C Resource Description Framework facilitates integrating the resources as Linked Data. By uploading datasets with chemical structures and an arbitrary set of properties, users make them automatically available online in several formats. The services provide unified interfaces to several descriptor calculation, machine learning and similarity searching algorithms, as well as to applicability domain and toxicity prediction models. All Toxtree modules for predicting the toxicological hazard of chemical compounds are also integrated within this package. The complexity and diversity of the processing is reduced to the simple paradigm "read data from a web address, perform processing, write to a web address". The online service allows users to easily run predictions, without installing any software, as well as to share datasets and models online. The downloadable web application
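
    The REST pattern described in both records above is easy to sketch: every resource has a web address, and its RDF representation is retrieved by content negotiation. The compound URL below is a hypothetical placeholder, not a live AMBIT endpoint.

      # Sketch of the OpenTox-style REST pattern: fetch a resource's RDF
      # representation via content negotiation. The URL is hypothetical.
      import requests

      compound_url = "http://example.org/ambit/compound/12345"  # placeholder
      response = requests.get(
          compound_url,
          headers={"Accept": "application/rdf+xml"},  # request RDF
          timeout=30,
      )
      response.raise_for_status()
      print(response.text[:500])  # start of the RDF/XML description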

  1. A database for the monitoring of thermal anomalies over the Amazon forest and adjacent intertropical oceans

    PubMed Central

    Jiménez-Muñoz, Juan C.; Mattar, Cristian; Sobrino, José A.; Malhi, Yadvinder

    2015-01-01

    Advances in information technologies and accessibility to climate and satellite data in recent years have favored the development of web-based tools with user-friendly interfaces in order to facilitate the dissemination of geo/biophysical products. These products are useful for the analysis of the impact of global warming on different biomes. In particular, the study of the Amazon forest's responses to drought has recently received attention from the scientific community due to the occurrence of two extreme droughts and sustained warming over the last decade. Thermal Amazoni@ is a web-based platform for the visualization and download of surface thermal anomaly products over the Amazon forest and adjacent intertropical oceans, using Google Earth as a baseline graphical interface (http://ipl.uv.es/thamazon/web). This platform is currently operational at the servers of the University of Valencia (Spain), and it includes both satellite (MODIS) and climatic (ERA-Interim) datasets. Thermal Amazoni@ is composed of the viewer system and the web and FTP sites with ancillary information and access to product download. PMID:26029379

  2. A database for the monitoring of thermal anomalies over the Amazon forest and adjacent intertropical oceans.

    PubMed

    Jiménez-Muñoz, Juan C; Mattar, Cristian; Sobrino, José A; Malhi, Yadvinder

    2015-01-01

    Advances in information technologies and accessibility to climate and satellite data in recent years have favored the development of web-based tools with user-friendly interfaces in order to facilitate the dissemination of geo/biophysical products. These products are useful for the analysis of the impact of global warming on different biomes. In particular, the study of the Amazon forest's responses to drought has recently received attention from the scientific community due to the occurrence of two extreme droughts and sustained warming over the last decade. Thermal Amazoni@ is a web-based platform for the visualization and download of surface thermal anomaly products over the Amazon forest and adjacent intertropical oceans, using Google Earth as a baseline graphical interface (http://ipl.uv.es/thamazon/web). This platform is currently operational at the servers of the University of Valencia (Spain), and it includes both satellite (MODIS) and climatic (ERA-Interim) datasets. Thermal Amazoni@ is composed of the viewer system and the web and FTP sites with ancillary information and access to product download.
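
    For readers unfamiliar with the product type, the sketch below shows one common definition of a thermal anomaly (a month's land surface temperature minus its long-term monthly climatology). It is an illustrative formulation, not necessarily the exact algorithm behind Thermal Amazoni@.

      # Illustrative monthly thermal anomalies from a multi-year series of
      # land surface temperatures (LST): departure from the long-term mean
      # of the same calendar month, plus a standardized version.
      import numpy as np

      rng = np.random.default_rng(0)
      years, months = 10, 12
      lst = 300.0 + rng.normal(0.0, 1.0, size=(years, months))  # Kelvin

      climatology = lst.mean(axis=0)               # long-term mean per month
      anomalies = lst - climatology                # K above/below normal
      std_anomalies = anomalies / lst.std(axis=0)  # in standard deviations

      print(anomalies[-1])      # anomalies for the most recent year
      print(std_anomalies[-1])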

  3. Applying Semantic Web Services and Wireless Sensor Networks for System Integration

    NASA Astrophysics Data System (ADS)

    Berkenbrock, Gian Ricardo; Hirata, Celso Massaki; de Oliveira Júnior, Frederico Guilherme Álvares; de Oliveira, José Maria Parente

    In environments like factories, buildings, and homes, automation services tend to change often during their lifetime. Changes concern business rules, process optimization, cost reduction, and so on. It is important to provide a smooth and straightforward way to deal with these changes so that they can be handled quickly and at low cost. Some prominent solutions use the flexibility of Wireless Sensor Networks and the meaningful description of Semantic Web Services to provide service integration. In this work, we give an overview of current solutions for machinery integration that combine both technologies, as well as a discussion of some perspectives and open issues when applying Wireless Sensor Networks and Semantic Web Services to automation service integration.

  4. Web service activities at the IRIS DMC to support federated and multidisciplinary access

    NASA Astrophysics Data System (ADS)

    Trabant, Chad; Ahern, Timothy K.

    2013-04-01

    At the IRIS Data Management Center (DMC) we have developed a suite of web service interfaces to access our large archive of primarily seismological time series data and related metadata. The goals of these web services include providing: a) next-generation and easily used access interfaces for our current users, b) access to data holdings in a form usable by non-seismologists, c) programmatic access to facilitate integration into data processing workflows and d) a foundation for participation in federated data discovery and access systems. To support our current users, our services provide access to the raw time series data and metadata or conversions of the raw data to commonly used formats. Our services also support simple, on-the-fly signal processing options that are common first steps in many workflows. Additionally, high-level data products derived from raw data are available via service interfaces. To support data access by researchers unfamiliar with seismic data we offer conversion of the data to broadly usable formats (e.g. ASCII text) and data processing to convert the data to Earth units. By their very nature, web services are programmatic interfaces. Combined with ubiquitous support for web technologies in programming and scripting languages and support in many computing environments, web services are very well suited for integrating data access into data processing workflows. As programmatic interfaces that can return data in both discipline-specific and broadly usable formats, our services are also well suited for participation in federated and brokered systems, either specific to seismology or multidisciplinary. Working within the International Federation of Digital Seismograph Networks, the DMC collaborated on the specification of standardized web service interfaces for use at any seismological data center. These data access interfaces, when supported by multiple data centers, will form a foundation on which to build discovery and access mechanisms

  5. OneGeology Web Services and Portal as a global geological SDI - latest standards and technology

    NASA Astrophysics Data System (ADS)

    Duffy, Tim; Tellez-Arenas, Agnes

    2014-05-01

    The global coverage of OneGeology Web Services (www.onegeology.org and portal.onegeology.org) achieved since 2007 by the 120 participating geological surveys will be reviewed and arising issues discussed. Recent enhancements to the OneGeology Web Services capabilities will be covered, including a new up-to-five-star service accreditation scheme utilising the ISO/OGC Web Mapping Service standard version 1.3, core ISO 19115 metadata additions and Version 2.0 Web Feature Services (WFS) serving the new IUGS-CGI GeoSciML V3.2 geological web data exchange language standard (http://www.geosciml.org/) with its associated 30+ available IUGS-CGI vocabularies (http://resource.geosciml.org/ and http://srvgeosciml.brgm.fr/eXist2010/brgm/client.html). Use of the CGI simple lithology and timescale dictionaries now allows those who wish to do so to offer data harmonisation for queries against their GeoSciML 3.2 based Web Feature Services and their GeoSciML_Portrayal V2.0.1 (http://www.geosciml.org/) Web Map Services in the OneGeology portal (http://portal.onegeology.org). Contributing to OneGeology involves offering to serve ideally 1:1,000,000 scale geological data (in practice any scale is now warmly welcomed) as an OGC (Open Geospatial Consortium) standards-based WMS (Web Mapping Service) service from an available WWW server. This server may be hosted either within the geological survey or at a neighbouring, regional or other institution that offers to serve the data for them, i.e. offers to help technically by providing the web serving IT infrastructure as a 'buddy'. OneGeology is a standards-focussed Spatial Data Infrastructure (SDI) and works to ensure that these standards work together, and it is now possible for European geological surveys to register their INSPIRE web services within the OneGeology SDI (e.g. see http://www.geosciml.org/geosciml/3.2/documentation/cookbook/INSPIRE_GeoSciML_Cookbook%20_1.0.pdf). The OneGeology portal (http://portal.onegeology.org) is the first port of call for anyone
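
    To make the WMS mechanics concrete, here is a minimal sketch of building an OGC WMS 1.3.0 GetMap request of the kind the OneGeology portal issues to registered services; the server URL and layer name are hypothetical placeholders.

      # Minimal OGC WMS 1.3.0 GetMap request; base URL and layer name are
      # hypothetical, not a real OneGeology service endpoint.
      from urllib.parse import urlencode

      base_url = "http://example.org/geoserver/wms"  # placeholder
      params = {
          "SERVICE": "WMS",
          "VERSION": "1.3.0",
          "REQUEST": "GetMap",
          "LAYERS": "GSC:1M_geology",      # hypothetical layer name
          "CRS": "EPSG:4326",
          "BBOX": "40.0,-10.0,60.0,10.0",  # lat,lon axis order in 1.3.0
          "WIDTH": "800",
          "HEIGHT": "600",
          "FORMAT": "image/png",
      }
      print(base_url + "?" + urlencode(params))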

  6. Issues in implementing services for a wireless web-enabled digital camera

    NASA Astrophysics Data System (ADS)

    Venkataraman, Shyam; Sampat, Nitin; Fisher, Yoram; Canosa, John; Noel, Nicholas

    2001-05-01

    The competition in the exploding digital photography market has caused vendors to explore new ways to increase their return on investment. A common view among industry analysts is that increasingly it will be services provided by these cameras, and not the cameras themselves, that will provide the revenue stream. These services will be coupled to e-Appliance based Communities. In addition, the rapidly increasing need to upload images to the Internet for photo-finishing services as well as the need to download software upgrades to the camera is driving many camera OEMs to evaluate the benefits of using the wireless web to extend their enterprise systems. Currently, creating a viable e-appliance such as a digital camera coupled with a wireless web service requires more than just a competency in product development. This paper will evaluate the system implications in the deployment of recurring revenue services and enterprise connectivity of a wireless, web-enabled digital camera. These include, among other things, an architectural design approach for services such as device management, synchronization, billing, connectivity, security, etc. Such an evaluation will assist, we hope, anyone designing or connecting a digital camera to the enterprise systems.

  7. 76 FR 14034 - Proposed Collection; Comment Request; NCI Cancer Genetics Services Directory Web-Based...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-03-15

    ... Request; NCI Cancer Genetics Services Directory Web-Based Application Form and Update Mailer Summary: In... Cancer Genetics Services Directory Web-based Application Form and Update Mailer.

  8. mORCA: ubiquitous access to life science web services.

    PubMed

    Diaz-Del-Pino, Sergio; Trelles, Oswaldo; Falgueras, Juan

    2018-01-16

    Technical advances in mobile devices such as smartphones and tablets have produced an extraordinary increase in their use around the world, and they have become part of our daily lives. The possibility of carrying these devices in a pocket, particularly mobile phones, has enabled ubiquitous access to Internet resources. Furthermore, in the life sciences there has been a vast proliferation of data types and services that end up exposed as Web Services. This suggests a need for research into mobile clients that can use and exploit life science applications effectively. Analysing the current features of existing bioinformatics applications that manage Web Services, we have devised, implemented, and deployed an easy-to-use web-based lightweight mobile client. This client is able to browse, select, compose parameters for, invoke, and monitor the execution of Web Services stored in catalogues or central repositories. The client is also able to handle large amounts of data between external storage mounts. In addition, we also present a validation use case, which illustrates the usage of the application while executing, monitoring, and exploring the results of a registered workflow. The software is available in the Apple Store and Android Market, and the source code is publicly available on GitHub. Mobile devices are becoming increasingly important in the scientific world due to their strong potential impact on scientific applications. Bioinformatics should not fall behind this trend. We present an original software client that deals with the intrinsic limitations of such devices and propose different guidelines to provide location-independent access to computational resources in bioinformatics and biomedicine. Its modular design makes it easily expandable with the inclusion of new repositories, tools, types of visualization, etc.

  9. NaviCell Web Service for network-based data visualization

    PubMed Central

    Bonnet, Eric; Viara, Eric; Kuperstein, Inna; Calzone, Laurence; Cohen, David P. A.; Barillot, Emmanuel; Zinovyev, Andrei

    2015-01-01

    Data visualization is an essential element of biological research, required for obtaining insights and formulating new hypotheses on mechanisms of health and disease. NaviCell Web Service is a tool for network-based visualization of ‘omics’ data which implements several data visual representation methods and utilities for combining them. NaviCell Web Service uses Google Maps and semantic zooming to browse large biological network maps, represented in various formats, together with different types of molecular data mapped on top of them. For achieving this, the tool provides standard heatmaps, barplots and glyphs as well as the novel map staining technique for grasping large-scale trends in numerical values (such as the whole transcriptome) projected onto a pathway map. The web service provides a server mode, which allows automating visualization tasks and retrieving data from maps via RESTful (standard HTTP) calls. Bindings to different programming languages are provided (Python and R). We illustrate the purpose of the tool with several case studies using pathway maps created by different research groups, in which data visualization provides new insights into molecular mechanisms involved in systemic diseases such as cancer and neurodegenerative diseases. PMID:25958393

  10. Beyond accuracy: creating interoperable and scalable text-mining web services.

    PubMed

    Wei, Chih-Hsuan; Leaman, Robert; Lu, Zhiyong

    2016-06-15

    The biomedical literature is a knowledge-rich resource and an important foundation for future research. With over 24 million articles in PubMed and an increasing growth rate, research in automated text processing is becoming increasingly important. We report here our recently developed web-based text mining services for biomedical concept recognition and normalization. Unlike most text-mining software tools, our web services integrate several state-of-the-art entity tagging systems (DNorm, GNormPlus, SR4GN, tmChem and tmVar) and offer a batch-processing mode able to process arbitrary text input (e.g. scholarly publications, patents and medical records) in multiple formats (e.g. BioC). We support multiple standards to make our service interoperable and to allow simpler integration with other text-processing pipelines. To maximize scalability, we have preprocessed all PubMed articles, and we use a computer cluster for processing large requests of arbitrary text. Our text-mining web service is freely available at http://www.ncbi.nlm.nih.gov/CBBresearch/Lu/Demo/tmTools/#curl. Contact: Zhiyong.Lu@nih.gov. Published by Oxford University Press 2016. This work is written by US Government employees and is in the public domain in the US.
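
    The batch-processing mode described above follows a common submit-and-poll REST pattern. The sketch below illustrates that pattern generically; the endpoint URLs are hypothetical placeholders, not the actual tmTools URL scheme.

      # Generic submit-and-poll pattern for a batch text-mining service;
      # endpoints are hypothetical, not the actual tmTools API.
      import time
      import requests

      submit_url = "https://example.org/textmining/submit"  # placeholder
      text = "BRCA1 mutations are associated with breast cancer."

      session_id = requests.post(
          submit_url, data=text.encode("utf-8"), timeout=60
      ).text.strip()

      result_url = f"https://example.org/textmining/result/{session_id}"
      while True:
          result = requests.get(result_url, timeout=60)
          if result.status_code == 200:  # processing finished
              print(result.text)
              break
          time.sleep(5)                  # still running; poll again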

  11. Research on SaaS and Web Service Based Order Tracking

    NASA Astrophysics Data System (ADS)

    Jiang, Jianhua; Sheng, Buyun; Gong, Lixiong; Yang, Mingzhong

    To solve order tracking across enterprises in a Dynamic Virtual Enterprise (DVE), a SaaS and web service based order tracking solution was designed by analyzing the order management process in a DVE. To realize the system, a SaaS-based architecture for managing data on the manufacturing states of order tasks was constructed, and a method for encapsulating an application system as a web service was investigated. The process of order tracking in the system was then described. Finally, the feasibility of this study was verified by the development of a prototype system.

  12. Web Services Implementations at Land Process and Goddard Earth Sciences Distributed Active Archive Centers

    NASA Astrophysics Data System (ADS)

    Cole, M.; Bambacus, M.; Lynnes, C.; Sauer, B.; Falke, S.; Yang, W.

    2007-12-01

    NASA's vast array of scientific data within its Distributed Active Archive Centers (DAACs) is especially valuable to traditional research scientists as well as to the emerging market of Earth Science Information Partners. For example, the air quality science and management communities are increasingly using satellite-derived observations in their analyses and decision making. The Air Quality Cluster in the Federation of Earth Science Information Partners (ESIP) uses web infrastructures for interoperability, or Service Oriented Architecture (SOA), to extend data exploration, use, and analysis, and provides a user environment for DAAC products. In an effort to continually offer these NASA data to the broadest research community audience, and reusing emerging technologies, both NASA's Goddard Earth Science (GES) and Land Process (LP) DAACs have engaged in a web services pilot project. Through these projects both GES and LP have exposed data through the Open Geospatial Consortium's (OGC) Web Services standards. Reusing several different existing applications and implementation techniques, GES and LP successfully exposed a variety of data through distributed systems for ingestion into multiple end-user systems. The results of this project will enable researchers worldwide to access some of NASA's GES and LP DAAC data through OGC protocols. This functionality encourages interdisciplinary research while increasing data use through advanced technologies. This paper will concentrate on the implementation and use of OGC Web Services, specifically Web Map and Web Coverage Services (WMS, WCS), at the GES and LP DAACs, and on the value of these services within scientific applications, including integration with the DataFed air quality web infrastructure and the development of data analysis web applications.

  13. A Web Service and Interface for Remote Electronic Device Characterization

    ERIC Educational Resources Information Center

    Dutta, S.; Prakash, S.; Estrada, D.; Pop, E.

    2011-01-01

    A lightweight Web Service and a Web site interface have been developed, which enable remote measurements of electronic devices as a "virtual laboratory" for undergraduate engineering classes. Using standard browsers without additional plugins (such as Internet Explorer, Firefox, or even Safari on an iPhone), remote users can control a Keithley…

  14. Sensor Webs with a Service-Oriented Architecture for On-demand Science Products

    NASA Technical Reports Server (NTRS)

    Mandl, Daniel; Ungar, Stephen; Ames, Troy; Justice, Chris; Frye, Stuart; Chien, Steve; Tran, Daniel; Cappelaere, Patrice; Derezinsfi, Linda; Paules, Granville

    2007-01-01

    This paper describes the work being managed by the NASA Goddard Space Flight Center (GSFC) Information System Division (ISD) under a NASA Earth Science Technology Office (ESTO) Advanced Information System Technology (AIST) grant to develop a modular sensor web architecture which enables discovery of sensors and workflows that can create customized science products via a high-level service-oriented architecture based on Open Geospatial Consortium (OGC) Sensor Web Enablement (SWE) web service standards. These capabilities serve as a prototype of a user-centric architecture for the Global Earth Observing System of Systems (GEOSS). This work builds on and extends previous sensor web efforts conducted at NASA/GSFC using the Earth Observing 1 (EO-1) satellite and other low-earth-orbiting satellites.

  15. Chapter 18: Web-based Tools - NED VO Services

    NASA Astrophysics Data System (ADS)

    Mazzarella, J. M.; NED Team

    The NASA/IPAC Extragalactic Database (NED) is a thematic, web-based research facility in widespread use by scientists, educators, space missions, and observatory operations for observation planning, data analysis, discovery, and publication of research about objects beyond our Milky Way galaxy. NED is a portal into a systematic fusion of data from hundreds of sky surveys and tens of thousands of research publications. The contents and services span the entire electromagnetic spectrum from gamma rays through radio frequencies, and are continuously updated to reflect the current literature and releases of large-scale sky survey catalogs. NED has been on the Internet since 1990, growing in content, automation and services with the evolution of information technology. NED is the world's largest database of cross-identified extragalactic objects. As of December 2006, the system contains approximately 10 million objects and 15 million multi-wavelength cross-IDs. Over 4,000 catalogs and published lists covering the entire electromagnetic spectrum have had their objects cross-identified or associated, with fundamental data parameters federated for convenient queries and retrieval. This chapter describes the interoperability of NED services with other components of the Virtual Observatory (VO). Section 1 is a brief overview of the primary NED web services. Section 2 provides a tutorial for using NED services currently available through the NVO Registry. The "name resolver" provides VO portals and related internet services with celestial coordinates for objects specified by catalog identifier (name); any alias can be queried because this service is based on the source cross-IDs established by NED. All major services have been updated to provide output in VOTable (XML) format that can be accessed directly from the NED web interface or using the NVO registry. These include access to images via SIAP, Cone-Search queries, and services providing fundamental, multi

  16. UltiMatch-NL: A Web Service Matchmaker Based on Multiple Semantic Filters

    PubMed Central

    Mohebbi, Keyvan; Ibrahim, Suhaimi; Zamani, Mazdak; Khezrian, Mojtaba

    2014-01-01

    In this paper, a Semantic Web service matchmaker called UltiMatch-NL is presented. UltiMatch-NL applies two filters, namely Signature-based and Description-based, on different abstraction levels of a service profile to achieve more accurate results. More specifically, the proposed filters rely on semantic knowledge to extract the similarity between a given pair of service descriptions. It is thus a further step towards fully automated Web service discovery, making this process more semantic-aware. In addition, a new technique is proposed to automatically weight and combine the results of UltiMatch-NL's different filters. Moreover, an innovative approach is introduced to predict the relevance of requests and Web services and eliminate the need for setting a similarity threshold value. In order to evaluate UltiMatch-NL, the OWLS-TC repository is used. The performance evaluation, based on standard measures from the information retrieval field, shows that semantic matching of OWL-S services can be significantly improved by incorporating the designed matching filters. PMID:25157872

  17. UltiMatch-NL: a Web service matchmaker based on multiple semantic filters.

    PubMed

    Mohebbi, Keyvan; Ibrahim, Suhaimi; Zamani, Mazdak; Khezrian, Mojtaba

    2014-01-01

    In this paper, a Semantic Web service matchmaker called UltiMatch-NL is presented. UltiMatch-NL applies two filters, namely Signature-based and Description-based, on different abstraction levels of a service profile to achieve more accurate results. More specifically, the proposed filters rely on semantic knowledge to extract the similarity between a given pair of service descriptions. It is thus a further step towards fully automated Web service discovery, making this process more semantic-aware. In addition, a new technique is proposed to automatically weight and combine the results of UltiMatch-NL's different filters. Moreover, an innovative approach is introduced to predict the relevance of requests and Web services and eliminate the need for setting a similarity threshold value. In order to evaluate UltiMatch-NL, the OWLS-TC repository is used. The performance evaluation, based on standard measures from the information retrieval field, shows that semantic matching of OWL-S services can be significantly improved by incorporating the designed matching filters.
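
    The weighting-and-combining step lends itself to a short illustration. The sketch below merges per-filter similarity scores with fixed weights and maps the result to a pseudo-probability of relevance in place of a hard threshold; the weights and the logistic rule are illustrative assumptions, not UltiMatch-NL's exact scheme.

      # Illustrative combination of matchmaker filter scores; the weights
      # and logistic rule are assumptions, not UltiMatch-NL's formulation.
      import math

      def combined_score(scores, weights):
          """Weighted average of per-filter similarity scores in [0, 1]."""
          return sum(w * s for w, s in zip(weights, scores)) / sum(weights)

      def relevance_probability(score, bias=-4.0, gain=8.0):
          """Map a combined score to a relevance pseudo-probability,
          replacing a fixed similarity threshold."""
          return 1.0 / (1.0 + math.exp(-(bias + gain * score)))

      signature_sim, description_sim = 0.72, 0.58  # example filter outputs
      score = combined_score([signature_sim, description_sim], [0.6, 0.4])
      print(round(score, 3), round(relevance_probability(score), 3))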

  18. The Semantic Automated Discovery and Integration (SADI) Web service Design-Pattern, API and Reference Implementation

    PubMed Central

    2011-01-01

    Background The complexity and inter-related nature of biological data poses a difficult challenge for data and tool integration. There has been a proliferation of interoperability standards and projects over the past decade, none of which has been widely adopted by the bioinformatics community. Recent attempts have focused on the use of semantics to assist integration, and Semantic Web technologies are being welcomed by this community. Description SADI - Semantic Automated Discovery and Integration - is a lightweight set of fully standards-compliant Semantic Web service design patterns that simplify the publication of services of the type commonly found in bioinformatics and other scientific domains. Using Semantic Web technologies at every level of the Web services "stack", SADI services consume and produce instances of OWL Classes following a small number of very straightforward best-practices. In addition, we provide codebases that support these best-practices, and plug-in tools to popular developer and client software that dramatically simplify deployment of services by providers, and the discovery and utilization of those services by their consumers. Conclusions SADI Services are fully compliant with, and utilize only foundational Web standards; are simple to create and maintain for service providers; and can be discovered and utilized in a very intuitive way by biologist end-users. In addition, the SADI design patterns significantly improve the ability of software to automatically discover appropriate services based on user-needs, and automatically chain these into complex analytical workflows. We show that, when resources are exposed through SADI, data compliant with a given ontological model can be automatically gathered, or generated, from these distributed, non-coordinating resources - a behaviour we have not observed in any other Semantic system. Finally, we show that, using SADI, data dynamically generated from Web services can be explored in a manner

  19. The Semantic Automated Discovery and Integration (SADI) Web service Design-Pattern, API and Reference Implementation.

    PubMed

    Wilkinson, Mark D; Vandervalk, Benjamin; McCarthy, Luke

    2011-10-24

    The complexity and inter-related nature of biological data poses a difficult challenge for data and tool integration. There has been a proliferation of interoperability standards and projects over the past decade, none of which has been widely adopted by the bioinformatics community. Recent attempts have focused on the use of semantics to assist integration, and Semantic Web technologies are being welcomed by this community. SADI - Semantic Automated Discovery and Integration - is a lightweight set of fully standards-compliant Semantic Web service design patterns that simplify the publication of services of the type commonly found in bioinformatics and other scientific domains. Using Semantic Web technologies at every level of the Web services "stack", SADI services consume and produce instances of OWL Classes following a small number of very straightforward best-practices. In addition, we provide codebases that support these best-practices, and plug-in tools to popular developer and client software that dramatically simplify deployment of services by providers, and the discovery and utilization of those services by their consumers. SADI Services are fully compliant with, and utilize only foundational Web standards; are simple to create and maintain for service providers; and can be discovered and utilized in a very intuitive way by biologist end-users. In addition, the SADI design patterns significantly improve the ability of software to automatically discover appropriate services based on user-needs, and automatically chain these into complex analytical workflows. We show that, when resources are exposed through SADI, data compliant with a given ontological model can be automatically gathered, or generated, from these distributed, non-coordinating resources - a behaviour we have not observed in any other Semantic system. Finally, we show that, using SADI, data dynamically generated from Web services can be explored in a manner very similar to data housed in
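
    Because SADI services consume and produce OWL class instances expressed in RDF, invoking one reduces to POSTing an RDF graph whose root node is typed to the service's input class. The sketch below builds such an input graph with rdflib; the ontology namespace, class and property names are hypothetical placeholders.

      # Sketch of preparing SADI-style input: an RDF graph whose root node
      # is typed to a service's input OWL class. Names are hypothetical.
      from rdflib import Graph, Literal, Namespace, RDF, URIRef

      EX = Namespace("http://example.org/ontology#")  # placeholder ontology

      graph = Graph()
      protein = URIRef("http://example.org/data/protein/P12345")
      graph.add((protein, RDF.type, EX.ProteinRecord))  # input OWL class
      graph.add((protein, EX.hasSequence, Literal("MKTAYIAKQR")))

      payload = graph.serialize(format="xml")  # RDF/XML for the service
      print(payload)
      # A client would POST `payload` to the service URL and parse the
      # returned RDF, typed to the service's declared output class.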

  20. Social and health dimensions of climate change in the Amazon.

    PubMed

    Brondízio, Eduardo S; de Lima, Ana C B; Schramski, Sam; Adams, Cristina

    2016-07-01

    The Amazon region has been part of climate change debates for decades, yet attention to its social and health dimensions has been limited. This paper assesses the literature on the social and health dimensions of climate change in the Amazon. A conceptual framework underscores the multiple stresses and exposures created by interactions between climate change and local social-environmental conditions. Using the Thomson Reuters Web of Science, this study bibliometrically assessed the overall literature on climate change in the Amazon, including Physical Sciences, Social Sciences, Anthropology, Environmental Science/Ecology and Public, Environmental/Occupational Health. From this assessment, a relevant sub-sample was selected and complemented with literature from the Brazilian database SciELO. This sample discusses three dimensions of climate change impacts in the region: livelihood changes, vector-borne diseases and microbial proliferation, and respiratory diseases. The analysis reveals an imbalance and a disconnect between the ecological and physical dimensions of climate change and its social and health dimensions, and between continental- and regional-scale climate analysis and sub-regional and local levels. Work on the social and health implications of climate change in the Amazon falls significantly behind other research areas, limiting the reliable information available for analytical models and for Amazonian policy-makers and society at large. Collaborative research is called for.

  1. Maintenance and Exchange of Learning Objects in a Web Services Based e-Learning System

    ERIC Educational Resources Information Center

    Vossen, Gottfried; Westerkamp, Peter

    2004-01-01

    "Web services" enable partners to exploit applications via the Internet. Individual services can be composed to build new and more complex ones with additional and more comprehensive functionality. In this paper, we apply the Web service paradigm to electronic learning, and show how to exchange and maintain learning objects is a…

  2. Service Learning and Building Community with the World Wide Web

    ERIC Educational Resources Information Center

    Longan, Michael W.

    2007-01-01

    The geography education literature touts the World Wide Web (Web) as a revolutionary educational tool, yet most accounts ignore its uses for public communication and creative expression. This article argues that students can be producers of content that is of service to local audiences. Drawing inspiration from the community networking movement,…

  3. Integrating Marine Observatories into a System-of-Systems: Messaging in the US Ocean Observatories Initiative

    DTIC Science & Technology

    2010-06-01

    [Record excerpt: author affiliations (Woods Hole, MA; Raytheon Intelligence and Information Systems, Aurora, CO; Scripps Institution of Oceanography, La Jolla) and reference fragments, including a citation of Amazon Web Services' Amazon Elastic Compute Cloud (Amazon EC2), http://aws.amazon.com/ec2/.]

  4. Prediction of toxicity and comparison of alternatives using WebTEST (Web-services Toxicity Estimation Software Tool)(Bled Slovenia)

    EPA Science Inventory

    A Java-based web service is being developed within the US EPA's Chemistry Dashboard to provide real-time estimates of toxicity values and physical properties. WebTEST can generate toxicity predictions directly from a simple URL which includes the endpoint, QSAR method, and ...

  5. NaviCell Web Service for network-based data visualization.

    PubMed

    Bonnet, Eric; Viara, Eric; Kuperstein, Inna; Calzone, Laurence; Cohen, David P A; Barillot, Emmanuel; Zinovyev, Andrei

    2015-07-01

    Data visualization is an essential element of biological research, required for obtaining insights and formulating new hypotheses on mechanisms of health and disease. NaviCell Web Service is a tool for network-based visualization of 'omics' data which implements several data visual representation methods and utilities for combining them. NaviCell Web Service uses Google Maps and semantic zooming to browse large biological network maps, represented in various formats, together with different types of molecular data mapped on top of them. For achieving this, the tool provides standard heatmaps, barplots and glyphs as well as the novel map staining technique for grasping large-scale trends in numerical values (such as the whole transcriptome) projected onto a pathway map. The web service provides a server mode, which allows automating visualization tasks and retrieving data from maps via RESTful (standard HTTP) calls. Bindings to different programming languages are provided (Python and R). We illustrate the purpose of the tool with several case studies using pathway maps created by different research groups, in which data visualization provides new insights into molecular mechanisms involved in systemic diseases such as cancer and neurodegenerative diseases. © The Author(s) 2015. Published by Oxford University Press on behalf of Nucleic Acids Research.

  6. DAVID-WS: a stateful web service to facilitate gene/protein list analysis.

    PubMed

    Jiao, Xiaoli; Sherman, Brad T; Huang, Da Wei; Stephens, Robert; Baseler, Michael W; Lane, H Clifford; Lempicki, Richard A

    2012-07-01

    The database for annotation, visualization and integrated discovery (DAVID), which can be freely accessed at http://david.abcc.ncifcrf.gov/, is a web-based online bioinformatics resource that aims to provide tools for the functional interpretation of large lists of genes/proteins. It has been used by researchers from more than 5000 institutes worldwide, with a daily submission rate of ∼1200 gene lists from ∼400 unique researchers, and has been cited by more than 6000 scientific publications. However, the current web interface does not support programmatic access to DAVID, and the uniform resource locator (URL)-based application programming interface (API) has a limit on URL size and is stateless in nature as it uses URL request and response messages to communicate with the server, without keeping any state-related details. DAVID-WS (web service) has been developed to automate user tasks by providing stateful web services to access DAVID programmatically without the need for human interactions. The web service and sample clients (written in Java, Perl, Python and Matlab) are made freely available under the DAVID License at http://david.abcc.ncifcrf.gov/content.jsp?file=WS.html.
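
    The contrast with the stateless URL API can be sketched briefly: a stateful service keeps the uploaded list on the server, so a client authenticates once, adds a list, and then issues successive analysis calls against it. The endpoints below are hypothetical REST-style placeholders used only to show the interaction pattern; the actual DAVID-WS interface is SOAP-based.

      # Illustration of a stateful interaction pattern; the real DAVID-WS
      # is SOAP-based, so these REST-style endpoints are placeholders.
      import requests

      base = "https://example.org/david-ws"  # placeholder
      session = requests.Session()           # cookies carry the state

      session.post(f"{base}/authenticate", data={"email": "user@example.org"})
      session.post(f"{base}/addList", data={
          "ids": "BRCA1,TP53,EGFR",
          "idType": "OFFICIAL_GENE_SYMBOL",
          "listType": "gene",
      })
      # Later calls operate on the list stored server-side for this session.
      report = session.get(f"{base}/chartReport", params={"threshold": 0.05})
      print(report.text)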

  7. A service relation model for web-based land cover change detection

    NASA Astrophysics Data System (ADS)

    Xing, Huaqiao; Chen, Jun; Wu, Hao; Zhang, Jun; Li, Songnian; Liu, Boyu

    2017-10-01

    Change detection with remotely sensed imagery is a critical step in land cover monitoring and updating. Although a variety of algorithms and models have been developed, none of them is universal for all cases. The selection of appropriate algorithms and the construction of processing workflows depend largely on expert knowledge of the "algorithm-data" relations between change detection algorithms and the imagery data used. This paper presents a service relation model for land cover change detection that integrates experts' knowledge of the "algorithm-data" relations into web-based geo-processing. The "algorithm-data" relations are mapped into a set of web service relations through the analysis of functional and non-functional service semantics. These service relations are further classified into three different levels, i.e., the interface, behavior and execution levels. A service relation model is then established using the Object and Relation Diagram (ORD) approach to represent the multi-granularity services and their relations for change detection. A set of semantic matching rules is built and used for deriving on-demand change detection service chains from the service relation model. A web-based prototype system was developed in the .NET environment, which encapsulates nine change detection and pre-processing algorithms and represents their service relations as an ORD. Three test areas from Shandong and Hebei provinces, China, with different imagery conditions were selected for online change detection experiments, and the results indicate that on-demand service chains can be generated according to different users' demands.

  8. WEB-IS2: Next Generation Web Services Using Amira Visualization Package

    NASA Astrophysics Data System (ADS)

    Yang, X.; Wang, Y.; Bollig, E. F.; Kadlec, B. J.; Garbow, Z. A.; Yuen, D. A.; Erlebacher, G.

    2003-12-01

    Amira (www.amiravis.com) is a powerful 3-D visualization package that has recently been employed by the science and engineering communities to gain insight into their data. We present a new web-based interface to Amira, packaged in a Java applet. We have developed a module called WEB-IS/Amira (WEB-IS2), which provides web-based access to Amira. This tool allows earth scientists to manipulate Amira controls remotely and to analyze, render and view large datasets over the internet, without regard for time or location. This could have important ramifications for GRID computing. The design of our implementation will soon allow multiple users to collaborate visually by manipulating a single dataset through a variety of client devices. These clients will only require a browser capable of displaying Java applets. As the deluge of data continues, innovative solutions that maximize ease of use without sacrificing efficiency or flexibility will continue to gain in importance, particularly in the Earth sciences. Major initiatives, such as Earthscope (http://www.earthscope.org), which will generate at least a terabyte of data daily, stand to profit enormously from a system such as WEB-IS/Amira (WEB-IS2). We discuss our use of SOAP (Livingston, D., Advanced SOAP for Web development, Prentice Hall, 2002), a two-way communication protocol, as a means of providing remote commands and efficient point-to-point transfer of binary image data. We will present our initial experiences with the use of NaradaBrokering (www.naradabrokering.org) as a means to decouple clients and servers. Information is submitted to the system as a published item, while it is retrieved through a subscription mechanism, via what are known as "topics". These topic headers, their contents, and the list of subscribers are automatically tracked by NaradaBrokering. This novel approach promises a high degree of fault tolerance, flexibility with respect to client diversity, and language independence for the

  9. A study of an adaptive replication framework for orchestrated composite web services.

    PubMed

    Mohamed, Marwa F; Elyamany, Hany F; Nassar, Hamed M

    2013-01-01

    Replication is considered one of the most important techniques for improving the Quality of Service (QoS) of published Web Services. It has achieved impressive success in managing resource sharing and usage in order to moderate the energy consumed in IT environments. For a robust and successful replication process, attention should be paid to suitable timing as well as to the constraints and capabilities of the environment in which the process runs. The replication process is time-consuming, since outsourcing new replicas to other hosts is lengthy. Furthermore, most of the business processes implemented over the Web today are composed of multiple Web services working together in two main styles: orchestration and choreography. Accomplishing replication over such business processes is a further challenge due to the complexity and flexibility involved. In this paper, we present an adaptive replication framework for regular and orchestrated composite Web services. The suggested framework includes a number of components for detecting unexpected and undesirable events, including failure or overloading, that might occur when consuming the originally published web services. It also includes a specific replication controller to manage the replication process and select the best host to encapsulate a new replica. In addition, it includes a component for predicting the incoming load in order to decrease the time needed for outsourcing new replicas, enhancing performance greatly. A simulation environment has been created to measure the performance of the suggested framework. The results indicate that the adaptive replication with prediction scenario is the best option for enhancing the performance of the replication process in an online business environment.
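
    The prediction component can be illustrated with a simple scheme: forecast the next interval's load from recent history and provision a replica before the overload arrives. The moving-average forecast, window and per-replica capacity below are illustrative assumptions, not the paper's exact predictor.

      # Illustrative load predictor for proactive replication: a moving
      # average forecasts the next interval's load and sizes the replica
      # pool early. Window and capacity values are assumptions.
      from collections import deque

      WINDOW = 5                     # intervals of history (assumed)
      CAPACITY_PER_REPLICA = 100.0   # requests/s per replica (assumed)

      history = deque(maxlen=WINDOW)

      def predicted_load():
          return sum(history) / len(history) if history else 0.0

      def replicas_needed(load):
          return max(1, int(load // CAPACITY_PER_REPLICA) + 1)

      for observed in [40, 80, 120, 180, 260, 310]:  # sample load trace
          history.append(observed)
          forecast = predicted_load()
          print(f"observed={observed:>4}  forecast={forecast:6.1f}  "
                f"replicas={replicas_needed(forecast)}")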

  10. Web Services as Building Blocks for an Open Coastal Observing System

    NASA Astrophysics Data System (ADS)

    Breitbach, G.; Krasemann, H.

    2012-04-01

    Coastal observing systems need to integrate different observing methods, like remote sensing, in-situ measurements, and models, into a synoptic view of the state of the observed region. This integration can be based solely on web services combining data and metadata. Such an approach is pursued for COSYNA (Coastal Observing System for Northern and Arctic Seas). Data from satellite and radar remote sensing and measurements from buoys, stations and FerryBoxes form the observation part of COSYNA. These data are assimilated into models to create pre-operational forecasts. For discovering data, an OGC Web Feature Service (WFS) is used by the COSYNA data portal. This Web Feature Service holds the metadata necessary not only for finding data, but also the URLs of the web services used to view and download the data. To make the data from different sources comparable, a common vocabulary is needed. For COSYNA, the standard names from the CF conventions are stored within the metadata whenever possible. For the metadata, an INSPIRE- and ISO 19115-compatible data format is used. The WFS is fed from the metadata system using database views. Actual data are stored in two different formats: in NetCDF files for gridded data and in an RDBMS for time-series-like data. The web services are mostly standards-based; the standards are mainly OGC standards. Maps are created from NetCDF files with the help of the ncWMS tool, whereas a self-developed Java servlet is used for maps of moving measurement platforms; in this case, data download is offered via OGC SOS. For NetCDF files, OPeNDAP is used for data download. The OGC CSW is used for accessing extended metadata. The COSYNA data management concept, which is independent of the specific services used in COSYNA, will be presented. This concept is parameter- and data-centric and might be useful for other observing systems.
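
    For concreteness, below is a minimal sketch of the kind of OGC WFS GetFeature request a data portal could issue against such a metadata service; the base URL and feature type name are hypothetical placeholders.

      # Minimal OGC WFS 1.1.0 GetFeature request, as a discovery portal
      # might issue it; base URL and feature type are hypothetical.
      import requests

      base_url = "http://example.org/cosyna/wfs"  # placeholder
      params = {
          "service": "WFS",
          "version": "1.1.0",
          "request": "GetFeature",
          "typeName": "cosyna:observation_metadata",  # hypothetical type
          "maxFeatures": "10",
      }
      response = requests.get(base_url, params=params, timeout=30)
      response.raise_for_status()
      print(response.text[:500])  # start of the GML feature collection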

  11. The QuakeSim Project: Web Services for Managing Geophysical Data and Applications

    NASA Astrophysics Data System (ADS)

    Pierce, Marlon E.; Fox, Geoffrey C.; Aktas, Mehmet S.; Aydin, Galip; Gadgil, Harshawardhan; Qi, Zhigang; Sayar, Ahmet

    2008-04-01

    We describe our distributed systems research efforts to build the “cyberinfrastructure” components that constitute a geophysical Grid, or more accurately, a Grid of Grids. Service-oriented computing principles are used to build a distributed infrastructure of Web accessible components for accessing data and scientific applications. Our data services fall into two major categories: Archival, database-backed services based around Geographical Information System (GIS) standards from the Open Geospatial Consortium, and streaming services that can be used to filter and route real-time data sources such as Global Positioning System data streams. Execution support services include application execution management services and services for transferring remote files. These data and execution service families are bound together through metadata information and workflow services for service orchestration. Users may access the system through the QuakeSim scientific Web portal, which is built using a portlet component approach.

  12. Programmatic access to data and information at the IRIS DMC via web services

    NASA Astrophysics Data System (ADS)

    Weertman, B. R.; Trabant, C.; Karstens, R.; Suleiman, Y. Y.; Ahern, T. K.; Casey, R.; Benson, R. B.

    2011-12-01

    The IRIS Data Management Center (DMC) has developed a suite of web services that provide access to the DMC's time series holdings, their related metadata and earthquake catalogs. In addition, services are available to perform simple, on-demand time series processing at the DMC before data are shipped to the user. The primary goal is to provide programmatic access to data and processing services in a manner usable by, and useful to, the research community. The web services are relatively simple to understand and use, and they will form the foundation on which future DMC access tools will be built. Based on standard Web technologies, they can be accessed programmatically with a wide range of programming languages (e.g. Perl, Python, Java), with command line utilities such as wget and curl, or with any web browser. We anticipate these services being used for everything from simple command line access and use in shell scripts and higher-level programming languages to integration within complex data processing software. In addition to improving access to our data by the seismological community, the web services will also make our data more accessible to other disciplines. The web services available from the DMC include ws-bulkdataselect for the retrieval of large volumes of miniSEED data, ws-timeseries for the retrieval of individual segments of time series data in a variety of formats (miniSEED, SAC, ASCII, audio WAVE, and PNG plots) with optional signal processing, ws-station for station metadata in StationXML format, ws-resp for the retrieval of instrument response in RESP format, ws-sacpz for the retrieval of sensor response in the SAC poles and zeros convention, and ws-event for the retrieval of earthquake catalogs. To make the services even easier to use, the DMC is developing a library that allows Java programmers to seamlessly retrieve and integrate DMC information into their own programs. The library will handle all aspects of dealing with the services and will parse the returned
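
    As a flavour of how such services are called, the sketch below requests a demeaned segment of time series data in ASCII form, in the style of the ws-timeseries service described above. The endpoint and parameter names follow the DMC's documented conventions but should be checked against the current service documentation before use.

      # Fetching a processed time-series segment in the style of the DMC's
      # timeseries service; verify the endpoint and parameter names against
      # current documentation.
      import requests

      url = "http://service.iris.edu/irisws/timeseries/1/query"
      params = {
          "net": "IU", "sta": "ANMO", "loc": "00", "cha": "BHZ",
          "starttime": "2010-02-27T06:30:00",
          "endtime": "2010-02-27T07:30:00",
          "demean": "true",   # example on-demand processing option
          "output": "ascii",  # or miniseed, sacbl, plot, ...
      }
      response = requests.get(url, params=params, timeout=60)
      response.raise_for_status()
      print(response.text.splitlines()[0])  # header line of the ASCII output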

  13. Composition of web services using Markov decision processes and dynamic programming.

    PubMed

    Uc-Cetina, Víctor; Moo-Mena, Francisco; Hernandez-Ucan, Rafael

    2015-01-01

    We propose a Markov decision process model for solving the Web service composition (WSC) problem. Iterative policy evaluation, value iteration, and policy iteration algorithms are used to experimentally validate our approach, with artificial and real data. The experimental results show the reliability of the model and the methods employed, with policy iteration being the best one in terms of the minimum number of iterations needed to estimate an optimal policy with the highest Quality of Service attributes. Our experimental work shows how the solution of a WSC problem involving a set of 100,000 individual Web services, where a valid composition requires the selection of 1,000 services from the available set, can be computed in the worst case in less than 200 seconds, using an Intel Core i5 computer with 6 GB RAM. Moreover, a real WSC problem involving only 7 individual Web services requires less than 0.08 seconds, using the same computational power. Finally, a comparison with two popular reinforcement learning algorithms, Sarsa and Q-learning, shows that these algorithms require one to two orders of magnitude more time than policy iteration, iterative policy evaluation, and value iteration to handle WSC problems of the same complexity.
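
    To ground the approach, here is a compact value-iteration sketch on a toy composition problem in which states are workflow stages and actions are candidate services with QoS-derived rewards. The tiny deterministic transition model is invented purely for illustration and is far simpler than the paper's experimental setting.

      # Toy value iteration for service composition: states are workflow
      # stages, actions are candidate services, rewards reflect QoS. The
      # transition model is invented purely for illustration.
      GAMMA, THETA = 0.95, 1e-6

      # transitions[state][action] = (next_state, reward)
      transitions = {
          "start": {"svcA": ("mid", 0.8), "svcB": ("mid", 0.6)},
          "mid":   {"svcC": ("done", 0.9), "svcD": ("done", 0.7)},
          "done":  {},  # terminal state
      }

      V = {s: 0.0 for s in transitions}
      while True:
          delta = 0.0
          for s, actions in transitions.items():
              if not actions:
                  continue
              best = max(r + GAMMA * V[ns] for ns, r in actions.values())
              delta = max(delta, abs(best - V[s]))
              V[s] = best
          if delta < THETA:
              break

      policy = {s: max(a, key=lambda name: a[name][1] + GAMMA * V[a[name][0]])
                for s, a in transitions.items() if a}
      print(V, policy)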

  14. Composition of Web Services Using Markov Decision Processes and Dynamic Programming

    PubMed Central

    Uc-Cetina, Víctor; Moo-Mena, Francisco; Hernandez-Ucan, Rafael

    2015-01-01

    We propose a Markov decision process model for solving the Web service composition (WSC) problem. Iterative policy evaluation, value iteration, and policy iteration algorithms are used to experimentally validate our approach, with artificial and real data. The experimental results show the reliability of the model and the methods employed, with policy iteration being the best one in terms of the minimum number of iterations needed to estimate an optimal policy with the highest Quality of Service attributes. Our experimental work shows how the solution of a WSC problem involving a set of 100,000 individual Web services, where a valid composition requires the selection of 1,000 services from the available set, can be computed in the worst case in less than 200 seconds, using an Intel Core i5 computer with 6 GB RAM. Moreover, a real WSC problem involving only 7 individual Web services requires less than 0.08 seconds, using the same computational power. Finally, a comparison with two popular reinforcement learning algorithms, Sarsa and Q-learning, shows that these algorithms require one to two orders of magnitude more time than policy iteration, iterative policy evaluation, and value iteration to handle WSC problems of the same complexity. PMID:25874247

  15. WSGB: A Web Service-Based Growing Book

    ERIC Educational Resources Information Center

    Dow, C. R.; Huang, L. H.; Chen, K. H.; Chiu, J. C.; Lin, C. M.

    2006-01-01

    Growing Book refers to an electronic textbook that is co-developed, and has the ability to be constantly maintained, by groups of independent authors, thus creating a rich and ever-growing learning environment that can be conveniently accessible from anywhere. This work designs and implements a Web Service-based Growing Book that has the merits of…

  16. Free Factories: Unified Infrastructure for Data Intensive Web Services

    PubMed Central

    Zaranek, Alexander Wait; Clegg, Tom; Vandewege, Ward; Church, George M.

    2010-01-01

    We introduce the Free Factory, a platform for deploying data-intensive web services using small clusters of commodity hardware and free software. Independently administered virtual machines called Freegols give application developers the flexibility of a general purpose web server, along with access to distributed batch processing, cache and storage services. Each cluster exploits idle RAM and disk space for cache, and reserves disks in each node for high bandwidth storage. The batch processing service uses a variation of the MapReduce model. Virtualization allows every CPU in the cluster to participate in batch jobs. Each 48-node cluster can achieve 4-8 gigabytes per second of disk I/O. Our intent is to use multiple clusters to process hundreds of simultaneous requests on multi-hundred terabyte data sets. Currently, our applications achieve 1 gigabyte per second of I/O with 123 disks by scheduling batch jobs on two clusters, one of which is located in a remote data center. PMID:20514356

  17. Job submission and management through web services: the experience with the CREAM service

    NASA Astrophysics Data System (ADS)

    Aiftimiei, C.; Andreetto, P.; Bertocco, S.; Fina, S. D.; Ronco, S. D.; Dorigo, A.; Gianelle, A.; Marzolla, M.; Mazzucato, M.; Sgaravatto, M.; Verlato, M.; Zangrando, L.; Corvo, M.; Miccio, V.; Sciaba, A.; Cesini, D.; Dongiovanni, D.; Grandi, C.

    2008-07-01

    Modern Grid middleware is built around components providing basic functionality, such as data storage, authentication, security, job management, resource monitoring and reservation. In this paper we describe the Computing Resource Execution and Management (CREAM) service. CREAM provides a Web service-based job execution and management capability for Grid systems; in particular, it is being used within the gLite middleware. CREAM exposes a Web service interface allowing conforming clients to submit computational jobs to a Local Resource Management System and manage them. We developed a special component, called ICE (Interface to CREAM Environment), to integrate CREAM into gLite. ICE transfers job submissions and cancellations from the Workload Management System, allowing users to manage CREAM jobs from the gLite User Interface. This paper describes some recent studies aimed at assessing the performance and reliability of CREAM and ICE; those tests have been performed as part of the acceptance tests for the integration of CREAM and ICE into gLite. We also discuss recent work towards enhancing CREAM with a BES- and JSDL-compliant interface.

  18. Using JavaScript and the FDSN web service to create an interactive earthquake information system

    NASA Astrophysics Data System (ADS)

    Fischer, Kasper D.

    2015-04-01

    The FDSN web service provides a web interface for accessing earthquake metadata (e.g., event or station information) and waveform data over the internet. Requests are sent to a server as URLs and the output is either XML or miniSEED, which is hard for humans to read but easy to process with software. Several data centers already support the FDSN web service, e.g., USGS, IRIS and ORFEUS, and it is also part of the Seiscomp3 (http://www.seiscomp3.org) software. The Seismological Observatory of the Ruhr-University switched to Seiscomp3 as its standard software for the analysis of mining-induced earthquakes at the beginning of 2014. This made it necessary to create a new web-based earthquake information service for publishing results to the general public. This has been done by processing the output of an FDSN web service query with JavaScript running in a standard browser. The result is a single web page presenting the observed events, together with further information about events and stations, both as a table and on an interactive map. In addition, the user can download event information, waveform data and station data in different formats such as miniSEED, QuakeML or FDSNxml. The developed code and all libraries used are open source and freely available.
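
    A minimal event query of the kind described above can be issued with plain Python. The endpoint below is the IRIS fdsnws-event service mentioned in the entry; the time window and magnitude threshold are arbitrary examples, and the parameter names follow the FDSN web service specification.

        from urllib.parse import urlencode
        from urllib.request import urlopen

        params = urlencode({
            "starttime": "2015-01-01",
            "endtime": "2015-01-31",
            "minmagnitude": 6.0,
            "format": "text",        # "xml" returns QuakeML instead
        })
        url = "https://service.iris.edu/fdsnws/event/1/query?" + params
        with urlopen(url) as response:
            for line in response.read().decode().splitlines()[:5]:
                print(line)          # pipe-separated event rows, header first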

  19. Java bioinformatics analysis web services for multiple sequence alignment--JABAWS:MSA.

    PubMed

    Troshin, Peter V; Procter, James B; Barton, Geoffrey J

    2011-07-15

    JABAWS is a web services framework that simplifies the deployment of web services for bioinformatics. JABAWS:MSA provides services for five multiple sequence alignment (MSA) methods (Probcons, T-coffee, Muscle, Mafft and ClustalW), and is the system employed by the Jalview multiple sequence analysis workbench since version 2.6. A fully functional, easy-to-set-up server is provided as a Virtual Appliance (VA), which can be run on most operating systems that support a virtualization environment such as VMware or Oracle VirtualBox. JABAWS is also distributed as a Web Application aRchive (WAR) and can be configured to run on a single computer and/or a cluster managed by Grid Engine, LSF or other queuing systems that support DRMAA. JABAWS:MSA gives clients full access to each application's parameters and allows administrators to specify named parameter preset combinations and execution limits for each application through simple configuration files. The JABAWS command-line client allows integration of JABAWS services into conventional scripts. JABAWS is made freely available under the Apache 2 license and can be obtained from: http://www.compbio.dundee.ac.uk/jabaws.

  20. The anatomy of a World Wide Web library service: the BONES demonstration project. Biomedically Oriented Navigator of Electronic Services.

    PubMed Central

    Schnell, E H

    1995-01-01

    In 1994, the John A. Prior Health Sciences Library at Ohio State University began to develop a World Wide Web demonstration project, the Biomedically Oriented Navigator of Electronic Services (BONES). The initial intent of BONES was to facilitate the health professional's access to Internet resources by organizing them in a systematic manner. The project not only met this goal but also helped identify the resources needed to launch a full-scale Web library service. This paper discusses the tasks performed and resources used in the development of BONES and describes the creation and organization of documents on the BONES Web server. The paper also discusses the outcomes of the project and the impact on the library's staff and services. PMID:8547903

  1. DAVID-WS: a stateful web service to facilitate gene/protein list analysis

    PubMed Central

    Jiao, Xiaoli; Sherman, Brad T.; Huang, Da Wei; Stephens, Robert; Baseler, Michael W.; Lane, H. Clifford; Lempicki, Richard A.

    2012-01-01

    Summary: The database for annotation, visualization and integrated discovery (DAVID), which can be freely accessed at http://david.abcc.ncifcrf.gov/, is a web-based online bioinformatics resource that aims to provide tools for the functional interpretation of large lists of genes/proteins. It has been used by researchers from more than 5000 institutes worldwide, with a daily submission rate of ∼1200 gene lists from ∼400 unique researchers, and has been cited by more than 6000 scientific publications. However, the current web interface does not support programmatic access to DAVID, and the uniform resource locator (URL)-based application programming interface (API) has a limit on URL size and is stateless in nature as it uses URL request and response messages to communicate with the server, without keeping any state-related details. DAVID-WS (web service) has been developed to automate user tasks by providing stateful web services to access DAVID programmatically without the need for human interactions. Availability: The web service and sample clients (written in Java, Perl, Python and Matlab) are made freely available under the DAVID License at http://david.abcc.ncifcrf.gov/content.jsp?file=WS.html. Contact: xiaoli.jiao@nih.gov; rlempicki@nih.gov PMID:22543366

  2. 76 FR 28439 - Submission for OMB Review; Comment Request; NCI Cancer Genetics Services Directory Web-Based...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-05-17

    ...; Comment Request; NCI Cancer Genetics Services Directory Web-Based Application Form and Update Mailer... currently valid OMB control number. Proposed Collection: Title: NCI Cancer Genetics Services Directory Web... application form and the Web-based update mailer is to collect information about genetics professionals to be...

  3. Stakeholder Expectations of Service Quality in a University Web Portal

    NASA Astrophysics Data System (ADS)

    Tate, Mary; Evermann, Joerg; Hope, Beverley; Barnes, Stuart

    Online service quality is a much-studied concept. There is considerable evidence that user expectations and perceptions of self-service and online service quality differ in different business domains. In addition, the nature of online services is continually changing and universities have been at the forefront of this change, with university websites increasingly acting as a portal for a wide range of online transactions for a wide range of stakeholders. In this qualitative study, we conduct focus groups with a range of stakeholders in a university web portal. Our study offers a number of insights into the changing nature of the relationship between organisations and customers. New technologies are influencing customer expectations. Customers increasingly expect organisations to have integrated information systems, and to utilise new technologies such as SMS and web portals. Organisations can be slow to adopt a customer-centric viewpoint, and persist in providing interfaces that are inconsistent or require inside knowledge of organisational structures and processes. This has a negative effect on customer perceptions.

  4. 75 FR 57086 - Submission for Review: Federal Cyber Service: Scholarship for Service (SFS) Registration Web Site

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-09-17

    ... OFFICE OF PERSONNEL MANAGEMENT Submission for Review: Federal Cyber Service: Scholarship for Service (SFS) Registration Web Site AGENCY: Office of Personnel Management. ACTION: 30-Day Notice and request for comments. SUMMARY: The Office of Personnel Management (OPM), Human Resources Solutions...

  5. Development of a Dynamic Web Mapping Service for Vegetation Productivity Using Earth Observation and in situ Sensors in a Sensor Web Based Approach

    PubMed Central

    Kooistra, Lammert; Bergsma, Aldo; Chuma, Beatus; de Bruin, Sytze

    2009-01-01

    This paper describes the development of a sensor web based approach which combines earth observation and in situ sensor data to derive typical information offered by a dynamic web mapping service (WMS). A prototype has been developed which provides daily maps of vegetation productivity for the Netherlands with a spatial resolution of 250 m. Daily available MODIS surface reflectance products and meteorological parameters obtained through a Sensor Observation Service (SOS) were used as input for a vegetation productivity model. This paper presents the vegetation productivity model, the sensor data sources and the implementation of the automated processing facility. Finally, an evaluation is made of the opportunities and limitations of sensor web based approaches for the development of web services which combine both satellite and in situ sensor sources. PMID:22574019
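
    A client retrieves such daily maps with a standard WMS GetMap request. The Python sketch below only assembles the request URL; the server address and layer name are hypothetical, while the query parameters are the standard WMS 1.1.1 set, with the optional TIME dimension selecting the daily map.

        from urllib.parse import urlencode

        params = urlencode({
            "service": "WMS", "version": "1.1.1", "request": "GetMap",
            "layers": "vegetation_productivity",   # hypothetical layer name
            "styles": "",
            "srs": "EPSG:28992",                   # Dutch national grid
            "bbox": "10000,300000,280000,620000",  # minx,miny,maxx,maxy
            "width": 540, "height": 640,
            "format": "image/png",
            "time": "2009-06-15",                  # daily map selection
        })
        print("https://example.org/wms?" + params)  # fetch with any HTTP client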

  6. Rail-dbGaP: analyzing dbGaP-protected data in the cloud with Amazon Elastic MapReduce.

    PubMed

    Nellore, Abhinav; Wilks, Christopher; Hansen, Kasper D; Leek, Jeffrey T; Langmead, Ben

    2016-08-15

    Public archives contain thousands of trillions of bases of valuable sequencing data. More than 40% of the Sequence Read Archive is human data protected by provisions such as dbGaP. To analyse dbGaP-protected data, researchers must typically work with IT administrators and signing officials to ensure all levels of security are implemented at their institution. This is a major obstacle, impeding reproducibility and reducing the utility of archived data. We present a protocol and software tool for analyzing protected data in a commercial cloud. The protocol, Rail-dbGaP, is applicable to any tool running on Amazon Web Services Elastic MapReduce. The tool, Rail-RNA v0.2, is a spliced aligner for RNA-seq data, which we demonstrate by running on 9662 samples from the dbGaP-protected GTEx consortium dataset. The Rail-dbGaP protocol makes explicit for the first time the steps an investigator must take to develop Elastic MapReduce pipelines that analyse dbGaP-protected data in a manner compliant with NIH guidelines. Rail-RNA automates implementation of the protocol, making it easy for typical biomedical investigators to study protected RNA-seq data, regardless of their local IT resources or expertise. Rail-RNA is available from http://rail.bio. Technical details on the Rail-dbGaP protocol as well as an implementation walkthrough are available at https://github.com/nellore/rail-dbgap. Detailed instructions on running Rail-RNA on dbGaP-protected data using Amazon Web Services are available at http://docs.rail.bio/dbgap/. Contact: anellore@gmail.com or langmea@cs.jhu.edu. Supplementary data are available at Bioinformatics online. © The Author 2016. Published by Oxford University Press.
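
    For orientation, launching a Hadoop streaming step on Amazon Elastic MapReduce looks roughly like the boto3 sketch below. This is a generic illustration only: the bucket names, scripts and instance settings are placeholders, and the additional security controls that the Rail-dbGaP protocol specifies are omitted.

        import boto3

        emr = boto3.client("emr", region_name="us-east-1")
        response = emr.run_job_flow(
            Name="streaming-demo",
            ReleaseLabel="emr-5.36.0",
            Instances={
                "MasterInstanceType": "m5.xlarge",
                "SlaveInstanceType": "m5.xlarge",
                "InstanceCount": 3,
                "KeepJobFlowAliveWhenNoSteps": False,
            },
            Steps=[{
                "Name": "streaming step",
                "ActionOnFailure": "TERMINATE_CLUSTER",
                "HadoopJarStep": {
                    "Jar": "command-runner.jar",
                    "Args": ["hadoop-streaming",
                             "-mapper", "s3://my-bucket/mapper.py",    # placeholder
                             "-reducer", "s3://my-bucket/reducer.py",  # placeholder
                             "-input", "s3://my-bucket/input/",
                             "-output", "s3://my-bucket/output/"],
                },
            }],
            JobFlowRole="EMR_EC2_DefaultRole",
            ServiceRole="EMR_DefaultRole",
        )
        print("cluster id:", response["JobFlowId"])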

  7. EnviroAtlas - Austin, TX - Demographics by Block Group Web Service

    EPA Pesticide Factsheets

    This EnviroAtlas web service supports research and online mapping activities related to EnviroAtlas (https://enviroatlas.epa.gov/EnviroAtlas). The dataset summarizes key demographic groups for the EnviroAtlas community and was produced by the US EPA. EnviroAtlas (https://www.epa.gov/enviroatlas) allows the user to interact with a web-based, easy-to-use mapping application to view and analyze multiple ecosystem services for the contiguous United States. The dataset is available as downloadable data (https://edg.epa.gov/data/Public/ORD/EnviroAtlas) or as an EnviroAtlas map service. Additional descriptive information about each attribute in this dataset can be found in its associated EnviroAtlas Fact Sheet (https://www.epa.gov/enviroatlas/enviroatlas-fact-sheets).

  8. Utilization of services in a randomized trial testing phone- and web-based interventions for smoking cessation.

    PubMed

    Zbikowski, Susan M; Jack, Lisa M; McClure, Jennifer B; Deprey, Mona; Javitz, Harold S; McAfee, Timothy A; Catz, Sheryl L; Richards, Julie; Bush, Terry; Swan, Gary E

    2011-05-01

    Phone counseling has become standard for behavioral smoking cessation treatment. Newer options include Web and integrated phone-Web treatment. No prior research, to our knowledge, has systematically compared the effectiveness of these three treatment modalities in a randomized trial. Understanding how utilization varies by mode, the impact of utilization on outcomes, and predictors of utilization across each mode could lead to improved treatments. One thousand two hundred and two participants were randomized to phone, Web, or combined phone-Web cessation treatment. Services varied by modality and were tracked using automated systems. All participants received 12 weeks of varenicline, printed guides, an orientation call, and access to a phone supportline. Self-report data were collected at baseline and 6-month follow-up. Overall, participants utilized phone services more often than the Web-based services. Among treatment groups with Web access, a significant proportion logged in only once (37% phone-Web, 41% Web), and those in the phone-Web group logged in less often than those in the Web group (mean = 2.4 vs. 3.7, p = .0001). Use of the phone also was correlated with increased use of the Web. In multivariate analyses, greater use of the phone- or Web-based services was associated with higher cessation rates. Finally, older age and the belief that certain treatments could improve success were consistent predictors of greater utilization across groups. Other predictors varied by treatment group. Opportunities for enhancing treatment utilization exist, particularly for Web-based programs. Increasing utilization more broadly could result in better overall treatment effectiveness for all intervention modalities.

  9. 75 FR 20400 - Submission for Review: Federal Cyber Service: Scholarship for Service (SFS) Registration Web Site

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-04-19

    ... OFFICE OF PERSONNEL MANAGEMENT Submission for Review: Federal Cyber Service: Scholarship for Service (SFS) Registration Web Site AGENCY: U.S. Office of Personnel Management. ACTION: 60-Day Notice and request for comments. SUMMARY: The Human Resources Solutions, Office of Personnel Management (OPM) offers...

  10. Clearing your Desk! Software and Data Services for Collaborative Web Based GIS Analysis

    NASA Astrophysics Data System (ADS)

    Tarboton, D. G.; Idaszak, R.; Horsburgh, J. S.; Ames, D. P.; Goodall, J. L.; Band, L. E.; Merwade, V.; Couch, A.; Hooper, R. P.; Maidment, D. R.; Dash, P. K.; Stealey, M.; Yi, H.; Gan, T.; Gichamo, T.; Yildirim, A. A.; Liu, Y.

    2015-12-01

    Can your desktop computer crunch the large GIS datasets that are becoming increasingly common across the geosciences? Do you have access to or the know-how to take advantage of advanced high performance computing (HPC) capability? Web based cyberinfrastructure takes work off your desk or laptop computer and onto infrastructure or "cloud" based data and processing servers. This talk will describe the HydroShare collaborative environment and web based services being developed to support the sharing and processing of hydrologic data and models. HydroShare supports the upload, storage, and sharing of a broad class of hydrologic data including time series, geographic features and raster datasets, multidimensional space-time data, and other structured collections of data. Web service tools and a Python client library provide researchers with access to HPC resources without requiring them to become HPC experts. This reduces the time and effort spent in finding and organizing the data required to prepare the inputs for hydrologic models and facilitates the management of online data and execution of models on HPC systems. This presentation will illustrate the use of web based data and computation services from both the browser and desktop client software. These web-based services implement the Terrain Analysis Using Digital Elevation Model (TauDEM) tools for watershed delineation, generation of hydrology-based terrain information, and preparation of hydrologic model inputs. They allow users to develop scripts on their desktop computer that call analytical functions that are executed completely in the cloud, on HPC resources using input datasets stored in the cloud, without installing specialized software, learning how to use HPC, or transferring large datasets back to the user's desktop. These cases serve as examples for how this approach can be extended to other models to enhance the use of web and data services in the geosciences.
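
    As an indication of how such services are called from a script, the sketch below lists public HydroShare resources over HTTP with plain Python. The /hsapi/resource/ endpoint and the JSON field names are assumptions based on HydroShare's documented REST interface and may differ in detail.

        import json
        from urllib.request import urlopen

        # Assumed REST endpoint returning a paginated JSON listing.
        url = "https://www.hydroshare.org/hsapi/resource/"
        with urlopen(url) as response:
            payload = json.load(response)

        for resource in payload.get("results", [])[:5]:
            print(resource.get("resource_title"), "->", resource.get("resource_id"))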

  11. Visualization of data in radiotherapy using web services for optimization of workflow.

    PubMed

    Kirrmann, Stefan; Gainey, Mark; Röhner, Fred; Hall, Markus; Bruggmoser, Gregor; Schmucker, Marianne; Heinemann, Felix E

    2015-01-20

    Every day a large amount of data is produced within a radiotherapy department. Although these data are available in one form or another within the centralised systems, they are often not in the form that is of interest to departmental staff. This work presents a flexible, browser-based reporting and visualization system for clinical and scientific use, not currently found in commercially available software such as MOSAIQ(TM) or ARIA(TM). Moreover, the majority of users merely wish to retrieve data, not record and/or modify data. Thus the idea was conceived to present the user with all relevant information in a simple and effective manner in the form of web services. Due to the widespread availability of the internet, most people can master the use of a web browser. Ultimately the aim is to optimize clinical procedures, enhance transparency and improve revenue. Our working group (BAS) examined many internal procedures to find out whether they contained information relevant to our purposes. After the results were collated, it was necessary to select an effective software platform. After a more detailed analysis of all data, it became clear that the implementation of web services was appropriate. In our institute several such web-based information services had already been developed over the last few years, with which we gained invaluable experience. Moreover, we strived for high acceptance amongst staff members. By employing web services, we attained high effectiveness, transparency and efficient information processing for the user. Furthermore, we achieved an almost maintenance-free and low-support system. The aim of the project, making web-based information available to the user from the departmental system MOSAIQ, the physician letter system MEDATEC(R) and the central findings server MiraPlus (laboratory, pathology and radiology), was implemented without restrictions. Due to widespread use of web-based technology the training effort was effectively nil, since

  12. Using secure web services to visualize poison center data for nationwide biosurveillance: a case study.

    PubMed

    Savel, Thomas G; Bronstein, Alvin; Duck, William; Rhodes, M Barry; Lee, Brian; Stinn, John; Worthen, Katherine

    2010-01-01

    Real-time surveillance systems are valuable for timely response to public health emergencies. It has been challenging to leverage existing surveillance systems in state and local communities, and, using a centralized architecture, add new data sources and analytical capacity. Because this centralized model has proven to be difficult to maintain and enhance, the US Centers for Disease Control and Prevention (CDC) has been examining the ability to use a federated model based on secure web services architecture, with data stewardship remaining with the data provider. As a case study for this approach, the American Association of Poison Control Centers and the CDC extended an existing data warehouse via a secure web service, and shared aggregate clinical effects and case counts data by geographic region and time period. To visualize these data, CDC developed a web browser-based interface, Quicksilver, which leveraged the Google Maps API and Flot, a JavaScript plotting library. Two iterations of the NPDS web service were completed in 12 weeks. The visualization client, Quicksilver, was developed in four months. This implementation of web services combined with a visualization client represents incremental positive progress in transitioning national data sources like BioSense and NPDS to a federated data exchange model. Quicksilver effectively demonstrates how the use of secure web services in conjunction with a lightweight, rapidly deployed visualization client can easily integrate isolated data sources for biosurveillance.

  13. The Plate Boundary Observatory: Community Focused Web Services

    NASA Astrophysics Data System (ADS)

    Matykiewicz, J.; Anderson, G.; Lee, E.; Hoyt, B.; Hodgkinson, K.; Persson, E.; Wright, J.; Torrez, D.; Jackson, M.

    2006-12-01

    The Plate Boundary Observatory (PBO), part of the NSF-funded EarthScope project, is designed to study the three-dimensional strain field resulting from deformation across the active boundary zone between the Pacific and North American plates in the western United States. To meet these goals, PBO will install 852 continuous GPS stations, 103 borehole strainmeter stations, 28 tiltmeters, and five laser strainmeters, as well as manage data for 209 previously existing continuous GPS stations. UNAVCO provides access to data products from these stations, as well as general information about the PBO project, via the PBO web site (http://pboweb.unavco.org). GPS and strainmeter data products can be found using a variety of channels, including map searches, text searches, and station specific data retrieval. In addition, the PBO construction status is available via multiple mapping interfaces, including custom web based map widgets and Google Earth. Additional construction details can be accessed from PBO operational pages and station specific home pages. The current state of health for the PBO network is available with the statistical snap-shot, full map interfaces, tabular web based reports, and automatic data mining and alerts. UNAVCO is currently working to enhance the community access to this information by developing a web service framework for the discovery of data products, interfacing with operational engineers, and exposing data services to third party participants. In addition, UNAVCO, through the PBO project, provides advanced data management and monitoring systems for use by the community in operating geodetic networks in the United States and beyond. We will demonstrate these systems during the AGU meeting, and we welcome inquiries from the community at any time.

  14. Validating the usability of an interactive Earth Observation based web service for landslide investigation

    NASA Astrophysics Data System (ADS)

    Albrecht, Florian; Weinke, Elisabeth; Eisank, Clemens; Vecchiotti, Filippo; Hölbling, Daniel; Friedl, Barbara; Kociu, Arben

    2017-04-01

    Regional authorities and infrastructure maintainers in almost all mountainous regions of the Earth need detailed and up-to-date landslide inventories for hazard and risk management. Landslide inventories are usually compiled through ground surveys and manual image interpretation following landslide-triggering events. We developed a web service that uses Earth Observation (EO) data to support the mapping and monitoring tasks involved in improving the collection of landslide information. The planned validation of the EO-based web service covers not only the analysis of the achievable landslide information quality but also the usability and user friendliness of the user interface. The underlying validation criteria are based on the user requirements and on the tasks and aims defined in the work description of the FFG project Land@Slide (EO-based landslide mapping: from methodological developments to automated web-based information delivery). The service will be validated in collaboration with stakeholders, decision makers and experts. Users are requested to test the web service functionality and give feedback via a web-based questionnaire, following the workflow described below. The users operate the web service via the responsive user interface and can extract landslide information from EO data. They compare it to reference data for quality assessment, for monitoring changes and for assessing landslide-affected infrastructure. An overview page lets the user explore a list of example projects with resulting landslide maps and mapping workflow descriptions. The example projects include mapped landslides in several test areas in Austria and Northern Italy. Landslides were extracted from high resolution (HR) and very high resolution (VHR) satellite imagery, such as Landsat, Sentinel-2, SPOT-5, WorldView-2/3 or Pléiades. The user can create his/her own project by selecting available satellite imagery or by uploading new data. Subsequently, a new landslide

  15. Utilization of Services in a Randomized Trial Testing Phone- and Web-Based Interventions for Smoking Cessation

    PubMed Central

    Jack, Lisa M.; McClure, Jennifer B.; Deprey, Mona; Javitz, Harold S.; McAfee, Timothy A.; Catz, Sheryl L.; Richards, Julie; Bush, Terry; Swan, Gary E.

    2011-01-01

    Introduction: Phone counseling has become standard for behavioral smoking cessation treatment. Newer options include Web and integrated phone–Web treatment. No prior research, to our knowledge, has systematically compared the effectiveness of these three treatment modalities in a randomized trial. Understanding how utilization varies by mode, the impact of utilization on outcomes, and predictors of utilization across each mode could lead to improved treatments. Methods: One thousand two hundred and two participants were randomized to phone, Web, or combined phone–Web cessation treatment. Services varied by modality and were tracked using automated systems. All participants received 12 weeks of varenicline, printed guides, an orientation call, and access to a phone supportline. Self-report data were collected at baseline and 6-month follow-up. Results: Overall, participants utilized phone services more often than the Web-based services. Among treatment groups with Web access, a significant proportion logged in only once (37% phone–Web, 41% Web), and those in the phone–Web group logged in less often than those in the Web group (mean = 2.4 vs. 3.7, p = .0001). Use of the phone also was correlated with increased use of the Web. In multivariate analyses, greater use of the phone- or Web-based services was associated with higher cessation rates. Finally, older age and the belief that certain treatments could improve success were consistent predictors of greater utilization across groups. Other predictors varied by treatment group. Conclusions: Opportunities for enhancing treatment utilization exist, particularly for Web-based programs. Increasing utilization more broadly could result in better overall treatment effectiveness for all intervention modalities. PMID:21330267

  16. How the new web generations are changing library and information services.

    PubMed

    Miranda, Giovanna F; Gualtieri, Francesca; Coccia, Paolo

    2010-04-01

    The new Web generations are influencing the minds and changing the habits of software developers and end users. Users, librarians, and information services professionals can interact more efficiently, creating additional information and content and generating knowledge. This new scenario is also changing the behavior of information providers and users in health sciences libraries. This article reviews the new Web environments and tools that give librarians opportunities to tailor their services better, and gives some examples of the advantages and disadvantages for them and their users. Librarians need to adapt to the new mindset of users, linking new technologies, information, and people.

  17. Secure password-based authenticated key exchange for web services

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Liang, Fang; Meder, Samuel; Chevassut, Olivier

    This paper discusses an implementation of an authenticated key-exchange method rendered on message primitives defined in the WS-Trust and WS-SecureConversation specifications. This IEEE-specified cryptographic method (AuthA) is proven-secure for password-based authentication and key exchange, while WS-Trust and WS-SecureConversation are emerging Web Services Security specifications that extend the WS-Security specification. A prototype of the presented protocol is integrated in the WSRF-compliant Globus Toolkit V4. Further hardening of the implementation is expected to result in a version that will be shipped with future Globus Toolkit releases. This could help to address the current unavailability of decent shared-secret-based authentication options in the Web Services and Grid world. Future work will be to integrate One-Time-Password (OTP) features in the authentication protocol.

  18. EnviroAtlas - Ecosystem Services Market-Based Programs Web Service, U.S., 2016, Forest Trends' Ecosystem Marketplace

    EPA Pesticide Factsheets

    This EnviroAtlas web service contains layers depicting market-based programs and projects addressing ecosystem services protection in the United States. Layers include data collected via surveys and desk research conducted by Forest Trends' Ecosystem Marketplace from 2008 to 2016 on biodiversity (i.e., imperiled species/habitats; wetlands and streams), carbon, and water markets and enabling conditions that facilitate, directly or indirectly, market-based approaches to protecting and investing in those ecosystem services. This dataset was produced by Forest Trends' Ecosystem Marketplace for EnviroAtlas in order to support public access to and use of information related to environmental markets. EnviroAtlas (https://www.epa.gov/enviroatlas) allows the user to interact with a web-based, easy-to-use, mapping application to view and analyze multiple ecosystem services for the contiguous United States. The dataset is available as downloadable data (https://edg.epa.gov/data/Public/ORD/EnviroAtlas) or as an EnviroAtlas map service. Additional descriptive information about this dataset can be found in its associated EnviroAtlas Fact Sheet (https://www.epa.gov/enviroatlas/enviroatlas-fact-sheets).

  19. Services for Graduate Students: A Review of Academic Library Web Sites

    ERIC Educational Resources Information Center

    Rempel, Hannah Gascho

    2010-01-01

    A library's Web site is well recognized as the gateway to the library for the vast majority of users. Choosing the most user-friendly Web architecture to reflect the many services libraries offer is a complex process, and librarians are still experimenting to find what works best for their users. As part of a redesign of the Oregon State…

  20. Geospatial Web Services in Real Estate Information System

    NASA Astrophysics Data System (ADS)

    Radulovic, Aleksandra; Sladic, Dubravka; Govedarica, Miro; Popovic, Dragana; Radovic, Jovana

    2017-12-01

    Since the data of cadastral records are of great importance for the economic development of the country, they must be well structured and organized. Real estate records on the territory of Serbia faced many problems in previous years. To prevent such problems and to achieve efficient access, sharing and exchange of cadastral data on the principles of interoperability, a domain model for real estate was created according to current standards in the field of spatial data. The resulting profile of the domain model for the Serbian real estate cadastre is based on the current legislation and on the Land Administration Domain Model (LADM) specified in the ISO 19152 standard. On top of such organized data, and for their effective exchange, it is necessary to develop a model of the services to be provided by the institutions interested in the exchange of cadastral data. This is achieved by introducing a service-oriented architecture into the information system of the real estate cadastre, which ensures the efficiency of the system. It is necessary to develop user services for downloading, reviewing and using real estate data through the web. These services should be provided to all users who need access to cadastral data (natural and legal persons as well as state institutions) through e-government. It is also necessary to provide search, viewing and download of cadastral spatial data by specifying geospatial services. Considering that real estate records contain geometric data for parcels and buildings, it is necessary to establish a set of geospatial services that provide information and maps for the analysis of spatial data and for producing raster data. Besides the theme Cadastral parcels, the INSPIRE directive specifies several themes that involve data on buildings and land use, for which data can be provided from the real estate cadastre. In this paper, a model of geospatial services in Serbia is defined. A case study of using these services to estimate which household is at risk of

  1. WebDISCO: a web service for distributed cox model learning without patient-level data sharing.

    PubMed

    Lu, Chia-Lun; Wang, Shuang; Ji, Zhanglong; Wu, Yuan; Xiong, Li; Jiang, Xiaoqian; Ohno-Machado, Lucila

    2015-11-01

    The Cox proportional hazards model is a widely used method for analyzing survival data. To achieve sufficient statistical power in a survival analysis, a large amount of data is usually required. Data sharing across institutions could be a potential workaround for providing this added power. The authors develop a web service for distributed Cox model learning (WebDISCO), which focuses on the proof of concept and algorithm development for federated survival analysis. The sensitive patient-level data can be processed locally and only the less-sensitive intermediate statistics are exchanged to build a global Cox model. Mathematical derivation shows that the proposed distributed algorithm is identical to the centralized Cox model. The authors evaluated the proposed framework at the University of California, San Diego (UCSD), Emory, and Duke. The experimental results show that both distributed and centralized models result in near-identical model coefficients, with differences in the range [Formula: see text] to [Formula: see text]. The results confirm the mathematical derivation and show that the implementation of the distributed model can achieve the same results as the centralized implementation. The proposed method serves as a proof of concept, in which a publicly available dataset was used to evaluate the performance. The authors do not intend to suggest that this method can resolve policy and engineering issues related to the federated use of institutional data, but it should serve as evidence of the technical feasibility of the proposed approach. Conclusions: WebDISCO (Web-based Distributed Cox Regression Model; https://webdisco.ucsd-dbmi.org:8443/cox/) provides a proof-of-concept web service that implements a distributed algorithm to conduct distributed survival analysis without sharing patient-level data. © The Author 2015. Published by Oxford University Press on behalf of the American Medical Informatics Association. All rights reserved. For Permissions
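
    The communication pattern behind such distributed model fitting can be illustrated with a deliberately simplified stand-in: Newton iterations for logistic regression rather than the Cox partial likelihood, whose decomposition requires the more careful derivation given in the paper. The point of the sketch is that each site ships only its local gradient and Hessian, never patient-level rows; all data here are synthetic.

        import numpy as np

        rng = np.random.default_rng(0)

        def local_statistics(X, y, beta):
            # Each site computes the gradient and Hessian of its own log-likelihood.
            p = 1.0 / (1.0 + np.exp(-X @ beta))
            grad = X.T @ (y - p)
            hess = -(X.T * (p * (1.0 - p))) @ X
            return grad, hess

        # Three "sites" holding private data; only summaries leave each site.
        sites = [(rng.normal(size=(50, 3)), rng.integers(0, 2, size=50))
                 for _ in range(3)]

        beta = np.zeros(3)
        for _ in range(25):                     # coordinator's Newton iterations
            stats = [local_statistics(X, y, beta) for X, y in sites]
            grad = sum(g for g, _ in stats)     # aggregate intermediate statistics
            hess = sum(h for _, h in stats)
            beta = beta - np.linalg.solve(hess, grad)

        print("pooled-equivalent coefficients:", np.round(beta, 4))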

  2. Semantic Web Service Delivery in Healthcare Based on Functional and Non-Functional Properties.

    PubMed

    Schweitzer, Marco; Gorfer, Thilo; Hörbst, Alexander

    2017-01-01

    In the past decades, much effort has been devoted to the trans-institutional exchange of healthcare data through electronic health records (EHRs) in order to obtain a lifelong, shared, accessible health record of a patient. Beyond basic information exchange, there is a growing need for Information and Communication Technology (ICT) to support the use of the collected health data in an individual, case-specific, workflow-based manner. This paper presents results on how workflows can be used to process data from electronic health records, following a semantic web service approach that enables automatic discovery, composition and invocation of suitable web services. Based on this solution, the user (physician) can define his or her needs from a domain-specific perspective, while the ICT system fulfills those needs with modular web services. By also involving non-functional properties in the service selection, this approach is even more suitable for the dynamic medical domain.

  3. Design, Implementation and Applications of 3d Web-Services in DB4GEO

    NASA Astrophysics Data System (ADS)

    Breunig, M.; Kuper, P. V.; Dittrich, A.; Wild, P.; Butwilowski, E.; Al-Doori, M.

    2013-09-01

    The object-oriented database architecture DB4GeO was originally designed to support sub-surface applications in the geo-sciences. This is reflected in DB4GeO's geometric data model as well as in its import and export functions. Initially, these functions were designed for communication with 3D geological modeling and visualization tools such as GOCAD or MeshLab. However, it soon became clear that DB4GeO was suitable for a much wider range of applications. It is therefore natural to move away from a standalone solution and to open access to DB4GeO data via standardized OGC web services. Though REST and OGC services seem incompatible at first sight, the implementation in DB4GeO shows that an OGC-based implementation of web services may reuse parts of the DB4GeO-REST implementation. Starting with initial solutions in the history of DB4GeO, this paper introduces the design, adaptation (i.e. model transformation), and first steps in the implementation of OGC Web Feature Service (WFS) and Web Processing Service (WPS) interfaces, as new interfaces to DB4GeO data and operations. Among its capabilities, DB4GeO can provide data in different formats such as GML, GOCAD, or DB3D XML through a WFS, and can run operations such as a 3D-to-2D service or mesh simplification (Progressive Meshes) through a WPS. We then demonstrate an Android-based mobile 3D augmented reality viewer for DB4GeO that uses the Web Feature Service to visualize 3D geo-database query results. Finally, we explore future research work considering DB4GeO in the framework of the research group "Computer-Aided Collaborative Subway Track Planning in Multi-Scale 3D City and Building Models".
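
    A WFS of the kind described is queried with a standard GetFeature request; the sketch below shows the canonical WFS 2.0 key-value encoding in Python. The endpoint and feature type name are hypothetical, not DB4GeO's actual deployment.

        from urllib.parse import urlencode

        params = urlencode({
            "service": "WFS", "version": "2.0.0", "request": "GetFeature",
            "typeNames": "db4geo:boreholes",   # hypothetical feature type
            "count": 10,                       # cap the number of features returned
            "outputFormat": "application/gml+xml; version=3.2",
        })
        # The response is a GML feature collection.
        print("https://example.org/wfs?" + params)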

  4. Cross-Dataset Analysis and Visualization Driven by Expressive Web Services

    NASA Astrophysics Data System (ADS)

    Alexandru Dumitru, Mircea; Catalin Merticariu, Vlad

    2015-04-01

    The deluge of data arriving every day from satellite and airborne sensors is changing the workflow of environmental data analysts and modelers. Web geo-services now play a fundamental role: the data no longer need to be downloaded and stored in advance, but are instead accessed in real time by GIS applications. Due to the very large amount of data that is curated and made available by web services, it is crucial to deploy smart solutions for optimizing network bandwidth, reducing duplication of data and moving the processing closer to the data. In this context we have created a visualization application for analysis and cross-comparison of aerosol optical thickness datasets. The application aims to help researchers identify and visualize discrepancies between datasets coming from various sources with different spatial and time resolutions. It also acts as a proof of concept for the integration of OGC Web Services under a user-friendly interface that provides beautiful visualizations of the explored data. The tool was built on top of the World Wind engine, a Java-based virtual globe built by NASA and the open source community. For data retrieval and processing we exploited the potential of the OGC Web Coverage Service, the most exciting aspect being its processing extension, the OGC Web Coverage Processing Service (WCPS) standard. A WCPS-compliant service allows a client to execute a processing query on any coverage offered by the server. By exploiting a full grammar, several different kinds of information can be retrieved from one or more datasets together: scalar condensers, cross-sectional profiles, comparison maps and plots, etc. This combination of technologies made the application versatile and portable. As the processing is done on the server side, we ensured that a minimal amount of data is transferred and that the processing is done on a fully capable server, leaving the client hardware resources to be used for rendering the visualization
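
    A WCPS request sends a small query expression to the server, which evaluates it next to the data so that only the result crosses the wire. The sketch below posts one with Python; the endpoint URL and coverage name are hypothetical, the query grammar follows the OGC WCPS standard, and the exact way the query parameter is transported can vary between server implementations.

        import requests

        # Extract a one-year time series from a hypothetical aerosol-optical-
        # thickness coverage at a single point, encoded as CSV on the server.
        query = """
        for $c in (AOT_DAILY)
        return encode($c[Lat(45.0), Long(12.0),
                         ansi("2014-01-01":"2014-12-31")], "csv")
        """

        response = requests.post("https://example.org/rasdaman/ows",
                                 data={"query": query}, timeout=60)
        print(response.text[:200])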

  5. UkrVO astronomical WEB services

    NASA Astrophysics Data System (ADS)

    Mazhaev, A.

    2017-02-01

    The Ukraine Virtual Observatory (UkrVO) has been a member of the International Virtual Observatory Alliance (IVOA) since 2011. The virtual observatory (VO) is not a magic solution to all problems of data storage and processing, but it provides certain standards for building the infrastructure of an astronomical data center. The astronomical databases facilitate data mining and offer users easy access to observation metadata, images within the celestial sphere and results of image processing. The astronomical web services (AWS) of UkrVO give users handy tools for selecting data from large astronomical catalogues for a relatively small region of interest in the sky. Examples of AWS usage are shown.

  6. Consistent data recording across a health system and web-enablement allow service quality comparisons: online data for commissioning dermatology services.

    PubMed

    Dmitrieva, Olga; Michalakidis, Georgios; Mason, Aaron; Jones, Simon; Chan, Tom; de Lusignan, Simon

    2012-01-01

    A new distributed model of health care management is being introduced in England. Family practitioners have new responsibilities for the management of health care budgets and the commissioning of services. National datasets are available about health care providers and the geographical areas they serve, and these data could be better used to assist family practitioners turned health service commissioners. Unfortunately, these data are not in a form that is readily usable by the fledgling family commissioning groups. We therefore web-enabled all the national hospital dermatology treatment data in England, combining it with locality data to provide a smart commissioning tool for local communities. We used open-source software including the Ruby on Rails web framework and MySQL. The system has a web front-end, which uses hypertext markup language, cascading style sheets (HTML/CSS) and JavaScript to deliver and present data provided by the database. A combination of advanced caching and schema structures allows for faster data retrieval on every execution. The system provides an intuitive environment for data analysis and processing across a large health system dataset. Web-enablement has allowed data about inpatients, day cases and outpatients to be readily grouped, viewed, and linked to other data. The combination of web-enablement, consistent data collection from all providers, readily available locality data, and a registration-based primary care system enables the creation of data which can be used to commission dermatology services in small areas. Standardized datasets collected across large health enterprises, when web-enabled, can readily benchmark local services and inform commissioning decisions.

  7. Technical note: Harmonizing met-ocean model data via standard web services within small research groups

    NASA Astrophysics Data System (ADS)

    Signell, R. P.; Camossi, E.

    2015-11-01

    Work over the last decade has resulted in standardized web services and tools that can significantly improve the efficiency and effectiveness of working with meteorological and ocean model data. While many operational modelling centres have enabled query and access to data via common web services, most small research groups have not. The penetration of this approach into the research community, where IT resources are limited, can be dramatically improved by: (1) making it simple for providers to enable web service access to existing output files; (2) using technology that is free, and that is easy to deploy and configure; and (3) providing tools to communicate with web services that work in existing research environments. We present a simple, local brokering approach that lets modelers continue producing custom data, but virtually aggregates and standardizes the data using the NetCDF Markup Language (NcML). The THREDDS Data Server is used for data delivery, pycsw for data search, NCTOOLBOX (Matlab®[1]) and Iris (Python) for data access, and the Open Geospatial Consortium Web Map Service for data preview. We illustrate the effectiveness of this approach with two use cases involving small research modelling groups at NATO and USGS. [1] Mention of trade names or commercial products does not constitute endorsement or recommendation for use by the US Government.
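
    On the client side, a virtually aggregated dataset exposed by a THREDDS Data Server can be opened directly over OPeNDAP. A minimal Iris sketch follows; the server URL, aggregation path and variable name are hypothetical.

        import iris

        # OPeNDAP endpoint of an NcML-aggregated dataset on a THREDDS Data Server.
        url = "https://example.org/thredds/dodsC/models/adriatic_agg.ncml"

        cube = iris.load_cube(url, "sea_water_temperature")  # CF standard name
        print(cube.summary(shorten=True))                    # dims, coords, units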

  8. Virtualization of open-source secure web services to support data exchange in a pediatric critical care research network

    PubMed Central

    Sward, Katherine A; Newth, Christopher JL; Khemani, Robinder G; Cryer, Martin E; Thelen, Julie L; Enriquez, Rene; Shaoyu, Su; Pollack, Murray M; Harrison, Rick E; Meert, Kathleen L; Berg, Robert A; Wessel, David L; Shanley, Thomas P; Dalton, Heidi; Carcillo, Joseph; Jenkins, Tammara L; Dean, J Michael

    2015-01-01

    Objectives To examine the feasibility of deploying a virtual web service for sharing data within a research network, and to evaluate the impact on data consistency and quality. Material and Methods Virtual machines (VMs) encapsulated an open-source, semantically and syntactically interoperable secure web service infrastructure along with a shadow database. The VMs were deployed to 8 Collaborative Pediatric Critical Care Research Network Clinical Centers. Results Virtual web services could be deployed in hours. The interoperability of the web services reduced format misalignment from 56% to 1% and demonstrated that 99% of the data consistently transferred using the data dictionary and 1% needed human curation. Conclusions Use of virtualized open-source secure web service technology could enable direct electronic abstraction of data from hospital databases for research purposes. PMID:25796596

  9. Description and testing of the Geo Data Portal: Data integration framework and Web processing services for environmental science collaboration

    USGS Publications Warehouse

    Blodgett, David L.; Booth, Nathaniel L.; Kunicki, Thomas C.; Walker, Jordan I.; Viger, Roland J.

    2011-01-01

    Interest in sharing interdisciplinary environmental modeling results and related data is increasing among scientists. The U.S. Geological Survey Geo Data Portal project enables data sharing by assembling open-standard Web services into an integrated data retrieval and analysis Web application design methodology that streamlines time-consuming and resource-intensive data management tasks. Data-serving Web services allow Web-based processing services to access Internet-available data sources. The Web processing services developed for the project create commonly needed derivatives of data in numerous formats. Coordinate reference system manipulation and spatial statistics calculation components implemented for the Web processing services were confirmed using ArcGIS 9.3.1, a geographic information science software package. Outcomes of the Geo Data Portal project support the rapid development of user interfaces for accessing and manipulating environmental data.

  10. SCALEUS: Semantic Web Services Integration for Biomedical Applications.

    PubMed

    Sernadela, Pedro; González-Castro, Lorena; Oliveira, José Luís

    2017-04-01

    In recent years, we have witnessed an explosion of biological data resulting largely from the demands of life science research. The vast majority of these data are freely available via diverse bioinformatics platforms, including relational databases and conventional keyword search applications. This type of approach has achieved great results in the last few years, but proved to be unfeasible when information needs to be combined or shared among different and scattered sources. Many of these data distribution challenges have since been solved with the adoption of the semantic web. Despite the evident benefits of this technology, its adoption introduced new challenges related to the migration process from existing systems to the semantic level. To facilitate this transition, we have developed Scaleus, a semantic web migration tool that can be deployed on top of traditional systems in order to bring knowledge, inference rules, and query federation to the existing data. Targeted at the biomedical domain, this web-based platform offers, in a single package, straightforward data integration and semantic web services that help developers and researchers in the creation of new semantically enhanced information systems. SCALEUS is available as open source at http://bioinformatics-ua.github.io/scaleus/ .
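
    Once data sit behind such a semantic layer, clients typically query them through the standard SPARQL protocol. The generic Python sketch below is not SCALEUS's actual API: the endpoint URL and the triples assumed to be in the store are hypothetical, while the request and response formats follow the SPARQL 1.1 protocol.

        import requests

        endpoint = "https://example.org/scaleus/sparql"   # hypothetical endpoint
        query = """
        PREFIX rdfs: <http://www.w3.org/2000/01/rdf-schema#>
        SELECT ?gene ?label WHERE { ?gene rdfs:label ?label . } LIMIT 5
        """

        response = requests.get(endpoint,
                                params={"query": query},
                                headers={"Accept": "application/sparql-results+json"},
                                timeout=30)
        for row in response.json()["results"]["bindings"]:
            print(row["gene"]["value"], "->", row["label"]["value"])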

  11. Using Secure Web Services to Visualize Poison Center Data for Nationwide Biosurveillance: A Case Study

    PubMed Central

    Savel, Thomas G; Bronstein, Alvin; Duck, William; Rhodes, M. Barry; Lee, Brian; Stinn, John; Worthen, Katherine

    2010-01-01

    Objectives Real-time surveillance systems are valuable for timely response to public health emergencies. It has been challenging to leverage existing surveillance systems in state and local communities, and, using a centralized architecture, add new data sources and analytical capacity. Because this centralized model has proven to be difficult to maintain and enhance, the US Centers for Disease Control and Prevention (CDC) has been examining the ability to use a federated model based on secure web services architecture, with data stewardship remaining with the data provider. Methods As a case study for this approach, the American Association of Poison Control Centers and the CDC extended an existing data warehouse via a secure web service, and shared aggregate clinical effects and case counts data by geographic region and time period. To visualize these data, CDC developed a web browser-based interface, Quicksilver, which leveraged the Google Maps API and Flot, a JavaScript plotting library. Results Two iterations of the NPDS web service were completed in 12 weeks. The visualization client, Quicksilver, was developed in four months. Discussion This implementation of web services combined with a visualization client represents incremental positive progress in transitioning national data sources like BioSense and NPDS to a federated data exchange model. Conclusion Quicksilver effectively demonstrates how the use of secure web services in conjunction with a lightweight, rapidly deployed visualization client can easily integrate isolated data sources for biosurveillance. PMID:23569581

  12. AWSCS-A System to Evaluate Different Approaches for the Automatic Composition and Execution of Web Services Flows

    PubMed Central

    Tardiole Kuehne, Bruno; Estrella, Julio Cezar; Nunes, Luiz Henrique; Martins de Oliveira, Edvard; Hideo Nakamura, Luis; Gomes Ferreira, Carlos Henrique; Carlucci Santana, Regina Helena; Reiff-Marganiec, Stephan; Santana, Marcos José

    2015-01-01

    This paper proposes a system named AWSCS (Automatic Web Service Composition System) to evaluate different approaches for the automatic composition of Web services, based on QoS parameters that are measured at execution time. AWSCS is a system for implementing different approaches to the automatic composition of Web services and also for executing the resulting flows from these approaches. To demonstrate the results of this paper, a scenario was developed in which empirical flows were built to show the operation of AWSCS, since algorithms for automatic composition are not readily available to test. The results allow us to study the behaviour of running composite Web services when flows with the same functionality but different problem-solving strategies are compared. Furthermore, we observed that both the load applied to the running system and the type of load submitted to the system are important factors in determining which approach to Web service composition can achieve the best performance in production. PMID:26068216

  13. AWSCS-A System to Evaluate Different Approaches for the Automatic Composition and Execution of Web Services Flows.

    PubMed

    Tardiole Kuehne, Bruno; Estrella, Julio Cezar; Nunes, Luiz Henrique; Martins de Oliveira, Edvard; Hideo Nakamura, Luis; Gomes Ferreira, Carlos Henrique; Carlucci Santana, Regina Helena; Reiff-Marganiec, Stephan; Santana, Marcos José

    2015-01-01

    This paper proposes a system named AWSCS (Automatic Web Service Composition System) to evaluate different approaches for the automatic composition of Web services, based on QoS parameters that are measured at execution time. AWSCS is a system for implementing different approaches to the automatic composition of Web services and also for executing the resulting flows from these approaches. To demonstrate the results of this paper, a scenario was developed in which empirical flows were built to show the operation of AWSCS, since algorithms for automatic composition are not readily available to test. The results allow us to study the behaviour of running composite Web services when flows with the same functionality but different problem-solving strategies are compared. Furthermore, we observed that both the load applied to the running system and the type of load submitted to the system are important factors in determining which approach to Web service composition can achieve the best performance in production.

  14. Clinical Predictive Modeling Development and Deployment through FHIR Web Services.

    PubMed

    Khalilia, Mohammed; Choi, Myung; Henderson, Amelia; Iyengar, Sneha; Braunstein, Mark; Sun, Jimeng

    2015-01-01

    Clinical predictive modeling involves two challenging tasks: model development and model deployment. In this paper we demonstrate a software architecture for developing and deploying clinical predictive models using web services via the Health Level 7 (HL7) Fast Healthcare Interoperability Resources (FHIR) standard. The services enable model development using electronic health records (EHRs) stored in OMOP CDM databases and model deployment for scoring individual patients through FHIR resources. The MIMIC2 ICU dataset and a synthetic outpatient dataset were transformed into OMOP CDM databases for predictive model development. The resulting predictive models are deployed as FHIR resources, which receive requests of patient information, perform prediction against the deployed predictive model and respond with prediction scores. To assess the practicality of this approach we evaluated the response and prediction time of the FHIR modeling web services. We found the system to be reasonably fast with one second total response time per patient prediction.
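
    Retrieving the patient data that feeds such a scoring service follows the standard FHIR REST conventions. In the minimal Python sketch below, the server base URL and patient id are placeholders; the resource path and JSON media type come from the FHIR specification.

        import requests

        base = "https://example.org/fhir"        # placeholder FHIR server base URL
        patient_id = "123"                       # placeholder logical id

        response = requests.get(f"{base}/Patient/{patient_id}",
                                headers={"Accept": "application/fhir+json"},
                                timeout=30)
        patient = response.json()
        print(patient["resourceType"],           # -> "Patient"
              patient.get("birthDate"),
              [name.get("family") for name in patient.get("name", [])])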

  15. Clinical Predictive Modeling Development and Deployment through FHIR Web Services

    PubMed Central

    Khalilia, Mohammed; Choi, Myung; Henderson, Amelia; Iyengar, Sneha; Braunstein, Mark; Sun, Jimeng

    2015-01-01

    Clinical predictive modeling involves two challenging tasks: model development and model deployment. In this paper we demonstrate a software architecture for developing and deploying clinical predictive models using web services via the Health Level 7 (HL7) Fast Healthcare Interoperability Resources (FHIR) standard. The services enable model development using electronic health records (EHRs) stored in OMOP CDM databases and model deployment for scoring individual patients through FHIR resources. The MIMIC2 ICU dataset and a synthetic outpatient dataset were transformed into OMOP CDM databases for predictive model development. The resulting predictive models are deployed as FHIR resources, which receive requests of patient information, perform prediction against the deployed predictive model and respond with prediction scores. To assess the practicality of this approach we evaluated the response and prediction time of the FHIR modeling web services. We found the system to be reasonably fast with one second total response time per patient prediction. PMID:26958207

  16. Implementation of Web 2.0 services in academic, medical and research libraries: a scoping review.

    PubMed

    Gardois, Paolo; Colombi, Nicoletta; Grillo, Gaetano; Villanacci, Maria C

    2012-06-01

    Academic, medical and research libraries frequently implement Web 2.0 services for users. Several reports notwithstanding, the characteristics and effectiveness of these services are unclear. The aims were to find out: which Web 2.0 services are implemented by medical, academic and research libraries; the study designs, measures and types of data used in the included articles to evaluate effectiveness; and whether the identified body of literature is amenable to a systematic review of results. A scoping review mapping the literature on the topic was conducted, with searches performed in 19 databases. Included were research articles in English, Italian, German, French and Spanish (publication date ≥ 2006) about Web 2.0 services for final users implemented by academic, medical and research libraries. Reviewers' agreement was measured by Cohen's kappa. From a data set of 6461 articles, 255 (4%) were coded and analysed. Conferencing/chat/instant messaging, blogging, podcasts, social networking, wikis and aggregators were frequently examined. Services were mainly targeted at general academic users of English-speaking countries. The data prohibit a reliable estimate of the relative frequency of implemented Web 2.0 services. Case studies were the prevalent design. Most articles evaluated different outcomes using diverse assessment methodologies. A systematic review is recommended to assess the effectiveness of such services. © 2012 The authors. Health Information and Libraries Journal © 2012 Health Libraries Group.

  17. BioXSD: the common data-exchange format for everyday bioinformatics web services.

    PubMed

    Kalas, Matús; Puntervoll, Pål; Joseph, Alexandre; Bartaseviciūte, Edita; Töpfer, Armin; Venkataraman, Prabakar; Pettifer, Steve; Bryne, Jan Christian; Ison, Jon; Blanchet, Christophe; Rapacki, Kristoffer; Jonassen, Inge

    2010-09-15

    The world-wide community of life scientists has access to a large number of public bioinformatics databases and tools, which are developed and deployed using diverse technologies and designs. More and more of these resources offer a programmatic web-service interface. However, efficient use of the resources is hampered by the lack of widely used, standard data-exchange formats for the basic, everyday bioinformatics data types. BioXSD has been developed as a candidate for a standard, canonical exchange format for basic bioinformatics data. BioXSD is represented by a dedicated XML Schema and defines the syntax for biological sequences, sequence annotations, alignments and references to resources. We have adapted a set of web services to use BioXSD as the input and output format, and implemented a test-case workflow. This demonstrates that the approach is feasible and provides smooth interoperability. The semantics of BioXSD is provided by annotation with the EDAM ontology. We discuss in a separate section how BioXSD relates to other initiatives and approaches, including existing standards and the Semantic Web. The BioXSD 1.0 XML Schema is freely available at http://www.bioxsd.org/BioXSD-1.0.xsd under the Creative Commons BY-ND 3.0 license. The http://bioxsd.org web page offers documentation, examples of data in BioXSD format, example workflows with source code in common programming languages, an updated list of compatible web services and tools, and a repository of feature requests from the community.
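
    A minimal sketch of how a client might check that its data conform to the format, by validating a document against the published schema with the lxml library; the schema URL is taken from the abstract, while sequences.xml is a placeholder for a local file containing data in BioXSD format.

        # Sketch: validating a document against the BioXSD 1.0 XML Schema.
        # "sequences.xml" is a placeholder for a local BioXSD file.
        from urllib.request import urlopen

        from lxml import etree

        SCHEMA_URL = "http://www.bioxsd.org/BioXSD-1.0.xsd"

        with urlopen(SCHEMA_URL) as fh:
            schema = etree.XMLSchema(etree.parse(fh))

        doc = etree.parse("sequences.xml")
        if schema.validate(doc):
            print("document is valid BioXSD")
        else:
            print(schema.error_log)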

  18. API REST Web service and backend system of the Lecturer’s Assessment Information System at Politeknik Negeri Bali

    NASA Astrophysics Data System (ADS)

    Manuaba, I. B. P.; Rudiastini, E.

    2018-01-01

    Assessment of lecturers is a tool used to measure lecturer performance. Lecturer performance can be measured from three aspects: teaching activities, research, and community service. The broad scope of measuring lecturer performance requires a dedicated framework, so that the system can be developed in a sustainable manner. The aim of this research is to create an API web service data layer, so that the lecturer assessment system can be developed in various frameworks. The system was developed as a web service in the PHP programming language with JSON output. The conclusion of this research is that the API web service enables client applications to be developed on several platforms, such as web and mobile applications.
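
    A minimal sketch of a client consuming such a JSON-emitting REST web service; the base URL, path and field names are hypothetical and only illustrate the pattern.

        # Sketch: consuming the JSON output of a REST web service like the one
        # described above. Endpoint path and field names are hypothetical.
        import requests

        BASE = "https://api.example.ac.id/lecturer-assessment"  # hypothetical

        resp = requests.get(f"{BASE}/lecturers/42/score", timeout=10)
        resp.raise_for_status()
        data = resp.json()

        # The three assessment aspects named in the abstract.
        for aspect in ("teaching", "research", "community_service"):
            print(aspect, data.get(aspect))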

  19. How the OCLC CORC Service Is Helping Weave Libraries into the Web.

    ERIC Educational Resources Information Center

    Covert, Kay

    2001-01-01

    Describes OCLC's CORC (Cooperative Online Resource Catalog) service. As a state-of-the-art Web-based metadata creation system, CORC is optimized for creating bibliographic records and pathfinders for electronic resources. Discusses how libraries are using CORC in technical services, public services, and collection development and explains the…

  20. Development of spatial density maps based on geoprocessing web services: application to tuberculosis incidence in Barcelona, Spain.

    PubMed

    Dominkovics, Pau; Granell, Carlos; Pérez-Navarro, Antoni; Casals, Martí; Orcau, Angels; Caylà, Joan A

    2011-11-29

    Health professionals and authorities strive to cope with heterogeneous data, services, and statistical models to support decision making on public health. Sophisticated analysis and distributed processing capabilities over geocoded epidemiological data are seen as driving factors to speed up control and decision making in these health risk situations. In this context, recent Web technologies and standards-based web services deployed on geospatial information infrastructures have rapidly become an efficient way to access, share, process, and visualize geocoded health-related information. The data used in this study are based on tuberculosis (TB) cases registered in Barcelona city during 2009. Residential addresses are geocoded and loaded into a spatial database that acts as a backend database. The web-based application architecture and geoprocessing web services are designed according to the Representational State Transfer (REST) principles. These web processing services produce spatial density maps against the backend database. The results are focused on the use of the proposed web-based application for the analysis of TB cases in Barcelona. The application produces spatial density maps to ease the monitoring and decision-making process by health professionals. We also include a discussion of how spatial density maps may be useful for health practitioners in such contexts. In this paper, we developed a web-based client application and a set of geoprocessing web services to support specific health-spatial requirements. Spatial density maps of TB incidence were generated to help health professionals in analysis and decision-making tasks. The combined use of geographic information tools, map viewers, and geoprocessing services leads to interesting possibilities in handling health data in a spatial manner. In particular, the use of spatial density maps has been effective in identifying the most affected areas and their spatial impact. This study is an attempt to demonstrate how web

  1. Development of spatial density maps based on geoprocessing web services: application to tuberculosis incidence in Barcelona, Spain

    PubMed Central

    2011-01-01

    Background Health professionals and authorities strive to cope with heterogeneous data, services, and statistical models to support decision making on public health. Sophisticated analysis and distributed processing capabilities over geocoded epidemiological data are seen as driving factors to speed up control and decision making in these health risk situations. In this context, recent Web technologies and standards-based web services deployed on geospatial information infrastructures have rapidly become an efficient way to access, share, process, and visualize geocoded health-related information. Methods The data used in this study are based on tuberculosis (TB) cases registered in Barcelona city during 2009. Residential addresses are geocoded and loaded into a spatial database that acts as a backend database. The web-based application architecture and geoprocessing web services are designed according to the Representational State Transfer (REST) principles. These web processing services produce spatial density maps against the backend database. Results The results are focused on the use of the proposed web-based application for the analysis of TB cases in Barcelona. The application produces spatial density maps to ease the monitoring and decision-making process by health professionals. We also include a discussion of how spatial density maps may be useful for health practitioners in such contexts. Conclusions In this paper, we developed a web-based client application and a set of geoprocessing web services to support specific health-spatial requirements. Spatial density maps of TB incidence were generated to help health professionals in analysis and decision-making tasks. The combined use of geographic information tools, map viewers, and geoprocessing services leads to interesting possibilities in handling health data in a spatial manner. In particular, the use of spatial density maps has been effective in identifying the most affected areas and their spatial impact. This
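
    The two abstracts above describe geoprocessing services that derive density surfaces from geocoded cases. The following sketch shows one common way such a surface can be computed, using a Gaussian kernel density estimate over projected point coordinates; the data are invented, and this is not the authors' implementation.

        # Sketch: computing a spatial density surface from geocoded case points,
        # the kind of product a geoprocessing web service could return as a map.
        import numpy as np
        from scipy.stats import gaussian_kde

        # Geocoded cases as (x, y) pairs, e.g. projected residential addresses.
        cases = np.array([[431.2, 4581.9], [430.8, 4582.4], [431.5, 4581.6],
                          [432.0, 4582.1], [430.9, 4581.8]]).T

        kde = gaussian_kde(cases)

        # Evaluate the density on a regular grid covering the study area.
        xs, ys = np.mgrid[430:433:100j, 4580:4584:100j]
        density = kde(np.vstack([xs.ravel(), ys.ravel()])).reshape(xs.shape)
        print(density.max())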

  2. ProBiS-2012: web server and web services for detection of structurally similar binding sites in proteins.

    PubMed

    Konc, Janez; Janezic, Dusanka

    2012-07-01

    ProBiS is a web server for the detection of structurally similar binding sites in the PDB and for local pairwise alignment of protein structures. In this article, we present a new version of the ProBiS web server that is 10 times faster than earlier versions, due to the efficient parallelization of the ProBiS algorithm, which now allows significantly faster comparison of a protein query against the PDB and reduces the calculation time for scanning the entire PDB from hours to minutes. It also features new web services and an improved user interface. In addition, the new web server is united with the ProBiS-Database and thus provides instant access to pre-calculated protein similarity profiles for over 29 000 non-redundant protein structures. The ProBiS web server is particularly adept at detection of secondary binding sites in proteins. It is freely available at http://probis.cmm.ki.si/old-version, and the new ProBiS web server is at http://probis.cmm.ki.si.

  3. ProBiS-2012: web server and web services for detection of structurally similar binding sites in proteins

    PubMed Central

    Konc, Janez; Janežič, Dušanka

    2012-01-01

    ProBiS is a web server for the detection of structurally similar binding sites in the PDB and for local pairwise alignment of protein structures. In this article, we present a new version of the ProBiS web server that is 10 times faster than earlier versions, due to the efficient parallelization of the ProBiS algorithm, which now allows significantly faster comparison of a protein query against the PDB and reduces the calculation time for scanning the entire PDB from hours to minutes. It also features new web services and an improved user interface. In addition, the new web server is united with the ProBiS-Database and thus provides instant access to pre-calculated protein similarity profiles for over 29 000 non-redundant protein structures. The ProBiS web server is particularly adept at detection of secondary binding sites in proteins. It is freely available at http://probis.cmm.ki.si/old-version, and the new ProBiS web server is at http://probis.cmm.ki.si. PMID:22600737

  4. Increasing the availability and usability of terrestrial ecology data through geospatial Web services and visualization tools (Invited)

    NASA Astrophysics Data System (ADS)

    Santhana Vannan, S.; Cook, R. B.; Wilson, B. E.; Wei, Y.

    2010-12-01

    Terrestrial ecology data sets are produced from diverse data sources such as model output, field data collection, laboratory analysis and remote sensing observation. These data sets can be created, distributed, and consumed in diverse ways as well. However, this diversity can hinder the usability of the data, and limit data users’ abilities to validate and reuse data for science and application purposes. Geospatial web services, such as those described in this paper, are an important means of reducing this burden. Terrestrial ecology researchers generally create the data sets in diverse file formats, with file and data structures tailored to the specific needs of their project, possibly as tabular data, geospatial images, or documentation in a report. Data centers may reformat the data to an archive-stable format and distribute the data sets through one or more protocols, such as FTP, email, and WWW. Because of the diverse data preparation, delivery, and usage patterns, users have to invest time and resources to bring the data into the format and structure most useful for their analysis. This time-consuming data preparation process shifts valuable resources from data analysis to data assembly. To address these issues, the ORNL DAAC, a NASA-sponsored terrestrial ecology data center, has utilized geospatial Web service technology, such as Open Geospatial Consortium (OGC) Web Map Service (WMS) and OGC Web Coverage Service (WCS) standards, to increase the usability and availability of terrestrial ecology data sets. Data sets are standardized into non-proprietary file formats and distributed through OGC Web Service standards. OGC Web services allow the ORNL DAAC to store data sets in a single format and distribute them in multiple ways and formats. Registering the OGC Web services through search catalogues and other spatial data tools allows for publicizing the data sets and makes them more available across the Internet. The ORNL DAAC has also created a Web
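
    A minimal sketch of retrieving a rendered map from an OGC WMS endpoint with the OWSLib library, as a consumer of services of the kind described above; the service URL and layer name are placeholders, not an actual ORNL DAAC endpoint.

        # Sketch: fetching a map image from an OGC WMS endpoint with OWSLib.
        from owslib.wms import WebMapService

        wms = WebMapService("https://daac.example.org/wms", version="1.1.1")

        img = wms.getmap(
            layers=["ecology_layer"],      # hypothetical layer name
            srs="EPSG:4326",
            bbox=(-180, -90, 180, 90),
            size=(600, 300),
            format="image/png",
        )
        with open("map.png", "wb") as fh:
            fh.write(img.read())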

  5. Act No. 24994 of 19 January 1989. Basic Law on the Rural Development of the Peruvian Amazon Region.

    PubMed

    1989-01-01

    This Act sets forth the government's policy on rural development of the Peruvian Amazon region. Major objectives of the Act include the promotion of new rural settlements in the Amazon region, the promotion of migration from the Andes to the Amazon region, and the stimulation of agriculture, livestock, and forestry activities in the Amazon region. The following are the means that the government will use, among others, to attain these goals: 1) the development of Population Displacement Programmes, which will give individual persons and families economic and logistic support in moving; 2) the establishment of Civic Colonizing Services, temporary mobile units, which will offer settlers health services, education services, technical assistance with respect to agriculture and livestock, and promotional credits; 3) the creation of the Council for Amazon River Transport to coordinate and recommend activities to improve river transport; 4) the granting to settlers of land, free education for their children, medical care, technical training and assistance with respect to agriculture, and a supply of seeds; 5) the exemption of certain investors from payment of income taxes; and 6) the granting of a wide range of incentives for agricultural production. The Act also creates a Council for Planning and Development in the Amazon Region to draw up and approve a Plan for the Development of the Amazon Region. It calls for the rational use of the natural resources of the Amazon Region in the framework of preserving the ecosystem and preventing its ruin and delegates to the regional governments the authority to enter into contracts on the use of forest materials and to undertake reforestation programs. Finally, the Act provides various guarantees for the native population, including guarantees with respect to land and preservation of ethnic and social identity.

  6. DIY Geospatial Web Service Chains: GeoChaining Makes It Easy

    NASA Astrophysics Data System (ADS)

    Wu, H.; You, L.; Gui, Z.

    2011-08-01

    It is a great challenge for beginners to create, deploy and utilize a Geospatial Web Service Chain (GWSC). People in computer science are usually not familiar with geospatial domain knowledge. Geospatial practitioners may lack knowledge about web services and service chains. The end users may lack both. However, integrated visual editing interfaces, validation tools, and one-click deployment wizards may help to lower the learning curve and improve modelling skills so beginners will have a better experience. GeoChaining is a GWSC modelling tool designed and developed based on these ideas. GeoChaining integrates visual editing, validation, deployment, execution etc. into a unified platform. By employing a Virtual Globe, users can intuitively visualize raw data and results produced by GeoChaining. All of these features allow users to easily start using GWSC, regardless of their professional background and computer skills. Further, GeoChaining supports GWSC model reuse, meaning that an entire GWSC model, or even a specific part of one, can be directly reused in a new model. This greatly improves the efficiency of creating a new GWSC, and also contributes to the sharing and interoperability of GWSC.

  7. A web service framework for astronomical remote observation in Antarctica by using satellite link

    NASA Astrophysics Data System (ADS)

    Jia, M.-h.; Chen, Y.-q.; Zhang, G.-y.; Jiang, P.; Zhang, H.; Wang, J.

    2018-07-01

    Many telescopes are deployed in Antarctica because it offers excellent astronomical observation conditions. However, because Antarctica's environment is harsh to humans, remote operation of telescopes is necessary for observation. Furthermore, communication with devices in Antarctica through a satellite link with low bandwidth and high latency limits the effectiveness of remote observation. This paper introduces a web service framework for remote astronomical observation in Antarctica. The framework is based on Python Tornado. RTS2-HTTPD and Redis are used as the access interface to the telescope control system in Antarctica. The web service provides real-time updates through WebSocket. To improve the user experience and control effectiveness under the poor satellite link conditions, an agent server is deployed on the mainland to synchronize the Antarctic server's data and send it to domestic users in China. The agent server forwards the requests of domestic users to the Antarctic master server. The web service was deployed and tested on the Bright Star Survey Telescope (BSST) in Antarctica. Results show that the service meets the demands of real-time, multiuser remote observation, and that domestic users have a better experience of remote operation.
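
    A minimal sketch of the real-time update pattern described above, using Tornado's WebSocket support to push status messages to connected clients; the handler, port and status payload are invented for illustration.

        # Sketch: a Tornado web service that broadcasts telescope status
        # updates to connected browsers over WebSocket. In the real system
        # the status would come from RTS2-HTTPD/Redis; here it is hard-coded.
        import json

        import tornado.ioloop
        import tornado.web
        import tornado.websocket

        clients = set()

        class StatusSocket(tornado.websocket.WebSocketHandler):
            def open(self):
                clients.add(self)

            def on_close(self):
                clients.discard(self)

        def push_status():
            msg = json.dumps({"telescope": "BSST", "state": "tracking"})
            for client in list(clients):
                client.write_message(msg)

        app = tornado.web.Application([(r"/status", StatusSocket)])
        app.listen(8888)
        # Push a status update to all connected clients every 5 seconds.
        tornado.ioloop.PeriodicCallback(push_status, 5000).start()
        tornado.ioloop.IOLoop.current().start()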

  8. First evaluation of the NHS direct online clinical enquiry service: a nurse-led web chat triage service for the public.

    PubMed

    Eminovic, Nina; Wyatt, Jeremy C; Tarpey, Aideen M; Murray, Gerard; Ingrams, Grant J

    2004-06-02

    NHS Direct is a telephone triage service used by the UK public to contact a nurse for any kind of health problem. NHS Direct Online (NHSDO) extends NHS Direct, allowing the telephone to be replaced by the Internet, and introducing new opportunities for informing patients about their health. One NHSDO service under development is the Clinical Enquiry Service (CES), which uses Web chat as the communication medium. To identify the opportunities and possible risks of such a service, we explored its safety, feasibility, and patient perceptions about using Web chat to contact a nurse. During a six-day pilot performed in an inner-city general practice in Coventry, non-urgent patients attending their GP were asked to test the service. After filling out three Web forms, patients used a simple Web chat application to communicate with trained NHS Direct triage nurses, who responded with appropriate triage advice. All patients were seen by their GP immediately after using the Web chat service. Safety was explored by comparing the nurse triage end point with the GP's recommended end point. In order to check the feasibility of the service, we measured the duration of the chat session. Patient perceptions were measured before and after using the service through a modified Telemedicine Perception Questionnaire (TMPQ) instrument. All patients were observed by a researcher who captured any comments and, if necessary, assisted with the process. A total of 25 patients (mean age 48 years; 57% female) agreed to participate in the study. An exact match between the nurse and the GP end point was found in 45% (10/22) of cases. In two cases, the CES nurse proposed a less urgent end point than the GP. The median duration of Web chat sessions was 30 minutes, twice the median for NHS Direct telephone calls for 360 patients with similar presenting problems. There was a significant improvement in patients' perception of CES after using the service (mean pre-test TMPQ score 44/60, post-test 49

  9. Integrating hydrologic modeling web services with online data sharing to prepare, store, and execute models in hydrology

    NASA Astrophysics Data System (ADS)

    Gan, T.; Tarboton, D. G.; Dash, P. K.; Gichamo, T.; Horsburgh, J. S.

    2017-12-01

    Web based apps, web services and online data and model sharing technology are becoming increasingly available to support research. This promises benefits in terms of collaboration, platform independence, transparency and reproducibility of modeling workflows and results. However, challenges still exist in real application of these capabilities and the programming skills researchers need to use them. In this research we combined hydrologic modeling web services with an online data and model sharing system to develop functionality to support reproducible hydrologic modeling work. We used HydroDS, a system that provides web services for input data preparation and execution of a snowmelt model, and HydroShare, a hydrologic information system that supports the sharing of hydrologic data, model and analysis tools. To make the web services easy to use, we developed a HydroShare app (based on the Tethys platform) to serve as a browser based user interface for HydroDS. In this integration, HydroDS receives web requests from the HydroShare app to process the data and execute the model. HydroShare supports storage and sharing of the results generated by HydroDS web services. The snowmelt modeling example served as a use case to test and evaluate this approach. We show that, after the integration, users can prepare model inputs or execute the model through the web user interface of the HydroShare app without writing program code. The model input/output files and metadata describing the model instance are stored and shared in HydroShare. These files include a Python script that is automatically generated by the HydroShare app to document and reproduce the model input preparation workflow. Once stored in HydroShare, inputs and results can be shared with other users, or published so that other users can directly discover, repeat or modify the modeling work. This approach provides a collaborative environment that integrates hydrologic web services with a data and model sharing

  10. Virtualization of open-source secure web services to support data exchange in a pediatric critical care research network.

    PubMed

    Frey, Lewis J; Sward, Katherine A; Newth, Christopher J L; Khemani, Robinder G; Cryer, Martin E; Thelen, Julie L; Enriquez, Rene; Shaoyu, Su; Pollack, Murray M; Harrison, Rick E; Meert, Kathleen L; Berg, Robert A; Wessel, David L; Shanley, Thomas P; Dalton, Heidi; Carcillo, Joseph; Jenkins, Tammara L; Dean, J Michael

    2015-11-01

    To examine the feasibility of deploying a virtual web service for sharing data within a research network, and to evaluate the impact on data consistency and quality. Virtual machines (VMs) encapsulated an open-source, semantically and syntactically interoperable secure web service infrastructure along with a shadow database. The VMs were deployed to 8 Collaborative Pediatric Critical Care Research Network Clinical Centers. Virtual web services could be deployed in hours. The interoperability of the web services reduced format misalignment from 56% to 1% and demonstrated that 99% of the data transferred consistently using the data dictionary, while 1% needed human curation. Use of virtualized open-source secure web service technology could enable direct electronic abstraction of data from hospital databases for research purposes. © The Author 2015. Published by Oxford University Press on behalf of the American Medical Informatics Association. All rights reserved. For Permissions, please email: journals.permissions@oup.com.

  11. Experimenting with semantic web services to understand the role of NLP technologies in healthcare.

    PubMed

    Jagannathan, V

    2006-01-01

    NLP technologies can play a significant role in healthcare where a predominant segment of the clinical documentation is in text form. In a graduate course focused on understanding semantic web services at West Virginia University, a class project was designed with the purpose of exploring potential use for NLP-based abstraction of clinical documentation. The role of NLP-technology was simulated using human abstractors and various workflows were investigated using public domain workflow and semantic web service technologies. This poster explores the potential use of NLP and the role of workflow and semantic web technologies in developing healthcare IT environments.

  12. A Methodology for the Development of RESTful Semantic Web Services for Gene Expression Analysis

    PubMed Central

    Guardia, Gabriela D. A.; Pires, Luís Ferreira; Vêncio, Ricardo Z. N.; Malmegrim, Kelen C. R.; de Farias, Cléver R. G.

    2015-01-01

    Gene expression studies are generally performed through multi-step analysis processes, which require the integrated use of a number of analysis tools. In order to facilitate tool/data integration, an increasing number of analysis tools have been developed as or adapted to semantic web services. In recent years, some approaches have been defined for the development and semantic annotation of web services created from legacy software tools, but these approaches still present many limitations. In addition, to the best of our knowledge, no suitable approach has been defined for the functional genomics domain. Therefore, this paper aims at defining an integrated methodology for the implementation of RESTful semantic web services created from gene expression analysis tools and the semantic annotation of such services. We have applied our methodology to the development of a number of services to support the analysis of different types of gene expression data, including microarray and RNA-Seq. All developed services are publicly available in the Gene Expression Analysis Services (GEAS) Repository at http://dcm.ffclrp.usp.br/lssb/geas. Additionally, we have used a number of the developed services to create different integrated analysis scenarios to reproduce parts of two gene expression studies documented in the literature. The first study involves the analysis of one-color microarray data obtained from multiple sclerosis patients and healthy donors. The second study comprises the analysis of RNA-Seq data obtained from melanoma cells to investigate the role of the remodeller BRG1 in the proliferation and morphology of these cells. Our methodology provides concrete guidelines and technical details in order to facilitate the systematic development of semantic web services. Moreover, it encourages the development and reuse of these services for the creation of semantically integrated solutions for gene expression analysis. PMID:26207740

  13. A Methodology for the Development of RESTful Semantic Web Services for Gene Expression Analysis.

    PubMed

    Guardia, Gabriela D A; Pires, Luís Ferreira; Vêncio, Ricardo Z N; Malmegrim, Kelen C R; de Farias, Cléver R G

    2015-01-01

    Gene expression studies are generally performed through multi-step analysis processes, which require the integrated use of a number of analysis tools. In order to facilitate tool/data integration, an increasing number of analysis tools have been developed as or adapted to semantic web services. In recent years, some approaches have been defined for the development and semantic annotation of web services created from legacy software tools, but these approaches still present many limitations. In addition, to the best of our knowledge, no suitable approach has been defined for the functional genomics domain. Therefore, this paper aims at defining an integrated methodology for the implementation of RESTful semantic web services created from gene expression analysis tools and the semantic annotation of such services. We have applied our methodology to the development of a number of services to support the analysis of different types of gene expression data, including microarray and RNA-Seq. All developed services are publicly available in the Gene Expression Analysis Services (GEAS) Repository at http://dcm.ffclrp.usp.br/lssb/geas. Additionally, we have used a number of the developed services to create different integrated analysis scenarios to reproduce parts of two gene expression studies documented in the literature. The first study involves the analysis of one-color microarray data obtained from multiple sclerosis patients and healthy donors. The second study comprises the analysis of RNA-Seq data obtained from melanoma cells to investigate the role of the remodeller BRG1 in the proliferation and morphology of these cells. Our methodology provides concrete guidelines and technical details in order to facilitate the systematic development of semantic web services. Moreover, it encourages the development and reuse of these services for the creation of semantically integrated solutions for gene expression analysis.

  14. Insights for conducting real-time focus groups online using a web conferencing service.

    PubMed

    Kite, James; Phongsavan, Philayrath

    2017-01-01

    Background Online focus groups have been increasing in use over the last 2 decades, including in biomedical and health-related research. However, most of this research has made use of text-based services such as email, discussion boards, and chat rooms, which do not replicate the experience of face-to-face focus groups. Web conferencing services have the potential to more closely match the face-to-face focus group experience, including important visual and aural cues. This paper provides critical reflections on using a web conferencing service to conduct online focus groups. Methods As part of a broader study, we conducted both online and face-to-face focus groups with participants. The online groups were conducted in real time using the web conferencing service Blackboard Collaborate™. We used reflective practice to assess how the conduct and content of the groups were similar and how they differed across the two platforms. Results We found that further research using such services is warranted, particularly when working with hard-to-reach or geographically dispersed populations. The level of discussion and the quality of the data obtained were similar to those found in face-to-face groups. However, some issues remain, particularly in relation to managing technical issues experienced by participants and ensuring adequate recording quality to facilitate transcription and analysis. Conclusions Our experience with using web conferencing for online focus groups suggests that they have the potential to offer a realistic and comparable alternative to face-to-face focus groups, especially for geographically dispersed populations such as rural and remote health practitioners. Further testing of these services is warranted, but researchers should carefully consider the service they use to minimise the impact of technical difficulties.

  15. Development of Virtual Resource Based IoT Proxy for Bridging Heterogeneous Web Services in IoT Networks.

    PubMed

    Jin, Wenquan; Kim, DoHyeun

    2018-05-26

    The Internet of Things is composed of heterogeneous devices, applications, and platforms that use multiple communication technologies to connect to the Internet for providing seamless services ubiquitously. With the requirement of developing Internet of Things products, many protocols, program libraries, frameworks, and standard specifications have been proposed. Therefore, providing a consistent interface to access services from those environments is difficult. Moreover, bridging existing web services to sensor and actuator networks is also important for providing Internet of Things services in various industry domains. In this paper, an Internet of Things proxy is proposed that is based on virtual resources to bridge heterogeneous web services from the Internet to the Internet of Things network. The proxy enables clients to have transparent access to Internet of Things devices and web services in the network. The proxy comprises a server and a client that forward messages between different communication environments using virtual resources, where the server faces the message sender and the client faces the message receiver. We design the proxy for the Open Connectivity Foundation network, where the virtual resources are discovered by clients as Open Connectivity Foundation resources. The virtual resources represent resources that expose services on the Internet through web service providers. Although the services are provided by web service providers from the Internet, the client can access them using the consistent communication protocol of the Open Connectivity Foundation network. For discovering the resources needed to access services, the client also uses the consistent discovery interface to discover the Open Connectivity Foundation devices and virtual resources.
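
    A minimal sketch of the forwarding idea, in which a local proxy exposes a virtual resource and relays requests to an upstream web service. It uses only the Python standard library, and the upstream URL is hypothetical; the actual system targets Open Connectivity Foundation messaging, which is not reproduced here.

        # Sketch: a local proxy that maps a virtual resource path onto an
        # upstream web service and relays the response back to the client.
        from http.server import BaseHTTPRequestHandler, HTTPServer
        from urllib.request import urlopen

        UPSTREAM = "https://example.org/api"  # hypothetical service provider

        class VirtualResourceProxy(BaseHTTPRequestHandler):
            def do_GET(self):
                # Forward the local (virtual) resource path upstream.
                with urlopen(UPSTREAM + self.path) as resp:
                    body = resp.read()
                    self.send_response(resp.status)
                    self.send_header("Content-Type",
                                     resp.headers.get("Content-Type",
                                                      "application/octet-stream"))
                    self.send_header("Content-Length", str(len(body)))
                    self.end_headers()
                    self.wfile.write(body)

        HTTPServer(("", 8080), VirtualResourceProxy).serve_forever()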

  16. A snapshot of 3649 Web-based services published between 1994 and 2017 shows a decrease in availability after 2 years.

    PubMed

    Osz, Ágnes; Pongor, Lorinc Sándor; Szirmai, Danuta; Gyorffy, Balázs

    2017-12-08

    The long-term availability of online Web services is of utmost importance to ensure the reproducibility of analytical results. However, because of lack of maintenance following acceptance, many servers become unavailable after a short period of time. Our aim was to monitor the accessibility and the decay rate of published Web services, as well as to determine the factors underlying trend changes. We searched PubMed to identify publications containing Web server-related terms published between 1994 and 2017. Automatic and manual screening was used to check the status of each Web service. Kruskal-Wallis, Mann-Whitney and Chi-square tests were used to evaluate various parameters, including availability, accessibility, platform, origin of authors, citation, journal impact factor and publication year. We identified 3649 publications in 375 journals, of which 2522 (69%) were currently active. Over 95% of sites were running in the first 2 years, but this rate dropped to 84% in the third year and gradually sank afterwards (P < 1e-16). The mean half-life of Web services is 10.39 years. Working Web services were published in journals with higher impact factors (P = 4.8e-04). Services published before the year 2000 received minimal attention. The citation of offline services was lower than that of online services (P = 0.022). The majority of Web services provide analytical tools, and the proportion of databases is slowly decreasing. In conclusion, almost one-third of Web services published to date have gone out of service. We recommend continued support of Web-based services to increase the reproducibility of published results. © The Author 2017. Published by Oxford University Press.
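
    The automatic screening step described above can be approximated with a short availability checker; the sketch below issues HEAD requests concurrently and treats any status below 400 as online (the URL list is illustrative).

        # Sketch: checking whether published web-service URLs still respond.
        import concurrent.futures

        import requests

        urls = [
            "http://probis.cmm.ki.si",
            "http://bioxsd.org",
        ]

        def is_alive(url: str) -> bool:
            try:
                r = requests.head(url, timeout=10, allow_redirects=True)
                return r.status_code < 400
            except requests.RequestException:
                return False

        with concurrent.futures.ThreadPoolExecutor(max_workers=8) as pool:
            for url, ok in zip(urls, pool.map(is_alive, urls)):
                print(url, "online" if ok else "offline")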

  17. BioXSD: the common data-exchange format for everyday bioinformatics web services

    PubMed Central

    Kalaš, Matúš; Puntervoll, Pål; Joseph, Alexandre; Bartaševičiūtė, Edita; Töpfer, Armin; Venkataraman, Prabakar; Pettifer, Steve; Bryne, Jan Christian; Ison, Jon; Blanchet, Christophe; Rapacki, Kristoffer; Jonassen, Inge

    2010-01-01

    Motivation: The world-wide community of life scientists has access to a large number of public bioinformatics databases and tools, which are developed and deployed using diverse technologies and designs. More and more of these resources offer a programmatic web-service interface. However, efficient use of the resources is hampered by the lack of widely used, standard data-exchange formats for the basic, everyday bioinformatics data types. Results: BioXSD has been developed as a candidate for a standard, canonical exchange format for basic bioinformatics data. BioXSD is represented by a dedicated XML Schema and defines the syntax for biological sequences, sequence annotations, alignments and references to resources. We have adapted a set of web services to use BioXSD as the input and output format, and implemented a test-case workflow. This demonstrates that the approach is feasible and provides smooth interoperability. The semantics of BioXSD is provided by annotation with the EDAM ontology. We discuss in a separate section how BioXSD relates to other initiatives and approaches, including existing standards and the Semantic Web. Availability: The BioXSD 1.0 XML Schema is freely available at http://www.bioxsd.org/BioXSD-1.0.xsd under the Creative Commons BY-ND 3.0 license. The http://bioxsd.org web page offers documentation, examples of data in BioXSD format, example workflows with source code in common programming languages, an updated list of compatible web services and tools, and a repository of feature requests from the community. Contact: matus.kalas@bccs.uib.no; developers@bioxsd.org; support@bioxsd.org PMID:20823319

  18. A Framework for Sharing and Integrating Remote Sensing and GIS Models Based on Web Service

    PubMed Central

    Chen, Zeqiang; Lin, Hui; Chen, Min; Liu, Deer; Bao, Ying; Ding, Yulin

    2014-01-01

    Sharing and integrating Remote Sensing (RS) and Geographic Information System/Science (GIS) models are critical for developing practical application systems. Facilitating model sharing and model integration is a problem for model publishers and model users, respectively. To address this problem, a framework based on a Web service for sharing and integrating RS and GIS models is proposed in this paper. The fundamental idea of the framework is to publish heterogeneous RS and GIS models into standard Web services for sharing and interoperation and then to integrate the RS and GIS models using Web services. For the former, a “black box” and a visual method are employed to facilitate the publishing of the models as Web services. For the latter, model integration based on the geospatial workflow and a semantics-supported matching method is introduced. Under this framework, model sharing and integration is applied for developing the Pearl River Delta water environment monitoring system. The results show that the framework can facilitate model sharing and model integration for model publishers and model users. PMID:24901016

  19. A framework for sharing and integrating remote sensing and GIS models based on Web service.

    PubMed

    Chen, Zeqiang; Lin, Hui; Chen, Min; Liu, Deer; Bao, Ying; Ding, Yulin

    2014-01-01

    Sharing and integrating Remote Sensing (RS) and Geographic Information System/Science (GIS) models are critical for developing practical application systems. Facilitating model sharing and model integration is a problem for model publishers and model users, respectively. To address this problem, a framework based on a Web service for sharing and integrating RS and GIS models is proposed in this paper. The fundamental idea of the framework is to publish heterogeneous RS and GIS models into standard Web services for sharing and interoperation and then to integrate the RS and GIS models using Web services. For the former, a "black box" and a visual method are employed to facilitate the publishing of the models as Web services. For the latter, model integration based on the geospatial workflow and a semantics-supported matching method is introduced. Under this framework, model sharing and integration is applied for developing the Pearl River Delta water environment monitoring system. The results show that the framework can facilitate model sharing and model integration for model publishers and model users.

  20. Sun Protection Belief Clusters: Analysis of Amazon Mechanical Turk Data.

    PubMed

    Santiago-Rivas, Marimer; Schnur, Julie B; Jandorf, Lina

    2016-12-01

    This study aimed (i) to determine whether people could be differentiated on the basis of their sun protection belief profiles and individual characteristics and (ii) to explore the use of a crowdsourcing web service for the assessment of sun protection beliefs. A sample of 500 adults completed an online survey of sun protection belief items using Amazon Mechanical Turk. A two-phased cluster analysis (i.e., hierarchical and non-hierarchical K-means) was utilized to determine clusters of sun protection barriers and facilitators. Results yielded three distinct clusters of sun protection barriers and three distinct clusters of sun protection facilitators. Significant associations between gender, age, sun sensitivity, and cluster membership were identified. Results also showed an association between barrier and facilitator cluster membership. The results of this study provide a potential alternative approach to developing future sun protection promotion initiatives in the population. The findings add to our knowledge regarding individuals who support, oppose, or are ambivalent toward sun protection and inform intervention research by identifying distinct subtypes that may best benefit from (or have a higher need for) skin cancer prevention efforts.
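
    A minimal sketch of a two-phased cluster analysis of the kind named in the abstract: hierarchical (Ward) clustering to inspect structure and choose the number of clusters, followed by non-hierarchical K-means. The data here are random stand-ins for survey responses; in the study, rows would be respondents and columns sun protection belief items.

        # Sketch: two-phased clustering (hierarchical, then K-means).
        import numpy as np
        from scipy.cluster.hierarchy import linkage
        from sklearn.cluster import KMeans

        rng = np.random.default_rng(0)
        X = rng.normal(size=(500, 12))  # 500 respondents, 12 belief items

        # Phase 1: hierarchical (Ward) clustering, used to inspect the merge
        # structure (e.g. via a dendrogram) and choose a cluster count.
        Z = linkage(X, method="ward")
        k = 3  # e.g. chosen from the dendrogram; the study found three clusters

        # Phase 2: non-hierarchical K-means with the chosen cluster count.
        labels = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(X)
        print(np.bincount(labels))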

  1. Technical note: Harmonising metocean model data via standard web services within small research groups

    NASA Astrophysics Data System (ADS)

    Signell, Richard P.; Camossi, Elena

    2016-05-01

    Work over the last decade has resulted in standardised web services and tools that can significantly improve the efficiency and effectiveness of working with meteorological and ocean model data. While many operational modelling centres have enabled query and access to data via common web services, most small research groups have not. The penetration of this approach into the research community, where IT resources are limited, can be dramatically improved by (1) making it simple for providers to enable web service access to existing output files; (2) using free technologies that are easy to deploy and configure; and (3) providing standardised, service-based tools that work in existing research environments. We present a simple, local brokering approach that lets modellers continue to use their existing files and tools, while serving virtual data sets that can be used with standardised tools. The goal of this paper is to convince modellers that a standardised framework is not only useful but can be implemented with modest effort using free software components. We use the NetCDF Markup Language (NcML) for data aggregation and standardisation, the THREDDS Data Server for data delivery, pycsw for data search, NCTOOLBOX (MATLAB®) and Iris (Python) for data access, and the Open Geospatial Consortium Web Map Service for data preview. We illustrate the effectiveness of this approach with two use cases involving small research modelling groups at NATO and USGS.
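
    A minimal sketch of the consumer side of this approach: opening an aggregated virtual dataset served by a THREDDS Data Server over OPeNDAP, so only the requested slab is transferred. The URL, variable name and dimension ordering are assumptions.

        # Sketch: reading from a THREDDS/OPeNDAP endpoint with netCDF4.
        from netCDF4 import Dataset

        url = "http://models.example.org/thredds/dodsC/roms/aggregated"  # assumed
        ds = Dataset(url)

        temp = ds.variables["temp"]   # assumed variable name
        print(temp.shape, temp.units)  # units attribute assumed present

        # Server-side subsetting: only this slab crosses the network
        # (assuming (time, depth, y, x) dimension ordering).
        surface = temp[-1, -1, :, :]
        ds.close()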

  2. Final Report for DOE Project: Portal Web Services: Support of DOE SciDAC Collaboratories

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mary Thomas, PI; Geoffrey Fox, Co-PI; Gannon, D

    2007-10-01

    Grid portals provide the scientific community with familiar and simplified interfaces to the Grid and Grid services, and it is important to deploy grid portals onto the SciDAC grids and collaboratories. The goal of this project is the research, development and deployment of interoperable portal and web services that can be used on SciDAC National Collaboratory grids. This project has four primary task areas: development of portal systems; management of data collections; DOE science application integration; and development of web and grid services in support of the above activities.

  3. IAServ: an intelligent home care web services platform in a cloud for aging-in-place.

    PubMed

    Su, Chuan-Jun; Chiang, Chang-Yu

    2013-11-12

    As the elderly population has been rapidly expanding and the core tax-paying population has been shrinking, the need for adequate elderly health and housing services continues to grow while the resources to provide such services are becoming increasingly scarce. Thus, increasing the efficiency of the delivery of healthcare services through the use of modern technology is a pressing issue. The seamless integration of such enabling technologies as ontology, intelligent agents, web services, and cloud computing is transforming healthcare from hospital-based treatments to home-based self-care and preventive care. A ubiquitous healthcare platform based on this technological integration, which synergizes service providers with patients' needs, needs to be developed to provide personalized healthcare services at the right time, in the right place, and in the right manner. This paper presents the development and overall architecture of IAServ (the Intelligent Aging-in-place Home care Web Services Platform), which provides personalized healthcare services ubiquitously in a cloud computing setting to support the most desirable and cost-efficient method of care for the aged: aging in place. IAServ is expected to offer intelligent, pervasive, accurate and contextually-aware personal care services. Architecturally, the implemented IAServ leverages web services and cloud computing to provide economical, scalable, and robust healthcare services over the Internet.

  4. IAServ: An Intelligent Home Care Web Services Platform in a Cloud for Aging-in-Place

    PubMed Central

    Su, Chuan-Jun; Chiang, Chang-Yu

    2013-01-01

    As the elderly population has been rapidly expanding and the core tax-paying population has been shrinking, the need for adequate elderly health and housing services continues to grow while the resources to provide such services are becoming increasingly scarce. Thus, increasing the efficiency of the delivery of healthcare services through the use of modern technology is a pressing issue. The seamless integration of such enabling technologies as ontology, intelligent agents, web services, and cloud computing is transforming healthcare from hospital-based treatments to home-based self-care and preventive care. A ubiquitous healthcare platform based on this technological integration, which synergizes service providers with patients’ needs, needs to be developed to provide personalized healthcare services at the right time, in the right place, and in the right manner. This paper presents the development and overall architecture of IAServ (the Intelligent Aging-in-place Home care Web Services Platform), which provides personalized healthcare services ubiquitously in a cloud computing setting to support the most desirable and cost-efficient method of care for the aged: aging in place. IAServ is expected to offer intelligent, pervasive, accurate and contextually-aware personal care services. Architecturally, the implemented IAServ leverages web services and cloud computing to provide economical, scalable, and robust healthcare services over the Internet. PMID:24225647

  5. An Approach for Web Service Selection Based on Confidence Level of Decision Maker

    PubMed Central

    Khezrian, Mojtaba; Jahan, Ali; Wan Kadir, Wan Mohd Nasir; Ibrahim, Suhaimi

    2014-01-01

    Web services today are among the most widely used groups for Service Oriented Architecture (SOA). Service selection is one of the most significant current discussions in SOA; it evaluates discovered services and chooses the best candidate from them. Although the majority of service selection techniques apply Quality of Service (QoS), the behaviour of QoS-based service selection leads to service selection problems in Multi-Criteria Decision Making (MCDM). In existing works, the confidence level of decision makers is neglected, and their expertise in assessing Web services is not considered. In this paper, we employ the VIKOR (VIšekriterijumsko KOmpromisno Rangiranje) method, which is absent in the literature on service selection but is well known in other research. We propose a QoS-based approach that deals with service selection by applying VIKOR with improved features. This research determines the weights of criteria based on user preference and accounts for the confidence level of decision makers. The proposed approach is illustrated by an example in order to demonstrate and validate the model. The results of this research may help service consumers reach a more efficient decision when selecting the appropriate service. PMID:24897426
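
    A minimal sketch of the core VIKOR ranking computation for QoS-based service selection. The decision matrix, weights and the strategy parameter v are invented, and the confidence-level extension proposed in the paper is not reproduced.

        # Sketch: VIKOR ranking of candidate services over QoS criteria.
        import numpy as np

        # Rows: services; columns: e.g. availability, throughput, response time.
        F = np.array([[0.99, 120.0, 0.30],
                      [0.97, 150.0, 0.25],
                      [0.95, 100.0, 0.20],
                      [0.98, 130.0, 0.40]])
        benefit = np.array([True, True, False])  # response time is a cost
        w = np.array([0.4, 0.3, 0.3])            # criterion weights
        v = 0.5                                  # group utility vs. regret

        # Best and worst value per criterion.
        f_best = np.where(benefit, F.max(axis=0), F.min(axis=0))
        f_worst = np.where(benefit, F.min(axis=0), F.max(axis=0))

        # Weighted normalised distances to the best value.
        D = w * (f_best - F) / (f_best - f_worst)
        S = D.sum(axis=1)   # group utility
        R = D.max(axis=1)   # individual regret
        Q = (v * (S - S.min()) / (S.max() - S.min())
             + (1 - v) * (R - R.min()) / (R.max() - R.min()))

        print("ranking (best first):", np.argsort(Q))  # lower Q is better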

  6. Exploring weight loss services in primary care and staff views on using a web-based programme.

    PubMed

    Ware, Lisa J; Williams, Sarah; Bradbury, Katherine; Brant, Catherine; Little, Paul; Hobbs, F D Richard; Yardley, Lucy

    2012-01-01

    Demand is increasing for primary care to deliver effective weight management services to patients, but research suggests that staff feel inadequately resourced for such a role. Supporting service delivery with a free and effective web-based weight management programme could maximise primary care resource and provide cost-effective support for patients. However, integration of e-health into primary care may face challenges. To explore primary care staff experiences of delivering weight management services and their perceptions of a web-based weight management programme to aid service delivery. Focus groups were conducted with primary care physicians, nurses and healthcare assistants (n = 36) involved in delivering weight loss services. Data were analysed using inductive thematic analysis. Participants thought that primary care should be involved in delivering weight management, especially when weight was aggravating health problems. However, they felt under-resourced to deliver these services and unsure as to the effectiveness of their input, as routine services were not evaluated. Beliefs that current services were ineffective resulted in staff reluctance to allocate more resources. Participants were hopeful that supplementing practice with a web-based weight management programme would enhance patient services and promote service evaluation. Although primary care staff felt they should deliver weight loss services, low levels of faith in the efficacy of current treatments resulted in provision of under-resourced and 'ad hoc' services. Integration of a web-based weight loss programme that promotes service evaluation and provides a cost-effective option for supporting patients may encourage practices to invest more in weight management services.

  7. ChEMBL web services: streamlining access to drug discovery data and utilities

    PubMed Central

    Davies, Mark; Nowotka, Michał; Papadatos, George; Dedman, Nathan; Gaulton, Anna; Atkinson, Francis; Bellis, Louisa; Overington, John P.

    2015-01-01

    ChEMBL is now a well-established resource in the fields of drug discovery and medicinal chemistry research. The ChEMBL database curates and stores standardized bioactivity, molecule, target and drug data extracted from multiple sources, including the primary medicinal chemistry literature. Programmatic access to ChEMBL data has been improved by a recent update to the ChEMBL web services (version 2.0.x, https://www.ebi.ac.uk/chembl/api/data/docs), which exposes significantly more data from the underlying database and introduces new functionality. To complement the data-focused services, a utility service (version 1.0.x, https://www.ebi.ac.uk/chembl/api/utils/docs), which provides RESTful access to commonly used cheminformatics methods, has also been concurrently developed. The ChEMBL web services can be used together or independently to build applications and data processing workflows relevant to drug discovery and chemical biology. PMID:25883136
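
    A minimal sketch of querying the data web services; the base URL is taken from the abstract and CHEMBL25 (aspirin) is a common example identifier, though response fields may vary between releases.

        # Sketch: fetching one molecule record from the ChEMBL data services.
        import requests

        BASE = "https://www.ebi.ac.uk/chembl/api/data"

        resp = requests.get(f"{BASE}/molecule/CHEMBL25.json", timeout=30)
        resp.raise_for_status()
        mol = resp.json()
        print(mol["pref_name"])
        print(mol["molecule_structures"]["canonical_smiles"])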

  8. BioModels.net Web Services, a free and integrated toolkit for computational modelling software.

    PubMed

    Li, Chen; Courtot, Mélanie; Le Novère, Nicolas; Laibe, Camille

    2010-05-01

    Exchanging and sharing scientific results are essential for researchers in the field of computational modelling. BioModels.net defines agreed-upon standards for model curation. A fundamental one, MIRIAM (Minimum Information Requested in the Annotation of Models), standardises the annotation and curation process of quantitative models in biology. To support this standard, MIRIAM Resources maintains a set of standard data types for annotating models, and provides services for manipulating these annotations. Furthermore, BioModels.net creates controlled vocabularies, such as SBO (Systems Biology Ontology), which strictly indexes, defines and links terms used in Systems Biology. Finally, BioModels Database provides a free, centralised, publicly accessible database for storing, searching and retrieving curated and annotated computational models. Each resource provides a web interface to submit, search, retrieve and display its data. In addition, the BioModels.net team provides a set of Web Services which allows the community to programmatically access the resources. A user is then able to perform remote queries, such as retrieving a model and resolving all its MIRIAM Annotations, as well as getting the details about the associated SBO terms. These web services use established standards. Communications rely on SOAP (Simple Object Access Protocol) messages and the available queries are described in a WSDL (Web Services Description Language) file. Several libraries are provided in order to simplify the development of client software. BioModels.net Web Services take researchers one step further towards simulating and understanding the entirety of a biological system, by allowing them to retrieve biological models in their own tools, combine queries in workflows and efficiently analyse models.
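
    A minimal sketch of calling a SOAP service from its WSDL description, here with the zeep client library; the WSDL URL and operation name are assumptions for illustration, the real ones being listed in the BioModels.net documentation.

        # Sketch: a SOAP client generated from a WSDL with zeep.
        from zeep import Client

        # Assumed WSDL location; see the BioModels.net documentation.
        WSDL = "https://www.ebi.ac.uk/biomodels-main/services/BioModelsWebServices?wsdl"

        client = Client(WSDL)
        # Operations described in the WSDL become plain Python methods.
        sbml = client.service.getModelSBMLById("BIOMD0000000001")  # assumed operation
        print(sbml[:200])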

  9. Web-Based Course Management and Web Services

    ERIC Educational Resources Information Center

    Mandal, Chittaranjan; Sinha, Vijay Luxmi; Reade, Christopher M. P.

    2004-01-01

    The architecture of a web-based course management tool that has been developed at IIT [Indian Institute of Technology], Kharagpur and which manages the submission of assignments is discussed. Both the distributed architecture used for data storage and the client-server architecture supporting the web interface are described. Further developments…

  10. Taking advantage of Google's Web-based applications and services.

    PubMed

    Brigham, Tara J

    2014-01-01

    Google is a company that is constantly expanding and growing its services and products. While most librarians possess a "love/hate" relationship with Google, there are a number of reasons to consider exploring some of the tools Google has created and made freely available. Applications and services such as Google Docs, Slides, and Google+ are functional and dynamic without the cost of comparable products. This column addresses some of the issues users should be aware of before signing up to use Google's tools and describes some of Google's Web applications and services, along with how they can be useful to librarians in health care.

  11. Towards Semantic Web Services on Large, Multi-Dimensional Coverages

    NASA Astrophysics Data System (ADS)

    Baumann, P.

    2009-04-01

    Observed and simulated data in the Earth Sciences often come as coverages, the general term for space-time varying phenomena as set forth by standardization bodies like the Open Geospatial Consortium (OGC) and ISO. Among such data are 1-D time series, 2-D surface data, 3-D surface data time series as well as x/y/z geophysical and oceanographic data, and 4-D metocean simulation results. With increasing dimensionality the data sizes grow exponentially, up to Petabyte object sizes. Open standards for exploiting coverage archives over the Web are available to a varying extent. The OGC Web Coverage Service (WCS) standard defines basic extraction operations: spatio-temporal and band subsetting, scaling, reprojection, and data format encoding of the result - a simple interoperable interface for coverage access. More processing functionality is available with products like Matlab, Grid-type interfaces, and the OGC Web Processing Service (WPS). However, these often lack properties known to be advantageous from databases: declarativeness (describe results rather than the algorithms), safety in evaluation (no request can keep a server busy infinitely), and optimizability (enabling the server to rearrange the request so as to produce the same result faster). WPS defines a geo-enabled SOAP interface for remote procedure calls. This makes it possible to webify any program, but does not allow for semantic interoperability: a function is identified only by its function name and parameters, while the semantics is encoded in the (only human-readable) title and abstract. Hence, another desirable property is missing, namely an explicit semantics which allows for machine-machine communication and reasoning à la Semantic Web. The OGC Web Coverage Processing Service (WCPS) language, which was adopted as an international standard by OGC in December 2008, defines a flexible interface for the navigation, extraction, and ad-hoc analysis of large, multi-dimensional raster coverages. It is abstract in that it
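
    A minimal sketch of submitting a WCPS query to a server over HTTP; the endpoint, request parameters, coverage name and axis label are assumptions, and the query follows the general WCPS style of a for/return expression with an encode() call.

        # Sketch: posting a WCPS query to a coverage-processing endpoint.
        import requests

        ENDPOINT = "https://ows.example.org/ows"  # hypothetical server

        # Declarative WCPS expression: subset a coverage and encode as CSV.
        # Coverage name and time-axis label are assumptions.
        query = 'for c in (AvgTemperature) return encode(c[ansi("2008-01":"2008-12")], "csv")'

        resp = requests.post(ENDPOINT, data={
            "service": "WCS", "version": "2.0.1",
            "request": "ProcessCoverages", "query": query,  # assumed parameters
        }, timeout=60)
        resp.raise_for_status()
        print(resp.text[:200])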

  12. Living Rivers: Importance of Andes-Amazon Connectivity and Consequences of Hydropower Development

    NASA Astrophysics Data System (ADS)

    Anderson, E.

    2016-12-01

    The inherent dynamism of rivers along elevational and longitudinal gradients underpins freshwater biodiversity, ecosystem function, and ecosystem services in the Andean-Amazon. While this region covers only a small part of the entire Amazon Basin, its influences on downstream ecology, biogeochemistry, and human wellbeing are disproportionate to its relatively small size. Seasonal flow pulses from Andean rivers maintain habitat, signal migratory fishes, and export sediment, nutrients, and organic matter to distant ecosystems, such as lowland Amazonia and the Atlantic coast of Brazil. Rivers are key transportation routes, and freshwater fisheries are a primary protein source for the more than 30 million people who inhabit the Amazon Basin. Numerous cultural traditions depend on free-flowing Andean rivers; examples include the Kukama belief in the underwater cities of the Marañon River, where people who have drowned in rivers and whose bodies are not recovered go to live, and the pre-dawn bathing rituals of the Peruvian Shawi, who gain energy and connect with ancestors in cold, fast-flowing Andean waters. Transformations in the Andean-Amazon landscape, in particular from dams, threaten to compromise flows critical for human and ecosystem wellbeing. Presently, at least 250 hydropower dams are in operation, under construction, or proposed for Andean-Amazon rivers. This presentation will discuss regional trends in hydropower development, quantify the effects of existing and proposed dams on Andean-Amazon connectivity, and examine the social and cultural importance of free-flowing Andean-Amazon rivers.

  13. Service quality of Early Childhood Education web portals in Finnish municipalities

    NASA Astrophysics Data System (ADS)

    Koskivaara, Eija; Pihlaja, Päivi

    An increasing number of governmental organizations have made material available on their web sites as a way of providing users with information about their products and services. In this paper, we apply the Yang et al. (2005) instrument to analyze municipal early childhood education (ECE) web sites in Finland. The objective of the study was to assess the quality of ECE web portals and to offer suggestions for improving their value from the users' point of view. In general, the five dimensions of the Yang et al. model (usability, usefulness of content, adequacy of information, accessibility, and interaction) seem to be applicable in the early childhood education environment as well.

  14. UniPrime2: a web service providing easier Universal Primer design.

    PubMed

    Boutros, Robin; Stokes, Nicola; Bekaert, Michaël; Teeling, Emma C

    2009-07-01

    The UniPrime2 web server is a publicly available online resource which automatically designs large sets of universal primers when given a gene reference ID or Fasta sequence input by a user. UniPrime2 works by automatically retrieving and aligning homologous sequences from GenBank, identifying regions of conservation within the alignment, and generating suitable primers that can be used to amplify variable genomic regions. In essence, UniPrime2 is a suite of publicly available software packages (Blastn, T-Coffee, GramAlign, Primer3), which reduces the laborious process of primer design by integrating these programs into a single software pipeline. Hence, UniPrime2 differs from previous primer design web services in that all steps are automated, linked, saved and phylogenetically delimited, only requiring a single user-defined gene reference ID or input sequence. We provide an overview of the web service and wet-laboratory validation of the primers generated. The system is freely accessible at: http://uniprime.batlab.eu. UniPrime2 is licensed under a Creative Commons Attribution Noncommercial-Share Alike 3.0 Licence.

  15. EnviroAtlas - NHDPlus V2 Hydrologic Unit Boundaries Web Service - Conterminous United States

    EPA Pesticide Factsheets

    This EnviroAtlas web service contains layers depicting hydrologic unit boundary layers and labels for the Subregion level (4-digit HUCs), Subbasin level (8-digit HUCs), and Subwatershed level (12-digit HUCs) for the conterminous United States. EnviroAtlas (https://www.epa.gov/enviroatlas) allows the user to interact with a web-based, easy-to-use, mapping application to view and analyze multiple ecosystem services for the contiguous United States. The dataset is available as downloadable data (https://edg.epa.gov/data/Public/ORD/EnviroAtlas) or as an EnviroAtlas map service. Additional descriptive information about this dataset can be found in its associated EnviroAtlas Fact Sheet (https://www.epa.gov/enviroatlas/enviroatlas-fact-sheets).

  16. Enriching the Web Processing Service

    NASA Astrophysics Data System (ADS)

    Wosniok, Christoph; Bensmann, Felix; Wössner, Roman; Kohlus, Jörn; Roosmann, Rainer; Heidmann, Carsten; Lehfeldt, Rainer

    2014-05-01

    The OGC Web Processing Service (WPS) provides a standard for implementing geospatial processes in service-oriented networks. In its current version 1.0.0 it specifies the operations GetCapabilities, DescribeProcess and Execute, which can be used to offer custom processes based on single or multiple sub-processes. A large range of ready-to-use, fine-grained, fundamental geospatial processes has been developed by the GIS community in the past. However, modern use cases and whole workflow processes demand lifecycle management and service orchestration. Orchestrating smaller sub-processes is a step towards interoperability; comprehensive documentation using appropriate metadata is also required. Though different approaches have been tested in the past, developing complex WPS applications still requires programming skills, knowledge about the software libraries in use and a lot of integration effort. Our toolset RichWPS aims at providing a better overall experience by setting up two major components. The RichWPS ModelBuilder enables the graphics-aided design of workflow processes based on existing local and distributed processes and geospatial services. Once tested by the RichWPS Server, a composition can be deployed for production use on the RichWPS Server. The ModelBuilder obtains the necessary processes and services from a directory service, the RichWPS semantic proxy. It manages the lifecycle and is able to visualize results and debugging information. One aim is to generate reproducible results; the workflow should be documented by metadata that can be integrated into Spatial Data Infrastructures. The RichWPS Server provides a set of interfaces to the ModelBuilder for, among other things, testing composed workflow sequences, estimating their performance and publishing them as common processes. The server is therefore oriented towards the upcoming WPS 2.0 standard and its ability to transactionally deploy and undeploy processes making use of a WPS
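
    To make the three WPS operations concrete, here is a minimal Python sketch that calls GetCapabilities on a WPS 1.0.0 server and lists the advertised processes. The endpoint is a hypothetical placeholder; a RichWPS deployment may expose a different address.

      import requests
      import xml.etree.ElementTree as ET

      # Hypothetical WPS 1.0.0 endpoint.
      WPS_ENDPOINT = "https://example.org/wps"

      resp = requests.get(
          WPS_ENDPOINT,
          params={"service": "WPS", "version": "1.0.0", "request": "GetCapabilities"},
          timeout=30,
      )
      resp.raise_for_status()

      ns = {
          "wps": "http://www.opengis.net/wps/1.0.0",
          "ows": "http://www.opengis.net/ows/1.1",
      }
      root = ET.fromstring(resp.content)
      # Each offered process advertises an ows:Identifier in the capabilities document.
      for process in root.findall(".//wps:ProcessOfferings/wps:Process", ns):
          identifier = process.find("ows:Identifier", ns)
          title = process.find("ows:Title", ns)
          print(identifier.text, "-", title.text if title is not None else "")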

  17. Integrated Automatic Workflow for Phylogenetic Tree Analysis Using Public Access and Local Web Services.

    PubMed

    Damkliang, Kasikrit; Tandayya, Pichaya; Sangket, Unitsa; Pasomsub, Ekawat

    2016-11-28

    Coding sequences (CDS) continue to be discovered at an increasing rate, and ever larger CDS datasets are being released. Approaches and related tools, especially for phylogenetic tree analysis, have been developed and upgraded concurrently. This paper proposes an integrated automatic Taverna workflow for phylogenetic tree inference that uses public web services at the European Bioinformatics Institute (EMBL-EBI) and the Swiss Institute of Bioinformatics (SIB), together with our own locally deployed web services. The workflow input is a set of CDS in the Fasta format. The workflow supports bootstrap replication numbers from 1,000 to 20,000. It performs tree inference with the Parsimony (PARS), Distance Matrix - Neighbor Joining (DIST-NJ), and Maximum Likelihood (ML) algorithms of the EMBOSS PHYLIPNEW package, based on our proposed Multiple Sequence Alignment (MSA) similarity score. The local web services are implemented and deployed in two forms, using Soaplab2 and Apache Axis2; both SOAP and Java Web Service (JWS) deployments provide WSDL endpoints to Taverna Workbench, a workflow manager. The workflow has been validated, its performance has been measured, and its results have been verified. The workflow's execution time is less than ten minutes for inferring a tree with 10,000 bootstrap replicates. This paper proposes a new integrated automatic workflow that will benefit bioinformaticians with an intermediate level of knowledge and experience. All local services have been deployed at our portal http://bioservices.sci.psu.ac.th.

  18. Integrated Automatic Workflow for Phylogenetic Tree Analysis Using Public Access and Local Web Services.

    PubMed

    Damkliang, Kasikrit; Tandayya, Pichaya; Sangket, Unitsa; Pasomsub, Ekawat

    2016-03-01

    Coding sequences (CDS) continue to be discovered at an increasing rate, and ever larger CDS datasets are being released. Approaches and related tools, especially for phylogenetic tree analysis, have been developed and upgraded concurrently. This paper proposes an integrated automatic Taverna workflow for phylogenetic tree inference that uses public web services at the European Bioinformatics Institute (EMBL-EBI) and the Swiss Institute of Bioinformatics (SIB), together with our own locally deployed web services. The workflow input is a set of CDS in the Fasta format. The workflow supports bootstrap replication numbers from 1,000 to 20,000. It performs tree inference with the Parsimony (PARS), Distance Matrix - Neighbor Joining (DIST-NJ), and Maximum Likelihood (ML) algorithms of the EMBOSS PHYLIPNEW package, based on our proposed Multiple Sequence Alignment (MSA) similarity score. The local web services are implemented and deployed in two forms, using Soaplab2 and Apache Axis2; both SOAP and Java Web Service (JWS) deployments provide WSDL endpoints to Taverna Workbench, a workflow manager. The workflow has been validated, its performance has been measured, and its results have been verified. The workflow's execution time is less than ten minutes for inferring a tree with 10,000 bootstrap replicates. This paper proposes a new integrated automatic workflow that will benefit bioinformaticians with an intermediate level of knowledge and experience. All local services have been deployed at our portal http://bioservices.sci.psu.ac.th.

  19. Towards an EO-based Landslide Web Mapping and Monitoring Service

    NASA Astrophysics Data System (ADS)

    Hölbling, Daniel; Weinke, Elisabeth; Albrecht, Florian; Eisank, Clemens; Vecchiotti, Filippo; Friedl, Barbara; Kociu, Arben

    2017-04-01

    National and regional authorities and infrastructure maintainers in mountainous regions require accurate knowledge of the location and spatial extent of landslides for hazard and risk management. Information on landslides is often collected by a combination of ground surveying and manual image interpretation following landslide-triggering events. However, the high workload and limited time for data acquisition result in a trade-off between completeness, accuracy and detail. Remote sensing data offer great potential for mapping and monitoring landslides in a fast and efficient manner. While the availability of high-quality Earth Observation (EO) data and new computational methods is increasing, there is still a lack of science-policy interaction and of innovative tools and methods that stakeholders and users can easily apply in their daily work. Taking up this issue, we introduce an innovative and user-oriented EO-based web service for landslide mapping and monitoring. Three central design components of the service are presented: (1) the user requirements definition, (2) the semi-automated image analysis methods implemented in the service, and (3) the web mapping application with its responsive user interface. User requirements were gathered during semi-structured interviews with regional authorities. The potential users were asked whether and how they employ remote sensing data for landslide investigation and what they expect from a landslide web mapping service in terms of reliability and usability. The interviews confirmed the suitability of our service for landslide documentation and mapping as well as for monitoring selected landslide sites, for example to complete and update landslide inventory maps. In addition, the users see considerable potential for landslide rapid mapping. The user requirements analysis served as the basis for the service concept definition. Optical satellite imagery from different high resolution (HR) and very high

  20. ChEMBL web services: streamlining access to drug discovery data and utilities.

    PubMed

    Davies, Mark; Nowotka, Michał; Papadatos, George; Dedman, Nathan; Gaulton, Anna; Atkinson, Francis; Bellis, Louisa; Overington, John P

    2015-07-01

    ChEMBL is now a well-established resource in the fields of drug discovery and medicinal chemistry research. The ChEMBL database curates and stores standardized bioactivity, molecule, target and drug data extracted from multiple sources, including the primary medicinal chemistry literature. Programmatic access to ChEMBL data has been improved by a recent update to the ChEMBL web services (version 2.0.x, https://www.ebi.ac.uk/chembl/api/data/docs), which exposes significantly more data from the underlying database and introduces new functionality. To complement the data-focused services, a utility service (version 1.0.x, https://www.ebi.ac.uk/chembl/api/utils/docs), which provides RESTful access to commonly used cheminformatics methods, has also been concurrently developed. The ChEMBL web services can be used together or independently to build applications and data processing workflows relevant to drug discovery and chemical biology. © The Author(s) 2015. Published by Oxford University Press on behalf of Nucleic Acids Research.
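
    A minimal Python sketch of how the data web services described above can be queried over REST; CHEMBL25 (aspirin) is used as an illustrative identifier, and the response field names reflect the public API at the time of writing and may change.

      import requests

      # Fetch one molecule record from the ChEMBL data web services as JSON.
      url = "https://www.ebi.ac.uk/chembl/api/data/molecule/CHEMBL25.json"
      record = requests.get(url, timeout=30).json()

      print(record["pref_name"])                                # e.g. ASPIRIN
      print(record["molecule_structures"]["canonical_smiles"])  # structure as SMILES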

  1. Lynx web services for annotations and systems analysis of multi-gene disorders.

    PubMed

    Sulakhe, Dinanath; Taylor, Andrew; Balasubramanian, Sandhya; Feng, Bo; Xie, Bingqing; Börnigen, Daniela; Dave, Utpal J; Foster, Ian T; Gilliam, T Conrad; Maltsev, Natalia

    2014-07-01

    Lynx is a web-based integrated systems biology platform that supports annotation and analysis of experimental data and generation of weighted hypotheses on molecular mechanisms contributing to human phenotypes and disorders of interest. Lynx has integrated multiple classes of biomedical data (genomic, proteomic, pathways, phenotypic, toxicogenomic, contextual and others) from various public databases as well as manually curated data from our group and collaborators (LynxKB). Lynx provides tools for gene list enrichment analysis using multiple functional annotations and network-based gene prioritization. Lynx provides access to the integrated database and the analytical tools via REST based Web Services (http://lynx.ci.uchicago.edu/webservices.html). This comprises data retrieval services for specific functional annotations, services to search across the complete LynxKB (powered by Lucene), and services to access the analytical tools built within the Lynx platform. © The Author(s) 2014. Published by Oxford University Press on behalf of Nucleic Acids Research.

  2. Spatial Pattern of Standing Timber Value across the Brazilian Amazon

    PubMed Central

    Ahmed, Sadia E.; Ewers, Robert M.

    2012-01-01

    The Amazon is a globally important system, providing a host of ecosystem services from climate regulation to food sources. It is also home to a quarter of all global diversity. Large swathes of forest are removed each year, and many models have attempted to predict the spatial patterns of this forest loss. The spatial patterns of deforestation are determined largely by the patterns of roads that open access to frontier areas and expansion of the road network in the Amazon is largely determined by profit seeking logging activities. Here we present predictions for the spatial distribution of standing value of timber across the Amazon. We show that the patterns of timber value reflect large-scale ecological gradients, determining the spatial distribution of functional traits of trees which are, in turn, correlated with timber values. We expect that understanding the spatial patterns of timber value across the Amazon will aid predictions of logging movements and thus predictions of potential future road developments. These predictions in turn will be of great use in estimating the spatial patterns of deforestation in this globally important biome. PMID:22590520

  3. The impact of a state-sponsored mass media campaign on use of telephone quitline and web-based cessation services.

    PubMed

    Duke, Jennifer C; Mann, Nathan; Davis, Kevin C; MacMonegle, Anna; Allen, Jane; Porter, Lauren

    2014-12-24

    Most US smokers do not use evidence-based interventions as part of their quit attempts. Quitlines and Web-based treatments may contribute to reductions in population-level tobacco use if successfully promoted. Currently, few states implement sustained media campaigns to promote services and increase adult smoking cessation. This study examines the effects of Florida's tobacco cessation media campaign and a nationally funded media campaign on telephone quitline and Web-based registrations for cessation services from November 2010 through September 2013. We conducted multivariable analyses of weekly media-market-level target rating points (TRPs) and weekly registrations for cessation services through the Florida Quitline (1-877-U-CAN-NOW) or its Web-based cessation service, Web Coach (www.quitnow.net/florida). During 35 months, 141,221 tobacco users registered for cessation services through the Florida Quitline, and 53,513 registered through Web Coach. An increase in 100 weekly TRPs was associated with an increase of 7 weekly Florida Quitline registrants (β = 6.8, P < .001) and 2 Web Coach registrants (β = 1.7, P = .003) in an average media market. An increase in TRPs affected registrants from multiple demographic subgroups similarly. When state and national media campaigns aired simultaneously, approximately one-fifth of Florida's Quitline registrants came from the nationally advertised portal (1-800-QUIT-NOW). Sustained, state-sponsored media can increase the number of registrants to telephone quitlines and Web-based cessation services. Federally funded media campaigns can further increase the reach of state-sponsored cessation services.

  4. Data Intensive Computing on Amazon Web Services

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Magana-Zook, S. A.

    The Geophysical Monitoring Program (GMP) has spent the past few years building up the capability to perform data intensive computing using what have been referred to as “big data” tools. These big data tools would be used against massive archives of seismic signals (>300 TB) to conduct research not previously possible. Examples of such tools include Hadoop (HDFS, MapReduce), HBase, Hive, Storm, Spark, Solr, and many more by the day. These tools are useful for performing data analytics on datasets that exceed the resources of traditional analytic approaches. To this end, a research big data cluster (“Cluster A”) was set up as a collaboration between GMP and Livermore Computing (LC).

  5. XMPP for cloud computing in bioinformatics supporting discovery and invocation of asynchronous web services.

    PubMed

    Wagener, Johannes; Spjuth, Ola; Willighagen, Egon L; Wikberg, Jarl E S

    2009-09-04

    The life sciences make heavy use of the web for both data provision and analysis. However, the increasing amount of available data and the diversity of analysis tools call for machine-accessible interfaces in order to be effective. HTTP-based Web service technologies, like the Simple Object Access Protocol (SOAP) and REpresentational State Transfer (REST) services, are today the most common technologies for this in bioinformatics. However, these methods have severe drawbacks, including lack of discoverability and the inability for services to send status notifications. Several complementary workarounds have been proposed, but the results are ad-hoc solutions of varying quality that can be difficult to use. We present a novel approach based on the open standard Extensible Messaging and Presence Protocol (XMPP), consisting of an extension (IO Data) that encompasses discovery, asynchronous invocation, and the definition of data types in the service. Because XMPP cloud services are capable of asynchronous communication, clients do not have to poll repeatedly for status; instead, the service sends the results back to the client upon completion. Implementations for Bioclipse and Taverna are presented, as are various XMPP cloud services in bio- and cheminformatics. XMPP with its extensions is a powerful protocol for cloud services that demonstrates several advantages over traditional HTTP-based Web services: 1) services are discoverable without the need of an external registry, 2) asynchronous invocation eliminates the need for ad-hoc solutions like polling, and 3) input and output types defined in the service allow for generation of clients on the fly without the need of an external semantics description. The many advantages over existing technologies make XMPP a highly interesting candidate for next-generation online services in bioinformatics.
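
    The push-based pattern described above can be sketched in Python with the slixmpp library. The sketch below shows only the general asynchronous message exchange, not the actual IO Data extension, and the JIDs and job payload are placeholders.

      import slixmpp

      class AsyncServiceClient(slixmpp.ClientXMPP):
          """Submit a job to an XMPP service and wait for the pushed result."""

          def __init__(self, jid, password, service_jid):
              super().__init__(jid, password)
              self.service_jid = service_jid
              self.add_event_handler("session_start", self.start)
              self.add_event_handler("message", self.on_message)

          async def start(self, event):
              self.send_presence()
              await self.get_roster()
              # Fire the request once; no polling loop is needed afterwards.
              self.send_message(mto=self.service_jid,
                                mbody="run-analysis input.fasta",
                                mtype="chat")

          def on_message(self, msg):
              # The service pushes the result back when the computation completes.
              if msg["from"].bare == self.service_jid:
                  print("result:", msg["body"])
                  self.disconnect()

      client = AsyncServiceClient("user@example.org", "secret", "service@example.org")
      client.connect()
      client.process(forever=False)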

  6. Distributed run of a one-dimensional model in a regional application using SOAP-based web services

    NASA Astrophysics Data System (ADS)

    Smiatek, Gerhard

    This article describes the setup of a distributed computing system in Perl. It facilitates the parallel run of a one-dimensional environmental model on a number of simple networked PC hosts. The system uses Simple Object Access Protocol (SOAP) driven web services that offer the model run on remote hosts, and a multi-threaded environment that distributes the work and accesses the web services. Its application is demonstrated in a regional run of a process-oriented biogenic emission model for the area of Germany. Within a network consisting of up to seven web services implemented on Linux and MS-Windows hosts, a performance increase of approximately 400% has been reached compared to a model run on the fastest single host.
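
    The same work-distribution idea can be sketched in Python (the original system was written in Perl): a thread pool spreads model-run calls over several remote SOAP services. The WSDL URLs and the RunModel operation name are hypothetical placeholders.

      from concurrent.futures import ThreadPoolExecutor
      from zeep import Client

      # Hypothetical SOAP endpoints, one per worker host (the original used up to seven).
      WSDL_URLS = [
          "http://host1.example.org/model?wsdl",
          "http://host2.example.org/model?wsdl",
      ]
      clients = [Client(url) for url in WSDL_URLS]

      def run_cell(task):
          index, grid_cell = task
          # Round-robin the grid cells over the available remote model services;
          # 'RunModel' is a placeholder operation name.
          client = clients[index % len(clients)]
          return client.service.RunModel(cell=grid_cell)

      grid_cells = ["DE-01", "DE-02", "DE-03", "DE-04"]   # illustrative region tiles
      with ThreadPoolExecutor(max_workers=len(clients)) as pool:
          results = list(pool.map(run_cell, enumerate(grid_cells)))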

  7. Decentralized Orchestration of Composite Ogc Web Processing Services in the Cloud

    NASA Astrophysics Data System (ADS)

    Xiao, F.; Shea, G. Y. K.; Cao, J.

    2016-09-01

    Current web-based GIS or remote sensing applications generally rely on a centralized structure, which has inherent drawbacks such as single points of failure, network congestion, and data inconsistency. These disadvantages need to be addressed for new applications on the Internet and the Web. Decentralized orchestration offers performance improvements in terms of increased throughput, better scalability and lower response time. This paper investigates build-time and runtime issues related to the decentralized orchestration of composite geospatial processing services based on the OGC WPS standard specification. A case study of dust storm detection was used to evaluate the proposed method, and the experimental results indicate that the method is effective in producing a high-quality solution at a low communication cost for the geospatial processing service composition problem.

  8. Technical Note: Harmonizing met-ocean model data via standard web services within small research groups

    USGS Publications Warehouse

    Signell, Richard; Camossi, E.

    2016-01-01

    Work over the last decade has resulted in standardised web services and tools that can significantly improve the efficiency and effectiveness of working with meteorological and ocean model data. While many operational modelling centres have enabled query and access to data via common web services, most small research groups have not. The penetration of this approach into the research community, where IT resources are limited, can be dramatically improved by (1) making it simple for providers to enable web service access to existing output files; (2) using free technologies that are easy to deploy and configure; and (3) providing standardised, service-based tools that work in existing research environments. We present a simple, local brokering approach that lets modellers continue to use their existing files and tools, while serving virtual data sets that can be used with standardised tools. The goal of this paper is to convince modellers that a standardised framework is not only useful but can be implemented with modest effort using free software components. We use NetCDF Markup language for data aggregation and standardisation, the THREDDS Data Server for data delivery, pycsw for data search, NCTOOLBOX (MATLAB®) and Iris (Python) for data access, and Open Geospatial Consortium Web Map Service for data preview. We illustrate the effectiveness of this approach with two use cases involving small research modelling groups at NATO and USGS.
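
    As a rough sketch of the kind of standardised access this enables, the Python snippet below opens a dataset served by a THREDDS Data Server over OPeNDAP and reads only a subset. The URL and the variable name are placeholders, and remote access assumes a netCDF library built with DAP support.

      import netCDF4

      # Hypothetical THREDDS Data Server OPeNDAP endpoint serving an aggregated model run.
      URL = "http://example.org/thredds/dodsC/model/ocean_his.nc"

      ds = netCDF4.Dataset(URL)       # opens the remote dataset lazily via OPeNDAP
      temp = ds.variables["temp"]     # placeholder variable name
      print(temp.dimensions, temp.shape)
      surface = temp[0, -1, :, :]     # only this subset is transferred over the wire
      ds.close()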

  9. Thomson Scientific's expanding Web of Knowledge: beyond citation databases and current awareness services.

    PubMed

    London, Sue; Brahmi, Frances A

    2005-01-01

    As end-user demand for easy access to electronic full text continues to climb, an increasing number of information providers are combining that access with their other products and services, making it more daunting than ever for librarians to navigate these providers' Web sites in search of information on a given product or service. One such provider of a complex array of products and services is Thomson Scientific. This paper looks at some of the many products and tools available from two of Thomson Scientific's businesses, Thomson ISI and Thomson ResearchSoft. Among the items of most interest to health sciences and veterinary librarians and their users are the variety of databases available via the ISI Web of Knowledge platform and the information management products available from ResearchSoft.

  10. Exploring Students' Perceptions of Service-Learning Experiences in an Undergraduate Web Design Course

    ERIC Educational Resources Information Center

    Lee, Sang Joon; Wilder, Charlie; Yu, Chien

    2018-01-01

    Service-learning is an experiential learning experience where students learn and develop through active participation in community service to meet the needs of a community. This study explored student learning experiences in a service-learning group project and their perceptions of service-learning in an undergraduate web design course. The data…

  11. Cloud Computing Technologies Facilitate Earth Research

    NASA Technical Reports Server (NTRS)

    2015-01-01

    Under a Space Act Agreement, NASA partnered with Seattle-based Amazon Web Services to make the agency's climate and Earth science satellite data publicly available on the company's servers. Users can access the data for free, but they can also pay to use Amazon's computing services to analyze and visualize information using the same software available to NASA researchers.
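
    Public datasets of this kind can be read without AWS credentials by using unsigned requests. A minimal boto3 sketch is shown below; the bucket and prefix are given for illustration and may not reflect the current layout of NASA's holdings on AWS.

      import boto3
      from botocore import UNSIGNED
      from botocore.config import Config

      # Anonymous (unsigned) access: public Earth-science buckets need no AWS credentials.
      s3 = boto3.client("s3", config=Config(signature_version=UNSIGNED))
      resp = s3.list_objects_v2(Bucket="nasanex", Prefix="NEX-DCP30/", MaxKeys=5)
      for obj in resp.get("Contents", []):
          print(obj["Key"], obj["Size"])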

  12. A web-based information system for a regional public mental healthcare service network in Brazil.

    PubMed

    Yoshiura, Vinicius Tohoru; de Azevedo-Marques, João Mazzoncini; Rzewuska, Magdalena; Vinci, André Luiz Teixeira; Sasso, Ariane Morassi; Miyoshi, Newton Shydeo Brandão; Furegato, Antonia Regina Ferreira; Rijo, Rui Pedro Charters Lopes; Del-Ben, Cristina Marta; Alves, Domingos

    2017-01-01

    Regional networking between services that provide mental health care in Brazil's decentralized public health system is challenging, partly due to the simultaneous existence of services managed by municipal and state authorities and a lack of efficient and transparent mechanisms for continuous and updated communication between them. Since 2011, the Ribeirao Preto Medical School and the XIII Regional Health Department of Sao Paulo state, Brazil, have been developing and implementing a web-based information system to facilitate integrated care throughout a public regional mental health care network. After an in-depth on-site analysis, the structure of the network was identified, and a web-based information system for psychiatric admissions and discharges was developed and implemented using a socio-technical approach. An information technology team liaised with mental health professionals, health-service managers, municipal and state health secretariats and judicial authorities. Primary care, specialized community services, and general emergency and psychiatric ward services, which comprise the regional mental healthcare network, were identified, and the system flow was delineated. The web-based system overcame the fragmentation of the healthcare system and addressed service-specific needs, enabling detailed patient information sharing; active coordination of the processes of psychiatric admissions and discharges; real-time monitoring; patient status reports; and the evaluation of the performance of each service and of the whole network. During a 2-year period of operation, it registered 137 services, 480 health care professionals and 4271 patients, with a mean number of 2835 accesses per month. To date, the system is operating successfully and continues to expand. We have successfully developed and implemented an acceptable, useful and transparent web-based information system for a regional mental healthcare service network in a medium-income country with a decentralized

  13. Clever generation of rich SPARQL queries from annotated relational schema: application to Semantic Web Service creation for biological databases.

    PubMed

    Wollbrett, Julien; Larmande, Pierre; de Lamotte, Frédéric; Ruiz, Manuel

    2013-04-15

    In recent years, a large amount of "-omics" data have been produced. However, these data are stored in many different species-specific databases that are managed by different institutes and laboratories. Biologists often need to find and assemble data from disparate sources to perform certain analyses. Searching for these data and assembling them is a time-consuming task. The Semantic Web helps to facilitate interoperability across databases. A common approach involves the development of wrapper systems that map a relational database schema onto existing domain ontologies. However, few attempts have been made to automate the creation of such wrappers. We developed a framework, named BioSemantic, for the creation of Semantic Web Services that are applicable to relational biological databases. This framework makes use of both Semantic Web and Web Services technologies and can be divided into two main parts: (i) the generation and semi-automatic annotation of an RDF view; and (ii) the automatic generation of SPARQL queries and their integration into Semantic Web Services backbones. We have used our framework to integrate genomic data from different plant databases. BioSemantic is a framework that was designed to speed integration of relational databases. We present how it can be used to speed the development of Semantic Web Services for existing relational biological databases. Currently, it creates and annotates RDF views that enable the automatic generation of SPARQL queries. Web Services are also created and deployed automatically, and the semantic annotations of our Web Services are added automatically using SAWSDL attributes. BioSemantic is downloadable at http://southgreen.cirad.fr/?q=content/Biosemantic.
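
    The SPARQL queries generated by such a framework are ultimately sent to an endpoint; a minimal Python sketch using SPARQLWrapper is shown below. The endpoint URL and the query are generic placeholders, not output of BioSemantic itself.

      from SPARQLWrapper import SPARQLWrapper, JSON

      # Hypothetical SPARQL endpoint of the kind a generated Semantic Web Service would query.
      sparql = SPARQLWrapper("http://example.org/sparql")
      sparql.setQuery("""
          SELECT ?s ?p ?o
          WHERE { ?s ?p ?o }     # a real query would target the annotated schema terms
          LIMIT 10
      """)
      sparql.setReturnFormat(JSON)
      results = sparql.query().convert()
      for row in results["results"]["bindings"]:
          print(row["s"]["value"], row["p"]["value"], row["o"]["value"])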

  14. Web Based Rapid Mapping of Disaster Areas using Satellite Images, Web Processing Service, Web Mapping Service, Frequency Based Change Detection Algorithm and J-iView

    NASA Astrophysics Data System (ADS)

    Bandibas, J. C.; Takarada, S.

    2013-12-01

    Timely identification of areas affected by natural disasters is very important for successful rescue and effective emergency relief efforts. This research focuses on the development of a cost-effective and efficient system for identifying areas affected by natural disasters and for the efficient distribution of the information. The developed system is composed of three modules: the Web Processing Service (WPS), the Web Map Service (WMS) and the user interface provided by J-iView (fig. 1). WPS is an online system that provides computation, storage and data access services. In this study, the WPS module provides online access to the software implementing the developed frequency-based change detection algorithm for the identification of areas affected by natural disasters. It also sends requests to WMS servers to get the remotely sensed data to be used in the computation. WMS is a standard protocol that provides a simple HTTP interface for requesting geo-registered map images from one or more geospatial databases. In this research, the WMS component provides remote access to the satellite images which are used as inputs for land cover change detection. The user interface in this system is provided by J-iView, an online mapping system developed at the Geological Survey of Japan (GSJ). The three modules are seamlessly integrated into a single package using J-iView, which can rapidly generate a map of disaster areas that is instantaneously viewable online. The developed system was tested using ASTER images covering the areas damaged by the March 11, 2011 tsunami in northeastern Japan. The developed system efficiently generated a map showing areas devastated by the tsunami. Based on the initial results of the study, the developed system proved to be a useful tool for emergency workers to quickly identify areas affected by natural disasters.
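
    The WMS interface mentioned above is plain HTTP, so fetching a geo-registered map image needs only a GetMap request. The sketch below is illustrative; the endpoint, layer name and bounding box are hypothetical placeholders.

      import requests

      # Hypothetical WMS 1.3.0 endpoint and layer name.
      WMS_ENDPOINT = "https://example.org/wms"

      params = {
          "service": "WMS",
          "version": "1.3.0",
          "request": "GetMap",
          "layers": "aster_post_event",     # placeholder layer
          "styles": "",
          "crs": "EPSG:4326",
          "bbox": "38.0,140.5,39.5,142.0",  # lat/lon axis order for EPSG:4326 in WMS 1.3.0
          "width": "1024",
          "height": "1024",
          "format": "image/png",
      }
      img = requests.get(WMS_ENDPOINT, params=params, timeout=60)
      img.raise_for_status()
      with open("map.png", "wb") as f:
          f.write(img.content)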

  15. TnpPred: A Web Service for the Robust Prediction of Prokaryotic Transposases

    PubMed Central

    Riadi, Gonzalo; Medina-Moenne, Cristobal; Holmes, David S.

    2012-01-01

    Transposases (Tnps) are enzymes that participate in the movement of insertion sequences (ISs) within and between genomes. Genes that encode Tnps are amongst the most abundant and widely distributed genes in nature. However, they are difficult to predict bioinformatically, and given the increasing availability of prokaryotic genomes and metagenomes, rapid, high-quality automatic annotation of ISs is needed. This need prompted us to develop a web service, termed TnpPred, for Tnp discovery. It provides better sensitivity and specificity for Tnp predictions than currently available programs, as determined by ROC analysis. TnpPred should be useful for improving genome annotation. The TnpPred web service is freely available for noncommercial use. PMID:23251097

  16. Development and process evaluation of a Web-based responsible beverage service training program.

    PubMed

    Danaher, Brian G; Dresser, Jack; Shaw, Tracy; Severson, Herbert H; Tyler, Milagra S; Maxwell, Elisabeth D; Christiansen, Steve M

    2012-09-22

    Responsible beverage service (RBS) training designed to improve the appropriate service of alcohol in commercial establishments is typically delivered in workshops. Recently, Web-based RBS training programs have emerged. This report describes the formative development and subsequent design of an innovative Web-delivered RBS program, and evaluation of the impact of the program on servers' knowledge, attitudes, and self-efficacy. Formative procedures using focus groups and usability testing were used to develop a Web-based RBS training program. Professional alcohol servers (N = 112) who worked as servers and/or managers in alcohol service settings were recruited to participate. A pre-post assessment design was used to assess changes associated with using the program. Participants who used the program showed significant improvements in their RBS knowledge, attitudes, and self-efficacy. Although the current study did not directly observe and determine impact of the intervention on server behaviors, it demonstrated that the development process incorporating input from a multidisciplinary team in conjunction with feedback from end-users resulted in creation of a Web-based RBS program that was well-received by servers and that changed relevant knowledge, attitudes, and self-efficacy. The results also help to establish a needed evidence base in support of the use of online RBS training, which has been afforded little research attention.

  17. Development and process evaluation of a web-based responsible beverage service training program

    PubMed Central

    2012-01-01

    Background Responsible beverage service (RBS) training designed to improve the appropriate service of alcohol in commercial establishments is typically delivered in workshops. Recently, Web-based RBS training programs have emerged. This report describes the formative development and subsequent design of an innovative Web-delivered RBS program, and evaluation of the impact of the program on servers’ knowledge, attitudes, and self-efficacy. Methods Formative procedures using focus groups and usability testing were used to develop a Web-based RBS training program. Professional alcohol servers (N = 112) who worked as servers and/or managers in alcohol service settings were recruited to participate. A pre-post assessment design was used to assess changes associated with using the program. Results Participants who used the program showed significant improvements in their RBS knowledge, attitudes, and self-efficacy. Conclusions Although the current study did not directly observe and determine impact of the intervention on server behaviors, it demonstrated that the development process incorporating input from a multidisciplinary team in conjunction with feedback from end-users resulted in creation of a Web-based RBS program that was well-received by servers and that changed relevant knowledge, attitudes, and self-efficacy. The results also help to establish a needed evidence base in support of the use of online RBS training, which has been afforded little research attention. PMID:22999419

  18. Pilot Evaluation of a Web-Based Intervention Targeting Sexual Health Service Access

    ERIC Educational Resources Information Center

    Brown, K. E.; Newby, K.; Caley, M.; Danahay, A.; Kehal, I.

    2016-01-01

    Sexual health service access is fundamental to good sexual health, yet interventions designed to address this have rarely been implemented or evaluated. In this article, pilot evaluation findings for a targeted public health behavior change intervention, delivered via a website and web-app, aiming to increase uptake of sexual health services among…

  19. The Impact of a State-Sponsored Mass Media Campaign on Use of Telephone Quitline and Web-Based Cessation Services

    PubMed Central

    Mann, Nathan; Davis, Kevin C.; MacMonegle, Anna; Allen, Jane; Porter, Lauren

    2014-01-01

    Introduction Most US smokers do not use evidence-based interventions as part of their quit attempts. Quitlines and Web-based treatments may contribute to reductions in population-level tobacco use if successfully promoted. Currently, few states implement sustained media campaigns to promote services and increase adult smoking cessation. This study examines the effects of Florida’s tobacco cessation media campaign and a nationally funded media campaign on telephone quitline and Web-based registrations for cessation services from November 2010 through September 2013. Methods We conducted multivariable analyses of weekly media-market–level target rating points (TRPs) and weekly registrations for cessation services through the Florida Quitline (1-877-U-CAN-NOW) or its Web-based cessation service, Web Coach (www.quitnow.net/florida). Results During 35 months, 141,221 tobacco users registered for cessation services through the Florida Quitline, and 53,513 registered through Web Coach. An increase in 100 weekly TRPs was associated with an increase of 7 weekly Florida Quitline registrants (β = 6.8, P < .001) and 2 Web Coach registrants (β = 1.7, P = .003) in an average media market. An increase in TRPs affected registrants from multiple demographic subgroups similarly. When state and national media campaigns aired simultaneously, approximately one-fifth of Florida’s Quitline registrants came from the nationally advertised portal (1-800-QUIT-NOW). Conclusion Sustained, state-sponsored media can increase the number of registrants to telephone quitlines and Web-based cessation services. Federally funded media campaigns can further increase the reach of state-sponsored cessation services. PMID:25539129

  20. The Footprint Database and Web Services of the Herschel Space Observatory

    NASA Astrophysics Data System (ADS)

    Dobos, László; Varga-Verebélyi, Erika; Verdugo, Eva; Teyssier, David; Exter, Katrina; Valtchanov, Ivan; Budavári, Tamás; Kiss, Csaba

    2016-10-01

    Data from the Herschel Space Observatory is freely available to the public but no uniformly processed catalogue of the observations has been published so far. To date, the Herschel Science Archive does not contain the exact sky coverage (footprint) of individual observations and supports search for measurements based on bounding circles only. Drawing on previous experience in implementing footprint databases, we built the Herschel Footprint Database and Web Services for the Herschel Space Observatory to provide efficient search capabilities for typical astronomical queries. The database was designed with the following main goals in mind: (a) provide a unified data model for meta-data of all instruments and observational modes, (b) quickly find observations covering a selected object and its neighbourhood, (c) quickly find every observation in a larger area of the sky, (d) allow for finding solar system objects crossing observation fields. As a first step, we developed a unified data model of observations of all three Herschel instruments for all pointing and instrument modes. Then, using telescope pointing information and observational meta-data, we compiled a database of footprints. As opposed to methods using pixellation of the sphere, we represent sky coverage in an exact geometric form allowing for precise area calculations. For easier handling of Herschel observation footprints with rather complex shapes, two algorithms were implemented to reduce the outline. Furthermore, a new visualisation tool to plot footprints with various spherical projections was developed. Indexing of the footprints using Hierarchical Triangular Mesh makes it possible to quickly find observations based on sky coverage, time and meta-data. The database is accessible via a web site http://herschel.vo.elte.hu and also as a set of REST web service functions, which makes it readily usable from programming environments such as Python or IDL. The web service allows downloading footprint data

  1. Optimizing the Information Presentation on Mining Potential by using Web Services Technology with Restful Protocol

    NASA Astrophysics Data System (ADS)

    Abdillah, T.; Dai, R.; Setiawan, E.

    2018-02-01

    This study aims to develop an application of Web Services technology with the RESTful protocol to optimize the presentation of information on mining potential. The study used a User Interface Design approach for information accuracy and relevance, as well as Web Services for reliability in presenting the information. The results show that the accuracy and relevance of the mining-potential information can be seen in the User Interface implementation in the application, which is based on the following rules: consideration of appropriate colours and objects, ease of navigation, and users' interaction with the application through symbols and languages understood by the users. The accuracy and relevance of the information are further supported by presenting it with charts and Tool Tip Text that help the users understand the provided chart/figure; the reliability of the information presentation is evident from the results of the Web Services testing shown in Figure 4.5.6. This study finds that the User Interface Design and Web Services approaches (for access from apps on different platforms) can optimize the presentation. The results of this study can be used as a reference for software developers and the Provincial Government of Gorontalo.
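
    As an illustration of the RESTful style of information presentation described here, the following minimal Flask sketch exposes mining-potential figures as JSON. The route, data and field names are invented for the example and are not from the original application.

      from flask import Flask, jsonify

      app = Flask(__name__)

      # Illustrative in-memory data; a real service would query a database.
      MINING_POTENTIAL = {
          "gorontalo": {"commodity": "gold", "potential_index": 0.72},
      }

      @app.route("/api/mining-potential/<region>")
      def mining_potential(region):
          """RESTful endpoint returning mining-potential figures as JSON."""
          record = MINING_POTENTIAL.get(region.lower())
          if record is None:
              return jsonify({"error": "unknown region"}), 404
          return jsonify({"region": region, **record})

      if __name__ == "__main__":
          app.run()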

  2. Perspectives for Web Service Intermediaries: How Influence on Quality Makes the Difference

    NASA Astrophysics Data System (ADS)

    Scholten, Ulrich; Fischer, Robin; Zirpins, Christian

    In the service-oriented computing paradigm and the Web service architecture, the broker role is a key facilitator to leverage technical capabilities of loose coupling to achieve organizational capabilities of dynamic customer-provider-relationships. In practice, this role has quickly evolved into a variety of intermediary concepts that refine and extend the basic functionality of service brokerage with respect to various forms of added value like platform or market mechanisms. While this has initially led to a rich variety of Web service intermediaries, many of these are now going through a phase of stagnation or even decline in customer acceptance. In this paper we present a comparative study on insufficient service quality that is arguably one of the key reasons for this phenomenon. In search of a differentiation with respect to quality monitoring and management patterns, we categorize intermediaries into Infomediaries, e-Hubs, e-Markets and Integrators. A mapping of quality factors and control mechanisms to these categories depicts their respective strengths and weaknesses. The results show that Integrators have the highest overall performance, followed by e-Markets, e-Hubs and lastly Infomediaries. A comparative market survey confirms the conceptual findings.

  3. XMPP for cloud computing in bioinformatics supporting discovery and invocation of asynchronous web services

    PubMed Central

    Wagener, Johannes; Spjuth, Ola; Willighagen, Egon L; Wikberg, Jarl ES

    2009-01-01

    Background The life sciences make heavy use of the web for both data provision and analysis. However, the increasing amount of available data and the diversity of analysis tools call for machine-accessible interfaces in order to be effective. HTTP-based Web service technologies, like the Simple Object Access Protocol (SOAP) and REpresentational State Transfer (REST) services, are today the most common technologies for this in bioinformatics. However, these methods have severe drawbacks, including lack of discoverability and the inability for services to send status notifications. Several complementary workarounds have been proposed, but the results are ad-hoc solutions of varying quality that can be difficult to use. Results We present a novel approach based on the open standard Extensible Messaging and Presence Protocol (XMPP), consisting of an extension (IO Data) that encompasses discovery, asynchronous invocation, and the definition of data types in the service. Because XMPP cloud services are capable of asynchronous communication, clients do not have to poll repeatedly for status; instead, the service sends the results back to the client upon completion. Implementations for Bioclipse and Taverna are presented, as are various XMPP cloud services in bio- and cheminformatics. Conclusion XMPP with its extensions is a powerful protocol for cloud services that demonstrates several advantages over traditional HTTP-based Web services: 1) services are discoverable without the need of an external registry, 2) asynchronous invocation eliminates the need for ad-hoc solutions like polling, and 3) input and output types defined in the service allow for generation of clients on the fly without the need of an external semantics description. The many advantages over existing technologies make XMPP a highly interesting candidate for next-generation online services in bioinformatics. PMID:19732427

  4. Design and Development of a Framework Based on Ogc Web Services for the Visualization of Three Dimensional Large-Scale Geospatial Data Over the Web

    NASA Astrophysics Data System (ADS)

    Roccatello, E.; Nozzi, A.; Rumor, M.

    2013-05-01

    This paper illustrates the key concepts behind the design and development of a framework, based on OGC services, capable of visualizing 3D large-scale geospatial data streamed over the web. WebGISes are traditionally bound to a simplified two-dimensional representation of reality, and though they successfully address the lack of flexibility and simplicity of traditional desktop clients, a lot of effort is still needed to reach desktop GIS features such as 3D visualization. The motivations behind this work lie in the widespread availability of OGC Web Services inside government organizations and in web browsers' support for the HTML5 and WebGL standards. This delivers an improved user experience, similar to desktop applications, and therefore allows traditional WebGIS features to be augmented with a 3D visualization framework. This work can be seen as an extension of the Cityvu project, started in 2008 with the aim of a plug-in free OGC CityGML viewer. The resulting framework has also been integrated into existing 3D GIS software products and will be made available in the coming months.

  5. OneGeology-Europe: architecture, portal and web services to provide a European geological map

    NASA Astrophysics Data System (ADS)

    Tellez-Arenas, Agnès.; Serrano, Jean-Jacques; Tertre, François; Laxton, John

    2010-05-01

    OneGeology-Europe is a large, ambitious project to make geological spatial data better known and more accessible. The project develops an integrated system of data to create, and make accessible through the internet for the first time, the geological map of the whole of Europe. The architecture implemented by the project is web-service oriented and based on OGC standards: the geological map is not a centralized database but is composed of several web services, each of them hosted by a European country involved in the project. Since geological data are compiled differently from country to country, they are difficult to share. OneGeology-Europe, while providing more detailed and complete information, will foster an easier exchange of data within Europe and globally, even beyond the geological community. This implies substantial work on the harmonization of the data, in both model and content. OneGeology-Europe draws on the high technological capacity of the EU Member States and has the final goal of harmonizing European geological survey data according to common standards. As a direct consequence, Europe will take a further step in terms of innovation and information dissemination, continuing to play a world-leading role in the development of geosciences information. The scope of the common harmonized data model was defined primarily by the requirements of the geological map of Europe, but in addition users were consulted, and the requirements of both INSPIRE and ‘high-resolution' geological maps were considered. The data model is based on GeoSciML, developed since 2006 by a group of Geological Surveys. The data providers involved in the project implemented a new component that allows the web services to deliver the geological map expressed in GeoSciML. In order to capture the information describing the geological units of the map of Europe, the scope of the data model needs to include lithology; age; genesis and

  6. Prototype of Partial Cutting Tool of Geological Map Images Distributed by Geological Web Map Service

    NASA Astrophysics Data System (ADS)

    Nonogaki, S.; Nemoto, T.

    2014-12-01

    Geological maps and topographical maps play an important role in disaster assessment, resource management, and environmental preservation. This map information has recently been distributed in accordance with Web service standards such as the Web Map Service (WMS) and the Web Map Tile Service (WMTS). In this study, a tool for partial cutting of geological map images distributed by a geological WMTS was implemented with Free and Open Source Software. The tool mainly consists of two functions: a display function and a cutting function. The former was implemented using OpenLayers; the latter was implemented using the Geospatial Data Abstraction Library (GDAL). All other small functions were implemented in PHP and Python. As a result, this tool allows not only displaying a WMTS layer in a web browser but also generating a geological map image of the intended area and zoom level. At the moment, the available WMTS layers are limited to the ones distributed by the WMTS for the Seamless Digital Geological Map of Japan. The geological map image can be saved in GeoTIFF format and WebGL format. GeoTIFF is one of the georeferenced raster formats available in many kinds of Geographical Information Systems. WebGL is useful for confirming the relationship between geology and geography in 3D. In conclusion, the partial cutting tool developed in this study should contribute to better conditions for promoting the utilization of geological information. Future work is to increase the number of available WMTS layers and the types of output file formats.
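
    The cutting function can be approximated in a few lines of Python with GDAL, which the tool itself uses. The sketch below clips a window from a georeferenced raster; the file names and the bounding box are placeholders.

      from osgeo import gdal

      gdal.UseExceptions()

      # Clip a window out of a georeferenced raster, e.g. a geological map sheet
      # assembled from WMTS tiles; file names and coordinates are placeholders.
      gdal.Translate(
          "clip.tif",
          "geology_sheet.tif",
          projWin=[139.5, 36.0, 140.0, 35.5],   # [ulx, uly, lrx, lry] in map coordinates
          format="GTiff",
      )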

  7. Clever generation of rich SPARQL queries from annotated relational schema: application to Semantic Web Service creation for biological databases

    PubMed Central

    2013-01-01

    Background In recent years, a large amount of “-omics” data have been produced. However, these data are stored in many different species-specific databases that are managed by different institutes and laboratories. Biologists often need to find and assemble data from disparate sources to perform certain analyses. Searching for these data and assembling them is a time-consuming task. The Semantic Web helps to facilitate interoperability across databases. A common approach involves the development of wrapper systems that map a relational database schema onto existing domain ontologies. However, few attempts have been made to automate the creation of such wrappers. Results We developed a framework, named BioSemantic, for the creation of Semantic Web Services that are applicable to relational biological databases. This framework makes use of both Semantic Web and Web Services technologies and can be divided into two main parts: (i) the generation and semi-automatic annotation of an RDF view; and (ii) the automatic generation of SPARQL queries and their integration into Semantic Web Services backbones. We have used our framework to integrate genomic data from different plant databases. Conclusions BioSemantic is a framework that was designed to speed integration of relational databases. We present how it can be used to speed the development of Semantic Web Services for existing relational biological databases. Currently, it creates and annotates RDF views that enable the automatic generation of SPARQL queries. Web Services are also created and deployed automatically, and the semantic annotations of our Web Services are added automatically using SAWSDL attributes. BioSemantic is downloadable at http://southgreen.cirad.fr/?q=content/Biosemantic. PMID:23586394

  8. JABAWS 2.2 distributed web services for Bioinformatics: protein disorder, conservation and RNA secondary structure.

    PubMed

    Troshin, Peter V; Procter, James B; Sherstnev, Alexander; Barton, Daniel L; Madeira, Fábio; Barton, Geoffrey J

    2018-06-01

    JABAWS 2.2 is a computational framework that simplifies the deployment of web services for Bioinformatics. In addition to the five multiple sequence alignment (MSA) algorithms in JABAWS 1.0, JABAWS 2.2 includes three additional MSA programs (Clustal Omega, MSAprobs, GLprobs), four protein disorder prediction methods (DisEMBL, IUPred, Ronn, GlobPlot), 18 measures of protein conservation as implemented in AACon, and RNA secondary structure prediction by the RNAalifold program. JABAWS 2.2 can be deployed on a variety of in-house or hosted systems. JABAWS 2.2 web services may be accessed from the Jalview multiple sequence analysis workbench (Version 2.8 and later), as well as directly via the JABAWS command line interface (CLI) client. JABAWS 2.2 can be deployed on a local virtual server as a Virtual Appliance (VA) or simply as a Web Application Archive (WAR) for private use. Improvements in JABAWS 2.2 also include simplified installation and a range of utility tools for usage statistics collection, and web services querying and monitoring. The JABAWS CLI client has been updated to support all the new services and allow integration of JABAWS 2.2 services into conventional scripts. A public JABAWS 2 server has been in production since December 2011 and served over 800 000 analyses for users worldwide. JABAWS 2.2 is made freely available under the Apache 2 license and can be obtained from: http://www.compbio.dundee.ac.uk/jabaws. g.j.barton@dundee.ac.uk.

  9. Weaving Silos--A Leadership Challenge: A Cross-Functional Team Approach to Supporting Web-Based Student Services

    ERIC Educational Resources Information Center

    Kleemann, Gary L.

    2005-01-01

    The author reviews the evolution of Web services--from information sharing to transactional to relationship building--and the progression from first-generation to fourth-generation Web sites. (Contains 3 figures.)

  10. Freshman Admissions Predictor: An Interactive Self-Help Web Counseling Service

    ERIC Educational Resources Information Center

    Head, Joe F.; Hughes, Thomas M.

    2004-01-01

    Colleges and universities must seek or develop the most competitive enrollment management tools in order to reach and admit qualified students. However, institutions that utilize transactional Web features are more effective if they can personalize services by providing useful customized information in real time for the prospect. Well crafted high…

  11. A Design of a Network Model to the Electric Power Trading System Using Web Services

    NASA Astrophysics Data System (ADS)

    Maruo, Tomoaki; Matsumoto, Keinosuke; Mori, Naoki; Kitayama, Masashi; Izumi, Yoshio

    Web services are regarded as a new application paradigm in the world of the Internet. At the same time, many business models for power trading systems have been proposed, aiming at load reduction through consumers cooperating with electric power suppliers in an electric power market. In this paper, we propose a network model of a power trading system using Web services. The adaptability of Web services to a power trading system was verified with a prototype of our network model, with good results. Each server provides its functions as a SOAP server and is loosely coupled with the others through SOAP. Embedding SOAP messages in HTTP packets establishes a communication channel that passes transparently through firewalls. Dynamic server switching is possible by rewriting the server endpoint information in the WSDL when a fault occurs.
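
    As a concrete illustration of this loosely coupled SOAP interaction, the sketch below invokes a hypothetical power-trading operation with the zeep Python SOAP client; the WSDL location, the submitBid operation and its parameters are assumptions for illustration, not taken from the paper.

        # Minimal sketch of a SOAP client for a power-trading Web service.
        from zeep import Client

        WSDL = "http://example.org/power-trading/service?wsdl"  # hypothetical WSDL

        # zeep reads the WSDL and exposes its operations on client.service.
        # Because SOAP messages travel inside ordinary HTTP requests, calls
        # like this pass through firewalls like normal web traffic.
        client = Client(WSDL)
        result = client.service.submitBid(consumerId="C-001",  # hypothetical operation
                                          reductionKw=50,
                                          pricePerKwh=0.12)
        print(result)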

  12. Web-Based Self-Service Systems for Managed IT Support: Service Provider Perspectives of Stakeholder-Based Issues

    NASA Astrophysics Data System (ADS)

    Cooper, Vanessa A.; Lichtenstein, Sharman; Smith, Ross

    This chapter explores the provision of after-sales information technology (IT) support services using Web-based self-service systems (WSSs) in a business-to-business (B2B) context. A recent study conducted at six large multi-national IT support organisations revealed a number of critical success factors (CSFs) and stakeholder-based issues. To better identify and understand these important enablers and barriers, we explain how WSSs should be considered within a complex network of service providers, business partners and customer firms. The CSFs and stakeholder-based issues are discussed. The chapter highlights that for more successful service provision using WSSs, IT service providers should collaborate more effectively with enterprise customers and business partners and should better integrate their WSSs.

  13. 77 FR 21973 - Privacy Act of 1974; System of Records

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-04-12

    ... location: Delete entry and replace with ``Amazon Web Services, LLC 13461 Sunrise Valley Drive, Herndon, VA.../JS Privacy Office, Freedom of Information Directorate, Washington Headquarters Services, 1155 Defense..., protocols and/or in briefings of the consequences of improper access or use of the data. The web-based files...

  14. Introduction: Observations and Modeling of the Green Ocean Amazon (GoAmazon2014/5)

    NASA Astrophysics Data System (ADS)

    Martin, S. T.; Artaxo, P.; Machado, L. A. T.; Manzi, A. O.; Souza, R. A. F.; Schumacher, C.; Wang, J.; Andreae, M. O.; Barbosa, H. M. J.; Fan, J.; Fisch, G.; Goldstein, A. H.; Guenther, A.; Jimenez, J. L.; Pöschl, U.; Silva Dias, M. A.; Smith, J. N.; Wendisch, M.

    2015-11-01

    The Observations and Modeling of the Green Ocean Amazon (GoAmazon2014/5) Experiment was carried out in the environs of Manaus, Brazil, in the central region of the Amazon basin during two years from 1 January 2014 through 31 December 2015. The experiment focused on the complex interactions among vegetation, atmospheric chemistry, and aerosol production on the one hand and their connections to aerosols, clouds, and precipitation on the other. The objective was to understand and quantify these linked processes, first under natural conditions to obtain a baseline and second when altered by the effects of human activities. To this end, the pollution plume from the Manaus metropolis, superimposed on the background conditions of the central Amazon basin, served as a natural laboratory. The present paper, as the Introduction to the GoAmazon2014/5 Special Issue, presents the context and motivation of the GoAmazon2014/5 Experiment. The nine research sites, including the characteristics and instrumentation of each site, are presented. The sites range from time point zero (T0) upwind of the pollution, to T1 in the midst of the pollution, to T2 just downwind of the pollution, to T3 furthest downwind of the pollution (70 km). In addition to the ground sites, a low-altitude G-159 Gulfstream I (G1) observed the atmospheric boundary layer and low clouds, and a high-altitude Gulfstream G550 (HALO) operated in the free troposphere. During the two-year experiment, two Intensive Operating Periods (IOP1 and IOP2) also took place that included additional specialized research instrumentation at the ground sites as well as flights of the two aircraft. GoAmazon2014/5 IOP1 was carried out from 1 February to 31 March 2014 in the wet season. GoAmazon2014/5 IOP2 was conducted from 15 August to 15 October 2014 in the dry season. The G1 aircraft flew during both IOP1 and IOP2, and the HALO aircraft flew during IOP2. In the context of the Amazon basin, the two IOPs also correspond to the clean

  15. Introduction: Observations and Modeling of the Green Ocean Amazon (GoAmazon2014/5)

    NASA Astrophysics Data System (ADS)

    Martin, S. T.; Artaxo, P.; Machado, L. A. T.; Manzi, A. O.; Souza, R. A. F.; Schumacher, C.; Wang, J.; Andreae, M. O.; Barbosa, H. M. J.; Fan, J.; Fisch, G.; Goldstein, A. H.; Guenther, A.; Jimenez, J. L.; Pöschl, U.; Silva Dias, M. A.; Smith, J. N.; Wendisch, M.

    2016-04-01

    The Observations and Modeling of the Green Ocean Amazon (GoAmazon2014/5) Experiment was carried out in the environs of Manaus, Brazil, in the central region of the Amazon basin for 2 years from 1 January 2014 through 31 December 2015. The experiment focused on the complex interactions among vegetation, atmospheric chemistry, and aerosol production on the one hand and their connections to aerosols, clouds, and precipitation on the other. The objective was to understand and quantify these linked processes, first under natural conditions to obtain a baseline and second when altered by the effects of human activities. To this end, the pollution plume from the Manaus metropolis, superimposed on the background conditions of the central Amazon basin, served as a natural laboratory. The present paper, as the introduction to the special issue of GoAmazon2014/5, presents the context and motivation of the GoAmazon2014/5 Experiment. The nine research sites, including the characteristics and instrumentation of each site, are presented. The sites range from time point zero (T0) upwind of the pollution, to T1 in the midst of the pollution, to T2 just downwind of the pollution, to T3 furthest downwind of the pollution (70 km). In addition to the ground sites, a low-altitude G-159 Gulfstream I (G-1) observed the atmospheric boundary layer and low clouds, and a high-altitude Gulfstream G550 (HALO) operated in the free troposphere. During the 2-year experiment, two Intensive Operating Periods (IOP1 and IOP2) also took place that included additional specialized research instrumentation at the ground sites as well as flights of the two aircraft. GoAmazon2014/5 IOP1 was carried out from 1 February to 31 March 2014 in the wet season. GoAmazon2014/5 IOP2 was conducted from 15 August to 15 October 2014 in the dry season. The G-1 aircraft flew during both IOP1 and IOP2, and the HALO aircraft flew during IOP2. In the context of the Amazon basin, the two IOPs also correspond to the clean and

  16. Soil food web properties explain ecosystem services across European land use systems.

    PubMed

    de Vries, Franciska T; Thébault, Elisa; Liiri, Mira; Birkhofer, Klaus; Tsiafouli, Maria A; Bjørnlund, Lisa; Bracht Jørgensen, Helene; Brady, Mark Vincent; Christensen, Søren; de Ruiter, Peter C; d'Hertefeldt, Tina; Frouz, Jan; Hedlund, Katarina; Hemerik, Lia; Hol, W H Gera; Hotes, Stefan; Mortimer, Simon R; Setälä, Heikki; Sgardelis, Stefanos P; Uteseny, Karoline; van der Putten, Wim H; Wolters, Volkmar; Bardgett, Richard D

    2013-08-27

    Intensive land use reduces the diversity and abundance of many soil biota, with consequences for the processes that they govern and the ecosystem services that these processes underpin. Relationships between soil biota and ecosystem processes have mostly been found in laboratory experiments and rarely are found in the field. Here, we quantified, across four countries of contrasting climatic and soil conditions in Europe, how differences in soil food web composition resulting from land use systems (intensive wheat rotation, extensive rotation, and permanent grassland) influence the functioning of soils and the ecosystem services that they deliver. Intensive wheat rotation consistently reduced the biomass of all components of the soil food web across all countries. Soil food web properties strongly and consistently predicted processes of C and N cycling across land use systems and geographic locations, and they were a better predictor of these processes than land use. Processes of carbon loss increased with soil food web properties that correlated with soil C content, such as earthworm biomass and fungal/bacterial energy channel ratio, and were greatest in permanent grassland. In contrast, processes of N cycling were explained by soil food web properties independent of land use, such as arbuscular mycorrhizal fungi and bacterial channel biomass. Our quantification of the contribution of soil organisms to processes of C and N cycling across land use systems and geographic locations shows that soil biota need to be included in C and N cycling models and highlights the need to map and conserve soil biodiversity across the world.

  17. Soil food web properties explain ecosystem services across European land use systems

    PubMed Central

    de Vries, Franciska T.; Thébault, Elisa; Liiri, Mira; Birkhofer, Klaus; Tsiafouli, Maria A.; Bjørnlund, Lisa; Bracht Jørgensen, Helene; Brady, Mark Vincent; Christensen, Søren; de Ruiter, Peter C.; d’Hertefeldt, Tina; Frouz, Jan; Hedlund, Katarina; Hemerik, Lia; Hol, W. H. Gera; Hotes, Stefan; Mortimer, Simon R.; Setälä, Heikki; Sgardelis, Stefanos P.; Uteseny, Karoline; van der Putten, Wim H.; Wolters, Volkmar; Bardgett, Richard D.

    2013-01-01

    Intensive land use reduces the diversity and abundance of many soil biota, with consequences for the processes that they govern and the ecosystem services that these processes underpin. Relationships between soil biota and ecosystem processes have mostly been found in laboratory experiments and rarely are found in the field. Here, we quantified, across four countries of contrasting climatic and soil conditions in Europe, how differences in soil food web composition resulting from land use systems (intensive wheat rotation, extensive rotation, and permanent grassland) influence the functioning of soils and the ecosystem services that they deliver. Intensive wheat rotation consistently reduced the biomass of all components of the soil food web across all countries. Soil food web properties strongly and consistently predicted processes of C and N cycling across land use systems and geographic locations, and they were a better predictor of these processes than land use. Processes of carbon loss increased with soil food web properties that correlated with soil C content, such as earthworm biomass and fungal/bacterial energy channel ratio, and were greatest in permanent grassland. In contrast, processes of N cycling were explained by soil food web properties independent of land use, such as arbuscular mycorrhizal fungi and bacterial channel biomass. Our quantification of the contribution of soil organisms to processes of C and N cycling across land use systems and geographic locations shows that soil biota need to be included in C and N cycling models and highlights the need to map and conserve soil biodiversity across the world. PMID:23940339

  18. A Web Service Protocol Realizing Interoperable Internet of Things Tasking Capability

    PubMed Central

    Huang, Chih-Yuan; Wu, Cheng-Hung

    2016-01-01

    The Internet of Things (IoT) is an infrastructure that interconnects uniquely-identifiable devices using the Internet. By interconnecting everyday appliances, various monitoring and physical mashup applications can be constructed to improve people’s daily lives. In general, IoT devices provide two main capabilities: sensing and tasking capabilities. While the sensing capability is similar to the World-Wide Sensor Web, this research focuses on the tasking capability. However, currently, IoT devices created by different manufacturers follow different proprietary protocols and are locked into many closed ecosystems. This heterogeneity issue impedes the interconnection between IoT devices and limits the potential of the IoT. To address this issue, this research proposes an interoperable solution called the tasking capability description, which allows users to control different IoT devices using a uniform web service interface. This paper demonstrates the contribution of the proposed solution by interconnecting different IoT devices for different applications. In addition, the proposed solution is integrated with the OGC SensorThings API standard, which is a Web service standard defined for the IoT sensing capability. Consequently, the Extended SensorThings API can realize both IoT sensing and tasking capabilities in an integrated and interoperable manner. PMID:27589759

  19. A Web Service Protocol Realizing Interoperable Internet of Things Tasking Capability.

    PubMed

    Huang, Chih-Yuan; Wu, Cheng-Hung

    2016-08-31

    The Internet of Things (IoT) is an infrastructure that interconnects uniquely-identifiable devices using the Internet. By interconnecting everyday appliances, various monitoring and physical mashup applications can be constructed to improve people's daily lives. In general, IoT devices provide two main capabilities: sensing and tasking capabilities. While the sensing capability is similar to the World-Wide Sensor Web, this research focuses on the tasking capability. However, currently, IoT devices created by different manufacturers follow different proprietary protocols and are locked into many closed ecosystems. This heterogeneity issue impedes the interconnection between IoT devices and limits the potential of the IoT. To address this issue, this research proposes an interoperable solution called the tasking capability description, which allows users to control different IoT devices using a uniform web service interface. This paper demonstrates the contribution of the proposed solution by interconnecting different IoT devices for different applications. In addition, the proposed solution is integrated with the OGC SensorThings API standard, which is a Web service standard defined for the IoT sensing capability. Consequently, the Extended SensorThings API can realize both IoT sensing and tasking capabilities in an integrated and interoperable manner.
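
    To make the tasking interaction concrete, the sketch below creates a task on a SensorThings-style endpoint with the Python requests library. The server URL, the tasking parameter schema and the capability id are illustrative assumptions, not the paper's actual tasking capability description.

        # Minimal sketch: issuing a tasking request to a SensorThings-style endpoint.
        import requests

        BASE = "http://example.org/SensorThingsService/v1.0"  # hypothetical server

        task = {
            "taskingParameters": {"switch": "on"},  # assumed parameter schema
            "TaskingCapability": {"@iot.id": 1},    # assumed capability of the device
        }

        resp = requests.post(f"{BASE}/Tasks", json=task, timeout=10)
        resp.raise_for_status()
        print("Task accepted:", resp.headers.get("Location"))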

  20. Motivating Pre-Service Teachers in Technology Integration of Web 2.0 for Teaching Internships

    ERIC Educational Resources Information Center

    Kim, Hye Jeong; Jang, Hwan Young

    2015-01-01

    The aim of this study was to examine the predictors of pre-service teachers' use of Web 2.0 tools during a teaching internship, after a course that emphasized the use of the tools for instructional activities. Results revealed that integrating Web 2.0 tools during their teaching internship was strongly predicted by participants' perceived…

  1. Using Forecasting to Predict Long-Term Resource Utilization for Web Services

    ERIC Educational Resources Information Center

    Yoas, Daniel W.

    2013-01-01

    Researchers have spent years understanding resource utilization to improve scheduling, load balancing, and system management through short-term prediction of resource utilization. Early research focused primarily on single operating systems; later, interest shifted to distributed systems and, finally, into web services. In each case researchers…

  2. Electronic Resources for Youth Services: A Print Bibliography and Web Site.

    ERIC Educational Resources Information Center

    Amey, Larry; Segal, Erez

    1996-01-01

    This article evaluates 57 World Wide Web sites related to children's literature and youth-oriented library services, in categories including award-winning books; book reviews; reading and storytelling; writing resources; online children's literature; educational entertainment; and authors, publishers, and booksellers. Also included is information…

  3. QoS prediction for web services based on user-trust propagation model

    NASA Astrophysics Data System (ADS)

    Thinh, Le-Van; Tu, Truong-Dinh

    2017-10-01

    Web service providers and users play an important online role; however, the rapidly growing number of service providers and users has led to many web services with similar functions. This is an active research area, with researchers seeking solutions that recommend the best service to each user. Collaborative filtering (CF) algorithms are widely used in recommendation systems, although these are less effective for cold-start users. Recently, some recommender systems have been developed based on social network models, and the results show that social network models perform better than plain CF, especially for cold-start users. However, most social network-based recommendations do not consider the user's mood. This is a hidden source of information, and is very useful in improving prediction efficiency. In this paper, we introduce a new model called User-Trust Propagation (UTP). The model uses a combination of trust and the mood of users to predict QoS values, and matrix factorisation (MF) is used to train the model. The experimental results show that the proposed model gives better accuracy than other models, especially for the cold-start problem.
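
    The sketch below illustrates only the matrix factorisation component: a plain MF baseline trained by stochastic gradient descent on a toy user-service QoS matrix. The full UTP model additionally folds trust propagation and user mood into this factorisation.

        # Minimal sketch: matrix factorisation for QoS prediction (plain MF baseline).
        import numpy as np

        rng = np.random.default_rng(0)
        R = np.array([[0.8, 0.0, 0.5],   # observed QoS values; 0.0 marks a missing entry
                      [0.0, 0.9, 0.4],
                      [0.7, 0.6, 0.0]])
        k, lr, reg = 2, 0.05, 0.02
        U = rng.normal(scale=0.1, size=(R.shape[0], k))  # latent user factors
        V = rng.normal(scale=0.1, size=(R.shape[1], k))  # latent service factors

        for _ in range(2000):
            for u, s in zip(*np.nonzero(R)):             # train on observed entries only
                err = R[u, s] - U[u] @ V[s]
                U[u] += lr * (err * V[s] - reg * U[u])
                V[s] += lr * (err * U[u] - reg * V[s])

        print(np.round(U @ V.T, 2))  # predicted QoS, including the missing cells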

  4. Service-oriented model-encapsulation strategy for sharing and integrating heterogeneous geo-analysis models in an open web environment

    NASA Astrophysics Data System (ADS)

    Yue, Songshan; Chen, Min; Wen, Yongning; Lu, Guonian

    2016-04-01

    Earth environment is extremely complicated and constantly changing; thus, it is widely accepted that the use of a single geo-analysis model cannot accurately represent all details when solving complex geo-problems. Over several years of research, numerous geo-analysis models have been developed. However, a collaborative barrier between model providers and model users still exists. The development of cloud computing has provided a new and promising approach for sharing and integrating geo-analysis models across an open web environment. To share and integrate these heterogeneous models, encapsulation studies should be conducted that are aimed at shielding original execution differences to create services which can be reused in the web environment. Although some model service standards (such as Web Processing Service (WPS) and Geo Processing Workflow (GPW)) have been designed and developed to help researchers construct model services, various problems regarding model encapsulation remain. (1) The descriptions of geo-analysis models are complicated and typically require rich-text descriptions and case-study illustrations, which are difficult to fully represent within a single web request (such as the GetCapabilities and DescribeProcess operations in the WPS standard). (2) Although Web Service technologies can be used to publish model services, model users who want to use a geo-analysis model and copy the model service into another computer still encounter problems (e.g., they cannot access the model deployment dependencies information). This study presents a strategy for encapsulating geo-analysis models to reduce problems encountered when sharing models between model providers and model users and supports the tasks with different web service standards (e.g., the WPS standard). A description method for heterogeneous geo-analysis models is studied. Based on the model description information, the methods for encapsulating the model-execution program to model services and
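
    For reference, the two WPS metadata operations mentioned in point (1) are plain key-value-pair HTTP requests; the sketch below issues them with the Python requests library. The endpoint URL and the process identifier are placeholders; the parameter names come from the WPS specification.

        # Minimal sketch: WPS GetCapabilities and DescribeProcess as KVP requests.
        import requests

        WPS = "http://example.org/wps"  # hypothetical WPS endpoint

        caps = requests.get(WPS, params={"service": "WPS",
                                         "request": "GetCapabilities"}, timeout=10)
        desc = requests.get(WPS, params={"service": "WPS",
                                         "version": "1.0.0",
                                         "request": "DescribeProcess",
                                         "identifier": "my_geo_model"},  # placeholder
                            timeout=10)
        print(caps.status_code, desc.status_code)  # both responses are XML documents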

  5. Web-based data delivery services in support of disaster-relief applications

    USGS Publications Warehouse

    Jones, Brenda K.; Risty, Ron R.; Buswell, M.

    2003-01-01

    The U.S. Geological Survey Earth Resources Observation Systems Data Center responds to emergencies in support of various government agencies for human-induced and natural disasters. This response consists of satellite tasking and acquisitions, satellite image registrations, disaster-extent map analysis and creation, base image provision and support, Web-based mapping services for product delivery, and predisaster and postdisaster data archiving. The emergency response staff are on call 24 hours a day, 7 days a week, and have access to many commercial and government satellite and aerial photography tasking authorities. They have access to value-added data processing and photographic laboratory services for off-hour emergency requests. They work with various Federal agencies for preparedness planning, which includes providing base imagery. These data may include digital elevation models, hydrographic models, base satellite images, vector data layers such as roads, aerial photographs, and other predisaster data. These layers are incorporated into a Web-based browser and data delivery service that is accessible either to the general public or to select customers. As usage declines, the data are moved to a postdisaster nearline archive that is still accessible, but not in real time.

  6. A web application to support telemedicine services in Brazil.

    PubMed

    Barbosa, Ana Karina P; de A Novaes, Magdala; de Vasconcelos, Alexandre M L

    2003-01-01

    This paper describes a system that has been developed to support Telemedicine activities in Brazil, a country that has serious problems in the delivery of health services. The system is part of the broader Tele-health Project that has been developed to make health services more accessible to the low-income population in the northeast region. The HealthNet system is based upon a pilot area that uses fetal and pediatric cardiology. This article describes the system's conceptual model, including the tele-diagnosis and second medical opinion services, as well as its architecture and development stages. The system model includes both asynchronous collaboration tools, such as discussion forums, and synchronous tools, such as videoconferencing services. Free web technologies, such as Java and the MySQL database, are used for the implementation. Furthermore, an interface with Electronic Patient Record (EPR) systems using Extensible Markup Language (XML) technology is also proposed. Finally, considerations concerning the development and implementation process are presented.

  7. Revisiting the hierarchy of urban areas in the Brazilian Amazon: a multilevel approach

    PubMed Central

    Costa, Sandra; Brondízio, Eduardo

    2012-01-01

    The Legal Brazilian Amazon, while the largest rainforest in the world, is also a region where most residents are urban. Despite close linkages between rural and urban processes in the region, rural areas have been the predominant focus of Amazon-based population-environment scholarship. Offering a focus on urban areas within the Brazilian Amazon, this paper examines the emergence of urban hierarchies within the region. Using a combination of nationally representative data and community based surveys, applied to a multivariate cluster methodology (Grade of Membership), we observe the emergence of sub-regional urban networks characterized by economic and political inter-dependency, population movement, and provision of services. These networks link rural areas, small towns, and medium and large cities. We also identify the emergence of medium-size cities as important nodes at a sub-regional level. In all, the work provides insight on the proposed model of ‘disarticulated urbanization’ within the Amazon by calling attention to the increasing role of regional and sub-regional urban networks in shaping the future expansion of land use and population distribution in the Amazon. We conclude with a discussion of implications for increasing intra-regional connectivity and fragmentation of conservation areas and ecosystems in the region. PMID:23129877

  8. Co-creating and Evaluating a Web-app Mapping Real-World Health Care Services for Students: The servi-Share Protocol

    PubMed Central

    Langlois, Emmanuel; Wittwer, Jérôme; Tzourio, Christophe

    2017-01-01

    Background University students aged 18-30 years are a population group reporting low access to health care services, with high rates of avoidance and delay of medical care. This group also reports not having appropriate information about available health care services. However, university students are at risk for several health problems, and regular medical consultations are recommended in this period of life. New digital devices are popular among the young, and Web-apps can be used to facilitate easy access to information regarding health care services. A small number of electronic health (eHealth) tools have been developed with the purpose of displaying real-world health care services, and little is known about how such eHealth tools can improve access to care. Objective This paper describes the processes of co-creating and evaluating the beta version of a Web-app aimed at mapping and describing free or low-cost real-world health care services available in the Bordeaux area of France, which is specifically targeted to university students. Methods The co-creation process involves: (1) exploring the needs of students to know and access real-world health care services; (2) identifying the real-world health care services of interest for students; and (3) deciding on a user interface, and developing the beta version of the Web-app. Finally, the evaluation process involves: (1) testing the beta version of the Web-app with the target audience (university students aged 18-30 years); (2) collecting their feedback via a satisfaction survey; and (3) planning a long-term evaluation. Results The co-creation process of the beta version of the Web-app was completed in August 2016 and is described in this paper. The evaluation process started on September 7, 2016. The project was completed in December 2016 and implementation of the Web-app is ongoing. Conclusions Web-apps are an innovative way to increase the health literacy of young people in terms of delivery of and access to

  9. Load Index Metrics for an Optimized Management of Web Services: A Systematic Evaluation

    PubMed Central

    Souza, Paulo S. L.; Santana, Regina H. C.; Santana, Marcos J.; Zaluska, Ed; Faical, Bruno S.; Estrella, Julio C.

    2013-01-01

    The lack of precision to predict service performance through load indices may lead to wrong decisions regarding the use of web services, compromising service performance and raising platform cost unnecessarily. This paper presents experimental studies to qualify the behaviour of load indices in the web service context. The experiments consider three services that generate controlled and significant server demands, four levels of workload for each service and six distinct execution scenarios. The evaluation considers three relevant perspectives: the capability for representing recent workloads, the capability for predicting near-future performance and finally stability. Eight different load indices were analysed, including the JMX Average Time index (proposed in this paper) specifically designed to address the limitations of the other indices. A systematic approach is applied to evaluate the different load indices, considering a multiple linear regression model based on the stepwise-AIC method. The results show that the load indices studied represent the workload to some extent; however, in contrast to expectations, most of them do not exhibit a coherent correlation with service performance and this can result in stability problems. The JMX Average Time index is an exception, showing a stable behaviour which is tightly-coupled to the service runtime for all executions. Load indices are used to predict the service runtime and therefore their inappropriate use can lead to decisions that will impact negatively on both service performance and execution cost. PMID:23874776
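
    The evaluation approach, multiple linear regression models compared by AIC as in stepwise selection, can be sketched as follows on synthetic data; the index names and the data are illustrative, not the paper's measurements.

        # Minimal sketch: compare linear models of service runtime by AIC.
        import numpy as np
        import statsmodels.api as sm

        rng = np.random.default_rng(1)
        n = 200
        cpu = rng.uniform(0, 1, n)                  # e.g. a CPU-utilisation index
        jmx = 2.0 * cpu + rng.normal(0, 0.1, n)     # e.g. a JMX-style average-time index
        runtime = 5.0 + 3.0 * jmx + rng.normal(0, 0.3, n)

        m1 = sm.OLS(runtime, sm.add_constant(cpu)).fit()
        m2 = sm.OLS(runtime, sm.add_constant(np.column_stack([cpu, jmx]))).fit()

        # Lower AIC wins; stepwise-AIC automates this add/drop comparison.
        print(f"AIC cpu only: {m1.aic:.1f}   AIC cpu+jmx: {m2.aic:.1f}")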

  10. Experimental evaluation of the impact of packet capturing tools for web services.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Choe, Yung Ryn; Mohapatra, Prasant; Chuah, Chen-Nee

    Network measurement is a discipline that provides the techniques to collect data that are fundamental to many branches of computer science. While many capturing tools and comparisons have been made available in the literature and elsewhere, the impact of these packet capturing tools on existing processes has not been thoroughly studied. While not a concern for collection methods in which dedicated servers are used, many usage scenarios of packet capturing now require the packet capturing tool to run concurrently with operational processes. In this work we perform experimental evaluations of the performance impact that packet capturing processes have on web-based services; in particular, we observe the impact on web servers. We find that packet capturing processes indeed impact the performance of web servers, but on a multi-core system the impact varies depending on whether the packet capturing and web hosting processes are co-located or not. In addition, the architecture and behavior of the web server and process scheduling are coupled with the behavior of the packet capturing process, which in turn also affects the web server's performance.

  11. QaaS (quality as a service) model for web services using big data technologies

    NASA Astrophysics Data System (ADS)

    Ahmad, Faisal; Sarkar, Anirban

    2017-10-01

    Quality of service (QoS) determines service usability and utility, both of which influence the service selection process. QoS varies from one service provider to another, and each web service has its own methodology for evaluating QoS. The lack of a transparent QoS evaluation model makes service selection challenging. Moreover, most QoS evaluation processes do not consider their historical data, which not only helps in obtaining more accurate QoS but also supports future prediction, recommendation and knowledge discovery. QoS-driven service selection demands a model where QoS can be provided as a service to end users. This paper proposes a layered QaaS (quality as a service) model, along the same lines as PaaS and software as a service, where users provide QoS attributes as inputs and the model returns services satisfying the users' QoS expectations. The paper covers all the key aspects in this context: selection of data sources, their transformation, evaluation, classification and storage of QoS. It uses server logs as the source for evaluating QoS values, a common methodology for their evaluation, and big data technologies for their transformation and analysis. The paper also establishes that Spark outperforms Pig with respect to the evaluation of QoS from logs.
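
    A minimal sketch of the Spark side of such a pipeline, deriving one QoS attribute (mean response time per service) from server logs; the log-line format and file path are assumptions.

        # Minimal sketch: computing a QoS attribute from server logs with PySpark.
        from pyspark.sql import SparkSession
        from pyspark.sql import functions as F

        spark = SparkSession.builder.appName("qaas-sketch").getOrCreate()

        # Assumed log-line format: "<service> <timestamp> <response_ms>"
        logs = spark.read.text("server.log")
        parts = F.split(logs.value, " ")
        qos = (logs.select(parts.getItem(0).alias("service"),
                           parts.getItem(2).cast("double").alias("ms"))
                   .groupBy("service")
                   .agg(F.avg("ms").alias("avg_response_ms")))
        qos.show()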

  12. Introduction: Observations and modeling of the Green Ocean Amazon (GoAmazon2014/5)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Martin, S. T.; Artaxo, P.; Machado, L. A. T.

    The Observations and Modeling of the Green Ocean Amazon (GoAmazon2014/5) Experiment was carried out in the environs of Manaus, Brazil, in the central region of the Amazon basin for 2 years from 1 January 2014 through 31 December 2015. The experiment focused on the complex interactions among vegetation, atmospheric chemistry, and aerosol production on the one hand and their connections to aerosols, clouds, and precipitation on the other. The objective was to understand and quantify these linked processes, first under natural conditions to obtain a baseline and second when altered by the effects of human activities. To this end, the pollution plume from the Manaus metropolis, superimposed on the background conditions of the central Amazon basin, served as a natural laboratory. The present paper, as the introduction to the special issue of GoAmazon2014/5, presents the context and motivation of the GoAmazon2014/5 Experiment. The nine research sites, including the characteristics and instrumentation of each site, are presented. The sites range from time point zero (T0) upwind of the pollution, to T1 in the midst of the pollution, to T2 just downwind of the pollution, to T3 furthest downwind of the pollution (70 km). In addition to the ground sites, a low-altitude G-159 Gulfstream I (G-1) observed the atmospheric boundary layer and low clouds, and a high-altitude Gulfstream G550 (HALO) operated in the free troposphere. During the 2-year experiment, two Intensive Operating Periods (IOP1 and IOP2) also took place that included additional specialized research instrumentation at the ground sites as well as flights of the two aircraft. GoAmazon2014/5 IOP1 was carried out from 1 February to 31 March 2014 in the wet season. GoAmazon2014/5 IOP2 was conducted from 15 August to 15 October 2014 in the dry season. In addition, the G-1 aircraft flew during both IOP1 and IOP2, and the HALO aircraft flew during IOP2. In the context of the Amazon basin, the two IOPs

  13. Introduction: Observations and modeling of the Green Ocean Amazon (GoAmazon2014/5)

    DOE PAGES

    Martin, S. T.; Artaxo, P.; Machado, L. A. T.; ...

    2016-04-19

    The Observations and Modeling of the Green Ocean Amazon (GoAmazon2014/5) Experiment was carried out in the environs of Manaus, Brazil, in the central region of the Amazon basin for 2 years from 1 January 2014 through 31 December 2015. The experiment focused on the complex interactions among vegetation, atmospheric chemistry, and aerosol production on the one hand and their connections to aerosols, clouds, and precipitation on the other. The objective was to understand and quantify these linked processes, first under natural conditions to obtain a baseline and second when altered by the effects of human activities. To this end, the pollution plume from the Manaus metropolis, superimposed on the background conditions of the central Amazon basin, served as a natural laboratory. The present paper, as the introduction to the special issue of GoAmazon2014/5, presents the context and motivation of the GoAmazon2014/5 Experiment. The nine research sites, including the characteristics and instrumentation of each site, are presented. The sites range from time point zero (T0) upwind of the pollution, to T1 in the midst of the pollution, to T2 just downwind of the pollution, to T3 furthest downwind of the pollution (70 km). In addition to the ground sites, a low-altitude G-159 Gulfstream I (G-1) observed the atmospheric boundary layer and low clouds, and a high-altitude Gulfstream G550 (HALO) operated in the free troposphere. During the 2-year experiment, two Intensive Operating Periods (IOP1 and IOP2) also took place that included additional specialized research instrumentation at the ground sites as well as flights of the two aircraft. GoAmazon2014/5 IOP1 was carried out from 1 February to 31 March 2014 in the wet season. GoAmazon2014/5 IOP2 was conducted from 15 August to 15 October 2014 in the dry season. In addition, the G-1 aircraft flew during both IOP1 and IOP2, and the HALO aircraft flew during IOP2. In the context of the Amazon basin, the two IOPs

  14. A SCORM Thin Client Architecture for E-Learning Systems Based on Web Services

    ERIC Educational Resources Information Center

    Casella, Giovanni; Costagliola, Gennaro; Ferrucci, Filomena; Polese, Giuseppe; Scanniello, Giuseppe

    2007-01-01

    In this paper we propose an architecture of e-learning systems characterized by the use of Web services and a suitable middleware component. These technical infrastructures allow us to extend the system with new services as well as to integrate and reuse heterogeneous software e-learning components. Moreover, they let us better support the…

  15. Improving the quality of e-commerce web service: what is important for the request scheduling algorithm?

    NASA Astrophysics Data System (ADS)

    Suchacka, Grazyna

    2005-02-01

    The paper concerns a new research area, Quality of Web Service (QoWS). The need for QoWS is motivated by the still-growing number of Internet users, by the steady development and diversification of Web services, and especially by the popularization of e-commerce applications. The goal of the paper is a critical analysis of the literature concerning scheduling algorithms for e-commerce Web servers. The paper characterizes factors affecting the load of Web servers and discusses ways of improving their efficiency. Crucial QoWS requirements of the business Web server are identified: serving requests before their individual deadlines, supporting user session integrity, supporting different classes of users, and minimizing the number of rejected requests. It is argued that meeting these requirements and implementing them in an admission control (AC) and scheduling algorithm for the business Web server is crucial to the functioning of e-commerce Web sites and the revenue generated by them. The paper presents the results of the literature analysis and discusses algorithms that implement these important QoWS requirements. The analysis showed that very few algorithms take the above-mentioned factors into consideration and that there is a need for an algorithm implementing them.
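
    A toy sketch of the kind of deadline-aware admission control and scheduling the survey calls for: requests carry individual deadlines, admission rejects requests that would already miss them, and dispatch is earliest-deadline-first. This is a bare illustration of the requirements, not any of the surveyed algorithms.

        # Minimal sketch: admission control plus earliest-deadline-first dispatch.
        import heapq, time

        queue = []  # entries are (deadline, class_priority, request_id)

        def admit(request_id, deadline, priority=0):
            if deadline < time.time():
                return False                      # would miss its deadline: reject
            heapq.heappush(queue, (deadline, priority, request_id))
            return True

        def next_request():
            while queue:
                deadline, _, rid = heapq.heappop(queue)
                if deadline >= time.time():       # silently drop expired requests
                    return rid
            return None

        now = time.time()
        admit("premium-1", now + 0.5)
        admit("basic-7", now + 2.0, priority=1)
        print(next_request())                     # premium-1: earliest deadline first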

  16. MsLDR-creator: a web service to design msLDR assays.

    PubMed

    Bormann, Felix; Dahl, Andreas; Sers, Christine

    2012-03-01

    MsLDR-creator is a free web service to design assays for the new DNA methylation detection method msLDR. The service provides the user with all necessary information about the oligonucleotides required for the measurement of a given CpG within a sequence of interest. The parameters are calculated by the nearest neighbour approach to achieve optimal behaviour during the experimental procedure. In addition, to guarantee a good start using msLDR, further information, such as protocols plus hints and tricks, is provided.
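
    As an illustration of the nearest-neighbour calculation involved, the sketch below computes an oligonucleotide melting temperature with Biopython's nearest-neighbour model; the probe sequence and salt concentration are arbitrary examples, not msLDR-creator's own code.

        # Minimal sketch: nearest-neighbour melting-temperature calculation.
        from Bio.SeqUtils import MeltingTemp as mt

        probe = "AGTCTGGGACGGCGCGGCAATCGCA"   # arbitrary example oligonucleotide
        tm = mt.Tm_NN(probe, Na=50)          # 50 mM Na+ is an assumed condition
        print(f"Predicted Tm: {tm:.1f} °C")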

  17. The DBCLS BioHackathon: standardization and interoperability for bioinformatics web services and workflows. The DBCLS BioHackathon Consortium*.

    PubMed

    Katayama, Toshiaki; Arakawa, Kazuharu; Nakao, Mitsuteru; Ono, Keiichiro; Aoki-Kinoshita, Kiyoko F; Yamamoto, Yasunori; Yamaguchi, Atsuko; Kawashima, Shuichi; Chun, Hong-Woo; Aerts, Jan; Aranda, Bruno; Barboza, Lord Hendrix; Bonnal, Raoul Jp; Bruskiewich, Richard; Bryne, Jan C; Fernández, José M; Funahashi, Akira; Gordon, Paul Mk; Goto, Naohisa; Groscurth, Andreas; Gutteridge, Alex; Holland, Richard; Kano, Yoshinobu; Kawas, Edward A; Kerhornou, Arnaud; Kibukawa, Eri; Kinjo, Akira R; Kuhn, Michael; Lapp, Hilmar; Lehvaslaiho, Heikki; Nakamura, Hiroyuki; Nakamura, Yasukazu; Nishizawa, Tatsuya; Nobata, Chikashi; Noguchi, Tamotsu; Oinn, Thomas M; Okamoto, Shinobu; Owen, Stuart; Pafilis, Evangelos; Pocock, Matthew; Prins, Pjotr; Ranzinger, René; Reisinger, Florian; Salwinski, Lukasz; Schreiber, Mark; Senger, Martin; Shigemoto, Yasumasa; Standley, Daron M; Sugawara, Hideaki; Tashiro, Toshiyuki; Trelles, Oswaldo; Vos, Rutger A; Wilkinson, Mark D; York, William; Zmasek, Christian M; Asai, Kiyoshi; Takagi, Toshihisa

    2010-08-21

    Web services have become a key technology for bioinformatics, since life science databases are globally decentralized and the exponential increase in the amount of available data demands efficient systems that avoid transferring entire databases for every step of an analysis. However, various incompatibilities among database resources and analysis services make it difficult to connect and integrate these into interoperable workflows. To resolve this situation, we invited domain specialists from web service providers, client software developers, Open Bio* projects, the BioMoby project and researchers of emerging areas where a standard exchange data format is not well established, for an intensive collaboration entitled the BioHackathon 2008. The meeting was hosted by the Database Center for Life Science (DBCLS) and Computational Biology Research Center (CBRC) and was held in Tokyo from February 11th to 15th, 2008. In this report we highlight the work accomplished and the common issues arising from this event, including the standardization of data exchange formats and services in the emerging fields of glycoinformatics, biological interaction networks, text mining, and phyloinformatics. In addition, common shared object development based on BioSQL, as well as technical challenges in large data management, asynchronous services, and security, are discussed. Consequently, we improved the interoperability of web services in several fields; however, further cooperation among major database centers and continued collaborative efforts between service providers and software developers are still necessary for an effective advance in bioinformatics web service technologies.

  18. The DBCLS BioHackathon: standardization and interoperability for bioinformatics web services and workflows. The DBCLS BioHackathon Consortium*

    PubMed Central

    2010-01-01

    Web services have become a key technology for bioinformatics, since life science databases are globally decentralized and the exponential increase in the amount of available data demands efficient systems that avoid transferring entire databases for every step of an analysis. However, various incompatibilities among database resources and analysis services make it difficult to connect and integrate these into interoperable workflows. To resolve this situation, we invited domain specialists from web service providers, client software developers, Open Bio* projects, the BioMoby project and researchers of emerging areas where a standard exchange data format is not well established, for an intensive collaboration entitled the BioHackathon 2008. The meeting was hosted by the Database Center for Life Science (DBCLS) and Computational Biology Research Center (CBRC) and was held in Tokyo from February 11th to 15th, 2008. In this report we highlight the work accomplished and the common issues arising from this event, including the standardization of data exchange formats and services in the emerging fields of glycoinformatics, biological interaction networks, text mining, and phyloinformatics. In addition, common shared object development based on BioSQL, as well as technical challenges in large data management, asynchronous services, and security, are discussed. Consequently, we improved the interoperability of web services in several fields; however, further cooperation among major database centers and continued collaborative efforts between service providers and software developers are still necessary for an effective advance in bioinformatics web service technologies. PMID:20727200

  19. SCIMITAR: Scalable Stream-Processing for Sensor Information Brokering

    DTIC Science & Technology

    2013-11-01

    Infrastructure-as-a-Service (IaaS) cloud frameworks including Amazon Web Services and Eucalyptus. For load testing, we used The Grinder [9], a Java load testing framework that...internal Eucalyptus cluster which we could not scale as large as the Amazon environment due to a lack of computation resources. We recreated our

  20. Building to Scale: An Analysis of Web-Based Services in CIC (Big Ten) Libraries.

    ERIC Educational Resources Information Center

    Dewey, Barbara I.

    Advancing library services in large universities requires creative approaches for "building to scale." This is the case for CIC, Committee on Institutional Cooperation (Big Ten), libraries whose home institutions serve thousands of students, faculty, staff, and others. Developing virtual Web-based services is an increasingly viable…

  1. Efficient Web Services Policy Combination

    NASA Technical Reports Server (NTRS)

    Vatan, Farrokh; Harman, Joseph G.

    2010-01-01

    Large-scale Web security systems usually involve cooperation between domains with non-identical policies. The network management and Web communication software used by the different organizations presents a stumbling block. Many of the tools used by the various divisions do not have the ability to communicate network management data with each other. At best, this means that manual human intervention into the communication protocols used at various network routers and endpoints is required. Developing practical, sound, and automated ways to compose policies to bridge these differences is a long-standing problem. One of the key subtleties is the need to deal with inconsistencies and defaults, where one organization proposes a rule on a particular feature and another has a different rule or expresses no rule. A general approach is to assign priorities to rules and observe the rules with the highest priorities when there are conflicts. The present methods have an inherent inefficiency that heavily restricts their practical application. A new, efficient algorithm combines policies utilized for Web services. The method allows an automatic and scalable composition of security policies between multiple organizations, and is based on defeasible policy composition, a promising approach for finding conflicts and resolving priorities between rules. In the general case, policy negotiation is an intractable problem. A promising method, suggested in the literature, is to represent policies in defeasible logic and base composition on rules for non-monotonic inference. In this system, policy writers construct metapolicies describing both the policy that they wish to enforce and annotations describing their composition preferences. These annotations can indicate whether certain policy assertions are required by the policy writer or, if not, under what circumstances the policy writer is willing to compromise and allow other assertions to take
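
    A toy sketch of the priority idea described above: each organization contributes per-feature rules with priorities, the highest-priority rule wins on conflicts, and features with no rule fall back to a default. The actual method operates over defeasible logic rather than this simple dictionary merge.

        # Minimal sketch: priority-based combination of per-feature policy rules.
        DEFAULT = "deny"  # assumed default when no organization expresses a rule

        def combine(*policies):
            merged = {}
            for policy in policies:
                for feature, (decision, priority) in policy.items():
                    if feature not in merged or priority > merged[feature][1]:
                        merged[feature] = (decision, priority)  # highest priority wins
            return merged

        org_a = {"tls": ("require", 10), "logging": ("allow", 5)}
        org_b = {"tls": ("prefer", 3)}   # lower priority: loses the conflict on "tls"

        merged = combine(org_a, org_b)
        for feature in ("tls", "logging", "compression"):
            decision, _ = merged.get(feature, (DEFAULT, 0))
            print(feature, "->", decision)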

  2. Grid enablement of OpenGeospatial Web Services: the G-OWS Working Group

    NASA Astrophysics Data System (ADS)

    Mazzetti, Paolo

    2010-05-01

    In recent decades, two main paradigms for resource sharing emerged and reached maturity: the Web and the Grid. Both have proved suitable for building Distributed Computing Infrastructures (DCIs) supporting the coordinated sharing of resources (i.e. data, information, services, etc.) on the Internet. Grid and Web DCIs have much in common as a result of their underlying Internet technology (protocols, models and specifications). However, being based on different requirements and architectural approaches, they show some differences as well. The Web's "major goal was to be a shared information space through which people and machines could communicate" [Berners-Lee 1996]. The success of the Web, and its consequent pervasiveness, made it appealing for building specialized systems such as Spatial Data Infrastructures (SDIs). In these systems the introduction of Web-based geo-information technologies enables specialized services for geospatial data sharing and processing. The Grid was born to achieve "flexible, secure, coordinated resource sharing among dynamic collections of individuals, institutions, and resources" [Foster 2001]. It specifically focuses on large-scale resource sharing, innovative applications, and, in some cases, high-performance orientation. In the Earth and Space Sciences (ESS), most of the information handled is geo-referenced (geo-information), since spatial and temporal meta-information is of primary importance in many application domains: Earth Sciences, Disasters Management, Environmental Sciences, etc. On the other hand, several application areas need to run complex models that require the large processing and storage capabilities that Grids are able to provide. Therefore the integration of geo-information and Grid technologies may be a valuable approach for enabling advanced ESS applications. Currently both geo-information and Grid technologies have reached a high level of maturity, making it possible to build such an

  3. Using the Internet as a Tool for Public Service: Creating a Community History Web Site

    ERIC Educational Resources Information Center

    Henson, Darold Leigh

    2005-01-01

    Creating a community history Web site is a way for technical communication practitioners, students, and teachers to improve their expertise while performing a valuable public service. Developers of this kind of Web site combine personal interest in the history and culture of their chosen communities with professional interest in a wide range of…

  4. ROSA: Resource-Oriented Service Management Schemes for Web of Things in a Smart Home

    PubMed Central

    Chen, Peng-Yu

    2017-01-01

    A Pervasive-computing-enriched smart home environment, which contains many embedded and tiny intelligent devices and sensors coordinated by service management mechanisms, is capable of anticipating the intentions of occupants and providing appropriate services accordingly. Although there is a wealth of research achievements in recent years, the degree of market acceptance is still low. The main reason is that most of the devices and services in such environments depend on a particular platform or technology, making it hard to develop an application by composing the devices or services. Meanwhile, the concept of the Web of Things (WoT) has recently become popular. Based on WoT, developers can build applications using popular web tools or technologies. Consequently, the objective of this paper is to propose a set of novel WoT-driven plug-and-play service management schemes for a smart home, called Resource-Oriented Service Administration (ROSA). We have implemented an application prototype, and experiments were performed to show the effectiveness of the proposed approach. The results of this research can be a foundation for realizing the vision of “end user programmable smart environments”. PMID:28934159

  5. ROSA: Resource-Oriented Service Management Schemes for Web of Things in a Smart Home.

    PubMed

    Liao, Chun-Feng; Chen, Peng-Yu

    2017-09-21

    A Pervasive-computing-enriched smart home environment, which contains many embedded and tiny intelligent devices and sensors coordinated by service management mechanisms, is capable of anticipating the intentions of occupants and providing appropriate services accordingly. Although there is a wealth of research achievements in recent years, the degree of market acceptance is still low. The main reason is that most of the devices and services in such environments depend on a particular platform or technology, making it hard to develop an application by composing the devices or services. Meanwhile, the concept of the Web of Things (WoT) has recently become popular. Based on WoT, developers can build applications using popular web tools or technologies. Consequently, the objective of this paper is to propose a set of novel WoT-driven plug-and-play service management schemes for a smart home, called Resource-Oriented Service Administration (ROSA). We have implemented an application prototype, and experiments were performed to show the effectiveness of the proposed approach. The results of this research can be a foundation for realizing the vision of "end user programmable smart environments".
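
    As a sketch of the resource-oriented style that ROSA builds on, the snippet below exposes a hypothetical home device as a web resource that can be read with GET and switched with PUT, using Flask for brevity; it illustrates the WoT idea only and is not ROSA's implementation.

        # Minimal sketch: a smart-home device exposed as a RESTful web resource.
        from flask import Flask, jsonify, request

        app = Flask(__name__)
        lamp = {"id": "living-room-lamp", "state": "off"}  # hypothetical device

        @app.route("/devices/lamp", methods=["GET"])
        def read_lamp():
            return jsonify(lamp)                 # read the device state as a resource

        @app.route("/devices/lamp", methods=["PUT"])
        def set_lamp():
            lamp["state"] = request.get_json()["state"]   # e.g. {"state": "on"}
            return jsonify(lamp)

        if __name__ == "__main__":
            app.run(port=8080)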

  6. Security Risks of Cloud Computing and Its Emergence as 5th Utility Service

    NASA Astrophysics Data System (ADS)

    Ahmad, Mushtaq

    Cloud Computing is being projected by major cloud service providers such as IBM, Google, Yahoo, Amazon and others as the fifth utility, through which clients gain access to the very high processing speeds and huge data capacity needed for compute-intensive scientific and engineering research problems as well as e-business and data content network applications. These services for different types of clients are provided under DASM (Direct Access Service Management), based on virtualization of hardware and software and very high-bandwidth Internet (Web 2.0) communication. The paper reviews these developments in Cloud Computing and the hardware/software configuration of the cloud paradigm. It also examines the vital aspects of the security risks identified by IT industry experts and cloud clients, and highlights the cloud providers' responses to those security risks.

  7. Business Models of E-Government: Research on Dynamic E-Government Based on Web Services

    NASA Astrophysics Data System (ADS)

    Li, Yan; Yang, Jiumin

    Government transcends all sectors in a society. It provides not only the legal, political and economic infrastructure to support other sectors, but also exerts significant influence on the social factors that contribute to their development. As its technologies and management mature, e-government will eventually enter the era of 'one-stop' services. Web services technology is the major contributor to this achievement. Web services provide a new, standards-based software technology, letting programmers combine existing computer systems in new ways over the Internet, within one business or across many, and thereby promise profound and far-reaching impacts on e-government. This paper introduces the business models of e-government, the architecture of dynamic e-government, and its key technologies. Finally, the future prospects of dynamic e-government are briefly discussed.

  8. 78 FR 26664 - Submission for Review: CyberCorps®: Scholarship For Service (SFS) Registration Web Site

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-05-07

    ... OFFICE OF PERSONNEL MANAGEMENT Submission for Review: CyberCorps[supreg]: Scholarship For Service (SFS) Registration Web Site AGENCY: U.S. Office of Personnel Management. ACTION: 60-Day Notice and... of the Web page is necessary to facilitate the timely registration, selection and placement of...

  9. Co-creating and Evaluating a Web-app Mapping Real-World Health Care Services for Students: The servi-Share Protocol.

    PubMed

    Montagni, Ilaria; Langlois, Emmanuel; Wittwer, Jérôme; Tzourio, Christophe

    2017-02-16

    University students aged 18-30 years are a population group reporting low access to health care services, with high rates of avoidance and delay of medical care. This group also reports not having appropriate information about available health care services. However, university students are at risk for several health problems, and regular medical consultations are recommended in this period of life. New digital devices are popular among the young, and Web-apps can be used to facilitate easy access to information regarding health care services. A small number of electronic health (eHealth) tools have been developed with the purpose of displaying real-world health care services, and little is known about how such eHealth tools can improve access to care. This paper describes the processes of co-creating and evaluating the beta version of a Web-app aimed at mapping and describing free or low-cost real-world health care services available in the Bordeaux area of France, which is specifically targeted to university students. The co-creation process involves: (1) exploring the needs of students to know and access real-world health care services; (2) identifying the real-world health care services of interest for students; and (3) deciding on a user interface, and developing the beta version of the Web-app. Finally, the evaluation process involves: (1) testing the beta version of the Web-app with the target audience (university students aged 18-30 years); (2) collecting their feedback via a satisfaction survey; and (3) planning a long-term evaluation. The co-creation process of the beta version of the Web-app was completed in August 2016 and is described in this paper. The evaluation process started on September 7, 2016. The project was completed in December 2016 and implementation of the Web-app is ongoing. Web-apps are an innovative way to increase the health literacy of young people in terms of delivery of and access to health care. The creation of Web-apps benefits

  10. A web service system supporting three-dimensional post-processing of medical images based on WADO protocol.

    PubMed

    He, Longjun; Xu, Lang; Ming, Xing; Liu, Qian

    2015-02-01

    Three-dimensional post-processing operations on the volume data generated by a series of CT or MR images have important significance for image reading and diagnosis. As a part of the DICOM standard, the WADO service defines how to access DICOM objects on the Web, but it does not cover three-dimensional post-processing operations on image series. This paper analyzes the technical features of three-dimensional post-processing operations on volume data, and then designs and implements a web service system for three-dimensional post-processing of medical images based on the WADO protocol. In order to improve the scalability of the proposed system, the business tasks and calculation operations were separated into two modules. The results showed that the proposed system can support a three-dimensional post-processing service of medical images for multiple clients at the same time, meeting the demand for accessing three-dimensional post-processing operations on volume data over the web.
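
    For context, a WADO retrieval of a single DICOM object is an ordinary HTTP request. In the sketch below, the requestType, studyUID, seriesUID, objectUID and contentType parameters are defined by the WADO part of the DICOM standard, while the server URL and the UID values are placeholders.

        # Minimal sketch: fetching one DICOM object through a WADO endpoint.
        import requests

        WADO = "http://example.org/wado"  # hypothetical WADO endpoint

        resp = requests.get(WADO, params={
            "requestType": "WADO",
            "studyUID": "1.2.840.113619.2.1",       # placeholder UIDs
            "seriesUID": "1.2.840.113619.2.1.1",
            "objectUID": "1.2.840.113619.2.1.1.1",
            "contentType": "application/dicom",
        }, timeout=30)
        resp.raise_for_status()
        with open("object.dcm", "wb") as f:
            f.write(resp.content)                   # the raw DICOM object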

  11. The Investigation of Pre-Service Teachers' Concerns about Integrating Web 2.0 Technologies into Instruction

    ERIC Educational Resources Information Center

    Hao, Yungwei; Wang, Shiou-ling; Chang, Su-jen; Hsu, Yin-hung; Tang, Ren-yen

    2013-01-01

    Studies have indicated that Web 2.0 technologies can support learning. However, the integration of an innovation may create concerns among teachers because of its novel features. In this study, the innovation refers to the integration of Web 2.0 technology into instruction. To help pre-service teachers make the best use of the innovation in their future instruction, it…

  12. Remote health monitoring using mobile phones and Web services.

    PubMed

    Agarwal, Sparsh; Lau, Chiew Tong

    2010-06-01

    Diabetes and hypertension have become very common perhaps because of increasingly busy lifestyles, unhealthy eating habits, and a highly competitive workplace. The rapid advancement of mobile communication technologies offers innumerable opportunities for the development of software and hardware applications for remote monitoring of such chronic diseases. This study describes a remote health-monitoring service that provides an end-to-end solution, that is, (1) it collects blood pressure readings from the patient through a mobile phone; (2) it provides these data to doctors through a Web interface; and (3) it enables doctors to manage the chronic condition by providing feedback to the patients remotely. This article also aims at understanding the requirements and expectations of doctors and hospitals from such a remote health-monitoring service.
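
    A minimal sketch of step (1) of such an end-to-end service: a phone-side client posting a blood pressure reading to a web service. The endpoint, field names, and payload are hypothetical; the paper does not specify its API.

    ```python
    import json
    import urllib.request

    reading = {
        "patient_id": "P-0042",   # hypothetical identifier
        "systolic_mmHg": 128,
        "diastolic_mmHg": 84,
        "taken_at": "2010-05-01T08:30:00Z",
    }

    req = urllib.request.Request(
        "https://example.org/api/readings",          # hypothetical endpoint
        data=json.dumps(reading).encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    with urllib.request.urlopen(req, timeout=30) as resp:
        print(resp.status)  # e.g. 201 would indicate the reading was stored
    ```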

  13. Deforestation effects on Amazon forest resilience

    NASA Astrophysics Data System (ADS)

    Zemp, D. C.; Schleussner, C.-F.; Barbosa, H. M. J.; Rammig, A.

    2017-06-01

    Through vegetation-atmosphere feedbacks, rainfall reductions as a result of Amazon deforestation could reduce the resilience of the remaining forest to perturbations and potentially lead to large-scale Amazon forest loss. We track observation-based water fluxes from sources (evapotranspiration) to sinks (rainfall) to assess the effect of deforestation on continental rainfall. By studying 21st century deforestation scenarios, we show that deforestation can reduce dry season rainfall by up to 20% far from the deforested area, namely over the western Amazon basin and the La Plata basin. As a consequence, forest resilience is systematically eroded in the southwestern region covering a quarter of the current Amazon forest. Our findings suggest that the climatological effects of deforestation can lead to permanent forest loss in this region. We identify hot spot regions where forest loss should be avoided to maintain the ecological integrity of the Amazon forest.

  14. Web services-based text-mining demonstrates broad impacts for interoperability and process simplification

    PubMed Central

    Wiegers, Thomas C.; Davis, Allan Peter; Mattingly, Carolyn J.

    2014-01-01

    The Critical Assessment of Information Extraction systems in Biology (BioCreAtIvE) challenge evaluation tasks collectively represent a community-wide effort to evaluate a variety of text-mining and information extraction systems applied to the biological domain. The BioCreative IV Workshop comprised five independent subject areas, including Track 3, which focused on named-entity recognition (NER) for the Comparative Toxicogenomics Database (CTD; http://ctdbase.org). Previously, CTD had organized document-ranking and NER-related tasks for the BioCreative Workshop 2012; a key finding of that effort was that interoperability and integration complexity were major impediments to the direct application of the systems to CTD's text-mining pipeline. This underscored a prevailing problem with software integration efforts. Major interoperability-related issues included lack of process modularity, operating system incompatibility, tool configuration complexity and lack of standardization of high-level inter-process communications. One approach to potentially mitigate interoperability and general integration issues is the use of Web services to abstract implementation details; rather than integrating NER tools directly, HTTP-based calls from CTD's asynchronous, batch-oriented text-mining pipeline could be made to remote NER Web services for recognition of specific biological terms, using BioC (an emerging family of XML formats) for inter-process communications. To test this concept, participating groups developed Representational State Transfer (REST)/BioC-compliant Web services tailored to CTD's NER requirements. Participants were provided with a comprehensive set of training materials. CTD evaluated results obtained from the remote Web service-based URLs against a test data set of 510 manually curated scientific articles. Twelve groups participated in the challenge. Recall, precision, balanced F-scores and response times were calculated. Top balanced F-scores for gene, chemical and…
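
    A minimal sketch of the Web-service pattern described above: an HTTP POST of a BioC XML document to a remote NER service, which would return the annotated document. The service URL and the exact BioC payload are hypothetical.

    ```python
    import requests

    bioc_doc = """<?xml version="1.0" encoding="UTF-8"?>
    <collection>
      <source>CTD</source>
      <document>
        <id>PMC000001</id>
        <passage>
          <offset>0</offset>
          <text>Cisplatin increased TP53 expression.</text>
        </passage>
      </document>
    </collection>"""

    resp = requests.post(
        "https://example.org/ner/chemical",      # hypothetical NER endpoint
        data=bioc_doc.encode("utf-8"),
        headers={"Content-Type": "application/xml"},
        timeout=60,
    )
    resp.raise_for_status()
    print(resp.text)  # BioC XML with <annotation> elements added by the service
    ```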

  15. MALINA: a web service for visual analytics of human gut microbiota whole-genome metagenomic reads.

    PubMed

    Tyakht, Alexander V; Popenko, Anna S; Belenikin, Maxim S; Altukhov, Ilya A; Pavlenko, Alexander V; Kostryukova, Elena S; Selezneva, Oksana V; Larin, Andrei K; Karpova, Irina Y; Alexeev, Dmitry G

    2012-12-07

    MALINA is a web service for bioinformatic analysis of whole-genome metagenomic data obtained from human gut microbiota sequencing. As input, it accepts metagenomic reads from various sequencing technologies, including long reads (such as Sanger and 454 sequencing) and next-generation short reads (including SOLiD and Illumina). To the authors' knowledge, it is the first metagenomic web service capable of processing SOLiD color-space reads. The web service allows phylogenetic and functional profiling of metagenomic samples using the coverage depth resulting from alignment of the reads to a catalogue of reference sequences that is built into the pipeline and contains prevalent microbial genomes and genes of the human gut microbiota. The resulting metagenomic composition vectors are processed by a statistical analysis and visualization module containing methods for clustering, dimension reduction and group comparison. Additionally, the MALINA database includes vectors of bacterial and functional composition for human gut microbiota samples from a large number of existing studies, namely datasets from the Russian Metagenome project, MetaHIT and the Human Microbiome Project (downloaded from http://hmpdacc.org), allowing their comparative analysis together with user samples. MALINA is made freely available on the web at http://malina.metagenome.ru. The website is implemented in JavaScript (using Ext JS), Microsoft .NET Framework, MS SQL and Python, with all major browsers supported.

  16. Hydrologic resilience and Amazon productivity.

    PubMed

    Ahlström, Anders; Canadell, Josep G; Schurgers, Guy; Wu, Minchao; Berry, Joseph A; Guan, Kaiyu; Jackson, Robert B

    2017-08-30

    The Amazon rainforest is disproportionately important for global carbon storage and biodiversity. The system couples the atmosphere and land, with moist forest that depends on convection to sustain gross primary productivity and growth. Earth system models that estimate future climate and vegetation show little agreement in Amazon simulations. Here we show that biases in internally generated climate, primarily precipitation, explain most of the uncertainty in Earth system model results; models, empirical data and theory converge when precipitation biases are accounted for. Gross primary productivity, above-ground biomass and tree cover align on a hydrological relationship with a breakpoint at ~2000 mm annual precipitation, where the system transitions between water and radiation limitation of evapotranspiration. The breakpoint appears to be fairly stable in the future, suggesting resilience of the Amazon to climate change. Changes in precipitation and land use are therefore more likely to govern biomass and vegetation structure in Amazonia. Earth system model simulations of future climate in the Amazon show little agreement. Here, the authors show that biases in internally generated climate explain most of this uncertainty and that the balance between water-saturated and water-limited evapotranspiration controls the Amazon resilience to climate change.

  17. OpenFIRE - A Web GIS Service for Distributing the Finnish Reflection Experiment Datasets

    NASA Astrophysics Data System (ADS)

    Väkevä, Sakari; Aalto, Aleksi; Heinonen, Aku; Heikkinen, Pekka; Korja, Annakaisa

    2017-04-01

    The Finnish Reflection Experiment (FIRE) is a land-based deep seismic reflection survey conducted between 2001 and 2003 by a research consortium of the Universities of Helsinki and Oulu, the Geological Survey of Finland, and the Russian state-owned enterprise SpetsGeofysika. The dataset consists of 2100 kilometers of high-resolution profiles across the Archaean and Proterozoic nuclei of the Fennoscandian Shield. Although FIRE data have been available on request since 2009, the data have remained underused outside the original research consortium. The original FIRE data have now been quality-controlled: the shot gathers have been cross-checked and a comprehensive errata document has been created. The brute stacks provided by the Russian seismic contractor have been reprocessed into seismic sections and replotted. Complete documentation of the intermediate processing steps is provided, together with guidelines for setting up a computing environment and plotting the data. An open-access web service, "OpenFIRE", for the visualization and downloading of FIRE data has been created. The service includes a mobile-responsive map application capable of enriching seismic sections with data from other sources, such as open data from the National Land Survey and the Geological Survey of Finland. The AVAA team of the Finnish Open Science and Research Initiative has provided a tailored Liferay portal with the necessary web components, such as an API (Application Programming Interface) for download requests. INSPIRE (Infrastructure for Spatial Information in Europe)-compliant discovery metadata have been produced, and geospatial data will be exposed as Open Geospatial Consortium standard services. The technical guidelines of the European Plate Observing System have been followed, and the service can be considered a reference application for sharing reflection seismic data. The OpenFIRE web service is available at www.seismo.helsinki.fi/openfire.

  18. Interoperable web applications for sharing data and products of the International DORIS Service

    NASA Astrophysics Data System (ADS)

    Soudarin, L.; Ferrage, P.

    2017-12-01

    The International DORIS Service (IDS) was created in 2003 under the umbrella of the International Association of Geodesy (IAG) to foster scientific research related to the French satellite tracking system DORIS and to deliver scientific products, mostly related to the International Earth Rotation and Reference Systems Service (IERS). Since its start, the organization has continuously evolved, leading to additional and improved operational products from an expanded set of DORIS Analysis Centers. In addition, IDS has developed services for sharing data and products with users. Metadata and interoperable web applications are provided to explore, visualize and download the key products, such as the position time series of the geodetic points materialized at the ground tracking stations. The Global Geodetic Observing System (GGOS) encourages the IAG Services to develop such interoperable facilities on their websites. The objective for GGOS is to set up an interoperable portal through which the data and products produced by the IAG Services can be served to the user community. We present the web applications proposed by IDS to visualize time series of geodetic observables and to get information about the tracking ground stations and the tracked satellites. We discuss IDS's future plans to meet the recommendations of GGOS. The presentation also addresses the need for the IAG Services to adopt a common metadata thesaurus to describe data and products, and interoperability standards to share them.

  19. A user experience evaluation of Amazon Kindle mobile application

    NASA Astrophysics Data System (ADS)

    Hussain, Azham; Mkpojiogu, Emmanuel O. C.; Musa, Ja'afaru; Mortada, Salah

    2017-10-01

    There has been a dramatic increase in the development of mobile applications in recent years, making the usability evaluation of these applications an important aspect of the advancement and application of technology. In this paper, a laboratory-based usability evaluation was carried out on the Amazon Kindle app with 15 users who performed 5 tasks on the Kindle e-book mobile app. A post-test questionnaire was administered to elicit users' perceptions of the usability of the application. The results demonstrate that almost all the participants were satisfied with the services provided by the Amazon Kindle e-book mobile app. On all four user experience factors examined, namely perceived ease of use, perceived visibility, perceived enjoyability, and perceived efficiency, the evaluation outcome shows that the participants had a good and rich mobile experience with the application.

  20. Web-Based Medical Service: Technology Attractiveness, Medical Creditability, Information Source, and Behavior Intention

    PubMed Central

    2017-01-01

    Background Web-based medical service (WBMS), a cooperative relationship between medical service and Internet technology, has been called one of the most innovative services of the 21st century. However, its business promotion and implementation in the medical industry have not proceeded as expected. Few studies have explored this phenomenon from the viewpoint of inexperienced patients. Objective The primary goal of this study was to explore whether technology attractiveness, medical creditability, and diversified medical information sources could increase users’ behavior intention. Methods This study explored the effectiveness of web-based medical service by using three situations to manipulate sources of medical information. A total of 150 questionnaires were collected from people who had never used WBMS before. Hierarchical regression was used to examine the mediation and moderated-mediation effects. Results Perceived ease of use (P=.002) and perceived usefulness (P=.001) significantly enhance behavior intentions. Medical credibility is a mediator (P=.03), but the relationship does not differ significantly across the manipulated information channels (P=.39). Conclusions Medical credibility explains additional variation between technology attractiveness and behavior intention, but this mediation is not significantly moderated by the source of medical information. PMID:28768608

  1. T-Check in Technologies for Interoperability: Web Services and Security--Single Sign-On

    DTIC Science & Technology

    2007-12-01

    following tools: • Apache Tomcat 6.0—a Java Servlet container to host the Web services and a simple Web client application [Apache 2007a] • Apache Axis...Eclipse. Eclipse – an open development platform. http://www.eclipse.org/ (2007) [Hunter 2001] Hunter, Jason. Java Servlet Programming, 2nd Edition...Citation SAML 1.1 Java Toolkit SAML Ping Identity’s SAML-1.1 implementation [SourceID 2006] OpenSAML SAML An open source implementation of SAML 1.1

  2. Migrating Department of Defense (DoD) Web Service Based Applications to Mobile Computing Platforms

    DTIC Science & Technology

    2012-03-01

    World Wide Web Consortium (W3C) Geolocation API to identify the device’s location and then center the map on the device. Finally, we modify the entry...THIS PAGE INTENTIONALLY LEFT BLANK xii List of Acronyms and Abbreviations API Application Programming Interface CSS Cascading Style Sheets CLIMO...Java API for XML Web Services Reference Implementation JS JavaScript JSNI JavaScript Native Interface METOC Meteorological and Oceanographic MAA Mobile

  3. SAS- Semantic Annotation Service for Geoscience resources on the web

    NASA Astrophysics Data System (ADS)

    Elag, M.; Kumar, P.; Marini, L.; Li, R.; Jiang, P.

    2015-12-01

    There is a growing need for increased integration across the data and model resources that are disseminated on the web in order to advance their reuse across different earth science applications. Meaningful reuse of resources requires semantic metadata to realize the semantic web vision of pragmatic linkage and integration among resources. Semantic metadata associates standard metadata with resources to turn them into semantically enabled resources on the web. However, the lack of a common standardized metadata framework, as well as the uncoordinated use of metadata fields across different geo-information systems, has led to a situation in which standards and related Standard Names abound. To address this need, we have designed SAS to provide a bridge between the core ontologies required to annotate resources and information systems, enabling queries and analysis over annotations from a single (web) environment. SAS is one of the services provided by the Geosemantic framework, a decentralized semantic framework that supports integration between models and data and allows semantically heterogeneous resources to interact with minimal human intervention. Here we present the design of SAS and demonstrate its application for annotating data and models. First, we describe how predicates and their attributes are extracted from standards and ingested into the knowledge base of the Geosemantic framework. Then we illustrate the application of SAS in annotating data managed by SEAD and in annotating simulation models that have a web interface. SAS is a step in a broader approach to raise the quality of geoscience data and models published on the web and to allow users to better search, access, and use existing resources based on standard vocabularies that are encoded and published using semantic technologies.
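
    A minimal sketch of the kind of semantic annotation such a service produces, using rdflib to attach a standard-name predicate to a dataset resource; the namespace, predicate, and resource URIs are illustrative, not SAS's own vocabulary.

    ```python
    from rdflib import Graph, Literal, Namespace, URIRef

    GEO = Namespace("http://example.org/geosemantics#")  # hypothetical vocabulary

    g = Graph()
    dataset = URIRef("http://example.org/data/discharge-2015")  # resource to annotate
    g.add((dataset, GEO.hasStandardName, Literal("channel_water__volume_flow_rate")))
    g.add((dataset, GEO.annotatedBy, Literal("SAS")))

    print(g.serialize(format="turtle"))  # Turtle view of the annotations
    ```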

  4. A RESTful interface to pseudonymization services in modern web applications.

    PubMed

    Lablans, Martin; Borg, Andreas; Ückert, Frank

    2015-02-07

    Medical research networks rely on record linkage and pseudonymization to determine which records from different sources relate to the same patient. To establish informational separation of powers, the required identifying data are redirected to a trusted third party that has, in turn, no access to medical data. This pseudonymization service receives identifying data, compares them with a list of already reported patient records and replies with a (new or existing) pseudonym. We found existing solutions to be technically outdated, complex to implement or not suitable for internet-based research infrastructures. In this article, we propose a new RESTful pseudonymization interface tailored for use in web applications accessed by modern web browsers. The interface is modelled as a resource-oriented architecture, based on the representational state transfer (REST) architectural style. We translated typical use cases into resources to be manipulated with well-known HTTP verbs. Patients can be re-identified in real time by authorized users' web browsers using temporary identifiers. We encourage the use of PID strings for pseudonyms and the EpiLink algorithm for record linkage. As a proof of concept, we developed a Java Servlet as reference implementation. The following resources have been identified: sessions allow data associated with a client to be stored beyond a single request while still maintaining statelessness; tokens authorize a specified action and thus allow the delegation of authentication; patients are identified by one or more pseudonyms and carry identifying fields. Relying on HTTP calls alone, the interface is firewall-friendly. The reference implementation has proven to be production stable. The RESTful pseudonymization interface fits the requirements of web-based scenarios and allows building applications that make pseudonymization transparent to the user using ordinary web technology. The open-source reference implementation implements the…
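
    A minimal sketch of the resource-oriented flow described above (sessions, tokens, patients) using plain HTTP verbs; the paths, JSON fields, and returned pseudonym are illustrative assumptions, not the interface's actual contract.

    ```python
    import requests

    BASE = "https://pseudonymizer.example.org"   # hypothetical trusted third party

    # 1. Create a session to hold client state across requests.
    session = requests.post(f"{BASE}/sessions", timeout=30).json()

    # 2. Obtain a token that authorizes exactly one "addPatient" action.
    token = requests.post(
        f"{BASE}/sessions/{session['id']}/tokens",
        json={"type": "addPatient"},
        timeout=30,
    ).json()

    # 3. Submit identifying data; only the pseudonym comes back.
    patient = requests.post(
        f"{BASE}/patients",
        params={"tokenId": token["id"]},
        json={"firstName": "Anna", "lastName": "Muster", "birthDate": "1980-01-01"},
        timeout=30,
    ).json()
    print(patient["pseudonym"])  # hypothetical PID string for this patient
    ```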

  5. Crowd Sourcing Data Collection through Amazon Mechanical Turk

    DTIC Science & Technology

    2013-09-01

    The first recognition study consisted of a Panel Study using a simple detection protocol, in which participants were presented with vignettes and, for...variability than the crowdsourcing data set, hewing more closely to the year 1 verbs of interest and simple description grammar . The DT:PS data were...Study RT: PS Recognition Task: Panel Study RT: RT Recognition Task: Round Table S3 Amazon Simple Storage Service SVPA Single Verb Present /Absent

  6. Producing an Infrared Multiwavelength Galactic Plane Atlas Using Montage, Pegasus, and Amazon Web Services

    NASA Astrophysics Data System (ADS)

    Rynge, M.; Juve, G.; Kinney, J.; Good, J.; Berriman, B.; Merrihew, A.; Deelman, E.

    2014-05-01

    … to use dynamically provisioned compute clusters running on the Amazon Elastic Compute Cloud (EC2). All our instances use the same base image, which is configured to come up as a master node by default. The master node is a central instance from which the workflow can be managed. Additional worker instances are provisioned and configured to accept work assignments from the master node. The system allows workers to be added and removed in an ad hoc fashion, and can be run in large configurations. To date we have performed 245,000 CPU hours of computing and generated 7,029 images totaling 30 TB. With the current setup, our runtime would be 340,000 CPU hours for the whole project. Using spot m2.4xlarge instances, the cost would be approximately $5,950. Using faster AWS instances, such as cc2.8xlarge, could potentially decrease the total CPU hours and further reduce the compute costs. The paper will explore these tradeoffs.
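
    The quoted figures imply a spot price of roughly $0.14 per instance-hour. A quick sanity check of that arithmetic, assuming (as the instance type suggests, though the abstract does not state it) 8 vCPUs per m2.4xlarge:

    ```python
    # Back-of-the-envelope check of the cost figures quoted above.
    CPU_HOURS_TOTAL = 340_000          # projected CPU hours for the whole project
    VCPUS_PER_INSTANCE = 8             # m2.4xlarge vCPU count (assumed)
    ESTIMATED_COST_USD = 5_950.0       # quoted spot-instance estimate

    instance_hours = CPU_HOURS_TOTAL / VCPUS_PER_INSTANCE
    implied_spot_price = ESTIMATED_COST_USD / instance_hours

    print(f"{instance_hours:,.0f} instance-hours")          # 42,500
    print(f"~${implied_spot_price:.3f} per instance-hour")  # ~$0.140
    ```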

  7. BioPortal: enhanced functionality via new Web services from the National Center for Biomedical Ontology to access and use ontologies in software applications.

    PubMed

    Whetzel, Patricia L; Noy, Natalya F; Shah, Nigam H; Alexander, Paul R; Nyulas, Csongor; Tudorache, Tania; Musen, Mark A

    2011-07-01

    The National Center for Biomedical Ontology (NCBO) is one of the National Centers for Biomedical Computing funded under the NIH Roadmap Initiative. Contributing to the national computing infrastructure, NCBO has developed BioPortal, a web portal that provides access to a library of biomedical ontologies and terminologies (http://bioportal.bioontology.org) via the NCBO Web services. BioPortal enables community participation in the evaluation and evolution of ontology content by providing features to add mappings between terms, to add comments linked to specific ontology terms and to provide ontology reviews. The NCBO Web services (http://www.bioontology.org/wiki/index.php/NCBO_REST_services) enable this functionality and provide a uniform mechanism to access ontologies from a variety of knowledge representation formats, such as Web Ontology Language (OWL) and Open Biological and Biomedical Ontologies (OBO) format. The Web services provide multi-layered access to the ontology content, from getting all terms in an ontology to retrieving metadata about a term. Users can easily incorporate the NCBO Web services into software applications to generate semantically aware applications and to facilitate structured data collection.
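
    A minimal sketch of incorporating the NCBO Web services into an application: a term search against BioPortal's REST interface. The endpoint, "apikey" parameter, and response fields follow BioPortal's current public API as an assumption; the key itself is a placeholder obtained from a BioPortal account.

    ```python
    import requests

    resp = requests.get(
        "https://data.bioontology.org/search",
        params={"q": "melanoma", "apikey": "YOUR-API-KEY"},  # placeholder key
        timeout=30,
    )
    resp.raise_for_status()
    # Assumes the response JSON carries matches in a "collection" list.
    for hit in resp.json()["collection"][:5]:
        print(hit["@id"], "-", hit.get("prefLabel"))
    ```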

  8. AmazonFACE: Assessing the Effects of Increasing Atmospheric CO2 on the Resilience of the Amazon Forest through Integrative Model-Experiment Research

    NASA Astrophysics Data System (ADS)

    Lapola, D. M.

    2015-12-01

    The existence, magnitude and duration of a supposed "CO2 fertilization" effect in tropical forests remains largely undetermined, despite being suggested for nearly 20 years as a key knowledge gap for understanding the future resilience of Amazonian forests and their impact on the global carbon cycle. Reducing this uncertainty is critical for assessing the future of the Amazon region as well as its vulnerability to climate change. The AmazonFACE (Free-Air CO2 Enrichment) research program is an integrated model-experiment initiative of unprecedented scope in an old-growth Amazon forest near Manaus, Brazil - the first of its kind in a tropical forest. The experimental treatment will simulate an atmospheric CO2 concentration [CO2] of the future in order to address the question: "How will rising atmospheric CO2 affect the resilience of the Amazon forest, the biodiversity it harbors, and the ecosystem services it provides, in light of projected climatic changes?" AmazonFACE is divided into three phases: (I) pre-experimental ecological characterization of the research site; (II) a pilot experiment comprising two 30-m diameter plots, with one treatment plot maintained at elevated [CO2] (ambient +200 ppmv) and the other control plot at ambient [CO2]; and (III) a fully replicated long-term experiment comprising four pairs of control/treatment FACE plots maintained for 10 years. A team of scientists from Brazil, the USA, Australia and Europe will employ state-of-the-art methods to study the forest inside these plots in terms of carbon metabolism and cycling, water use, nutrient cycling, forest community composition, and interactions with environmental stressors. All project phases also encompass ecosystem-modeling activities, such that models provide hypotheses to be verified in the experiment, which in turn will feed the models to ultimately produce more accurate projections of the environment. The resulting datasets and analyses will be a valuable resource for a broad community…

  9. The AmazonFACE research program: assessing the effects of increasing atmospheric CO2 on the ecology and resilience of the Amazon forest

    NASA Astrophysics Data System (ADS)

    Lapola, David; Quesada, Carlos; Norby, Richard; Araújo, Alessandro; Domingues, Tomas; Hartley, Iain; Kruijt, Bart; Lewin, Keith; Meir, Patrick; Ometto, Jean; Rammig, Anja

    2016-04-01

    The existence, magnitude and duration of a supposed "CO2 fertilization" effect in tropical forests remains largely undetermined, despite being suggested for nearly 20 years as a key knowledge gap for understanding the future resilience of Amazonian forests and their impact on the global carbon cycle. Reducing this uncertainty is critical for assessing the future of the Amazon region as well as its vulnerability to climate change. The AmazonFACE (Free-Air CO2 Enrichment) research program is an integrated model-experiment initiative of unprecedented scope in an old-growth Amazon forest near Manaus, Brazil - the first of its kind in a tropical forest. The experimental treatment will simulate an atmospheric CO2 concentration [CO2] of the future in order to address the question: "How will rising atmospheric CO2 affect the resilience of the Amazon forest, the biodiversity it harbors, and the ecosystem services it provides, in light of projected climatic changes?" AmazonFACE is divided into three phases: (I) pre-experimental ecological characterization of the research site; (II) a pilot experiment comprising two 30-m diameter plots, with one treatment plot maintained at elevated [CO2] (ambient +200 ppmv) and the other control plot at ambient [CO2]; and (III) a fully replicated long-term experiment comprising four pairs of control/treatment FACE plots maintained for 10 years. A team of scientists from Brazil, the USA, Australia and Europe will employ state-of-the-art methods to study the forest inside these plots in terms of carbon metabolism and cycling, water use, nutrient cycling, forest community composition, and interactions with environmental stressors. All project phases also encompass ecosystem-modeling activities, such that models provide hypotheses to be verified in the experiment, which in turn will feed the models to ultimately produce more accurate projections of the environment. The resulting datasets and analyses will be a valuable resource for a broad community…

  10. Twitter web-service for soft agent reporting in persistent surveillance systems

    NASA Astrophysics Data System (ADS)

    Rababaah, Haroun; Shirkhodaie, Amir

    2010-04-01

    Persistent surveillance is an intricate process requiring monitoring, gathering, processing, tracking, and characterization of many spatiotemporal events occurring concurrently. Data associated with events can be readily attained by networking of hard (physical) sensors. Sensors may have homogeneous or heterogeneous (hybrid) sensing modalities with different communication bandwidth requirements. Complementary to hard sensors are human observers or "soft sensors" who can report occurrences of evolving events via different communication devices (e.g., texting, cell phones, emails, instant messaging, etc.) to the command and control center. However, networking human observers in an ad hoc way is a rather difficult task. In this paper, we present a Twitter web service for soft agent reporting in persistent surveillance systems (called Web-STARS). The objective of this web service is to rapidly aggregate multi-source human observations in hybrid sensor networks. With the availability of the Twitter social network, such a human networking concept can not only be realized for large-scale persistent surveillance systems (PSS), but can also be employed, with proper interfaces, to expedite rapid event reporting by human observers. The proposed technique is particularly suitable for large-scale persistent surveillance systems with distributed soft and hard sensor networks. The efficiency and effectiveness of the proposed technique is measured experimentally by conducting several simulated persistent surveillance scenarios. It is demonstrated that fusion of information from hard and soft agents improves understanding of the common operating picture and enhances situational awareness.

  11. Modelling multiple threats to water security in the Peruvian Amazon using the WaterWorld Policy Support System

    NASA Astrophysics Data System (ADS)

    van Soesbergen, A. J. J.; Mulligan, M.

    2013-06-01

    This paper explores a multitude of threats to water security in the Peruvian Amazon using the WaterWorld policy support system. WaterWorld is a spatially explicit, physically-based globally-applicable model for baseline and scenario water balance that is particularly well suited to heterogeneous environments with little locally available data (e.g. ungauged basins) and which is delivered through a simple web interface, requiring little local capacity for use. The model is capable of producing a hydrological baseline representing the mean water balance for 1950-2000 and allows for examining impacts of population, climate and land use change as well as land and water management interventions on hydrology. This paper describes the application of WaterWorld to the Peruvian Amazon, an area that is increasingly under pressure from deforestation and water pollution as a result of population growth, rural to urban migration and oil and gas extraction, potentially impacting both water quantity and water quality. By applying single and combined scenarios of: climate change, deforestation around existing and planned roads, population growth and rural-urban migration, mining and oil and gas exploitation, we explore the potential combined impacts of these multiple changes on water resources in the Peruvian Amazon and discuss the likely pathways for adaptation to and mitigation against their worst effects. See Mulligan et al. (2013) for a similar analysis for the entire Amazon Basin.

  12. The Experiences of Older Students' Use of Web-Based Student Services

    ERIC Educational Resources Information Center

    Ho, Katy W.

    2012-01-01

    The purpose of this phenomenological case study was to understand the experiences of older students' use of web-based student services in a community college setting. For the purpose of this study the term "older student" was defined as people born between the years 1943 and 1960. This group of people, often described as the Baby Boomer…

  13. Remote Sensing Information Gateway: A free application and web service for fast, convenient, interoperable access to large repositories of atmospheric data

    NASA Astrophysics Data System (ADS)

    Plessel, T.; Szykman, J.; Freeman, M.

    2012-12-01

    EPA's Remote Sensing Information Gateway (RSIG) is a widely used free applet and web service for quickly and easily retrieving, visualizing and saving user-specified subsets of atmospheric data - by variable, geographic domain and time range. Petabytes of available data include thousands of variables from a set of NASA and NOAA satellites, aircraft, ground stations and EPA air-quality models. The RSIG applet is used by atmospheric researchers and uses the rsigserver web service to obtain data and images. The rsigserver web service is compliant with the Open Geospatial Consortium Web Coverage Service (OGC-WCS) standard to facilitate data discovery and interoperability. Since rsigserver is publicly accessible, it can be (and is) used by other applications. This presentation describes the architecture and technical implementation details of this successful system, with an emphasis on achieving convenience, high performance, data integrity and security.
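
    A minimal sketch of talking to an OGC-WCS endpoint such as rsigserver: a KVP GetCapabilities request, which lists the coverages (variables) a server offers. The base URL is a placeholder; the query parameters follow the OGC-WCS standard named above.

    ```python
    import requests

    BASE = "https://example.gov/rsigserver"  # placeholder for the actual endpoint

    resp = requests.get(
        BASE,
        params={"SERVICE": "WCS", "VERSION": "1.0.0", "REQUEST": "GetCapabilities"},
        timeout=60,
    )
    resp.raise_for_status()
    print(resp.text[:500])  # XML capabilities document listing available coverages
    ```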

  14. Web based aphasia test using service oriented architecture (SOA)

    NASA Astrophysics Data System (ADS)

    Voos, J. A.; Vigliecca, N. S.; Gonzalez, E. A.

    2007-11-01

    Based on an aphasia test for Spanish speakers that analyzes the patient's basic resources of verbal communication, web-enabled software was developed to automate its execution. A clinical database was designed as a complement, in order to evaluate the antecedents (risk factors, pharmacological and medical backgrounds, neurological or psychiatric symptoms, anatomical and physiological characteristics of brain injury, etc.) that are necessary to carry out a multi-factor statistical analysis in different samples of patients. The automated test was developed following a service-oriented architecture and implemented in a web site containing a test suite, which allows both integrating the aphasia test with other neuropsychological instruments and increasing the information available on the site for scientific research. The test design, the database and the study of its psychometric properties (validity, reliability and objectivity) were carried out in conjunction with neuropsychological researchers, who participated actively in the software design, based on feedback from the patients and other research subjects.

  15. Oh! Web 2.0, Virtual Reference Service 2.0, Tools & Techniques (II)

    ERIC Educational Resources Information Center

    Arya, Harsh Bardhan; Mishra, J. K.

    2012-01-01

    The paper describes the theory and definition of the practice of librarianship, specifically addressing how Web 2.0 technologies (tools) such as synchronous messaging, collaborative reference service and streaming media, blogs, wikis, social networks, social bookmarking tools, tagging, RSS feeds, and mashups might intimate changes and how…

  16. Socio-ecological costs of Amazon nut and timber production at community household forests in the Bolivian Amazon.

    PubMed

    Soriano, Marlene; Mohren, Frits; Ascarrunz, Nataly; Dressler, Wolfram; Peña-Claros, Marielos

    2017-01-01

    The Bolivian Amazon holds a complex configuration of people and forested landscapes in which communities hold secure tenure rights over a rich ecosystem offering a range of livelihood income opportunities. A large share of this income is derived from Amazon nut (Bertholletia excelsa). Many communities also have long-standing experience with community timber management plans. However, livelihood needs and desires for better living conditions may continue to place these resources under considerable stress as income needs and opportunities intensify and diversify. We aim to identify the socioeconomic and biophysical factors determining the income from forests, husbandry, off-farm and two keystone forest products (i.e., Amazon nut and timber) in the Bolivian Amazon region. We used structural equation modelling tools to account for the complex inter-relationships between socioeconomic and biophysical factors in predicting each source of income. The potential exists to increase incomes from existing livelihood activities in ways that reduce dependency upon forest resources. For example, changes in off-farm income sources can act to increase or decrease forest incomes. Market accessibility, social, financial, and natural and physical assets determined the amount of income community households could derive from Amazon nut and timber. Factors related to community households' local ecological knowledge, such as the number of non-timber forest products harvested and the number of management practices applied to enhance Amazon nut production, defined the amount of income these households could derive from Amazon nut and timber, respectively. The (inter) relationships found among socioeconomic and biophysical factors over income shed light on ways to improve forest-dependent livelihoods in the Bolivian Amazon. We believe that our analysis could be applicable to other contexts throughout the tropics as well.

  17. Socio-ecological costs of Amazon nut and timber production at community household forests in the Bolivian Amazon

    PubMed Central

    Mohren, Frits; Ascarrunz, Nataly; Dressler, Wolfram; Peña-Claros, Marielos

    2017-01-01

    The Bolivian Amazon holds a complex configuration of people and forested landscapes in which communities hold secure tenure rights over a rich ecosystem offering a range of livelihood income opportunities. A large share of this income is derived from Amazon nut (Bertholletia excelsa). Many communities also have long-standing experience with community timber management plans. However, livelihood needs and desires for better living conditions may continue to place these resources under considerable stress as income needs and opportunities intensify and diversify. We aim to identify the socioeconomic and biophysical factors determining the income from forests, husbandry, off-farm and two keystone forest products (i.e., Amazon nut and timber) in the Bolivian Amazon region. We used structural equation modelling tools to account for the complex inter-relationships between socioeconomic and biophysical factors in predicting each source of income. The potential exists to increase incomes from existing livelihood activities in ways that reduce dependency upon forest resources. For example, changes in off-farm income sources can act to increase or decrease forest incomes. Market accessibility, social, financial, and natural and physical assets determined the amount of income community households could derive from Amazon nut and timber. Factors related to community households’ local ecological knowledge, such as the number of non-timber forest products harvested and the number of management practices applied to enhance Amazon nut production, defined the amount of income these households could derive from Amazon nut and timber, respectively. The (inter) relationships found among socioeconomic and biophysical factors over income shed light on ways to improve forest-dependent livelihoods in the Bolivian Amazon. We believe that our analysis could be applicable to other contexts throughout the tropics as well. PMID:28235090

  18. Factors that influence acceptance of web-based e-learning systems for the in-service education of junior high school teachers in Taiwan.

    PubMed

    Chen, Hong-Ren; Tseng, Hsiao-Fen

    2012-08-01

    Web-based e-learning is not restricted by time or place and can provide teachers with a learning environment that is flexible and convenient, enabling them to learn efficiently, quickly develop their professional expertise, and advance professionally. Many research reports on web-based e-learning have neglected the teacher's perspective in the acceptance of web-based e-learning systems for in-service education. We distributed questionnaires to 402 junior high school teachers in central Taiwan. This study used the Technology Acceptance Model (TAM) as its theoretical foundation and employed the Structural Equation Model (SEM) to examine factors that influenced intentions to use in-service training conducted through web-based e-learning. The results showed that motivation to use and Internet self-efficacy were significantly positively associated with behavioral intentions regarding the use of web-based e-learning for in-service training through the factors of perceived usefulness and perceived ease of use. The factor of computer anxiety had a significantly negative effect on behavioral intentions toward web-based e-learning in-service training through the factor of perceived ease of use. Perceived usefulness and motivation to use were the primary reasons for the acceptance by junior high school teachers of web-based e-learning systems for in-service training.

  19. Security of the Brazilian Amazon Area

    DTIC Science & Technology

    1992-04-01

    effect in Amazonia". Brazil’s Institute for Space Research. São Paulo, April 1991: 5-6. Thompson, Dick. "A Global Agenda for the Amazon." Time, 18...to be overcome as Brazil pursues settlement and development of the Amazon. The natural ecologic systems of the Amazon must be defended with...agricultural techniques appropriate to the region and developed within the context of a comprehensive, responsible program that meets Brazil’s needs for

  20. Collaborative Science Using Web Services and the SciFlo Grid Dataflow Engine

    NASA Astrophysics Data System (ADS)

    Wilson, B. D.; Manipon, G.; Xing, Z.; Yunck, T.

    2006-12-01

    The General Earth Science Investigation Suite (GENESIS) project is a NASA-sponsored partnership between the Jet Propulsion Laboratory, academia, and NASA data centers to develop a new suite of Web Services tools to facilitate multi-sensor investigations in Earth System Science. The goal of GENESIS is to enable large-scale, multi-instrument atmospheric science using combined datasets from the AIRS, MODIS, MISR, and GPS sensors. Investigations include cross-comparison of spaceborne climate sensors, cloud spectral analysis, study of upper troposphere-stratosphere water transport, study of the aerosol indirect cloud effect, and global climate model validation. The challenges are to bring together very large datasets, reformat and understand the individual instrument retrievals, co-register or re-grid the retrieved physical parameters, perform computationally intensive data fusion and data mining operations, and accumulate complex statistics over months to years of data. To meet these challenges, we have developed a Grid computing and dataflow framework, named SciFlo, in which we are deploying a set of versatile and reusable operators for data access, subsetting, registration, mining, fusion, compression, and advanced statistical analysis. SciFlo leverages remote Web Services, called via Simple Object Access Protocol (SOAP) or REST (one-line) URLs, and Grid computing standards (WS-* & Globus Alliance toolkits), and enables scientists to do multi-instrument Earth Science by assembling reusable Web Services and native executables into a distributed computing flow (a tree of operators). The SciFlo client & server engines optimize the execution of such distributed data flows and allow the user to transparently find and use datasets and operators without worrying about the actual location of the Grid resources. In particular, SciFlo exploits the wealth of datasets accessible by OpenGIS Consortium (OGC) Web Mapping Servers & Web Coverage Servers (WMS/WCS), and by Open Data…

  1. A Survey of the Usability of Digital Reference Services on Academic Health Science Library Web Sites

    ERIC Educational Resources Information Center

    Dee, Cheryl; Allen, Maryellen

    2006-01-01

    Reference interactions with patrons in a digital library environment using digital reference services (DRS) has become widespread. However, such services in many libraries appear to be underutilized. A study surveying the ease and convenience of such services for patrons in over 100 academic health science library Web sites suggests that…

  2. Exploring U.S Cropland - A Web Service based Cropland Data Layer Visualization, Dissemination and Querying System (Invited)

    NASA Astrophysics Data System (ADS)

    Yang, Z.; Han, W.; di, L.

    2010-12-01

    The National Agricultural Statistics Service (NASS) of the USDA produces the Cropland Data Layer (CDL) product, which is a raster-formatted, geo-referenced, U.S. crop-specific land cover classification. These digital data layers are widely used for a variety of applications by universities, research institutions, government agencies, and private industry in climate change studies, environmental ecosystem studies, bioenergy production and transportation planning, environmental health research, and agricultural production decision making. The CDL is also used internally by NASS for crop acreage and yield estimation. Like most geospatial data products, the CDL product is only available by CD/DVD delivery or online bulk file downloading via the Natural Resources Conservation Service (NRCS) Geospatial Data Gateway (for external users), or in a printed paper map format. There is no online geospatial information access and dissemination, no crop visualization and browsing, no geospatial query capability, and no online analytics. To facilitate the application of this data layer and to help disseminate the data, a web-service-based CDL interactive map visualization, dissemination and querying system is proposed. It uses a web-service-based service-oriented architecture, adopts open-standard geospatial information science technology and OGC specifications and standards, and re-uses functions/algorithms from GeoBrain technology (developed by George Mason University). This system provides capabilities for online geospatial crop information access, query and online analytics via interactive maps. It disseminates all data to decision makers and users via real-time retrieval, processing and publishing over the web through standards-based geospatial web services. A CDL region of interest can also be exported directly to Google Earth for mashup, or downloaded for use with other desktop applications. This web-service-based system greatly improves equal accessibility, interoperability, usability…
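
    A minimal sketch of the kind of standards-based access such a system exposes: an OGC WMS GetMap request for a crop-cover image. The base URL and layer name are placeholders; the query parameters are standard WMS 1.1.1 KVP.

    ```python
    import requests

    BASE = "https://example.gov/cdl/wms"  # placeholder service endpoint

    params = {
        "SERVICE": "WMS", "VERSION": "1.1.1", "REQUEST": "GetMap",
        "LAYERS": "cdl_2009",                 # hypothetical CDL layer name
        "SRS": "EPSG:4326",
        "BBOX": "-98.0,40.0,-96.0,42.0",      # lon/lat box over Nebraska
        "WIDTH": "512", "HEIGHT": "512",
        "FORMAT": "image/png",
    }

    resp = requests.get(BASE, params=params, timeout=60)
    resp.raise_for_status()
    with open("cdl_subset.png", "wb") as f:
        f.write(resp.content)  # rendered crop-cover map for the requested box
    ```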

  3. Sharing environmental models: An Approach using GitHub repositories and Web Processing Services

    NASA Astrophysics Data System (ADS)

    Stasch, Christoph; Nuest, Daniel; Pross, Benjamin

    2016-04-01

    The GLUES (Global Assessment of Land Use Dynamics, Greenhouse Gas Emissions and Ecosystem Services) project established a spatial data infrastructure for scientific geospatial data and metadata (http://geoportal-glues.ufz.de), where different regional collaborative projects researching the impacts of climate and socio-economic changes on sustainable land management can share their underlying base scenarios and datasets. One goal of the project is to ease the sharing of computational models between institutions and to make them easily executable in Web-based infrastructures. In this work, we present such an approach for sharing computational models, relying on GitHub repositories (http://github.com) and Web Processing Services. First, model providers upload their model implementations to GitHub repositories in order to share them with others. The GitHub platform allows users to submit changes to the model code; the changes can be discussed and reviewed before being merged. However, while GitHub supports sharing and collaborative development of model source code, it does not actually allow running these models, which requires transferring the implementation to a model execution framework. We have therefore extended an existing implementation of the OGC Web Processing Service standard (http://www.opengeospatial.org/standards/wps), the 52°North Web Processing Service (http://52north.org/wps) platform, to retrieve model implementations from a git (http://git-scm.com) repository and add them to the collection of published geoprocesses. The current implementation is restricted to models implemented as R scripts using WPS4R annotations (Hinz et al.) and to Java algorithms using the 52°North WPS Java API. The models hence become executable through a standardized Web API by multiple clients, such as desktop or browser GIS and modelling frameworks. If the model code is changed on the GitHub platform, the changes are retrieved by the service and the processes are updated.
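
    A minimal sketch of invoking a published model through the standardized WPS API once the service has imported it from a repository. The endpoint, process identifier, and inputs are placeholders; the KVP parameters follow OGC WPS 1.0.0.

    ```python
    import requests

    BASE = "https://example.org/wps"  # placeholder WPS endpoint

    params = {
        "service": "WPS",
        "version": "1.0.0",
        "request": "Execute",
        "identifier": "org.example.landuse.model",      # hypothetical process id
        "datainputs": "scenario=A1B;year=2050",         # hypothetical inputs
    }

    resp = requests.get(BASE, params=params, timeout=120)
    resp.raise_for_status()
    print(resp.text[:500])  # WPS ExecuteResponse XML with outputs or status
    ```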

  4. A Different Web-Based Geocoding Service Using Fuzzy Techniques

    NASA Astrophysics Data System (ADS)

    Pahlavani, P.; Abbaspour, R. A.; Zare Zadiny, A.

    2015-12-01

    Geocoding - the process of finding a position based on descriptive data such as an address or postal code - is considered one of the most commonly used spatial analyses. Many online map providers such as Google Maps, Bing Maps and Yahoo Maps offer geocoding as one of their basic capabilities. Despite the diversity of geocoding services, users usually face some limitations when they use the available online services. In existing geocoding services, the concept of proximity and nearness is not modelled appropriately, and these services search for an address only by matching the descriptive data. In addition, there are some limitations in displaying search results. Resolving these limitations can enhance the efficiency of existing geocoding services. This paper proposes the idea of integrating fuzzy techniques into the geocoding process to resolve these limitations. In order to implement the proposed method, a web-based system was designed. In the proposed method, nearness to places is defined by fuzzy membership functions, and multiple fuzzy distance maps are created. These fuzzy distance maps are then integrated using a fuzzy overlay technique to obtain the results. The proposed method provides different capabilities for users, such as the ability to search multi-part addresses, to search for places based on their location, non-point representation of results, and display of search results based on their priority.
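
    A minimal sketch of the fuzzy-nearness idea described above: a decreasing membership function maps distance to a degree of "nearness" in [0, 1], and candidate locations are ranked by a fuzzy overlay (here, the minimum of the memberships). The breakpoints (200 m, 1000 m) are illustrative assumptions, not the paper's values.

    ```python
    def nearness(distance_m, full=200.0, zero=1000.0):
        """Linear fuzzy membership: 1 within `full` metres, 0 beyond `zero`."""
        if distance_m <= full:
            return 1.0
        if distance_m >= zero:
            return 0.0
        return (zero - distance_m) / (zero - full)

    # Candidate locations with distances (in metres) to two requested landmarks.
    candidates = {
        "A": (150.0, 800.0),
        "B": (400.0, 300.0),
    }

    # Fuzzy overlay: a candidate is only as good as its worst nearness score.
    scores = {k: min(nearness(d1), nearness(d2)) for k, (d1, d2) in candidates.items()}
    for name, score in sorted(scores.items(), key=lambda kv: -kv[1]):
        print(name, round(score, 2))  # B (0.75) ranks above A (0.25)
    ```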

  5. The Amazons and an analysis of breast mutilation from a plastic surgeon's perspective.

    PubMed

    Karacalar, Ahmet

    2007-03-01

    The Amazon philosophy has been increasing in popularity because of the evolving status of women in society. Many references point to Themiscyra on the southern coast of the Black Sea in Anatolia as the Amazon homeland. The primary objective of this article is to discuss the distinctive femininity of the Amazons and their breast mutilation from the perspective of a plastic surgeon living in the region the Amazons inhabited. Findings from archaeology, linguistics, anthropology, medicine, history, psychology, and the fine arts were integrated. The hypotheses that have been proposed to explain the method of breast mutilation include amputation, cauterization, breast searing, and breast pinching. It is generally believed that the primary purpose was to facilitate the efficient use of a bow. Another explanation is that breast mutilation was performed for medical reasons, including the prevention of breast pain, the development of a tender lump, or cancer. A further school of thought involves religious and sociological reasons: breast mutilation was a badge of honor for warrior women, a sign that a woman had become a real warrior, and a sacrifice to Artemis as a sign of service. Much indirect proof and archaeological evidence points to their historical existence. The Amazons, who lived in an autonomous and original social model, changed their image and function to suit the needs of society and the times.

  6. The use of geospatial web services for exchanging utilities data

    NASA Astrophysics Data System (ADS)

    Kuczyńska, Joanna

    2013-04-01

    Geographic information technologies and related geo-information systems currently play an important role in the management of public administration in Poland. One of these tasks is maintaining and updating the Geodetic Evidence of Public Utilities (GESUT), part of the National Geodetic and Cartographic Resource, which contains information on technical infrastructure that is important to many institutions. This requires an active exchange of data between the Geodesy and Cartography Documentation Centers and the institutions that administer transmission lines. The administrator of public utilities is legally obliged to provide information about utilities to GESUT. The aim of the research work was to develop a universal data exchange methodology that can be implemented on a variety of hardware and software platforms. This methodology uses the Unified Modeling Language (UML), the eXtensible Markup Language (XML), and the Geography Markup Language (GML). The proposed methodology is based on two different strategies: Model Driven Architecture (MDA) and Service Oriented Architecture (SOA). The solutions used are consistent with the INSPIRE Directive and the ISO 19100 series of standards for geographic information. On the basis of an analysis of the input data structures, conceptual models were built for both databases and written in UML. A combined model defining a common data structure was also built and transformed into GML, the standard developed for the exchange of geographic information. The structure of the document describing the data that may be exchanged is defined in an .xsd file. Network services were selected and implemented in the data exchange system based on open source tools. The methodology was implemented and tested. Data in the agreed data structure, together with metadata, were set up on the server. Data access was provided by geospatial network services: data searching by Catalogue Service for the Web (CSW), data…
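
    A minimal sketch of the discovery step mentioned above: a CSW GetRecords KVP request against a catalogue service. The endpoint is a placeholder; the parameters follow the OGC CSW 2.0.2 standard.

    ```python
    import requests

    BASE = "https://example.gov.pl/geoportal/csw"  # placeholder CSW endpoint

    params = {
        "service": "CSW",
        "version": "2.0.2",
        "request": "GetRecords",
        "typeNames": "csw:Record",
        "elementSetName": "brief",
        "resultType": "results",
        "maxRecords": "10",
    }

    resp = requests.get(BASE, params=params, timeout=60)
    resp.raise_for_status()
    print(resp.text[:500])  # GetRecordsResponse XML with matching metadata records
    ```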

  7. Amazon Forest Responses to Drought and Fire

    NASA Astrophysics Data System (ADS)

    Morton, D. C.

    2015-12-01

    Deforestation and agricultural land uses provide a consistent source of ignitions along the Amazon frontier during the dry season. The risk of understory fires in Amazon forests is amplified by drought conditions, when fires at the forest edge may spread for weeks before rains begin. Fire activity also impacts the regional response of intact forests to drought through diffuse light effects and nutrient redistribution, highlighting the complexity of feedbacks in this coupled human and natural system. This talk will focus on recent advances in our understanding of fire-climate feedbacks in the Amazon, building on research themes initiated under NASA's Large-scale Biosphere-Atmosphere Experiment in Amazonia (LBA). NASA's LBA program began in the wake of the 1997-1998 El Niño, a strong event that exposed the vulnerability of Amazon forests to drought and fire under current climate and projections of climate change. With forecasts of another strong El Niño event in 2015-2016, this talk will provide a multi-scale synthesis of Amazon forest responses to drought and fire based on field measurements, airborne lidar data, and satellite observations of fires, rainfall, and terrestrial water storage. These studies offer new insights into the mechanisms governing fire season severity in the southern Amazon and regional variability in carbon losses from understory fires. The contributions from remote sensing to our understanding of drought and fire in Amazon forests reflect the legacy of NASA's LBA program and the sustained commitment to interdisciplinary research across the Amazon region.

  8. Building Geospatial Web Services for Ecological Monitoring and Forecasting

    NASA Astrophysics Data System (ADS)

    Hiatt, S. H.; Hashimoto, H.; Melton, F. S.; Michaelis, A. R.; Milesi, C.; Nemani, R. R.; Wang, W.

    2008-12-01

    The Terrestrial Observation and Prediction System (TOPS) at NASA Ames Research Center is a modeling system that generates a suite of gridded data products in near real-time that are designed to enhance management decisions related to floods, droughts, forest fires, human health, as well as crop, range, and forest production. While these data products introduce great possibilities for assisting management decisions and informing further research, realization of their full potential is complicated by their sheer volume and by the need for an infrastructure for remotely browsing, visualizing, and analyzing the data. In order to address these difficulties we have built an OGC-compliant WMS and WCS server based on an open source software stack that provides standardized access to our archive of data. This server is built using the open source Java library GeoTools, which achieves efficient I/O and image rendering through Java Advanced Imaging. We developed spatio-temporal raster management capabilities using the PostGrid raster indexation engine. We provide visualization and browsing capabilities through a customized Ajax web interface derived from the kaMap project. This interface allows resource managers to quickly assess ecosystem conditions and identify significant trends and anomalies from within their web browser without the need to download source data or install special software. Our standardized web services also expose TOPS data to a range of potential clients, from web mapping applications to virtual globes and desktop GIS packages. However, support for managing the temporal dimension of our data is currently limited in existing software systems. Future work will attempt to overcome this shortcoming by building time-series visualization and analysis tools that can be integrated with existing geospatial software.

  9. Project Photofly: New 3D Modeling Online Web Service (Case Studies and Assessments)

    NASA Astrophysics Data System (ADS)

    Abate, D.; Furini, G.; Migliori, S.; Pierattini, S.

    2011-09-01

    In summer 2010, Autodesk released Project Photofly, a still-ongoing project that was freely downloadable from the Autodesk Labs web site until August 1, 2011. Based on computer vision and photogrammetric principles and exploiting the power of cloud computing, Project Photofly is a web service able to convert collections of photographs into 3D models. The aim of our research was to evaluate Project Photofly, through different case studies, for 3D modeling of cultural heritage monuments and objects, chiefly to identify the goals and objects for which it is suitable. The analysis concentrates mainly on the automatic approach.

  10. EPEPT: A web service for enhanced P-value estimation in permutation tests

    PubMed Central

    2011-01-01

    Background In computational biology, permutation tests have become a widely used tool to assess the statistical significance of an event under investigation. However, the common way of computing the P-value, which expresses the statistical significance, requires a very large number of permutations when small (and thus interesting) P-values are to be accurately estimated. This is computationally expensive and often infeasible. Recently, we proposed an alternative estimator, which requires far fewer permutations compared to the standard empirical approach while still reliably estimating small P-values [1]. Results The proposed P-value estimator has been enriched with additional functionalities and is made available to the general community through a public website and web service, called EPEPT. This means that the EPEPT routines can be accessed not only via a website, but also programmatically using any programming language that can interact with the web. Examples of web service clients in multiple programming languages can be downloaded. Additionally, EPEPT accepts data of various common experiment types used in computational biology. For these experiment types EPEPT first computes the permutation values and then performs the P-value estimation. Finally, the source code of EPEPT can be downloaded. Conclusions Different types of users, such as biologists, bioinformaticians and software engineers, can use the method in an appropriate and simple way. Availability http://informatics.systemsbiology.net/EPEPT/ PMID:22024252
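
    To see why the standard approach needs so many permutations, note that with B permutations the smallest attainable empirical P-value is 1/(B+1). The sketch below implements that standard empirical estimator for a two-sample test; it is illustrative only and is not the enhanced EPEPT estimator itself.

    ```python
    # Standard empirical permutation P-value for a difference of means.
    # With n_perm permutations, the smallest possible P-value is
    # 1/(n_perm + 1), which is why small P-values are expensive.
    import numpy as np

    rng = np.random.default_rng(0)

    def empirical_pvalue(x, y, n_perm=10_000):
        observed = abs(x.mean() - y.mean())
        pooled = np.concatenate([x, y])
        count = 0
        for _ in range(n_perm):
            rng.shuffle(pooled)
            stat = abs(pooled[:len(x)].mean() - pooled[len(x):].mean())
            if stat >= observed:
                count += 1
        # add-one correction keeps the estimate valid and nonzero
        return (count + 1) / (n_perm + 1)

    x = rng.normal(1.0, 1.0, 50)
    y = rng.normal(0.0, 1.0, 50)
    print(empirical_pvalue(x, y))
    ```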

  11. Building the Service-Based Library Web Site: A Step-by-Step Guide to Design and Options.

    ERIC Educational Resources Information Center

    Garlock, Kristen L.; Piontek, Sherry

    The World Wide Web, with its captivating multimedia features and hypertext capabilities, has brought millions of new users to the Internet. Library staff who could create a home page on the Web could present basic information about the library and its services, showcase its resources, create links to quality material inside and outside the…

  12. [The Amazon Sanitation Plan (1940-1942)].

    PubMed

    Andrade, Rômulo de Paula; Hochman, Gilberto

    2007-12-01

    The article addresses the Amazon Sanitation Plan and the political context in which it was formulated between 1940 and 1941. It examines the role of Getúlio Vargas, the activities of the plan's main protagonists (such as Evandro Chagas, João de Barros Barreto, and Valério Konder), its key proposals, and its demise as of 1942 upon creation of the Special Public Health Service (Sesp), which grew out of cooperation agreements between Brazil and the US following both nations' involvement in World War II. A reproduction of the Plan as published in the Arquivos de Higiene in 1941 is included.

  13. A Web Service-Based Framework Model for People-Centric Sensing Applications Applied to Social Networking

    PubMed Central

    Nunes, David; Tran, Thanh-Dien; Raposo, Duarte; Pinto, André; Gomes, André; Silva, Jorge Sá

    2012-01-01

    As the Internet evolved, social networks (such as Facebook) have bloomed and brought together an astonishing number of users. Mashing up mobile phones and sensors with these social environments enables the creation of people-centric sensing systems which have great potential for expanding our current social networking usage. However, such systems also have many associated technical challenges, such as privacy concerns, activity detection mechanisms or intermittent connectivity, as well as limitations due to the heterogeneity of sensor nodes and networks. Considering the openness of the Web 2.0, good technical solutions for these cases consist of frameworks that expose sensing data and functionalities as common Web-Services. This paper presents our RESTful Web Service-based model for people-centric sensing frameworks, which uses sensors and mobile phones to detect users’ activities and locations, sharing this information amongst the user’s friends within a social networking site. We also present some screenshot results of our experimental prototype. PMID:22438732
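
    As a rough sketch of the Web Service pattern the paper describes, the snippet below POSTs a sensor-derived activity reading to a RESTful endpoint. The URL and JSON fields are invented for illustration and are not the framework's actual API.

    ```python
    # Sketch: a mobile client pushing an activity/location reading to a
    # hypothetical people-centric sensing REST service.
    import requests

    reading = {
        "user": "alice",                 # hypothetical user id
        "activity": "walking",           # e.g., inferred from accelerometer data
        "location": {"lat": 40.2056, "lon": -8.4196},
        "timestamp": "2012-03-01T12:00:00Z",
    }

    resp = requests.post(
        "https://example.org/sensing/api/readings",  # hypothetical endpoint
        json=reading,
        timeout=30,
    )
    print(resp.status_code)
    ```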

  14. A Web Service-based framework model for people-centric sensing applications applied to social networking.

    PubMed

    Nunes, David; Tran, Thanh-Dien; Raposo, Duarte; Pinto, André; Gomes, André; Silva, Jorge Sá

    2012-01-01

    As the Internet evolved, social networks (such as Facebook) have bloomed and brought together an astonishing number of users. Mashing up mobile phones and sensors with these social environments enables the creation of people-centric sensing systems which have great potential for expanding our current social networking usage. However, such systems also have many associated technical challenges, such as privacy concerns, activity detection mechanisms or intermittent connectivity, as well as limitations due to the heterogeneity of sensor nodes and networks. Considering the openness of the Web 2.0, good technical solutions for these cases consist of frameworks that expose sensing data and functionalities as common Web-Services. This paper presents our RESTful Web Service-based model for people-centric sensing frameworks, which uses sensors and mobile phones to detect users' activities and locations, sharing this information amongst the user's friends within a social networking site. We also present some screenshot results of our experimental prototype.

  15. Web-Based Medical Service: Technology Attractiveness, Medical Creditability, Information Source, and Behavior Intention.

    PubMed

    Wang, Shan Huei

    2017-08-02

    Web-based medical service (WBMS), a cooperative relationship between medical service and Internet technology, has been called one of the most innovative services of the 21st century. However, its promotion and implementation in the medical industry have not developed as expected. Few studies have explored this phenomenon from the viewpoint of inexperienced patients. The primary goal of this study was to explore whether technology attractiveness, medical credibility, and diversified medical information sources could increase users' behavior intention. This study explored the effectiveness of web-based medical service by using three situations to manipulate sources of medical information. A total of 150 questionnaires were collected from people who had never used WBMS before. Hierarchical regression was used to examine the mediation and moderated-mediation effects. Perceived ease of use (P=.002) and perceived usefulness (P=.001) significantly enhance behavior intentions. Medical credibility is a mediator (P=.03), but the relationship does not significantly differ under diverse manipulative information channels (P=.39). Medical credibility explains additional variation between technology attractiveness and behavior intention, but this mediation is not significantly moderated by the medical information source.
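
    As a schematic of the mediation logic tested in the study, the sketch below runs the classic regression-based mediation steps on simulated data. The variable names and effect sizes are invented and do not reproduce the authors' data or analysis.

    ```python
    # Sketch: does M (medical credibility) mediate the effect of
    # X (technology attractiveness) on Y (behavior intention)?
    import numpy as np
    import statsmodels.api as sm

    rng = np.random.default_rng(1)
    n = 150
    x = rng.normal(size=n)                      # technology attractiveness
    m = 0.5 * x + rng.normal(size=n)            # medical credibility (mediator)
    y = 0.3 * x + 0.4 * m + rng.normal(size=n)  # behavior intention

    total = sm.OLS(y, sm.add_constant(x)).fit()                       # X -> Y
    a_path = sm.OLS(m, sm.add_constant(x)).fit()                      # X -> M
    full = sm.OLS(y, sm.add_constant(np.column_stack([x, m]))).fit()  # X + M -> Y

    # Mediation is suggested when the direct effect of X shrinks once M
    # is included alongside it.
    print("total effect of X:", total.params[1])
    print("direct effect of X given M:", full.params[1])
    ```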

  16. Design and development of a tele-healthcare information system based on web services and HL7 standards.

    PubMed

    Huang, Ean-Wen; Hung, Rui-Suan; Chiou, Shwu-Fen; Liu, Fei-Ying; Liou, Der-Ming

    2011-01-01

    Information and communication technologies progress rapidly and many novel applications have been developed in many domains of human life. In recent years, the demand for healthcare services has been growing because of the increase in the elderly population. Consequently, a number of healthcare institutions have focused on creating technologies to reduce extraneous work and improve the quality of service. In this study, an information platform for tele-healthcare services was implemented. The architecture of the platform included a web-based application server and a client system. The client system was able to retrieve the blood pressure and glucose levels of a patient stored in measurement instruments through Bluetooth wireless transmission. The web application server assisted staff and clients in analyzing the health conditions of patients. In addition, the server provided face-to-face communications and instructions through remote video devices. The platform deployed a service-oriented architecture, which consisted of HL7 standard messages and web service components. The platform could transfer health records into HL7 standard clinical document architecture for data exchange with other organizations. The prototype system was pretested and evaluated in the homecare department of a hospital and a community management center for chronic disease monitoring. Based on the results of this study, this system is expected to improve the quality of healthcare services.
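
    To make the message-exchange idea concrete, the sketch below wraps a home blood-pressure reading in a greatly simplified HL7 v2-style message. This is an illustrative assumption only; the platform described here exchanges full HL7 standard messages and CDA documents, not this toy structure.

    ```python
    # Sketch: a minimal, simplified HL7 v2-style observation message for a
    # blood-pressure reading (LOINC 8480-6 systolic, 8462-4 diastolic).
    # Real HL7 messages carry many more required fields and segments.
    from datetime import datetime

    def build_oru_message(patient_id, systolic, diastolic):
        ts = datetime.now().strftime("%Y%m%d%H%M%S")
        segments = [
            f"MSH|^~\\&|HOMECARE|CLIENT|TELECARE|SERVER|{ts}||ORU^R01|MSG0001|P|2.5",
            f"PID|||{patient_id}",
            f"OBX|1|NM|8480-6^Systolic blood pressure^LN||{systolic}|mm[Hg]|||||F",
            f"OBX|2|NM|8462-4^Diastolic blood pressure^LN||{diastolic}|mm[Hg]|||||F",
        ]
        return "\r".join(segments)

    print(build_oru_message("12345", 128, 82))
    ```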

  17. I Help, Therefore, I Learn: Service Learning on Web 2.0 in an EFL Speaking Class

    ERIC Educational Resources Information Center

    Sun, Yu-Chih; Yang, Fang-Ying

    2015-01-01

    The present study integrates service learning into English as a Foreign Language (EFL) speaking class using Web 2.0 tools--YouTube and Facebook--as platforms. Fourteen undergraduate students participated in the study. The purpose of the service-learning project was to link service learning with oral communication training in an EFL speaking class…

  18. Meta4: a web application for sharing and annotating metagenomic gene predictions using web services.

    PubMed

    Richardson, Emily J; Escalettes, Franck; Fotheringham, Ian; Wallace, Robert J; Watson, Mick

    2013-01-01

    Whole-genome shotgun metagenomics experiments produce DNA sequence data from entire ecosystems, and provide a huge amount of novel information. Gene discovery projects require up-to-date information about sequence homology and domain structure for millions of predicted proteins to be presented in a simple, easy-to-use system. There is a lack of simple, open, flexible tools that allow the rapid sharing of metagenomics datasets with collaborators in a format they can easily interrogate. We present Meta4, a flexible and extensible web application that can be used to share and annotate metagenomic gene predictions. Proteins and predicted domains are stored in a simple relational database, with a dynamic front-end which displays the results in an internet browser. Web services are used to provide up-to-date information about the proteins from homology searches against public databases. Information about Meta4 can be found on the project website, code is available on Github, a cloud image is available, and an example implementation is available online.

  19. A Mediator-Based Approach to Resolving Interface Heterogeneity of Web Services

    NASA Astrophysics Data System (ADS)

    Leitner, Philipp; Rosenberg, Florian; Michlmayr, Anton; Huber, Andreas; Dustdar, Schahram

    In theory, service-oriented architectures are based on the idea of increasing flexibility in the selection of internal and external business partners using loosely coupled services. However, in practice this flexibility is limited by the fact that partners need not only to provide the same service, but to do so via virtually the same interface, in order to be easily interchangeable. Invocation-level mediation may be used to overcome this issue: by using mediation, interface differences can be resolved transparently at runtime. In this chapter we discuss the basic ideas of mediation, with a focus on interface-level mediation. We show how interface mediation is integrated into our dynamic Web service invocation framework DAIOS, and present three different mediation strategies: one based on structural message similarity, one based on semantically annotated WSDL, and one embedded into the VRESCo SOA runtime, a larger research project with explicit support for service mediation.
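
    The core idea of invocation-level mediation can be illustrated with a toy mapping layer: the client issues one canonical request, and per-provider mappings resolve structural interface differences at call time. All names below are invented for illustration; DAIOS itself is a Java framework.

    ```python
    # Toy illustration of interface mediation: one canonical client request
    # is restructured to fit each provider's differing message schema.
    def mediate(request, mapping):
        """Rename request fields according to a provider-specific mapping."""
        return {provider_key: request[client_key]
                for client_key, provider_key in mapping.items()}

    canonical = {"city": "Vienna", "date": "2009-05-01"}

    provider_a = {"city": "location", "date": "day"}         # provider A's schema
    provider_b = {"city": "cityName", "date": "travelDate"}  # provider B's schema

    print(mediate(canonical, provider_a))  # {'location': 'Vienna', 'day': '2009-05-01'}
    print(mediate(canonical, provider_b))  # {'cityName': 'Vienna', 'travelDate': '2009-05-01'}
    ```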

  20. Effective electron-density map improvement and structure validation on a Linux multi-CPU web cluster: The TB Structural Genomics Consortium Bias Removal Web Service.

    PubMed

    Reddy, Vinod; Swanson, Stanley M; Segelke, Brent; Kantardjieff, Katherine A; Sacchettini, James C; Rupp, Bernhard

    2003-12-01

    Anticipating a continuing increase in the number of structures solved by molecular replacement in high-throughput crystallography and drug-discovery programs, a user-friendly web service for automated molecular replacement, map improvement, bias removal and real-space correlation structure validation has been implemented. The service is based on an efficient bias-removal protocol, Shake&wARP, and implemented using EPMR and the CCP4 suite of programs, combined with various shell scripts and Fortran90 routines. The service returns improved maps, converted data files and real-space correlation and B-factor plots. User data are uploaded through a web interface and the CPU-intensive iteration cycles are executed on a low-cost Linux multi-CPU cluster using the Condor job-queuing package. Examples of map improvement at various resolutions are provided and include model completion and reconstruction of absent parts, sequence correction, and ligand validation in drug-target structures.

  1. School Counselors’ Perspectives of a Web-Based Stepped Care Mental Health Service for Schools: Cross-Sectional Online Survey

    PubMed Central

    King, Catherine; Subotic-Kerry, Mirjana; O'Moore, Kathleen; Christensen, Helen

    2017-01-01

    Background Mental health problems are common among youth in high school, and school counselors play a key role in the provision of school-based mental health care. However, school counselors occupy a multispecialist position that makes it difficult for them to provide care to all of those who are in need in a timely manner. A Web-based mental health service that offers screening, psychological therapy, and monitoring may help counselors manage time and provide additional oversight to students. However, for such a model to be implemented successfully, school counselors’ attitudes toward Web-based resources and services need to be measured. Objective This study aimed to examine the acceptability of a proposed Web-based mental health service, the feasibility of providing this type of service in the school context, and the barriers and facilitators to implementation as perceived by school counselors in New South Wales (NSW), Australia. Methods This study utilized an online cross-sectional survey to measure school counselors’ perspectives. Results A total of 145 school counselors completed the survey. Overall, 82.1% (119/145) thought that the proposed service would be helpful to students. One-third reported that they would recommend the proposed model, with the remaining reporting potential concerns. Years of experience was the only background factor associated with a higher level of comfort with the proposed service (P=.048). Personal beliefs, knowledge and awareness, Internet accessibility, privacy, and confidentiality were found to influence, both positively and negatively, the likelihood of school counselors implementing a Web-based school mental health service. Conclusions The findings of this study confirmed that greater support and resources are needed to facilitate what is already a challenging and emotionally demanding role for school counselors. Although the school counselors in this study were open to the proposed service model, successful implementation

  2. FROG: Time Series Analysis for the Web Service Era

    NASA Astrophysics Data System (ADS)

    Allan, A.

    2005-12-01

    The FROG application is part of the next-generation Starlink (http://www.starlink.ac.uk) software work (Draper et al. 2005) and is released under the GNU General Public License (GPL; http://www.gnu.org/copyleft/gpl.html). Written in Java, it has been designed for the Web and Grid Service era as an extensible, pluggable tool for time series analysis and display. With an integrated SOAP server, the package's functionality is exposed to users for use in their own code and for remote use over the Grid as part of the Virtual Observatory (VO).

  3. Web-Browsing Competencies of Pre-Service Adult Facilitators: Implications for Curriculum Transformation and Distance Learning

    ERIC Educational Resources Information Center

    Theresa, Ofoegbu; Ugwu, Agboeze Matthias; Ihebuzoaju, Anyanwu Joy; Uche, Asogwa

    2013-01-01

    The study investigated the Web-browsing competencies of pre-service adult facilitators in the southeast geopolitical zone of Nigeria. Survey design was adopted for the study. The population consists of all pre-service adult facilitators in all the federal universities in the southeast geopolitical zone of Nigeria. Accidental sampling technique was…

  4. Exploring Pre-Service Teachers' Beliefs about Using Web 2.0 Technologies in K-12 Classroom

    ERIC Educational Resources Information Center

    Sadaf, Ayesha; Newby, Timothy J.; Ertmer, Peggy A.

    2012-01-01

    This qualitative study explored pre-service teachers' behavioral, normative, and control beliefs regarding their intentions to use Web 2.0 technologies in their future classrooms. The Theory of Planned Behavior (TPB) was used as the theoretical framework (Ajzen, 1991) to understand these beliefs and pre-service teachers' intentions for why they…

  5. The relation between media promotions and service volume for a statewide tobacco quitline and a web-based cessation program

    PubMed Central

    2011-01-01

    Background This observational study assessed the relation between mass media campaigns and service volume for a statewide tobacco cessation quitline and stand-alone web-based cessation program. Methods Multivariate regression analysis was used to identify how weekly calls to a cessation quitline and weekly registrations to a web-based cessation program are related to levels of broadcast media, media campaigns, and media types, controlling for the impact of external and earned media events. Results There was a positive relation between weekly broadcast targeted rating points and the number of weekly calls to a cessation quitline and the number of weekly registrations to a web-based cessation program. Additionally, print secondhand smoke ads and online cessation ads were positively related to weekly quitline calls. Television and radio cessation ads and radio smoke-free law ads were positively related to web program registration levels. There was a positive relation between the number of web registrations and the number of calls to the cessation quitline, with increases in registrations to the web in 1 week corresponding to increases in calls to the quitline in the subsequent week. Web program registration levels were more highly influenced by earned media and other external events than were quitline call volumes. Conclusion Overall, broadcast advertising had a greater impact on registrations for the web program than calls to the quitline. Furthermore, registrations for the web program influenced calls to the quitline. These two findings suggest the evolving roles of web-based cessation programs and Internet-use practices should be considered when creating cessation programs and media campaigns to promote them. Additionally, because different types of media and campaigns were positively associated with calls to the quitline and web registrations, developing mass media campaigns that offer a variety of messages and communicate through different types of media to

  6. The relation between media promotions and service volume for a statewide tobacco quitline and a web-based cessation program.

    PubMed

    Schillo, Barbara A; Mowery, Andrea; Greenseid, Lija O; Luxenberg, Michael G; Zieffler, Andrew; Christenson, Matthew; Boyle, Raymond G

    2011-12-16

    This observational study assessed the relation between mass media campaigns and service volume for a statewide tobacco cessation quitline and stand-alone web-based cessation program. Multivariate regression analysis was used to identify how weekly calls to a cessation quitline and weekly registrations to a web-based cessation program are related to levels of broadcast media, media campaigns, and media types, controlling for the impact of external and earned media events. There was a positive relation between weekly broadcast targeted rating points and the number of weekly calls to a cessation quitline and the number of weekly registrations to a web-based cessation program. Additionally, print secondhand smoke ads and online cessation ads were positively related to weekly quitline calls. Television and radio cessation ads and radio smoke-free law ads were positively related to web program registration levels. There was a positive relation between the number of web registrations and the number of calls to the cessation quitline, with increases in registrations to the web in 1 week corresponding to increases in calls to the quitline in the subsequent week. Web program registration levels were more highly influenced by earned media and other external events than were quitline call volumes. Overall, broadcast advertising had a greater impact on registrations for the web program than calls to the quitline. Furthermore, registrations for the web program influenced calls to the quitline. These two findings suggest the evolving roles of web-based cessation programs and Internet-use practices should be considered when creating cessation programs and media campaigns to promote them. Additionally, because different types of media and campaigns were positively associated with calls to the quitline and web registrations, developing mass media campaigns that offer a variety of messages and communicate through different types of media to motivate tobacco users to seek services

  7. Web services-based text-mining demonstrates broad impacts for interoperability and process simplification.

    PubMed

    Wiegers, Thomas C; Davis, Allan Peter; Mattingly, Carolyn J

    2014-01-01

    The Critical Assessment of Information Extraction systems in Biology (BioCreAtIvE) challenge evaluation tasks collectively represent a community-wide effort to evaluate a variety of text-mining and information extraction systems applied to the biological domain. The BioCreative IV Workshop included five independent subject areas, including Track 3, which focused on named-entity recognition (NER) for the Comparative Toxicogenomics Database (CTD; http://ctdbase.org). Previously, CTD had organized document ranking and NER-related tasks for the BioCreative Workshop 2012; a key finding of that effort was that interoperability and integration complexity were major impediments to the direct application of the systems to CTD's text-mining pipeline. This underscored a prevailing problem with software integration efforts. Major interoperability-related issues included lack of process modularity, operating system incompatibility, tool configuration complexity and lack of standardization of high-level inter-process communications. One approach to potentially mitigate interoperability and general integration issues is the use of Web services to abstract implementation details; rather than integrating NER tools directly, HTTP-based calls from CTD's asynchronous, batch-oriented text-mining pipeline could be made to remote NER Web services for recognition of specific biological terms using BioC (an emerging family of XML formats) for inter-process communications. To test this concept, participating groups developed Representational State Transfer (REST)/BioC-compliant Web services tailored to CTD's NER requirements. Participants were provided with a comprehensive set of training materials. CTD evaluated results obtained from the remote Web service-based URLs against a test data set of 510 manually curated scientific articles. Twelve groups participated in the challenge. Recall, precision, balanced F-scores and response times were calculated. Top balanced F-scores for gene, chemical and
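
    The Web-service pattern described here reduces, on the client side, to an HTTP POST of a BioC-encoded document to a remote NER endpoint. The URL and the minimal BioC payload below are hypothetical stand-ins, not the actual challenge services.

    ```python
    # Sketch: send a BioC XML document to a hypothetical REST NER service
    # and receive annotated BioC XML in response.
    import requests

    bioc_document = """<?xml version="1.0" encoding="UTF-8"?>
    <collection>
      <source>CTD</source>
      <document>
        <id>example-doc-1</id>
        <passage>
          <offset>0</offset>
          <text>Tamoxifen alters expression of ESR1.</text>
        </passage>
      </document>
    </collection>"""

    resp = requests.post(
        "https://example.org/ner/chemical",  # hypothetical endpoint
        data=bioc_document.encode("utf-8"),
        headers={"Content-Type": "application/xml"},
        timeout=120,
    )
    resp.raise_for_status()
    print(resp.text)  # BioC XML with recognized entities annotated
    ```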

  8. The Amazon Region; A Vision of Sovereignty

    DTIC Science & Technology

    1998-04-06

    [Fragmentary record excerpt] …and SPOT remote sensing satellite images, about 90% of the Amazon jungle remains almost untouched. These 280 million hectares of vegetation hold… increasing energy needs, remain unanswered. Indian rights: has the Indian population been jeopardized by the development of the Amazon Region… or government agency. Strategy Research Project: The Amazon Region; A Vision of Sovereignty, by Lieutenant Colonel Eduardo Jose Barbosa.

  9. Oh! Web 2.0, Virtual Reference Service 2.0, Tools and Techniques (I): A Basic Approach

    ERIC Educational Resources Information Center

    Arya, Harsh Bardhan; Mishra, J. K.

    2011-01-01

    This study targets librarians and information professionals who use Web 2.0 tools and applications with a view to providing snapshots on how Web 2.0 technologies are used. It also aims to identify values and impact that such tools have exerted on libraries and their services, as well as to detect various issues associated with the implementation…

  10. Public transparency Web sites for radiology practices: prevalence of price, clinical quality, and service quality information.

    PubMed

    Rosenkrantz, Andrew B; Doshi, Ankur M

    2016-01-01

    To assess information regarding radiology practices on public transparency Web sites. Eight Web sites comparing radiology centers' price and quality were identified. Web site content was assessed. Six of eight Web sites reported examination prices. Other reported information included hours of operation (4/8), patient satisfaction (2/8), American College of Radiology (ACR) accreditation (3/8), on-site radiologists (2/8), as well as parking, accessibility, waiting area amenities, same/next-day reports, mammography follow-up rates, examination appropriateness, radiation dose, fellowship-trained radiologists, and advanced technologies (1/8 each). Transparency Web sites had a preponderance of price (and to a lesser extent service quality) information, risking fostering price-based competition at the expense of clinical quality.

  11. Pitfalls in Persuasion: How Do Users Experience Persuasive Techniques in a Web Service?

    NASA Astrophysics Data System (ADS)

    Segerståhl, Katarina; Kotro, Tanja; Väänänen-Vainio-Mattila, Kaisa

    Persuasive technologies are designed by utilizing a variety of interactive techniques that are believed to promote target behaviors. This paper describes a field study in which the aim was to discover possible pitfalls of persuasion, i.e., situations in which persuasive techniques do not function as expected. The study investigated persuasive functionality of a web service targeting weight loss. A qualitative online questionnaire was distributed through the web service and a total of 291 responses were extracted for interpretative analysis. The Persuasive Systems Design model (PSD) was used for supporting systematic analysis of persuasive functionality. Pitfalls were identified through situations that evoked negative user experiences. The primary pitfalls discovered were associated with manual logging of eating and exercise behaviors, appropriateness of suggestions and source credibility issues related to social facilitation. These pitfalls, when recognized, can be addressed in design by applying functional and facilitative persuasive techniques in meaningful combinations.

  12. Evaluation of flood hazard maps in print and web mapping services as information tools in flood risk communication

    NASA Astrophysics Data System (ADS)

    Hagemeier-Klose, M.; Wagner, K.

    2009-04-01

    Flood risk communication with the general public and the population at risk is becoming increasingly important for flood risk management, especially as a precautionary measure. This is also underlined by the EU Floods Directive. The flood-related authorities therefore have to develop adjusted information tools which meet the demands of different user groups. This article presents the formative evaluation of flood hazard maps and web mapping services according to the specific requirements and needs of the general public, using the dynamic-transactional approach as a theoretical framework. The evaluation was done by a mixture of different methods: an analysis of existing tools, a creative workshop with experts and laymen, and an online survey. Currently existing flood hazard maps, web mapping services, and web GIS still lack a good balance between simplicity and complexity, with adequate readability and usability for the public. Well-designed and associative maps (e.g. using blue colours for water depths) which can be compared with past local flood events and which can create empathy in viewers can help to raise awareness, heighten activity and knowledge levels, or lead to further information seeking. Concerning web mapping services, a linkage between general flood information, like flood extents of different scenarios and corresponding water depths, and real-time information, like gauge levels, is an important user demand. Gauge levels for these scenarios are easier to understand than the scientifically correct return periods or annualities. The recently developed Bavarian web mapping service tries to integrate these requirements.

  13. Geochemistry of the Amazon Estuary

    USGS Publications Warehouse

    Smoak, Joseph M.; Krest, James M.; Swarzenski, Peter W

    2006-01-01

    The Amazon River supplies more freshwater to the ocean than any other river in the world. This enormous volume of freshwater forces the estuarine mixing out of the river channel and onto the continental shelf. On the continental shelf, the estuarine mixing occurs in a very dynamic environment unlike that of a typical estuary. The tides, the wind, and the boundary current that sweeps the continental shelf have a pronounced influence on the chemical and biological processes occurring within the estuary. The dynamic environment, along with the enormous supply of water, solutes, and particles, makes the Amazon estuary unique. This chapter describes the unique features of the Amazon estuary and how these features influence the processes occurring within it. Examined are the supply and cycling of major and minor elements, and the use of naturally occurring radionuclides to trace processes including water movement, scavenging, sediment-water interaction, and sediment accumulation rates. Also examined are the biogeochemical cycling of carbon, nitrogen, and phosphorus, and the significance of the Amazon estuary in the global mass balance of these elements.

  14. Cloud/web mapping and geoprocessing services - Intelligently linking geoinformation

    NASA Astrophysics Data System (ADS)

    Veenendaal, Bert; Brovelli, Maria Antonia; Wu, Lixin

    2016-04-01

    We live in a world that is alive with information and geographies. "Everything happens somewhere" (Tosta, 2001). This reality is being exposed in the digital earth technologies providing a multi-dimensional, multi-temporal and multi-resolution model of the planet, based on the needs of diverse actors: from scientists to decision makers, communities and citizens (Brovelli et al., 2015). We are building up a geospatial information infrastructure updated in real time thanks to mobile, positioning and sensor observations. Users can navigate, not only through space but also through time, to access historical data and future predictions based on social and/or environmental models. But how do we find the information about certain geographic locations or localities when it is scattered in the cloud and across the web of data behind a diversity of databases, web services and hyperlinked pages? We need to be able to link geoinformation together in order to integrate it, make sense of it, and use it appropriately for managing the world and making decisions.

  15. Concept of a spatial data infrastructure for web-mapping, processing and service provision for geo-hazards

    NASA Astrophysics Data System (ADS)

    Weinke, Elisabeth; Hölbling, Daniel; Albrecht, Florian; Friedl, Barbara

    2017-04-01

    Geo-hazards and their effects are distributed geographically over wide regions. Effective mapping and monitoring are essential for hazard assessment and mitigation, and are often best achieved using satellite imagery and new object-based image analysis approaches to identify and delineate geo-hazard objects (landslides, floods, forest fires, storm damages, etc.). At the moment, several local/national databases and platforms provide and publish data on different types of geo-hazards, as well as web-based risk maps and decision support systems. The European Commission also implemented the Copernicus Emergency Management Service (EMS) in 2015, which publishes information about natural and man-made disasters and risks. Currently, however, no platform for landslides or geo-hazards as such exists that integrates the user into the mapping and monitoring process. In this study we introduce the concept of a spatial data infrastructure for object delineation, web processing, and service provision of landslide information, with a focus on user interaction in all processes. A first prototype for the processing and mapping of landslides in Austria and Italy has been developed within the project Land@Slide, funded by the Austrian Research Promotion Agency FFG under the Austrian Space Applications Program ASAP. The spatial data infrastructure and its services for the mapping, processing, and analysis of landslides can be extended to other regions and to all types of geo-hazards for analysis and delineation based on Earth Observation (EO) data. The architecture of the first prototypical spatial data infrastructure includes four main areas of technical components. The data tier consists of a file storage system and a spatial data catalogue for the management of EO data and other geospatial data on geo-hazards, as well as descriptions and protocols for data processing and analysis. An interface to extend the data integration from external sources (e.g. Sentinel-2 data) is planned.

  16. Apollo: Giving application developers a single point of access to public health models using structured vocabularies and Web services

    PubMed Central

    Wagner, Michael M.; Levander, John D.; Brown, Shawn; Hogan, William R.; Millett, Nicholas; Hanna, Josh

    2013-01-01

    This paper describes the Apollo Web Services and Apollo-SV, its related ontology. The Apollo Web Services give an end-user application a single point of access to multiple epidemic simulators. An end user can specify an analytic problem—which we define as a configuration and a query of results—exactly once and submit it to multiple epidemic simulators. The end user represents the analytic problem using a standard syntax and vocabulary, not the native languages of the simulators. We have demonstrated the feasibility of this design by implementing a set of Apollo services that provide access to two epidemic simulators and two visualizer services. PMID:24551417

  17. Apollo: giving application developers a single point of access to public health models using structured vocabularies and Web services.

    PubMed

    Wagner, Michael M; Levander, John D; Brown, Shawn; Hogan, William R; Millett, Nicholas; Hanna, Josh

    2013-01-01

    This paper describes the Apollo Web Services and Apollo-SV, its related ontology. The Apollo Web Services give an end-user application a single point of access to multiple epidemic simulators. An end user can specify an analytic problem, which we define as a configuration and a query of results, exactly once and submit it to multiple epidemic simulators. The end user represents the analytic problem using a standard syntax and vocabulary, not the native languages of the simulators. We have demonstrated the feasibility of this design by implementing a set of Apollo services that provide access to two epidemic simulators and two visualizer services.

  18. EnviroAtlas - Austin, TX - Tree Cover Configuration and Connectivity, Water Background Web Service

    EPA Pesticide Factsheets

    This EnviroAtlas web service supports research and online mapping activities related to EnviroAtlas (https://enviroatlas.epa.gov/EnviroAtlas). The EnviroAtlas Austin, TX tree cover configuration and connectivity map categorizes forest land cover into structural elements (e.g. core, edge, connector, etc.). In this community, forest is defined as the Trees & Forest land cover class (raster value 40 reclassified to 1; all else to 0). Water was considered background (value 129) during the analysis used to create this dataset; however, it has been converted to value 10 to distinguish it from the land-area background. This dataset was produced by the US EPA to support research and online mapping activities related to EnviroAtlas. EnviroAtlas (https://www.epa.gov/enviroatlas) allows the user to interact with a web-based, easy-to-use mapping application to view and analyze multiple ecosystem services for the contiguous United States. The dataset is available as downloadable data (https://edg.epa.gov/data/Public/ORD/EnviroAtlas) or as an EnviroAtlas map service. Additional descriptive information about each attribute in this dataset can be found in its associated EnviroAtlas Fact Sheet (https://www.epa.gov/enviroatlas/enviroatlas-fact-sheets).
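
    The reclassification this record describes can be sketched in a few lines of NumPy on a toy land-cover array. The class codes follow the record's description (40 for Trees & Forest, 129 for water background); the array values themselves are illustrative.

    ```python
    # Sketch: reclassify a land-cover raster into the EnviroAtlas-style
    # forest layer: class 40 -> 1 (forest), 129 -> 10 (water), else 0.
    import numpy as np

    landcover = np.array([
        [40,  40,  20],
        [40, 129,  20],
        [21, 129,  40],
    ])

    forest = np.zeros_like(landcover)
    forest[landcover == 40] = 1      # Trees & Forest -> forest
    forest[landcover == 129] = 10    # water background -> 10
    print(forest)
    ```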

  19. Data Access and Web Services at the EarthScope Plate Boundary Observatory

    NASA Astrophysics Data System (ADS)

    Matykiewicz, J.; Anderson, G.; Henderson, D.; Hodgkinson, K.; Hoyt, B.; Lee, E.; Persson, E.; Torrez, D.; Smith, J.; Wright, J.; Jackson, M.

    2007-12-01

    The EarthScope Plate Boundary Observatory (PBO) at UNAVCO, Inc., part of the NSF-funded EarthScope project, is designed to study the three-dimensional strain field resulting from deformation across the active boundary zone between the Pacific and North American plates in the western United States. To meet these goals, PBO will install 880 continuous GPS stations, 103 borehole strainmeter stations, and five laser strainmeters, as well as manage data for 209 previously existing continuous GPS stations and one previously existing laser strainmeter. UNAVCO provides access to data products from these stations, as well as general information about the PBO project, via the PBO web site (http://pboweb.unavco.org). GPS and strainmeter data products can be found using a variety of access methods, including map searches, text searches, and station-specific data retrieval. In addition, the PBO construction status is available via multiple mapping interfaces, including custom web-based map widgets and Google Earth. Additional construction details can be accessed from PBO operational pages and station-specific home pages. The current state of health of the PBO network is available via a statistical snapshot, full map interfaces, tabular web-based reports, and automatic data mining and alerts. UNAVCO is currently working to enhance community access to this information by developing a web service framework for the discovery of data products, interfacing with operational engineers, and exposing data services to third-party participants. In addition, UNAVCO, through the PBO project, provides advanced data management and monitoring systems for use by the community in operating geodetic networks in the United States and beyond. We will demonstrate these systems during the AGU meeting, and we welcome inquiries from the community at any time.

  20. Measuring Water Storage in the Amazon

    NASA Image and Video Library

    2010-07-07

    This image is from data taken by NASA Gravity Recovery and Climate Experiment showing the Amazon basin in South America. The amount of water stored in the Amazon basin varies from month to month. Animations are available at the Photojournal.