An Automatic Web Service Composition Framework Using QoS-Based Web Service Ranking Algorithm
Mallayya, Deivamani; Ramachandran, Baskaran; Viswanathan, Suganya
2015-01-01
Web services have become the technology of choice for service-oriented computing to meet the interoperability demands of web applications. In the Internet era, the exponential growth in the number of web services makes "quality of service" (QoS) an essential parameter for discriminating among them. In this paper, a user preference based web service ranking (UPWSR) algorithm is proposed to rank web services based on user preferences and the QoS aspects of each service. When the user's request cannot be fulfilled by a single atomic service, several existing services must be composed and delivered as a composition. The proposed framework allows the user to specify local and global constraints for composite web services, which improves flexibility. The UPWSR algorithm identifies the best-fit services for each task in the user request and, by restricting the number of candidate services considered for each task, reduces the time needed to generate composition plans. To tackle the problem of web service composition, a QoS-aware automatic web service composition (QAWSC) algorithm based on the QoS aspects of the web services and user preferences is also proposed. The framework additionally allows the user to provide feedback about the composite service, which improves the reputation of the services. PMID:26504894
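To make the QoS-based ranking idea concrete, the sketch below scores candidate services by a user-weighted combination of normalized QoS attributes. It is a minimal illustration, not the paper's actual UPWSR algorithm: the attribute names, weights, and normalization scheme are assumptions.

```python
# Minimal sketch of QoS-weighted service ranking in the spirit of UPWSR.
# Attribute names, weights, and normalization are illustrative assumptions.

candidates = [
    {"name": "svcA", "response_ms": 120, "availability": 0.99,  "cost": 0.02},
    {"name": "svcB", "response_ms": 45,  "availability": 0.95,  "cost": 0.05},
    {"name": "svcC", "response_ms": 300, "availability": 0.999, "cost": 0.01},
]

# User preferences: weights sum to 1; a higher weight means more important.
weights = {"response_ms": 0.5, "availability": 0.3, "cost": 0.2}

def normalize(values, lower_is_better):
    """Scale raw QoS values to [0, 1], where 1 is always 'best'."""
    lo, hi = min(values), max(values)
    if hi == lo:
        return [1.0] * len(values)
    return [(hi - v) / (hi - lo) if lower_is_better else (v - lo) / (hi - lo)
            for v in values]

scores = {c["name"]: 0.0 for c in candidates}
for attr, lower_is_better in (("response_ms", True),
                              ("availability", False),
                              ("cost", True)):
    norms = normalize([c[attr] for c in candidates], lower_is_better)
    for c, norm in zip(candidates, norms):
        scores[c["name"]] += weights[attr] * norm

# Rank services by aggregate score, best first.
for name, score in sorted(scores.items(), key=lambda kv: kv[1], reverse=True):
    print(f"{name}: {score:.3f}")
```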
SSWAP: A Simple Semantic Web Architecture and Protocol for semantic web services.
Gessler, Damian D G; Schiltz, Gary S; May, Greg D; Avraham, Shulamit; Town, Christopher D; Grant, David; Nelson, Rex T
2009-09-23
SSWAP (Simple Semantic Web Architecture and Protocol; pronounced "swap") is an architecture, protocol, and platform for using reasoning to semantically integrate heterogeneous, disparate data and services on the web. SSWAP was developed as a hybrid semantic web services technology to overcome limitations found in both pure web service technologies and pure semantic web technologies. There are currently over 2400 resources published in SSWAP. Approximately two dozen are custom-written services for QTL (Quantitative Trait Loci) and mapping data for legumes and grasses (grains). The remainder are wrappers to Nucleic Acids Research Database and Web Server entries. As an architecture, SSWAP establishes how clients (users of data, services, and ontologies), providers (suppliers of data, services, and ontologies), and discovery servers (semantic search engines) interact to allow for the description, querying, discovery, invocation, and response of semantic web services. As a protocol, SSWAP provides the vocabulary and semantics to allow clients, providers, and discovery servers to engage in semantic web services. The protocol is based on the W3C-sanctioned first-order description logic language OWL DL. As an open source platform, a discovery server running at http://sswap.info (as in to "swap info") uses the description logic reasoner Pellet to integrate semantic resources. The platform hosts an interactive guide to the protocol at http://sswap.info/protocol.jsp, developer tools at http://sswap.info/developer.jsp, and a portal to third-party ontologies at http://sswapmeet.sswap.info (a "swap meet"). SSWAP addresses the three basic requirements of a semantic web services architecture (a common syntax, shared semantics, and semantic discovery) while addressing three technology limitations common in distributed service systems: i) the fatal mutability of traditional interfaces, ii) the rigidity and fragility of static subsumption hierarchies, and iii) the confounding of content, structure, and presentation. SSWAP is novel in establishing the concept of a canonical yet mutable OWL DL graph that allows data and service providers to describe their resources, discovery servers to offer semantically rich search engines, clients to discover and invoke those resources, and providers to respond with semantically tagged data. SSWAP allows for a mix-and-match of terms from both new and legacy third-party ontologies in these graphs. PMID:19775460
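Because SSWAP resources are described by OWL DL graphs served over HTTP, a client can retrieve and inspect one with an ordinary RDF library. The sketch below uses Python's rdflib; the resource URL is a hypothetical placeholder, and the actual graph vocabulary is defined by the SSWAP protocol documents.

```python
# Hedged sketch: fetch an RDF/OWL resource description graph over HTTP and
# inspect it with rdflib. The document URL below is a hypothetical example.
from rdflib import Graph
from rdflib.namespace import RDF

g = Graph()
g.parse("http://sswap.info/exampleResource", format="xml")  # hypothetical resource URL

# List the rdf:type assertions in the retrieved graph.
for subject, rdf_type in g.subject_objects(RDF.type):
    print(subject, "a", rdf_type)
```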
BOWS (bioinformatics open web services) to centralize bioinformatics tools in web services.
Velloso, Henrique; Vialle, Ricardo A; Ortega, J Miguel
2015-06-02
Bioinformaticians face a range of difficulties in getting locally installed tools running and producing results; they would greatly benefit from a system that could centralize most of the tools behind an easy interface for input and output. Web services, due to their universal nature and widely known interfaces, are a very good option for achieving this goal. Bioinformatics open web services (BOWS) is a system based on generic web services designed to allow programmatic access to applications running on high-performance computing (HPC) clusters. BOWS mediates access to registered tools by providing front-end and back-end web services. Programmers can install applications on HPC clusters in any programming language and use the back-end service to check for new jobs and their parameters, and then send the results back to BOWS. Programs running on ordinary computers consume the BOWS front-end service to submit new processes and read results. BOWS compiles Java clients, which encapsulate the front-end web service requests, and automatically creates a web page that lists the registered applications and clients. Applications registered in BOWS can be accessed from virtually any programming language through web services, or by using the standard Java clients. The back-end can run on HPC clusters, allowing bioinformaticians to remotely run high-processing-demand applications directly from their machines.
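The abstract implies a submit-then-poll interaction with the BOWS front-end service. The Python sketch below illustrates that pattern under stated assumptions: the base URL, endpoint paths, and field names are all hypothetical, since the abstract does not specify the wire format.

```python
# Sketch of the submit-then-poll pattern a BOWS-style front-end service
# implies. All endpoint paths and field names are hypothetical.
import time
import requests

BASE = "http://bows.example.org/frontend"  # hypothetical server

# Submit a new job to a registered application.
job = requests.post(
    f"{BASE}/jobs",
    json={"application": "blast", "parameters": {"query": ">s1\nACGT"}},
).json()
job_id = job["id"]

# Poll until the back-end (running on the HPC cluster) reports completion.
while True:
    state = requests.get(f"{BASE}/jobs/{job_id}/status").json()["state"]
    if state in ("FINISHED", "FAILED"):
        break
    time.sleep(10)

# Retrieve the results sent back by the back-end service.
print(requests.get(f"{BASE}/jobs/{job_id}/results").text)
```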
Enhancing the AliEn Web Service Authentication
NASA Astrophysics Data System (ADS)
Zhu, Jianlin; Saiz, Pablo; Carminati, Federico; Betev, Latchezar; Zhou, Daicui; Mendez Lorenzo, Patricia; Grigoras, Alina Gabriela; Grigoras, Costin; Furano, Fabrizio; Schreiner, Steffen; Vladimirovna Datskova, Olga; Sankar Banerjee, Subho; Zhang, Guoping
2011-12-01
Web Services are an XML-based technology that allows applications to communicate with each other across disparate systems, and they are becoming the de facto standard for interoperability between heterogeneous processes and systems. AliEn2 is a grid environment based on web services. The AliEn2 services can be divided into three categories: central services, deployed once per organization; site services, deployed at each participating center; and Job Agents, running automatically on the worker nodes. A security model to protect these services is essential for the whole system. Current web server implementations, such as Apache, are not directly suitable for use within the grid environment: Apache with mod_ssl and OpenSSL supports only X.509 certificates, whereas the common credential in the grid environment is the proxy certificate, used to provide restricted proxying and delegation. An authentication framework was therefore developed for the AliEn2 web services that adds to the Apache Web Server the ability to accept both X.509 certificates and proxy certificates from the client side. The framework also allows the generation of access control policies to limit access to the AliEn2 web services.
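For the client side of such a setup, presenting an X.509 or proxy certificate over TLS looks like the following Python sketch using the requests library. The service URL and file paths are hypothetical; the cert/key handling shown is standard requests usage, not AliEn2-specific code.

```python
# Sketch of a client presenting an X.509 (or proxy) certificate over TLS,
# the kind of credential the AliEn2 framework accepts. The service URL and
# file paths are hypothetical examples.
import requests

response = requests.get(
    "https://alien.example.org/service",             # hypothetical service endpoint
    cert=("/tmp/x509up_u1000", "/tmp/x509up_u1000"), # cert and key (proxies often share one file)
    verify="/etc/grid-security/certificates",        # CA bundle/directory the client trusts
)
print(response.status_code)
```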
Opal web services for biomedical applications.
Ren, Jingyuan; Williams, Nadya; Clementi, Luca; Krishnan, Sriram; Li, Wilfred W
2010-07-01
Biomedical applications have become increasingly complex, and they often require large-scale high-performance computing resources with a large number of processors and memory. The complexity of application deployment and the advances in cluster, grid and cloud computing require new modes of support for biomedical research. Scientific Software as a Service (sSaaS) enables scalable and transparent access to biomedical applications through simple standards-based Web interfaces. Towards this end, we built a production web server (http://ws.nbcr.net) in August 2007 to support the bioinformatics application called MEME. The server has grown since to include docking analysis with AutoDock and AutoDock Vina, electrostatic calculations using PDB2PQR and APBS, and off-target analysis using SMAP. All the applications on the servers are powered by Opal, a toolkit that allows users to wrap scientific applications easily as web services without any modification to the scientific codes, by writing simple XML configuration files. Opal allows both web forms-based access and programmatic access of all our applications. The Opal toolkit currently supports SOAP-based Web service access to a number of popular applications from the National Biomedical Computation Resource (NBCR) and affiliated collaborative and service projects. In addition, Opal's programmatic access capability allows our applications to be accessed through many workflow tools, including Vision, Kepler, Nimrod/K and VisTrails. From mid-August 2007 to the end of 2009, we have successfully executed 239,814 jobs. The number of successfully executed jobs more than doubled from 205 to 411 per day between 2008 and 2009. The Opal-enabled service model is useful for a wide range of applications. It provides for interoperation with other applications with Web Service interfaces, and allows application developers to focus on the scientific tool and workflow development. Web server availability: http://ws.nbcr.net.
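A hedged sketch of calling an Opal-wrapped application over SOAP is shown below, using the Python zeep library. The service name in the WSDL URL is hypothetical, and the launchJob/queryStatus/getOutputs operation names follow Opal's general launch/query/retrieve pattern but should be verified against the actual WSDL of the target service.

```python
# Hedged sketch of calling an Opal-wrapped application via SOAP using zeep.
# Operation and argument names follow Opal's general pattern but should be
# checked against the service's actual WSDL before use.
from zeep import Client

wsdl = "http://ws.nbcr.net/opal2/services/SomeAppService?wsdl"  # hypothetical service name
client = Client(wsdl)

# Launch a job with a command-line argument string.
resp = client.service.launchJob(argList="-in query.fasta -out result.txt")
job_id = resp.jobID  # attribute name assumed from Opal's response schema

# Query job status, then retrieve the output file references.
status = client.service.queryStatus(job_id)
outputs = client.service.getOutputs(job_id)
print(status, outputs)
```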
Accessing the SEED genome databases via Web services API: tools for programmers.
Disz, Terry; Akhter, Sajia; Cuevas, Daniel; Olson, Robert; Overbeek, Ross; Vonstein, Veronika; Stevens, Rick; Edwards, Robert A
2010-06-14
The SEED integrates many publicly available genome sequences into a single resource. The database contains accurate and up-to-date annotations based on the subsystems concept, which leverages clustering between genomes and other clues to accurately and efficiently annotate microbial genomes. The backend is used as the foundation for many genome annotation tools, such as the Rapid Annotation using Subsystems Technology (RAST) server for whole-genome annotation, the metagenomics RAST server for random community genome annotations, and the annotation clearinghouse for exchanging annotations from different resources. In addition to a web user interface, the SEED also provides a Web-services-based API for programmatic access to the data in the SEED, allowing the development of third-party tools and mash-ups. The currently exposed Web services encompass over forty different methods for accessing data related to microbial genome annotations. The Web services provide comprehensive access to the database back end, allowing any programmer access to the most consistent and accurate genome annotations available. The Web services are deployed using a platform-independent service-oriented approach that allows the user to choose the most suitable programming platform for their application. Example code demonstrates that the Web services can be used to access the SEED using common bioinformatics programming languages such as Perl, Python, and Java. We present a novel approach to accessing the SEED database: using Web services, a robust API for access to genomics data is provided without requiring large-volume downloads all at once. The API ensures timely access to the most current datasets available, including new genomes as soon as they come online.
Adding Processing Functionality to the Sensor Web
NASA Astrophysics Data System (ADS)
Stasch, Christoph; Pross, Benjamin; Jirka, Simon; Gräler, Benedikt
2017-04-01
The Sensor Web allows discovering, accessing and tasking different kinds of environmental sensors on the Web, ranging from simple in-situ sensors to remote sensing systems. However, (geo-)processing functionality needs to be applied to integrate data from different sensor sources and to generate higher-level information products. A common standardized approach for processing sensor data in the Sensor Web is still missing, and the integration differs from application to application. Standardizing not only the provision of sensor data but also the processing facilitates sharing and re-use of processing modules, enables reproducibility of processing results, and provides a common way to integrate external scalable processing facilities or legacy software. In this presentation, we provide an overview of ongoing research projects that develop concepts for coupling standardized geoprocessing technologies with Sensor Web technologies. First, different architectures for coupling sensor data services with geoprocessing services are presented. Afterwards, profiles of the OGC Web Processing Service for linear regression and spatio-temporal interpolation are introduced that consume sensor data from, and upload predictions to, Sensor Observation Services. The profiles are implemented in processing services for the hydrological domain. Finally, we illustrate how the R software can be coupled with existing OGC Sensor Web and geoprocessing services, and present an example of how a Web app can be built that allows exploring the results of environmental models interactively using the R Shiny framework. All of the software presented is available as Open Source Software.
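As an illustration of the coupling described above, the sketch below issues a WPS 1.0.0 Execute request via standard key-value-pair (KVP) parameters, pointing a hypothetical interpolation process at a Sensor Observation Service. The base URL, process identifier, and input binding are assumptions; only the KVP keys are standard.

```python
# Sketch of invoking an OGC WPS process via a KVP GET request, as a
# Sensor-Web processing profile might expose it. Base URL, process id, and
# input binding are hypothetical; the parameter keys are standard WPS 1.0.0.
import requests

params = {
    "service": "WPS",
    "version": "1.0.0",
    "request": "Execute",
    "identifier": "org.example.interpolation",          # hypothetical process id
    "datainputs": "sosUrl=http://sos.example.org/sos",  # hypothetical input binding
}
response = requests.get("http://wps.example.org/wps", params=params)
print(response.text)  # an XML ExecuteResponse document
```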
DOE Office of Scientific and Technical Information (OSTI.GOV)
2012-08-21
NREL's Developer Network, developer.nrel.gov, provides data that users can access and incorporate into their own analyses and mobile and web applications. Developers retrieve the data through a Web services API (application programming interface). The Developer Network handles the overhead of serving web services, such as key management, authentication, analytics, reporting, documentation standards, and throttling, in a common architecture, while allowing the web services and APIs to be maintained and managed independently.
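A minimal sketch of calling such a keyed API is shown below; the specific endpoint path and query parameters are illustrative assumptions, and the real endpoints and key signup are documented at developer.nrel.gov.

```python
# Sketch of calling a Developer Network style API with a key. The endpoint
# path and response fields are assumptions for illustration.
import requests

API_KEY = "DEMO_KEY"  # placeholder; obtain a real key from the Developer Network

url = "https://developer.nrel.gov/api/alt-fuel-stations/v1.json"  # example endpoint path
response = requests.get(url, params={"api_key": API_KEY, "state": "CO", "limit": 5})
print(response.json())
```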
REMORA: a pilot in the ocean of BioMoby web-services.
Carrere, Sébastien; Gouzy, Jérôme
2006-04-01
Emerging web-services technology allows interoperability between multiple distributed architectures. Here, we present REMORA, a web server implemented according to the BioMoby web-service specifications, providing life science researchers with an easy-to-use workflow generator and launcher, a repository of predefined workflows and a survey system. Contact: Jerome.Gouzy@toulouse.inra.fr. The REMORA web server is freely available at http://bioinfo.genopole-toulouse.prd.fr/remora; sources are available upon request from the authors.
Marco-Ruiz, Luis; Pedrinaci, Carlos; Maldonado, J A; Panziera, Luca; Chen, Rong; Bellika, J Gustav
2016-08-01
The high costs involved in the development of Clinical Decision Support Systems (CDSS) make it necessary to share their functionality across different systems and organizations. Service Oriented Architectures (SOA) have been proposed to allow reusing CDSS by encapsulating them in a Web service. However, strong barriers to sharing CDS functionality remain as a consequence of the lack of expressiveness of services' interfaces. Linked Services are the evolution of the Semantic Web Services paradigm for processing Linked Data. They aim to provide semantic descriptions over SOA implementations to overcome the limitations derived from the syntactic nature of Web services technologies. Our objective is to facilitate the publication, discovery and interoperability of CDS services by evolving them into Linked Services that expose their interfaces as Linked Data. We developed methods and models to enhance CDS SOA as Linked Services that define a rich semantic layer based on machine-interpretable ontologies that power their interoperability and reuse. These ontologies provide unambiguous descriptions of CDS service properties to expose them to the Web of Data. We developed models compliant with Linked Data principles to create a semantic representation of the components that compose CDS services. To evaluate our approach we implemented a set of CDS Linked Services using a Web service definition ontology. The definitions of Web services were linked to the models developed in order to attach unambiguous semantics to the service components. All models were bound to SNOMED CT and public ontologies (e.g. Dublin Core) in order to provide a lingua franca for exploring them. Discovery and analysis of CDS services based on machine-interpretable models was performed by reasoning over the ontologies built. Linked Services can be used effectively to expose CDS services to the Web of Data by building on current CDS standards. This allows building shared Linked Knowledge Bases that provide machine-interpretable semantics for CDS service descriptions, alleviating the challenges of interoperability and reuse. Linked Services allow for building 'digital libraries' of distributed CDS services that can be hosted and maintained in different organizations. Copyright © 2016 Elsevier Inc. All rights reserved.
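To suggest what such machine-interpretable service descriptions can look like, the sketch below builds a small Linked Data description of a hypothetical CDS service with rdflib, using Dublin Core terms and a SNOMED CT concept URI. The service URI, property names, and concept code are illustrative assumptions, not the models developed in the paper.

```python
# Hedged sketch of describing a CDS service as Linked Data with rdflib.
# The service URI, property choices, and SNOMED CT code are illustrative.
from rdflib import Graph, Literal, Namespace, URIRef
from rdflib.namespace import DCTERMS, RDF

SNOMED = Namespace("http://snomed.info/id/")
svc = URIRef("http://example.org/cds/hypertension-advisor")  # hypothetical service URI

g = Graph()
g.add((svc, RDF.type, URIRef("http://example.org/ns/CDSService")))
g.add((svc, DCTERMS.title, Literal("Hypertension management advisor")))
g.add((svc, DCTERMS.creator, Literal("Example Health Org")))
# Bind an input concept to a clinical terminology (code is illustrative).
g.add((svc, URIRef("http://example.org/ns/hasInputConcept"), SNOMED["38341003"]))

print(g.serialize(format="turtle"))
```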
NASA Astrophysics Data System (ADS)
Santhana Vannan, S.; Cook, R. B.; Wilson, B. E.; Wei, Y.
2010-12-01
Terrestrial ecology data sets are produced from diverse data sources such as model output, field data collection, laboratory analysis and remote sensing observation, and they can be created, distributed, and consumed in equally diverse ways. This diversity can hinder the usability of the data and limit data users' ability to validate and reuse data for science and application purposes. Geospatial web services, such as those described in this paper, are an important means of reducing this burden. Terrestrial ecology researchers generally create data sets in diverse file formats, with file and data structures tailored to the specific needs of their project, possibly as tabular data, geospatial images, or documentation in a report. Data centers may reformat the data to an archive-stable format and distribute the data sets through one or more protocols, such as FTP, email, and WWW. Because of the diverse data preparation, delivery, and usage patterns, users have to invest time and resources to bring the data into the format and structure most useful for their analysis. This time-consuming data preparation process shifts valuable resources from data analysis to data assembly. To address these issues, the ORNL DAAC, a NASA-sponsored terrestrial ecology data center, has utilized geospatial Web service technology, such as the Open Geospatial Consortium (OGC) Web Map Service (WMS) and OGC Web Coverage Service (WCS) standards, to increase the usability and availability of terrestrial ecology data sets. Data sets are standardized into non-proprietary file formats and distributed through OGC Web service standards, which allow the ORNL DAAC to store data sets in a single format and distribute them in multiple ways and formats. Registering the OGC Web services through search catalogues and other spatial data tools publicizes the data sets and makes them more available across the Internet. The ORNL DAAC has also created a Web-based graphical user interface called the Spatial Data Access Tool (SDAT) that utilizes OGC Web service standards and allows data distribution and consumption for users not familiar with OGC standards. SDAT also allows users to visualize a data set prior to download, and Google Earth visualizations of the data set are provided through SDAT as well. The use of OGC Web service standards at the ORNL DAAC has enabled an increase in data consumption. In one case, a data set saw an approximately 10-fold increase in downloads through OGC Web services compared with the conventional FTP and WWW methods of access. The increase in downloads suggests that users are not only finding the data sets they need but are also able to consume them readily in the format they need.
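A WMS GetMap request of the kind SDAT issues on a user's behalf can be reproduced directly, as in the Python sketch below. The server URL and layer name are hypothetical; the query parameters are standard WMS 1.1.1 keys.

```python
# Sketch of a standard OGC WMS GetMap request. The base URL and layer name
# are hypothetical; the query parameters are standard WMS 1.1.1 keys.
import requests

params = {
    "service": "WMS",
    "version": "1.1.1",
    "request": "GetMap",
    "layers": "ornl:example_dataset",  # hypothetical layer name
    "styles": "",
    "srs": "EPSG:4326",
    "bbox": "-180,-90,180,90",
    "width": 1024,
    "height": 512,
    "format": "image/png",
}
response = requests.get("https://webmap.example.org/wms", params=params)
with open("map.png", "wb") as f:
    f.write(response.content)  # save the rendered map image
```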
Web services in the U.S. Geological Survey StreamStats web application
Guthrie, J.D.; Dartiguenave, C.; Ries, Kernell G.
2009-01-01
StreamStats is a U.S. Geological Survey Web-based GIS application developed as a tool for water-resources planning and management, engineering design, and other applications. StreamStats' primary functionality allows users to obtain drainage-basin boundaries, basin characteristics, and streamflow statistics for gaged and ungaged sites. Recently, Web services have been developed that provide remote users and applications with access to the comprehensive GIS tools available in StreamStats, including delineating drainage-basin boundaries, computing basin characteristics, estimating streamflow statistics for user-selected locations, and determining point features that coincide with a National Hydrography Dataset (NHD) reach address. For the state of Kentucky, a web service also has been developed that provides users the ability to estimate daily time series of drainage-basin average values of daily precipitation and temperature. The use of web services allows users to take full advantage of the datasets and processes behind the StreamStats application without having to develop and maintain them. © 2009 IEEE.
The impact of web services at the IRIS DMC
NASA Astrophysics Data System (ADS)
Weekly, R. T.; Trabant, C. M.; Ahern, T. K.; Stults, M.; Suleiman, Y. Y.; Van Fossen, M.; Weertman, B.
2015-12-01
The IRIS Data Management Center (DMC) has served the seismological community for nearly 25 years. In that time we have offered data and information from our archive using a variety of mechanisms ranging from email-based to desktop applications to web applications and web services. Of these, web services have quickly become the primary method for data extraction at the DMC. In 2011, the first full year of operation, web services accounted for over 40% of the data shipped from the DMC. In 2014, approximately 450 TB of data was delivered directly to users through web services, representing nearly 70% of all shipments from the DMC that year. In addition to handling requests directly from users, the DMC switched all data extraction methods to use web services in 2014. On average the DMC now handles between 10 and 20 million requests per day submitted to web service interfaces. The rapid adoption of web services is attributed to the many advantages they bring. For users, they provide on-demand data using an interface technology, HTTP, that is widely supported in nearly every computing environment and language. These characteristics, combined with human-readable documentation and existing tools, make integration of data access into existing workflows relatively easy. For the DMC, the web services provide an abstraction layer to internal repositories, allowing for concentrated optimization of extraction workflows and easier evolution of those repositories. Lending further support to the DMC's push in this direction, the core web services for station metadata, timeseries data and event parameters were adopted as standards by the International Federation of Digital Seismograph Networks (FDSN). We expect to continue enhancing existing services and building new capabilities for this platform. For example, the DMC has created a federation system and tools allowing researchers to discover and collect seismic data from data centers running the FDSN-standardized services. A future capability will leverage the DMC's MUSTANG project to select data based on data quality measurements. Within five years, the DMC's web services have proven to be a robust and flexible platform that enables continued growth for the DMC. We expect continued enhancements and adoption of web services.
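Querying an FDSN-standardized service is a single HTTP request, as the sketch below shows for the fdsnws-station interface. The interface itself is an FDSN standard; the network and station values are merely examples.

```python
# Sketch of querying an FDSN-standardized web service of the kind the DMC
# exposes. The fdsnws-station interface is an FDSN standard; the network
# and station values are just examples.
import requests

params = {
    "network": "IU",
    "station": "ANMO",
    "level": "channel",   # return channel-level metadata
    "format": "text",     # human-readable pipe-delimited output
}
response = requests.get("https://service.iris.edu/fdsnws/station/1/query",
                        params=params)
print(response.text)
```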
Security and Efficiency Concerns With Distributed Collaborative Networking Environments
2003-09-01
have the ability to access Web communications services of the WebEx MediaTone Network from a single login. [24] WebEx provides a range of secure...Web. WebEx services enable secure data, voice and video communications through the browser and are supported by the WebEx MediaTone Network, a global...designed to host large-scale, structured events and conferences, featuring a Q&A Manager that allows multiple moderators to handle questions while
Using EMBL-EBI services via Web interface and programmatically via Web Services
Lopez, Rodrigo; Cowley, Andrew; Li, Weizhong; McWilliam, Hamish
2015-01-01
The European Bioinformatics Institute (EMBL-EBI) provides access to a wide range of databases and analysis tools that are of key importance in bioinformatics. As well as providing Web interfaces to these resources, Web Services are available using SOAP and REST protocols that enable programmatic access to our resources and allow their integration into other applications and analytical workflows. This unit describes the various options available to a typical researcher or bioinformatician who wishes to use our resources via Web interface or programmatically via a range of programming languages. PMID:25501941
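Programmatic access to the analysis tools typically follows a run/status/result pattern over REST, as in the hedged Python sketch below. The tool path, parameter names, and result type reflect common EMBL-EBI usage but should be checked against the current Web Services documentation.

```python
# Hedged sketch of programmatic access to an EMBL-EBI analysis tool via its
# REST interface, following the common run/status/result pattern. Endpoint
# paths and parameter names should be verified against current EBI docs.
import time
import requests

BASE = "https://www.ebi.ac.uk/Tools/services/rest/ncbiblast"

# Submit a job; the service returns a job identifier as plain text.
job_id = requests.post(f"{BASE}/run", data={
    "email": "user@example.org",  # contact address required by the service
    "program": "blastp",
    "database": "uniprotkb_swissprot",
    "stype": "protein",
    "sequence": "MKTAYIAKQRQISFVKSHFSRQLEERLGLIEVQ",
}).text

# Poll until the job leaves the RUNNING state.
while requests.get(f"{BASE}/status/{job_id}").text == "RUNNING":
    time.sleep(5)

# Fetch the default text output.
print(requests.get(f"{BASE}/result/{job_id}/out").text)
```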
Web Services and Other Enhancements at the Northern California Earthquake Data Center
NASA Astrophysics Data System (ADS)
Neuhauser, D. S.; Zuzlewski, S.; Allen, R. M.
2012-12-01
The Northern California Earthquake Data Center (NCEDC) provides data archive and distribution services for seismological and geophysical data sets that encompass northern California. The NCEDC is enhancing its ability to deliver rapid information through Web Services. NCEDC Web Services use well-established web server and client protocols and REST software architecture to allow users to easily make queries using web browsers or simple program interfaces and to receive the requested data in real-time rather than through batch or email-based requests. Data are returned to the user in the appropriate format such as XML, RESP, or MiniSEED depending on the service, and are compatible with the equivalent IRIS DMC web services. The NCEDC is currently providing the following Web Services: (1) Station inventory and channel response information delivered in StationXML format, (2) Channel response information delivered in RESP format, (3) Time series availability delivered in text and XML formats, (4) Single channel and bulk data request delivered in MiniSEED format. The NCEDC is also developing a rich Earthquake Catalog Web Service to allow users to query earthquake catalogs based on selection parameters such as time, location or geographic region, magnitude, depth, azimuthal gap, and rms. It will return (in QuakeML format) user-specified results that can include simple earthquake parameters, as well as observations such as phase arrivals, codas, amplitudes, and computed parameters such as first motion mechanisms, moment tensors, and rupture length. The NCEDC will work with both IRIS and the International Federation of Digital Seismograph Networks (FDSN) to define a uniform set of web service specifications that can be implemented by multiple data centers to provide users with a common data interface across data centers. The NCEDC now hosts earthquake catalogs and waveforms from the US Department of Energy (DOE) Enhanced Geothermal Systems (EGS) monitoring networks. These data can be accessed through the above web services and through special NCEDC web pages.
Seahawk: moving beyond HTML in Web-based bioinformatics analysis.
Gordon, Paul M K; Sensen, Christoph W
2007-06-18
Traditional HTML interfaces for input to and output from Bioinformatics analysis on the Web are highly variable in style, content and data formats. Combining multiple analyses can therefore be an onerous task for biologists. Semantic Web Services allow automated discovery of conceptual links between remote data analysis servers. A shared data ontology and service discovery/execution framework is particularly attractive in Bioinformatics, where data and services are often both disparate and distributed. Instead of biologists copying, pasting and reformatting data between various Web sites, Semantic Web Service protocols such as MOBY-S hold out the promise of seamlessly integrating multi-step analysis. We have developed a program (Seahawk) that allows biologists to intuitively and seamlessly chain together Web Services using a data-centric, rather than the customary service-centric approach. The approach is illustrated with a ferredoxin mutation analysis. Seahawk concentrates on lowering entry barriers for biologists: no prior knowledge of the data ontology, or relevant services is required. In stark contrast to other MOBY-S clients, in Seahawk users simply load Web pages and text files they already work with. Underlying the familiar Web-browser interaction is an XML data engine based on extensible XSLT style sheets, regular expressions, and XPath statements which import existing user data into the MOBY-S format. As an easily accessible applet, Seahawk moves beyond standard Web browser interaction, providing mechanisms for the biologist to concentrate on the analytical task rather than on the technical details of data formats and Web forms. As the MOBY-S protocol nears a 1.0 specification, we expect more biologists to adopt these new semantic-oriented ways of doing Web-based analysis, which empower them to do more complicated, ad hoc analysis workflow creation without the assistance of a programmer.
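The XSLT-driven import step described above can be illustrated with lxml, as in the sketch below: a style sheet maps a user's XML into a target format. The style sheet and element names here are toy stand-ins, not Seahawk's actual MOBY-S transformations.

```python
# Sketch of the XSLT-based import step Seahawk's data engine performs:
# transforming user data into a target XML format with lxml. The stylesheet
# is a toy stand-in, not Seahawk's actual MOBY-S style sheets.
from lxml import etree

xslt = etree.XML(b"""\
<xsl:stylesheet version="1.0" xmlns:xsl="http://www.w3.org/1999/XSL/Transform">
  <xsl:template match="/records">
    <mobyData>
      <xsl:for-each select="record">
        <item><xsl:value-of select="@id"/></item>
      </xsl:for-each>
    </mobyData>
  </xsl:template>
</xsl:stylesheet>
""")
transform = etree.XSLT(xslt)

# A user's existing XML document, imported into the target format.
user_doc = etree.XML(b'<records><record id="P12345"/><record id="P67890"/></records>')
print(etree.tostring(transform(user_doc), pretty_print=True).decode())
```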
Blodgett, David L.; Booth, Nathaniel L.; Kunicki, Thomas C.; Walker, Jordan I.; Viger, Roland J.
2011-01-01
Interest in sharing interdisciplinary environmental modeling results and related data is increasing among scientists. The U.S. Geological Survey Geo Data Portal project enables data sharing by assembling open-standard Web services into an integrated data retrieval and analysis Web application design methodology that streamlines time-consuming and resource-intensive data management tasks. Data-serving Web services allow Web-based processing services to access Internet-available data sources. The Web processing services developed for the project create commonly needed derivatives of data in numerous formats. Coordinate reference system manipulation and spatial statistics calculation components implemented for the Web processing services were confirmed using ArcGIS 9.3.1, a geographic information science software package. Outcomes of the Geo Data Portal project support the rapid development of user interfaces for accessing and manipulating environmental data.
Enhancing UCSF Chimera through web services
Huang, Conrad C.; Meng, Elaine C.; Morris, John H.; Pettersen, Eric F.; Ferrin, Thomas E.
2014-01-01
Integrating access to web services with desktop applications allows for an expanded set of application features, including performing computationally intensive tasks and convenient searches of databases. We describe how we have enhanced UCSF Chimera (http://www.rbvi.ucsf.edu/chimera/), a program for the interactive visualization and analysis of molecular structures and related data, through the addition of several web services (http://www.rbvi.ucsf.edu/chimera/docs/webservices.html). By streamlining access to web services, including the entire job submission, monitoring and retrieval process, Chimera makes it simpler for users to focus on their science projects rather than data manipulation. Chimera uses Opal, a toolkit for wrapping scientific applications as web services, to provide scalable and transparent access to several popular software packages. We illustrate Chimera's use of web services with an example workflow that interleaves use of these services with interactive manipulation of molecular sequences and structures, and we provide an example Python program to demonstrate how easily Opal-based web services can be accessed from within an application. Web server availability: http://webservices.rbvi.ucsf.edu/opal2/dashboard?command=serviceList. PMID:24861624
Automated geospatial Web Services composition based on geodata quality requirements
NASA Astrophysics Data System (ADS)
Cruz, Sérgio A. B.; Monteiro, Antonio M. V.; Santos, Rafael
2012-10-01
Service-Oriented Architecture and Web Services technologies improve the performance of activities involved in geospatial analysis on a distributed computing architecture. However, designing a geospatial analysis process on this platform by combining component Web Services presents some open issues, and the automated construction of these compositions is an important research topic. Some approaches to solving this problem are based on AI planning methods coupled with semantic service descriptions. This work presents a new approach that uses AI planning methods to improve the robustness of the produced geospatial Web Services composition. For this purpose, we use semantic descriptions of geospatial data quality requirements in rule-based form. These rules allow the semantic annotation of geospatial data and, coupled with a conditional planning method, represent more precisely the nonconformities with geodata quality that may occur during execution of the Web Service composition. The service compositions produced by this method are more robust, improving process reliability when working with a composition of chained geospatial Web Services.
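The core of planning-based composition is matching service outputs to the inputs of subsequent services until a goal data type is reached. The toy Python sketch below does only this forward chaining, with invented service names; it omits the conditional handling of quality nonconformities that distinguishes the paper's method.

```python
# Toy sketch of chaining services by matching outputs to inputs, the core
# idea behind AI-planning-based composition. Service names and data types
# are invented; real planners also handle data-quality rules and branching.

services = [
    {"name": "Reproject", "inputs": {"raster"},        "outputs": {"raster_wgs84"}},
    {"name": "CloudMask", "inputs": {"raster_wgs84"},  "outputs": {"masked_raster"}},
    {"name": "NDVI",      "inputs": {"masked_raster"}, "outputs": {"ndvi_map"}},
]

def compose(available, goal):
    """Greedy forward chaining from available data types to a goal type."""
    plan, have = [], set(available)
    while goal not in have:
        # Pick a service whose inputs are satisfied and which adds something new.
        step = next((s for s in services
                     if s["inputs"] <= have and not s["outputs"] <= have), None)
        if step is None:
            raise ValueError("no composition reaches the goal")
        plan.append(step["name"])
        have |= step["outputs"]
    return plan

print(compose({"raster"}, "ndvi_map"))  # ['Reproject', 'CloudMask', 'NDVI']
```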
BioCatalogue: a universal catalogue of web services for the life sciences
Bhagat, Jiten; Tanoh, Franck; Nzuobontane, Eric; Laurent, Thomas; Orlowski, Jerzy; Roos, Marco; Wolstencroft, Katy; Aleksejevs, Sergejs; Stevens, Robert; Pettifer, Steve; Lopez, Rodrigo; Goble, Carole A.
2010-01-01
The use of Web Services to enable programmatic access to on-line bioinformatics is becoming increasingly important in the Life Sciences. However, their number, distribution and the variable quality of their documentation can make their discovery and subsequent use difficult. A Web Services registry with information on available services will help to bring together service providers and their users. The BioCatalogue (http://www.biocatalogue.org/) provides a common interface for registering, browsing and annotating Web Services to the Life Science community. Services in the BioCatalogue can be described and searched in multiple ways based upon their technical types, bioinformatics categories, user tags, service providers or data inputs and outputs. They are also subject to constant monitoring, allowing the identification of service problems and changes and the filtering-out of unavailable or unreliable resources. The system is accessible via a human-readable ‘Web 2.0’-style interface and a programmatic Web Service interface. The BioCatalogue follows a community approach in which all services can be registered, browsed and incrementally documented with annotations by any member of the scientific community. PMID:20484378
Data Mining Web Services for Science Data Repositories
NASA Astrophysics Data System (ADS)
Graves, S.; Ramachandran, R.; Keiser, K.; Maskey, M.; Lynnes, C.; Pham, L.
2006-12-01
The maturation of web services standards and technologies sets the stage for a distributed "Service-Oriented Architecture" (SOA) for NASA's next-generation science data processing. This architecture will allow members of the scientific community to create and combine persistent distributed data processing services and make them available to other users over the Internet. NASA has initiated a project to create a suite of specialized data mining web services designed specifically for science data. The project leverages the Algorithm Development and Mining (ADaM) toolkit as its basis. The ADaM toolkit is a robust, mature and freely available science data mining toolkit that is being used by several research organizations and educational institutions worldwide. These mining services will give the scientific community a powerful and versatile data mining capability that can be used to create higher-order products, such as thematic maps, from current and future NASA satellite data records with methods that are not currently available. The package of mining and related services is being developed using Web Services standards so that community-based measurement processing systems can access and interoperate with them. These standards-based services allow users different options for utilizing them, from direct remote invocation by a client application to deployment of a Business Process Execution Language (BPEL) solutions package where a complex data mining workflow is exposed to others as a single service. The ability to deploy and operate these services at a data archive allows the data mining algorithms to be run where the data are stored, a more efficient scenario than moving large amounts of data over the network. This will be demonstrated in a scenario in which a user uses a remote Web-Service-enabled clustering algorithm to create cloud masks from satellite imagery at the Goddard Earth Sciences Data and Information Services Center (GES DISC).
NASA Astrophysics Data System (ADS)
Wibonele, Kasanda J.; Zhang, Yanqing
2002-03-01
A web data mining system using granular computing and ASP programming is proposed. It is a web-based application that allows web users to submit survey data for many different companies. The survey is a collection of questions that will help these companies develop and improve their business and customer service with their clients through analysis of the survey data. The web application allows users to submit data from anywhere. All survey data are collected into a database for further analysis. An administrator of the web application can log in to the system and view all the data submitted. The web application resides on a web server, and the database resides on an MS SQL server.
Persistent identifiers for web service requests relying on a provenance ontology design pattern
NASA Astrophysics Data System (ADS)
Car, Nicholas; Wang, Jingbo; Wyborn, Lesley; Si, Wei
2016-04-01
Delivering provenance information for datasets produced from static inputs is relatively straightforward: we represent the processing actions and data flow using provenance ontologies and link to copies of the inputs stored in repositories. If appropriate detail is given, the provenance information can then describe what actions have occurred (transparency) and enable reproducibility. When web service-generated data is used by a process to create a dataset instead of static inputs, we need sophisticated provenance representations of the web service request, as we can no longer just link to data stored in a repository. A graph-based provenance representation, such as the W3C's PROV standard, can be used to model the web service request both as a single conceptual dataset and as a small workflow with a number of components within the same provenance report. This dual representation does more than just allow simplified or detailed views of a dataset's production to be used where appropriate. It also allows persistent identifiers to be assigned to instances of web service requests, thus enabling one form of dynamic data citation, and allows those identifiers to resolve to whatever level of detail implementers think appropriate in order for that web service request to be reproduced. In this presentation we detail our reasoning in representing web service requests as small workflows. In outline, this stems from the idea that web service requests are perdurant things, and in order to most easily persist knowledge of them for provenance, we should represent them as a nexus of relationships between endurant things, such as datasets and knowledge of particular system types, as endurant things are far easier to persist. We also describe the ontology design pattern that we use to represent workflows in general and how we apply it to different types of web service requests. We give examples of specific web service request instances made by systems at Australia's National Computational Infrastructure and show how one can 'click' through provenance interfaces to see the dual representations of the requests using provenance management tooling we have built.
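The dual representation can be expressed with W3C PROV terms, as in the hedged rdflib sketch below: the request is an activity that used an entity capturing its parameters, and the resulting dataset was generated by it. All URIs are hypothetical.

```python
# Hedged sketch of the dual-representation idea using W3C PROV terms in
# rdflib: a web service request modeled as an activity, with the request
# parameters as a used entity and the dataset as its output. URIs are
# hypothetical placeholders.
from rdflib import Graph, Namespace
from rdflib.namespace import RDF

PROV = Namespace("http://www.w3.org/ns/prov#")
EX = Namespace("http://example.org/")

g = Graph()
request = EX["wps-request/42"]         # persistent identifier for the request
params = EX["wps-request/42/params"]   # stored copy of the request parameters
dataset = EX["dataset/subset-0042"]

g.add((request, RDF.type, PROV.Activity))
g.add((params, RDF.type, PROV.Entity))
g.add((dataset, RDF.type, PROV.Entity))
g.add((request, PROV.used, params))
g.add((dataset, PROV.wasGeneratedBy, request))

print(g.serialize(format="turtle"))
```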
A SCORM Thin Client Architecture for E-Learning Systems Based on Web Services
ERIC Educational Resources Information Center
Casella, Giovanni; Costagliola, Gennaro; Ferrucci, Filomena; Polese, Giuseppe; Scanniello, Giuseppe
2007-01-01
In this paper we propose an architecture of e-learning systems characterized by the use of Web services and a suitable middleware component. These technical infrastructures allow us to extend the system with new services as well as to integrate and reuse heterogeneous software e-learning components. Moreover, they let us better support the…
Analysis Tool Web Services from the EMBL-EBI.
McWilliam, Hamish; Li, Weizhong; Uludag, Mahmut; Squizzato, Silvano; Park, Young Mi; Buso, Nicola; Cowley, Andrew Peter; Lopez, Rodrigo
2013-07-01
Since 2004 the European Bioinformatics Institute (EMBL-EBI) has provided access to a wide range of databases and analysis tools via Web Services interfaces. This comprises services to search across the databases available from the EMBL-EBI and to explore the network of cross-references present in the data (e.g. EB-eye), services to retrieve entry data in various data formats and to access the data in specific fields (e.g. dbfetch), and analysis tool services, for example, sequence similarity search (e.g. FASTA and NCBI BLAST), multiple sequence alignment (e.g. Clustal Omega and MUSCLE), pairwise sequence alignment and protein functional analysis (e.g. InterProScan and Phobius). The REST/SOAP Web Services (http://www.ebi.ac.uk/Tools/webservices/) interfaces to these databases and tools allow their integration into other tools, applications, web sites, pipeline processes and analytical workflows. To get users started using the Web Services, sample clients are provided covering a range of programming languages and popular Web Service tool kits, and a brief guide to Web Services technologies, including a set of tutorials, is available for those wishing to learn more and develop their own clients. Users of the Web Services are informed of improvements and updates via a range of methods.
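For example, retrieving a single entry via the dbfetch service mentioned above is one HTTP request, as sketched below; dbfetch is a documented EMBL-EBI service, and the database, identifier, and format values are merely examples.

```python
# Sketch of retrieving an entry via the EMBL-EBI dbfetch service mentioned
# above. The database, identifier, and format values are just examples.
import requests

params = {
    "db": "uniprotkb",
    "id": "P12345",
    "format": "fasta",
    "style": "raw",
}
response = requests.get("https://www.ebi.ac.uk/Tools/dbfetch/dbfetch",
                        params=params)
print(response.text)  # the entry in FASTA format
```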
Identifying Experts and Authoritative Documents in Social Bookmarking Systems
ERIC Educational Resources Information Center
Grady, Jonathan P.
2013-01-01
Social bookmarking systems allow people to create pointers to Web resources in a shared, Web-based environment. These services allow users to add free-text labels, or "tags", to their bookmarks as a way to organize resources for later recall. Ease-of-use, low cognitive barriers, and a lack of controlled vocabulary have allowed social…
Scalable web services for the PSIPRED Protein Analysis Workbench.
Buchan, Daniel W A; Minneci, Federico; Nugent, Tim C O; Bryson, Kevin; Jones, David T
2013-07-01
Here, we present the new UCL Bioinformatics Group's PSIPRED Protein Analysis Workbench. The Workbench unites all of our previously available analysis methods into a single web-based framework. The new web portal provides a greatly streamlined user interface with a number of new features to allow users to better explore their results. We offer a number of additional services to enable computationally scalable execution of our prediction methods; these include SOAP and XML-RPC web server access and new HADOOP packages. All software and services are available via the UCL Bioinformatics Group website at http://bioinf.cs.ucl.ac.uk/.
Java bioinformatics analysis web services for multiple sequence alignment--JABAWS:MSA.
Troshin, Peter V; Procter, James B; Barton, Geoffrey J
2011-07-15
JABAWS is a web services framework that simplifies the deployment of web services for bioinformatics. JABAWS:MSA provides services for five multiple sequence alignment (MSA) methods (Probcons, T-coffee, Muscle, Mafft and ClustalW), and is the system employed by the Jalview multiple sequence analysis workbench since version 2.6. A fully functional, easy-to-set-up server is provided as a Virtual Appliance (VA), which can be run on most operating systems that support a virtualization environment such as VMware or Oracle VirtualBox. JABAWS is also distributed as a Web Application aRchive (WAR) and can be configured to run on a single computer and/or a cluster managed by Grid Engine, LSF or other queuing systems that support DRMAA. JABAWS:MSA provides clients with full access to each application's parameters, and allows administrators to specify named parameter preset combinations and execution limits for each application through simple configuration files. The JABAWS command-line client allows integration of JABAWS services into conventional scripts. JABAWS is made freely available under the Apache 2 license and can be obtained from: http://www.compbio.dundee.ac.uk/jabaws.
Data as a Service: A Seismic Web Service Pipeline
NASA Astrophysics Data System (ADS)
Martinez, E.
2016-12-01
Publishing data as a service pipeline provides an improved, dynamic approach over static data archives. A service pipeline is a collection of micro web services that each perform a specific task and expose the results of that task. Structured request/response formats allow micro web services to be chained together into a service pipeline that provides more complex results. The U.S. Geological Survey adopted service pipelines to publish seismic hazard and design data supporting both specific and generalized audiences. The seismic web service pipeline starts at source data and exposes probabilistic and deterministic hazard curves, response spectra, risk-targeted ground motions, and seismic design provision metadata. The pipeline supports public and private organizations as well as individual engineers and researchers. Publishing data as a service pipeline provides a variety of benefits. Exposing the component services enables advanced users to inspect or use the data at each processing step. Exposing a composite service gives new users quick access to published data with a very low barrier to entry. Advanced users may re-use micro web services by chaining them in new ways or injecting new micro services into the pipeline; this allows a user to test hypotheses and compare their results to published results. Exposing data at each step in the pipeline enables users to review and validate the data and process more quickly and accurately. Making the source code open source, per USGS policy, further enables this transparency. Each micro service may be scaled independently of any other micro service, ensuring that data remain available and timely in a cost-effective manner regardless of load. Additionally, if a new or more efficient approach to processing the data is discovered, it may replace the old approach at any time, keeping the pipeline running while not affecting other micro services.
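Consuming such a pipeline means passing one micro service's structured output to the next, as in the Python sketch below. The base URL, endpoint paths, and fields are hypothetical stand-ins for the USGS services described above.

```python
# Sketch of consuming a micro-service pipeline: fetch an intermediate
# product from one component service and pass it to the next. Endpoint
# paths and field names are hypothetical stand-ins.
import requests

BASE = "https://earthquake.example.gov/ws"  # hypothetical base URL

# Step 1: request a hazard curve for a site from one micro service.
curve = requests.get(f"{BASE}/hazard-curve",
                     params={"latitude": 34.05, "longitude": -118.25}).json()

# Step 2: feed that structured result to a downstream micro service that
# derives a response spectrum from it.
spectrum = requests.post(f"{BASE}/response-spectrum", json=curve).json()
print(spectrum)
```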
NASA Astrophysics Data System (ADS)
Ahern, T. K.; Barga, R.; Casey, R.; Kamb, L.; Parastatidis, S.; Stromme, S.; Weertman, B. T.
2008-12-01
While mature methods of accessing seismic data from the IRIS DMC have existed for decades, the demands for improved interdisciplinary data integration call for new approaches. Talented software teams at the IRIS DMC, UNAVCO and the ICDP in Germany have been developing web services for all EarthScope data, including data from USArray, PBO and SAFOD. These web services are based upon SOAP and WSDL. The EarthScope Data Portal was the first external system to access data holdings from the IRIS DMC using Web Services. EarthScope will also draw more heavily upon products to aid in cross-disciplinary data reuse. A Product Management System called SPADE allows archive of and access to heterogeneous data products, presented as XML documents, at the IRIS DMC. Searchable metadata are extracted from the XML and enable powerful searches for products from EarthScope and other data sources. IRIS is teaming with the External Research Group at Microsoft Research to leverage a powerful Scientific Workflow Engine (Trident) and interact with the web services developed at centers such as IRIS to enable access to data services as well as computational services. We believe that this approach will allow web-based control of workflows and the invocation of computational services that transform data. This capability will greatly improve access to data across scientific disciplines. This presentation will review some of the traditional access tools as well as many of the newer approaches that use web services and scientific workflows to improve interdisciplinary data access.
Tardiole Kuehne, Bruno; Estrella, Julio Cezar; Nunes, Luiz Henrique; Martins de Oliveira, Edvard; Hideo Nakamura, Luis; Gomes Ferreira, Carlos Henrique; Carlucci Santana, Regina Helena; Reiff-Marganiec, Stephan; Santana, Marcos José
2015-01-01
This paper proposes a system named AWSCS (Automatic Web Service Composition System) to evaluate different approaches for automatic composition of Web services, based on QoS parameters that are measured at execution time. The AWSCS is a system to implement different approaches for automatic composition of Web services and also to execute the resulting flows from these approaches. To demonstrate the results of this paper, a scenario was developed in which empirical flows were built to show the operation of AWSCS, since algorithms for automatic composition are not readily available to test. The results allow us to study the behaviour of running composite Web services when flows with the same functionality but different problem-solving strategies are compared. Furthermore, we observed that both the load applied to the running system and the type of load submitted are important factors in defining which approach to Web service composition can achieve the best performance in production. PMID:26068216
Tools for Integrating Data Access from the IRIS DMC into Research Workflows
NASA Astrophysics Data System (ADS)
Reyes, C. G.; Suleiman, Y. Y.; Trabant, C.; Karstens, R.; Weertman, B. R.
2012-12-01
Web service interfaces at the IRIS Data Management Center (DMC) provide access to a vast archive of seismological and related geophysical data. These interfaces are designed to easily incorporate data access into data processing workflows. Examples of data that may be accessed include time series data, related metadata, and earthquake information. The DMC has developed command line scripts, MATLAB® interfaces and a Java library to support a wide variety of data access needs. Users of these interfaces do not need to concern themselves with web service details, networking, or even (in most cases) data conversion. Fetch scripts allow access to the DMC archive and are a comfortable fit for command line users. These scripts are written in Perl and are well suited for automation and integration into existing workflows on most operating systems. For metadata and event information, the Fetch scripts even parse the returned data into simple text summaries. The IRIS Java Web Services Library (IRIS-WS Library) allows Java developers to create programs that access the DMC archives seamlessly. By returning the data and information as native Java objects, the Library insulates the developer from data formats, network programming and web service details. The MATLAB interfaces leverage this library to allow users access to the DMC archive directly from within MATLAB (r2009b or newer), returning data into variables for immediate use. Data users and research groups are developing other toolkits that use the DMC's web services. Notably, the ObsPy framework developed at LMU Munich is a Python toolbox that allows seamless access to data and information via the DMC services. Another example is the MATLAB-based GISMO and Waveform Suite developments, which can now access data via web services. In summary, there now exist a host of ways that researchers can bring IRIS DMC data directly into their workflows: MATLAB users can use irisFetch.m, command line users can use the various Fetch scripts, Java users can use the IRIS-WS library, and Python users may request data through ObsPy. To learn more about any of these clients see http://www.iris.edu/ws/wsclients/.
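For a concrete flavor of the ObsPy route mentioned above, the sketch below fetches a short waveform segment from the IRIS FDSN web services using the current ObsPy API; the network, station, and channel codes are illustrative choices, and any valid codes would work the same way.

```python
# Minimal ObsPy example: fetch one hour of waveform data from IRIS.
# Station/channel codes are illustrative; any valid codes work.
from obspy import UTCDateTime
from obspy.clients.fdsn import Client

client = Client("IRIS")
t0 = UTCDateTime("2012-01-01T00:00:00")

# Request one hour of vertical-component data and plot it.
stream = client.get_waveforms("IU", "ANMO", "00", "BHZ", t0, t0 + 3600)
print(stream)
stream.plot()
```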
Federal Register 2010, 2011, 2012, 2013, 2014
2012-02-15
... DEPARTMENT OF DEFENSE Office of the Secretary Termination of the Department of Defense Web-Based..., entitled Web-Based TRICARE Assistance Program (TRIAP). The demonstration project uses existing health care support contracts (HCSC) to allow web-based behavioral health and related services including non-medical...
Dmitrieva, Olga; Michalakidis, Georgios; Mason, Aaron; Jones, Simon; Chan, Tom; de Lusignan, Simon
2012-01-01
A new distributed model of health care management is being introduced in England. Family practitioners have new responsibilities for the management of health care budgets and commissioning of services. There are national datasets available about health care providers and the geographical areas they serve. These data could be better used to assist family practitioners turned health service commissioners. Unfortunately these data are not in a form that is readily usable by these fledgling family commissioning groups. We therefore web-enabled all the national hospital dermatology treatment data in England, combining it with locality data to provide a smart commissioning tool for local communities. We used open-source software including the Ruby on Rails Web framework and MySQL. The system has a Web front-end, which uses hypertext markup language, cascading style sheets (HTML/CSS) and JavaScript to deliver and present data provided by the database. A combination of advanced caching and schema structures allows for faster data retrieval on every execution. The system provides an intuitive environment for data analysis and processing across a large health system dataset. Web-enablement has allowed data about inpatients, day cases and outpatients to be readily grouped, viewed, and linked to other data. The combination of web-enablement, consistent data collection from all providers, readily available locality data, and a registration-based primary care system enables the creation of data which can be used to commission dermatology services in small areas. Standardized datasets collected across large health enterprises, when web-enabled, can readily benchmark local services and inform commissioning decisions.
Applications of Dynamic Deployment of Services in Industrial Automation
NASA Astrophysics Data System (ADS)
Candido, Gonçalo; Barata, José; Jammes, François; Colombo, Armando W.
Service-oriented Architecture (SOA) is becoming a de facto paradigm for business and enterprise integration. SOA is expanding into several domains of application, envisioning a unified solution suitable across all the different layers of an enterprise infrastructure. The application of SOA based on open web standards can significantly enhance the interoperability and openness of those devices. By embedding a dynamic deployment service even into small field devices, it becomes possible both to let machine builders ship built-in services and to let the integrator deploy, at run time, the services that best fit the current application. This approach allows the developer to keep his own preferred development language and still deliver a SOA-compliant application. A dynamic deployment service is envisaged as a fundamental framework to support more complex applications, reducing deployment delays while increasing overall system agility. As a use-case scenario, a dynamic deployment service was implemented over the DPWS and WS-Management specifications, allowing an automation application to be designed and programmed using IEC 61131 languages and its components to be deployed as web services into devices.
ASON: An OWL-S based ontology for astrophysical services
NASA Astrophysics Data System (ADS)
Louge, T.; Karray, M. H.; Archimède, B.; Knödlseder, J.
2018-07-01
Modern astrophysics heavily relies on Web services to expose most of the data coming from many different instruments and research efforts worldwide. The virtual observatory (VO) has been designed to allow scientists to locate, retrieve and analyze useful information among those heterogeneous data. The use of ontologies has been studied in the VO context for astrophysical concerns such as object types or astrophysical service subjects. From an operational point of view, the ontological description of astrophysical services for interoperability and querying still has to be addressed. In this paper, we design a global ontology (Astrophysical Services ONtology, ASON) based on the Web Ontology Language for Services (OWL-S) to enhance the description of existing astrophysical services. By expressing VO-specific and non-VO-specific service designs together, ASON improves the automation of service queries and allows automatic composition of heterogeneous astrophysical services.
EnviroAtlas - Recreation, Culture, and Aesthetics Metrics for Conterminous United States
This EnviroAtlas web service supports research and online mapping activities related to EnviroAtlas (https://www.epa.gov/enviroatlas). The Recreation, Culture, and Aesthetics category in this web service includes layers illustrating the ecosystems and natural resources that provide inherent cultural and aesthetic value or recreation opportunity, the need or demand for these amenities, the impacts associated with their presence and accessibility, and factors that place stress on the natural environment's capability to provide these benefits. EnviroAtlas allows the user to interact with a web-based, easy-to-use, mapping application to view and analyze multiple ecosystem services for the conterminous United States. Additional descriptive information about each attribute in this web service is located within each web service layer (see Full Metadata hyperlink) or can be found in its associated EnviroAtlas Fact Sheet (https://www.epa.gov/enviroatlas/enviroatlas-fact-sheets).
Graph and Network for Model Elicitation (GNOME Phase 2)
2013-02-01
Report sections cover GNOME UI components for the NOEM web client and sampling in the web client. The server-side service can run and generate data asynchronously, allowing a cluster of servers to run the sampling.
SWS: accessing SRS sites contents through Web Services.
Romano, Paolo; Marra, Domenico
2008-03-26
Web Services and Workflow Management Systems can support the creation and deployment of network systems able to automate data analysis and retrieval processes in biomedical research. Web Services have been implemented at bioinformatics centres and workflow systems have been proposed for biological data analysis. New databanks are often developed by taking these technologies into account, but many existing databases do not allow programmatic access. Only a fraction of available databanks can thus be queried through programmatic interfaces. SRS is a well-known indexing and search engine for biomedical databanks offering public access to many databanks and analysis tools. Unfortunately, these data are not easily and efficiently accessible through Web Services. We have developed 'SRS by WS' (SWS), a tool that makes information available in SRS sites accessible through Web Services. Information on known sites is maintained in a database, srsdb. SWS consists of a suite of Web Services that can query both srsdb, for information on sites and databases, and SRS sites. SWS returns results in a text-only format and can be accessed through a WSDL-compliant client. SWS enables interoperability between workflow systems and SRS implementations, by also managing access to alternative sites, in order to cope with network and maintenance problems, and by selecting the most up-to-date among available systems. The development and implementation of Web Services that allow programmatic access to an exhaustive set of biomedical databases can significantly improve the automation of in-silico analysis. SWS supports this activity by making biological databanks that are managed in public SRS sites available through a programmatic interface.
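A WSDL-compliant client of the kind the abstract describes can be sketched with the zeep library; the WSDL location and the operation name below are placeholders for illustration, not the documented SWS interface.

```python
# Hypothetical sketch of calling a WSDL-described SOAP service such as SWS.
# The WSDL URL and operation name are assumptions; consult the SWS docs.
from zeep import Client

WSDL = "http://example.org/sws/sws.wsdl"  # placeholder WSDL location

client = Client(WSDL)

# List the services the WSDL advertises, then invoke one operation.
for service in client.wsdl.services.values():
    print(service.name)
databases = client.service.getDatabases()  # illustrative operation name
print(databases)
```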
Service-based analysis of biological pathways
Zheng, George; Bouguettaya, Athman
2009-01-01
Background Computer-based pathway discovery is concerned with two important objectives: pathway identification and analysis. Conventional mining and modeling approaches aimed at pathway discovery are often effective at achieving either objective, but not both. Such limitations can be effectively tackled by leveraging a Web service-based modeling and mining approach. Results Inspired by molecular recognition and drug discovery processes, we developed a Web service mining tool, named PathExplorer, to discover potentially interesting biological pathways linking service models of biological processes. The tool uses an innovative approach to identify useful pathways based on graph-based hints and service-based simulation that verifies the user's hypotheses. Conclusion Web service modeling of biological processes allows the easy access and invocation of these processes on the Web. The Web service mining techniques described in this paper enable the discovery of biological pathways linking these process service models. The algorithms presented in this paper for automatically highlighting interesting subgraphs within an identified pathway network enable the user to formulate hypotheses, which can be tested using our simulation algorithm, also described in this paper. PMID:19796403
Data partitioning enables the use of standard SOAP Web Services in genome-scale workflows.
Sztromwasser, Pawel; Puntervoll, Pål; Petersen, Kjell
2011-07-26
Biological databases and computational biology tools are provided by research groups around the world, and made accessible on the Web. Combining these resources is a common practice in bioinformatics, but integration of heterogeneous and often distributed tools and datasets can be challenging. To date, this challenge has been commonly addressed in a pragmatic way, by tedious and error-prone scripting. Recently, however, a more reliable technique has been identified and proposed as the platform that would tie together bioinformatics resources: Web Services. In the last decade Web Services have spread widely in bioinformatics and earned the title of recommended technology. However, in the era of high-throughput experimentation, a major concern regarding Web Services is their ability to handle large-scale data traffic. We propose a stream-like communication pattern for standard SOAP Web Services that enables efficient flow of large data traffic between a workflow orchestrator and Web Services. We evaluated the data-partitioning strategy by comparing it with typical communication patterns on an example pipeline for genomic sequence annotation. The results show that data-partitioning lowers the resource demands of services and increases their throughput, which in consequence allows in-silico experiments to be executed at genome scale using standard SOAP Web Services and workflows. As a proof of principle we annotated an RNA-seq dataset using a plain BPEL workflow engine.
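The data-partitioning idea can be sketched independently of any particular SOAP stack: the orchestrator splits the input into fixed-size chunks and submits them one at a time, so no single call carries the whole dataset. The submit_chunk function below is a hypothetical stand-in for an actual SOAP client call.

```python
# Conceptual sketch of data partitioning for a large service invocation.
# submit_chunk() stands in for a real SOAP call and is purely hypothetical.
from typing import Iterator

CHUNK_SIZE = 500  # records per chunk; tuning depends on the service

def partition(records: list[str], size: int) -> Iterator[list[str]]:
    """Yield successive fixed-size partitions of the input records."""
    for start in range(0, len(records), size):
        yield records[start:start + size]

def submit_chunk(chunk: list[str]) -> str:
    raise NotImplementedError("stand-in for a SOAP web service invocation")

def annotate(sequences: list[str]) -> list[str]:
    # Stream chunks through the service and collect the partial results.
    return [submit_chunk(chunk) for chunk in partition(sequences, CHUNK_SIZE)]
```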
EnviroAtlas National Layers Master Web Service
This EnviroAtlas web service supports research and online mapping activities related to EnviroAtlas (https://www.epa.gov/enviroatlas). This web service includes layers depicting EnviroAtlas national metrics mapped at the 12-digit HUC within the conterminous United States. This dataset was produced by the US EPA to support research and online mapping activities related to EnviroAtlas. EnviroAtlas (https://www.epa.gov/enviroatlas) allows the user to interact with a web-based, easy-to-use, mapping application to view and analyze multiple ecosystem services for the contiguous United States. The dataset is available as downloadable data (https://edg.epa.gov/data/Public/ORD/EnviroAtlas) or as an EnviroAtlas map service. Additional descriptive information about each attribute in this dataset can be found in its associated EnviroAtlas Fact Sheet (https://www.epa.gov/enviroatlas/enviroatlas-fact-sheets).
MedlinePlus Milestones: 1998-present
... page links and information daily and also offers access to this full XML content through its Web ... search-based Web service that allows developers to access MedlinePlus health topic data in XML format. MedlinePlus ...
NASA Astrophysics Data System (ADS)
Walker, J. I.; Blodgett, D. L.; Suftin, I.; Kunicki, T.
2013-12-01
High-resolution data for use in environmental modeling are increasingly becoming available at broad spatial and temporal scales. Downscaled climate projections, remotely sensed landscape parameters, and land-use/land-cover projections are examples of datasets that may exceed an individual investigation's data management and analysis capacity. To allow projects on limited budgets to work with many of these datasets, the burden of working with them must be reduced. The approach being pursued at the U.S. Geological Survey Center for Integrated Data Analytics uses standard self-describing web services that allow machine-to-machine data access and manipulation. These techniques have been implemented and deployed in production-level server-based Web Processing Services that can be accessed from a web application or scripted workflow. Data publication techniques that allow machine interpretation of large collections of data have also been implemented for numerous datasets at U.S. Geological Survey data centers as well as partner agencies and academic institutions. Discovery of data services is accomplished using a method in which a machine-generated metadata record holds content, derived from the data's source web service, that is intended for human interpretation as well as machine interpretation. A distributed search application has been developed that demonstrates the utility of a decentralized search of data-owner metadata catalogs from multiple agencies. The integrated but decentralized system of metadata, data, and server-based processing capabilities will be presented. The design, utility, and value of these solutions will be illustrated with applied science examples and success stories. Datasets such as the EPA's Integrated Climate and Land Use Scenarios, USGS/NASA MODIS-derived land cover attributes, and downscaled climate projections from several sources are examples of data this system includes. These and other datasets have been published as standard, self-describing web services that provide the ability to inspect and subset the data. This presentation will demonstrate this file-to-web-service concept and how it can be used from script-based workflows or web applications.
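A scripted workflow might invoke such a Web Processing Service with the OWSLib library, along the lines of the sketch below; the endpoint URL, process identifier, and input names are assumptions for illustration only.

```python
# Hypothetical sketch of invoking a server-based WPS with OWSLib.
# Endpoint, process identifier, and input names are placeholders.
from owslib.wps import WebProcessingService, monitorExecution

wps = WebProcessingService("https://example.usgs.gov/wps", verbose=False)

# Discover the available geoprocessing operations.
for process in wps.processes:
    print(process.identifier, "-", process.title)

# Execute a (hypothetical) area-averaging process and wait for completion.
execution = wps.execute("gdp.areaAverage",
                        inputs=[("dataset", "prcp"), ("hucId", "070900")])
monitorExecution(execution)
print(execution.status)
```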
A new reference implementation of the PSICQUIC web service.
del-Toro, Noemi; Dumousseau, Marine; Orchard, Sandra; Jimenez, Rafael C; Galeota, Eugenia; Launay, Guillaume; Goll, Johannes; Breuer, Karin; Ono, Keiichiro; Salwinski, Lukasz; Hermjakob, Henning
2013-07-01
The Proteomics Standard Initiative Common QUery InterfaCe (PSICQUIC) specification was created by the Human Proteome Organization Proteomics Standards Initiative (HUPO-PSI) to enable computational access to molecular-interaction data resources by means of a standard Web Service and query language. Currently providing >150 million binary interaction evidences from 28 servers globally, the PSICQUIC interface allows the concurrent search of multiple molecular-interaction information resources using a single query. Here, we present an extension of the PSICQUIC specification (version 1.3), which has been released to be compliant with the enhanced standards in molecular interactions. The new release also includes a new reference implementation of the PSICQUIC server available to the data providers. It offers augmented web service capabilities and improves the user experience. PSICQUIC has been running for almost 5 years, with a user base growing from only 4 data providers to 28 (April 2013) allowing access to 151 310 109 binary interactions. The power of this web service is shown in PSICQUIC View web application, an example of how to simultaneously query, browse and download results from the different PSICQUIC servers. This application is free and open to all users with no login requirement (http://www.ebi.ac.uk/Tools/webservices/psicquic/view/main.xhtml).
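The single-query pattern described above can be exercised over REST with a few lines of Python; the provider URL below follows the published IntAct PSICQUIC pattern but should be treated as illustrative, since the registry lists the current endpoints.

```python
# Small example of the PSICQUIC usage pattern: query one provider over
# REST for interactions matching a gene name. URL treated as illustrative.
import requests

INTACT = ("http://www.ebi.ac.uk/Tools/webservices/psicquic/intact/"
          "webservices/current/search/query/")

# MIQL query for interactions involving BRCA2, returned as PSI-MITAB text.
response = requests.get(INTACT + "brca2", timeout=60)
for line in response.text.splitlines()[:5]:
    print(line)
```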
MALINA: a web service for visual analytics of human gut microbiota whole-genome metagenomic reads.
Tyakht, Alexander V; Popenko, Anna S; Belenikin, Maxim S; Altukhov, Ilya A; Pavlenko, Alexander V; Kostryukova, Elena S; Selezneva, Oksana V; Larin, Andrei K; Karpova, Irina Y; Alexeev, Dmitry G
2012-12-07
MALINA is a web service for bioinformatic analysis of whole-genome metagenomic data obtained from human gut microbiota sequencing. As input data, it accepts metagenomic reads from various sequencing technologies, including long reads (such as Sanger and 454 sequencing) and next-generation reads (including SOLiD and Illumina). It is, to the authors' knowledge, the first metagenomic web service capable of processing SOLiD color-space reads. The web service allows phylogenetic and functional profiling of metagenomic samples using coverage depth resulting from the alignment of the reads to the catalogue of reference sequences which are built into the pipeline and contain prevalent microbial genomes and genes of human gut microbiota. The obtained metagenomic composition vectors are processed by the statistical analysis and visualization module containing methods for clustering, dimension reduction and group comparison. Additionally, the MALINA database includes vectors of bacterial and functional composition for human gut microbiota samples from a large number of existing studies, allowing their comparative analysis together with user samples, namely datasets from the Russian Metagenome project, MetaHIT and the Human Microbiome Project (downloaded from http://hmpdacc.org). MALINA is made freely available on the web at http://malina.metagenome.ru. The website is implemented in JavaScript (using Ext JS), Microsoft .NET Framework, MS SQL and Python, with all major browsers supported.
Federal Register 2010, 2011, 2012, 2013, 2014
2012-10-05
... adopt the new QView service. QView is a web-based, front-end application, which provides a subscribing... orders and executions provided in the QView dashboard interface. The dashboard also allows a QView... trading activity.\\5\\ \\4\\ TradeInfo PSX is a web-based tool that, among other things, allows users access...
Enabling Real-time Water Decision Support Services Using Model as a Service
NASA Astrophysics Data System (ADS)
Zhao, T.; Minsker, B. S.; Lee, J. S.; Salas, F. R.; Maidment, D. R.; David, C. H.
2014-12-01
Through application of computational methods and an integrated information system, data and river modeling services can help researchers and decision makers more rapidly understand river conditions under alternative scenarios. To enable this capability, workflows (i.e., analysis and model steps) are created and published as Web services delivered through an internet browser, including model inputs, a published workflow service, and visualized outputs. The RAPID model, which is a river routing model developed at the University of Texas at Austin for parallel computation of river discharge, has been implemented as a workflow and published as a Web application. This allows non-technical users to remotely execute the model and visualize results as a service through a simple Web interface. The model service and Web application have been prototyped in the San Antonio and Guadalupe River Basins in Texas, with input from university and agency partners. In the future, optimization model workflows will be developed to link with the RAPID model workflow to provide real-time water allocation decision support services.
FRS exposes several REST services that allow developers to utilize a live feed of data from the FRS database. This web page is intended for a technical audience and describes the content and purpose of each service available.
A Smart Modeling Framework for Integrating BMI-enabled Models as Web Services
NASA Astrophysics Data System (ADS)
Jiang, P.; Elag, M.; Kumar, P.; Peckham, S. D.; Liu, R.; Marini, L.; Hsu, L.
2015-12-01
Service-oriented computing provides an opportunity to couple web service models using semantic web technology. Through this approach, models that are exposed as web services can be conserved in their own local environment, thus making it easy for modelers to maintain and update the models. In integrated modeling, the service-oriented loose-coupling approach requires (1) a set of models as web services, (2) model metadata describing the external features of a model (e.g., variable name, unit, computational grid, etc.) and (3) a model integration framework. We present the architecture of coupling web service models that are self-describing by utilizing a smart modeling framework. We expose models that are encapsulated with CSDMS (Community Surface Dynamics Modeling System) Basic Model Interfaces (BMI) as web services. The BMI-enabled models are self-describing, uncovering model metadata through BMI functions. After a BMI-enabled model is exposed as a service, a client can initialize, execute and retrieve the meta-information of the model by calling its BMI functions over the web. Furthermore, a revised version of EMELI (Peckham, 2014), an Experimental Modeling Environment for Linking and Interoperability, is chosen as the framework for coupling BMI-enabled web service models. EMELI allows users to combine a set of component models into a complex model by standardizing the model interface using BMI, as well as providing a set of utilities that smooth the integration process (e.g., temporal interpolation). We modify the original EMELI so that the revised modeling framework is able to initialize, execute and find the dependencies of the BMI-enabled web service models. Using the revised EMELI, an example will be presented of integrating a set of TopoFlow model components that are BMI-enabled and exposed as web services. Reference: Peckham, S.D. (2014) EMELI 1.0: An experimental smart modeling framework for automatic coupling of self-describing models, Proceedings of HIC 2014, 11th International Conf. on Hydroinformatics, New York, NY.
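As a conceptual sketch (not CSDMS code), calling a BMI-enabled model over the web might look like the following; the endpoint paths mirror standard BMI function names but are hypothetical, as is the service URL.

```python
# Conceptual sketch of driving a BMI-enabled model exposed as a web service.
# Endpoint paths mirror BMI function names but are hypothetical.
import requests

MODEL = "http://example.org/bmi/topoflow"  # placeholder service URL

# Initialize the remote model from a configuration file it can read.
requests.post(f"{MODEL}/initialize", json={"cfg_file": "topoflow.cfg"})

# Ask the model to describe itself through its BMI metadata functions.
names = requests.get(f"{MODEL}/get_output_var_names").json()
print("outputs:", names)

# Advance the model one time step and retrieve a variable.
requests.post(f"{MODEL}/update")
value = requests.get(f"{MODEL}/get_value", params={"name": names[0]}).json()
print(value)
```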
A Gateway for Phylogenetic Analysis Powered by Grid Computing Featuring GARLI 2.0
Bazinet, Adam L.; Zwickl, Derrick J.; Cummings, Michael P.
2014-01-01
We introduce molecularevolution.org, a publicly available gateway for high-throughput, maximum-likelihood phylogenetic analysis powered by grid computing. The gateway features a garli 2.0 web service that enables a user to quickly and easily submit thousands of maximum likelihood tree searches or bootstrap searches that are executed in parallel on distributed computing resources. The garli web service allows one to easily specify partitioned substitution models using a graphical interface, and it performs sophisticated post-processing of phylogenetic results. Although the garli web service has been used by the research community for over three years, here we formally announce the availability of the service, describe its capabilities, highlight new features and recent improvements, and provide details about how the grid system efficiently delivers high-quality phylogenetic results. [garli, gateway, grid computing, maximum likelihood, molecular evolution portal, phylogenetics, web service.] PMID:24789072
ERIC Educational Resources Information Center
McGladrey, Margaret; Noar, Seth; Crosby, Richard; Young, April; Webb, Elizabeth
2012-01-01
Background: This paper discusses the Rural Center for AIDS/STD Prevention's effort to develop a web-based service called Project CREATE that responds to a need for targeted health promotion materials expressed by directors of HIV/STD prevention services in predominately rural states. Purpose: Project CREATE allows users to select customized…
BioModels.net Web Services, a free and integrated toolkit for computational modelling software.
Li, Chen; Courtot, Mélanie; Le Novère, Nicolas; Laibe, Camille
2010-05-01
Exchanging and sharing scientific results are essential for researchers in the field of computational modelling. BioModels.net defines agreed-upon standards for model curation. A fundamental one, MIRIAM (Minimum Information Requested in the Annotation of Models), standardises the annotation and curation process of quantitative models in biology. To support this standard, MIRIAM Resources maintains a set of standard data types for annotating models, and provides services for manipulating these annotations. Furthermore, BioModels.net creates controlled vocabularies, such as SBO (Systems Biology Ontology), which strictly indexes, defines and links terms used in Systems Biology. Finally, BioModels Database provides a free, centralised, publicly accessible database for storing, searching and retrieving curated and annotated computational models. Each resource provides a web interface to submit, search, retrieve and display its data. In addition, the BioModels.net team provides a set of Web Services which allow the community to programmatically access the resources. A user is then able to perform remote queries, such as retrieving a model and resolving all its MIRIAM annotations, as well as getting the details of the associated SBO terms. These web services use established standards. Communications rely on SOAP (Simple Object Access Protocol) messages and the available queries are described in a WSDL (Web Services Description Language) file. Several libraries are provided in order to simplify the development of client software. BioModels.net Web Services take researchers one step further towards simulating and understanding the entirety of a biological system, by allowing them to retrieve biological models in their own tool, combine queries in workflows and efficiently analyse models.
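Since the queries are described in a WSDL file, a generic SOAP client can drive them; the sketch below uses zeep, and both the WSDL URL and the operation name are assumptions for illustration, since the project's own client libraries document the real interface.

```python
# Hypothetical sketch of programmatic access to BioModels via SOAP/WSDL.
# The WSDL URL and operation name are assumptions for illustration.
from zeep import Client

WSDL = "https://www.ebi.ac.uk/biomodels-main/services/BioModelsWebServices?wsdl"

client = Client(WSDL)  # available operations are described in the WSDL

# Retrieve a curated model in SBML by identifier (identifier illustrative).
sbml = client.service.getModelSBMLById("BIOMD0000000012")
print(sbml[:200])
```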
AMBIT RESTful web services: an implementation of the OpenTox application programming interface.
Jeliazkova, Nina; Jeliazkov, Vedrin
2011-05-16
The AMBIT web services package is one of several existing independent implementations of the OpenTox Application Programming Interface and is built according to the principles of the Representational State Transfer (REST) architecture. The Open Source Predictive Toxicology Framework, developed by the partners in the EC FP7 OpenTox project, aims at providing unified access to toxicity data and predictive models, as well as validation procedures. This is achieved by i) an information model, based on a common OWL-DL ontology; ii) links to related ontologies; iii) data and algorithms, available through a standardized REST web services interface, where every compound, data set or predictive method has a unique web address, used to retrieve its Resource Description Framework (RDF) representation, or initiate the associated calculations. The AMBIT web services package has been developed as an extension of AMBIT modules, adding the ability to create (Quantitative) Structure-Activity Relationship (QSAR) models and providing an OpenTox API compliant interface. The representation of data and processing resources in W3C Resource Description Framework facilitates integrating the resources as Linked Data. By uploading datasets with chemical structures and an arbitrary set of properties, they become automatically available online in several formats. The services provide unified interfaces to several descriptor calculation, machine learning and similarity searching algorithms, as well as to applicability domain and toxicity prediction models. All Toxtree modules for predicting the toxicological hazard of chemical compounds are also integrated within this package. The complexity and diversity of the processing is reduced to the simple paradigm "read data from a web address, perform processing, write to a web address". The online service allows predictions to be run easily, without installing any software, and datasets and models to be shared online. The downloadable web application allows researchers to set up an arbitrary number of service instances for specific purposes and at suitable locations. These services could be used as a distributed framework for processing of resource-intensive tasks and data sharing, or in a fully independent way, according to specific needs. The advantage of exposing the functionality via the OpenTox API is seamless interoperability, not only within a single web application, but also in a network of distributed services. Last but not least, the services provide a basis for building web mashups and end-user applications with friendly GUIs, as well as embedding the functionalities in existing workflow systems.
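The "read from a web address, process, write to a web address" paradigm can be sketched against a hypothetical OpenTox-style instance as below; the base URL and resource paths are placeholders, not a documented AMBIT deployment.

```python
# Sketch of the OpenTox REST paradigm against a hypothetical AMBIT instance.
# Base URL and resource paths are placeholders.
import requests

BASE = "http://example.org/ambit"  # placeholder AMBIT service instance

# Upload a dataset of chemical structures; the service returns its new URI.
with open("compounds.sdf", "rb") as sdf:
    dataset_uri = requests.post(f"{BASE}/dataset", files={"file": sdf}).text

# Retrieve the same resource as RDF, per the REST/Linked Data design.
rdf = requests.get(dataset_uri, headers={"Accept": "application/rdf+xml"})
print(rdf.text[:300])
```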
Job submission and management through web services: the experience with the CREAM service
NASA Astrophysics Data System (ADS)
Aiftimiei, C.; Andreetto, P.; Bertocco, S.; Fina, S. D.; Ronco, S. D.; Dorigo, A.; Gianelle, A.; Marzolla, M.; Mazzucato, M.; Sgaravatto, M.; Verlato, M.; Zangrando, L.; Corvo, M.; Miccio, V.; Sciaba, A.; Cesini, D.; Dongiovanni, D.; Grandi, C.
2008-07-01
Modern Grid middleware is built around components providing basic functionality, such as data storage, authentication, security, job management, resource monitoring and reservation. In this paper we describe the Computing Resource Execution and Management (CREAM) service. CREAM provides a Web service-based job execution and management capability for Grid systems; in particular, it is being used within the gLite middleware. CREAM exposes a Web service interface allowing conforming clients to submit and manage computational jobs to a Local Resource Management System. We developed a special component, called ICE (Interface to CREAM Environment), to integrate CREAM in gLite. ICE transfers job submissions and cancellations from the Workload Management System, allowing users to manage CREAM jobs from the gLite User Interface. This paper describes some recent studies aimed at assessing the performance and reliability of CREAM and ICE; those tests have been performed as part of the acceptance tests for integration of CREAM and ICE in gLite. We also discuss recent work towards enhancing CREAM with a BES- and JSDL-compliant interface.
US Geoscience Information Network, Web Services for Geoscience Information Discovery and Access
NASA Astrophysics Data System (ADS)
Richard, S.; Allison, L.; Clark, R.; Coleman, C.; Chen, G.
2012-04-01
The US Geoscience Information Network has developed metadata profiles for interoperable catalog services based on ISO 19139 and the OGC CSW 2.0.2. Currently, data services are being deployed for the US Dept. of Energy-funded National Geothermal Data System. These services utilize OGC Web Map Services, Web Feature Services, and THREDDS-served NetCDF for gridded datasets. Services and underlying datasets, along with a wide variety of other information and non-information resources, are registered in the catalog system. Metadata for registration is produced by various workflows, including harvest from OGC capabilities documents, Drupal-based web applications, and transformation from tabular compilations. Catalog search is implemented using the ESRI Geoportal open-source server. We are pursuing various client applications to demonstrate discovery and utilization of the data services. Currently operational applications include an ESRI ArcMap extension for catalog search and data acquisition from map services, and a catalog browse and search application built on OpenLayers and Django. We are developing use cases and requirements for other applications to utilize geothermal data services for resource exploration and evaluation.
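A client-side catalog search against such a CSW 2.0.2 endpoint can be sketched with OWSLib; the endpoint URL below is a placeholder, and the queryable property name is one common choice in ISO-profile catalogs.

```python
# Hypothetical sketch of searching an ISO 19139 / CSW 2.0.2 catalog.
# The endpoint URL is a placeholder.
from owslib.csw import CatalogueServiceWeb
from owslib.fes import PropertyIsLike

csw = CatalogueServiceWeb("http://example.org/geoportal/csw")

# Find records whose title mentions geothermal data.
query = PropertyIsLike("apiso:Title", "%geothermal%")
csw.getrecords2(constraints=[query], maxrecords=10)
for rec in csw.records.values():
    print(rec.title)
```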
NASA Astrophysics Data System (ADS)
Gray, A. J. G.; Gray, N.; Ounis, I.
2009-09-01
There are multiple vocabularies and thesauri within astronomy, of which the best known are the 1993 IAU Thesaurus and the keyword list maintained by A&A, ApJ and MNRAS. The IVOA has agreed on a standard for publishing vocabularies, based on the W3C SKOS standard, to allow greater automated interaction with them, in particular on the Web. This allows links with the Semantic Web and looks forward to richer applications using the technologies of that domain. Vocabulary-aware applications can benefit from improvements in both precision and recall when searching for bibliographic or science data, and lightweight intelligent filtering for services such as VOEvent streams. In this paper we present two applications, the Vocabulary Explorer and its companion the Mapping Editor, which have been developed to support the use of vocabularies in the Virtual Observatory. These combine Semantic Web and Information Retrieval technologies to illustrate the way in which formal vocabularies might be used in a practical application, provide an online service which will allow astronomers to explore and relate existing vocabularies, and provide a service which translates free text user queries into vocabulary terms.
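To illustrate the kind of automated interaction a SKOS-published vocabulary enables, the sketch below loads a SKOS document with rdflib and lists concept labels; the vocabulary URL is a placeholder, and any SKOS RDF/XML document would work the same way.

```python
# Small example of machine interaction with a SKOS vocabulary via rdflib.
# The vocabulary URL is a placeholder.
from rdflib import Graph
from rdflib.namespace import SKOS

g = Graph()
g.parse("http://example.org/vocab/iau93.rdf", format="xml")  # placeholder

# List concepts with their preferred labels, e.g. for query expansion.
for concept, label in g.subject_objects(SKOS.prefLabel):
    print(concept, "->", label)
```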
Web-GIS visualisation of permafrost-related Remote Sensing products for ESA GlobPermafrost
NASA Astrophysics Data System (ADS)
Haas, A.; Heim, B.; Schaefer-Neth, C.; Laboor, S.; Nitze, I.; Grosse, G.; Bartsch, A.; Kaab, A.; Strozzi, T.; Wiesmann, A.; Seifert, F. M.
2016-12-01
The ESA GlobPermafrost project (www.globpermafrost.info) provides a remote sensing service for permafrost research and applications. The service comprises data product generation for various sites and regions as well as specific infrastructure allowing overview of and access to datasets. Based on an online user survey conducted within the project, the user community extensively applies GIS software to handle remote sensing-derived datasets and requires preview functionalities before accessing them. In response, we are developing the Permafrost Information System PerSys, which is conceptualized as an open-access geospatial data dissemination and visualization portal. PerSys will allow visualisation of GlobPermafrost raster and vector products such as land cover classifications, Landsat multispectral index trend datasets, lake and wetland extents, InSAR-based land surface deformation maps, rock glacier velocity fields, spatially distributed permafrost model outputs, and land surface temperature datasets. The datasets will be published as WebGIS services relying on OGC-standardized Web Mapping Service (WMS) and Web Feature Service (WFS) technologies for data display and visualization. The WebGIS environment will be hosted at the AWI computing centre, where a geodata infrastructure has been implemented comprising ArcGIS for Server 10.4, PostgreSQL 9.2 and a browser-driven data viewer based on Leaflet (http://leafletjs.com). Independently, we will provide an 'Access-Restricted Data Dissemination Service', which will be available to registered users for testing frequently updated versions of project datasets. PerSys will become a core project of the Arctic Permafrost Geospatial Centre (APGC) within the ERC-funded PETA-CARB project (www.awi.de/petacarb). The APGC Data Catalogue will contain all final products of GlobPermafrost, allow in-depth dataset search via keywords, spatial and temporal coverage, data type, etc., and will provide DOI-based links to the datasets archived in the long-term, open-access PANGAEA data repository.
NASA Astrophysics Data System (ADS)
Titov, A. G.; Okladnikov, I. G.; Gordov, E. P.
2017-11-01
The use of large geospatial datasets in climate change studies requires the development of a set of Spatial Data Infrastructure (SDI) elements, including geoprocessing and cartographical visualization web services. This paper presents the architecture of a geospatial OGC web service system as an integral part of a general virtual research environment (VRE) architecture for statistical processing and visualization of meteorological and climatic data. The architecture is a set of interconnected standalone SDI nodes with corresponding data storage systems. Each node runs specialized software, such as a geoportal, cartographical web services (WMS/WFS), a metadata catalog, and a MySQL database of technical metadata describing the geospatial datasets available on the node. It also contains geospatial data processing services (WPS) based on a modular computing backend that implements the statistical processing functionality and thus provides analysis of large datasets, with results available for visualization and export into files of standard formats (XML, binary, etc.). Several cartographical web services have been developed in a prototype of the system to provide capabilities to work with raster and vector geospatial data based on OGC web services. The distributed architecture presented allows easy addition of new nodes, computing and data storage systems, and provides a solid computational infrastructure for regional climate change studies based on modern Web and GIS technologies.
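A request to one of the cartographical WMS services described above can be sketched with standard OGC WMS 1.3.0 key-value parameters; the endpoint and layer name below are placeholders.

```python
# Minimal sketch of an OGC WMS 1.3.0 GetMap request against an SDI node.
# Endpoint and layer name are placeholders.
import requests

WMS = "http://example.org/sdi-node/wms"  # placeholder node endpoint

params = {
    "SERVICE": "WMS", "VERSION": "1.3.0", "REQUEST": "GetMap",
    "LAYERS": "air_temperature_mean",     # illustrative layer name
    "CRS": "EPSG:4326", "BBOX": "40,60,80,140",
    "WIDTH": 800, "HEIGHT": 400, "FORMAT": "image/png",
}
png = requests.get(WMS, params=params, timeout=60)
with open("map.png", "wb") as f:
    f.write(png.content)
```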
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lurie, Gordon
2007-01-02
The cell phone software allows any Java-enabled cell phone to view sensor and meteorological data over a secure internet connection to the CB-EMIS Web Service. Users with appropriate privileges can monitor the state of the sensors and perform simple maintenance tasks remotely. All sensitive data are downloaded from the web service rather than stored on the phone, protecting them in the event a cell phone is lost.
Taverna: a tool for building and running workflows of services
Hull, Duncan; Wolstencroft, Katy; Stevens, Robert; Goble, Carole; Pocock, Mathew R.; Li, Peter; Oinn, Tom
2006-01-01
Taverna is an application that eases the use and integration of the growing number of molecular biology tools and databases available on the web, especially web services. It allows bioinformaticians to construct workflows or pipelines of services to perform a range of different analyses, such as sequence analysis and genome annotation. These high-level workflows can integrate many different resources into a single analysis. Taverna is available freely under the terms of the GNU Lesser General Public License (LGPL) from . PMID:16845108
EnviroAtlas - NHDPlus V2 Hydrologic Unit Boundaries Web Service - Conterminous United States
This EnviroAtlas web service contains layers depicting hydrologic unit boundary layers and labels for the Subregion level (4-digit HUCs), Subbasin level (8-digit HUCs), and Subwatershed level (12-digit HUCs) for the conterminous United States. EnviroAtlas (https://www.epa.gov/enviroatlas) allows the user to interact with a web-based, easy-to-use, mapping application to view and analyze multiple ecosystem services for the contiguous United States. The dataset is available as downloadable data (https://edg.epa.gov/data/Public/ORD/EnviroAtlas) or as an EnviroAtlas map service. Additional descriptive information about this dataset can be found in its associated EnviroAtlas Fact Sheet (https://www.epa.gov/enviroatlas/enviroatlas-fact-sheets).
Generic Service Integration in Adaptive Learning Experiences Using IMS Learning Design
ERIC Educational Resources Information Center
de-la-Fuente-Valentin, Luis; Pardo, Abelardo; Kloos, Carlos Delgado
2011-01-01
IMS Learning Design is a specification to capture the orchestration taking place in a learning scenario. This paper presents an extension called Generic Service Integration. This paradigm allows a bidirectional communication between the course engine in charge of the orchestration and conventional Web 2.0 tools. This communication allows the…
EarthServer: Use of Rasdaman as a data store for use in visualisation of complex EO data
NASA Astrophysics Data System (ADS)
Clements, Oliver; Walker, Peter; Grant, Mike
2013-04-01
The European Commission FP7 project EarthServer is establishing open access and ad-hoc analytics on extreme-size Earth Science data, based on and extending cutting-edge Array Database technology. EarthServer is built around the Rasdaman Raster Data Manager, which extends standard relational database systems with the ability to store and retrieve multi-dimensional raster data of unlimited size through an SQL-style query language. Rasdaman facilitates visualisation of data by providing several Open Geospatial Consortium (OGC) standard interfaces through its web services wrapper, Petascope. These include the well-established standards Web Coverage Service (WCS) and Web Map Service (WMS), as well as the emerging standard Web Coverage Processing Service (WCPS). The WCPS standard allows the running of ad-hoc queries on the data stored within Rasdaman, creating an infrastructure where users are not restricted by bandwidth when manipulating or querying huge datasets. Here we will show that the use of EarthServer technologies and infrastructure allows access and visualisation of massive-scale data through a web client with only marginal bandwidth use, as opposed to the current mechanism of copying huge amounts of data to create visualisations locally. For example, if a user wanted to generate a plot of global average chlorophyll for a complete decade time series, they would only have to download the result instead of terabytes of data. Firstly, we will present a brief overview of the capabilities of Rasdaman and the WCPS query language to introduce the ways in which it is used in a visualisation tool chain. We will show that there are several ways in which WCPS can be utilised to create both standard and novel web-based visualisations. An example of a standard visualisation is the production of traditional 2d plots, allowing users to plot data products easily. However, the query language allows the creation of novel/custom products, which can then immediately be plotted with the same system. For more complex multi-spectral data, WCPS allows the user to explore novel combinations of bands in standard band-ratio algorithms through a web browser with dynamic updating of the resultant image. To visualise very large datasets, Rasdaman has the capability to dynamically scale a dataset or query result so that it can be appraised quickly for use in later unscaled queries. All of these techniques are accessible through a web-based GIS interface, increasing the number of potential users of the system. Lastly, we will show the advances in dynamic web-based 3D visualisations being explored within the EarthServer project. By utilising the emerging declarative 3D web standard X3DOM as a tool to visualise the results of WCPS queries, we introduce several possible benefits, including quick appraisal of data for outliers or anomalous data points and visualisation of the uncertainty of data alongside the actual data values.
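The chlorophyll example above can be made concrete with a sketch of the WCPS idea: the server computes the aggregate and only the small result crosses the network. The endpoint, coverage name, and the exact query dialect below are assumptions for illustration.

```python
# Illustrative sketch of a server-side WCPS aggregation: only the result
# crosses the network. Endpoint, coverage name, and query dialect are
# assumptions for illustration.
import requests

ENDPOINT = "http://example.org/rasdaman/ows"  # placeholder petascope endpoint

# Ask the server to average a decade of chlorophyll values server-side.
wcps_query = 'for c in (CHL_DECADE) return encode(avg(c), "csv")'
result = requests.post(ENDPOINT, data={"query": wcps_query}, timeout=120)
print(result.text)  # a single number, not terabytes of source data
```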
Earth Science Mining Web Services
NASA Astrophysics Data System (ADS)
Pham, L. B.; Lynnes, C. S.; Hegde, M.; Graves, S.; Ramachandran, R.; Maskey, M.; Keiser, K.
2008-12-01
To allow scientists further capabilities in the area of data mining and web services, the Goddard Earth Sciences Data and Information Services Center (GES DISC) and researchers at the University of Alabama in Huntsville (UAH) have developed a system to mine data at the source without the need for network transfers. The system has been constructed by linking together several pre-existing technologies: the Simple Scalable Script-based Science Processor for Measurements (S4PM), a processing engine at the GES DISC; the Algorithm Development and Mining (ADaM) system, a data mining toolkit from UAH that can be configured in a variety of ways to create customized mining processes; ActiveBPEL, a workflow execution engine based on BPEL (Business Process Execution Language); XBaya, a graphical workflow composer; and the EOS Clearinghouse (ECHO). XBaya is used to construct an analysis workflow at UAH using ADaM components, which are also installed remotely at the GES DISC, wrapped as Web Services. The S4PM processing engine searches ECHO for data using space-time criteria, staging them to cache and allowing the ActiveBPEL engine to remotely orchestrate the processing workflow within S4PM. As mining is completed, the output is placed in an FTP holding area for the end user. The goals are to give users control over the data they want to process, while mining data at the data source using the server's resources rather than transferring the full volume over the internet. These diverse technologies have been infused into a functioning, distributed system with only minor changes to the underlying technologies. The key to this infusion is the loosely coupled, Web-Services-based architecture: all of the participating components are accessible (one way or another) through SOAP (Simple Object Access Protocol)-based Web Services.
Oceans 2.0: a Data Management Infrastructure as a Platform
NASA Astrophysics Data System (ADS)
Pirenne, B.; Guillemot, E.
2012-04-01
The Data Management and Archiving System (DMAS), serving the needs of a number of undersea observing networks such as VENUS and NEPTUNE Canada, was conceived from the beginning as a Service-Oriented Infrastructure. Its core functional elements (data acquisition, transport, archiving, retrieval and processing) can interact with the outside world using Web Services. Those Web Services can be exploited by a variety of higher-level applications. Over the years, DMAS has developed Oceans 2.0: an environment where these techniques are implemented. The environment thereby becomes a platform in that it allows for easy addition of new and advanced features that build upon the tools at the core of the system. The applications that have been developed include: data search and retrieval, including options such as data product generation, data decimation or averaging; dynamic infrastructure description (search all observatory metadata) and visualization; and data visualization, including dynamic scalar data plots and integrated fast video segment search and viewing. Building upon these basic applications are new concepts, coming from the Web 2.0 world, that DMAS has added; they allow people equipped only with a web browser to collaborate and contribute their findings or work results to the wider community. Examples include: addition of metadata tags to any part of the infrastructure or to any data item (annotations); the ability to edit and execute, share and distribute Matlab code on-line, from a simple web browser, with specific calls within the code to access data; the ability to interactively and graphically build pipeline processing jobs that can be executed on the cloud; web-based, interactive instrument control tools that allow users to truly share the use of the instruments and communicate with each other; and, last but not least, a public tool in the form of a game that crowd-sources the inventory of the underwater video archive content, thereby adding tremendous amounts of metadata. Beyond those tools, which represent the functionality presently available to users, a number of the Web Services dedicated to data access are being exposed for anyone to use. This allows not only for ad hoc data access by individuals who need non-interactive access, but will foster the development of new applications in a variety of areas.
OneGeology Web Services and Portal as a global geological SDI - latest standards and technology
NASA Astrophysics Data System (ADS)
Duffy, Tim; Tellez-Arenas, Agnes
2014-05-01
The global coverage of OneGeology Web Services (www.onegeology.org and portal.onegeology.org) achieved since 2007 from the 120 participating geological surveys will be reviewed and issues arising discussed. Recent enhancements to the OneGeology Web Services capabilities will be covered, including a new up-to-five-star service accreditation scheme utilising the ISO/OGC Web Mapping Service standard version 1.3, core ISO 19115 metadata additions, and version 2.0 Web Feature Services (WFS) serving the new IUGS-CGI GeoSciML V3.2 geological web data exchange language standard (http://www.geosciml.org/) with its associated 30+ IUGS-CGI vocabularies (http://resource.geosciml.org/ and http://srvgeosciml.brgm.fr/eXist2010/brgm/client.html). Use of the CGI SimpleLithology and timescale dictionaries now allows those who wish to do so to offer data harmonisation queries against their GeoSciML 3.2 based Web Feature Services and their GeoSciML_Portrayal V2.0.1 (http://www.geosciml.org/) Web Map Services in the OneGeology portal (http://portal.onegeology.org). Contributing to OneGeology involves offering to serve, ideally, 1:1,000,000 scale geological data (in practice any scale is now warmly welcomed) as an OGC (Open Geospatial Consortium) standard WMS (Web Mapping Service) from an available WWW server. This may be hosted either within the geological survey or at a neighbouring, regional or other institution that offers to serve that data for them, i.e. offers to help technically by providing the web-serving IT infrastructure as a 'buddy'. OneGeology is a standards-focussed Spatial Data Infrastructure (SDI) and works to ensure that these standards work together; it is now possible for European geological surveys to register their INSPIRE web services within the OneGeology SDI (e.g. see http://www.geosciml.org/geosciml/3.2/documentation/cookbook/INSPIRE_GeoSciML_Cookbook%20_1.0.pdf). The OneGeology portal (http://portal.onegeology.org) is the first port of call for anyone wishing to discover the availability of global geological web services and has new functionality to view and use such services, including multiple projection support. KEYWORDS: OneGeology; GeoSciML V 3.2; Data exchange; Portal; INSPIRE; Standards; OGC; Interoperability; GeoScience information; WMS; WFS; Cookbook.
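For illustration, a OneGeology-style WMS 1.3.0 endpoint might be consumed as follows using OWSLib; the service URL is a placeholder and the layer name is only an example of the naming convention, so both should be taken from the OneGeology portal in practice.

```python
# Sketch of consuming a OneGeology-registered WMS 1.3.0 endpoint with OWSLib.
# The service URL and layer name below are placeholders; real endpoints are
# discovered through the OneGeology portal (portal.onegeology.org).
from owslib.wms import WebMapService

wms = WebMapService("http://example-survey.org/geoserver/ows", version="1.3.0")
print(list(wms.contents))                      # advertised geological layers

img = wms.getmap(layers=["GBR_BGS_625k_BLT"],  # example layer name
                 srs="EPSG:4326",
                 bbox=(-8.0, 49.0, 2.0, 61.0),
                 size=(600, 720),
                 format="image/png")
with open("geology.png", "wb") as f:
    f.write(img.read())
```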
Global Federation of Data Services in Seismology: Extending the Concept to Interdisciplinary Science
NASA Astrophysics Data System (ADS)
Ahern, Tim; Trabant, Chad; Stults, Mike; VanFossen, Mick
2016-04-01
The International Federation of Digital Seismograph Networks (FDSN) sets international standards, formats, and access protocols for global seismology. Recently, the availability of an FDSN standard for web services has enabled the development of a federated model of data access. With a growing number of internationally distributed data centers supporting compatible web services, the task of federation is now fully realizable. The utility of this approach is already starting to bear fruit in seismology. This presentation will highlight the advances the seismological community has made in the past year towards federated access to seismological data, including waveforms, earthquake event catalogs, and metadata describing seismic stations. It will include a discussion of an IRIS Federator as well as an emerging effort to develop an FDSN Federator that will allow seamless access to seismological information across multiple FDSN data centers. As part of the NSF EarthCube initiative as well as the US-European data coordination project (COOPEUS), IRIS and several partners, collectively called GeoWS, have been extending the concept of standard web services to other domains. Our primary partners include Lamont-Doherty Earth Observatory (marine geophysics), Caltech (tectonic plate reconstructions), SDSC (hydrology), UNAVCO (geodesy), and Unidata (atmospheric sciences). Additionally, IRIS is working with partners at NOAA's National Centers for Environmental Information (NCEI), NEON, UTEP, WOVOdat, INTERMAGNET, the Global Geodynamics Program, and the Ocean Observatory Initiative (OOI) to develop web services for those domains. The ultimate goal is to allow discovery, access, and utilization of cross-domain data sources. One of the significant outcomes of this effort is the development of a simple text and metadata representation for tabular data called GeoCSV, which allows straightforward interpretation of information from multiple domains by non-domain experts.
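Because the FDSN web service interface is standardized, a federated client only needs to vary the base URL between data centers. A minimal sketch of one such station-metadata query (the network and station codes are arbitrary examples):

```python
# Sketch of an FDSN-standard web service query; since the interface is
# standardized, only the base URL changes between federated data centers.
import requests

r = requests.get("https://service.iris.edu/fdsnws/station/1/query",
                 params={"network": "IU", "station": "ANMO",
                         "level": "channel", "format": "text"})
print(r.text[:400])  # pipe-delimited station/channel metadata
```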
Knowledge-driven enhancements for task composition in bioinformatics.
Sutherland, Karen; McLeod, Kenneth; Ferguson, Gus; Burger, Albert
2009-10-01
A key application area of semantic technologies is the fast-developing field of bioinformatics. Sealife was a project within this field with the aim of creating semantics-based web browsing capabilities for the Life Sciences. This includes meaningfully linking significant terms from the text of a web page to executable web services. It also involves the semantic mark-up of biological terms, linking them to biomedical ontologies, then discovering and executing services based on terms that interest the user. A system was produced which allows a user to identify terms of interest on a web page and subsequently connects these to a choice of web services which can make use of these inputs. Elements of Artificial Intelligence Planning build on this to present a choice of higher level goals, which can then be broken down to construct a workflow. An Argumentation System was implemented to evaluate the results produced by three different gene expression databases. An evaluation of these modules was carried out on users from a variety of backgrounds. Users with little knowledge of web services were able to achieve tasks that used several services in much less time than they would have taken to do this manually. The Argumentation System was also considered a useful resource and feedback was collected on the best way to present results. Overall the system represents a move forward in helping users to both construct workflows and analyse results by incorporating specific domain knowledge into the software. It also provides a mechanism by which web pages can be linked to web services. However, this work covers a specific domain and much co-ordinated effort is needed to make all web services available for use in such a way, i.e. the integration of underlying knowledge is a difficult but essential task.
NASA Astrophysics Data System (ADS)
Mantas, Vasco M.; Pereira, A. J. S. C.; Liu, Zhong
2013-12-01
A project was devised to develop a set of freely available applications and web services that can (1) simplify access from mobile devices to TOVAS data and (2) support the development of new datasets through data repackaging and mash-up. The bottom-up approach enables the multiplication of new services, often of limited direct interest to the organizations that produce the original, global datasets, but significant to small, local users. Through this multiplication of services, the development cost is transferred to the intermediate or end users and the entire process is made more efficient, even allowing new players to use the data in innovative ways.
Realising the Uncertainty Enabled Model Web
NASA Astrophysics Data System (ADS)
Cornford, D.; Bastin, L.; Pebesma, E. J.; Williams, M.; Stasch, C.; Jones, R.; Gerharz, L.
2012-12-01
The FP7 funded UncertWeb project aims to create the "uncertainty enabled model web". The central concept here is that geospatial models and data resources are exposed via standard web service interfaces, such as the Open Geospatial Consortium (OGC) suite of encodings and interface standards, allowing the creation of complex workflows combining both data and models. The focus of UncertWeb is on the issue of managing uncertainty in such workflows, and providing the standards, architecture, tools and software support necessary to realise the "uncertainty enabled model web". In this paper we summarise the developments in the first two years of UncertWeb, illustrating several key points with examples taken from the use case requirements that motivate the project. Firstly we address the issue of encoding specifications. We explain the usage of UncertML 2.0, a flexible encoding for representing uncertainty based on a probabilistic approach. This is designed to be used within existing standards such as Observations and Measurements (O&M) and data quality elements of ISO19115 / 19139 (geographic information metadata and encoding specifications) as well as more broadly outside the OGC domain. We show profiles of O&M that have been developed within UncertWeb and how UncertML 2.0 is used within these. We also show encodings based on NetCDF and discuss possible future directions for encodings in JSON. We then discuss the issues of workflow construction, considering discovery of resources (both data and models). We discuss why a brokering approach to service composition is necessary in a world where the web service interfaces remain relatively heterogeneous, including many non-OGC approaches, in particular the more mainstream SOAP and WSDL approaches. We discuss the trade-offs between delegating uncertainty management functions to the service interfaces themselves and integrating the functions in the workflow management system. We describe two utility services to address conversion between uncertainty types, and between the spatial / temporal support of service inputs / outputs. Finally we describe the tools being generated within the UncertWeb project, considering three main aspects: i) Elicitation of uncertainties on model inputs. We are developing tools to enable domain experts to provide judgements about input uncertainties from UncertWeb model components (e.g. parameters in meteorological models) which allow panels of experts to engage in the process and reach a consensus view on the current knowledge / beliefs about that parameter or variable. We are developing systems for continuous and categorical variables as well as stationary spatial fields. ii) Visualisation of the resulting uncertain outputs from the end of the workflow, but also at intermediate steps. At this point we have prototype implementations driven by the requirements from the use cases that motivate UncertWeb. iii) Sensitivity and uncertainty analysis on model outputs. Here we show the design of the overall system we are developing, including the deployment of an emulator framework to allow computationally efficient approaches. We conclude with a summary of the open issues and remaining challenges we are facing in UncertWeb, and provide a brief overview of how we plan to tackle these.
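As a purely illustrative aside on the JSON direction mentioned above, an UncertML-style probabilistic result might serialize along these lines; the key names are assumptions for illustration, not the project's actual JSON profile:

```python
# Purely illustrative sketch of how an UncertML-style uncertainty object
# might look if serialized to JSON; the key names here are assumptions,
# not the UncertWeb project's documented JSON encoding.
import json

observation = {
    "phenomenon": "air_temperature",
    "uom": "degC",
    "uncertainty": {                     # probabilistic representation
        "type": "NormalDistribution",
        "mean": 18.2,
        "variance": 0.64,
    },
}
print(json.dumps(observation, indent=2))
```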
Exploring JavaScript and ROOT technologies to create Web-based ATLAS analysis and monitoring tools
NASA Astrophysics Data System (ADS)
Sánchez Pineda, A.
2015-12-01
We explore the potential of current web applications to create online interfaces that allow the visualization, interaction, and real cut-based physics analysis and monitoring of processes through a web browser. The project consists of the initial development of web-based and cloud computing services to allow students and researchers to perform fast and very useful cut-based analyses in a browser, reading and using real data and official Monte Carlo simulations stored in ATLAS computing facilities. Several tools are considered: ROOT, JavaScript and HTML. Our study case is the current cut-based H → ZZ → llqq analysis of the ATLAS experiment. Preliminary but satisfactory results have been obtained online.
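For illustration only, the same cut-based selection could be reproduced offline with PyROOT; the file, tree, and branch names below are hypothetical stand-ins, not the ATLAS analysis code:

```python
# Offline equivalent of an in-browser cut-based selection, sketched with
# PyROOT; file, tree and branch names are hypothetical stand-ins for
# ATLAS H -> ZZ -> llqq analysis inputs.
import ROOT

f = ROOT.TFile.Open("hzz_llqq_sample.root")   # hypothetical input file
tree = f.Get("events")                        # hypothetical TTree name

h = ROOT.TH1F("m_llqq", "Invariant mass;m_{llqq} [GeV];Events", 60, 0, 600)
# Apply simple kinematic cuts, as a browser GUI would via sliders.
tree.Draw("m_llqq >> m_llqq", "lep1_pt > 25 && lep2_pt > 20")
print("selected entries:", h.GetEntries())
```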
NASA Astrophysics Data System (ADS)
Tisdale, M.
2017-12-01
NASA's Atmospheric Science Data Center (ASDC) is operationally using the Esri ArcGIS Platform to improve data discoverability, accessibility, and interoperability to meet diversifying user requirements from government, private, public, and academic communities. The ASDC is actively working to provide its mission-essential datasets as ArcGIS Image Services, Open Geospatial Consortium (OGC) Web Mapping Services (WMS), and OGC Web Coverage Services (WCS) while leveraging the ArcGIS multidimensional mosaic dataset structure. Science teams at ASDC are utilizing these services through the development of applications using the Web AppBuilder for ArcGIS and the ArcGIS API for JavaScript. These services provide greater exposure of ASDC data holdings to the GIS community and allow for broader sharing and distribution to various end users. These capabilities provide interactive visualization tools and improved geospatial analytical tools for a mission-critical understanding of the earth's radiation budget, clouds, aerosols, and tropospheric chemistry. The presentation will cover how the ASDC is developing geospatial web services and applications to improve data discoverability, accessibility, and interoperability.
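As a sketch of how such an ArcGIS Image Service can be consumed programmatically, the request below uses the standard ArcGIS REST exportImage operation; the host and service path are placeholders, not ASDC's actual service names:

```python
# Sketch of pulling a rendered view from an ArcGIS Image Service via the
# standard ArcGIS REST 'exportImage' operation; host and service path are
# placeholders.
import requests

url = ("https://gis.example.nasa.gov/arcgis/rest/services/"
       "CERES_Radiation/ImageServer/exportImage")
r = requests.get(url, params={
    "bbox": "-180,-90,180,90",
    "size": "720,360",
    "format": "png",
    "f": "image",          # return the image itself rather than JSON
})
open("radiation.png", "wb").write(r.content)
```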
NASA Astrophysics Data System (ADS)
Morton, J. J.; Ferrini, V. L.
2015-12-01
The Marine Geoscience Data System (MGDS, www.marine-geo.org) operates an interactive digital data repository and metadata catalog that provides access to a variety of marine geology and geophysical data from throughout the global oceans. Its Marine-Geo Digital Library includes common marine geophysical data types and supporting data and metadata, as well as complementary long-tail data. The Digital Library also includes community data collections and custom data portals for the GeoPRISMS, MARGINS and Ridge2000 programs, for active source reflection data (Academic Seismic Portal), and for marine data acquired by the US Antarctic Program (Antarctic and Southern Ocean Data Portal). Ensuring that these data are discoverable not only through our own interfaces but also through standards-compliant web services is critical for enabling investigators to find data of interest. Over the past two years, MGDS has developed several new RESTful web services that enable programmatic access to metadata and data holdings. These web services are compliant with the EarthCube GeoWS Building Blocks specifications and are currently used to drive our own user interfaces. New web applications have also been deployed to provide a more intuitive user experience for searching, accessing and browsing metadata and data. Our new map-based search interface combines components of the Google Maps API with our web services for dynamic searching and exploration of geospatially constrained data sets. Direct introspection of nearly all data formats for the hundreds of thousands of data files curated in the Marine-Geo Digital Library has provided precise geographic bounds, which allow geographic searches to an extent not previously possible. All MGDS map interfaces utilize the web services of the Global Multi-Resolution Topography (GMRT) synthesis for displaying global basemap imagery and for dynamically providing depth values at the cursor location.
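A hypothetical sketch of the RESTful access pattern (the endpoint path and parameter names below are illustrative assumptions, not documented MGDS routes):

```python
# Sketch of a GeoWS-style RESTful query against a catalog like MGDS; the
# endpoint path and parameter names are assumptions for illustration.
import requests

r = requests.get("https://www.marine-geo.org/services/search",  # hypothetical
                 params={"data_type": "bathymetry",
                         "minlat": 9.0, "maxlat": 11.0,
                         "minlon": -104.5, "maxlon": -103.5,
                         "format": "json"})
for record in r.json().get("results", []):       # assumed response shape
    print(record.get("title"), record.get("download_url"))
```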
Supporting in- and off-Hospital Patient Management Using a Web-based Integrated Software Platform.
Spyropoulos, Basile; Botsivali, Maria; Tzavaras, Aris; Pierros, Vasileios
2015-01-01
In this paper, a Web-based software platform appropriately designed to support the continuity of health care information and management for both in-hospital and out-of-hospital care is presented. The system offers additional features, such as the formation of continuity-of-care records and the transmission of referral letters via a semantically annotated web service. The platform's Web orientation provides significant advantages, allowing for easily accomplished remote access.
Browsing the World Wide Web from behind a firewall
DOE Office of Scientific and Technical Information (OSTI.GOV)
Simons, R.W.
1995-02-01
The World Wide Web provides a unified method of access to various information services on the Internet via a variety of protocols. Mosaic and other browsers give users a graphical interface to the Web that is easier to use and more visually pleasing than any other common Internet information service today. The availability of information via the Web and the number of users accessing it have both grown rapidly in the last year. The interest and investment of commercial firms in this technology suggest that in the near future, access to the Web may become as necessary to doing business as a telephone. This is problematical for organizations that use firewalls to protect their internal networks from the Internet. Allowing all the protocols and types of information found in the Web to pass their firewall will certainly increase the risk of attack by hackers on the Internet. But not allowing access to the Web could be even more dangerous, as frustrated users of the internal network are either unable to do their jobs, or find creative new ways to get around the firewall. The solution to this dilemma adopted at Sandia National Laboratories is described. Discussion also covers risks of accessing the Web, design alternatives considered, and trade-offs used to find the proper balance between access and protection.
Web Services and Data Enhancements at the Northern California Earthquake Data Center
NASA Astrophysics Data System (ADS)
Neuhauser, D. S.; Zuzlewski, S.; Lombard, P. N.; Allen, R. M.
2013-12-01
The Northern California Earthquake Data Center (NCEDC) provides data archive and distribution services for seismological and geophysical data sets that encompass northern California. The NCEDC is enhancing its ability to deliver rapid information through Web Services. NCEDC Web Services use well-established web server and client protocols and REST software architecture to allow users to easily make queries using web browsers or simple program interfaces and to receive the requested data in real time rather than through batch or email-based requests. Data are returned to the user in the appropriate format, such as XML, RESP, simple text, or MiniSEED, depending on the service and selected output format. The NCEDC offers the following web services that are compliant with the International Federation of Digital Seismograph Networks (FDSN) web services specifications: (1) fdsn-dataselect: time series data delivered in MiniSEED format, (2) fdsn-station: station and channel metadata and time series availability delivered in StationXML format, (3) fdsn-event: earthquake event information delivered in QuakeML format. In addition, the NCEDC offers the following IRIS-compatible web services: (1) sacpz: channel gains, poles, and zeros in SAC format, (2) resp: channel response information in RESP format, (3) dataless: station and channel metadata in Dataless SEED format. The NCEDC is also developing a web service to deliver time series from pre-assembled event waveform gathers. The NCEDC has waveform gathers for ~750,000 northern and central California events from 1984 to the present, many of which were created by the USGS NCSN prior to the establishment of the joint NCSS (Northern California Seismic System). We are currently adding waveforms to these older event gathers with time series from the UCB networks and other networks with waveforms archived at the NCEDC, and ensuring that each channel in the event gathers has the highest-quality waveform from the archive.
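Since these services follow the FDSN specifications, generic FDSN clients work against the NCEDC unchanged; for example with ObsPy, which ships "NCEDC" as a built-in provider key:

```python
# Because the NCEDC services follow the FDSN specifications, a generic FDSN
# client such as ObsPy's can use them directly ("NCEDC" is one of ObsPy's
# built-in provider keys).
from obspy import UTCDateTime
from obspy.clients.fdsn import Client

client = Client("NCEDC")
t0 = UTCDateTime("2014-08-24T10:20:44")      # South Napa earthquake
st = client.get_waveforms("BK", "CMB", "*", "BHZ", t0, t0 + 300)
print(st)                                     # MiniSEED-backed Stream object
cat = client.get_events(starttime=t0 - 60, endtime=t0 + 3600, minmagnitude=5)
print(cat)                                    # QuakeML-backed Catalog
```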
NASA Astrophysics Data System (ADS)
Roccatello, E.; Nozzi, A.; Rumor, M.
2013-05-01
This paper illustrates the key concepts behind the design and development of a framework, based on OGC services, capable of visualizing large-scale 3D geospatial data streamed over the web. WebGISes are traditionally bound to a two-dimensional, simplified representation of reality, and though they successfully address the lack of flexibility and simplicity of traditional desktop clients, a lot of effort is still needed to reach desktop GIS features such as 3D visualization. The motivations behind this work lie in the widespread availability of OGC Web Services inside government organizations and in web browsers' support for the HTML5 and WebGL standards. This delivers an improved user experience, similar to desktop applications, thereby allowing traditional WebGIS features to be augmented with a 3D visualization framework. This work can be seen as an extension of the Cityvu project, started in 2008 with the aim of a plug-in-free OGC CityGML viewer. The resulting framework has also been integrated into existing 3D GIS software products and will be made available in the coming months.
A SOAP Web Service for accessing MODIS land product subsets
DOE Office of Scientific and Technical Information (OSTI.GOV)
SanthanaVannan, Suresh K; Cook, Robert B; Pan, Jerry Yun
2011-01-01
Remote sensing data from satellites have provided valuable information on the state of the earth for several decades. Since March 2000, the Moderate Resolution Imaging Spectroradiometer (MODIS) sensors on board NASA's Terra and Aqua satellites have been providing estimates of several land parameters useful in understanding earth system processes at global, continental, and regional scales. However, the HDF-EOS file format, the specialized software needed to process HDF-EOS files, the data volume, and the high spatial and temporal resolution of MODIS data make it difficult for users wanting to extract small but valuable amounts of information from the MODIS record. To overcome this usability issue, the NASA-funded Distributed Active Archive Center (DAAC) for Biogeochemical Dynamics at Oak Ridge National Laboratory (ORNL) developed a Web service that provides subsets of MODIS land products using Simple Object Access Protocol (SOAP). The ORNL DAAC MODIS subsetting Web service is a unique way of serving satellite data that exploits a well-established and popular Internet protocol to allow users access to massive amounts of remote sensing data. The Web service provides MODIS land product subsets up to 201 x 201 km in a non-proprietary comma-delimited text file format. Users can programmatically query the Web service to extract MODIS land parameters for real-time data integration into models and decision support tools, or connect it to workflow software. Information regarding the MODIS SOAP subsetting Web service is available on the World Wide Web (WWW) at http://daac.ornl.gov/modiswebservice.
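A sketch of a programmatic call to this service with the zeep SOAP client; the WSDL location and the getsubset argument names follow the service's legacy documentation and should be treated as assumptions here:

```python
# Sketch of calling the ORNL DAAC MODIS subsetting SOAP service with zeep;
# the WSDL location and the 'getsubset' signature follow the service's
# legacy documentation and should be verified before use.
from zeep import Client

wsdl = ("https://daac.ornl.gov/cgi-bin/MODIS/GLBVIZ_1_Glb_subset/"
        "MODIS_webservice.wsdl")
client = Client(wsdl)

subset = client.service.getsubset(
    Latitude=35.96, Longitude=-84.29,        # Oak Ridge, TN
    Product="MOD13Q1", Band="250m_16_days_NDVI",
    MODIS_Subset_Start_Date="A2010001", MODIS_Subset_End_Date="A2010032",
    KmAboveBelow=1, KmLeftRight=1)           # small footprint around the point
print(subset)                                # comma-delimited text rows
```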
Koutelakis, George V.; Anastassopoulos, George K.; Lymberopoulos, Dimitrios K.
2012-01-01
Multiprotocol medical imaging communication through the Internet is more flexible than tight DICOM transfers. This paper introduces a modular multiprotocol teleradiology architecture that integrates DICOM and common Internet services (based on web, FTP, and E-mail) into a unique operational domain. The extended WADO service (a web extension of DICOM) and the other proposed services allow access to all levels of the DICOM information hierarchy, as opposed to solely the Object level. A lightweight client site is considered adequate, because the server site of the architecture provides clients with service interfaces through the web as well as invulnerable space for temporary storage, called User Domains, so that users can fulfill their applications' tasks. The proposed teleradiology architecture was pilot-implemented mainly using Java-based technologies and was evaluated by engineers in collaboration with doctors. The new architecture ensures flexibility in access, user mobility, and enhanced data security. PMID:22489237
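For illustration, a WADO-URI retrieval as standardized in DICOM PS3.18 looks like the following; the host and UIDs are placeholders:

```python
# Sketch of retrieving a DICOM object through the standard WADO-URI query
# interface (DICOM PS3.18); the host and UIDs below are placeholders.
import requests

r = requests.get("https://pacs.example.org/wado", params={
    "requestType": "WADO",
    "studyUID":  "1.2.840.113619.2.1.1",     # placeholder UIDs
    "seriesUID": "1.2.840.113619.2.1.2",
    "objectUID": "1.2.840.113619.2.1.3",
    "contentType": "application/dicom",
})
open("image.dcm", "wb").write(r.content)
```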
Marketing and Selling CD-ROM Products on the World-Wide Web.
ERIC Educational Resources Information Center
Walker, Becki
1995-01-01
Describes three companies' approaches to marketing and selling CD-ROM products on the World Wide Web. Benefits include low overhead for Internet-based sales, allowance for creativity, and ability to let customers preview products online. Discusses advertising, information delivery, content, information services, and security. (AEF)
Using USNO's API to Obtain Data
NASA Astrophysics Data System (ADS)
Lesniak, Michael V.; Pozniak, Daniel; Punnoose, Tarun
2015-01-01
The U.S. Naval Observatory (USNO) is in the process of modernizing its publicly available web services into APIs (Application Programming Interfaces). Services configured as APIs offer greater flexibility to the user and allow greater usage. Depending on the particular service, users who implement our APIs will receive either a PNG (Portable Network Graphics) image or data in JSON (JavaScript Object Notation) format. This raw data can then be embedded in third-party web sites or in apps. Part of the USNO's mission is to provide astronomical and timing data to government agencies and the general public. To this end, the USNO provides accurate computations of astronomical phenomena such as dates of lunar phases, rise and set times of the Moon and Sun, and lunar and solar eclipse times. Users who navigate to our web site and select one of our 18 services are prompted to complete a web form, specifying parameters such as date, time, location, and object. Many of our services work for years between 1700 and 2100, meaning that past, present, and future events can be computed. Upon form submission, our web server processes the request, computes the data, and outputs it to the user. Over recent years, the use of the web by the general public has vastly changed. In response to this, the USNO is modernizing its web-based data services. This includes making our computed data easier to embed within third-party web sites as well as easier to query from apps running on tablets and smart phones. To facilitate this, the USNO has begun converting its services into APIs. In addition to the existing web forms for the various services, users are able to make direct URL requests that return either an image or numerical data. To date, four of our web services have been configured to run with APIs. Two are image-producing services: "Apparent Disk of a Solar System Object" and "Day and Night Across the Earth." Two API data services are "Complete Sun and Moon Data for One Day" and "Dates of Primary Phases of the Moon." Instructions for how to use our API services as well as examples of their use can be found on one of our explanatory web pages and will be discussed here.
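A sketch of a direct URL request to one of the data APIs; the endpoint and parameter names follow the pattern the service used for "Complete Sun and Moon Data for One Day" and should be re-verified before use:

```python
# Sketch of querying a USNO data API; the endpoint, parameter names, and
# JSON keys follow the pattern the service used at the time and are
# assumptions here.
import requests

r = requests.get("http://api.usno.navy.mil/rstt/oneday",
                 params={"date": "1/15/2015", "loc": "Washington, DC"})
data = r.json()                      # JSON payload for data services
print(data["sundata"])               # assumed key for sun rise/set events
```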
TOKEN: Trustable Keystroke-Based Authentication for Web-Based Applications on Smartphones
NASA Astrophysics Data System (ADS)
Nauman, Mohammad; Ali, Tamleek
Smartphones are increasingly being used to store personal information as well as to access sensitive data from the Internet and the cloud. Establishment of the identity of a user requesting information from smartphones is a prerequisite for secure systems in such scenarios. In the past, keystroke-based user identification has been successfully deployed on production-level mobile devices to mitigate the risks associated with naïve username/password based authentication. However, these approaches have two major limitations: they are not applicable to services where authentication occurs outside the domain of the mobile device - such as web-based services; and they often overly tax the limited computational capabilities of mobile devices. In this paper, we propose a protocol for keystroke dynamics analysis which allows web-based applications to make use of remote attestation and delegated keystroke analysis. The end result is an efficient keystroke-based user identification mechanism that strengthens traditional password protected services while mitigating the risks of user profiling by collaborating malicious web services.
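As a minimal sketch of the kind of timing features keystroke-dynamics analysis extracts (the event format is illustrative; the paper's actual protocol adds remote attestation and delegated analysis on top of such features):

```python
# Minimal sketch of timing-feature extraction for keystroke dynamics; the
# event format and values are illustrative, not the paper's protocol.
def keystroke_features(events):
    """events: list of (key, press_time_ms, release_time_ms), in typed order."""
    dwell = [r - p for _, p, r in events]                 # hold time per key
    flight = [events[i + 1][1] - events[i][2]             # gap between keys
              for i in range(len(events) - 1)]
    return dwell, flight

dwell, flight = keystroke_features(
    [("p", 0, 95), ("a", 180, 260), ("s", 330, 410), ("s", 500, 590)])
print(dwell, flight)   # feature vectors a classifier would compare to a profile
```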
Architecture of the local spatial data infrastructure for regional climate change research
NASA Astrophysics Data System (ADS)
Titov, Alexander; Gordov, Evgeny
2013-04-01
Georeferenced datasets (meteorological databases, modeling and reanalysis results, etc.) are actively used in modeling and analysis of climate change on various spatial and temporal scales. Due to the inherent heterogeneity of environmental datasets, as well as their size, which may reach tens of terabytes for a single dataset, studies of climate and environmental change require special software support based on the SDI approach. A dedicated architecture of the local spatial data infrastructure aimed at regional climate change analysis using modern web mapping technologies is presented. A geoportal is a key element of any SDI, allowing searching of geoinformation resources (datasets and services) using metadata catalogs, producing geospatial data selections by their parameters (data access functionality), and managing services and applications for cartographical visualization. It should be noted that, for objective reasons such as large dataset volumes, the complexity of the data models used, and syntactic and semantic differences between datasets, the development of environmental geodata access, processing, and visualization services is quite a complex task. Those circumstances were taken into account while developing the architecture of the local spatial data infrastructure as a universal framework providing geodata services. The architecture presented includes:
1. A model of storing big sets of regional georeferenced data that is effective in terms of search, access, retrieval, and subsequent statistical processing, allowing in particular the storage of frequently used values (such as monthly and annual climate change indices), thus providing different temporal views of the datasets.
2. A general architecture of the corresponding software components handling geospatial datasets within the storage model.
3. A metadata catalog describing the datasets used in climate research in detail using ISO 19115 and CF-convention standards, as a basic element of the spatial data infrastructure, published according to the OGC CSW (Catalog Service Web) specification.
4. Computational and mapping web services to work with geospatial datasets based on OWS (OGC Web Services) standards: WMS, WFS, WPS.
5. A geoportal as a key element of the thematic regional spatial data infrastructure, also providing a software framework for dedicated web application development.
To realize the web mapping services, Geoserver software is used, since it provides a native WPS implementation as a separate software module. To provide geospatial metadata services, the GeoNetwork Opensource (http://geonetwork-opensource.org) product is planned to be used, as it supports the ISO 19115/ISO 19119/ISO 19139 metadata standards as well as the ISO CSW 2.0 profile for both client and server. To implement thematic applications based on geospatial web services within the framework of the local SDI geoportal, the following open-source software has been selected:
1. The OpenLayers JavaScript library, providing basic web mapping functionality for a thin client such as a web browser.
2. The GeoExt/ExtJS JavaScript libraries for building client-side web applications working with geodata services.
The web interface developed will be similar to the interfaces of popular desktop GIS applications such as uDig and QuantumGIS. The work is partially supported by RF Ministry of Education and Science grant 8345, SB RAS Program VIII.80.2.1 and IP 131.
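For illustration, dataset discovery against such a geoportal's CSW endpoint might look as follows with OWSLib; the catalog URL is a placeholder:

```python
# Sketch of discovering datasets through a GeoNetwork CSW catalog endpoint
# with OWSLib; the URL is a placeholder for the geoportal's CSW service.
from owslib.csw import CatalogueServiceWeb
from owslib.fes import PropertyIsLike

csw = CatalogueServiceWeb("http://climate.example.ru/geonetwork/srv/eng/csw")
query = PropertyIsLike("csw:AnyText", "%climate index%")
csw.getrecords2(constraints=[query], maxrecords=10)
for rec_id, rec in csw.records.items():
    print(rec_id, "-", rec.title)
```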
EnviroAtlas - Austin, TX - Demographics by Block Group Web Service
This EnviroAtlas web service supports research and online mapping activities related to EnviroAtlas (https://enviroatlas.epa.gov/EnviroAtlas). This EnviroAtlas dataset is a summary of key demographic groups for the EnviroAtlas community. This dataset was produced by the US EPA to support research and online mapping activities related to EnviroAtlas. EnviroAtlas (https://www.epa.gov/enviroatlas) allows the user to interact with a web-based, easy-to-use, mapping application to view and analyze multiple ecosystem services for the contiguous United States. The dataset is available as downloadable data (https://edg.epa.gov/data/Public/ORD/EnviroAtlas) or as an EnviroAtlas map service. Additional descriptive information about each attribute in this dataset can be found in its associated EnviroAtlas Fact Sheet (https://www.epa.gov/enviroatlas/enviroatlas-fact-sheets).
Designing Crop Simulation Web Service with Service Oriented Architecture Principle
NASA Astrophysics Data System (ADS)
Chinnachodteeranun, R.; Hung, N. D.; Honda, K.
2015-12-01
Crop simulation models are efficient tools for simulating crop growth processes and yield. Running crop models requires data from various sources as well as time-consuming data processing, such as data quality checking and data formatting, before those data can be input to the model. This limits the use of crop modeling to crop modelers. We aim to make running crop models convenient for a broad range of users so that the utilization of crop models will expand, directly improving agricultural applications. As a first step, we developed a prototype that runs DSSAT on the Web, called Tomorrow's Rice (v. 1). It predicts rice yields based on a planting date, rice variety, and soil characteristics using the DSSAT crop model. A user only needs to select a planting location on the Web GUI; the system then queries historical weather data from available sources and returns the expected yield. Currently, we are working on weather data connection via the Sensor Observation Service (SOS) interface defined by the Open Geospatial Consortium (OGC). Weather data can be automatically connected to a weather generator for generating weather scenarios for running the crop model. In order to expand these services further, we are designing a web service framework consisting of layers of web services to support composition and execution of crop simulations. This framework allows a third-party application to call and cascade each service as needed for data preparation and for running the DSSAT model using a dynamic web service mechanism. The framework has a module to manage data format conversion, which means users do not need to spend their time curating the data inputs. Dynamic linking of data sources and services is implemented using the Service Component Architecture (SCA). This agriculture web service platform demonstrates interoperability of weather data using the SOS interface, convenient connections between weather data sources and the weather generator, and the chaining of various services for running crop models for decision support.
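A sketch of the SOS-based weather data connection using OWSLib; the service URL, offering, and observed property are placeholders:

```python
# Sketch of pulling weather observations over an OGC SOS interface with
# OWSLib, as a framework like this one might do before feeding a weather
# generator; the URL and offering/property names are placeholders.
from owslib.sos import SensorObservationService

sos = SensorObservationService("http://weather.example.org/sos",
                               version="1.0.0")
offering = list(sos.contents)[0]          # first advertised offering
obs = sos.get_observation(
    offerings=[offering],
    observedProperties=["air_temperature"],       # placeholder property
    responseFormat='text/xml;subtype="om/1.0.0"')
print(obs[:400])                          # O&M XML to be parsed downstream
```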
GeoNetwork powered GI-cat: a geoportal hybrid solution
NASA Astrophysics Data System (ADS)
Baldini, Alessio; Boldrini, Enrico; Santoro, Mattia; Mazzetti, Paolo
2010-05-01
For setting up a Spatial Data Infrastructure (SDI), the creation of a system for metadata management and discovery plays a fundamental role. An effective solution is the use of a geoportal (e.g., the FAO/ESA geoportal), which has the important benefit of being accessible from a web browser. With this work we present a solution based on integrating two of the available frameworks: GeoNetwork and GI-cat. GeoNetwork is open-source software designed to improve the accessibility of a wide variety of data together with the associated ancillary information (metadata), at different scales and from multidisciplinary sources; data are organized and documented in a standard and consistent way. GeoNetwork implements both the Portal and Catalog components of a Spatial Data Infrastructure (SDI) as defined in the OGC Reference Architecture. It provides tools for managing and publishing metadata on spatial data and related services. GeoNetwork allows harvesting of various types of web data sources, e.g., OGC Web Services (CSW, WCS, WMS). GI-cat is a distributed catalog based on a service-oriented framework of modular components and can be customized and tailored to support different deployment scenarios. It can federate a multiplicity of catalog services, as well as inventory and access services, in order to discover and access heterogeneous ESS resources. The federated resources are exposed by GI-cat through several standard catalog interfaces (e.g., OGC CSW AP ISO, OpenSearch, etc.) and by the GI-cat extended interface. Specific components, called accessors, implement mediation services for interfacing heterogeneous service providers, each of which exposes a specific standard specification. These mediating components resolve the providers' data-model multiplicity by mapping the provider models onto the GI-cat internal data model, which implements the ISO 19115 Core profile. Accessors also implement query protocol mapping: they translate query requests expressed according to the interface protocols exposed by GI-cat into the multiple query dialects spoken by the resource service providers. Currently, a number of well-accepted catalog and inventory services are supported, including several OGC Web Services, THREDDS Data Server, SeaDataNet Common Data Index, GBIF and OpenSearch engines. A GeoNetwork-powered GI-cat has been developed in order to exploit the best of the two frameworks. The new system uses a modified version of the GeoNetwork web interface that adds the capability of querying a specified GI-cat catalog, not only the GeoNetwork internal database. The resulting system consists of a geoportal in which GI-cat plays the role of the search engine. This new system allows the query to be distributed over the different types of data sources linked to a GI-cat. The metadata results of the query are then visualized by the GeoNetwork web interface. This configuration was tested in the framework of GIIDA, a project of the Italian National Research Council (CNR) focused on data accessibility and interoperability. A second advantage of this solution is achieved by setting up a GeoNetwork catalog amongst the accessors of the GI-cat instance. Such a configuration allows GI-cat, in turn, to run queries against the internal GeoNetwork database. This makes both the harvesting and metadata editor functionalities provided by GeoNetwork and the distributed search functionality of GI-cat available in a consistent way through the same web interface.
NASA Astrophysics Data System (ADS)
Dumitrescu, Catalin; Nowack, Andreas; Padhi, Sanjay; Sarkar, Subir
2010-04-01
This paper presents a web-based job monitoring framework for individual Grid sites that allows users to follow their jobs in detail in quasi-real time. The framework consists of several independent components: (a) a set of sensors that run on the site CE and worker nodes and update a database, (b) a simple yet extensible web services framework, and (c) an Ajax-powered web interface having a look-and-feel and controls similar to a desktop application. The monitoring framework supports LSF, Condor and PBS-like batch systems. This is one of the first monitoring systems where an X.509-authenticated web interface can be seamlessly accessed by both end users and site administrators. While a site administrator has access to all the possible information, a user can only view the jobs for the Virtual Organizations (VOs) he/she is a part of. The monitoring framework design supports several possible deployment scenarios. For a site running a supported batch system, the system may be deployed as a whole, or existing site sensors can be adapted and reused with the web services components. A site may even prefer to build the web server independently and choose to use only the Ajax-powered web interface. Finally, the system is being used to monitor a glideinWMS instance. This broadens the scope significantly, allowing it to monitor jobs over multiple sites.
The Footprint Database and Web Services of the Herschel Space Observatory
NASA Astrophysics Data System (ADS)
Dobos, László; Varga-Verebélyi, Erika; Verdugo, Eva; Teyssier, David; Exter, Katrina; Valtchanov, Ivan; Budavári, Tamás; Kiss, Csaba
2016-10-01
Data from the Herschel Space Observatory is freely available to the public but no uniformly processed catalogue of the observations has been published so far. To date, the Herschel Science Archive does not contain the exact sky coverage (footprint) of individual observations and supports search for measurements based on bounding circles only. Drawing on previous experience in implementing footprint databases, we built the Herschel Footprint Database and Web Services for the Herschel Space Observatory to provide efficient search capabilities for typical astronomical queries. The database was designed with the following main goals in mind: (a) provide a unified data model for meta-data of all instruments and observational modes, (b) quickly find observations covering a selected object and its neighbourhood, (c) quickly find every observation in a larger area of the sky, (d) allow for finding solar system objects crossing observation fields. As a first step, we developed a unified data model of observations of all three Herschel instruments for all pointing and instrument modes. Then, using telescope pointing information and observational meta-data, we compiled a database of footprints. As opposed to methods using pixellation of the sphere, we represent sky coverage in an exact geometric form allowing for precise area calculations. For easier handling of Herschel observation footprints with rather complex shapes, two algorithms were implemented to reduce the outline. Furthermore, a new visualisation tool to plot footprints with various spherical projections was developed. Indexing of the footprints using Hierarchical Triangular Mesh makes it possible to quickly find observations based on sky coverage, time and meta-data. The database is accessible via a web site http://herschel.vo.elte.hu and also as a set of REST web service functions, which makes it readily usable from programming environments such as Python or IDL. The web service allows downloading footprint data in various formats including Virtual Observatory standards.
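A sketch of how the REST interface might be called from Python; the route and parameter names are illustrative guesses at the API shape, not documented endpoints:

```python
# Sketch of calling the Herschel Footprint Database REST services; the
# route and parameter names here are illustrative assumptions, not
# documented endpoints.
import requests

r = requests.get("http://herschel.vo.elte.hu/ws/footprint",  # hypothetical
                 params={"ra": 83.822, "dec": -5.391,        # Orion Nebula
                         "radius": 0.5, "format": "json"})
for obs in r.json():
    print(obs)   # observations whose footprints cover the search region
```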
Konc, Janez; Janežič, Dušanka
2012-01-01
The ProBiS web server is a web server for detection of structurally similar binding sites in the PDB and for local pairwise alignment of protein structures. In this article, we present a new version of the ProBiS web server that is 10 times faster than earlier versions, due to the efficient parallelization of the ProBiS algorithm, which now allows significantly faster comparison of a protein query against the PDB and reduces the calculation time for scanning the entire PDB from hours to minutes. It also features new web services, and an improved user interface. In addition, the new web server is united with the ProBiS-Database and thus provides instant access to pre-calculated protein similarity profiles for over 29 000 non-redundant protein structures. The ProBiS web server is particularly adept at detection of secondary binding sites in proteins. It is freely available at http://probis.cmm.ki.si/old-version, and the new ProBiS web server is at http://probis.cmm.ki.si. PMID:22600737
Mapping and Modeling Web Portal to Advance Global Monitoring and Climate Research
NASA Astrophysics Data System (ADS)
Chang, G.; Malhotra, S.; Bui, B.; Sadaqathulla, S.; Goodale, C. E.; Ramirez, P.; Kim, R. M.; Rodriguez, L.; Law, E.
2011-12-01
Today, the principal investigators of NASA Earth Science missions develop their own software to manipulate, visualize, and analyze the data collected from Earth, space, and airborne observation instruments. There is very little, if any, collaboration among these principal investigators due to the lack of collaborative tools that would allow these scientists to share data and results. At NASA's Jet Propulsion Laboratory (JPL), under the Lunar Mapping and Modeling Project (LMMP), we have built a web portal that exposes a set of common services to allow users to search, visualize, subset, and download lunar science data. Users also have access to a set of tools that visualize, analyze and annotate the data. These services are developed according to industry standards for data access and manipulation, such as REST and Open Geospatial Consortium (OGC) web services. As a result, users can access the datasets through custom-written applications or off-the-shelf applications such as Google Earth. Even though it is currently used to store and process lunar data, this web portal infrastructure has been designed to support other solar system bodies such as asteroids and planets, including Earth. The infrastructure uses a combination of custom, commercial, and open-source software as well as off-the-shelf hardware and pay-by-use cloud computing services. The use of standardized web service interfaces facilitates platform- and application-independent access to the services and data. For instance, we have software clients for the LMMP portal that provide a rich browsing and analysis experience from a variety of platforms, including iOS and Android mobile platforms and large-screen multi-touch displays with 3-D terrain viewing functions. The service-oriented architecture and design principles utilized in the implementation of the portal lend themselves to reuse and scaling and could naturally be extended to include a collaborative environment that enables scientists and principal investigators to share their research and analysis seamlessly. In addition, this extension will allow users to easily share their tools and data, and to enrich their mapping and analysis experiences. In this talk, we will describe the advanced data management and portal technologies used to power this collaborative environment. We will further illustrate how this environment can enable, enhance and advance global monitoring and climate research.
76 FR 51044 - Agency Information Collection Activities: Submission for OMB Review; Comment Request
Federal Register 2010, 2011, 2012, 2013, 2014
2011-08-17
.... Project: SAMHSA SOAR Web-Based Data Form--NEW In 2009 the Substance Abuse and Mental Health Services... states. SOAR's primary objective is to improve the allowance rate for Social Security Administration (SSA... SAMHSA's direction developed a web-based data form that case managers can use to track the progress of...
Web Services Provide Access to SCEC Scientific Research Application Software
NASA Astrophysics Data System (ADS)
Gupta, N.; Gupta, V.; Okaya, D.; Kamb, L.; Maechling, P.
2003-12-01
Web services offer scientific communities a new paradigm for sharing research codes and communicating results. While there are formal technical definitions of what constitutes a web service, for a user community such as the Southern California Earthquake Center (SCEC), we may conceptually consider a web service to be functionality provided on-demand by an application which is run on a remote computer located elsewhere on the Internet. The value of a web service is that it can (1) run a scientific code without the user needing to install and learn the intricacies of running the code; (2) provide the technical framework which allows a user's computer to talk to the remote computer which performs the service; (3) provide the computational resources to run the code; and (4) bundle several analysis steps and provide the end results in digital or (post-processed) graphical form. Within an NSF-sponsored ITR project coordinated by SCEC, we are constructing web services using architectural protocols and programming languages (e.g., Java). However, because the SCEC community has a rich pool of scientific research software (written in traditional languages such as C and FORTRAN), we also emphasize making existing scientific codes available by constructing web service frameworks which wrap around and directly run these codes. In doing so we attempt to broaden community usage of these codes. Web service wrapping of a scientific code can be done using a "web servlet" construction or by using a SOAP/WSDL-based framework. This latter approach is widely adopted in IT circles although it is subject to rapid evolution. Our wrapping framework attempts to "honor" the original codes with as little modification as is possible. For versatility we identify three methods of user access: (A) a web-based GUI (written in HTML and/or Java applets); (B) a Linux/OSX/UNIX command line "initiator" utility (shell-scriptable); and (C) direct access from within any Java application (and with the correct API interface from within C++ and/or C/Fortran). This poster presentation will provide descriptions of the following selected web services and their origin as scientific application codes: 3D community velocity models for Southern California, geocoordinate conversions (latitude/longitude to UTM), execution of GMT graphical scripts, data format conversions (Gocad to Matlab format), and implementation of Seismic Hazard Analysis application programs that calculate hazard curve and hazard map data sets.
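The wrapping idea can be sketched generically: expose an existing command-line code behind an HTTP endpoint without modifying it. The snippet below uses Flask and a hypothetical executable; SCEC's actual framework uses Java servlet and SOAP/WSDL wrappers rather than this code:

```python
# Minimal sketch of the "wrapping" idea: exposing an existing command-line
# scientific code through an HTTP endpoint without modifying the code
# itself. The binary name and flags are hypothetical.
import subprocess
from flask import Flask, jsonify, request

app = Flask(__name__)

@app.route("/velocity")
def velocity():
    lat, lon = request.args["lat"], request.args["lon"]
    # Run the legacy executable exactly as a user would at the command line.
    out = subprocess.run(["cvm_query", "-l", f"{lat},{lon}"],
                         capture_output=True, text=True, check=True)
    return jsonify({"stdout": out.stdout})

if __name__ == "__main__":
    app.run(port=8080)
```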
SADI, SHARE, and the in silico scientific method
2010-01-01
Background The emergence and uptake of Semantic Web technologies by the Life Sciences provides exciting opportunities for exploring novel ways to conduct in silico science. Web Service Workflows are already becoming first-class objects in “the new way”, and serve as explicit, shareable, referenceable representations of how an experiment was done. In turn, Semantic Web Service projects aim to facilitate workflow construction by biological domain-experts such that workflows can be edited, re-purposed, and re-published by non-informaticians. However the aspects of the scientific method relating to explicit discourse, disagreement, and hypothesis generation have remained relatively impervious to new technologies. Results Here we present SADI and SHARE - a novel Semantic Web Service framework, and a reference implementation of its client libraries. Together, SADI and SHARE allow the semi- or fully-automatic discovery and pipelining of Semantic Web Services in response to ad hoc user queries. Conclusions The semantic behaviours exhibited by SADI and SHARE extend the functionalities provided by Description Logic Reasoners such that novel assertions can be automatically added to a data-set without logical reasoning, but rather by analytical or annotative services. This behaviour might be applied to achieve the “semantification” of those aspects of the in silico scientific method that are not yet supported by Semantic Web technologies. We support this suggestion using an example in the clinical research space. PMID:21210986
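The ad hoc query pattern can be sketched as follows: a client poses a SPARQL query, and the SHARE client resolves it by discovering and invoking SADI services that can produce the required triples. The endpoint URL and vocabulary terms below are placeholders:

```python
# Sketch of the ad hoc query pattern: a SPARQL query whose answer is
# materialized by discovered services. The endpoint URL and vocabulary
# terms are placeholders, not SADI/SHARE's actual deployment.
from SPARQLWrapper import SPARQLWrapper, JSON

sparql = SPARQLWrapper("http://share.example.org/sparql")   # hypothetical
sparql.setQuery("""
    PREFIX pred: <http://sadiframework.org/examples/>
    SELECT ?protein ?motif WHERE {
        ?protein pred:hasMotif ?motif .      # triple produced by a service
    } LIMIT 10
""")
sparql.setReturnFormat(JSON)
for row in sparql.query().convert()["results"]["bindings"]:
    print(row["protein"]["value"], row["motif"]["value"])
```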
SAS- Semantic Annotation Service for Geoscience resources on the web
NASA Astrophysics Data System (ADS)
Elag, M.; Kumar, P.; Marini, L.; Li, R.; Jiang, P.
2015-12-01
There is a growing need for increased integration across the data and model resources disseminated on the web to advance their reuse across different earth science applications. Meaningful reuse of resources requires semantic metadata to realize the semantic web vision of pragmatic linkage and integration among resources. Semantic metadata associates standard metadata with resources to turn them into semantically enabled resources on the web. However, the lack of a common standardized metadata framework, as well as the uncoordinated use of metadata fields across different geo-information systems, has led to a situation in which standards and related Standard Names abound. To address this need, we have designed SAS to provide a bridge between the core ontologies required to annotate resources and information systems, in order to enable queries and analysis over annotations from a single environment (the web). SAS is one of the services provided by the Geosemantic framework, a decentralized semantic framework that supports the integration between models and data and allows semantically heterogeneous resources to interact with minimal human intervention. Here we present the design of SAS and demonstrate its application for annotating data and models. First we describe how predicates and their attributes are extracted from standards and ingested into the knowledge base of the Geosemantic framework. Then we illustrate the application of SAS in annotating data managed by SEAD and in annotating simulation models that have a web interface. SAS is a step in a broader approach to raising the quality of geoscience data and models published on the web, allowing users to better search, access, and use existing resources based on standard vocabularies that are encoded and published using semantic technologies.
Environmental Models as a Service: Enabling Interoperability ...
Achieving interoperability in environmental modeling has evolved as software technology has progressed. The recent rise of cloud computing and proliferation of web services initiated a new stage for creating interoperable systems. Scientific programmers increasingly take advantage of streamlined deployment processes and affordable cloud access to move algorithms and data to the web for discoverability and consumption. In these deployments, environmental models can become available to end users through RESTful web services and consistent application program interfaces (APIs) that consume, manipulate, and store modeling data. RESTful modeling APIs also promote discoverability and guide usability through self-documentation. Embracing the RESTful paradigm allows models to be accessible via a web standard, and the resulting endpoints are platform- and implementation-agnostic while simultaneously presenting significant computational capabilities for spatial and temporal scaling. RESTful APIs present data in a simple verb-noun web request interface: the verb dictates how a resource is consumed using HTTP methods (e.g., GET, POST, and PUT) and the noun represents the URL reference of the resource on which the verb will act. The RESTful API can self-document in both the HTTP response and an interactive web page using the Open API standard. This lets models function as an interoperable service that promotes sharing, documentation, and discoverability. Here, we discuss the
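A minimal sketch of the verb-noun pattern described above, using FastAPI (which self-documents through the OpenAPI standard); the one-line "model" is a stand-in for a real environmental model:

```python
# Minimal sketch of the verb-noun RESTful pattern: POST creates a model run
# (resource), GET retrieves it. FastAPI auto-generates OpenAPI docs, matching
# the self-documentation point above. The "model" is a toy stand-in.
from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI(title="Environmental Model as a Service")

class RunInput(BaseModel):
    rainfall_mm: float
    area_km2: float

runs: dict[int, dict] = {}   # in-memory store standing in for a database

@app.post("/runs")                      # POST verb creates a model run
def create_run(inp: RunInput):
    run_id = len(runs) + 1
    runoff = 0.3 * inp.rainfall_mm * inp.area_km2   # toy "model" calculation
    runs[run_id] = {"input": inp.dict(), "runoff": runoff}
    return {"id": run_id, **runs[run_id]}

@app.get("/runs/{run_id}")              # GET verb retrieves the resource
def get_run(run_id: int):
    return runs[run_id]
```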
Phylowood: interactive web-based animations of biogeographic and phylogeographic histories.
Landis, Michael J; Bedford, Trevor
2014-01-01
Phylowood is a web service that uses JavaScript to generate in-browser animations of biogeographic and phylogeographic histories from annotated phylogenetic input. The animations are interactive, allowing the user to adjust spatial and temporal resolution, and highlight phylogenetic lineages of interest. All documentation and source code for Phylowood is freely available at https://github.com/mlandis/phylowood, and a live web application is available at https://mlandis.github.io/phylowood.
A gateway for phylogenetic analysis powered by grid computing featuring GARLI 2.0.
Bazinet, Adam L; Zwickl, Derrick J; Cummings, Michael P
2014-09-01
We introduce molecularevolution.org, a publicly available gateway for high-throughput, maximum-likelihood phylogenetic analysis powered by grid computing. The gateway features a GARLI 2.0 web service that enables a user to quickly and easily submit thousands of maximum-likelihood tree searches or bootstrap searches that are executed in parallel on distributed computing resources. The GARLI web service allows one to easily specify partitioned substitution models using a graphical interface, and it performs sophisticated post-processing of phylogenetic results. Although the GARLI web service has been used by the research community for over three years, here we formally announce the availability of the service, describe its capabilities, highlight new features and recent improvements, and provide details about how the grid system efficiently delivers high-quality phylogenetic results. © The Author(s) 2014. Published by Oxford University Press, on behalf of the Society of Systematic Biologists.
Linking the EarthScope Data Virtual Catalog to the GEON Portal
NASA Astrophysics Data System (ADS)
Lin, K.; Memon, A.; Baru, C.
2008-12-01
The EarthScope Data Portal provides a unified, single point of access to EarthScope data and products from USArray, Plate Boundary Observatory (PBO), and San Andreas Fault Observatory at Depth (SAFOD) experiments. The portal features basic search and data access capabilities to allow users to discover and access EarthScope data using spatial, temporal, and other metadata-based (data type, station specific) search conditions. The portal search module is the user interface implementation of the EarthScope Data Search Web Service. This Web Service acts as a virtual catalog that in turn invokes Web services developed by IRIS (Incorporated Research Institutions for Seismology), UNAVCO (University NAVSTAR Consortium), and GFZ (German Research Center for Geosciences) to search for EarthScope data in the archives at each of these locations. These Web Services provide information about all resources (data) that match the specified search conditions. In this presentation we will describe how the EarthScope Data Search Web service can be integrated into the GEONsearch application in the GEON Portal (see http://portal.geongrid.org). Thus, a search request issued at the GEON Portal will also search the EarthScope virtual catalog, thereby providing users seamless access to data in GEON as well as in EarthScope via a common user interface.
BingEO: Enable Distributed Earth Observation Data for Environmental Research
NASA Astrophysics Data System (ADS)
Wu, H.; Yang, C.; Xu, Y.
2010-12-01
Our planet is facing great environmental challenges including global climate change, environmental vulnerability, extreme poverty, and a shortage of clean cheap energy. To address these problems, scientists are developing various models to analyze, forecast, and simulate geospatial phenomena to support critical decision making. These models not only challenge our computing technology, but also challenge us to meet their huge demand for earth observation data. Through various policies and programs, open and free sharing of earth observation data is advocated in earth science. Currently, thousands of data sources are freely available online through open standards such as Web Map Service (WMS), Web Feature Service (WFS) and Web Coverage Service (WCS). Seamless sharing and access to these resources call for a spatial Cyberinfrastructure (CI) to enable the use of spatial data for the advancement of related applied sciences including environmental research. Based on the Microsoft Bing Search Engine and Bing Maps, a seamlessly integrated and visual tool is under development to bridge the gap between researchers/educators and earth observation data providers. With this tool, earth science researchers/educators can easily and visually find the best data sets for their research and education. The tool includes a registry and its related supporting module at the server side and an integrated portal as its client. The proposed portal, Bing Earth Observation (BingEO), is based on Bing Search and Bing Maps to: 1) use Bing Search to discover Web Map Service (WMS) resources available over the internet; 2) develop and maintain a registry to manage all the available WMS resources and constantly monitor their service quality; 3) allow users to manually register data services; 4) provide a Bing Maps-based Web application to visualize the data on a high-quality and easy-to-manipulate map platform and enable users to select the best data layers online. Given the amount of observation data already accumulated and still growing, BingEO will allow these resources to be utilized more widely, intensively, efficiently, and economically in earth science applications.
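Discovery and registration of WMS endpoints could be scripted along the lines of the sketch below, using the OWSLib package; the service URL is only an example, and the registry logic inside BingEO itself is not public.

```python
from owslib.wms import WebMapService

# Any OGC WMS endpoint will do; this public OSM service URL is an example.
url = "https://ows.terrestris.de/osm/service"
wms = WebMapService(url, version="1.1.1")

# A registry entry might record the service title and the layers on offer.
print(wms.identification.title)
for name, layer in wms.contents.items():
    print(name, layer.boundingBoxWGS84)
```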
Briand, Dominique; Roux, Emmanuel; Desconnets, Jean Christophe; Gervet, Carmen; Barcellos, Christovam
2018-03-21
From prehistory to the present, and despite a sustained fight against it, malaria remains a threat to human beings. While advances in science and technology made it possible to eradicate some infectious diseases in the 20th century, malaria persists. This review aims at assessing how Internet and web technologies are used in fighting malaria. Specifically, how do malaria-fighting actors benefit from these developments, how do they deal with ensuing phenomena, such as the increase in data volume, and have these technologies brought new opportunities for fighting malaria? Eleven web platforms linked to spatio-temporal malaria information are reviewed, focusing on data, metadata, web services and categories of users. Though the web platforms are highly heterogeneous, the review reveals that the latest advances in web technologies are underused. Information is rarely updated dynamically, metadata catalogues are absent, web services are more and more used but rarely standardized, and websites are mainly dedicated to scientific communities, essentially researchers. Improvement of systems interoperability, through standardization, is an opportunity to be seized in order to allow real-time information exchange and online multisource data analysis. To facilitate multidisciplinary/multiscale studies, the web of linked data and semantic web innovations can be used to formalize the different viewpoints of the actors involved in the fight against malaria. By doing so, new malaria-fighting strategies could take shape, tackling the bottlenecks listed in the United Nations Millennium Development Goals reports as well as specific issues highlighted by the World Health Organization, such as malaria elimination across international borders.
Low-energy, low-budget sensor web enablement of an amateur weather station
NASA Astrophysics Data System (ADS)
Schmidt, G.; Herrnkind, S.; Klump, J.
2008-12-01
Sensor Web Enablement (OGC SWE) has developed into a powerful concept with many potential applications in environmental monitoring and in other fields. This has spurred development of software applications for Sensor Observation Services (SOS), while the development of client applications still lags behind. Furthermore, the deployment of sensors in the field often places tight constraints on the energy and bandwidth available for data capture and transmission. As a "proof of concept" we equipped an amateur weather station with low-budget, standard components to read the data from its base station and feed it into a sensor observation service using its standard web-service interface. We chose the weather station as an example because of its simple measured phenomena and its low data volume. As sensor observation service we chose the open source software package offered by the 52North consortium. Power consumption can be problematic when deploying a sensor platform in the field. Instead of a common PC we used a Network Storage Link Unit (NSLU2) with a Linux operating system, a configuration also known as "Debian SLUG". The power consumption of a "SLUG" is of the order of 2 to 5 W, compared to about 40 W for a small PC. The "SLUG" provides one Ethernet and two USB ports, one of which is used by its external USB hard drive. This modular setup is open to modifications, for example the addition of a GSM modem for data transmission over a cellular telephone network. The simple setup, low price, low power consumption, and the low technological entry level allow many potential uses of a "SLUG" in environmental sensor networks in research, education and citizen science. The use of mature sensor observation service software allows an easy integration of monitoring networks with other web services.
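The feeding step might look roughly like the following sketch; the endpoint URL is a placeholder, and the trimmed payload only hints at the shape of a real InsertObservation request, which for a 52North SOS is a complete O&M observation document.

```python
import requests

SOS_URL = "http://example.org/52n-sos/service"  # hypothetical SOS endpoint

# Schematic InsertObservation request; a real 52North SOS expects a full
# O&M observation document -- this trimmed payload only shows the shape.
body = """<?xml version="1.0" encoding="UTF-8"?>
<InsertObservation service="SOS" version="1.0.0"
    xmlns="http://www.opengis.net/sos/1.0">
  <AssignedSensorId>urn:weatherstation:ws2300</AssignedSensorId>
  <!-- om:Observation with time, feature of interest, and result goes here -->
</InsertObservation>"""

resp = requests.post(SOS_URL, data=body.encode("utf-8"),
                     headers={"Content-Type": "text/xml"})
print(resp.status_code)
```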
EOforge: Generic Open Framework for Earth Observation Data Processing Systems
2006-09-01
Key features include: use of existing interfaces such as MUIS, the ESA multimission catalogue for EO products; support for recent EO system technologies such as MASS (Multi-Application Support Service System), the ESA web services technology standard; extensibility and configurability to allow customisation and the inclusion of new functionality; and multi-instrument, multi-mission processing.
NASA Astrophysics Data System (ADS)
Miller, C. J.; Gasson, D.; Fuentes, E.
2007-10-01
The NOAO NVO Portal is a web application for one-stop discovery, analysis, and access to VO-compliant imaging data and services. The current release allows for GUI-based discovery of nearly a half million images from archives such as the NOAO Science Archive, the Hubble Space Telescope WFPC2 and ACS instruments, XMM-Newton, Chandra, and ESO's INT Wide-Field Survey, among others. The NOAO Portal allows users to view image metadata, footprint wire-frames, FITS image previews, and provides one-click access to science quality imaging data throughout the entire sky via the Firefox web browser (i.e., no applet or code to download). Users can stage images from multiple archives at the NOAO NVO Portal for quick and easy bulk downloads. The NOAO NVO Portal also provides simplified and direct access to VO analysis services, such as the WESIX catalog generation service. We highlight the features of the NOAO NVO Portal (http://nvo.noao.edu).
NASA Technical Reports Server (NTRS)
Burks, Jason E.; Molthan, Andrew L.; McGrath, Kevin M.
2014-01-01
During the last year several significant disasters have occurred, such as Superstorm Sandy on the East Coast of the United States and Typhoon Bopha in the Philippines, along with several others. In support of these disasters, NASA's Short-term Prediction Research and Transition (SPoRT) Center delivered various products derived from satellite imagery to help in the assessment of damage and recovery of the affected areas. To better support the decision makers responding to the disasters, SPoRT quickly developed several solutions to provide the data using open Geographic Information System (GIS) formats. Providing the data in open GIS standard formats allowed the end user to easily integrate the data into existing Decision Support Systems (DSS). Both Tile Mapping Service (TMS) and Web Mapping Service (WMS) were leveraged to quickly provide the data to the end user. Development of the delivery methodology allowed quick response to rapidly developing disasters and enabled NASA SPoRT to bring science data to decision makers in a successful research-to-operations transition.
This EnviroAtlas web service contains layers depicting market-based programs and projects addressing ecosystem services protection in the United States. Layers include data collected via surveys and desk research conducted by Forest Trends' Ecosystem Marketplace from 2008 to 2016 on biodiversity (i.e., imperiled species/habitats; wetlands and streams), carbon, and water markets and enabling conditions that facilitate, directly or indirectly, market-based approaches to protecting and investing in those ecosystem services. This dataset was produced by Forest Trends' Ecosystem Marketplace for EnviroAtlas in order to support public access to and use of information related to environmental markets. EnviroAtlas (https://www.epa.gov/enviroatlas) allows the user to interact with a web-based, easy-to-use, mapping application to view and analyze multiple ecosystem services for the contiguous United States. The dataset is available as downloadable data (https://edg.epa.gov/data/Public/ORD/EnviroAtlas) or as an EnviroAtlas map service. Additional descriptive information about this dataset can be found in its associated EnviroAtlas Fact Sheet (https://www.epa.gov/enviroatlas/enviroatlas-fact-sheets).
12 CFR 222.24 - Reasonable opportunity to opt out.
Code of Federal Regulations, 2010 CFR
2010-01-01
... at an Internet Web site at which the consumer has obtained a product or service. The consumer... consumer at the time of an electronic transaction, such as a transaction conducted on an Internet Web site... one that allows consumers to indicate that they do not want to opt out. (5) By including in a privacy...
Matos, Ely Edison; Campos, Fernanda; Braga, Regina; Palazzi, Daniele
2010-02-01
The amount of information generated by biological research has led to an intensive use of models. Mathematical and computational modeling needs accurate description to share, reuse and simulate models as formulated by the original authors. In this paper, we introduce the Cell Component Ontology (CelO), expressed in OWL-DL. This ontology captures both the structure of a cell model and the properties of functional components. We use this ontology in a Web project (CelOWS) to describe, query and compose CellML models, using semantic web services. It aims to improve the reuse and composition of existing components and to allow semantic validation of new models.
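Querying such an ontology might look as follows with rdflib; the file name is a placeholder, and the SPARQL pattern is a generic OWL-restriction query, since the actual CelO IRIs are not given in the abstract.

```python
from rdflib import Graph

g = Graph()
g.parse("celo.owl")  # placeholder file name for the CelO OWL-DL ontology

# Find model components and the functional properties restricted on them.
query = """
PREFIX rdfs: <http://www.w3.org/2000/01/rdf-schema#>
PREFIX owl: <http://www.w3.org/2002/07/owl#>
SELECT ?component ?property WHERE {
    ?component rdfs:subClassOf ?restriction .
    ?restriction owl:onProperty ?property .
}
"""
for component, prop in g.query(query):
    print(component, prop)
```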
Air Quality uFIND: User-oriented Tool Set for Air Quality Data Discovery and Access
NASA Astrophysics Data System (ADS)
Hoijarvi, K.; Robinson, E. M.; Husar, R. B.; Falke, S. R.; Schultz, M. G.; Keating, T. J.
2012-12-01
Historically, there have been major impediments to seamless and effective data usage encountered by both data providers and users. Over the last five years, the international Air Quality (AQ) Community has worked through forums such as the Group on Earth Observations AQ Community of Practice, the ESIP AQ Working Group, and the Task Force on Hemispheric Transport of Air Pollution to converge on data format standards (e.g., netCDF), data access standards (e.g., Open Geospatial Consortium Web Coverage Services), metadata standards (e.g., ISO 19115), as well as other conventions (e.g., the CF Naming Convention) in order to build an Air Quality Data Network. The centerpiece of the AQ Data Network is the web service-based tool set: user-oriented Filtering and Identification of Networked Data (uFIND). The purpose of uFIND is to provide rich and powerful facilities for the user to: a) discover and choose a desired dataset by navigating the multi-dimensional metadata space using faceted search, b) seamlessly access and browse datasets, and c) use uFIND's facilities as a web service for mashups with other AQ applications and portals. In a user-centric information system such as uFIND, the user experience is improved by metadata that includes the general fields for discovery as well as community-specific metadata to narrow the search beyond space, time and generic keyword searches. Even with the community-specific additions, the ISO 19115 records were formed in compliance with the standard, so that other standards-based search interfaces can leverage this additional information. To identify the fields necessary for metadata discovery we started with the ISO 19115 Core Metadata fields and the fields needed for a Catalog Service for the Web (CSW) Record. This fulfilled two goals: to create valid ISO 19115 records, and to be able to retrieve the records through a Catalog Service for the Web query. Beyond the required set of fields, the AQ Community added additional fields using a combination of keywords and ISO 19115 fields. These extensions allow discovery by measurement platform or observed phenomena. Beyond discovery metadata, the AQ records include service identification objects that allow standards-based clients, such as some brokers, to access the data found via OGC WCS or WMS data access protocols. uFIND is one such smart client: the combination of discovery and access metadata allows the user to preview each registered dataset through spatial and temporal views, observe data access and usage patterns, and find links to dataset-specific metadata directly in uFIND. The AQ data providers also benefit from this architecture, since their data products are easier to find and re-use, enhancing the relevance and importance of their products. Finally, the earth science community at large benefits from the service-oriented architecture of uFIND: since uFIND is itself a service, it allows service-based interfacing with providers and users of the metadata, and its facets can be further refined for a particular AQ application or completely repurposed for other Earth Science domains that use the same set of data access and metadata standards.
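The discovery side of such a system can be exercised with a plain CSW client, for example with OWSLib as sketched below; the catalogue URL and keyword are illustrative, not uFIND's actual endpoint.

```python
from owslib.csw import CatalogueServiceWeb
from owslib.fes import PropertyIsLike

# Illustrative CSW endpoint; uFIND's own catalogue URL is not given here.
csw = CatalogueServiceWeb("https://catalog.example.org/csw")

# Faceted narrowing beyond space and time: filter on an AnyText keyword.
query = PropertyIsLike("csw:AnyText", "%ozone%")
csw.getrecords2(constraints=[query], maxrecords=10)

for rec_id, rec in csw.records.items():
    print(rec_id, rec.title)
```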
NASA Astrophysics Data System (ADS)
Kerschke, Dorit; Schilling, Maik; Simon, Andreas; Wächter, Joachim
2014-05-01
The Energiewende and the increasing scarcity of raw materials will lead to an intensified utilization of the subsurface in Germany. Within this context, geological 3D modeling is a fundamental approach for integrated decision and planning processes. Initiated by the development of the European Geospatial Infrastructure INSPIRE, the German State Geological Offices started digitizing their predominantly analog archive inventory. Until now, a comprehensive 3D subsurface model of Brandenburg did not exist. Therefore the project B3D strove to develop a new 3D model as well as a subsequent infrastructure node to integrate all geological and spatial data within the Geodaten-Infrastruktur Brandenburg (Geospatial Infrastructure, GDI-BB) and provide it to the public through an interactive 2D/3D web application. The functionality of the web application is based on a client-server architecture. On the server side, all available spatial data are published through GeoServer. GeoServer is designed for interoperability and acts as the reference implementation of the Open Geospatial Consortium (OGC) Web Feature Service (WFS) standard, which provides the interface that allows requests for geographical features. In addition, GeoServer implements, among others, the high-performance, certified-compliant Web Map Service (WMS) that serves geo-referenced map images. For publishing 3D data, the OGC Web 3D Service (W3DS), a portrayal service for three-dimensional geo-data, is used. The W3DS displays elements representing the geometry, appearance, and behavior of geographic objects. On the client side, the web application is solely based on Free and Open Source Software and relies on WebGL, a JavaScript API that allows the interactive rendering of 2D and 3D graphics by means of GPU-accelerated physics and image processing as part of the web page canvas, without the use of plug-ins. WebGL is supported by most web browsers (e.g., Google Chrome, Mozilla Firefox, Safari, and Opera). The web application enables intuitive navigation through all available information and allows the visualization of geological maps (2D), seismic transects (2D/3D), wells (2D/3D), and the 3D model. These achievements will simplify spatial and geological data management within the German State Geological Offices and foster the interoperability of heterogeneous systems. The approach will provide guidance for a systematic subsurface management across system, domain and administrative boundaries on the basis of a federated spatial data infrastructure, and include the public in the decision processes (e-Governance). Yet, the interoperability of the systems has to be strongly propelled forward through agreements on standards that need to be decided upon in the responsible committees. The project B3D is funded with resources from the European Fund for Regional Development (EFRE).
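On the client side, published features can be pulled straight from GeoServer's WFS, as in the sketch below with OWSLib; the endpoint and type name are assumptions, not the actual GDI-BB identifiers.

```python
from owslib.wfs import WebFeatureService

# Hypothetical GeoServer WFS endpoint for the Brandenburg data.
wfs = WebFeatureService("https://gdi.example.org/geoserver/wfs",
                        version="1.1.0")

# Request well locations as GML for rendering in the 2D/3D web client.
response = wfs.getfeature(typename=["b3d:wells"], maxfeatures=50)
with open("wells.gml", "wb") as f:
    f.write(response.read())
```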
Beyond accuracy: creating interoperable and scalable text-mining web services.
Wei, Chih-Hsuan; Leaman, Robert; Lu, Zhiyong
2016-06-15
The biomedical literature is a knowledge-rich resource and an important foundation for future research. With over 24 million articles in PubMed and an increasing growth rate, research in automated text processing is becoming increasingly important. We report here our recently developed web-based text mining services for biomedical concept recognition and normalization. Unlike most text-mining software tools, our web services integrate several state-of-the-art entity tagging systems (DNorm, GNormPlus, SR4GN, tmChem and tmVar) and offer a batch-processing mode able to process arbitrary text input (e.g. scholarly publications, patents and medical records) in multiple formats (e.g. BioC). We support multiple standards to make our service interoperable and allow simpler integration with other text-processing pipelines. To maximize scalability, we have preprocessed all PubMed articles, and use a computer cluster for processing large requests of arbitrary text. Our text-mining web service is freely available at http://www.ncbi.nlm.nih.gov/CBBresearch/Lu/Demo/tmTools/#curl Contact: Zhiyong.Lu@nih.gov. Published by Oxford University Press 2016. This work is written by US Government employees and is in the public domain in the US.
Footprint Database and web services for the Herschel space observatory
NASA Astrophysics Data System (ADS)
Verebélyi, Erika; Dobos, László; Kiss, Csaba
2015-08-01
Using all telemetry and observational meta-data, we created a searchable database of Herschel observation footprints. Data from the Herschel space observatory is freely available for everyone, but no uniformly processed catalog of all observations has been published yet. As a first step, we unified the data model for all three Herschel instruments in all observation modes and compiled a database of sky coverage information. As opposed to methods using a pixellation of the sphere, in our database sky coverage is stored in exact geometric form, allowing for precise area calculations. Indexing of the footprints allows for very fast search among observations based on pointing, time, sky coverage overlap and meta-data. This enables us, for example, to find moving objects easily in Herschel fields. The database is accessible via a web site and also as a set of REST web service functions, which make it usable from program clients like Python or IDL scripts. Data is available in various formats including Virtual Observatory standards.
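A REST interface of this kind is typically consumed as in the sketch below; the endpoint path, parameter names, and response fields are invented for illustration, since the service's actual function names are not listed in the abstract.

```python
import requests

BASE = "http://herschel.example.org/footprint/api"  # hypothetical root

# Search for observations whose footprint overlaps a given sky position.
params = {"ra": 83.822, "dec": -5.391, "radius": 0.5, "format": "json"}
observations = requests.get(f"{BASE}/search", params=params).json()

for obs in observations:
    print(obs["obsid"], obs["instrument"])
```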
Deploying and sharing U-Compare workflows as web services.
Kontonatsios, Georgios; Korkontzelos, Ioannis; Kolluru, Balakrishna; Thompson, Paul; Ananiadou, Sophia
2013-02-18
U-Compare is a text mining platform that allows the construction, evaluation and comparison of text mining workflows. U-Compare contains a large library of components that are tuned to the biomedical domain. Users can rapidly develop biomedical text mining workflows by mixing and matching U-Compare's components. Workflows developed using U-Compare can be exported and sent to other users who, in turn, can import and re-use them. However, the resulting workflows are standalone applications, i.e., software tools that run and are accessible only via a local machine, and that can only be run with the U-Compare platform. We address the above issues by extending U-Compare to convert standalone workflows into web services automatically, via a two-click process. The resulting web services can be registered on a central server and made publicly available. Alternatively, users can make web services available on their own servers, after installing the web application framework, which is part of the extension to U-Compare. We have performed a user-oriented evaluation of the proposed extension, by asking users who have tested the enhanced functionality of U-Compare to complete questionnaires that assess its functionality, reliability, usability, efficiency and maintainability. The results obtained reveal that the new functionality is well received by users. The web services produced by U-Compare are built on top of open standards, i.e., REST and SOAP protocols, and therefore, they are decoupled from the underlying platform. Exported workflows can be integrated with any application that supports these open standards. We demonstrate how the newly extended U-Compare enhances the cross-platform interoperability of workflows, by seamlessly importing a number of text mining workflow web services exported from U-Compare into Taverna, i.e., a generic scientific workflow construction platform.
NASA Astrophysics Data System (ADS)
Friberg, P. A.; Luis, R. S.; Quintiliani, M.; Lisowski, S.; Hunter, S.
2014-12-01
Recently, a novel set of modules has been included in the Open Source Earthworm seismic data processing system, supporting the use of web applications. These include the Mole sub-system, for storing relevant event data in a MySQL database (see M. Quintiliani and S. Pintore, SRL, 2013), and an embedded web server, Moleserv, for serving such data to web clients in QuakeML format. These modules have enabled, for the first time using Earthworm, the use of web applications for seismic data processing. These can greatly simplify the operation and maintenance of seismic data processing centers by having one or more servers provide the relevant data, as well as the data processing applications themselves, to client machines running arbitrary operating systems. Web applications with secure online access allow operators to work anywhere, without the often cumbersome and bandwidth-hungry use of secure shell or virtual private networks. Furthermore, web applications can seamlessly access third-party data repositories to acquire additional information, such as maps. Finally, the use of HTML email brought the possibility of specialized web applications to be used in email clients. This is the case of EWHTMLEmail, which produces event notification emails that are in fact simple web applications for plotting relevant seismic data. Providing web services as part of Earthworm has enabled a number of other tools as well. One is ISTI's EZ Earthworm, a web-based command and control system for an otherwise command-line driven system; another is a waveform web service. The waveform web service serves Earthworm data to additional web clients for plotting, picking, and other web-based processing tools. The current Earthworm waveform web service hosts an advanced plotting capability for providing views of event-based waveforms from a Mole database served by Moleserv. The current trend towards cloud services supported by web applications is driving improvements in JavaScript, CSS and HTML, as well as faster and more efficient web browsers, including mobile ones. It is foreseeable that in the near future web applications will be as powerful and efficient as native applications. Hence, the work described here is a first step towards bringing the Open Source Earthworm seismic data processing system to this new paradigm.
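Fetching and parsing QuakeML from such a service could be done as follows; the host and query parameters are placeholders for a Moleserv deployment, while the QuakeML BED namespace is the standard one.

```python
import requests
import xml.etree.ElementTree as ET

# Placeholder Moleserv endpoint serving event data as QuakeML.
url = "http://earthworm.example.org/moleserv/quakeml"
xml_text = requests.get(url, params={"starttime": "2014-01-01"}).text

BED = "http://quakeml.org/xmlns/bed/1.2"
ns = {"q": BED}
root = ET.fromstring(xml_text)

# Walk the event elements and print ID and preferred magnitude value.
for event in root.iter(f"{{{BED}}}event"):
    mag = event.find(".//q:magnitude/q:mag/q:value", ns)
    print(event.get("publicID"), mag.text if mag is not None else "n/a")
```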
docBUILDER - Building Your Useful Metadata for Earth Science Data and Services.
NASA Astrophysics Data System (ADS)
Weir, H. M.; Pollack, J.; Olsen, L. M.; Major, G. R.
2005-12-01
The docBUILDER tool, created by NASA's Global Change Master Directory (GCMD), assists the scientific community in efficiently creating quality data and services metadata. Metadata authors are asked to complete five required fields to ensure enough information is provided for users to discover the data and related services they seek. After the metadata record is submitted to the GCMD, it is reviewed for semantic and syntactic consistency. Currently, two versions are available - a Web-based tool accessible with most browsers (docBUILDERweb) and a stand-alone desktop application (docBUILDERsolo). The Web version is available through the GCMD website, at http://gcmd.nasa.gov/User/authoring.html. This version has been updated and now offers: personalized templates to ease entering similar information for multiple data sets/services; automatic population of Data Center/Service Provider URLs based on the selected center/provider; three-color support to indicate required, recommended, and optional fields; an editable text window containing the XML record, to allow for quick editing; and improved overall performance and presentation. The docBUILDERsolo version offers the ability to create metadata records on a computer wherever you are. Except for installation and the occasional update of keywords, data/service providers are not required to have an Internet connection. This freedom will allow users with portable computers (Windows, Mac, and Linux) to create records in field campaigns, whether in Antarctica or the Australian Outback. This version also offers a spell-checker, in addition to all of the features found in the Web version.
The BCube Crawler: Web Scale Data and Service Discovery for EarthCube.
NASA Astrophysics Data System (ADS)
Lopez, L. A.; Khalsa, S. J. S.; Duerr, R.; Tayachow, A.; Mingo, E.
2014-12-01
Web-crawling, a core component of the NSF-funded BCube project, is researching and applying the use of big data technologies to find and characterize different types of web services, catalog interfaces, and data feeds, such as ESIP OpenSearch, OGC W*S, THREDDS, and OAI-PMH, that describe or provide access to scientific datasets. Given the scale of the Internet, which challenges even large search providers such as Google, the BCube plan for discovering these web-accessible services is to subdivide the problem into three smaller, more tractable issues: first, to discover likely sites where relevant data and data services might be found; second, to deeply crawl the sites discovered to find any data and services which might be present; and last, to leverage semantic technologies to characterize the services and data found and to filter out everything but those relevant to the geosciences. To address the first two challenges BCube uses an adapted version of Apache Nutch (which originated Hadoop), a web-scale crawler, and Amazon's ElasticMapReduce service for flexibility and cost effectiveness. For characterization of the services found, BCube is examining existing web service ontologies for their applicability to our needs and will re-use and/or extend these in order to query for services with specific well-defined characteristics of scientific datasets, such as the use of geospatial namespaces. The original proposal for the crawler won a grant from Amazon's academic program, which allowed us to become operational; we successfully tested the BCube Crawler at web scale, obtaining a corpus sizeable enough to enable work on characterization of the services and data found. There is still plenty of work to be done: doing "smart crawls" by managing the frontier, developing and enhancing our scoring algorithms, and fully implementing the semantic characterization technologies. We describe the current status of the project, our successes and issues encountered. The final goal of the BCube Crawler project is to provide relevant data services to other projects on the EarthCube stack and third-party partners so they can be brokered and used by a wider scientific community.
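The characterization step can be approximated with simple endpoint heuristics before any semantic analysis is applied; the toy classifier below is our own illustration, not BCube's actual implementation.

```python
import re

# Crude URL heuristics for the service types the crawler looks for.
PATTERNS = {
    "OGC W*S":    re.compile(r"service=(wms|wfs|wcs|wps)", re.I),
    "OpenSearch": re.compile(r"opensearch", re.I),
    "THREDDS":    re.compile(r"/thredds/catalog", re.I),
    "OAI-PMH":    re.compile(r"verb=(Identify|ListRecords)", re.I),
}

def classify(url: str) -> str:
    """Return a coarse service-type label for a crawled URL."""
    for label, pattern in PATTERNS.items():
        if pattern.search(url):
            return label
    return "unknown"

print(classify("http://example.org/thredds/catalog/grids/catalog.xml"))
print(classify("http://example.org/ows?service=WCS&request=GetCapabilities"))
```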
A Web Tool for Research in Nonlinear Optics
NASA Astrophysics Data System (ADS)
Prikhod'ko, Nikolay V.; Abramovsky, Viktor A.; Abramovskaya, Natalia V.; Demichev, Andrey P.; Kryukov, Alexandr P.; Polyakov, Stanislav P.
2016-02-01
This paper presents a project to develop a web platform called WebNLO for computer modeling of nonlinear optics phenomena. We discuss the general scheme of the platform and a model for interaction between the platform modules. The platform is built as a set of interacting RESTful web services (SaaS approach). Users can interact with the platform through a web browser or a command line interface. Such a resource has no analogue in the field of nonlinear optics and is being created for the first time; by giving researchers access to high-performance computing resources it will significantly reduce the cost of the research and development process.
Prototype of Partial Cutting Tool of Geological Map Images Distributed by Geological Web Map Service
NASA Astrophysics Data System (ADS)
Nonogaki, S.; Nemoto, T.
2014-12-01
Geological maps and topographical maps play an important role in disaster assessment, resource management, and environmental preservation. Recently, such map information has been distributed in accordance with web service standards such as Web Map Service (WMS) and Web Map Tile Service (WMTS). In this study, a partial cutting tool for geological map images distributed by a geological WMTS was implemented with Free and Open Source Software. The tool mainly consists of two functions: a display function and a cutting function. The former was implemented using OpenLayers. The latter was implemented using the Geospatial Data Abstraction Library (GDAL). All other small functions were implemented in PHP and Python. As a result, this tool allows not only displaying a WMTS layer in a web browser but also generating a geological map image of the intended area and zoom level. At this moment, available WMTS layers are limited to the ones distributed by the WMTS for the Seamless Digital Geological Map of Japan. The geological map image can be saved in GeoTIFF format and WebGL format. GeoTIFF is a georeferenced raster format available in many kinds of Geographic Information System. WebGL is useful for confirming the relationship between geology and geography in 3D. In conclusion, the partial cutting tool developed in this study will contribute to better conditions for promoting the utilization of geological information. Future work is to increase the number of available WMTS layers and the types of output file formats.
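The cutting function can be reproduced with GDAL's WMTS driver and gdal.Translate, as in the sketch below; the capabilities URL, layer name, and bounding box are placeholders, not the actual Seamless Digital Geological Map identifiers.

```python
from osgeo import gdal

gdal.UseExceptions()

# Open a WMTS layer through GDAL's WMTS driver (placeholder URL and layer).
src = gdal.Open(
    "WMTS:https://gbank.example.jp/seamless/wmts/1.0.0/WMTSCapabilities.xml,"
    "layer=geology"
)

# Cut the intended area (ulx, uly, lrx, lry in the layer's CRS) to GeoTIFF.
gdal.Translate("clip.tif", src,
               projWin=[139.5, 36.0, 140.5, 35.0], format="GTiff")
```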
NASA Astrophysics Data System (ADS)
Gross, Kyle; Hayashi, Soichi; Teige, Scott; Quick, Robert
2012-12-01
Large distributed computing collaborations, such as the Worldwide LHC Computing Grid (WLCG), face many issues when it comes to providing a working grid environment for their users. One of these is exchanging tickets between the various ticketing systems in use by grid collaborations. Ticket systems such as Footprints, RT, Remedy, and ServiceNow all have different schemas that must be addressed in order to provide a reliable exchange of information between support entities and users in different grid environments. To combat this problem, OSG Operations has created a ticket synchronization interface called GOC-TX that relies on web services instead of the error-prone email parsing methods of the past. Synchronizing tickets between different ticketing systems allows any user or support entity to work on a ticket in their home environment, thus providing a familiar and comfortable place to provide updates without having to learn another ticketing system. The interface is built to be generic enough that it can be customized for nearly any ticketing system with a web-service interface with only minor changes. This allows us to be flexible and to rapidly bring new ticket synchronization online. Synchronization can be triggered by different methods including mail, a web services interface, and active messaging. GOC-TX currently interfaces with Global Grid User Support (GGUS) for WLCG, Remedy at Brookhaven National Lab (BNL), and Request Tracker (RT) at the Virtual Data Toolkit (VDT). Work is progressing on the Fermi National Accelerator Laboratory (FNAL) ServiceNow synchronization. This paper will explain the problems faced by OSG and how they led OSG to create and implement this ticket synchronization system, along with the technical details that allow synchronization to be performed at a production level.
QoS Adaptation in Multimedia Multicast Conference Applications for E-Learning Services
ERIC Educational Resources Information Center
Deusdado, Sérgio; Carvalho, Paulo
2006-01-01
The evolution of the World Wide Web service has incorporated new distributed multimedia conference applications, powering a new generation of e-learning development and allowing improved interactivity and prohuman relations. Groupware applications are increasingly representative in the Internet home applications market; however, the Quality of…
Sharing environmental models: An Approach using GitHub repositories and Web Processing Services
NASA Astrophysics Data System (ADS)
Stasch, Christoph; Nuest, Daniel; Pross, Benjamin
2016-04-01
The GLUES (Global Assessment of Land Use Dynamics, Greenhouse Gas Emissions and Ecosystem Services) project established a spatial data infrastructure for scientific geospatial data and metadata (http://geoportal-glues.ufz.de), where different regional collaborative projects researching the impacts of climate and socio-economic changes on sustainable land management can share their underlying base scenarios and datasets. One goal of the project is to ease the sharing of computational models between institutions and to make them easily executable in web-based infrastructures. In this work, we present such an approach for sharing computational models relying on GitHub repositories (http://github.com) and Web Processing Services. First, model providers upload their model implementations to GitHub repositories in order to share them with others. The GitHub platform allows users to submit changes to the model code; the changes can be discussed and reviewed before merging them. However, while GitHub allows sharing of and collaboration on model source code, it does not actually allow running these models, which requires effort to transfer the implementation to a model execution framework. We have therefore extended an existing implementation of the OGC Web Processing Service standard (http://www.opengeospatial.org/standards/wps), the 52°North Web Processing Service (http://52north.org/wps) platform, to retrieve model implementations from a git (http://git-scm.com) repository and add them to the collection of published geoprocesses. The current implementation is restricted to models implemented as R scripts using WPS4R annotations (Hinz et al.) and to Java algorithms using the 52°North WPS Java API. The models hence become executable through a standardized web API by multiple clients such as desktop or browser GIS and modelling frameworks. If the model code is changed on the GitHub platform, the changes are retrieved by the service and the processes are updated accordingly. The admin tool of the 52°North WPS was extended to support automated retrieval and deployment of computational models from GitHub repositories. Once the R code is available in the GitHub repo, the contained process can be deployed and executed by simply defining the GitHub repository URL in the WPS admin tool; a client-side sketch follows below. We illustrate the usage of the approach by sharing and running a model for land use system archetypes developed by the Helmholtz Centre for Environmental Research (UFZ, see Vaclavik et al.). The original R code was extended and published in the 52°North WPS using both public and non-public datasets (Nüst et al., see also https://github.com/52North/glues-wps). Hosting the analysis in a Git repository now allows WPS administrators, client developers, and modelers to easily work together on new versions or completely new web processes using the powerful GitHub collaboration platform. References: Hinz, M. et al. (2013): Spatial Statistics on the Geospatial Web. In: The 16th AGILE International Conference on Geographic Information Science, Short Papers. http://www.agile-online.org/Conference_Paper/CDs/agile_2013/Short_Papers/SP_S3.1_Hinz.pdf Nüst, D. et al. (2015): Open and reproducible global land use classification. In: EGU General Assembly Conference Abstracts. Vol. 17. European Geophysical Union, 2015, p. 9125, http://meetingorganizer.copernicus.org/EGU2015/EGU2015-9125.pdf Vaclavik, T., et al. (2013): Mapping global land system archetypes. Global Environmental Change 23(6): 1637-1647.
Available online: October 9, 2013. DOI: 10.1016/j.gloenvcha.2013.09.004
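Once a GitHub-hosted script is deployed, clients execute it like any other WPS process; a minimal sketch with OWSLib follows, with a placeholder service URL, process identifier, and input name.

```python
from owslib.wps import WebProcessingService

# Placeholder 52°North WPS endpoint hosting the deployed model.
wps = WebProcessingService(
    "https://wps.example.org/wps/WebProcessingService")

# Execute a (hypothetical) land-use archetype process with one literal input.
execution = wps.execute(
    "org.example.LandUseArchetypes",
    inputs=[("region", "global")],
)

# Poll the asynchronous execution until the server reports completion.
execution.checkStatus(sleepSecs=10)
print(execution.status)
```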
Globus: Service and Platform for Research Data Lifecycle Management
NASA Astrophysics Data System (ADS)
Ananthakrishnan, R.; Foster, I.
2017-12-01
Globus offers a range of data management capabilities to the community as hosted services, encompassing data transfer and sharing, user identity and authorization, and data publication. Globus capabilities are accessible via both a web browser and REST APIs. Web access allows researchers to use Globus capabilities through a software-as-a-service model; and the REST APIs address the needs of developers of research services, who can now use Globus as a platform, outsourcing complex user and data management tasks to Globus services. In this presentation, we review Globus capabilities and outline how it is being applied as a platform for scientific services, and highlight work done to link computational analysis flows to the underlying data through an interactive Jupyter notebook environment to promote immediate data usability, reusability of these flows by other researchers, and future analysis extensibility.
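Using Globus as a platform reduces a managed transfer to a few SDK calls; a minimal sketch with the globus-sdk Python package, assuming a valid access token and endpoint UUIDs obtained out of band.

```python
import globus_sdk

# Access token and endpoint UUIDs are assumed to be obtained elsewhere.
TOKEN = "..."
SRC, DST = "source-endpoint-uuid", "dest-endpoint-uuid"

tc = globus_sdk.TransferClient(
    authorizer=globus_sdk.AccessTokenAuthorizer(TOKEN))

# Describe the transfer task and submit it to the hosted service.
task_data = globus_sdk.TransferData(tc, SRC, DST, label="analysis outputs")
task_data.add_item("/results/run42.nc", "/archive/run42.nc")
task = tc.submit_transfer(task_data)
print("task id:", task["task_id"])
```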
Troshin, Peter V; Procter, James B; Sherstnev, Alexander; Barton, Daniel L; Madeira, Fábio; Barton, Geoffrey J
2018-06-01
JABAWS 2.2 is a computational framework that simplifies the deployment of web services for Bioinformatics. In addition to the five multiple sequence alignment (MSA) algorithms in JABAWS 1.0, JABAWS 2.2 includes three additional MSA programs (Clustal Omega, MSAprobs, GLprobs), four protein disorder prediction methods (DisEMBL, IUPred, Ronn, GlobPlot), 18 measures of protein conservation as implemented in AACon, and RNA secondary structure prediction by the RNAalifold program. JABAWS 2.2 can be deployed on a variety of in-house or hosted systems. JABAWS 2.2 web services may be accessed from the Jalview multiple sequence analysis workbench (Version 2.8 and later), as well as directly via the JABAWS command line interface (CLI) client. JABAWS 2.2 can be deployed on a local virtual server as a Virtual Appliance (VA) or simply as a Web Application Archive (WAR) for private use. Improvements in JABAWS 2.2 also include simplified installation and a range of utility tools for usage statistics collection, and web services querying and monitoring. The JABAWS CLI client has been updated to support all the new services and allow integration of JABAWS 2.2 services into conventional scripts. A public JABAWS 2 server has been in production since December 2011 and has served over 800,000 analyses for users worldwide. JABAWS 2.2 is made freely available under the Apache 2 license and can be obtained from: http://www.compbio.dundee.ac.uk/jabaws. Contact: g.j.barton@dundee.ac.uk.
EnviroAtlas - Metrics for Austin, TX
This EnviroAtlas web service supports research and online mapping activities related to EnviroAtlas (https://enviroatlas.epa.gov/EnviroAtlas). The layers in this web service depict ecosystem services at the census block group level for the community of Austin, Texas. These layers illustrate the ecosystems and natural resources that are associated with clean air (https://enviroatlas.epa.gov/arcgis/rest/services/Communities/ESC_ATX_CleanAir/MapServer); clean and plentiful water (https://enviroatlas.epa.gov/arcgis/rest/services/Communities/ESC_ATX_CleanPlentifulWater/MapServer); natural hazard mitigation (https://enviroatlas.epa.gov/arcgis/rest/services/Communities/ESC_ATX_NaturalHazardMitigation/MapServer); climate stabilization (https://enviroatlas.epa.gov/arcgis/rest/services/Communities/ESC_ATX_ClimateStabilization/MapServer); food, fuel, and materials (https://enviroatlas.epa.gov/arcgis/rest/services/Communities/ESC_ATX_FoodFuelMaterials/MapServer); recreation, culture, and aesthetics (https://enviroatlas.epa.gov/arcgis/rest/services/Communities/ESC_ATX_RecreationCultureAesthetics/MapServer); and biodiversity conservation (https://enviroatlas.epa.gov/arcgis/rest/services/Communities/ESC_ATX_BiodiversityConservation/MapServer), and factors that place stress on those resources. EnviroAtlas allows the user to interact with a web-based, easy-to-use, mapping application to view and analyze multiple ecosystem services for the conterminous United States as well as de
Sensor web enablement in a network of low-energy, low-budget amateur weather stations
NASA Astrophysics Data System (ADS)
Herrnkind, S.; Klump, J.; Schmidt, G.
2009-04-01
Sensor Web Enablement (OGC SWE) has developed into a powerful concept with many potential applications in environmental monitoring and in other fields. This has spurred development of software applications for Sensor Observation Services (SOS), while the development of client applications still lags behind. Furthermore, the deployment of sensors in the field often places tight constraints on the energy and bandwidth available for data capture and transmission. As a "proof of concept" we equipped amateur weather stations with low-budget, standard components to read the data from the base station and feed the weather observation data into the sensor observation service using its standard web-service interface. We chose an amateur weather station as an example because of the simplicity of the measured phenomena and the low data volume. As sensor observation service we chose the open source software package offered by the 52°North consortium. Furthermore, we investigated registry services for sensors and measured phenomena. When deploying a sensor platform in the field, power consumption can be an issue. Instead of common PCs we used Network Storage Link Units (NSLU2) with a Linux operating system, also known as "Debian SLUG". The power consumption of a "SLUG" is of the order of 1 W, compared to 40 W in a small PC. The "SLUG" provides one Ethernet and two USB ports, one of which is used by its external USB hard drive. This modular set-up is open to modifications, for example the addition of a GSM modem for data transmission over a cellular telephone network. The simple set-up, low price, low power consumption, and the low technological entry level allow many potential uses of a "SLUG" in environmental sensor networks in research, education and citizen science. The use of mature sensor observation service software allows an easy integration of monitoring networks with other web services.
Wang, Likun; Yang, Luhe; Peng, Zuohan; Lu, Dan; Jin, Yan; McNutt, Michael; Yin, Yuxin
2015-01-01
With the burgeoning development of cloud technology and services, there are an increasing number of users who prefer the cloud to run their applications. All software and associated data are hosted on the cloud, allowing users to access them via a web browser from any computer, anywhere. This paper presents cisPath, an R/Bioconductor package deployed on cloud servers for client users to visualize, manage, and share functional protein interaction networks. With this R package, users can easily integrate downloaded protein-protein interaction information from different online databases with private data to construct new and personalized interaction networks. Additional functions allow users to generate specific networks based on private databases. Since the results produced with the use of this package are in the form of web pages, cloud users can easily view and edit the network graphs via the browser, using a mouse or touch screen, without the need to download them to a local computer. This package can also be installed and run on a local desktop computer. Depending on user preference, results can be publicized or shared by uploading to a web server or cloud drive, allowing other users to directly access results via a web browser. This package can be installed and run on a variety of platforms. Since all network views are shown in web pages, this package is particularly useful for cloud users. The easy installation and operation is an attractive quality for R beginners and users with no previous experience with cloud services.
EarthServer - an FP7 project to enable the web delivery and analysis of 3D/4D models
NASA Astrophysics Data System (ADS)
Laxton, John; Sen, Marcus; Passmore, James
2013-04-01
EarthServer aims at open access and ad-hoc analytics on big Earth Science data, based on the OGC geoservice standards Web Coverage Service (WCS) and Web Coverage Processing Service (WCPS). The WCS model defines "coverages" as a unifying paradigm for multi-dimensional raster data, point clouds, meshes, etc., thereby addressing a wide range of Earth Science data including 3D/4D models. WCPS allows declarative SQL-style queries on coverages. The project is developing a pilot implementing these standards, and will also investigate the use of GeoSciML to describe coverages. Integration of WCPS with XQuery will in turn allow coverages to be queried in combination with their metadata and GeoSciML description. The unified service will support navigation, extraction, aggregation, and ad-hoc analysis on coverage data via SQL-style queries. Clients will range from mobile devices to high-end immersive virtual reality, and will enable 3D model visualisation using web browser technology coupled with developing web standards. EarthServer is establishing open-source client and server technology intended to be scalable to Petabyte/Exabyte volumes, based on distributed processing, supercomputing, and cloud virtualization. Implementation will be based on the existing rasdaman server technology. Services using rasdaman technology are being installed serving the atmospheric, oceanographic, geological, cryospheric, planetary and general earth observation communities. The geology service (http://earthserver.bgs.ac.uk/) is being provided by BGS and at present includes satellite imagery, superficial thickness data, onshore DTMs and 3D models for the Glasgow area. It is intended to extend the available data sets to include 3D voxel models. Use of the WCPS standard allows queries to be constructed against single or multiple coverages. For example, on a single coverage, data for a particular area or data with a particular range of pixel values can be selected. Queries on multiple surfaces can be constructed to calculate, for example, the thickness between two surfaces in a 3D model or the depth from the ground surface to the top of a particular geologic unit; a sketch of such a query follows below. In the first version of the service a simple interface showing some example queries has been implemented in order to show the potential of the technologies. The project aims to develop the services in light of user feedback, both in terms of the data available, the functionality, and the interface. User feedback on the services guides the software and standards development aspects of the project, leading to enhanced versions of the software which will be implemented in upgraded versions of the services during the lifetime of the project.
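A WCPS thickness query of the kind described can be submitted over HTTP; in the sketch below the endpoint, coverage names, and request parameters are illustrative assumptions, not the actual BGS service identifiers.

```python
import requests

ENDPOINT = "http://earthserver.example.org/rasdaman/ows"  # illustrative

# WCPS: subtract two surface coverages to get a thickness grid, returned
# as a GeoTIFF for the chosen bounding box (coverage names are made up).
query = """
for top in (TopRockhead), base in (BaseRockhead)
return encode(
    (top - base)[Lat(55.80:55.90), Long(-4.35:-4.20)],
    "image/tiff")
"""

resp = requests.post(ENDPOINT, data={
    "service": "WCS", "version": "2.0.1",
    "request": "ProcessCoverages", "query": query,
})
open("thickness.tif", "wb").write(resp.content)
```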
Evolution of System Architectures: Where Do We Need to Fail Next?
NASA Astrophysics Data System (ADS)
Bermudez, Luis; Alameh, Nadine; Percivall, George
2013-04-01
Innovation requires testing and failing. Thomas Edison was right when he said "I have not failed. I've just found 10,000 ways that won't work". For innovation and improvement of standards to happen, service architectures have to be tested again and again. Within the Open Geospatial Consortium (OGC), testing of service architectures has occurred for the last 15 years. This talk will present the evolution of these service architectures and a possible future path. OGC is a global forum for the collaboration of developers and users of spatial data products and services, and for the advancement and development of international standards for geospatial interoperability. The OGC Interoperability Program is a series of hands-on, fast-paced engineering initiatives to accelerate the development and acceptance of OGC standards. Each initiative is organized in threads that provide focus under a particular theme. The first testbed, OGC Web Services phase 1, completed in 2003, had four threads: Common Architecture, Web Mapping, Sensor Web and Web Imagery Enablement. The Common Architecture was a cross-thread theme, to ensure that the Web Mapping and Sensor Web experiments built on a common base architecture. The architecture was based on the three main SOA components: broker, requestor and provider. It proposed a general service model defining service interactions and dependencies; a categorization of service types; registries to allow discovery and access of services; data models and encodings; and common services (WMS, WFS, WCS). For the latter, there was a clear distinction between the different service types: data services (e.g., WMS), application services (e.g., coordinate transformation) and server-side client applications (e.g., image exploitation). The latest testbed, OGC Web Services phase 9, completed in 2012, had 5 threads: Aviation, Cross-Community Interoperability (CCI), Security and Services Interoperability (SSI), OWS Innovations and Compliance & Interoperability Testing & Evaluation (CITE). Compared to the first testbed, OWS-9 did not have a separate common architecture thread. Instead the emphasis was on brokering information models, securing them and making data available efficiently on mobile devices. The outcome is an architecture based on usability and non-intrusiveness while leveraging mediation of information models from different communities. This talk will use lessons learned from the evolution from OGC Testbed phase 1 to phase 9 to better understand how global and complex infrastructures evolve to support many communities, including the Earth System Science community.
Free Factories: Unified Infrastructure for Data Intensive Web Services
Zaranek, Alexander Wait; Clegg, Tom; Vandewege, Ward; Church, George M.
2010-01-01
We introduce the Free Factory, a platform for deploying data-intensive web services using small clusters of commodity hardware and free software. Independently administered virtual machines called Freegols give application developers the flexibility of a general purpose web server, along with access to distributed batch processing, cache and storage services. Each cluster exploits idle RAM and disk space for cache, and reserves disks in each node for high bandwidth storage. The batch processing service uses a variation of the MapReduce model. Virtualization allows every CPU in the cluster to participate in batch jobs. Each 48-node cluster can achieve 4-8 gigabytes per second of disk I/O. Our intent is to use multiple clusters to process hundreds of simultaneous requests on multi-hundred terabyte data sets. Currently, our applications achieve 1 gigabyte per second of I/O with 123 disks by scheduling batch jobs on two clusters, one of which is located in a remote data center. PMID:20514356
Operational Use of OGC Web Services at the Met Office
NASA Astrophysics Data System (ADS)
Wright, Bruce
2010-05-01
The Met Office has adopted the Service-Orientated Architecture paradigm to deliver services to a range of customers through Rich Internet Applications (RIAs). The approach uses standard Open Geospatial Consortium (OGC) web services to provide information to web-based applications through a range of generic data services. "Invent", the Met Office beta site, is used to showcase the Met Office's future plans for presenting web-based weather forecasts, products and information to the public. This currently hosts a freely accessible Weather Map Viewer, written in JavaScript, which accesses a Web Map Service (WMS) to deliver innovative web-based visualizations of weather and its potential impacts to the public. The intention is to engage the public in the development of new web-based services that more accurately meet their needs. As the service is intended for public use within the UK, it has been designed to support a user base of 5 million, the analysed level of UK web traffic reaching the Met Office's public weather information site. The required scalability has been realised through the use of multi-tier tile caching: WMS requests are made for 256x256 tiles for fixed areas and zoom levels; a Tile Cache, developed in house, efficiently serves tiles on demand, managing WMS requests for new tiles; Edge Servers, externally hosted by Akamai, provide a highly scalable (UK-centric) service for pre-cached tiles, passing new requests to the Tile Cache; and the Invent Weather Map Viewer uses the Google Maps API to request tiles from the Edge Servers. (We would expect to make use of the Web Map Tiling Service when it becomes an OGC standard.) The Met Office delivers specialist commercial products to market sectors such as transport, utilities and defence, which exploit a Web Feature Service (WFS) for data relating forecasts and observations to specific geographic features, and a Web Coverage Service (WCS) for sub-selections of gridded data. These are locally rendered as maps or graphs, and combined with the WMS pre-rendered images and text in a FLEX application, to provide a sophisticated, user-impact-based view of the weather. The OGC web services supporting these applications have been developed in collaboration with commercial companies. Visual Weather was originally a desktop application for forecasters, but IBL have developed it to expose the full range of forecast and observation data through standard web services (WCS and WMS). Forecasts and observations relating to specific locations and geographic features are held in an Oracle database, and exposed as a WFS using Snowflake Software's GO-Publisher application. The Met Office has worked closely with both IBL and Snowflake Software to ensure that the web services provided strike a balance between conformance to the standards and performance in an operational environment. This has proved challenging in areas where the standards are rapidly evolving (e.g. WCS) or do not allow adequate description of the Met-Ocean domain (e.g. multiple time coordinates and parametric vertical coordinates). It has also become clear that careful selection of the features to expose, based on the way in which you expect users to query those features, is necessary in order to deliver adequate performance. These experiences are providing useful 'real-world' input into the recently launched OGC MetOcean Domain Working Group and World Meteorological Organisation (WMO) initiatives in this area.
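As an illustration of the tiling scheme described above, the sketch below requests a single 256x256 tile with a plain WMS 1.3.0 GetMap call. The endpoint and layer name are hypothetical, but the key-value parameters are the standard ones a tile cache would see.

```python
import requests

# Hypothetical WMS endpoint and layer; parameters follow WMS 1.3.0 KVP.
params = {
    "SERVICE": "WMS",
    "VERSION": "1.3.0",
    "REQUEST": "GetMap",
    "LAYERS": "surface_temperature",    # hypothetical layer name
    "STYLES": "",
    "CRS": "EPSG:3857",
    "BBOX": "-20037508,-20037508,0,0",  # fixed tile extent at a fixed zoom
    "WIDTH": 256,
    "HEIGHT": 256,
    "FORMAT": "image/png",
    "TRANSPARENT": "TRUE",
}

tile = requests.get("https://wms.example.metoffice.gov.uk/wms",
                    params=params, timeout=30)
tile.raise_for_status()
with open("tile.png", "wb") as f:
    f.write(tile.content)  # one pre-renderable, cacheable map tile
```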
Struct2Net: a web service to predict protein–protein interactions using a structure-based approach
Singh, Rohit; Park, Daniel; Xu, Jinbo; Hosur, Raghavendra; Berger, Bonnie
2010-01-01
Struct2Net is a web server for predicting interactions between arbitrary protein pairs using a structure-based approach. Prediction of protein–protein interactions (PPIs) is a central area of interest and successful prediction would provide leads for experiments and drug design; however, the experimental coverage of the PPI interactome remains inadequate. We believe that Struct2Net is the first community-wide resource to provide structure-based PPI predictions that go beyond homology modeling. Also, most web-resources for predicting PPIs currently rely on functional genomic data (e.g. GO annotation, gene expression, cellular localization, etc.). Our structure-based approach is independent of such methods and only requires the sequence information of the proteins being queried. The web service allows multiple querying options, aimed at maximizing flexibility. For the most commonly studied organisms (fly, human and yeast), predictions have been pre-computed and can be retrieved almost instantaneously. For proteins from other species, users have the option of getting a quick-but-approximate result (using orthology over pre-computed results) or having a full-blown computation performed. The web service is freely available at http://struct2net.csail.mit.edu. PMID:20513650
Use of Open Standards and Technologies at the Lunar Mapping and Modeling Project
NASA Astrophysics Data System (ADS)
Law, E.; Malhotra, S.; Bui, B.; Chang, G.; Goodale, C. E.; Ramirez, P.; Kim, R. M.; Sadaqathulla, S.; Rodriguez, L.
2011-12-01
The Lunar Mapping and Modeling Project (LMMP), led by the Marshall Space Flight Center (MSFC), is tasked by NASA with developing an information system to support lunar exploration activities. It provides lunar explorers a set of tools and lunar map and model products that are predominantly derived from present lunar missions (e.g., the Lunar Reconnaissance Orbiter (LRO)) and from historical missions (e.g., Apollo). At the Jet Propulsion Laboratory (JPL), we have built the LMMP interoperable geospatial information system's underlying infrastructure and a single point of entry, the LMMP Portal, by employing a number of open standards and technologies. The Portal exposes a set of services that allow users to search, visualize, subset, and download lunar data managed by the system. Users also have access to a set of tools that visualize, analyze and annotate the data. The infrastructure and Portal are based on a web service-oriented architecture. We designed the system to support solar system bodies in general, including asteroids, the Earth and the planets. We employed a combination of custom software, commercial and open-source components, off-the-shelf hardware and pay-by-use cloud computing services. The use of open standards and web service interfaces facilitates platform- and application-independent access to the services and data, offering, for instance, iPad and Android mobile applications and large-screen multi-touch displays with 3-D terrain viewing functions, for a rich browsing and analysis experience from a variety of platforms. The web services make use of open standards including Representational State Transfer (REST) and the Open Geospatial Consortium (OGC) Web Map Service (WMS), Web Coverage Service (WCS) and Web Feature Service (WFS). Its data management services have been built on top of a set of open technologies including: Object Oriented Data Technology (OODT), an open source data catalog, archive, file management and data grid framework; OpenSSO, an open source access management and federation platform; Solr, an open source enterprise search platform; Redmine, an open source project collaboration and management framework; GDAL, an open source geospatial data abstraction library; and others. Its data products are compliant with the Federal Geographic Data Committee (FGDC) metadata standard. This standardization allows users to access the data products via custom-written applications or off-the-shelf applications such as Google Earth. We will demonstrate this ready-to-use system for data discovery and visualization by walking through the data services provided through the portal, such as browse, search, and other tools. We will further demonstrate image viewing and layering of lunar map images from the Internet via mobile devices such as Apple's iPad.
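A hedged example of the standards-based access such a portal enables: any OGC-compliant client can begin with a GetCapabilities call to discover the available map layers. The endpoint below is a hypothetical stand-in for an LMMP map service.

```python
import requests

# Hypothetical LMMP WMS endpoint; GetCapabilities is the standard OGC
# entry point for discovering available lunar map layers.
params = {"SERVICE": "WMS", "REQUEST": "GetCapabilities", "VERSION": "1.3.0"}
caps = requests.get("https://lmmp.example.nasa.gov/wms",
                    params=params, timeout=30)
caps.raise_for_status()
print(caps.text[:500])  # XML capabilities document listing layers
```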
NaviCell Web Service for network-based data visualization.
Bonnet, Eric; Viara, Eric; Kuperstein, Inna; Calzone, Laurence; Cohen, David P A; Barillot, Emmanuel; Zinovyev, Andrei
2015-07-01
Data visualization is an essential element of biological research, required for obtaining insights and formulating new hypotheses on mechanisms of health and disease. NaviCell Web Service is a tool for network-based visualization of 'omics' data which implements several data visual representation methods and utilities for combining them. NaviCell Web Service uses Google Maps and semantic zooming to browse large biological network maps, represented in various formats, together with different types of molecular data mapped on top of them. To achieve this, the tool provides standard heatmaps, barplots and glyphs, as well as the novel map staining technique for grasping large-scale trends in numerical values (such as a whole transcriptome) projected onto a pathway map. The web service provides a server mode, which allows automating visualization tasks and retrieving data from maps via RESTful (standard HTTP) calls. Bindings to different programming languages are provided (Python and R). We illustrate the purpose of the tool with several case studies using pathway maps created by different research groups, in which data visualization provides new insights into molecular mechanisms involved in systemic diseases such as cancer and neurodegenerative diseases.
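The server mode lends itself to scripting; the following Python sketch indicates the flavour of such automation. The endpoint, command and map names are hypothetical, not the actual NaviCell API, whose details are wrapped by the project's own Python and R bindings.

```python
import requests

SERVER = "https://navicell.example.org/api"  # hypothetical server-mode URL

# Hypothetical call: upload an expression table and render it with the
# map staining technique described in the abstract.
with open("expression.tsv") as f:
    table = f.read()

resp = requests.post(SERVER, data={
    "command": "import_datatable",  # hypothetical command name
    "map": "master_map",            # hypothetical map identifier
    "display": "map_staining",      # hypothetical display mode
    "datatable": table,
})
resp.raise_for_status()
```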
A RESTful interface to pseudonymization services in modern web applications.
Lablans, Martin; Borg, Andreas; Ückert, Frank
2015-02-07
Medical research networks rely on record linkage and pseudonymization to determine which records from different sources relate to the same patient. To establish informational separation of powers, the required identifying data are redirected to a trusted third party that has, in turn, no access to medical data. This pseudonymization service receives identifying data, compares them with a list of already reported patient records and replies with a (new or existing) pseudonym. We found existing solutions to be technically outdated, complex to implement or not suitable for internet-based research infrastructures. In this article, we propose a new RESTful pseudonymization interface tailored for use in web applications accessed by modern web browsers. The interface is modelled as a resource-oriented architecture, which is based on the representational state transfer (REST) architectural style. We translated typical use cases into resources to be manipulated with well-known HTTP verbs. Patients can be re-identified in real time by authorized users' web browsers using temporary identifiers. We encourage the use of PID strings for pseudonyms and the EpiLink algorithm for record linkage. As a proof of concept, we developed a Java Servlet as reference implementation. The following resources have been identified: sessions allow data associated with a client to be stored beyond a single request while still maintaining statelessness; tokens authorize a specified action and thus allow the delegation of authentication; patients are identified by one or more pseudonyms and carry identifying fields. Relying on HTTP calls alone, the interface is firewall-friendly. The reference implementation has proven to be production stable. The RESTful pseudonymization interface fits the requirements of web-based scenarios and, using ordinary web technology, allows building applications that make pseudonymization transparent to the user. The open-source reference implementation provides the web interface as well as a scientifically grounded algorithm to generate non-speaking pseudonyms.
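A minimal sketch of the resource flow described above (sessions, tokens, patients); the paths, field names and response shapes are hypothetical illustrations of the pattern, not the interface's actual definitions.

```python
import requests

BASE = "https://pseudonymizer.example.org"  # hypothetical deployment

# 1) Create a session: client-associated state beyond a single request.
session = requests.post(BASE + "/sessions")
session_uri = session.headers["Location"]  # URI of the new session resource

# 2) Obtain a token authorizing exactly one 'add patient' action; the token
#    can be handed to a browser, delegating authentication to the server.
token = requests.post(session_uri + "/tokens",
                      json={"type": "addPatient"}).json()["tokenId"]

# 3) Submit identifying data; the reply carries a new or existing pseudonym.
reply = requests.post(BASE + "/patients", params={"tokenId": token},
                      data={"firstName": "Jane", "lastName": "Doe",
                            "dateOfBirth": "1980-02-01"})  # hypothetical fields
print(reply.json())  # e.g. {"pseudonym": "ABC1234X"}
```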
Automated X-ray and Optical Analysis of the Virtual Observatory and Grid Computing
NASA Technical Reports Server (NTRS)
Ptak, A.; Krughoff, S.; Connolly, A.
2011-01-01
We are developing a system to combine the Web Enabled Source Identification with X-Matching (WESIX) web service, which emphasizes source detection on optical images, with the XAssist program that automates the analysis of X-ray data. XAssist is continuously processing archival X-ray data in several pipelines. We have established a workflow in which FITS images and/or (in the case of X-ray data) an X-ray field can be input to WESIX. Intelligent services return available data (if requested fields have been processed) or submit job requests to a queue to be performed asynchronously. These services will be available via web services (for non-interactive use by Virtual Observatory portals and applications) and through web applications (written in the Django web application framework). We are adding web services for specific XAssist functionality, such as determining the exposure and limiting flux for a given position on the sky and extracting spectra and images for a given region. We are improving the queuing system in XAssist to allow "watch lists" to be specified by users; when X-ray fields in a user's watch list become publicly available they will be automatically added to the queue. XAssist is being expanded to be used as a survey planning tool when coupled with simulation software, including functionality for NuSTAR, eROSITA, IXO, and the Wide Field X-ray Telescope (WFXT), as part of an end-to-end simulation/analysis system. We are also investigating the possibility of a dedicated iPhone/iPad app for querying pipeline data, requesting processing, and administrative job control.
Global Federation of Data Services in Seismology: Extending the Concept to Interdisciplinary Science
NASA Astrophysics Data System (ADS)
Ahern, T. K.; Trabant, C. M.; Stults, M.; Van Fossen, M.
2015-12-01
The International Federation of Digital Seismograph Networks (FDSN) sets international standards, formats, and access protocols for global seismology. Recently the availability of an FDSN standard for web services has enabled the development of a federated model of data access. With a growing number of internationally distributed data centers supporting identical web services the task of federation is now fully realizable. This presentation will highlight the advances the seismological community has made in the past year towards federated access to seismological data including waveforms, earthquake event catalogs, and metadata describing seismic stations. As part of the NSF EarthCube project, IRIS and its partners have been extending the concept of standard web services to other domains. Our primary partners include Lamont Doherty Earth Observatory (marine geophysics), Caltech (tectonic plate reconstructions), SDSC (hydrology), UNAVCO (geodesy), and Unidata (atmospheric sciences). Additionally IRIS is working with partners at NOAA's NGDC, NEON, UTEP, WOVODAT, Intermagnet, Global Geodynamics Program, and the Ocean Observatory Initiative (OOI) to develop web services for those domains. The ultimate goal is to allow discovery, access, and utilization of cross-domain data sources. IRIS and a variety of US and European partners have been involved in the Cooperation between Europe and the US (CoopEUS) project where interdisciplinary data integration is a key topic.
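Because the FDSN web service interfaces are standardized, the same client code can target any compliant data centre by switching a base URL, which is what makes the federated model workable. A minimal sketch, assuming the ObsPy library:

```python
from obspy import UTCDateTime
from obspy.clients.fdsn import Client

# The same code works against any FDSN-compliant centre ("IRIS" here);
# swapping the name or base URL is all that federation requires of a client.
client = Client("IRIS")
t0 = UTCDateTime("2015-01-01T00:00:00")

# Station metadata down to channel level ...
inventory = client.get_stations(network="IU", station="ANMO", level="channel")
# ... and waveforms, through the same standardized interface.
stream = client.get_waveforms("IU", "ANMO", "00", "BHZ", t0, t0 + 600)
print(inventory)
print(stream)
```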
A Web-based Visualization System for Three Dimensional Geological Model using Open GIS
NASA Astrophysics Data System (ADS)
Nemoto, T.; Masumoto, S.; Nonogaki, S.
2017-12-01
A three-dimensional geological model is important information in various fields such as environmental assessment, urban planning, resource development, waste management and disaster mitigation. In this study, we have developed a web-based visualization system for 3D geological models using free and open source software. The system has been successfully implemented by integrating the web mapping engine MapServer and the geographic information system GRASS. MapServer plays the role of mapping horizontal cross sections of the 3D geological model and a topographic map. GRASS provides the core components for management, analysis and image processing of the geological model. Online access to GRASS functions has been enabled using PyWPS, an implementation of the Open Geospatial Consortium (OGC) Web Processing Service (WPS) standard. The system has two main functions. The two-dimensional visualization function allows users to generate horizontal and vertical cross sections of the 3D geological model. These images are delivered via the OGC WMS (Web Map Service) and WPS standards. Horizontal cross sections are overlaid on the topographic map. A vertical cross section is generated by clicking a start point and an end point on the map. The three-dimensional visualization function allows users to visualize geological boundary surfaces and a panel diagram, viewing them from various angles by mouse operation. WebGL is utilized for 3D visualization; WebGL is a web technology that brings hardware-accelerated 3D graphics to the browser without installing additional software. The geological boundary surfaces can be downloaded so that the geological structure can be incorporated into CAD designs and models for various simulations. This study was supported by JSPS KAKENHI Grant Number JP16K00158.
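To indicate how such PyWPS-backed functions might be invoked, here is a sketch of an OGC WPS 1.0.0 key-value Execute request; the process identifier, input names and coordinates are hypothetical, but the KVP encoding follows the WPS specification that PyWPS implements.

```python
import requests

# Hypothetical process and parameter names; DataInputs uses the standard
# WPS 1.0.0 "key=value;key=value" encoding.
params = {
    "service": "WPS",
    "version": "1.0.0",
    "request": "Execute",
    "identifier": "vertical_cross_section",                 # hypothetical
    "datainputs": "x1=139.70;y1=35.60;x2=139.80;y2=35.70",  # start/end points
}
resp = requests.get("https://geomodel.example.org/wps",
                    params=params, timeout=60)
resp.raise_for_status()
print(resp.text[:500])  # Execute response referencing the section image
```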
Cross-Platform User Interface of E-Learning Applications
ERIC Educational Resources Information Center
Stoces, Michal; Masner, Jan; Jarolímek, Jan; Šimek, Pavel; Vanek, Jirí; Ulman, Miloš
2015-01-01
The paper discusses the development of Web educational services for specific groups. A key feature is to allow the display and use of educational materials and training services on the widest possible set of different devices, especially in the browsers of classic desktop computers, notebooks, tablets and mobile phones, and also on different readers for…
Enabling On-Demand Database Computing with MIT SuperCloud Database Management System
2015-09-15
arc.liv.ac.uk/trac/SGE) provides these services and is independent of programming language (C, Fortran, Java, Matlab, etc.) or parallel programming...a MySQL database to store DNS records. The DNS records are controlled via a simple web service interface that allows records to be created
Cervone, Maria A; Savel, Thomas G
2006-01-01
The National Center on Birth Defects and Developmental Disabilities (NCBDDD) at the Centers for Disease Control and Prevention (CDC) sought to establish a database to proactively manage their partner relationships with external organizations. A user needs analysis was conducted, and CDC's Public Health Information Network Directory (PHINDIR) was evaluated as a possible solution. PHINDIR could sufficiently maintain contact information but did not address customer relationships; however, its flexible architecture allows add-on applications via web services. Thus, NCBDDD's needs could be met via PHINDIR.
Comprehensive multiplatform collaboration
NASA Astrophysics Data System (ADS)
Singh, Kundan; Wu, Xiaotao; Lennox, Jonathan; Schulzrinne, Henning G.
2003-12-01
We describe the architecture and implementation of our comprehensive multi-platform collaboration framework known as Columbia InterNet Extensible Multimedia Architecture (CINEMA). It provides a distributed architecture for collaboration using synchronous communications like multimedia conferencing, instant messaging and shared web-browsing, and asynchronous communications like discussion forums, shared files, and voice and video mails. It allows seamless integration with various communication means like telephones, IP phones, web and electronic mail. In addition, it provides value-added services such as call handling based on location information and presence status. The paper discusses the media services needed for a collaborative environment, the components provided by CINEMA and the interaction among those components.
NASA Astrophysics Data System (ADS)
Blodgett, D. L.; Booth, N.; Walker, J.; Kunicki, T.
2012-12-01
The U.S. Geological Survey Center for Integrated Data Analytics (CIDA), in keeping with the President's Digital Government Strategy and the Department of the Interior's IT Transformation initiative, has evolved its data center and application architecture toward the "cloud" paradigm. In this case, "cloud" refers to a goal of developing services that may be distributed to infrastructure anywhere on the Internet. This transition has taken place across the entire data management spectrum, from data center location to physical hardware configuration to software design and implementation. In CIDA's case, physical hardware resides in Madison at the Wisconsin Water Science Center, in South Dakota at the Earth Resources Observation and Science Center (EROS), and in the near future at a DOI-approved commercial vendor. Tasks normally conducted in desktop GIS software with local copies of data in proprietary formats are now done using browser-based interfaces to web processing services drawing on a network of standard data-source web services. Organizations are gaining economies of scale through data center consolidation and the creation of private cloud services, as well as taking advantage of the commoditization of data processing services. Leveraging open standards for data and data management takes advantage of this commoditization and provides the means to reliably build distributed, service-based systems. This presentation will use CIDA's experience as an illustration of the benefits and hurdles of moving to the cloud. Replicating, reformatting, and processing large data sets, such as downscaled climate projections, traditionally present a substantial challenge to environmental science researchers who need access to data subsets and derived products. The USGS Geo Data Portal (GDP) project uses cloud concepts to help earth system scientists access subsets, spatial summaries, and derivatives of commonly needed, very large datasets. The GDP project has developed a reusable architecture and advanced processing services that currently access archives hosted at Lawrence Livermore National Lab, Oregon State University, the University Corporation for Atmospheric Research, and the U.S. Geological Survey, among others. Several examples of how the GDP project uses cloud concepts will be highlighted in this presentation: 1) The high-bandwidth network connectivity of large data centers reduces the need for data replication and storage local to processing services. 2) Standard data-serving web services, like OPeNDAP, Web Coverage Services, and Web Feature Services, allow GDP services to remotely access custom subsets of data in a variety of formats, further reducing the need for data replication and reformatting. 3) The GDP services use standard web service APIs to allow browser-based user interfaces to run complex and compute-intensive processes for users from any computer with an Internet connection. The combination of physical infrastructure and application architecture implemented for the Geo Data Portal project offers an operational example of how distributed data and processing on the cloud can be used to aid earth system science.
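Point 2 above is easy to demonstrate: with an OPeNDAP-served dataset, a client reads only the requested slice rather than downloading the whole archive. A minimal sketch with a hypothetical URL and variable name, assuming a netCDF4 build with OPeNDAP support:

```python
from netCDF4 import Dataset

# Hypothetical OPeNDAP URL for a downscaled climate dataset; only the
# requested slice crosses the network, not the full archive.
url = "http://thredds.example.gov/dodsC/downscaled/tasmax.ncml"

ds = Dataset(url)                        # opens remotely, reads metadata only
tasmax = ds.variables["tasmax"]          # hypothetical variable name
subset = tasmax[0:10, 100:120, 200:220]  # only this subset is transferred
print(subset.shape)
ds.close()
```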
Supporting NEESPI with Data Services - The SIB-ESS-C e-Infrastructure
NASA Astrophysics Data System (ADS)
Gerlach, R.; Schmullius, C.; Frotscher, K.
2009-04-01
Data discovery and retrieval is commonly among the first steps performed for any Earth science study. The way scientific data are searched and accessed has changed significantly over the past two decades. In particular, the development of the World Wide Web and the technologies that evolved along with it have shortened the data discovery and data exchange process. At the same time, the amount of data collected and distributed by earth scientists has increased exponentially, requiring new concepts for data management and sharing. One such concept is to build Spatial Data Infrastructures (SDI) or e-Infrastructures. These infrastructures usually contain components for data discovery, allowing users (or other systems) to query a catalogue or registry and retrieve metadata on available data holdings and services. Data access is typically granted using FTP/HTTP protocols or, more flexibly, through Web Services. A Service Oriented Architecture (SOA) approach based on standardized services enables users to benefit from interoperability among different systems and to integrate distributed services into their applications. The Siberian Earth System Science Cluster (SIB-ESS-C) being established at the University of Jena (Germany) is such a spatial data infrastructure, following these principles and implementing standards published by the Open Geospatial Consortium (OGC) and the International Organization for Standardization (ISO). The prime objective is to provide researchers focusing on Siberia with the technical means for data discovery, data access, data publication and data analysis. The region of interest covers the entire Asian part of the Russian Federation from the Urals to the Pacific Ocean, including the Ob, Lena and Yenissey river catchments. The aim of SIB-ESS-C is to provide a comprehensive set of data products for Earth system science in this region. Although SIB-ESS-C will be equipped with processing capabilities for in-house data generation (mainly from Earth Observation), the current data holdings of SIB-ESS-C have been created in collaboration with a number of partners in previous and ongoing research projects (e.g. SIBERIA-II, SibFORD, IRIS). At the current development stage the SIB-ESS-C system comprises a federated metadata catalogue accessible through the SIB-ESS-C Web Portal or from any OGC-CSW compliant client. Thanks to full interoperability with other metadata catalogues, users of the SIB-ESS-C Web Portal are able to search external metadata repositories as well. The Web Portal also contains a simple visualization component, which will be extended to a comprehensive visualization and analysis tool in the near future. All data products are already accessible as a Web Map Service and will soon be made available as Web Feature and Web Coverage Services, allowing users to incorporate the data directly into their applications. The SIB-ESS-C infrastructure will be further developed as one node in a network of similar systems (e.g. NASA GIOVANNI) in the NEESPI region.
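Since the catalogue is OGC-CSW compliant, any standard client can query it. A minimal sketch, assuming the OWSLib library and a hypothetical endpoint:

```python
from owslib.csw import CatalogueServiceWeb
from owslib.fes import PropertyIsLike

# Endpoint is hypothetical; any OGC CSW 2.0.2 catalogue accepts this call.
csw = CatalogueServiceWeb("https://sibessc.example.uni-jena.de/csw")

# Full-text search across all queryable metadata fields.
query = PropertyIsLike("csw:AnyText", "%Yenissey%")
csw.getrecords2(constraints=[query], maxrecords=10)

for record in csw.records.values():
    print(record.title)  # titles of matching data holdings
```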
Wagener, Johannes; Spjuth, Ola; Willighagen, Egon L; Wikberg, Jarl ES
2009-01-01
Background Life sciences make heavy use of the web for both data provision and analysis. However, the increasing amount of available data and the diversity of analysis tools call for machine-accessible interfaces in order to be effective. HTTP-based Web service technologies, like the Simple Object Access Protocol (SOAP) and REpresentational State Transfer (REST) services, are today the most common technologies for this in bioinformatics. However, these methods have severe drawbacks, including lack of discoverability and the inability for services to send status notifications. Several complementary workarounds have been proposed, but the results are ad-hoc solutions of varying quality that can be difficult to use. Results We present a novel approach based on the open standard Extensible Messaging and Presence Protocol (XMPP), consisting of an extension (IO Data) that comprises discovery, asynchronous invocation, and definition of data types in the service. That XMPP cloud services are capable of asynchronous communication implies that clients do not have to poll repetitively for status; the service sends the results back to the client upon completion. Implementations for Bioclipse and Taverna are presented, as are various XMPP cloud services in bio- and cheminformatics. Conclusion XMPP with its extensions is a powerful protocol for cloud services that demonstrates several advantages over traditional HTTP-based Web services: 1) services are discoverable without the need of an external registry, 2) asynchronous invocation eliminates the need for ad-hoc solutions like polling, and 3) input and output types defined in the service allow for generation of clients on the fly without the need of an external semantics description. The many advantages over existing technologies make XMPP a highly interesting candidate for next-generation online services in bioinformatics. PMID:19732427
Incorporating Web 2.0 Technologies from an Organizational Perspective
NASA Astrophysics Data System (ADS)
Owens, R.
2009-12-01
The Arctic Research Consortium of the United States (ARCUS) provides support for the organization, facilitation, and dissemination of online educational and scientific materials and information to a wide range of stakeholders. ARCUS is currently weaving the fabric of Web 2.0 technologies—web development featuring interactive information sharing and user-centered design—into its structure, both as a tool for information management and for educational outreach. The importance of planning, developing, and maintaining a cohesive online platform in order to integrate data storage and dissemination will be discussed in this presentation, as well as some specific open source technologies and tools currently available, including: ○ Content Management: any system set up to manage the content of web sites and services. Drupal is a content management system built in a modular fashion, allowing for a powerful set of features including, but not limited to, weblogs, forums, event calendars, polling, and more. ○ Faceted Search: combined with full-text indexing, faceted searching allows site visitors to locate information quickly and then provides a set of 'filters' with which to narrow the search results. Apache Solr is a search server with a web-services-like API (Application Programming Interface) that has built-in support for faceted searching (see the sketch after this list). ○ Semantic Web: the semantic web refers to the ongoing evolution of the World Wide Web as it begins to incorporate semantic components, which aid in processing requests. OpenCalais is a web service that uses natural language processing, along with other methods, to extract meaningful 'tags' from your content. This metadata can then be used to connect people, places, and things throughout your website, enriching the surfing experience for the end user. ○ Web Widgets: a web widget is a portable 'piece of code' that can be embedded easily into web pages by an end user. Timeline is a widget developed as part of the SIMILE project at MIT (Massachusetts Institute of Technology) for displaying time-based events in a clean, horizontal timeline display. Numerous standards, applications, and third-party integration services are also available for use in today's Web 2.0 environment. In addition to a cohesive online platform, the following tools can improve networking, information sharing, and scientific and educational collaboration: ○ Facebook (fan pages, social networking, etc.) ○ Twitter/Twitterfeed (automatic updates in 3 steps) ○ Mobify.me (mobile web) ○ Wimba, Adobe Connect, etc. (real-time conferencing) Increasingly, the scientific community is being asked to share data and information within and outside disciplines, with K-12 students, and with members of the public and policy-makers. Web 2.0 technologies can easily be set up and utilized to share data and other information with specific audiences in real time, and their simplicity ensures their increasing use by the science community in years to come.
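The faceted-search pattern mentioned above can be exercised directly against Solr's HTTP API. A minimal sketch, with a hypothetical core name and facet field:

```python
import requests

# Hypothetical Solr core ("arcus") and facet field; the query parameters
# themselves are standard Solr syntax.
resp = requests.get("http://localhost:8983/solr/arcus/select", params={
    "q": "sea ice",          # full-text query
    "facet": "true",         # enable faceted search
    "facet.field": "topic",  # hypothetical facet field
    "wt": "json",
})
resp.raise_for_status()
counts = resp.json()["facet_counts"]["facet_fields"]["topic"]
print(counts)  # alternating term/count list, ready to render as filters
```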
Teixeira, Leonor; Saavedra, Vasco; Ferreira, Carlos; Sousa Santos, Beatriz
2010-01-01
Modern methods of information and communication that use web technologies provide an opportunity to facilitate closer communication between patients and healthcare providers, allowing a joint management of chronic diseases. This paper describes a web-based technological solution to support the management of inherited bleeding disorders integrating, diffusing and archiving large sets of data relating to the clinical practice of hemophilia care, more specifically the clinical practice at the Hematology Service of Coimbra Hospital Center (a Hemophilia Treatment Center located in Portugal).
NASA Astrophysics Data System (ADS)
Ninsawat, Sarawut; Yamamoto, Hirokazu; Kamei, Akihide; Nakamura, Ryosuke; Tsuchida, Satoshi; Maeda, Takahisa
2010-05-01
With the availability of network-enabled sensing devices, the volume of information being collected by networked sensors has increased dramatically in recent years. Over 100 physical, chemical and biological properties can be sensed using in-situ or remote sensing technology. A collection of these sensor nodes forms a sensor network, which is easily deployable to provide a high degree of visibility into real-world physical processes as events unfold. A sensor observation network allows gathering of diverse types of data at greater spatial and temporal resolution through the use of wired or wireless network infrastructure; thus real-time or near-real-time data from sensor observation networks allow researchers and decision-makers to respond speedily to events. However, in the case of environmental monitoring, the capability to acquire in-situ data periodically is not sufficient on its own: the management and proper utilization of the data also need careful consideration. This requires the implementation of database and IT solutions that are robust, scalable and able to interoperate between diverse and distributed stakeholders to provide lucid, timely and accurate updates to researchers, planners and citizens. The GEO (Global Earth Observation) Grid primarily aims at providing an e-Science infrastructure for the earth science community. The GEO Grid is designed to integrate various kinds of data related to earth observation using grid technology, which is developed for sharing data, storage, and the computational power of high performance computing, and is accessible as a set of services. A comprehensive web-based system for integrating field sensor data and satellite imagery, based on various OGC (Open Geospatial Consortium) specifications, has been developed. The Web Processing Service (WPS), which is most likely the future direction of Web-GIS, performs the computation of spatial data from distributed data sources and returns the outcome in a standard format. The interoperability and Service Oriented Architecture (SOA) of web services allow sensor network measurements available from a Sensor Observation Service (SOS) and satellite remote sensing data from a Web Map Service (WMS) to be combined as distributed data sources for WPS. Various applications have been developed to demonstrate the efficacy of integrating heterogeneous data sources; for example, the validation of the MODIS aerosol products (MOD08_D3, the Level-3 MODIS Atmosphere Daily Global Product) by ground-based measurements using the sunphotometer (skyradiometer, Prede POM-02) installed at Phenological Eyes Network (PEN) sites in Japan. Furthermore, a web-based framework for studying the relationship between the Vegetation Index calculated from MODIS surface reflectance (MOD09GA, the Surface Reflectance Daily L2G Global 1 km and 500 m Product) and Gross Primary Production (GPP) field measurements at flux tower sites in Thailand and Japan has also been developed. The success of both applications will contribute to maximizing data utilization and improving the accuracy of information by validating MODIS satellite products against accurate, high-temporal-resolution field measurements.
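A sketch of pulling in-situ observations from an SOS endpoint of the kind such a framework combines with WMS imagery; the endpoint, offering and property names are hypothetical, and the call pattern assumes the OWSLib library:

```python
from owslib.sos import SensorObservationService

# Hypothetical SOS endpoint; offering and observed property names are
# illustrative stand-ins for a PEN sunphotometer site.
sos = SensorObservationService("https://geogrid.example.org/sos")

response = sos.get_observation(
    offerings=["PEN_SITE_1"],                      # hypothetical offering
    observedProperties=["aerosol_optical_depth"],  # hypothetical property
    responseFormat='text/xml;subtype="om/1.0.0"',
)
print(response[:500])  # O&M XML, comparable against the MODIS aerosol product
```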
An open source web interface for linking models to infrastructure system databases
NASA Astrophysics Data System (ADS)
Knox, S.; Mohamed, K.; Harou, J. J.; Rheinheimer, D. E.; Medellin-Azuara, J.; Meier, P.; Tilmant, A.; Rosenberg, D. E.
2016-12-01
Models of networked engineered resource systems such as water or energy systems are often built collaboratively, with developers from different domains working at different locations. These models can be linked to large-scale real-world databases, and they are constantly being improved and extended. As the development and application of these models becomes more sophisticated, and the computing power required for simulations and/or optimisations increases, so has the need for online services and tools which enable the efficient development and deployment of these models. Hydra Platform is an open source, web-based data management system which allows modellers of network-based models to remotely store network topology and associated data in a generalised manner, allowing it to serve multiple disciplines. Hydra Platform exposes a JSON-based web API that allows external programs (referred to as 'Apps') to interact with its stored networks and perform actions such as importing data, running models, or exporting the networks to different formats. Hydra Platform supports multiple users accessing the same network and has a suite of functions for managing users and data. We present ongoing development in Hydra Platform, the Hydra Web User Interface, through which users can collaboratively manage network data and models in a web browser. The web interface allows multiple users to graphically access, edit and share their networks, run apps and view results. Through apps, which are located on the server, the web interface can give users access to external data sources and models without the need to install or configure any software. This also ensures model results can be reproduced by removing platform or version dependence. Managing data and deploying models via the web interface provides a way for multiple modellers to collaboratively manage data, deploy and monitor model runs, and analyse results.
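The JSON web API makes such Apps straightforward to write. The sketch below conveys the general shape of a client call; the endpoint, message format and field names are illustrative rather than Hydra Platform's actual schema.

```python
import requests

SERVER = "https://hydra.example.org/json"  # hypothetical endpoint

# Authenticate; the server replies with a session identifier.
login = requests.post(SERVER, json={"login": {"username": "demo",
                                              "password": "demo"}})
session_id = login.json()["session_id"]  # hypothetical response field

# Retrieve a stored network's topology and data by its identifier.
network = requests.post(SERVER,
                        json={"get_network": {"network_id": 42}},
                        headers={"session_id": session_id})
print(network.json())  # nodes, links and attached datasets as JSON
```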
Modern Data Center Services Supporting Science
NASA Astrophysics Data System (ADS)
Varner, J. D.; Cartwright, J.; McLean, S. J.; Boucher, J.; Neufeld, D.; LaRocque, J.; Fischman, D.; McQuinn, E.; Fugett, C.
2011-12-01
The National Oceanic and Atmospheric Administration's National Geophysical Data Center (NGDC) World Data Center for Geophysics and Marine Geology provides scientific stewardship, products and services for geophysical data, including bathymetry, gravity, magnetics, seismic reflection, data derived from sediment and rock samples, as well as historical natural hazards data (tsunamis, earthquakes, and volcanoes). Although NGDC has long made many of its datasets available through map and other web services, it has now developed a second generation of services to improve the discovery of and access to data. These new services use off-the-shelf commercial and open source software, and take advantage of modern JavaScript and web application frameworks. Services are accessible using both RESTful and SOAP queries as well as Open Geospatial Consortium (OGC) standard protocols such as WMS, WFS, WCS, and KML. These new map services (implemented using ESRI ArcGIS Server) are finer-grained than their predecessors, feature improved cartography, and offer dramatic speed improvements through the use of map caches. Using standards-based interfaces allows customers to incorporate the services without having to coordinate with the provider. Providing fine-grained services increases flexibility for customers building custom applications. The Integrated Ocean and Coastal Mapping program and Coastal and Marine Spatial Planning program are two examples of national initiatives that require common data inventories from multiple sources and benefit from these modern data services. NGDC is also consuming its own services, providing a set of new browser-based mapping applications which allow the user to quickly visualize and search for data. One example is a new interactive mapping application to search and display information about historical natural hazards. NGDC continues to increase the amount of its data holdings that are accessible and is augmenting those capabilities with modern web technologies such as the Groovy language and the Grails framework. Data discovery is being improved and simplified by leveraging ISO metadata standards along with ESRI Geoportal Server.
Easy access to geophysical data sets at the IRIS Data Management Center
NASA Astrophysics Data System (ADS)
Trabant, C.; Ahern, T.; Suleiman, Y.; Karstens, R.; Weertman, B.
2012-04-01
At the IRIS Data Management Center (DMC) we primarily manage seismological data, but we also have other geophysical data sets for related fields, including atmospheric pressure and gravity measurements, and higher-level data products derived from raw data. With a few exceptions, all data managed by the IRIS DMC are openly available, and we serve an international research audience. These data are available via a number of different mechanisms, from batch requests submitted through email, web interfaces and near-real-time streams to, more recently, web services. Our initial suite of web services offers access to almost all of the raw data and associated metadata managed at the DMC. In addition, we offer services that apply processing to the data before they are sent to the user. Web service technologies are ubiquitous, with support available in nearly every programming language and operating system. By their nature web services are programmatic interfaces, but by choosing a simple subset of web service methods we make our data available to a very broad user base. These interfaces will be usable by professional developers as well as non-programmers. Whenever possible we chose open and recognized standards. The data returned to the user are in a variety of formats depending on type, including FDSN SEED, QuakeML, StationXML, ASCII, PNG images and, in some cases where no appropriate standard could be found, a customized XML format. To promote easy access to seismological data for all researchers we are coordinating with international partners to define web service interface standards. Additionally we are working with key partners in Europe to complete the initial implementation of these services. Once a standard has been adopted and implemented at multiple data centers, researchers will be able to use the same request tools to access data across multiple data centers. The web services that apply on-demand processing to requested data include the capability to apply instrument corrections and format translations, which ultimately allows more researchers to use the data without knowledge of specific data and metadata formats. In addition to serving as a new platform on top of which research scientists will build advanced processing tools, we anticipate that the services will result in more data being accessible to more users.
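As a concrete instance of the simple programmatic access described above, the DMC's FDSN-standard dataselect service returns raw miniSEED for a channel and time window with a single HTTP request:

```python
import requests

# The FDSN dataselect service at the IRIS DMC; the URL pattern follows
# the published FDSN web service specification.
params = {
    "net": "IU", "sta": "ANMO", "loc": "00", "cha": "BHZ",
    "start": "2015-01-01T00:00:00", "end": "2015-01-01T00:10:00",
}
r = requests.get("http://service.iris.edu/fdsnws/dataselect/1/query",
                 params=params, timeout=60)
r.raise_for_status()
with open("anmo.mseed", "wb") as f:
    f.write(r.content)  # raw miniSEED records
```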
Clearing your Desk! Software and Data Services for Collaborative Web Based GIS Analysis
NASA Astrophysics Data System (ADS)
Tarboton, D. G.; Idaszak, R.; Horsburgh, J. S.; Ames, D. P.; Goodall, J. L.; Band, L. E.; Merwade, V.; Couch, A.; Hooper, R. P.; Maidment, D. R.; Dash, P. K.; Stealey, M.; Yi, H.; Gan, T.; Gichamo, T.; Yildirim, A. A.; Liu, Y.
2015-12-01
Can your desktop computer crunch the large GIS datasets that are becoming increasingly common across the geosciences? Do you have access to or the know-how to take advantage of advanced high performance computing (HPC) capability? Web based cyberinfrastructure takes work off your desk or laptop computer and onto infrastructure or "cloud" based data and processing servers. This talk will describe the HydroShare collaborative environment and web based services being developed to support the sharing and processing of hydrologic data and models. HydroShare supports the upload, storage, and sharing of a broad class of hydrologic data including time series, geographic features and raster datasets, multidimensional space-time data, and other structured collections of data. Web service tools and a Python client library provide researchers with access to HPC resources without requiring them to become HPC experts. This reduces the time and effort spent in finding and organizing the data required to prepare the inputs for hydrologic models and facilitates the management of online data and execution of models on HPC systems. This presentation will illustrate the use of web based data and computation services from both the browser and desktop client software. These web-based services implement the Terrain Analysis Using Digital Elevation Model (TauDEM) tools for watershed delineation, generation of hydrology-based terrain information, and preparation of hydrologic model inputs. They allow users to develop scripts on their desktop computer that call analytical functions that are executed completely in the cloud, on HPC resources using input datasets stored in the cloud, without installing specialized software, learning how to use HPC, or transferring large datasets back to the user's desktop. These cases serve as examples for how this approach can be extended to other models to enhance the use of web and data services in the geosciences.
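A minimal sketch of calling HydroShare's REST API with plain HTTP; the resource identifier and query parameter shown are hypothetical, and in practice the project's Python client library wraps these calls:

```python
import requests

BASE = "https://www.hydroshare.org/hsapi"

# List public resources; 'full_text_search' is shown as a plausible filter
# and may differ from the API's actual parameter names.
hits = requests.get(BASE + "/resource/",
                    params={"full_text_search": "TauDEM"}, timeout=60)
print(hits.json())

# System metadata for one resource (identifier is hypothetical).
meta = requests.get(BASE + "/resource/abc123def456/sysmeta/", timeout=60)
print(meta.json())
```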
EarthServer: Visualisation and use of uncertainty as a data exploration tool
NASA Astrophysics Data System (ADS)
Walker, Peter; Clements, Oliver; Grant, Mike
2013-04-01
The Ocean Science/Earth Observation community generates huge datasets from satellite observation. Until recently it has been difficult to obtain matching uncertainty information for these datasets and to apply this to their processing. In order to make use of uncertainty information when analysing "Big Data" we need both the uncertainty itself (attached to the underlying data) and a means of working with the combined product without requiring the entire dataset to be downloaded. The European Commission FP7 project EarthServer (http://earthserver.eu) is addressing the problem of access to, and ad-hoc analysis of, extreme-size Earth Science data using cutting-edge Array Database technology. The core software (Rasdaman) and web services wrapper (Petascope) allow huge datasets to be accessed using Open Geospatial Consortium (OGC) standard interfaces, including the well-established Web Coverage Service (WCS) and Web Map Service (WMS) standards as well as the emerging Web Coverage Processing Service (WCPS) standard. The WCPS standard allows ad-hoc queries to be run on any of the data stored within Rasdaman, creating an infrastructure where users are not restricted by bandwidth when manipulating or querying huge datasets. The ESA Ocean Colour - Climate Change Initiative (OC-CCI) project (http://www.esa-oceancolour-cci.org/) is producing high-resolution, global ocean colour datasets over the full time period (1998-2012) where high-quality observations were available. This climate data record includes per-pixel uncertainty data for each variable, based on an analytic method that classifies how much and which types of water are present in a pixel, and assigns uncertainty based on robust comparisons to global in-situ validation datasets. These uncertainty values take two forms, Root Mean Square (RMS) and bias uncertainty, respectively representing the expected variability and the expected offset error. By combining the data produced through the OC-CCI project with the software from the EarthServer project we can produce a novel data offering that allows the use of traditional exploration and access mechanisms such as WMS and WCS. However, the real benefits can be seen when utilising WCPS to explore the data. We will show two major benefits of this infrastructure. Firstly, we will show that the visualisation of the combined chlorophyll and uncertainty datasets through a web-based GIS portal gives users the ability to instantaneously assess the quality of the data they are exploring, using traditional web-based plotting techniques as well as novel web-based 3-dimensional visualisation. Secondly, we will showcase the benefits available when combining these data with the WCPS standard. The uncertainty data can be utilised in queries using the standard WCPS query language. This allows selection of data, either for download or for use within the query, based on the respective uncertainty values, as well as the possibility of incorporating both the chlorophyll data and uncertainty data into complex queries to produce additional novel data products. By filtering with uncertainty at the data source rather than the client we can minimise traffic over the network, allowing huge datasets to be worked on with a minimal time penalty.
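A schematic WCPS query of the kind described, masking chlorophyll pixels whose RMS uncertainty exceeds a threshold before any data leave the server; the endpoint and coverage names are hypothetical:

```python
import requests

# Hypothetical endpoint and coverage names for the OC-CCI chlorophyll
# product and its RMS uncertainty layer. Masking happens server-side, so
# only the filtered result crosses the network.
query = (
    'for chl in (occci_chlor_a), unc in (occci_chlor_a_rms) '
    'return encode(chl * (unc < 0.3), "netcdf")'
)
r = requests.get("https://earthserver.example.org/petascope",
                 params={"query": query}, timeout=120)
r.raise_for_status()
with open("chl_filtered.nc", "wb") as f:
    f.write(r.content)  # chlorophyll with high-uncertainty pixels zeroed
```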
Programmatic access to data and information at the IRIS DMC via web services
NASA Astrophysics Data System (ADS)
Weertman, B. R.; Trabant, C.; Karstens, R.; Suleiman, Y. Y.; Ahern, T. K.; Casey, R.; Benson, R. B.
2011-12-01
The IRIS Data Management Center (DMC) has developed a suite of web services that provide access to the DMC's time series holdings, their related metadata and earthquake catalogs. In addition, services are available to perform simple, on-demand time series processing at the DMC before data are shipped to the user. The primary goal is to provide programmatic access to data and processing services in a manner usable by, and useful to, the research community. The web services are relatively simple to understand and use, and will form the foundation on which future DMC access tools will be built. Based on standard web technologies, they can be accessed programmatically with a wide range of programming languages (e.g. Perl, Python, Java), with command line utilities such as wget and curl, or with any web browser. We anticipate these services being used for everything from simple command line access and use in shell scripts and higher-level programming languages to integration within complex data processing software. In addition to improving access to our data by the seismological community, the web services will also make our data more accessible to other disciplines. The web services available from the DMC include ws-bulkdataselect for the retrieval of large volumes of miniSEED data, ws-timeseries for the retrieval of individual segments of time series data in a variety of formats (miniSEED, SAC, ASCII, audio WAVE, and PNG plots) with optional signal processing, ws-station for station metadata in StationXML format, ws-resp for the retrieval of instrument response in RESP format, ws-sacpz for the retrieval of sensor response in the SAC poles and zeros convention, and ws-event for the retrieval of earthquake catalogs. To make the services even easier to use, the DMC is developing a library that allows Java programmers to seamlessly retrieve and integrate DMC information into their own programs. The library will handle all aspects of dealing with the services and will parse the returned data. By using this library a developer will not need to learn the details of the service interfaces or understand the data formats returned. This library will be used to build the software bridge needed to request data and information from within MATLAB®. We also provide several client scripts written in Perl for the retrieval of waveform data, metadata and earthquake catalogs using command line programs. For more information on the DMC's web services please visit http://www.iris.edu/ws/
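For example, a single request to the DMC's timeseries service (the successor to ws-timeseries) can fetch a window of data, apply simple processing and convert the output format; the parameter names here are indicative rather than a definitive reference:

```python
import requests

# One request fetches data, demeans it, and returns ASCII instead of
# miniSEED; other outputs include plots and SAC.
params = {
    "net": "IU", "sta": "ANMO", "loc": "00", "cha": "BHZ",
    "starttime": "2015-01-01T00:00:00", "endtime": "2015-01-01T00:10:00",
    "demean": "true",    # remove the mean before output
    "output": "ascii",   # on-the-fly format conversion
}
r = requests.get("http://service.iris.edu/irisws/timeseries/1/query",
                 params=params, timeout=60)
r.raise_for_status()
print(r.text[:300])
```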
DEVSML 2.0: The Language and the Stack
2012-03-01
problems outside it. For example, HTML for web pages, Verilog and VHDL for hardware description, etc. are DSLs for very specific domains. A DSL can be...Engineering ( MDE ) paradigm where meta-modeling allows such transformations. The metamodeling approach to Model Integrated Computing (MIC) brings...University of Arizona, 2007 [5] Mittal, S, Martin, JLR, Zeigler, BP, "DEVS-Based Web Services for Net-centric T&E", Summer Computer Simulation
Nowotka, Michał M; Gaulton, Anna; Mendez, David; Bento, A Patricia; Hersey, Anne; Leach, Andrew
2017-08-01
ChEMBL is a manually curated database of bioactivity data on small drug-like molecules, used by drug discovery scientists. Among many access methods, a REST API provides programmatic access, allowing the remote retrieval of ChEMBL data and its integration into other applications. This approach allows scientists to move from a world where they go to the ChEMBL web site to search for relevant data, to one where ChEMBL data can simply be integrated into their everyday tools and work environment. Areas covered: This review highlights some of the audiences who may benefit from using the ChEMBL API, and the goals they can address, through the description of several use cases. The examples cover a team communication tool (Slack), a data analytics platform (KNIME), batch job management software (Luigi) and Rich Internet Applications. Expert opinion: The advent of web technologies, cloud computing and microservice-oriented architectures has made REST APIs an essential ingredient of modern software development models. The widespread availability of tools consuming RESTful resources has made them useful for many groups of users. The ChEMBL API is a valuable resource of drug discovery bioactivity data for professional chemists, chemistry students, data scientists, and scientific and web developers.
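A minimal example of the programmatic access described: fetching one molecule record as JSON from the ChEMBL REST API (field names as published in the API's molecule resource at the time of writing):

```python
import requests

# Fetch the record for aspirin (CHEMBL25) as JSON.
url = "https://www.ebi.ac.uk/chembl/api/data/molecule/CHEMBL25.json"
mol = requests.get(url, timeout=30).json()

print(mol["pref_name"])                        # e.g. "ASPIRIN"
print(mol["molecule_properties"]["full_mwt"])  # molecular weight
```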
OntologyWidget – a reusable, embeddable widget for easily locating ontology terms
Beauheim, Catherine C; Wymore, Farrell; Nitzberg, Michael; Zachariah, Zachariah K; Jin, Heng; Skene, JH Pate; Ball, Catherine A; Sherlock, Gavin
2007-01-01
Background Biomedical ontologies are being widely used to annotate biological data in a computer-accessible, consistent and well-defined manner. However, due to their size and complexity, annotating data with appropriate terms from an ontology is often challenging for experts and non-experts alike, because there exist few tools that allow one to quickly find relevant ontology terms to easily populate a web form. Results We have produced a tool, OntologyWidget, which allows users to rapidly search for and browse ontology terms. OntologyWidget can easily be embedded in other web-based applications. OntologyWidget is written using AJAX (Asynchronous JavaScript and XML) and has two related elements. The first is a dynamic auto-complete ontology search feature. As a user enters characters into the search box, the appropriate ontology is queried remotely for terms that match the typed-in text, and the query results populate a drop-down list with all potential matches. Upon selection of a term from the list, the user can locate this term within a generic and dynamic ontology browser, which comprises the second element of the tool. The ontology browser shows the paths from a selected term to the root as well as parent/child tree hierarchies. We have implemented web services at the Stanford Microarray Database (SMD), which provide the OntologyWidget with access to over 40 ontologies from the Open Biological Ontology (OBO) website [1]. Each ontology is updated weekly. Adopters of the OntologyWidget can either use SMD's web services, or elect to rely on their own. Deploying the OntologyWidget can be accomplished in three simple steps: (1) install Apache Tomcat [2] on one's web server, (2) download and install the OntologyWidget servlet stub that provides access to the SMD ontology web services, and (3) create an html (HyperText Markup Language) file that refers to the OntologyWidget using a simple, well-defined format. Conclusion We have developed OntologyWidget, an easy-to-use ontology search and display tool that can be used on any web page by creating a simple html description. OntologyWidget provides a rapid auto-complete search function paired with an interactive tree display. We have developed a web service layer that communicates between the web page interface and a database of ontology terms. We currently store 40 of the ontologies from the OBO website [1], as well as several others. These ontologies are automatically updated on a weekly basis. OntologyWidget can be used in any web-based application to take advantage of the ontologies we provide via web services or any other ontology that is provided elsewhere in the correct format. The full source code for the JavaScript and a description of the OntologyWidget are available from http://smd.stanford.edu/ontologyWidget/. PMID:17854506
Multigraph: Interactive Data Graphs on the Web
NASA Astrophysics Data System (ADS)
Phillips, M. B.
2010-12-01
Many aspects of geophysical science involve time-dependent data that are often presented in the form of a graph. Considering that the web has become a primary means of communication, there are surprisingly few good tools and techniques available for presenting time-series data on the web. The most common solution is to use a desktop tool such as Excel or Matlab to create a graph which is saved as an image and then included in a web page like any other image. This technique is straightforward, but it limits the user to one particular view of the data, and disconnects the graph from the data in a way that makes updating a graph with new data an often cumbersome manual process. This situation is somewhat analogous to the state of mapping before the advent of GIS. Maps existed only in printed form, and creating a map was a laborious process. In the last several years, however, the world of mapping has experienced a revolution in the form of web-based and other interactive computer technologies, so that it is now commonplace for anyone to easily browse through gigabytes of geographic data. Multigraph seeks to bring a similar ease of access to time-series data. Multigraph is a program for displaying interactive time-series data graphs in web pages that includes a simple way of configuring the appearance of the graph and the data to be included. It allows multiple data sources to be combined into a single graph, and allows the user to explore the data interactively. Multigraph lets users explore and visualize "data space" in the same way that interactive mapping applications such as Google Maps facilitate exploring and visualizing geography. Viewing a Multigraph graph is extremely simple and intuitive, and requires no instructions. Creating a new graph for inclusion in a web page involves writing a simple XML configuration file and requires no programming. Multigraph can read data in a variety of formats, and can display data from a web service, allowing users to "surf" through large data sets, downloading only those parts of the data that are needed for display. Multigraph is currently in use on several web sites including the US Drought Portal (www.drought.gov), the NOAA Climate Services Portal (www.climate.gov), the Climate Reference Network (www.ncdc.noaa.gov/crn), NCDC's State of the Climate Report (www.ncdc.noaa.gov/sotc), and the US Forest Service's Forest Change Assessment Viewer (ews.forestthreats.org/NPDE/NPDE.html). More information about Multigraph is available from the web site www.multigraph.org. Interactive Graph of Global Temperature Anomalies from ClimateWatch Magazine (http://www.climatewatch.noaa.gov/2009/articles/climate-change-global-temperature)
BookMeUp: Creating a Book Suggestion App
ERIC Educational Resources Information Center
Clark, Jason
2012-01-01
The rise of apps and mobile devices has opened the door to small, dedicated software programs that are focused on singular tasks. From the author's perspective as head of digital access and web service manager at Montana State University, these apps offered an opportunity to build a focused digital service aimed at allowing someone to enter a…
The interoperability skill of the Geographic Portal of the ISPRA - Geological Survey of Italy
NASA Astrophysics Data System (ADS)
Pia Congi, Maria; Campo, Valentina; Cipolloni, Carlo; Delogu, Daniela; Ventura, Renato; Battaglini, Loredana
2010-05-01
The Geographic Portal of the Geological Survey of Italy (ISPRA), available at http://serviziogeologico.apat.it/Portal, was planned according to the standard criteria of the INSPIRE directive. ArcIMS services and, at the same time, WMS and WFS services were implemented to satisfy different clients. For each database and web service, metadata were written in agreement with ISO 19115. The management architecture of the portal allows it to encode client input and output requests both in ArcXML and in GML. Web applications and web services were implemented for each database owned by the Land Protection and Georesources Department, covering the geological maps at the scales 1:50,000 (CARG Project) and 1:100,000, the IFFI landslide inventory, the boreholes drilled under Law 464/84, the large-scale geological map, and all the raster-format maps. The portal published thus far is at an experimental stage; it will reach its final version through the development of a new graphical interface. The WMS and WFS services, including metadata, will be re-designed. The soundness of the methodology and of the applied standards allows us to look ahead to further developments. In addition, it must be borne in mind that the new geological standard language (GeoSciML), already incorporated in the deployed web services, will allow better display and querying of geological data in an interoperable way. The characteristics of geological data demand specific symbol libraries for cartographic mapping that are not yet available in a WMS service; this is another aspect concerning standards for geological information. Therefore, the following have been carried out so far: a library of geological symbols to be used for printing, with a sketch of system colors, and a library for displaying data on screen, which almost completely solves the problems of point and area coverage data (also oriented) but still presents problems for linear data (solutions: ArcIMS services from ArcMap projects, or a specific SLD implementation for WMS services); an update of the "Guidelines for the supply of geological data", to be published shortly; and official involvement of the Geological Survey of Italy in the IUGS-CGI working group for the development and testing of the new GeoSciML language with WMS/WFS services. The availability of geographic information is exposed through metadata that can be distributed online so that search engines can find them through specialized searches. The metadata collected in catalogs are structured according to a standard (ISO 19135). The catalogs are a 'common' interface to locate, view and query data, metadata services, web services and other resources. While working in a growing sector of environmental knowledge, the focus is on gathering the participation of other subjects that can enrich the available informative content, so as to arrive at a true portal of national interest, especially for disaster management.
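Because the portal exposes standard OGC interfaces, any client can retrieve map images with an ordinary WMS GetMap request. The Python sketch below shows the request pattern; the query parameters are those of the OGC WMS specification, while the endpoint URL, layer name, and bounding box are placeholders.

```python
# Sketch: an OGC WMS 1.1.1 GetMap request built with the standard library.
# SERVICE/VERSION/REQUEST/LAYERS/BBOX etc. are standard WMS parameters;
# the endpoint and layer name are hypothetical placeholders.
import urllib.request
import urllib.parse

WMS_ENDPOINT = "http://example.org/wms"            # placeholder endpoint

params = urllib.parse.urlencode({
    "SERVICE": "WMS", "VERSION": "1.1.1", "REQUEST": "GetMap",
    "LAYERS": "geology_50k",                       # hypothetical layer name
    "SRS": "EPSG:4326",
    "BBOX": "6.0,35.0,19.0,47.5",                  # lon/lat box over Italy
    "WIDTH": "800", "HEIGHT": "600",
    "FORMAT": "image/png",
})
with urllib.request.urlopen(WMS_ENDPOINT + "?" + params) as resp:
    with open("map.png", "wb") as out:
        out.write(resp.read())                     # rendered map image
```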
Federal Register 2010, 2011, 2012, 2013, 2014
2012-10-05
.... Purpose The Exchange is proposing to adopt the new QView service. QView is a web-based, front-end... dashboard interface. The dashboard also allows a QView subscriber to track his/her executions and open...-based tool that, among other things, allows users access to all of the BX order and execution...
Anon-Pass: Practical Anonymous Subscriptions
Lee, Michael Z.; Dunn, Alan M.; Katz, Jonathan; Waters, Brent; Witchel, Emmett
2014-01-01
We present the design, security proof, and implementation of an anonymous subscription service. Users register for the service by providing some form of identity, which might or might not be linked to a real-world identity such as a credit card, a web login, or a public key. A user logs on to the system by presenting a credential derived from information received at registration. Each credential allows only a single login in any authentication window, or epoch. Logins are anonymous in the sense that the service cannot distinguish which user is logging in any better than random guessing. This implies unlinkability of a user across different logins. We find that a central tension in an anonymous subscription service is the service provider’s desire for a long epoch (to reduce server-side computation) versus users’ desire for a short epoch (so they can repeatedly “re-anonymize” their sessions). We balance this tension by having short epochs, but adding an efficient operation for clients who do not need unlinkability to cheaply re-authenticate themselves for the next time period. We measure performance of a research prototype of our protocol that allows an independent service to offer anonymous access to existing services. We implement a music service, an Android-based subway-pass application, and a web proxy, and show that adding anonymity adds minimal client latency and only requires 33 KB of server memory per active user. PMID:24504081
Automated X-ray and Optical Analysis of the Virtual Observatory and Grid Computing
NASA Astrophysics Data System (ADS)
Ptak, A.; Krughoff, S.; Connolly, A.
2011-07-01
We are developing a system to combine the Web Enabled Source Identification with X-Matching (WESIX) web service, which emphasizes source detection on optical images, with the XAssist program that automates the analysis of X-ray data. XAssist is continuously processing archival X-ray data in several pipelines. We have established a workflow in which FITS images and/or (in the case of X-ray data) an X-ray field can be input to WESIX. Intelligent services return available data (if requested fields have been processed) or submit job requests to a queue to be performed asynchronously. These services will be available via web services (for non-interactive use by Virtual Observatory portals and applications) and through web applications (written in the Django web application framework). We are adding web services for specific XAssist functionality such as determining the exposure and limiting flux for a given position on the sky and extracting spectra and images for a given region. We are improving the queuing system in XAssist to allow "watch lists" to be specified by users; when X-ray fields in a user's watch list become publicly available they will be automatically added to the queue. XAssist is being expanded to be used as a survey planning tool when coupled with simulation software, including functionality for NuSTAR, eROSITA, IXO, and the Wide-Field X-ray Telescope (WFXT), as part of an end-to-end simulation/analysis system. We are also investigating the possibility of a dedicated iPhone/iPad app for querying pipeline data, requesting processing, and administrative job control. This work was funded by AISRP grant NNG06GE59G.
Monitoring activities of satellite data processing services in real-time with SDDS Live Monitor
NASA Astrophysics Data System (ADS)
Duc Nguyen, Minh
2017-10-01
This work describes Live Monitor, the monitoring subsystem of SDDS - an automated system for space experiment data processing, storage, and distribution created at SINP MSU. Live Monitor allows operators and developers of satellite data centers to quickly identify errors that occur during data processing and to prevent further consequences caused by those errors. All activities of the whole data processing cycle are illustrated via a web interface in real time. Notification messages are delivered to responsible people via email and the Telegram messenger service. The flexible monitoring mechanism implemented in Live Monitor allows us to dynamically change and control the events shown on the web interface on demand. Physicists whose space weather analysis models run on satellite data provided by SDDS can use the developed RESTful API to monitor their own events and deliver notification messages customized to their needs.
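The abstract does not specify the API's routes, so the Python sketch below only illustrates the general pattern of registering a custom event with a RESTful monitoring endpoint; the URL, payload fields, and authentication scheme are all hypothetical.

```python
# Hypothetical sketch: post a custom monitoring event to a RESTful endpoint
# in the style described for Live Monitor. The route, payload fields, and
# authentication are invented for illustration only.
import json
import urllib.request

event = {
    "source": "space-weather-model-x",       # hypothetical model identifier
    "level": "error",
    "message": "input satellite data gap detected",
}
req = urllib.request.Request(
    "https://example.org/sdds/api/events",   # placeholder endpoint
    data=json.dumps(event).encode("utf-8"),
    headers={"Content-Type": "application/json"},
    method="POST",
)
with urllib.request.urlopen(req) as resp:
    print(resp.status)                       # e.g., 201 on successful creation
```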
Diy Geospatial Web Service Chains: Geochaining Make it Easy
NASA Astrophysics Data System (ADS)
Wu, H.; You, L.; Gui, Z.
2011-08-01
It is a great challenge for beginners to create, deploy and utilize a Geospatial Web Service Chain (GWSC). People in computer science are usually not familiar with geospatial domain knowledge. Geospatial practitioners may lack knowledge about web services and service chains. End users may lack both. However, integrated visual editing interfaces, validation tools, and one-click deployment wizards may help to lower the learning curve and improve modelling skills so beginners will have a better experience. GeoChaining is a GWSC modelling tool designed and developed based on these ideas. GeoChaining integrates visual editing, validation, deployment, execution, etc. into a unified platform. By employing a Virtual Globe, users can intuitively visualize raw data and the results produced by GeoChaining. All of these features allow users to easily start using GWSC, regardless of their professional background and computer skills. Further, GeoChaining supports GWSC model reuse, meaning that an entire GWSC model, or even a specific part of one, can be directly reused in a new model. This greatly improves the efficiency of creating a new GWSC, and also contributes to the sharing and interoperability of GWSC.
NASA Astrophysics Data System (ADS)
Ferrini, V. L.; Grange, B.; Morton, J. J.; Soule, S. A.; Carbotte, S. M.; Lehnert, K.
2016-12-01
The National Deep Submergence Facility (NDSF) operates the Human Occupied Vehicle (HOV) Alvin, the Remotely Operated Vehicle (ROV) Jason, and the Autonomous Underwater Vehicle (AUV) Sentry. These vehicles are deployed throughout the global oceans to acquire sensor data and physical samples for a variety of interdisciplinary science programs. As part of the EarthCube Integrative Activity Alliance Testbed Project (ATP), new web services were developed to improve access to existing online NDSF data and metadata resources. These services make use of tools and infrastructure developed by the Interdisciplinary Earth Data Alliance (IEDA) and enable programmatic access to metadata and data resources as well as the development of new service-driven user interfaces. The Alvin Frame Grabber and Jason Virtual Van enable the exploration of frame-grabbed images derived from video cameras on NDSF dives. Metadata available for each image include time and vehicle position, data from environmental sensors, and scientist-generated annotations, and data are organized and accessible by cruise and/or dive. A new FrameGrabber web service and service-driven user interface were deployed to offer integrated access to these data resources through a single API, allowing users to search across content curated in both systems. In addition, a new NDSF Dive Metadata web service and service-driven user interface were deployed to provide consolidated access to basic information about each NDSF dive (e.g., vehicle name, dive ID, location), which is important for linking distributed data resources curated in different data systems.
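As a sketch of the programmatic access such service-driven interfaces enable, the Python fragment below queries a dive-metadata endpoint and reads JSON; the URL, query parameters, and field names are hypothetical, since the abstract does not publish the API.

```python
# Hypothetical sketch: query a dive-metadata web service for Alvin dives.
# The endpoint, query parameters, and JSON field names are invented for
# illustration; consult the actual NDSF/IEDA services for real routes.
import json
import urllib.request
import urllib.parse

params = urllib.parse.urlencode({"vehicle": "Alvin", "year": "2014"})
url = "https://example.org/ndsf/api/dives?" + params    # placeholder
with urllib.request.urlopen(url) as resp:
    for dive in json.load(resp):
        print(dive.get("dive_id"), dive.get("lat"), dive.get("lon"))
```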
MyOcean Central Information System - Achievements and Perspectives
NASA Astrophysics Data System (ADS)
Claverie, Vincent; Loubrieu, Thomas; Jolibois, Tony; de Dianous, Rémi; Blower, Jon; Romero, Laia; Griffiths, Guy
2013-04-01
Since 2009, MyOcean (http://www.myocean.eu) has been providing an operational service for forecasts, analysis and expertise on ocean currents, temperature, salinity, sea level, primary ecosystems and ice coverage. The production of observation and forecasting data is done by 42 Production Units (PU). Product download and visualisation are hosted by 25 Dissemination Units (DU). All these products and associated services are gathered in a single catalogue hiding the intricate distributed organization of PUs and DUs. Besides applying the INSPIRE directive and OGC recommendations, MyOcean has had to make technical choices and overcome challenges. This presentation focuses on three specific issues met by MyOcean and relevant for many Spatial Data Infrastructures: user transaction accounting, large-volume download, and streamlining catalogue maintenance. Transaction accounting: set up powerful means to get detailed knowledge of system usage in order to subsequently improve the offer of products (ocean observations, analysis and forecast datasets) and services (view, download). This subject drives the following ones: central authentication management for the distributed web service implementations (an add-on to the THREDDS Data Server for WMS and NetCDF sub-setting services, and a specific FTP), and shared user management with co-funding projects. In addition to MyOcean, alternate projects also need consolidated information about the use of the co-funded products. MyOcean therefore provides a central facility for user management, which supplies users' rights to geographically distributed services and gathers transaction accounting history from these distributed services. Large-volume download: propose a user-friendly web interface to download large volumes of data (several gigabytes), as robust as basic FTP but intuitive and file/directory independent. This should rely on a web service following the draft INSPIRE specification and OGC recommendations for download, taking into account that an FTP server is not friendly enough (one needs to know filenames and directories) and a web page does not allow downloading several files. Streamlining maintenance of the central catalogue: the major update for MyOcean v3 (April 2013) is the use of GeoNetwork for catalogue management. This improves the system at different levels: the editing interface is more user-friendly, and catalogue updates are managed in a workflow. This workflow allows higher flexibility for minor updates without giving up the high-level qualification requirements for the catalogue content. The distributed web services (download, view) are automatically harvested from the THREDDS Data Server. Thus manual editing of the catalogue is reduced, the associated typos are avoided, and the quality of the information is improved.
A Geospatial Database that Supports Derivation of Climatological Features of Severe Weather
NASA Astrophysics Data System (ADS)
Phillips, M.; Ansari, S.; Del Greco, S.
2007-12-01
The Severe Weather Data Inventory (SWDI) at NOAA's National Climatic Data Center (NCDC) provides user access to archives of several datasets critical to the detection and evaluation of severe weather. These datasets include archives of: NEXRAD Level-III point features describing general storm structure, hail, mesocyclone and tornado signatures; the National Weather Service Storm Events Database; National Weather Service Local Storm Reports collected from storm spotters; National Weather Service Warnings; and lightning strikes from Vaisala's National Lightning Detection Network (NLDN). SWDI archives all of these datasets in a spatial database that allows for convenient searching and subsetting. These data are accessible via the NCDC web site, Web Feature Services (WFS) or automated web services. The results of interactive web page queries may be saved in a variety of formats, including plain text, XML, Google Earth's KMZ, standards-based NetCDF and Shapefile. NCDC's Storm Risk Assessment Project (SRAP) uses data from the SWDI database to derive gridded climatology products that show the spatial distributions of the frequency of various events. SRAP can also relate SWDI events to other spatial data such as roads, population, watersheds, and other geographic, sociological, or economic data to derive products that are useful in municipal planning, emergency management, the insurance industry, and other areas where there is a need to quantify and qualify how severe weather patterns affect people and property.
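For automated access, the Python sketch below requests a CSV subset of one SWDI dataset for a date range and bounding box; the URL layout is modeled on NCDC's SWDI web services and should be treated as an assumption to verify against the current documentation.

```python
# Sketch: request a CSV subset of SWDI events for a date range and bounding
# box. The URL pattern (dataset name, YYYYMMDD:YYYYMMDD range, bbox) is
# modeled on the NCDC SWDI web services; verify against current docs.
import urllib.request

dataset = "nx3tvs"                        # NEXRAD Level-III tornado signatures
url = ("https://www.ncdc.noaa.gov/swdiws/csv/"
       f"{dataset}/20110425:20110428"     # date range
       "?bbox=-91.0,30.0,-85.0,35.0")     # west,south,east,north
with urllib.request.urlopen(url) as resp:
    print(resp.read().decode("utf-8")[:400])   # first few CSV lines
```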
A Security Architecture for Grid-enabling OGC Web Services
NASA Astrophysics Data System (ADS)
Angelini, Valerio; Petronzio, Luca
2010-05-01
In the proposed presentation we describe an architectural solution for enabling secure access to Grids, and possibly other large-scale on-demand processing infrastructures, through OGC (Open Geospatial Consortium) Web Services (OWS). This work has been carried out in the context of the security thread of the G-OWS Working Group. G-OWS (gLite enablement of OGC Web Services) is an international open initiative started in 2008 by the European CYCLOPS, GENESI-DR, and DORII Project Consortia in order to collect and coordinate experiences in the enablement of OWSs on top of the gLite Grid middleware. G-OWS investigates the development of Spatial Data and Information Infrastructures (SDI and SII) based on Grid/Cloud capacity in order to enable Earth Science applications and tools. Concerning security issues, the integration of OWS-compliant infrastructures and gLite Grids needs to address relevant challenges, due to their respective design principles. In fact, OWSs are part of a Web-based architecture that delegates security aspects to other specifications, whereas the gLite middleware implements the Grid paradigm with a strong security model (the gLite Grid Security Infrastructure: GSI). In our work we propose a Security Architectural Framework allowing the seamless use of Grid-enabled OGC Web Services through the federation of existing security systems (mostly web based) with the gLite GSI. This is made possible by mediating between different security realms, whose mutual trust is established in advance during the deployment of the system itself. Our architecture is composed of three security tiers: the user's security system, a specific G-OWS security system, and the gLite Grid Security Infrastructure. Applying the separation-of-concerns principle, each of these tiers is responsible for controlling access to a well-defined resource set, respectively: the user's organization resources, the geospatial resources and services, and the Grid resources. While the gLite middleware is tied to a consolidated security approach based on X.509 certificates, our system is able to support different kinds of user security infrastructures. Our central component, the G-OWS Security Framework, is based on the OASIS WS-Trust specifications and on the OGC GeoRM architectural framework. This allows it to satisfy advanced requirements such as the enforcement of specific geospatial policies and complex secure chained web service requests. The typical use case is a scientist belonging to a given organization who issues a request to a G-OWS Grid-enabled Web Service. The system initially asks the user to authenticate to his/her organization's security system and, after verification of the user's security credentials, it translates the user's digital identity into a G-OWS identity. This identity is linked to a set of attributes describing the user's access rights to the G-OWS services and resources. Inside the G-OWS Security system, access restrictions are applied making use of the enhanced geospatial capabilities specified by OGC GeoXACML. If the required action needs to make use of the Grid environment, the system checks whether the user is entitled to access a Grid infrastructure. In that case his/her identity is translated into a temporary Grid security token using the Short Lived Credential Services (IGTF standard).
In our case, for the specific gLite Grid infrastructure, some information (VOMS attributes) is plugged into the Grid security token to grant access to the user's Virtual Organization Grid resources. The resulting token is used to submit the request to the Grid and also by the various gLite middleware elements to verify the user's grants. Based on the presented framework, the G-OWS Security Working Group developed a prototype enabling the execution of OGC Web Services on the EGEE Production Grid through federation with a Shibboleth-based security infrastructure. Future plans aim to integrate other Web authentication services such as OpenID, Kerberos and WS-Federation.
The JANA calibrations and conditions database API
NASA Astrophysics Data System (ADS)
Lawrence, David
2010-04-01
Calibrations and conditions databases can be accessed from within the JANA Event Processing framework through the API defined in its JCalibration base class. The API is designed to support everything from databases, to web services, to flat files for the backend. A Web Service backend using the gSOAP toolkit has been implemented, which is particularly interesting since it addresses many modern cybersecurity issues including support for SSL. The API allows constants to be retrieved through a single line of C++ code, with most of the context, including the transport mechanism, being implied by the run currently being analyzed and the environment, relieving developers from implementing such details.
DIY-style GIS service in mobile navigation system integrated with web and wireless GIS
NASA Astrophysics Data System (ADS)
Yan, Yongbin; Wu, Jianping; Fan, Caiyou; Wang, Minqi; Dai, Sheng
2007-06-01
A mobile navigation system based on a handheld device can not only provide basic GIS services, but also make those services available without location limits and with more immediate interaction between users and devices. However, most navigation systems still share common user-experience defects such as limited map formats, few map sources, and no location sharing. To overcome these defects, we propose a DIY-style GIS service that gives users a freer software environment and allows them to customize their GIS services. These services include defining the geographical coordinate system of maps, which greatly enlarges the potential map sources; editing vector features, related property information and hotlinked images; customizing the covered area of maps downloaded via General Packet Radio Service (GPRS); and sharing users' location information via SMS (Short Message Service), which establishes communication between users who need GIS services. The paper introduces the integration of web and wireless GIS services in a mobile navigation system and presents an implementation sample of a DIY-style GIS service in a mobile navigation system.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ham, Timothy
2008-12-01
The JBEI Registry is software to store and manage a database of biological parts. It is intended to be used as a web service that is accessed via a web browser. It is also capable of running as a desktop program for a single user. The registry software stores, indexes, and categorizes biological parts, and allows users to enter, search, retrieve, and construct biological constructs in silico. It is also able to communicate with other registries for data sharing and exchange.
Connecting long-tail scientists with big data centers using SaaS
NASA Astrophysics Data System (ADS)
Percivall, G. S.; Bermudez, L. E.
2012-12-01
Big data centers and long-tail scientists represent two extremes in the geoscience research community. Interoperability and inter-use based on software-as-a-service (SaaS) increase access to big data holdings by this underserved community of scientists. Large, institutional data centers have long been recognized as vital resources in the geoscience community. Permanent data archiving and dissemination centers provide "access to the data and (are) a critical source of people who have experience in the use of the data and can provide advice and counsel for new applications." [NRC] The "long tail of science" is the geoscience researchers that work separately from institutional data centers [Heidorn]. Long-tail scientists need to be efficient consumers of data from large, institutional data centers. Discussions in NSF EarthCube capture the challenges: "Like the vast majority of NSF-funded researchers, Alice (a long-tail scientist) works with limited resources. In the absence of suitable expertise and infrastructure, the apparently simple task that she assigns to her graduate student becomes an information discovery and management nightmare. Downloading and transforming datasets takes weeks." [Foster et al.] The long-tail metaphor points to methods to bridge the gap, i.e., the Web. A decade ago, OGC began building a geospatial information space using open web standards for geoprocessing [ORM]. Recently, [Foster et al.] accurately observed that "by adopting, adapting, and applying semantic web and SaaS technologies, we can make the use of geoscience data as easy and convenient as consumption of online media." SaaS places web services into cloud computing. SaaS for geospatial is emerging rapidly, building on the first-generation geospatial web, e.g., the OGC Web Coverage Service [WCS] and the Data Access Protocol [DAP]. Several recent examples show progress in applying SaaS to the geosciences: - NASA's Earth Data Coherent Web has a goal to improve the science user experience, using Web services (e.g. W*S, SOAP, RESTful) to reduce barriers to using EOSDIS data [ECW]. - NASA's LANCE provides direct access to vast amounts of satellite data using the OGC Web Map Tile Service (WMTS). - NOAA's Unified Access Framework for Gridded Data (UAF Grid) is a web-service-based capability for direct access to a variety of datasets using netCDF, OPeNDAP, THREDDS, WMS and WCS. [UAF] Tools to access SaaS offerings are many and varied: some proprietary, others open source; some run in browsers, others are stand-alone applications. What's required is interoperability using the web interfaces offered by the data centers. NOAA's UAF service stack supports Matlab, ArcGIS, Ferret, GrADS, Google Earth, IDV, and LAS. Any SaaS that offers OGC Web Services (WMS, WFS, WCS) can be accessed by scores of clients [OGC]. While there has been much progress in recent years toward offering web services for the long tail of scientists, more needs to be done. Web services offer data access, but more than access is needed for inter-use of data, e.g., defining data schemas that allow for data fusion, addressing coordinate systems, spatial geometry, and semantics for observations. Connecting long-tail scientists with large data centers using SaaS and, in the future, the semantic web will address this large and currently underserved user community.
Flexible Web services integration: a novel personalised social approach
NASA Astrophysics Data System (ADS)
Metrouh, Abdelmalek; Mokhati, Farid
2018-05-01
Dynamic composition or integration remains one of the key objectives of Web services technology. This paper proposes an innovative approach to dynamic Web service composition based on functional and non-functional attributes and individual preferences. In this approach, social networks of Web services are used to maintain interactions between Web services in order to select and compose the Web services most closely related to the user's preferences. We use the concept of a Web service community in a social network of Web services to considerably reduce the search space. These communities are created through the direct involvement of Web service providers.
DataFed: A Federated Data System for Visualization and Analysis of Spatio-Temporal Air Quality Data
NASA Astrophysics Data System (ADS)
Husar, R. B.; Hoijarvi, K.
2017-12-01
DataFed is a distributed, web-services-based computing environment for accessing, processing, and visualizing atmospheric data in support of air quality science and management. The flexible, adaptive environment facilitates the access and flow of atmospheric data from providers to users by enabling the creation of user-driven data processing/visualization applications. DataFed 'wrapper' components non-intrusively wrap heterogeneous, distributed datasets for access by standards-based GIS web services. The mediator components (also web services) map the heterogeneous data into a spatio-temporal data model. Chained web services provide homogeneous data views (e.g., geospatial and time views) using a global multi-dimensional data model. In addition to data access and rendering, the data processing component services can be programmed for filtering, aggregation, and fusion of multidimensional data. Complete applications are written in a custom-made dataflow language. Currently, the federated data pool consists of over 50 datasets originating from globally distributed data providers delivering surface-based air quality measurements, satellite observations, and emissions data, as well as regional- and global-scale air quality models. The web-browser-based user interface allows point-and-click navigation and browsing of the XYZT multi-dimensional data space. The key applications of DataFed are exploring spatial patterns of pollutants and their seasonal, weekly, and diurnal cycles and frequency distributions for exploratory air quality research. Since 2008, DataFed has been used to support EPA in the implementation of the Exceptional Event Rule. The data system is also used at universities in the US, Europe and Asia.
Mercury- Distributed Metadata Management, Data Discovery and Access System
NASA Astrophysics Data System (ADS)
Palanisamy, Giri; Wilson, Bruce E.; Devarakonda, Ranjeet; Green, James M.
2007-12-01
Mercury is a federated metadata harvesting, search and retrieval tool based on both open source and ORNL-developed software. It was originally developed for NASA, and the Mercury development consortium now includes funding from NASA, USGS, and DOE. Mercury supports various metadata standards including XML, Z39.50, FGDC, Dublin Core, Darwin Core, EML, and ISO 19115 (under development). Mercury provides a single portal to information contained in disparate data management systems. It collects metadata and key data from contributing project servers distributed around the world and builds a centralized index. The Mercury search interfaces then allow users to perform simple, fielded, spatial and temporal searches across these metadata sources. This centralized repository of metadata with distributed data sources provides extremely fast search results to the user, while allowing data providers to advertise the availability of their data and maintain complete control and ownership of that data. Mercury supports various projects including: ORNL DAAC, NBII, DADDI, LBA, NARSTO, CDIAC, OCEAN, I3N, IAI, ESIP and ARM. The new Mercury system is based on a Service Oriented Architecture and supports various services such as a Thesaurus Service, a Gazetteer Web Service and UDDI Directory Services. The system also provides various search services including RSS, Geo-RSS, OpenSearch, Web Services and Portlets. Other features include filtering and dynamic sorting of search results, bookmarkable search results, and the ability to save, retrieve, and modify search criteria.
Web-based network analysis and visualization using CellMaps
Salavert, Francisco; García-Alonso, Luz; Sánchez, Rubén; Alonso, Roberto; Bleda, Marta; Medina, Ignacio; Dopazo, Joaquín
2016-01-01
Summary: CellMaps is an HTML5 open-source web tool that allows displaying, editing, exploring and analyzing biological networks as well as integrating metadata into them. Computations and analyses are remotely executed in high-end servers, and all the functionalities are available through RESTful web services. CellMaps can easily be integrated in any web page by using an available JavaScript API. Availability and Implementation: The application is available at: http://cellmaps.babelomics.org/ and the code can be found in: https://github.com/opencb/cell-maps. The client is implemented in JavaScript and the server in C and Java. Contact: jdopazo@cipf.es Supplementary information: Supplementary data are available at Bioinformatics online. PMID:27296979
Meta4: a web application for sharing and annotating metagenomic gene predictions using web services.
Richardson, Emily J; Escalettes, Franck; Fotheringham, Ian; Wallace, Robert J; Watson, Mick
2013-01-01
Whole-genome shotgun metagenomics experiments produce DNA sequence data from entire ecosystems, and provide a huge amount of novel information. Gene discovery projects require up-to-date information about sequence homology and domain structure for millions of predicted proteins to be presented in a simple, easy-to-use system. There is a lack of simple, open, flexible tools that allow the rapid sharing of metagenomics datasets with collaborators in a format they can easily interrogate. We present Meta4, a flexible and extensible web application that can be used to share and annotate metagenomic gene predictions. Proteins and predicted domains are stored in a simple relational database, with a dynamic front-end which displays the results in an internet browser. Web services are used to provide up-to-date information about the proteins from homology searches against public databases. Information about Meta4 can be found on the project website, code is available on GitHub, a cloud image is available, and an example implementation can be seen at.
OneGeology-Europe: architecture, portal and web services to provide a European geological map
NASA Astrophysics Data System (ADS)
Tellez-Arenas, Agnès.; Serrano, Jean-Jacques; Tertre, François; Laxton, John
2010-05-01
OneGeology-Europe is a large, ambitious project to make geological spatial data better known and more accessible. The project is developing an integrated system of data to create, and make accessible through the internet for the first time, the geological map of the whole of Europe. The architecture implemented by the project is web-service oriented, based on OGC standards: the geological map is not a centralized database but is composed of several web services, each hosted by a European country involved in the project. Since geological data are compiled differently from country to country, they are difficult to share. OneGeology-Europe, while providing more detailed and complete information, will foster an easier exchange of data within Europe and globally, even beyond the geological community. This implies significant work on the harmonization of the data, in both model and content. OneGeology-Europe is characterised by the high technological capacity of the EU Member States, and has the final goal of harmonising European geological survey data according to common standards. As a direct consequence Europe will make a further step in terms of innovation and information dissemination, continuing to play a world-leading role in the development of geosciences information. The scope of the common harmonized data model was defined primarily by the requirements of the geological map of Europe, but in addition users were consulted, and the requirements of both INSPIRE and 'high-resolution' geological maps were considered. The data model is based on GeoSciML, developed since 2006 by a group of geological surveys. The data providers involved in the project implemented a new component that allows the web services to deliver the geological map expressed in GeoSciML. In order to capture the information describing the geological units of the map of Europe, the scope of the data model needs to include lithology, age, genesis and metamorphic character. For high-resolution maps, physical properties, bedding characteristics and weathering also need to be added. Furthermore, geological data held by national geological surveys are generally described in the national language of each country. The project therefore has to deal with the multilingual issue, an important requirement of the INSPIRE directive. The project provides a list of harmonized vocabularies, a set of web services to work with them, and a web site that helps geoscientists map the terms used in national datasets to these vocabularies. The web services provided by each data provider, with the particular component that allows them to deliver the harmonised data model and to handle multilingualism, are the first part of the architecture. The project also implements a web portal that provides several functionalities. Thanks to the common data model implemented by each web service delivering a part of the geological map, and using OGC SLD standards, a user can request a sub-selection of the map, for instance by searching on a particular attribute such as "age is Quaternary", and display only the parts of the map matching the filter. Using the web services on the common vocabularies, the data displayed are translated. The project started in September 2008 for two years, with 29 partners from 20 countries (20 partners are geological surveys). The budget is 3.25 M€, with a European Commission contribution of 2.6 M€.
The paper will describe the technical solutions implemented for the OneGeology-Europe components: the profile of the common data model for exchanging geological data, the web services to view and access geological data, and a geoportal providing the user with a user-friendly way to discover, view and access geological data.
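To make the harmonised delivery concrete, the sketch below issues a standard OGC WFS GetFeature request for geologic units; the WFS parameters are standard, while the endpoint and the exact GeoSciML feature type name are assumptions that vary per data provider.

```python
# Sketch: OGC WFS GetFeature request for GeoSciML-encoded geologic units.
# service/version/request/typeName are standard WFS parameters; the endpoint
# and the feature type name are assumptions for illustration.
import urllib.request
import urllib.parse

params = urllib.parse.urlencode({
    "service": "WFS", "version": "1.1.0", "request": "GetFeature",
    "typeName": "gsml:GeologicUnit",      # GeoSciML feature type (assumed)
    "maxFeatures": "10",
})
url = "http://example.org/wfs?" + params   # placeholder provider endpoint
with urllib.request.urlopen(url) as resp:
    gml = resp.read().decode("utf-8")      # GeoSciML/GML XML document
print(gml[:300])
```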
46 CFR 310.7 - Federal student subsistence allowances and student incentive payments.
Code of Federal Regulations, 2011 CFR
2011-10-01
..., marine engineering, and tug and barge companies. The above list is not all inclusive and is only intended... to submit their Service Obligation Compliance Report forms (MA-930) to MARAD using the web-based...
46 CFR 310.7 - Federal student subsistence allowances and student incentive payments.
Code of Federal Regulations, 2010 CFR
2010-10-01
..., marine engineering, and tug and barge companies. The above list is not all inclusive and is only intended... to submit their Service Obligation Compliance Report forms (MA-930) to MARAD using the web-based...
Update to a guide to standardized highway lighting pole hardware.
DOT National Transportation Integrated Search
2013-03-01
This report describes the development of an updated Online Guide to Luminaire Supports. The Guide is a web-based content : management system for luminaire support systems that allows full viewing, submission, management, and reporting services : to i...
BetterThanPin: Empowering Users to Fight Phishing (Poster)
NASA Astrophysics Data System (ADS)
Tan, Teik Guan
The BetterThanPin concept is an online security service that allows users to protect almost any cloud or web-based account (e.g., Gmail, MSN, Yahoo) with "almost" two-factor authentication (2FA). The result is that users can protect their online accounts with better authentication without waiting for the service or cloud provider to offer it.
77 FR 15369 - Mobility Fund Phase I Auction GIS Data of Potentially Eligible Census Blocks
Federal Register 2010, 2011, 2012, 2013, 2014
2012-03-15
....fcc.gov/auctions/901/ , are the following: Downloadable shapefile Web mapping service MapBox map tiles... GIS software allows you to add this service as a layer to your session or project. 6. MapBox map tiles are cached map tiles of the data. With this open source software approach, these image tiles can be...
A Prototype Web-based system for GOES-R Space Weather Data
NASA Astrophysics Data System (ADS)
Sundaravel, A.; Wilkinson, D. C.
2010-12-01
The Geostationary Operational Environmental Satellite-R Series (GOES-R) makes use of advanced instruments and technologies to monitor the Earth's surface and to provide accurate space weather data. The first GOES-R series satellite is scheduled to be launched in 2015. The data from the satellite will be widely used by scientists for space weather modeling and predictions. This project looks into how these datasets can be made available to scientists on the Web and assist them in their research. We are developing a prototype web-based system that allows users to browse, search and download these data. The GOES-R datasets will be archived in NetCDF (Network Common Data Form) and CSV (Comma Separated Values) formats. NetCDF is a self-describing data format that contains both the metadata information and the data, with the data stored in an array-oriented fashion. The web-based system will offer services in two ways: via a web application (portal) and via web services. Using the web application, users can download data in NetCDF or CSV format and can also plot a graph of the data. The web page displays the various categories of data and the time intervals for which the data are available. The web application (client) sends the user query to the server, which then connects to the data sources to retrieve the data and delivers it to the users. Data access will also be provided via SOAP (Simple Object Access Protocol) and REST (Representational State Transfer) web services. These provide functions which can be used by other applications to fetch data and use the data for further processing. To build the prototype system, we are making use of proxy data from existing GOES and POES space weather datasets. Java is the programming language used in developing tools that format data to NetCDF and CSV. For the web technology we have chosen Grails to develop both the web application and the services. Grails is an open source web application framework based on the Groovy language. We are also making use of the THREDDS (Thematic Realtime Environmental Distributed Data Services) server to publish and access the NetCDF files. We have completed software tools to generate NetCDF and CSV data files, as well as tools to translate NetCDF to CSV. The current phase of the project involves designing and developing the web interface.
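A brief Python sketch of consuming such archived files, assuming the netCDF4 package; the file and variable names are illustrative placeholders, since the actual GOES-R products define their own.

```python
# Sketch: open a NetCDF file and inspect its self-describing contents.
# Requires the netCDF4 package; file and variable names are placeholders.
from netCDF4 import Dataset

ds = Dataset("goesr_spaceweather_sample.nc")   # placeholder filename
print(ds.ncattrs())                            # global metadata attributes
for name, var in ds.variables.items():
    print(name, var.dimensions, var.shape)     # per-variable structure

flux = ds.variables["proton_flux"][:]          # hypothetical variable name
print(flux.mean())
ds.close()
```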
NASA Astrophysics Data System (ADS)
Ahern, T. K.; Ekstrom, G.; Grobbelaer, M.; Trabant, C. M.; Van Fossen, M.; Stults, M.; Tsuboi, S.; Beaudoin, B. C.; Bondar, I.
2016-12-01
Seismology, by its very nature, requires sharing information across international boundaries, and as such seismology evolved as a science that promotes free and open access to data. The International Federation of Digital Seismograph Networks (FDSN) has commission status within IASPEI and as such is the international standards body in our community. In the late 1980s a domain standard for exchanging seismological information was created, and the SEED format is still the dominant domain standard. More recently the FDSN standardized web-service interfaces for key services used in our community. The standardization of these services also enabled the development of a federation of data centers. These federated centers can be accessed through standard FDSN service calls. Client software currently allows seamless and transparent access to all data managed at 14 globally distributed data centers on three continents, with plans to expand this more broadly. IRIS is also involved in the EarthCube project funded by the US National Science Foundation. The GEOphysical Web Services (GeoWS) project extended the style of web services endorsed by the FDSN to interdisciplinary domains. IRIS worked with five data centers in other domains (Caltech, UCSD, Columbia University, UNAVCO and Unidata) to develop 'similar' service-based interfaces to their data systems, drawn from the oceanographic, atmospheric, and solid earth divisions within the NSF's geosciences directorate. Additionally, IRIS developed GeoWS-style web services for six additional data collections that include magnetic observations, field gravity measurements, superconducting gravimetry data, volcano monitoring data, tidal data, and oceanographic observations including those from cabled arrays in the ocean. This presentation will highlight the success the FDSN and GeoWS services have demonstrated within and beyond seismology, and identify some next steps being considered.
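In Python, these standardized FDSN interfaces are commonly exercised through ObsPy; the sketch below uses its routing client, which consults the IRIS federator to locate whichever federated center holds the requested data. This assumes a recent ObsPy installation, and the client name should be checked against the ObsPy documentation.

```python
# Sketch: federated waveform access via ObsPy's FDSN routing client, which
# consults the IRIS federator to route the request to the holding data
# center. Assumes ObsPy >= 1.1; station/channel codes are illustrative.
from obspy import UTCDateTime
from obspy.clients.fdsn import RoutingClient

client = RoutingClient("iris-federator")
st = client.get_waveforms(
    network="IU", station="ANMO", location="00", channel="BHZ",
    starttime=UTCDateTime("2014-01-01T00:00:00"),
    endtime=UTCDateTime("2014-01-01T00:10:00"),
)
print(st)   # an ObsPy Stream assembled from the responding center
```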
Data Sets and Data Services at the Northern California Earthquake Data Center
NASA Astrophysics Data System (ADS)
Neuhauser, D. S.; Zuzlewski, S.; Allen, R. M.
2014-12-01
The Northern California Earthquake Data Center (NCEDC) houses a unique and comprehensive data archive and provides real-time services for a variety of seismological and geophysical data sets that encompass northern and central California. We have over 80 terabytes of continuous and event-based time series data from broadband, short-period, strong motion, and strain sensors as well as continuous and campaign GPS data at both standard and high sample rates in both raw and RINEX format. The Northern California Seismic System (NCSS), operated by UC Berkeley and USGS Menlo Park, has recorded over 890,000 events from 1984 to the present, and the NCEDC provides catalog, parametric information, moment tensors and first motion mechanisms, and time series data for these events. We also host and provide event catalogs, parametric information, and event waveforms for DOE enhanced geothermal system monitoring in northern California and Nevada. The NCEDC provides a variety of ways for users to access these data. The most recent development is web services, which provide interactive, command-line, or program-based workflow access to data. Web services use well-established server and client protocols and RESTful software architecture that allow users to easily submit queries and receive the requested data in real time rather than through batch or email-based requests. Data are returned to the user in the appropriate format such as XML, RESP, simple text, or MiniSEED depending on the service and selected output format. The NCEDC supports all FDSN-defined web services as well as a number of IRIS-defined and NCEDC-defined services. We also continue to support older email-based and browser-based access to data. NCEDC data and web services can be found at http://www.ncedc.org and http://service.ncedc.org.
Northern California Earthquake Data Center: Data Sets and Data Services
NASA Astrophysics Data System (ADS)
Neuhauser, D. S.; Allen, R. M.; Zuzlewski, S.
2015-12-01
The Northern California Earthquake Data Center (NCEDC) provides a permanent archive and real-time data distribution services for a unique and comprehensive set of seismological and geophysical data encompassing northern and central California. We provide access to over 85 terabytes of continuous and event-based time series data from broadband, short-period, strong motion, and strain sensors as well as continuous and campaign GPS data at both standard and high sample rates. The Northern California Seismic System (NCSS), operated by UC Berkeley and USGS Menlo Park, has recorded over 900,000 events from 1984 to the present, and the NCEDC serves catalog, parametric information, moment tensors and first motion mechanisms, and time series data for these events. We also serve event catalogs, parametric information, and event waveforms for DOE enhanced geothermal system monitoring in northern California and Nevada. The NCEDC provides several ways for users to access these data. The most recent development is web services, which provide interactive, command-line, or program-based workflow access to data. Web services use well-established server and client protocols and RESTful software architecture, allowing users to easily submit queries and receive the requested data in real time rather than through batch or email-based requests. Data are returned to the user in the appropriate format, such as XML, RESP, simple text, or MiniSEED, depending on the service and selected output format. The NCEDC supports all FDSN-defined web services as well as a number of IRIS-defined and NCEDC-defined services. We also continue to support older email-based and browser-based access to data. NCEDC data and web services can be found at http://www.ncedc.org and http://service.ncedc.org.
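In the same spirit, a minimal sketch of retrieving MiniSEED time series through an FDSN dataselect service, assuming the NCEDC endpoint advertised above; the network, station and channel codes are illustrative.

```python
import requests

url = "http://service.ncedc.org/fdsnws/dataselect/1/query"
params = {
    "net": "BK", "sta": "CMB", "loc": "00", "cha": "BHZ",  # illustrative codes
    "starttime": "2015-01-01T00:00:00",
    "endtime": "2015-01-01T00:10:00",
}
response = requests.get(url, params=params, timeout=60)
response.raise_for_status()
with open("waveform.mseed", "wb") as f:  # MiniSEED, one of the listed formats
    f.write(response.content)
```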
Proposal for a Web Encoding Service (wes) for Spatial Data Transaction
NASA Astrophysics Data System (ADS)
Siew, C. B.; Peters, S.; Rahman, A. A.
2015-10-01
Web service utilization in Spatial Data Infrastructures (SDI) has been well established and standardized by the Open Geospatial Consortium (OGC). Similar web services for 3D SDI have also been established in recent years, with extended capabilities to handle 3D spatial data. The increasing popularity of using City Geography Markup Language (CityGML) for 3D city modelling applications leads to the need to handle large spatial data sets for data delivery. This paper revisits the available web services in OGC Web Services (OWS) and proposes the background concepts and requirements for encoding spatial data via a Web Encoding Service (WES). Furthermore, the paper discusses the data flow of the encoder within the web service, e.g. possible integration with the Web Processing Service (WPS) or Web 3D Service (W3DS). The integration could be extended to other available web services for efficient handling of spatial data, especially 3D spatial data.
A Web Service Protocol Realizing Interoperable Internet of Things Tasking Capability.
Huang, Chih-Yuan; Wu, Cheng-Hung
2016-08-31
The Internet of Things (IoT) is an infrastructure that interconnects uniquely-identifiable devices using the Internet. By interconnecting everyday appliances, various monitoring and physical mashup applications can be constructed to improve people's daily lives. In general, IoT devices provide two main capabilities: sensing and tasking. While the sensing capability is similar to the World-Wide Sensor Web, this research focuses on the tasking capability. Currently, however, IoT devices created by different manufacturers follow different proprietary protocols and are locked into many closed ecosystems. This heterogeneity impedes the interconnection between IoT devices and diminishes the potential of the IoT. To address this issue, this research proposes an interoperable solution called the tasking capability description, which allows users to control different IoT devices through a uniform web service interface. This paper demonstrates the contribution of the proposed solution by interconnecting different IoT devices for different applications. In addition, the proposed solution is integrated with the OGC SensorThings API standard, a Web service standard defined for the IoT sensing capability. Consequently, the Extended SensorThings API can realize both IoT sensing and tasking capabilities in an integrated and interoperable manner.
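As a hedged illustration of what a uniform tasking interface can look like, the sketch below posts a task in the style of the OGC SensorThings API tasking extension; the server URL, entity id and tasking parameter are hypothetical, and the paper's actual tasking capability description may differ.

```python
import requests

base = "http://example.org/SensorThings/v1.1"  # hypothetical server
task = {
    "TaskingCapability": {"@iot.id": 1},       # hypothetical capability id
    "taskingParameters": {"switch": "on"},     # hypothetical uniform command
}
response = requests.post(base + "/Tasks", json=task, timeout=10)
response.raise_for_status()
# a SensorThings-style server returns the created entity's URL
print("task accepted at:", response.headers.get("Location"))
```

The point of such a description is that the same POST works against any device whose proprietary protocol has been wrapped behind the uniform interface.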
Caching strategies for improving performance of web-based Geographic applications
NASA Astrophysics Data System (ADS)
Liu, M.; Brodzik, M.; Collins, J. A.; Lewis, S.; Oldenburg, J.
2012-12-01
The NASA Operation IceBridge mission collects airborne remote sensing measurements to bridge the gap between NASA's Ice, Cloud and Land Elevation Satellite (ICESat) mission and the upcoming ICESat-2 mission. The IceBridge Data Portal from the National Snow and Ice Data Center provides an intuitive web interface for accessing IceBridge mission observations and measurements. Scientists and users usually do not have knowledge about the individual campaigns but are interested in data collected in a specific place. We have developed a high-performance map interface that allows users to quickly zoom to an area of interest and see any Operation IceBridge overflights. The map interface consists of two layers: a base map layer on which the user can pan and zoom, and a flight line layer overlaying it that shows all the campaign missions intersecting the current map view. The user can click on the flight campaigns and download the data as needed. The OpenGIS® Web Map Service Interface Standard (WMS) provides a simple HTTP interface for requesting geo-registered map images from one or more distributed geospatial databases. The Web Feature Service (WFS) provides an interface for requesting geographical features across the web using platform-independent calls. OpenLayers provides vector support (points, polylines and polygons) to build a WMS/WFS client for displaying both layers on the screen. MapServer, an open source development environment for building spatially enabled internet applications, serves the WMS and WFS spatial data to OpenLayers. Early releases of the portal displayed unacceptably poor load-time performance for flight lines and the base map tiles. This issue was caused by long response times from the map server in generating all map tiles and flight line vectors. We resolved the issue by implementing various caching strategies on top of the WMS and WFS services, including the use of Squid (www.squid-cache.org) to cache frequently-used content. Our presentation includes the architectural design of the application and how we use OpenLayers, WMS and WFS with Squid to build a responsive web application capable of efficiently displaying geospatial data, allowing the user to quickly interact with the displayed information. We describe the design, implementation and performance improvement of our caching strategies, and the tools and techniques developed to assist our data caching strategies.
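A minimal sketch of the kind of WMS GetMap request such a portal issues for its flight line layer; the endpoint URL and layer name are hypothetical. Because such requests are plain HTTP GETs with repeatable URLs, a proxy such as Squid can cache the returned tiles effectively.

```python
import requests

params = {
    "SERVICE": "WMS", "VERSION": "1.1.1", "REQUEST": "GetMap",
    "LAYERS": "flight_lines",      # hypothetical layer name
    "SRS": "EPSG:4326",
    "BBOX": "-180,-90,180,90",
    "WIDTH": "1024", "HEIGHT": "512",
    "FORMAT": "image/png",
    "TRANSPARENT": "TRUE",
}
response = requests.get("http://example.org/cgi-bin/mapserv",  # hypothetical
                        params=params, timeout=30)
response.raise_for_status()
with open("flight_lines.png", "wb") as f:
    f.write(response.content)
```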
Datacube as a Service to Exploit the Full Potential of Data Cloudy Distributed
NASA Astrophysics Data System (ADS)
Mantovani, S.; Natali, S.; Barboni, D.; Hogan, P.; Baumann, P.; Clements, O.
2017-12-01
For almost half a century, satellite platforms devoted to Earth Observation have allowed creating a complete description of the global environment, generating hundreds of petabytes of data. The continuous increase in data availability (and respective data volume), together with the raised awareness of climate change issues, has made people of all kinds (from citizens to decision makers to scientists) sensitive to environmental threats, improving their inclination to invest in monitoring and mitigation activities. Recently, the term "datacube" has received increasing attention for its potential to simplify the provision of "Big Earth Data" services by making massive spatio-temporal data available in an analysis-ready way. A number of datacube-ready platforms have emerged to enable a new collaborative approach to analysing the vast amount of satellite imagery and other Earth Observation data, making it quicker and easier to explore a time series of images stored in global or regional datacubes. In this context, the European Space Agency and European Commission H2020-funded projects ([1], [2]) are bringing together multiple organisations in Europe, Australia and the United States to allow federated data holdings to be analysed using web-based access to petabytes of multidimensional geospatial datasets. In this study, we provide an overview of the existing datacubes (EarthServer-2 datacubes, Sentinel Datacube, European and Australian Landsat Datacubes, …), how the regional datacube structures differ from each other, how datacube interoperability is achieved through the OpenSearch and Web Coverage Service (WCS) standards, and finally how datacube contents can be visualized on a virtual globe (ESA-NASA WebWorldWind) based on a WC(P)S query and manipulated on the fly through web-based interfaces such as the Jupyter notebook. The current study is co-financed by the European Space Agency under the MaaS project (ESRIN Contract No. 4000114186/15/I-LG) and the European Union's Horizon 2020 research and innovation programme under the EarthServer-2 project (Grant Agreement No. 654367). [1] MEA as a Service (http://eodatacube.eu) [2] EarthServer-2 (http://www.earthserver.eu)
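A hedged sketch of how a datacube can be queried through WCPS, the processing language mentioned above; the endpoint, coverage name and axis labels are illustrative (modeled on common rasdaman examples), not taken from the cited projects.

```python
import requests

# WCPS query: a yearly series of monthly values at one point
# (coverage name and axis labels follow common rasdaman examples)
wcps = ('for c in (AvgLandTemp) return encode('
        'c[Lat(53.08), Long(8.80), ansi("2014-01":"2014-12")], "csv")')

response = requests.get("http://example.org/rasdaman/ows", params={
    "service": "WCS", "version": "2.0.1",
    "request": "ProcessCoverages", "query": wcps,
}, timeout=60)
print(response.text)  # CSV values for the selected point and period
```

The subsetting and encoding happen server-side, so only the small extract crosses the network, which is the efficiency argument datacube services make.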
NASA Astrophysics Data System (ADS)
Callahan, P. S.; Wilson, B. D.; Xing, Z.; Raskin, R. G.
2010-12-01
We have developed a web-based system to allow updating and subsetting of TOPEX data. The Altimeter Service will be operated by PODAAC alongside its other oceanographic data offerings. The Service could easily be expanded to other mission data. An Altimeter Service is crucial to the improvement and expanded use of altimeter data. A service is necessary for altimetry because the result of most interest - sea surface height anomaly (SSHA) - is composed of several components that are updated individually and irregularly by specialized experts. This makes it difficult for projects to provide the most up-to-date products. Some components are the subject of ongoing research, so the ability for investigators to make products for comparison or sharing is important. The service will allow investigators/producers to get their component models or processing into widespread use much more quickly. For coastal altimetry, the ability to subset the data to the area of interest and insert specialized models (e.g., tides) or data processing results is crucial. A key part of the Altimeter Service is having data producers provide updated or local models and data. In order for this to succeed, producers need to register their products with the Altimeter Service and provide each product in a form consistent with the service update methods. We will describe the capabilities of the web service and the methods for providing new components. Currently the Service provides TOPEX GDRs with Retracking (RGDRs) in a netCDF format that has been coordinated with Jason data. Users can add new orbits, tide models, gridded geophysical fields such as mean sea surface, and along-track corrections as they become available and are installed by PODAAC. The updated fields are inserted into the netCDF files while the previous values are retained for comparison. The Service will also generate SSH and SSHA. In addition, the Service showcases a feature that plots any variable from netCDF files. The research described here was carried out at the Jet Propulsion Laboratory, California Institute of Technology, under a contract with the National Aeronautics and Space Administration.
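A minimal sketch of the kind of component arithmetic such a service performs when generating SSHA from an updated netCDF file, using the netCDF4 Python library; the file and variable names are hypothetical, and the real RGDR products may use different names and additional corrections.

```python
import netCDF4

ds = netCDF4.Dataset("rgdr_cycle.nc")        # hypothetical file name
ssh = ds.variables["sea_surface_height"][:]  # hypothetical variable names
mss = ds.variables["mean_sea_surface"][:]
tide = ds.variables["ocean_tide"][:]

# anomaly relative to the (possibly user-updated) mean surface and tide model
ssha = ssh - mss - tide
print("mean SSHA:", ssha.mean())
ds.close()
```

Retaining the previous field values alongside the updated ones, as the abstract describes, is what makes before/after comparisons like this possible from a single file.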
Hancock, David; Wilson, Michael; Velarde, Giles; Morrison, Norman; Hayes, Andrew; Hulme, Helen; Wood, A Joseph; Nashar, Karim; Kell, Douglas B; Brass, Andy
2005-11-03
maxdLoad2 is a relational database schema and Java application for microarray experiment annotation and storage. It is compliant with all standards for microarray meta-data capture, including the specification of what data should be recorded, extensive use of standard ontologies and support for data exchange formats. The output from maxdLoad2 is of a form acceptable for submission to the ArrayExpress microarray repository at the European Bioinformatics Institute. maxdBrowse is a PHP web application that makes the contents of maxdLoad2 databases accessible via web-browser, command-line and web-service environments. It thus acts as both a dissemination and data-mining tool. maxdLoad2 presents an easy-to-use interface to an underlying relational database and provides a full complement of facilities for browsing, searching and editing. There is a tree-based visualization of data connectivity and the ability to explore the links between any pair of data elements, irrespective of how many intermediate links lie between them. Its principal novel features are: the flexibility of the meta-data that can be captured, the tools provided for importing data from spreadsheets and other tabular representations, the tools provided for the automatic creation of structured documents, and the ability to browse and access the data via web and web-services interfaces. Within maxdLoad2 it is very straightforward to customise the meta-data that is being captured or change the definitions of the meta-data. These meta-data definitions are stored within the database itself, allowing client software to connect properly to a modified database without having to be specially configured. The meta-data definitions (configuration file) can also be centralized, allowing changes made in response to revisions of standards or terminologies to be propagated to clients without user intervention. maxdBrowse is hosted on a web server and presents multiple interfaces to the contents of maxd databases. maxdBrowse emulates many of the browse and search features available in the maxdLoad2 application via a web-browser. This allows users who are not familiar with maxdLoad2 to browse and export microarray data from the database for their own analysis. The same browse and search features are also available via command-line and SOAP server interfaces. This both enables scripting of data export for embedding in data repositories and analysis environments, and allows access to the maxd databases via web-service architectures. maxdLoad2 http://www.bioinf.man.ac.uk/microarray/maxd/ and maxdBrowse http://dbk.ch.umist.ac.uk/maxdBrowse are portable and compatible with all common operating systems and major database servers. They provide a powerful, flexible package for the annotation of microarray experiments and a convenient dissemination environment. They are available for download and are open-sourced under the Artistic License.
The CARMEN software as a service infrastructure.
Weeks, Michael; Jessop, Mark; Fletcher, Martyn; Hodge, Victoria; Jackson, Tom; Austin, Jim
2013-01-28
The CARMEN platform allows neuroscientists to share data, metadata, services and workflows, and to execute these services and workflows remotely via a Web portal. This paper describes how we implemented a service-based infrastructure in the CARMEN Virtual Laboratory. A Software as a Service framework was developed to allow generic new and legacy code to be deployed as services on a heterogeneous execution framework. Users can submit analysis code, typically written in Matlab, Python, C/C++ or R, as non-interactive standalone command-line applications and wrap it as services in a form suitable for deployment on the platform. The CARMEN Service Builder tool enables neuroscientists to quickly wrap their analysis software for deployment to the CARMEN platform as a service, without knowledge of the service framework or the CARMEN system. A metadata schema describes each service in terms of both system and user requirements. The search functionality allows services to be quickly discovered from the many services available. Within the platform, services may be combined into more complicated analyses using the workflow tool. CARMEN and the service infrastructure are targeted towards the neuroscience community; however, it is a generic platform and can be targeted towards any discipline.
A Web-Based Database for Nurse Led Outreach Teams (NLOT) in Toronto.
Li, Shirley; Kuo, Mu-Hsing; Ryan, David
2016-01-01
A web-based system can provide access to real-time data and information. Healthcare is moving towards digitizing patients' medical information and securely exchanging it through web-based systems. In one of Ontario's health regions, Nurse Led Outreach Teams (NLOT) provide emergency mobile nursing services to help reduce unnecessary transfers from long-term care homes to emergency departments. Currently the NLOT team uses a Microsoft Access database to keep track of the health information on the residents that they serve. The Access database lacks scalability, portability, and interoperability. The objective of this study is the development of a web-based database using Oracle Application Express that is easily accessible from mobile devices. The web-based database will allow NLOT nurses to enter and access resident information anytime and from anywhere.
Lightweight Advertising and Scalable Discovery of Services, Datasets, and Events Using Feedcasts
NASA Astrophysics Data System (ADS)
Wilson, B. D.; Ramachandran, R.; Movva, S.
2010-12-01
Broadcast feeds (Atom or RSS) are a mechanism for advertising the existence of new data objects on the web, with metadata and links to further information. Users then subscribe to the feed to receive updates. This concept has already been used to advertise the new granules of science data as they are produced (datacasting), with browse images and metadata, and to advertise bundles of web services (service casting). Structured metadata is introduced into the XML feed format by embedding new XML tags (in defined namespaces), using typed links, and reusing built-in Atom feed elements. This “infocasting” concept can be extended to include many other science artifacts, including data collections, workflow documents, topical geophysical events (hurricanes, forest fires, etc.), natural hazard warnings, and short articles describing a new science result. The common theme is that each infocast contains machine-readable, structured metadata describing the object and enabling further manipulation. For example, service casts contain typed links pointing to the service interface description (e.g., WSDL for SOAP services), the service endpoint, and human-readable documentation. Our Infocasting project has three main goals: (1) define and evangelize micro-formats (metadata standards) so that providers can easily advertise their web services, datasets, and topical geophysical events by adding structured information to broadcast feeds; (2) develop authoring tools so that anyone can easily author such service advertisements, data casts, and event descriptions; and (3) provide a one-stop, Google-like search box in the browser that allows discovery of service, data and event casts visible on the web, and of services and data registered in the GEOSS repository and other NASA repositories (GCMD and ECHO). To demonstrate the event casting idea, a series of micro-articles, with accompanying event casts containing links to relevant datasets, web services, and science analysis workflows, will be authored for several kinds of geophysical events, such as hurricanes, smoke plume events, and tsunamis. The talk will describe our progress so far and some of the issues with leveraging existing metadata standards to define lightweight micro-formats.
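A minimal sketch of consuming such a feed with the feedparser library, reading the typed links that make a service cast machine-actionable; the feed URL is hypothetical.

```python
import feedparser

feed = feedparser.parse("http://example.org/servicecast.atom")  # hypothetical
for entry in feed.entries:
    print(entry.title)
    # typed links (rel attribute) are what let a client pick out the
    # machine-readable artifacts: interface description, endpoint, docs
    for link in entry.get("links", []):
        print("  ", link.get("rel"), "->", link.get("href"))
```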
Dynamic selection mechanism for quality of service aware web services
NASA Astrophysics Data System (ADS)
D'Mello, Demian Antony; Ananthanarayana, V. S.
2010-02-01
A web service is an interface to a software component that can be accessed through standard Internet protocols. Web service technology enables application-to-application communication and interoperability. The increasing number of web service providers throughout the globe has produced numerous web services providing the same or similar functionality. This necessitates tools and techniques to search for suitable services available over the Web. UDDI (universal description, discovery and integration) was the first initiative for finding suitable web services based on the requester's functional demands. However, the requester's requirements may also include non-functional aspects such as quality of service (QoS). In this paper, the authors define a QoS model for QoS-aware and business-driven web service publishing and selection. The authors propose a QoS requirement format that lets requesters specify their complex demands on QoS for web service selection. They define a tree structure called the quality constraint tree (QCT) to represent the requester's variety of requirements on QoS properties with varied preferences. The paper proposes a QoS-broker-based architecture for web service selection, which lets requesters specify their QoS requirements in order to select the qualitatively optimal web service. A web service selection algorithm is presented that ranks functionally similar web services based on the degree of satisfaction of the requester's QoS requirements and preferences. The paper defines web service provider qualities to distinguish qualitatively competitive web services. It also presents the modelling and selection mechanism for the requester's alternative constraints defined on the QoS. The authors implement the QoS-broker-based system to prove the correctness of the proposed web service selection mechanism.
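A minimal sketch of the general idea of preference-weighted QoS ranking; the weighting scheme below is an illustration of the concept, not the paper's QCT-based selection algorithm.

```python
def rank_services(candidates, weights):
    """Rank functionally similar services by weighted QoS satisfaction.

    candidates: {name: {qos_property: score normalized to [0, 1]}}
    weights:    {qos_property: requester preference weight, summing to 1}
    """
    def score(qos):
        return sum(w * qos.get(p, 0.0) for p, w in weights.items())
    return sorted(candidates, key=lambda n: score(candidates[n]), reverse=True)

services = {
    "svcA": {"availability": 0.99, "throughput": 0.70, "cost": 0.40},
    "svcB": {"availability": 0.95, "throughput": 0.90, "cost": 0.80},
}
prefs = {"availability": 0.5, "throughput": 0.3, "cost": 0.2}
print(rank_services(services, prefs))  # ['svcB', 'svcA'] for these numbers
```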
NASA Astrophysics Data System (ADS)
Kilb, D. L.; Fundis, A. T.; Risien, C. M.
2012-12-01
The focus of the Education and Public Engagement (EPE) component of the NSF's Ocean Observatories Initiative (OOI) is to provide a new layer of cyber-interactivity for undergraduate educators to bring near real-time data from the global ocean into learning environments. To accomplish this, we are designing six online services: 1) visualization tools, 2) a lesson builder, 3) a concept map builder, 4) educational web services (middleware), 5) collaboration tools and 6) an educational resource database. Here, we report on our Fall 2012 release, which includes the first four of these services. 1) Interactive visualization tools allow users to interactively select data of interest, display the data in various views (e.g., maps, time-series and scatter plots) and obtain statistical measures such as the mean, standard deviation and a regression line fit to selected data. Specific visualization tools include a tool to compare different months of data, a time series explorer tool to investigate the temporal evolution of selected data parameters (e.g., sea water temperature or salinity), a glider profile tool that displays ocean glider tracks and associated transects, and a data comparison tool that allows users to view the data either in a scatter plot view comparing one parameter with another, or in a time series view. 2) Our interactive lesson builder tool allows users to develop a library of online lesson units, which are collaboratively editable and sharable, and provides starter templates grounded in learning theory. 3) Our interactive concept map tool allows the user to build and use concept maps, a graphical interface to map the connections between concepts and ideas. This tool also provides semantic-based recommendations and allows for embedding of associated resources such as movies, images and blogs. 4) Educational web services (middleware) will provide an educational resource database API.
Dynamic User Interfaces for Service Oriented Architectures in Healthcare.
Schweitzer, Marco; Hoerbst, Alexander
2016-01-01
Electronic Health Records (EHRs) play a crucial role in healthcare today. From a data-centric view, EHRs are very advanced, as they provide and share healthcare data in a cross-institutional, patient-centered way with high syntactic and semantic interoperability. However, the EHR functionalities available to end users are sparse and hence often limited to basic document query functions. Future EHR use requires the ability to let users define which data they need in a given situation and how those data should be processed. Workflow and semantic modelling approaches as well as Web services provide means to fulfil such a goal. This thesis develops concepts for dynamic interfaces between EHR end users and a service-oriented eHealth infrastructure, which allow users to design their flexible EHR needs, modeled in a dynamic and formal way. These models are then used to discover, compose and execute the right Semantic Web services.
Web based aphasia test using service oriented architecture (SOA)
NASA Astrophysics Data System (ADS)
Voos, J. A.; Vigliecca, N. S.; Gonzalez, E. A.
2007-11-01
Based on an aphasia test for Spanish speakers that analyzes a patient's basic resources of verbal communication, web-enabled software was developed to automate its execution. A clinical database was designed as a complement, in order to evaluate the antecedents (risk factors, pharmacological and medical backgrounds, neurological or psychiatric symptoms, brain injury and its anatomical and physiological characteristics, etc.) that are necessary to carry out a multi-factor statistical analysis on different samples of patients. The automated test was developed following a service-oriented architecture and implemented in a web site containing a test suite, which would allow both integrating the aphasia test with other neuropsychological instruments and increasing the information available on the site for scientific research. The test design, the database and the study of its psychometric properties (validity, reliability and objectivity) were carried out in conjunction with neuropsychological researchers, who participated actively in the software design, based on feedback from the patients and other research participants.
Army Communicator. Volume 36, Number 2, Summer 2011
2011-06-01
Portal was utilized to provide an integrated platform for visualization of operational data derived from the CIDNE SIGACT database. Content staging...information across multiple platforms and services. Preliminary efforts during my tour were underway with SIGACT web services that allow data ...all content and data in a CM system, and the CM system itself, is fully protected against accidental deletion or corruption. Conclusion As we
ERIC Educational Resources Information Center
Karsenti, Thierry; Garnier, Yves Daniel
2002-01-01
A pilot project that created a school Web site and e-mail service demonstrated the importance of information technology to Montreal (Quebec) 4th-grade students and their teachers. E-mail allowed teachers to have more efficient and less time-consuming communications with parents and allowed parents to more closely engage in their children's school…
NASA Astrophysics Data System (ADS)
Baldwin, R.; Ansari, S.; Reid, G.; Lott, N.; Del Greco, S.
2007-12-01
The main goal in developing and deploying Geographic Information System (GIS) services at NOAA's National Climatic Data Center (NCDC) is to provide users with simple access to data archives while integrating new and informative climate products. Several systems at NCDC provide a variety of climatic data in GIS formats and/or map viewers. The Online GIS Map Services provide users with data discovery options which flow into detailed product selection maps, which may be queried using standard "region finder" tools or gazetteer (geographical dictionary search) functions. Each tabbed selection offers steps to help users progress through the systems. A series of additional base map layers or data types have been added to provide companion information. New map services include: Severe Weather Data Inventory, Local Climatological Data, Divisional Data, Global Summary of the Day, and Normals/Extremes products. THREDDS Data Server technology is utilized to provide access to gridded multidimensional datasets such as model, satellite and radar data. This access allows users to download data as a gridded NetCDF file, which is readable by ArcGIS. In addition, users may subset the data for a specific geographic region, time period, height range or variable prior to download. The NCDC Weather Radar Toolkit (WRT) is a client tool which accesses Weather Surveillance Radar 1988 Doppler (WSR-88D) data locally or remotely from the NCDC archive, NOAA FTP server or any URL or THREDDS Data Server. The WRT Viewer provides tools for custom data overlays, Web Map Service backgrounds, animations and basic filtering. The export of images and movies is provided in multiple formats. The WRT Data Exporter allows for data export in both vector polygon (Shapefile, Well-Known Text) and raster (GeoTIFF, ESRI Grid, VTK, NetCDF, GrADS) formats. As more users become accustomed to GIS, questions of better, cheaper, faster access soon follow. Expanding use and availability can best be accomplished through standards which promote interoperability. Our GIS-related products provide Open Geospatial Consortium (OGC) compliant Web Map Services (WMS), Web Feature Services (WFS), Web Coverage Services (WCS) and Federal Geographic Data Committee (FGDC) metadata as a complement to the map viewers. KML/KMZ data files (soon to be compliant OGC specifications) also provide access.
iDEAS: A web-based system for dry eye assessment.
Remeseiro, Beatriz; Barreira, Noelia; García-Resúa, Carlos; Lira, Madalena; Giráldez, María J; Yebra-Pimentel, Eva; Penedo, Manuel G
2016-07-01
Dry eye disease is a public health problem whose multifactorial etiology challenges clinicians and researchers, making collaboration between different experts and centers necessary. The evaluation of the interference patterns observed in the tear film lipid layer is a common clinical test used for dry eye diagnosis. However, it is a time-consuming task with a high degree of intra- as well as inter-observer variability, which makes the use of a computer-based analysis system highly desirable. This work introduces iDEAS (Dry Eye Assessment System), a web-based application to support dry eye diagnosis. iDEAS provides a framework for eye care experts to work collaboratively using image-based services in a distributed environment. It is composed of three main components: the web client for user interaction, the web application server for request processing, and the service module for image analysis. Specifically, this manuscript presents two automatic services: tear film classification, which classifies an image into one interference pattern; and tear film map, which illustrates the distribution of the patterns over the entire tear film. iDEAS has been evaluated by specialists from different institutions to test its performance. Both services have been evaluated in terms of a set of performance metrics using the annotations of different experts, and the processing time of both services has also been measured for efficiency purposes. iDEAS is a web-based application which provides a fast, reliable environment for dry eye assessment. The system allows practitioners to share images, clinical information and automatic assessments between remote computers. Additionally, it saves time for experts, diminishes inter-expert variability and can be used in both clinical and research settings. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.
rasdaman Array Database: current status
NASA Astrophysics Data System (ADS)
Merticariu, George; Toader, Alexandru
2015-04-01
rasdaman (Raster Data Manager) is a Free Open Source Array Database Management System which provides functionality for storing and processing massive amounts of raster data in the form of multidimensional arrays. The user can access, process and delete the data using SQL. The key features of rasdaman are: flexibility (datasets of any dimensionality can be processed with the help of SQL queries), scalability (rasdaman's distributed architecture enables it to seamlessly run on cloud infrastructures while offering an increase in performance as computation resources increase), performance (real-time access, processing, mixing and filtering of arrays of any dimensionality) and reliability (the legacy communication protocol has been replaced with a new one based on cutting-edge technology - Google Protocol Buffers and ZeroMQ). The data the system works with include 1D time series, 2D remote sensing imagery, 3D image time series, 3D geophysical data, and 4D atmospheric and climate data. Most of these representations cannot be stored as raw arrays alone, as location information about the contents is also needed for correct geopositioning on Earth; such data are defined by ISO 19123 as coverage data. rasdaman provides coverage data support through the Petascope service. Extensions were added on top of rasdaman in order to provide support for the geoscience community. The following OGC standards are currently supported: Web Map Service (WMS), Web Coverage Service (WCS), and Web Coverage Processing Service (WCPS). The Web Map Service is an extension which provides zoom and pan navigation over images provided by a map server. Starting with version 9.1, rasdaman supports WMS version 1.3. The Web Coverage Service provides capabilities for downloading multi-dimensional coverage data. Support is also provided for several extensions of this service: the Subsetting Extension, the Scaling Extension and, starting with version 9.1, the Transaction Extension, which defines request types for inserting, updating and deleting coverages. A web client, designed for both novice and experienced users, is also available for the service and its extensions. The client offers an intuitive interface that allows users to work with multi-dimensional coverages by abstracting away the specifics of the standard request definitions. The Web Coverage Processing Service defines a language for on-the-fly processing and filtering of multi-dimensional raster coverages. rasdaman exposes this service through the WCS processing extension. Demonstrations are provided online via the Earthlook website (earthlook.org), which presents use-cases from a wide variety of application domains, using the rasdaman system as the processing engine.
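A hedged sketch of the SQL-style access rasdaman provides, invoking the rasql command-line client from Python; "mr2" is a rasdaman demo collection used here illustratively, and flags may vary between versions.

```python
import subprocess

# select a spatial subset of a 2D collection and encode it as PNG
query = 'select encode(c[0:255, 0:210], "png") from mr2 as c'
subprocess.run(["rasql", "-q", query, "--out", "file"], check=True)
# the client writes the result to a file such as rasql_1.png
```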
EnviroAtlas - Austin, TX - Tree Cover Configuration and Connectivity, Water Background Web Service
This EnviroAtlas web service supports research and online mapping activities related to EnviroAtlas (https://enviroatlas.epa.gov/EnviroAtlas). The EnviroAtlas Austin, TX tree cover configuration and connectivity map categorizes forest land cover into structural elements (e.g. core, edge, connector, etc.). In this community, Forest is defined as Trees & Forest (Trees & Forest - 40 = 1; All Else = 0). Water was considered background (value 129) during the analysis to create this dataset; however, it has been converted to value 10 to distinguish it from land area background. This dataset was produced by the US EPA to support research and online mapping activities related to EnviroAtlas. EnviroAtlas (https://www.epa.gov/enviroatlas) allows the user to interact with a web-based, easy-to-use, mapping application to view and analyze multiple ecosystem services for the contiguous United States. The dataset is available as downloadable data (https://edg.epa.gov/data/Public/ORD/EnviroAtlas) or as an EnviroAtlas map service. Additional descriptive information about each attribute in this dataset can be found in its associated EnviroAtlas Fact Sheet (https://www.epa.gov/enviroatlas/enviroatlas-fact-sheets).
Semantic Integration for Marine Science Interoperability Using Web Technologies
NASA Astrophysics Data System (ADS)
Rueda, C.; Bermudez, L.; Graybeal, J.; Isenor, A. W.
2008-12-01
The Marine Metadata Interoperability Project, MMI (http://marinemetadata.org), promotes the exchange, integration, and use of marine data through enhanced data publishing, discovery, documentation, and accessibility. A key effort is the definition of an Architectural Framework and Operational Concept for Semantic Interoperability (http://marinemetadata.org/sfc), which is complemented by the development of tools that realize critical use cases in semantic interoperability. In this presentation, we describe a set of such Semantic Web tools that support important interoperability tasks, ranging from the creation of controlled vocabularies and the mapping of terms across multiple ontologies, to the online registration, storage, and search services needed to work with the ontologies (http://mmisw.org). This set of services uses Web standards and technologies, including the Resource Description Framework (RDF), the Web Ontology Language (OWL), Web services, and toolkits for Rich Internet Application development. We will describe the following components: MMI Ontology Registry: The MMI Ontology Registry and Repository provides registry and storage services for ontologies. Entries in the registry are associated with projects defined by the registered users. Sophisticated search functions, for example by metadata items and vocabulary terms, are also provided. Client applications can submit search requests using the W3C SPARQL Query Language for RDF. Voc2RDF: This component converts an ASCII comma-delimited set of terms and definitions into an RDF file. Voc2RDF facilitates the creation of controlled vocabularies by using a simple form-based user interface. Created vocabularies and their descriptive metadata can be submitted to the MMI Ontology Registry for versioning and community access. VINE: The Vocabulary Integration Environment component allows the user to map vocabulary terms across multiple ontologies. Various relationships can be established, for example exactMatch, narrowerThan, and subClassOf. VINE can compute inferred mappings based on the given associations. Attributes about each mapping, like comments and a confidence level, can also be included. VINE also supports registering and storing the resulting mapping files in the Ontology Registry. The presentation will describe the application of semantic technologies in general, and our planned applications in particular, to solve data management problems in the marine and environmental sciences.
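A minimal sketch of the kind of SPARQL request a client application could submit to such a registry, following the W3C SPARQL protocol; the endpoint URL and the use of skos:definition are illustrative assumptions.

```python
import requests

query = """
SELECT ?term ?definition WHERE {
  ?term <http://www.w3.org/2004/02/skos/core#definition> ?definition .
}
LIMIT 10
"""
response = requests.get("http://example.org/sparql",  # hypothetical endpoint
                        params={"query": query},
                        headers={"Accept": "application/sparql-results+json"},
                        timeout=30)
for row in response.json()["results"]["bindings"]:
    print(row["term"]["value"], "-", row["definition"]["value"])
```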
Cloud-based MOTIFSIM: Detecting Similarity in Large DNA Motif Data Sets.
Tran, Ngoc Tam L; Huang, Chun-Hsi
2017-05-01
We developed the cloud-based MOTIFSIM on the Amazon Web Services (AWS) cloud. The tool is an extended version of our web-based tool version 2.0, which was developed based on a novel algorithm for detecting similarity in multiple DNA motif data sets. This cloud-based version further allows researchers to exploit the computing resources available from AWS to detect similarity in multiple large-scale DNA motif data sets resulting from next-generation sequencing technology. The tool is highly scalable thanks to the expandable resources of AWS.
Molray--a web interface between O and the POV-Ray ray tracer.
Harris, M; Jones, T A
2001-08-01
A publicly available web-based interface is presented for producing high-quality ray-traced images and movies from the molecular-modelling program O [Jones et al. (1991), Acta Cryst. A47, 110-119]. The interface allows the user to select O-plot files and set parameters to create standard input files for the popular ray-tracing renderer POV-Ray, which can then produce publication-quality still images or simple movies. To ensure ease of use, we have made this service available to the O user community via the World Wide Web. The public Molray server is available at http://xray.bmc.uu.se/molray.
Session management for web-based healthcare applications.
Wei, L.; Sengupta, S.
1999-01-01
In health care systems, users may access multiple applications during one session of interaction with the system. However, users must sign on to each application individually, and it is difficult to maintain a common context among these applications. We are developing a session management system for web-based applications using an LDAP directory service, which will allow single sign-on to multiple web-based applications and maintain a common context among those applications for the user. This paper discusses the motivations for building this system, the system architecture, and the challenges of our approach, such as managing the user's session objects and session security. PMID:10566511
Personalization of Rule-based Web Services.
Choi, Okkyung; Han, Sang Yong
2008-04-04
Nowadays, Web users have clearly expressed their wish to receive personalized services directly. Personalization is the way to tailor services directly to the immediate requirements of the user. However, the current Web Services System does not provide features supporting this, such as personalization of services and intelligent matchmaking. This research proposes a flexible, personalized Rule-based Web Services System that addresses these problems and enables efficient search, discovery and construction across general Web documents and Semantic Web documents. The system performs matchmaking among service requesters', service providers' and users' preferences using a Rule-based Search Method, and subsequently ranks the search results. A prototype of efficient Web Services search and construction for the suggested system has been developed based on the current work.
Adapting the CUAHSI Hydrologic Information System to OGC standards
NASA Astrophysics Data System (ADS)
Valentine, D. W.; Whitenack, T.; Zaslavsky, I.
2010-12-01
The CUAHSI Hydrologic Information System (HIS) provides web and desktop client access to hydrologic observations via water data web services using an XML schema called “WaterML”. The WaterML 1.x specification and the corresponding Water Data Services have been the backbone of the HIS service-oriented architecture (SOA) and have been adopted for serving hydrologic data by several federal agencies and many academic groups. The central discovery service, HIS Central, is based on a metadata catalog that references 4.7 billion observations, organized as 23 million data series from 1.5 million sites and 51 organizations. Observations data are published using HydroServer nodes that have been deployed at 18 organizations. Usage of HIS increased eightfold from 2008 to 2010, doubling from 1600 data series a day in 2009 to 3600 data series a day in the first half of 2010. The HIS Central metadata catalog currently harvests information from 56 Water Data Services. We collaborate on the catalog updates with two federal partners, USGS and US EPA: their data series are periodically reloaded into the HIS metadata catalog. We are pursuing two main development directions in the HIS project: cloud-based computing, and further compliance with Open Geospatial Consortium (OGC) standards. The goal of moving to cloud computing is to provide a scalable collaborative system with simpler deployment and less dependence on hardware maintenance and staff. This move requires re-architecting the information models underlying the metadata catalog and Water Data Services to be independent of the underlying relational database model, allowing for implementation on both relational databases and cloud-based processing systems. Cloud-based HIS Central resources can be managed collaboratively; partners share responsibility for their metadata by publishing data series information into the centralized catalog. Publishing data series will use REST-based service interfaces, like OData, as the basis for ingesting data series information into a cloud-hosted catalog. The future HIS services involve providing information via OGC standards that will allow observational data access from commercial GIS applications. Use of standards will allow tools to access observational data from other projects, such as the Ocean Observatories Initiative, and tools from such projects to be integrated into the HIS toolset. With international collaborators, we have been developing a water information exchange language called “WaterML 2.0” which will be used to deliver observations data over OGC Sensor Observation Services (SOS). A software stack of OGC standard services will provide access to HIS information. In addition to SOS, Web Map and Web Feature Services (WMS and WFS) will provide access to location information. Catalog Services for the Web (CSW) will provide a catalog for water information that is both centralized and distributed. We intend the OGC standards to supplement the existing HIS service interfaces rather than replace them. The ultimate goal of this development is to expand access to hydrologic observations data and create an environment where these data can be seamlessly integrated with standards-compliant data resources.
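A hedged sketch of fetching and inspecting a WaterML-style response over HTTP; the service URL, operation name and parameter codes are hypothetical stand-ins for a HydroServer water data service.

```python
import requests
import xml.etree.ElementTree as ET

response = requests.get("http://example.org/waterml/GetValues",  # hypothetical
                        params={"site": "NWIS:10109000",   # hypothetical codes
                                "variable": "NWIS:00060"},
                        timeout=30)
response.raise_for_status()
root = ET.fromstring(response.content)
# list the distinct element names to see the WaterML document structure
print(sorted({elem.tag.split("}")[-1] for elem in root.iter()}))
```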
Cross-Dataset Analysis and Visualization Driven by Expressive Web Services
NASA Astrophysics Data System (ADS)
Alexandru Dumitru, Mircea; Catalin Merticariu, Vlad
2015-04-01
The deluge of data hitting us every day from satellite and airborne sensors is changing the workflow of environmental data analysts and modelers. Web geo-services now play a fundamental role: the data no longer need to be downloaded and stored beforehand; instead, services interact in real time with GIS applications. Because of the very large amount of data curated and made available by web services, it is crucial to deploy smart solutions for optimizing network bandwidth, reducing duplication of data and moving the processing closer to the data. In this context we have created a visualization application for analysis and cross-comparison of aerosol optical thickness datasets. The application aims to help researchers identify and visualize discrepancies between datasets coming from various sources with different spatial and time resolutions. It also acts as a proof of concept for the integration of OGC Web Services under a user-friendly interface that provides beautiful visualizations of the explored data. The tool was built on top of the World Wind engine, a Java-based virtual globe built by NASA and the open source community. For data retrieval and processing we exploited the potential of the OGC Web Coverage Service, the most exciting aspect being its processing extension, the OGC Web Coverage Processing Service (WCPS) standard. A WCPS-compliant service allows a client to execute a processing query on any coverage offered by the server. By exploiting a full grammar, several different kinds of information can be retrieved from one or more datasets together: scalar condensers, cross-sectional profiles, comparison maps and plots, etc. This combination of technologies made the application versatile and portable. As the processing is done on the server side, we ensured that a minimal amount of data is transferred and that the processing is done on a fully capable server, leaving the client hardware resources to be used for rendering the visualization. The application offers a set of features to visualize and cross-compare the datasets. Users can select a region of interest in space and time on which an aerosol map layer is plotted. Hovmoeller time-latitude and time-longitude profiles can be displayed by selecting orthogonal cross-sections on the globe. Statistics about the selected dataset are also displayed in different text and plot formats. The datasets can also be cross-compared using either the delta map tool or the merged map tool. For more advanced users, a WCPS query console is also offered, allowing users to process their data with ad-hoc queries and then choose how to display the results. Overall, the user has a rich set of tools to visualize and cross-compare the aerosol datasets. With our application we have shown how the NASA WorldWind framework can be used to display results processed efficiently - and entirely - on the server side using the expressiveness of the OGC WCPS web service. The application serves not only as a proof of concept of a new paradigm for working with large geospatial data but also as a useful tool for environmental data analysts.
Metadata Authoring with Versatility and Extensibility
NASA Technical Reports Server (NTRS)
Pollack, Janine; Olsen, Lola
2004-01-01
NASA's Global Change Master Directory (GCMD) assists the scientific community in the discovery of and linkage to Earth science data sets and related services. The GCMD holds over 13,800 data set descriptions in Directory Interchange Format (DIF) and 700 data service descriptions in Service Entry Resource Format (SERF), encompassing the disciplines of geology, hydrology, oceanography, meteorology, and ecology. Data descriptions also contain geographic coverage information and direct links to the data, thus allowing researchers to discover data pertaining to a geographic location of interest, then quickly acquire those data. The GCMD strives to be the preferred data locator for world-wide directory-level metadata. In this vein, scientists and data providers must have access to intuitive and efficient metadata authoring tools. Existing GCMD tools are attracting widespread usage; however, a need for tools that are portable, customizable and versatile still exists. With tool usage directly influencing metadata population, it has become apparent that new tools are needed to fill these voids. As a result, the GCMD has released a new authoring tool allowing for both web-based and stand-alone authoring of descriptions. Furthermore, this tool incorporates the ability to plug-and-play the metadata format of choice, offering users options of DIF, SERF, FGDC, ISO or any other defined standard. By allowing data holders to work with their preferred format, as well as offering the option of a stand-alone application or web-based environment, docBUILDER will assist the scientific community in efficiently creating quality data and services metadata.
Distributed spatial information integration based on web service
NASA Astrophysics Data System (ADS)
Tong, Hengjian; Zhang, Yun; Shao, Zhenfeng
2008-10-01
Spatial information systems and spatial information in different geographic locations usually belong to different organizations. They are distributed and often heterogeneous and independent from each other. This leads to the fact that many isolated spatial information islands are formed, reducing the efficiency of information utilization. In order to address this issue, we present a method for effective spatial information integration based on web services. The method applies asynchronous invocation of web services and dynamic invocation of web services to implement distributed, parallel execution of web map services. All isolated information islands are connected by the dispatcher of web services and its registration database to form a uniform collaborative system. According to the web service registration database, the dispatcher of web services can dynamically invoke each web map service through an asynchronous delegating mechanism. All of the web map services can be executed at the same time. When each web map service is done, an image is returned to the dispatcher. After all of the web services are done, all images are transparently overlaid together in the dispatcher. Thus, users can browse and analyze the integrated spatial information. Experiments demonstrate that the utilization rate of spatial information resources is significantly raised through the proposed method of distributed spatial information integration.
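A minimal sketch of the dispatch pattern described above: several WMS endpoints invoked in parallel, with the returned images transparently overlaid; the endpoint URLs and layer parameters are hypothetical, and Pillow stands in for the dispatcher's compositing step.

```python
import io
from concurrent.futures import ThreadPoolExecutor

import requests
from PIL import Image  # Pillow

endpoints = ["http://example.org/wms1", "http://example.org/wms2"]  # hypothetical
params = {"SERVICE": "WMS", "VERSION": "1.1.1", "REQUEST": "GetMap",
          "LAYERS": "default", "SRS": "EPSG:4326", "BBOX": "-180,-90,180,90",
          "WIDTH": "512", "HEIGHT": "256", "FORMAT": "image/png",
          "TRANSPARENT": "TRUE"}

def get_map(url):
    r = requests.get(url, params=params, timeout=30)
    r.raise_for_status()
    return Image.open(io.BytesIO(r.content)).convert("RGBA")

with ThreadPoolExecutor() as pool:   # all map services execute at the same time
    images = list(pool.map(get_map, endpoints))

composite = images[0]
for layer in images[1:]:             # transparently overlay the returned images
    composite = Image.alpha_composite(composite, layer)
composite.save("integrated_map.png")
```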
Schuurman, Nadine; Leight, Margo; Berube, Myriam
2008-01-01
Background The creation of successful health policy and location of resources increasingly relies on evidence-based decision-making. The development of intuitive, accessible tools to analyse, display and disseminate spatial data potentially provides the basis for sound policy and resource allocation decisions. As health services are rationalized, the development of tools such as graphical user interfaces (GUIs) is especially valuable, as they assist decision makers in allocating resources so that the maximum number of people are served. GIS can be used to develop GUIs that enable spatial decision making. Results We have created a Web-based GUI (wGUI) to assist health policy makers and administrators in the Canadian province of British Columbia in making well-informed decisions about the location and allocation of time-sensitive service capacities in rural regions of the province. This tool integrates datasets for existing hospitals and services, regional populations and road networks to allow users to ascertain the percentage of the population in any given service catchment that is served by a specific health service, or by baskets of linked services. The wGUI allows policy makers to map trauma and obstetric services against rural populations within pre-specified travel distances, illustrating service capacity by region. Conclusion The wGUI can be used by health policy makers and administrators with little or no formal GIS training to visualize multiple health resource allocation scenarios. The GUI is poised to become a critical decision-making tool, especially as evidence is increasingly required for the distribution of health services. PMID:18793428
EarthServer2: The Marine Data Service - Web-based and Programmatic Access to Ocean Colour Open Data
NASA Astrophysics Data System (ADS)
Clements, Oliver; Walker, Peter
2017-04-01
The ESA Ocean Colour - Climate Change Initiative (ESA OC-CCI) has produced a long-term, high-quality global dataset with associated per-pixel uncertainty data. This dataset has now grown to several hundred terabytes (uncompressed) and is freely available to download. However, the sheer size of the dataset can act as a barrier to many users; the network bandwidth, local storage and processing requirements can prevent researchers without the backing of a large organisation from taking advantage of the raw data. The EC H2020 project EarthServer2 aims to create a federated data service providing access to more than a petabyte of earth science data. Within this federation the Marine Data Service already provides an innovative online toolkit for filtering, analysing and visualising OC-CCI data. Data are made available, filtered and processed at source through standards-based interfaces, the Open Geospatial Consortium Web Coverage Service and Web Coverage Processing Service. This work was initiated in the EC FP7 EarthServer project, where it was found that the unfamiliarity and complexity of these interfaces themselves created a barrier to wider uptake. The continuation project, EarthServer2, addresses these issues by providing higher-level tools for working with these data. We will present some examples of these tools. Many researchers wish to extract time series data from discrete points of interest. We will present a web-based interface, based on NASA/ESA WebWorldWind, for selecting points of interest and plotting time series from a chosen dataset. In addition, a CSV file of locations and times, such as a ship's track, can be uploaded, the corresponding points extracted, and the result returned in a CSV file, allowing researchers to work with the extract locally, for example in a spreadsheet. We will also present a set of Python and JavaScript APIs that have been created to complement and extend the web-based GUI. These APIs allow the selection of single points and areas for extraction. The extracted data are returned as structured data (for instance a Python array) which can then be passed directly to local processing code. We will highlight how the libraries can be used by the community and integrated into existing systems, for instance through the use of Jupyter notebooks to share Python code examples which can then be used by other researchers as a basis for their own work.
An Open Source Tool to Test Interoperability
NASA Astrophysics Data System (ADS)
Bermudez, L. E.
2012-12-01
Scientists interact with information at various levels, from gathering raw observed data to accessing portrayed, processed, quality-controlled data. Geoinformatics tools help scientists with the acquisition, storage, processing, dissemination and presentation of geospatial information. Most of the interactions occur in a distributed environment between software components that take the role of either client or server. The communication between components includes protocols, message encodings and error handling. Testing these communication components is important to guarantee proper implementation of standards. The communication between clients and servers can be ad hoc or follow standards. Following standards increases interoperability between components while reducing the time needed to develop new software. The Open Geospatial Consortium (OGC) not only coordinates the development of standards but also provides, within the Compliance Testing Program (CITE), a testing infrastructure to test clients and servers. The OGC Web-based Test Engine Facility, based on TEAM Engine, allows developers to test Web services and clients for correct implementation of OGC standards. TEAM Engine is an open source Java facility, available on SourceForge, that can be run via the command line, deployed in a web servlet container or integrated into a developer's environment via Maven. TEAM Engine uses the Compliance Test Language (CTL) and TestNG to test HTTP requests, SOAP services and XML instances against schema- and Schematron-based assertions for any type of web service, not only OGC services. For example, the OGC Web Feature Service (WFS) 1.0.0 test has more than 400 test assertions. Some of these assertions include conformance of HTTP responses; conformance of GML-encoded data; proper values for elements and attributes in the XML; and correct error responses. This presentation will provide an overview of TEAM Engine, an introduction to testing via the OGC testing web site, and a description of how to perform local tests. It will also provide information about how to participate in the open source development of TEAM Engine.
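To illustrate the kind of assertion such a compliance test makes, here is a minimal Python sketch (a toy analogue, not TEAM Engine or CTL itself) that requests a WFS capabilities document and checks the HTTP and XML envelope; the endpoint URL is a placeholder.

    # Toy analogue of a compliance assertion: request a WFS capabilities
    # document and check the response envelope. The endpoint is hypothetical.
    import requests
    from lxml import etree

    url = "https://example.org/wfs"  # hypothetical WFS endpoint under test
    resp = requests.get(url, params={
        "service": "WFS", "version": "1.0.0", "request": "GetCapabilities"
    }, timeout=30)

    # Assertion 1: conformant HTTP response
    assert resp.status_code == 200, "expected HTTP 200"
    assert "xml" in resp.headers.get("Content-Type", ""), "expected an XML body"

    # Assertion 2: the XML parses and has the expected root element
    root = etree.fromstring(resp.content)
    assert etree.QName(root).localname == "WFS_Capabilities", \
        "root element should be WFS_Capabilities"
    print("basic capability assertions passed")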
Mobile and Accessible Learning for MOOCs
ERIC Educational Resources Information Center
Sharples, Mike; Kloos, Carlos Delgado; Dimitriadis, Yannis; Garlatti, Serge; Specht, Marcus
2015-01-01
Many modern web-based systems provide a "responsive" design that allows material and services to be accessed on mobile and desktop devices, with the aim of providing "ubiquitous access." Besides offering access to learning materials such as podcasts and videos across multiple locations, mobile, wearable and ubiquitous…
Providing Multi-Page Data Extraction Services with XWRAPComposer
DOE Office of Scientific and Technical Information (OSTI.GOV)
Liu, Ling; Zhang, Jianjun; Han, Wei
2008-04-30
Dynamic Web data sources – sometimes known collectively as the Deep Web – increase the utility of the Web by providing intuitive access to data repositories anywhere that Web access is available. Deep Web services provide access to real-time information, like entertainment event listings, or present a Web interface to large databases or other data repositories. Recent studies suggest that the size and growth rate of the dynamic Web greatly exceed that of the static Web, yet dynamic content is often ignored by existing search engine indexers owing to the technical challenges that arise when attempting to search the Deep Web. To address these challenges, we present DYNABOT, a service-centric crawler for discovering and clustering Deep Web sources offering dynamic content. DYNABOT has three unique characteristics. First, DYNABOT utilizes a service class model of the Web implemented through the construction of service class descriptions (SCDs). Second, DYNABOT employs a modular, self-tuning system architecture for focused crawling of the Deep Web using service class descriptions. Third, DYNABOT incorporates methods and algorithms for efficient probing of the Deep Web and for discovering and clustering Deep Web sources and services through SCD-based service matching analysis. Our experimental results demonstrate the effectiveness of the service class discovery, probing, and matching algorithms and suggest techniques for efficiently managing service discovery in the face of the immense scale of the Deep Web.
Halligan, Brian D; Geiger, Joey F; Vallejos, Andrew K; Greene, Andrew S; Twigger, Simon N
2009-06-01
One of the major difficulties for many laboratories setting up proteomics programs has been obtaining and maintaining the computational infrastructure required for the analysis of the large flow of proteomics data. We describe a system that combines distributed cloud computing and open source software to allow laboratories to set up scalable virtual proteomics analysis clusters without the investment in computational hardware or software licensing fees. Additionally, the pricing structure of distributed computing providers, such as Amazon Web Services, allows laboratories or even individuals to have large-scale computational resources at their disposal at a very low cost per run. We provide detailed step-by-step instructions on how to implement the virtual proteomics analysis clusters, as well as a list of currently available preconfigured Amazon machine images containing the OMSSA and X!Tandem search algorithms and sequence databases, on the Medical College of Wisconsin Proteomics Center Web site (http://proteomics.mcw.edu/vipdac).
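As a rough sketch of how such a virtual cluster might be launched programmatically, the following uses boto3 to start instances from a machine image; the AMI ID, region, and instance type are placeholder assumptions, not the images published by the authors.

    # Minimal sketch: launch worker instances from a preconfigured machine
    # image on EC2. AMI ID, region, and instance type are hypothetical;
    # assumes AWS credentials are already configured.
    import boto3

    ec2 = boto3.client("ec2", region_name="us-east-1")
    response = ec2.run_instances(
        ImageId="ami-0123456789abcdef0",  # hypothetical preconfigured AMI
        InstanceType="m5.large",
        MinCount=1,
        MaxCount=4,  # scale the virtual cluster by launching more workers
    )
    for inst in response["Instances"]:
        print("launched", inst["InstanceId"])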
An Automated End-To-End Multi-Agent QoS-Based Architecture for Selection of Geospatial Web Services
NASA Astrophysics Data System (ADS)
Shah, M.; Verma, Y.; Nandakumar, R.
2012-07-01
Over the past decade, Service-Oriented Architecture (SOA) and Web services have gained wide popularity and acceptance from researchers and industries all over the world. SOA makes it easy to build business applications with common services, and it provides benefits such as reduced integration expense, better asset reuse, higher business agility, and reduced business risk. Building a framework for acquiring useful geospatial information for potential users is a crucial problem faced by the GIS domain. Geospatial Web services solve this problem. With the help of web service technology, geospatial web services can provide useful geospatial information to potential users in a better way than a traditional geographic information system (GIS). A geospatial Web service is a modular application designed to enable the discovery, access, and chaining of geospatial information and services across the web; such services are often both computation- and data-intensive and involve diverse sources of data and complex processing functions. With the proliferation of web services published over the internet, multiple web services may provide similar functionality, but with different non-functional properties. Thus, Quality of Service (QoS) offers a metric to differentiate the services and their service providers. In a quality-driven selection of web services, it is important to consider non-functional properties of the web service so as to satisfy the constraints or requirements of the end users. The main intent of this paper is to build an automated end-to-end multi-agent based solution to provide the best-fit web service to the service requester based on QoS.
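A minimal sketch of the quality-driven selection idea, assuming illustrative QoS attributes and weights (this is not the paper's multi-agent algorithm):

    # Score candidate services by a weighted sum of normalized QoS attributes
    # and pick the best fit. Attribute names, weights, and values are made up.
    candidates = {
        "serviceA": {"response_ms": 120, "availability": 0.999, "cost": 0.02},
        "serviceB": {"response_ms": 45,  "availability": 0.990, "cost": 0.05},
        "serviceC": {"response_ms": 200, "availability": 0.995, "cost": 0.01},
    }
    # Positive weights reward an attribute; negative weights penalize it.
    weights = {"response_ms": -0.5, "availability": 0.3, "cost": -0.2}

    def normalize(attr):
        # rescale one attribute to [0, 1] across all candidates
        vals = [q[attr] for q in candidates.values()]
        lo, hi = min(vals), max(vals)
        return {s: (q[attr] - lo) / ((hi - lo) or 1) for s, q in candidates.items()}

    norm = {attr: normalize(attr) for attr in weights}
    scores = {s: sum(w * norm[a][s] for a, w in weights.items()) for s in candidates}
    best = max(scores, key=scores.get)
    print("best-fit service:", best, scores)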
New SPDF Directions and Evolving Services Supporting Heliophysics Research
NASA Technical Reports Server (NTRS)
McGuire, Robert E.; Candey, Robert M.; Bilitza, D.; Chimiak, Reine A.; Cooper, John F.; Fung, Shing F.; Han, David B.; Harris, Bernie; Johnson, R.; Klipsch, C.;
2006-01-01
The next advances in Heliophysics science and its paradigm of a Great Observatory require an increasingly integrated and transparent data environment, where data can be easily accessed and used across the boundaries of both missions and traditional disciplines. The Space Physics Data Facility (SPDF) project includes uniquely important multi-mission data services with current data from most operating space physics missions. This paper reviews the capabilities of key services now available and the directions in which they are expected to evolve to enable future multi-mission correlative research. The Coordinated Data Analysis Web (CDAWeb) and Satellite Situation Center Web (SSCWeb), critically supported by the Common Data Format (CDF) effort and supplemented by more focused science services such as OMNIWeb and technical services such as data format translations, are important operational capabilities serving the international community today (and were cited last year by 20% of the papers published in JGR Space Physics). These services continue to add data from most current missions as SPDF works with new missions such as THEMIS to help enable their unique science goals and the meaningful sharing of their data in a multi-mission correlative context. We will show recent enhancements to CDF; our 3D Java interactive orbit viewer (TIPSOD); the CDAWeb Plus system; increasing automation of data service population; the new folding of the VSPO effort into SPDF; and our continuing thrust towards fully functional web service APIs that allow ready invocation from distributed external middleware and clients.
MedlinePlus Connect: Web Service
https://medlineplus.gov/connect/service.html: an overview of the MedlinePlus Connect Web service and the parameters it accepts.
WEB-IS2: Next Generation Web Services Using Amira Visualization Package
NASA Astrophysics Data System (ADS)
Yang, X.; Wang, Y.; Bollig, E. F.; Kadlec, B. J.; Garbow, Z. A.; Yuen, D. A.; Erlebacher, G.
2003-12-01
Amira (www.amiravis.com) is a powerful 3-D visualization package and has been employed recently by the science and engineering communities to gain insight into their data. We present a new web-based interface to Amira, packaged in a Java applet. We have developed a module called WEB-IS/Amira (WEB-IS2), which provides web-based access to Amira. This tool allows earth scientists to manipulate Amira controls remotely and to analyze, render and view large datasets over the internet, without regard for time or location. This could have important ramifications for GRID computing. The design of our implementation will soon allow multiple users to visually collaborate by manipulating a single dataset through a variety of client devices. These clients will only require a browser capable of displaying Java applets. As the deluge of data continues, innovative solutions that maximize ease of use without sacrificing efficiency or flexibility will continue to gain in importance, particularly in the Earth sciences. Major initiatives, such as Earthscope (http://www.earthscope.org), which will generate at least a terabyte of data daily, stand to profit enormously from a system such as WEB-IS/Amira (WEB-IS2). We discuss our use of SOAP (Livingston, D., Advanced SOAP for Web development, Prentice Hall, 2002), a novel 2-way communication protocol, as a means of providing remote commands and efficient point-to-point transfer of binary image data. We will present our initial experiences with the use of NaradaBrokering (www.naradabrokering.org) as a means to decouple clients and servers. Information is submitted to the system as a published item, while it is retrieved through a subscription mechanism, via what are known as "topics". These topic headers, their contents, and the list of subscribers are automatically tracked by NaradaBrokering. This novel approach promises a high degree of fault tolerance, flexibility with respect to client diversity, and language independence for the services (Erlebacher, G., Yuen, D.A., and F. Dubuffet, Current trends and demands in visualization in the geosciences, Electron. Geosciences, 4, 2001).
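A tiny in-process Python illustration of the topic-based publish/subscribe pattern described above (not NaradaBrokering itself): publishers post to a topic and subscribers receive messages without either side knowing about the other.

    # Toy broker: topic-based publish/subscribe decouples clients and servers.
    from collections import defaultdict

    class Broker:
        def __init__(self):
            self.subscribers = defaultdict(list)  # topic -> list of callbacks

        def subscribe(self, topic, callback):
            self.subscribers[topic].append(callback)

        def publish(self, topic, message):
            # deliver to every subscriber of the topic
            for cb in self.subscribers[topic]:
                cb(message)

    broker = Broker()
    broker.subscribe("render/complete", lambda m: print("client got:", m))
    broker.publish("render/complete", {"image": "frame_042.png"})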
WebDMS: A Web-Based Data Management System for Environmental Data
NASA Astrophysics Data System (ADS)
Ekstrand, A. L.; Haderman, M.; Chan, A.; Dye, T.; White, J. E.; Parajon, G.
2015-12-01
DMS is an environmental Data Management System to manage, quality-control (QC), summarize, document chain-of-custody, and disseminate data from networks ranging in size from a few sites to thousands of sites, instruments, and sensors. The server-client desktop version of DMS is used by local and regional air quality agencies (including the Bay Area Air Quality Management District, the South Coast Air Quality Management District, and the California Air Resources Board), the EPA's AirNow Program, and the EPA's AirNow-International (AirNow-I) program, which offers countries the ability to run an AirNow-like system. As AirNow's core data processing engine, DMS ingests, QCs, and stores real-time data from over 30,000 active sensors at over 5,280 air quality and meteorological sites from over 130 air quality agencies across the United States. As part of the AirNow-I program, several instances of DMS are deployed in China, Mexico, and Taiwan. The U.S. Department of State's StateAir Program also uses DMS for five regions in China and plans to expand to other countries in the future. Recent development has begun to migrate DMS from an onsite desktop application to WebDMS, a web-based application designed to take advantage of cloud hosting and computing services to increase scalability and lower costs. WebDMS will continue to provide easy-to-use data analysis tools, such as time-series graphs, scatterplots, and wind- or pollution-rose diagrams, as well as allowing data to be exported to external systems such as the EPA's Air Quality System (AQS). WebDMS will also provide new GIS analysis features and a suite of web services through a RESTful web API. These changes will better meet air agency needs and allow for broader national and international use (for example, by the AirNow-I partners). We will talk about the challenges and advantages of migrating DMS to the web, modernizing the DMS user interface, and making it more cost-effective to enhance and maintain over time.
Focused Crawling of the Deep Web Using Service Class Descriptions
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rocco, D; Liu, L; Critchlow, T
2004-06-21
Dynamic Web data sources--sometimes known collectively as the Deep Web--increase the utility of the Web by providing intuitive access to data repositories anywhere that Web access is available. Deep Web services provide access to real-time information, like entertainment event listings, or present a Web interface to large databases or other data repositories. Recent studies suggest that the size and growth rate of the dynamic Web greatly exceed that of the static Web, yet dynamic content is often ignored by existing search engine indexers owing to the technical challenges that arise when attempting to search the Deep Web. To address these challenges, we present DynaBot, a service-centric crawler for discovering and clustering Deep Web sources offering dynamic content. DynaBot has three unique characteristics. First, DynaBot utilizes a service class model of the Web implemented through the construction of service class descriptions (SCDs). Second, DynaBot employs a modular, self-tuning system architecture for focused crawling of the Deep Web using service class descriptions. Third, DynaBot incorporates methods and algorithms for efficient probing of the Deep Web and for discovering and clustering Deep Web sources and services through SCD-based service matching analysis. Our experimental results demonstrate the effectiveness of the service class discovery, probing, and matching algorithms and suggest techniques for efficiently managing service discovery in the face of the immense scale of the Deep Web.
T-RMSD: a web server for automated fine-grained protein structural classification.
Magis, Cedrik; Di Tommaso, Paolo; Notredame, Cedric
2013-07-01
This article introduces the T-RMSD web server (tree-based on root-mean-square deviation), a service allowing the online computation of structure-based protein classification. It has been developed to address the relation between structural and functional similarity in proteins, and it allows a fine-grained structural clustering of a given protein family or group of structurally related proteins using distance RMSD (dRMSD) variations. These distances are computed between all pairs of equivalent residues, as defined by the ungapped columns within a given multiple sequence alignment. Using these generated distance matrices (one per equivalent position), T-RMSD produces a structural tree with support values for each cluster node, reminiscent of bootstrap values. These values, associated with the tree topology, allow a quantitative estimate of structural distances between proteins or groups of proteins defined by the tree topology. The clusters thus defined have been shown to be structurally and functionally informative. The T-RMSD web server is a free website open to all users and available at http://tcoffee.crg.cat/apps/tcoffee/do:trmsd. PMID:23716642
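A minimal numpy sketch of the dRMSD idea named above, assuming N equivalent residues taken from ungapped alignment columns; the coordinates here are random stand-ins for real structures.

    # dRMSD: compare the intramolecular distance matrices of two structures
    # over their equivalent residues. Coordinates below are synthetic.
    import numpy as np

    def distance_matrix(coords):
        # pairwise Euclidean distances between residues within one structure
        diff = coords[:, None, :] - coords[None, :, :]
        return np.sqrt((diff ** 2).sum(-1))

    def drmsd(coords_a, coords_b):
        da, db = distance_matrix(coords_a), distance_matrix(coords_b)
        iu = np.triu_indices(len(coords_a), k=1)  # each residue pair once
        return np.sqrt(((da[iu] - db[iu]) ** 2).mean())

    rng = np.random.default_rng(0)
    a = rng.normal(size=(50, 3))                  # structure A: 50 residues
    b = a + rng.normal(scale=0.1, size=a.shape)   # structure B: perturbed copy
    print("dRMSD:", drmsd(a, b))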
A Web Service Protocol Realizing Interoperable Internet of Things Tasking Capability
Huang, Chih-Yuan; Wu, Cheng-Hung
2016-01-01
The Internet of Things (IoT) is an infrastructure that interconnects uniquely identifiable devices using the Internet. By interconnecting everyday appliances, various monitoring and physical mashup applications can be constructed to improve people's daily lives. In general, IoT devices provide two main capabilities: sensing and tasking. While the sensing capability is similar to the World-Wide Sensor Web, this research focuses on the tasking capability. Currently, however, IoT devices created by different manufacturers follow different proprietary protocols and are locked into many closed ecosystems. This heterogeneity impedes the interconnection between IoT devices and damages the potential of the IoT. To address this issue, this research proposes an interoperable solution called the tasking capability description, which allows users to control different IoT devices using a uniform web service interface. This paper demonstrates the contribution of the proposed solution by interconnecting different IoT devices for different applications. In addition, the proposed solution is integrated with the OGC SensorThings API standard, a Web service standard defined for the IoT sensing capability. Consequently, the Extended SensorThings API can realize both IoT sensing and tasking capabilities in an integrated and interoperable manner. PMID:27589759
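A hedged sketch of what invoking a tasking capability through a uniform, SensorThings-style web interface might look like; the service root, entity IDs, and parameter names below are illustrative assumptions, not the paper's or the standard's exact values.

    # Hypothetical tasking request against a SensorThings-style endpoint:
    # create a Task carrying device-specific tasking parameters.
    import requests

    BASE = "https://example.org/SensorThings/v1.1"  # hypothetical service root
    task = {
        "taskingParameters": {"power": "on"},   # illustrative parameter
        "TaskingCapability": {"@iot.id": 1},    # capability being invoked
    }
    resp = requests.post(f"{BASE}/Tasks", json=task, timeout=30)
    resp.raise_for_status()
    print("task accepted:", resp.headers.get("Location"))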
The Osseus platform: a prototype for advanced web-based distributed simulation
NASA Astrophysics Data System (ADS)
Franceschini, Derrick; Riecken, Mark
2016-05-01
Recent technological advances in web-based distributed computing and database technology have made possible a deeper and more transparent integration of some modeling and simulation applications. Despite these advances towards true integration of capabilities, disparate systems, architectures, and protocols will remain in the inventory for some time to come. These disparities present interoperability challenges for distributed modeling and simulation whether the application is training, experimentation, or analysis. Traditional approaches call for building gateways to bridge between disparate protocols and retaining interoperability specialists. Challenges in reconciling data models also persist. These challenges and their traditional mitigation approaches directly contribute to higher costs, schedule delays, and frustration for the end users. Osseus is a prototype software platform originally funded as a research project by the Defense Modeling & Simulation Coordination Office (DMSCO) to examine interoperability alternatives using modern, web-based technology and taking inspiration from the commercial sector. Osseus provides tools and services for nonexpert users to connect simulations, reducing the time and skill set needed to successfully connect disparate systems. The Osseus platform presents a web services interface to allow simulation applications to exchange data efficiently, using modern techniques, over local or wide area networks. Further, it provides Service Oriented Architecture capabilities such that finer-granularity components, such as individual models, can contribute to a simulation with minimal effort.
Sharing on Web 3d Models of Ancient Theatres. a Methodological Workflow
NASA Astrophysics Data System (ADS)
Scianna, A.; La Guardia, M.; Scaduto, M. L.
2016-06-01
In the last few years, the need to share on the Web the knowledge of Cultural Heritage (CH) through navigable 3D models has increased. This need requires the availability of Web-based virtual reality systems and 3D WEBGIS. In order to make the information available to all stakeholders, these instruments should be powerful and at the same time very user-friendly. However, research and experiments carried out so far show that a standardized methodology does not yet exist, owing both to the complexity and size of the geometric models to be published and to the excessive costs of hardware and software tools. In light of this background, the paper describes a methodological approach for creating 3D models of CH, freely exportable on the Web, based on HTML5 and free and open source software. HTML5, supporting the WebGL standard, allows the exploration of 3D spatial models using the most widely used Web browsers, such as Chrome, Firefox, Safari and Internet Explorer. The methodological workflow described here has been tested in the construction of a multimedia geo-spatial platform developed for the three-dimensional exploration and documentation of the ancient theatres of Segesta and Carthage, and the surrounding landscapes. The experimental application has allowed us to explore the potential and limitations of sharing 3D CH models on the Web based on the WebGL standard. Sharing capabilities could be extended by defining suitable geospatial Web services based on the capabilities of HTML5 and WebGL technology.
The Arctic Observing Viewer: A Web-mapping Application for U.S. Arctic Observing Activities
NASA Astrophysics Data System (ADS)
Cody, R. P.; Manley, W. F.; Gaylord, A. G.; Kassin, A.; Villarreal, S.; Barba, M.; Dover, M.; Escarzaga, S. M.; Habermann, T.; Kozimor, J.; Score, R.; Tweedie, C. E.
2015-12-01
Although a great deal of progress has been made with various arctic observing efforts, it can be difficult to assess this progress when so many agencies, organizations and research groups are advancing work across such a large expanse of the Arctic. To help meet the strategic needs of the U.S. SEARCH-AON program and facilitate the development of SAON and other related initiatives, the Arctic Observing Viewer (AOV; http://ArcticObservingViewer.org) has been developed. This web mapping application compiles detailed information pertaining to U.S. Arctic observing efforts. Contributing partners include the U.S. NSF, USGS, ACADIS, ADIwg, AOOS, a2dc, AON, ARMAP, BAID, IASOA, INTERACT, and others. Over 7700 observation sites are currently in the AOV database, and the application allows users to visualize, navigate, select, perform advanced searches, draw, print, and more. During 2015, the web mapping application was enhanced by the addition of a query builder that allows users to create rich and complex queries. AOV is founded on principles of software and data interoperability and includes an emerging "Project" metadata standard, which uses ISO 19115-1 and compatible web services. Substantial efforts have focused on maintaining and centralizing all database information. In order to keep up with emerging technologies, the AOV data set has been structured and centralized within a relational database, and the application front end has been ported to HTML5 to enable mobile access. Other application enhancements include an embedded Apache Solr search platform, which provides users with the capability to perform advanced searches, and a web-based administrative data management system that allows administrators to add, update, and delete information in real time. We encourage all collaborators to use AOV tools and services for their own purposes and to help us extend the impact of our efforts and ensure AOV complements other cyber-resources. Reinforcing dispersed but interoperable resources in this way will help to ensure improved capacities for conducting activities such as assessing the status of arctic observing efforts, optimizing logistic operations, and quickly accessing external and project-focused web resources for more detailed information and access to scientific data and derived products.
Ultrabroadband photonic internet: safety aspects
NASA Astrophysics Data System (ADS)
Kalicki, Arkadiusz; Romaniuk, Ryszard
2008-11-01
Web applications have become the most popular medium on the Internet. The popularity and ease of use of web application frameworks, together with careless development, result in a high number of vulnerabilities and attacks. Several types of attack are possible because of improper input validation. SQL injection is the ability to execute arbitrary SQL queries in a database through an existing application. Cross-site scripting is a vulnerability which allows malicious web users to inject code into web pages viewed by other users. Cross-Site Request Forgery (CSRF) is an attack that tricks the victim into loading a page that contains a malicious request. Web spam also appears in blogs. There are several techniques to mitigate these attacks. The most important are strong web application design, correct input validation, defined data types for each field, and parameterized statements in SQL queries. Server hardening with a firewall, modern security policy systems and safe configuration of the web framework interpreter are essential. It is advisable to keep a proper security level on the client side, keep software updated and install personal web firewalls or IDS/IPS systems. Good habits include logging out from services immediately after finishing work and even using a separate web browser for the most important sites, such as e-banking.
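A minimal Python sketch of the parameterized-statement defence mentioned above, using the standard library's sqlite3 module: the user-supplied value is bound as a parameter, never spliced into the SQL string.

    # Parameterized statements neutralize SQL injection.
    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE users (name TEXT, role TEXT)")
    conn.execute("INSERT INTO users VALUES ('alice', 'admin')")

    user_input = "alice' OR '1'='1"  # a classic injection attempt

    # Vulnerable: string concatenation lets the input rewrite the query.
    # rows = conn.execute("SELECT * FROM users WHERE name = '" + user_input + "'")

    # Safe: the ? placeholder treats the input strictly as data.
    rows = conn.execute("SELECT * FROM users WHERE name = ?", (user_input,))
    print(rows.fetchall())  # [] -- the injection string matches no user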
Ms2lda.org: web-based topic modelling for substructure discovery in mass spectrometry.
Wandy, Joe; Zhu, Yunfeng; van der Hooft, Justin J J; Daly, Rónán; Barrett, Michael P; Rogers, Simon
2017-09-14
We recently published MS2LDA, a method for the decomposition of sets of molecular fragment data derived from large metabolomics experiments. To make the method more widely available to the community, here we present ms2lda.org, a web application that allows users to upload their data, run MS2LDA analyses and explore the results through interactive visualisations. Ms2lda.org takes tandem mass spectrometry data in many standard formats and allows the user to infer the sets of fragment and neutral loss features that co-occur together (Mass2Motifs). As an alternative workflow, the user can also decompose a dataset onto predefined Mass2Motifs. This is accomplished through the web interface or programmatically from our web service. The website can be found at http://ms2lda.org , while the source code is available at https://github.com/sdrogers/ms2ldaviz under the MIT license. Supplementary data are available at Bioinformatics online. © The Author(s) 2017. Published by Oxford University Press.
Li, Jia; Xia, Yunni; Luo, Xin
2014-01-01
OWL-S, one of the most important Semantic Web service ontologies proposed to date, provides a core ontological framework and guidelines for describing the properties and capabilities of web services in an unambiguous, computer-interpretable form. Predicting the reliability of composite service processes specified in OWL-S allows service users to decide whether the process meets a quantitative quality requirement. In this study, we consider the runtime quality of services to be fluctuating and introduce a dynamic framework to predict the runtime reliability of services specified in OWL-S, employing the Non-Markovian stochastic Petri net (NMSPN) and a time series model. The framework includes the following steps: obtaining the historical response-time series of individual service components; fitting these series with an autoregressive moving-average (ARMA) model and predicting the future firing rates of service components; mapping the OWL-S process into an NMSPN model; and employing the predicted firing rates as the input to the NMSPN model, calculating the normal completion probability as the reliability estimate. In the case study, a comparison between the static model and our approach based on experimental data is presented, and it is shown that our approach achieves higher prediction accuracy.
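A minimal sketch of the ARMA step described above, fitting a model to a synthetic response-time series with statsmodels (an ARIMA model with d=0 is an ARMA fit); the data and model orders are illustrative.

    # Fit ARMA(2,1) to a service's historical response times and forecast.
    import numpy as np
    from statsmodels.tsa.arima.model import ARIMA

    rng = np.random.default_rng(1)
    # Synthetic response-time series (ms): stable mean plus autocorrelated noise
    noise = rng.normal(0, 5, 200)
    series = 120 + np.convolve(noise, [1.0, 0.6, 0.3], mode="same")

    model = ARIMA(series, order=(2, 0, 1))  # ARMA(2,1), since d=0
    fitted = model.fit()
    forecast = fitted.forecast(steps=5)     # predicted future response times
    print("next 5 predicted response times (ms):", np.round(forecast, 1))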
Provenance-Based Approaches to Semantic Web Service Discovery and Usage
ERIC Educational Resources Information Center
Narock, Thomas William
2012-01-01
The World Wide Web Consortium defines a Web Service as "a software system designed to support interoperable machine-to-machine interaction over a network." Web Services have become increasingly important both within and across organizational boundaries. With the recent advent of the Semantic Web, web services have evolved into semantic…
A resource-oriented architecture for a Geospatial Web
NASA Astrophysics Data System (ADS)
Mazzetti, Paolo; Nativi, Stefano
2010-05-01
In this presentation we discuss some architectural issues in the design of a Geospatial Web, that is, an information system for sharing geospatial resources according to the Web paradigm. The success of the Web in building a multi-purpose information space has raised questions about the possibility of adopting the same approach for systems dedicated to the sharing of more specific resources, such as geospatial information, that is, information characterized by a spatial/temporal reference. To this aim, an investigation of the nature of the Web and of the validity of its paradigm for geospatial resources is required. The Web was born in the early 90's to provide "a shared information space through which people and machines could communicate" [Berners-Lee 1996]. It was originally built around a small set of specifications (e.g. URI, HTTP, HTML, etc.); however, in the last two decades several other technologies and specifications have been introduced in order to extend its capabilities. Most of them (e.g. the SOAP family) actually aimed to transform the Web into a generic Distributed Computing Infrastructure. While these efforts were definitely successful in enabling the adoption of service-oriented approaches for machine-to-machine interactions supporting complex business processes (e.g. for e-Government and e-Business applications), they do not fit the original concept of the Web. In the year 2000, R. T. Fielding, one of the designers of the original Web specifications, proposed a new architectural style for distributed systems, called REST (Representational State Transfer), aiming to capture the fundamental characteristics of the Web as it was originally conceived [Fielding 2000]. In this view, the nature of the Web lies not so much in the technologies as in the way they are used. Keeping the Web architecture conformant to the REST style would then assure the scalability, extensibility and low entry barrier of the original Web. On the contrary, systems using the same Web technologies and specifications but according to a different architectural style, despite their usefulness, should not be considered part of the Web. If the REST style captures the significant Web characteristics, then, in order to build a Geospatial Web, its architecture must satisfy all the REST constraints. One of them is of particular importance: the adoption of a Uniform Interface. It prescribes that all geospatial resources must be accessed through the same interface; moreover, according to the REST style, this interface must satisfy four further constraints: a) identification of resources; b) manipulation of resources through representations; c) self-descriptive messages; and d) hypermedia as the engine of application state. In the Web, the uniform interface provides basic operations which are meaningful for generic resources. They typically implement the CRUD pattern (Create-Retrieve-Update-Delete), which has proved flexible and powerful in several general-purpose contexts (e.g. filesystem management, SQL for database management systems, etc.). Restricting the scope to a subset of resources, it would be possible to identify other generic actions which are meaningful for all of them. For example, for geospatial resources, subsetting, resampling, interpolation and coordinate reference system transformation are candidate functionalities for a uniform interface.
However, an investigation is needed to clarify the semantics of those actions for different resources and, consequently, whether they can really ascend to the role of generic interface operations. Concerning point a) (identification of resources), it is required that every resource addressable in the Geospatial Web has its own identifier (e.g. a URI). This allows citation and re-use of resources simply by providing the URI. OPeNDAP and KVP encodings of OGC data access service specifications might provide a basis for it. Concerning point b) (manipulation of resources through representations), the Geospatial Web poses several issues. In fact, while the Web mainly handles semi-structured information, in the Geospatial Web the information is typically structured according to several possible data models (e.g. point series, gridded coverages, trajectories, etc.) and encodings. A possibility would be to simplify the interchange formats, choosing to support a subset of data models and formats. This is actually what the Web designers did in choosing to define a common format for hypermedia (HTML), although the underlying protocol remains generic. Concerning point c), self-descriptive messages, the exchanged messages should describe themselves and their content. This would not actually be a major issue, considering the effort put into geospatial metadata models and specifications in recent years. Point d), hypermedia as the engine of application state, is actually where the Geospatial Web would mainly differ from existing geospatial information sharing systems. In fact, existing systems typically adopt a service-oriented architecture, where applications are built as a single service or as a workflow of services. In the Geospatial Web, on the other hand, applications should be built by following the path between interconnected resources. The links between resources should be made explicit as hyperlinks. The adoption of Semantic Web solutions would make it possible to define not only the existence of a link between two resources but also the nature of the link. The implementation of a Geospatial Web would allow an information system to be built with the same characteristics as the Web, sharing its strengths and weaknesses. The main advantages would be the following: • The user would interact with the Geospatial Web according to the well-known Web navigation paradigm. This would lower the barrier to accessing geospatial applications for non-specialists (witness the success of Google Maps and other Web mapping applications); • Successful Web and Web 2.0 applications - search engines, feeds, social networks - could be integrated or replicated in the Geospatial Web. The main drawbacks would be the following: • The Uniform Interface simplifies the overall system architecture (e.g. no service registry or service descriptors are required) but moves the complexity to the data representation. Moreover, since the interface must stay generic, it is necessarily very simple, and complex interactions would therefore require several transfers. • In the geospatial domain, one of the most valuable kinds of resource is processes (e.g. environmental models); how they can be modeled as resources accessed through the common interface is an open issue. Taking these advantages and drawbacks into account, it seems that a Geospatial Web would be useful, but its use would be limited to specific use cases not covering all possible applications.
The Geospatial Web architecture could be partly based on existing specifications, while other aspects need investigation. References: [Berners-Lee 1996] T. Berners-Lee, "WWW: Past, present, and future," IEEE Computer, 29(10), Oct. 1996, pp. 69-77. [Fielding 2000] R. T. Fielding, "Architectural Styles and the Design of Network-based Software Architectures," PhD dissertation, Dept. of Information and Computer Science, University of California, Irvine, 2000.
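A minimal sketch of the uniform-interface idea discussed above: a geospatial resource identified by a URI, retrieved through a generic verb, with subsetting expressed as query parameters. The endpoint and parameter names are illustrative assumptions, not an OGC specification.

    # RESTful access to a hypothetical geospatial resource.
    import requests

    # Every resource has an identifier; citation/re-use is just sharing the URI.
    resource = "https://example.org/coverages/sst"  # hypothetical coverage

    # Retrieve (the R in CRUD) a spatio-temporal subset as a representation.
    resp = requests.get(resource, params={
        "bbox": "-10,35,5,45",            # spatial subsetting
        "time": "2010-06-01/2010-06-30",  # temporal subsetting
        "format": "application/json",     # negotiate the representation
    }, timeout=60)
    resp.raise_for_status()
    subset = resp.json()
    # Hypermedia: follow links the representation exposes to related resources.
    for link in subset.get("links", []):
        print(link.get("rel"), "->", link.get("href"))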
Assessing uncertain human exposure to ambient air pollution using environmental models in the Web
NASA Astrophysics Data System (ADS)
Gerharz, L. E.; Pebesma, E.; Denby, B.
2012-04-01
Ambient air quality can have a significant impact on human health by causing respiratory and cardio-vascular diseases. The pollutant concentration a person is exposed to can differ considerably between individuals, depending on their daily routine and movement patterns. Using a straightforward approach, this exposure can be estimated by integrating individual space-time paths with spatio-temporally resolved ambient air quality data. To allow a realistic exposure assessment, it is furthermore important to consider uncertainties due to input and model errors. In this work, we present a generic, web-based approach for estimating individual exposure by integration of uncertain position and air quality information, implemented as a web service. Following the Model Web initiative, which envisions an infrastructure for deploying, executing and chaining environmental models as services, existing models and data sources, e.g. for air quality, can be used to assess exposure. The service therefore needs to deal with the different formats, resolutions and uncertainty representations provided by model or data services. Potential mismatches can be accounted for by transformation of uncertainties and (dis-)aggregation of data, under consideration of changes in the uncertainties, using components developed in the UncertWeb project. In UncertWeb, the Model Web vision is extended to an Uncertainty-enabled Model Web, where services can process and communicate uncertainties in the data and models. The propagation of uncertainty to the exposure results is quantified using Monte Carlo simulation, by combining different realisations of positions and ambient concentrations. Two case studies were used to evaluate the developed exposure assessment service. In the first study, GPS tracks with a positional uncertainty of a few metres, collected in the urban area of Münster, Germany, were used to assess exposure to PM10 (particulate matter smaller than 10 µm). Air quality data were provided by an uncertainty-enabled air quality model system which provided realisations of concentrations per hour on a 250 m x 250 m grid over Münster. The second case study uses modelled human trajectories in Rotterdam, The Netherlands. The trajectories were provided as realisations at 15 min resolution per 4-digit postal code from an activity model. Air quality estimates were provided for different pollutants as ensembles by a coupled meteorology and air quality model system on a 1 km x 1 km grid with hourly resolution. Both case studies show the successful application of the service to different resolutions and uncertainty representations.
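A minimal numpy sketch of the Monte Carlo propagation described above: realisations of uncertain positions are paired with realisations of the concentration field and averaged along each simulated space-time path. Grid sizes, distributions, and the exposure metric are illustrative assumptions.

    # Monte Carlo exposure: combine position and concentration realisations.
    import numpy as np

    rng = np.random.default_rng(42)
    n_real, n_steps, grid = 1000, 24, 50  # realisations, hours, grid cells

    # Concentration realisations: one field per realisation and hour (ug/m3)
    conc = rng.lognormal(mean=3.0, sigma=0.4, size=(n_real, n_steps, grid, grid))

    # Position realisations: a track with a few-cells positional uncertainty
    true_track = np.linspace(5, 40, n_steps).astype(int)
    x = np.clip(true_track + rng.integers(-2, 3, size=(n_real, n_steps)), 0, grid - 1)
    y = np.clip(true_track + rng.integers(-2, 3, size=(n_real, n_steps)), 0, grid - 1)

    # Exposure per realisation: mean concentration along the simulated path
    t = np.arange(n_steps)
    exposure = conc[np.arange(n_real)[:, None], t, x, y].mean(axis=1)
    print(f"exposure: mean {exposure.mean():.1f}, 95% interval "
          f"({np.percentile(exposure, 2.5):.1f}, {np.percentile(exposure, 97.5):.1f}) ug/m3")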
36 CFR 219.54 - Filing an objection.
Code of Federal Regulations, 2012 CFR
2012-07-01
... or regulation. (2) Forest Service Directive System documents and land management plans or other... the objection process. (b) Including documents by reference is not allowed, except for the following... relevant section of the cited document. All other documents or Web links to those documents, or both must...
36 CFR 219.54 - Filing an objection.
Code of Federal Regulations, 2014 CFR
2014-07-01
... or regulation. (2) Forest Service Directive System documents and land management plans or other... the objection process. (b) Including documents by reference is not allowed, except for the following... relevant section of the cited document. All other documents or Web links to those documents, or both must...
Android Based Mobile Environment for Moodle Users
ERIC Educational Resources Information Center
de Clunie, Gisela T.; Clunie, Clifton; Castillo, Aris; Rangel, Norman
2013-01-01
This paper is about the development of a platform that eases, through Android-based mobile devices, the mobility of users of virtual courses at the Technological University of Panama. The platform deploys computational techniques such as "web services," design patterns, ontologies and mobile technologies to allow mobile devices communicate…
A BPMN solution for chaining OGC services to quality assure location-based crowdsourced data
NASA Astrophysics Data System (ADS)
Meek, Sam; Jackson, Mike; Leibovici, Didier G.
2016-02-01
The Open Geospatial Consortium (OGC) Web Processing Service (WPS) standard enables access to a centralized repository of processes and services from compliant clients. A crucial part of the standard includes the provision to chain disparate processes and services to form a reusable workflow. To date this has been realized by methods such as embedding XML requests, using Business Process Execution Language (BPEL) engines and other external orchestration engines. Although these allow the user to define tasks and data artifacts as web services, they are often considered inflexible and complicated, frequently due to vendor-specific solutions and inaccessible documentation. This paper introduces a new method of flexible service chaining using the Business Process Model and Notation (BPMN) standard. A prototype system has been developed on top of an existing open source BPMN suite to illustrate the advantages of the approach. The motivation for the software design is qualification of crowdsourced data for use in policy-making. The software is tested as part of a project that seeks to qualify, assure, and add value to crowdsourced data in a biological monitoring use case.
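A minimal Python sketch of chaining two WPS-style processes from a client, the pattern such workflow engines automate; the endpoint, process identifiers, and inputs are illustrative assumptions, and a real chain would typically pass a reference to the first result rather than its body.

    # Chain two hypothetical WPS processes: filter, then quality-assure.
    import requests

    WPS = "https://example.org/wps"  # hypothetical WPS endpoint

    def execute(process_id, **inputs):
        # WPS 1.0.0 KVP Execute: inputs packed as key=value pairs
        data_inputs = ";".join(f"{k}={v}" for k, v in inputs.items())
        resp = requests.get(WPS, params={
            "service": "WPS", "version": "1.0.0", "request": "Execute",
            "identifier": process_id, "datainputs": data_inputs,
        }, timeout=120)
        resp.raise_for_status()
        return resp.text  # the process result (or a reference to it)

    # Chain: filter crowdsourced observations, then quality-check the survivors.
    filtered = execute("filter_by_location", dataset="obs.csv", bbox="-1,52,0,53")
    report = execute("quality_assure", dataset=filtered)
    print(report)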
An ontological knowledge framework for adaptive medical workflow.
Dang, Jiangbo; Hedayati, Amir; Hampel, Ken; Toklu, Candemir
2008-10-01
As emerging technologies, the semantic Web and SOA (Service-Oriented Architecture) allow a BPMS (Business Process Management System) to automate business processes that can be described as services, which in turn can be used to wrap existing enterprise applications. A BPMS provides tools and methodologies to compose Web services that can be executed as business processes and monitored by BPM (Business Process Management) consoles. An ontology is a formal, declarative knowledge representation model. It provides a foundation upon which machine-understandable knowledge can be obtained and, as a result, makes machine intelligence possible. Healthcare systems can adopt these technologies to become ubiquitous, adaptive, and intelligent, and thereby serve patients better. This paper presents an ontological knowledge framework that covers the healthcare domains a hospital encompasses, from medical and administrative tasks to hospital assets, medical insurance, patient records, drugs, and regulations. Our ontology therefore makes our vision of personalized healthcare possible by capturing all the knowledge necessary for a complex personalized healthcare scenario involving patient care, insurance policies, drug prescriptions, and compliance. For example, our ontology enables a workflow management system to allow users, from physicians to administrative assistants, to manage, and even create, new context-aware medical workflows and execute them on the fly.
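A minimal sketch of encoding a fragment of such healthcare knowledge as machine-readable triples with rdflib; the class and property names are illustrative assumptions, not the paper's ontology.

    # Encode a toy healthcare fact base as RDF triples.
    from rdflib import Graph, Namespace, Literal, RDF, RDFS

    HC = Namespace("http://example.org/healthcare#")
    g = Graph()

    # Classes and a property of the domain
    g.add((HC.Physician, RDF.type, RDFS.Class))
    g.add((HC.Prescription, RDF.type, RDFS.Class))
    g.add((HC.prescribes, RDFS.domain, HC.Physician))
    g.add((HC.prescribes, RDFS.range, HC.Prescription))

    # An instance: Dr. Smith prescribes prescription #42
    g.add((HC.drSmith, RDF.type, HC.Physician))
    g.add((HC.rx42, RDF.type, HC.Prescription))
    g.add((HC.drSmith, HC.prescribes, HC.rx42))
    g.add((HC.rx42, RDFS.label, Literal("amoxicillin 500 mg")))

    # A workflow engine can now query the knowledge, e.g. all prescriptions:
    q = "SELECT ?rx WHERE { ?dr <http://example.org/healthcare#prescribes> ?rx }"
    for row in g.query(q):
        print(row.rx)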
NASA Astrophysics Data System (ADS)
Nelson, J.; Ames, D. P.; Jones, N.; Tarboton, D. G.; Li, Z.; Qiao, X.; Crawley, S.
2016-12-01
As water resources data continue to move to the web in the form of well-defined, open access, machine-readable web services provided by government, academic, and private institutions, there is increased opportunity to move additional parts of the water science workflow to the web (e.g. analysis, modeling, decision support, and collaboration). Creating such web-based functionality can be extremely time-consuming and resource-intensive and can lead the erstwhile water scientist down a veritable cyberinfrastructure rabbit hole, through an unintended tunnel of transformation to become a Cyber-Wonderland software engineer. We posit that such transformations were never the intention of the research programs that fund earth science cyberinfrastructure, nor is it in the best interest of water researchers to spend exorbitant effort developing and deploying such technologies. This presentation will introduce a relatively simple and ready-to-use water science web app environment funded by the National Science Foundation that couples the new HydroShare data publishing system with the Tethys Platform web app development toolkit. The coupled system has already been shown to greatly lower the barrier to deploying web-based visualization and analysis tools for the CUAHSI Water Data Center and for the National Weather Service's National Water Model. The design and implementation of the developed web app architecture will be presented together with key examples of existing apps created using this system. In each of the cases presented, water resources students with basic programming skills were able to develop and deploy highly functional web apps in a relatively short period of time (days to weeks), allowing the focus to remain on water science rather than on cyberinfrastructure. This presentation is accompanied by an open invitation for new collaborations that use the HydroShare-Tethys web app environment.
NASA Astrophysics Data System (ADS)
Carrera, F.; Schloss, A. L.; Guerin, S.; Beaudry, J.; Pickle, J.
2011-12-01
Dozens of web-based initiatives allow citizens to provide information to programs that monitor the health of our environment. A concerned citizen can participate on-line as a weather "spotter", provide important phenological information to national databases, update bird counts in the area, record the freezing of ponds, and much more. Many of these programs are developing mobile apps as companion tools to their web sites. Our group was involved in the development of one such companion app as an adjunct to the Picture Post project web site. Digital Earth Watch (DEW) and the Picture Post network support environmental monitoring through repeat digital photography and satellite imagery. A Picture Post is an eight-sided platform on a stand-alone post for taking a panoramic series of photographs. By taking pictures on a regular basis at Picture Post sites and by sharing these pictures on the program's web site (housed at the University of New Hampshire), citizen scientists are creating a photographic library of change over time in their local area and contributing to national monitoring programs. Our DEW Android application simplifies participation by allowing users to upload pictures instantly from their smartphone. The app also removes the constraint of the physical picture post by allowing users to create a virtual post anywhere in the world. Posts have been set up to monitor trails, forests, water, wetlands, gardens and landscapes. The app uses the phone's GPS to position the virtual post in its geographic location and guides the user through the orientations thanks to the internal accelerometers and compass. To aid in the before-and-after comparison of images taken from the same orientation, the DEW app displays an "onionskin" of the prior image overlaid onto the camera viewfinder. With the transparent onionskin as a guide, the user can align the images more accurately, thus allowing differences between pictures to be detected and measured. The app interacts with the UNH server via APIs (Application Programming Interfaces) that were created to allow bi-directional machine-to-machine interaction between the mobile device and the web site. Thus, the principal functions that a user can perform on the web site, such as finding post sites on a map and viewing and adding picture sets, are available on the smartphone. The development of the APIs now makes it possible not only to communicate with our own mobile app but, more importantly, opens the door for other computer systems to directly interact with our server. Our ongoing discussions with the National Phenology Network and Project Budburst have highlighted the potential (and perhaps the need) for the creation of a distributed web-service architecture whereby each national program exposes its key functionalities not only to its own mobile phone apps, but also to other organizations, in a federated system of servers, all supporting citizen-based digital earth watch programs.
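A minimal Pillow sketch of the "onionskin" overlay described above: the prior photo is blended into the live frame at partial opacity so the user can align the new shot. The file names are placeholders, and the images must share size and mode.

    # Blend the prior photo over the live frame at partial opacity.
    from PIL import Image

    prior = Image.open("prior_photo.jpg").convert("RGBA")
    live = Image.open("viewfinder_frame.jpg").convert("RGBA").resize(prior.size)

    # 35% prior image over the live frame: enough to align, not to obscure
    onionskin = Image.blend(live, prior, alpha=0.35)
    onionskin.show()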
WebViz:A Web-based Collaborative Interactive Visualization System for large-Scale Data Sets
NASA Astrophysics Data System (ADS)
Yuen, D. A.; McArthur, E.; Weiss, R. M.; Zhou, J.; Yao, B.
2010-12-01
WebViz is a web-based application designed to conduct collaborative, interactive visualizations of large data sets for multiple users, allowing researchers situated all over the world to utilize the visualization services offered by the University of Minnesota's Laboratory for Computational Sciences and Engineering (LCSE). This ongoing project has been built up over the last 3 1/2 years. The motivation behind WebViz lies primarily in the need to parse through an increasing amount of data produced by the scientific community as a result of larger and faster multicore and massively parallel computers coming to the market, including the use of general purpose GPU computing. WebViz allows these large data sets to be visualized online by anyone with an account. The application allows users to save time and resources by visualizing data 'on the fly', wherever they may be located. By leveraging AJAX via the Google Web Toolkit (http://code.google.com/webtoolkit/), we are able to provide users with a remote web portal to LCSE's (http://www.lcse.umn.edu) large-scale interactive visualization system already in place at the University of Minnesota. LCSE's custom hierarchical volume rendering software provides high resolution visualizations on the order of 15 million pixels and has been employed for visualizing data from simulations ranging from astrophysics to geophysical fluid dynamics. In the current version of WebViz, we have implemented a highly extensible back-end framework built around HTTP "server push" technology. The web application is accessible via a variety of devices including netbooks, iPhones, and other web- and JavaScript-enabled cell phones. Features in the current version include the ability for users to (1) securely log in, (2) launch multiple visualizations, (3) conduct collaborative visualization sessions, (4) delegate control of aspects of a visualization to others, and (5) engage in collaborative chats with other users within the user interface of the web application. These features are all in addition to a full range of essential visualization functions including 3-D camera and object orientation, position manipulation, time-stepping control, and custom color/alpha mapping.
Sharing knowledge of Planetary Datasets through the Web-Based PRoGIS
NASA Astrophysics Data System (ADS)
Giordano, M. G.; Morley, J. M.; Muller, J. P. M.; Barnes, R. B.; Tao, Y. T.
2015-10-01
The large amount of raw and derived data available from various planetary surface missions (e.g. Mars and Moon in our case) has been integrated with co-registered and geocoded orbital image data to provide rover traverses and camera site locations in universal global co-ordinates [1]. This then allows an integrated GIS to use these geocoded products for scientific applications: we aim to create a web interface, PRoGIS, with minimal controls focusing on the usability and visualisation of the data, to allow planetary geologists to share annotated surface observations. These observations in a common context are shared between different tools and software (PRoGIS, Pro3D, 3D point cloud viewer). Our aim is to use only Open Source components that integrate Open Web Services for planetary data to make available a universal platform with a WebGIS interface, as well as a 3D point cloud and a panorama viewer to explore derived data. On top of these tools we are building capabilities to make and share annotations amongst users. We use Python and Django for the server-side framework and OpenLayers 3 for the WebGIS client. For good performance when previewing 3D data (point clouds, pictures on the surface and panoramas) we employ ThreeJS, a WebGL JavaScript library. Additionally, user and group controls allow scientists to store and share their observations. PRoGIS not only displays data but also launches sophisticated 3D vision reprocessing (PRoVIP) and an immersive 3D analysis environment (PRo3D).
NASA Astrophysics Data System (ADS)
Paulraj, D.; Swamynathan, S.; Madhaiyan, M.
2012-11-01
Web Service composition has become indispensable, as a single web service cannot satisfy complex functional requirements. Composition of services has received much interest as a way to support business-to-business (B2B) and enterprise application integration. An important component of service composition is the discovery of relevant services. In Semantic Web Services (SWS), service discovery is generally achieved by using the service profile of the Ontology Web Language for Services (OWL-S). The profile of the service is a derived and concise description but not a functional part of the service. The information contained in the service profile is sufficient for atomic service discovery, but it is not sufficient for the discovery of composite semantic web services (CSWS). The purpose of this article is two-fold: first, to show that the process model is a better choice than the service profile for service discovery; and second, to facilitate the composition of inter-organisational CSWS by proposing a new composition method which uses process ontology. The proposed service composition approach uses an algorithm which performs a fine-grained match at the level of the atomic process rather than at the level of the entire service in a composite semantic web service. Many works in this area have proposed solutions only for the composition of atomic services; this article proposes a solution for the composition of composite semantic web services.
Eminovic, Nina; Wyatt, Jeremy C; Tarpey, Aideen M; Murray, Gerard; Ingrams, Grant J
2004-06-02
NHS Direct is a telephone triage service used by the UK public to contact a nurse for any kind of health problem. NHS Direct Online (NHSDO) extends NHS Direct, allowing the telephone to be replaced by the Internet, and introducing new opportunities for informing patients about their health. One NHSDO service under development is the Clinical Enquiry Service (CES), which uses Web chat as the communication medium. Our aim was to identify the opportunities and possible risks of such a service by exploring its safety and feasibility, and patients' perceptions of using Web chat to contact a nurse. During a six-day pilot performed in an inner-city general practice in Coventry, non-urgent patients attending their GP were asked to test the service. After filling out three Web forms, patients used a simple Web chat application to communicate with trained NHS Direct triage nurses, who responded with appropriate triage advice. All patients were seen by their GP immediately after using the Web chat service. Safety was explored by comparing the nurse triage end point with the GP's recommended end point. In order to check the feasibility of the service, we measured the duration of the chat session. Patient perceptions were measured before and after using the service through a modified Telemedicine Perception Questionnaire (TMPQ) instrument. All patients were observed by a researcher who captured any comments and, if necessary, assisted with the process. A total of 25 patients (mean age 48 years; 57% female) agreed to participate in the study. An exact match between the nurse and the GP end point was found in 45% (10/22) of cases. In two cases, the CES nurse proposed a less urgent end point than the GP. The median duration of Web chat sessions was 30 minutes, twice the median for NHS Direct telephone calls for 360 patients with similar presenting problems. There was a significant improvement in patients' perception of CES after using the service (mean pre-test TMPQ score 44/60, post-test 49/60; p=0.008 (2-tailed)). Patients volunteered several potential advantages of CES, such as the ability to re-read the answers from the nurse. Patients consider CES a useful addition to regular care, but not a replacement for it. Based on this pilot, we can conclude that CES was sufficiently safe to continue piloting, but in order to make further judgments about safety, more tests with urgent cases should be performed. The Web chat sessions as conducted were too long and therefore too expensive to be sustainable in the NHS. However, the positive reaction from patients and the potential of CES for specific patient groups (the deaf, shy, or socially isolated) encourage us to continue piloting such innovative communication methods with the public.
Automatic geospatial information Web service composition based on ontology interface matching
NASA Astrophysics Data System (ADS)
Xu, Xianbin; Wu, Qunyong; Wang, Qinmin
2008-10-01
With Web services technology, the functions of a WebGIS can be presented as geospatial information services, helping to overcome the information-isolation problem in the geospatial information sharing field. Geospatial information Web service composition, which combines outsourced services working in tandem to offer a value-added service, therefore plays a key role in fully exploiting geospatial information services. This paper proposes an automatic geospatial information web service composition algorithm that employs the lexical ontology WordNet to analyze semantic distances among service interfaces. By matching input/output parameters against the semantic meaning of pairs of service interfaces, a geospatial information web service chain can be created from a number of candidate services. An implementation of the algorithm is also presented; its results show the feasibility of the algorithm and its promise for the emerging demand for geospatial information web service composition.
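As a rough illustration of the interface-matching step, the sketch below scores the semantic distance between two parameter names with WordNet path similarity and chains services whose outputs are close enough to the next service's inputs. It assumes NLTK with the WordNet corpus installed (nltk.download("wordnet")); the threshold and parameter names are invented, and the paper's actual matching procedure may differ in detail.

```python
# A small sketch of WordNet-based interface matching using NLTK.
from nltk.corpus import wordnet as wn

def semantic_similarity(word_a, word_b):
    """Best path similarity between any senses of the two words (0..1)."""
    scores = [s1.path_similarity(s2)
              for s1 in wn.synsets(word_a)
              for s2 in wn.synsets(word_b)]
    scores = [s for s in scores if s is not None]
    return max(scores, default=0.0)

def can_chain(provider_outputs, consumer_inputs, threshold=0.5):
    """A provider can feed a consumer if every consumer input is semantically
    close to some provider output."""
    return all(
        max(semantic_similarity(out, inp) for out in provider_outputs) >= threshold
        for inp in consumer_inputs
    )

# e.g. a geocoding service producing "coordinate" feeding a map service
# that expects "location":
print(semantic_similarity("coordinate", "location"))
print(can_chain(["coordinate"], ["location"]))
```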
Online Maps and Cloud-Supported Location-Based Services across a Manifold of Devices
NASA Astrophysics Data System (ADS)
Kröpfl, M.; Buchmüller, D.; Leberl, F.
2012-07-01
Online mapping, miniaturization of computing devices, the "cloud", Global Navigation Satellite System (GNSS) and cell tower triangulation all coalesce into an entirely novel infrastructure for numerous innovative map applications. This impacts the planning of human activities, navigating and tracking these activities as they occur, and finally documenting their outcome for either a single user or a network of connected users in a larger context. In this paper, we provide an example of a simple geospatial application making use of this model, which we use to explain the basic steps necessary to deploy an application involving a web service hosting geospatial information and client software consuming the web service through an API. The application allows an insurance claim specialist to add claims, including a claim location, to a cloud-based database. A field agent then uses a smartphone application to query the database by proximity and heads out to capture photographs as supporting documentation for the claim. Once the photos have been uploaded to the web service, a second web service for image matching is called in order to try to match the current photograph to previously submitted assets. Image matching is used as a pre-verification step to determine whether the coverage of the respective object is sufficient for the claim specialist to process the claim. The development of the application was based on Microsoft's® Bing Maps™, Windows Phone™, Silverlight™, Windows Azure™ and Visual Studio™, and was completed in approximately 30 labour hours split between two developers.
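For the proximity-query step, a minimal sketch follows: it filters claims by great-circle distance from the agent's position. In the application described above this query would run against the cloud-hosted database; the records and radius here are invented.

```python
# A toy proximity query over claim records with latitude/longitude fields.
import math

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two WGS84 points, in kilometres."""
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlam = math.radians(lon2 - lon1)
    a = (math.sin(dphi / 2) ** 2
         + math.cos(phi1) * math.cos(phi2) * math.sin(dlam / 2) ** 2)
    return 2 * 6371.0 * math.asin(math.sqrt(a))

def claims_near(claims, lat, lon, radius_km):
    """Return claims within radius_km of the agent, nearest first."""
    hits = [(haversine_km(lat, lon, c["lat"], c["lon"]), c) for c in claims]
    return [c for d, c in sorted(hits, key=lambda t: t[0]) if d <= radius_km]

claims = [
    {"id": 1, "lat": 47.07, "lon": 15.44},   # Graz
    {"id": 2, "lat": 48.21, "lon": 16.37},   # Vienna
]
print(claims_near(claims, 47.00, 15.50, 25.0))  # -> claim 1 only
```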
SCHeMA web-based observation data information system
NASA Astrophysics Data System (ADS)
Novellino, Antonio; Benedetti, Giacomo; D'Angelo, Paolo; Confalonieri, Fabio; Massa, Francesco; Povero, Paolo; Tercier-Waeber, Marie-Louise
2016-04-01
It is well recognized that the need to share ocean data among non-specialized users is constantly increasing. Initiatives built upon international standards will contribute to simplifying data processing and dissemination, improve user accessibility (including through web browsers), facilitate the sharing of information across the integrated network of ocean observing systems, and ultimately provide a better understanding of ocean functioning. The SCHeMA (Integrated in Situ Chemical MApping probe) Project is developing an open and modular sensing solution for autonomous in situ high-resolution mapping of a wide range of anthropogenic and natural chemical compounds coupled to master bio-physicochemical parameters (www.schema-ocean.eu). The SCHeMA web system is designed to ensure user-friendly data discovery, access and download as well as interoperability with other projects through a dedicated interface that implements the Global Earth Observation System of Systems - Common Infrastructure (GCI) recommendations and the international Open Geospatial Consortium - Sensor Web Enablement (OGC-SWE) standards. This approach will ensure data accessibility in compliance with major European Directives and recommendations. Being modular, the system allows the plug-and-play of commercially available probes as well as new sensor probes under development within the project. Access to the network of monitoring probes is provided via a web-based system interface that, being implemented as a Sensor Observation Service (SOS), provides standard interoperability and access to sensor observations through the O&M standard, as well as sensor descriptions encoded in Sensor Model Language (SensorML). The use of common vocabularies in all metadatabases and data formats, to describe data in an already harmonized and common standard, is a prerequisite for consistency and interoperability. The SCHeMA SOS has therefore adopted the SeaVox common vocabularies populated by the SeaDataNet network of National Oceanographic Data Centres. The SCHeMA presentation layer, a fundamental part of the software architecture, offers the user bidirectional interaction with the integrated system, allowing them to manage and configure the sensor probes, view the stored observations and metadata, and handle alarms. The overall structure of the web portal developed within the SCHeMA initiative (sensor configuration, a Core Profile interface for data access via OGC standards, external services such as web services, WMS and WFS, and a data download and query manager) will be presented and illustrated with examples of ongoing tests in coastal and open sea.
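A hedged sketch of a client-side pull from such an SOS interface is shown below, using the standard SOS 2.0 key-value-pair (KVP) binding. The endpoint URL, offering identifier and observed-property URI are placeholders, not actual SCHeMA identifiers.

```python
# Fetching observations from an OGC SOS 2.0 endpoint via the KVP binding.
import requests

SOS_ENDPOINT = "https://example.org/schema-sos/service"  # hypothetical

params = {
    "service": "SOS",
    "version": "2.0.0",
    "request": "GetObservation",
    "offering": "probe-01-offering",                           # hypothetical
    "observedProperty": "http://example.org/def/trace_metal",  # hypothetical
    "temporalFilter": "om:phenomenonTime,2016-01-01/2016-01-31",
}

response = requests.get(SOS_ENDPOINT, params=params, timeout=30)
response.raise_for_status()
print(response.text[:500])  # O&M-encoded observations (XML)
```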
Graph-Based Semantic Web Service Composition for Healthcare Data Integration.
Arch-Int, Ngamnij; Arch-Int, Somjit; Sonsilphong, Suphachoke; Wanchai, Paweena
2017-01-01
Within the numerous and heterogeneous web services offered through different sources, automatic web services composition is the most convenient method for building complex business processes that permit invocation of multiple existing atomic services. The current solutions in functional web services composition lack autonomous queries of semantic matches within the parameters of web services, which are necessary in the composition of large-scale related services. In this paper, we propose a graph-based Semantic Web Services composition system consisting of two subsystems: management time and run time. The management-time subsystem is responsible for dependency graph preparation in which a dependency graph of related services is generated automatically according to the proposed semantic matchmaking rules. The run-time subsystem is responsible for discovering the potential web services and nonredundant web services composition of a user's query using a graph-based searching algorithm. The proposed approach was applied to healthcare data integration in different health organizations and was evaluated according to two aspects: execution time measurement and correctness measurement.
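The run-time search can be pictured with a small forward-chaining sketch over a toy dependency graph: a service fires once all of its input concepts are known, until the requested concepts are produced. Service names and concepts are invented, and the pruning that guarantees non-redundant compositions in the paper is omitted here.

```python
# A toy forward-chaining search over a service dependency graph.
SERVICES = {
    "PatientLookup": {"in": {"patient_id"}, "out": {"demographics"}},
    "LabFetch":      {"in": {"patient_id"}, "out": {"lab_results"}},
    "RecordMerge":   {"in": {"demographics", "lab_results"},
                      "out": {"health_record"}},
}

def compose(provided, requested):
    """Return an ordered service plan deriving `requested` from `provided`,
    or None if the dependency graph cannot satisfy the query."""
    known, plan = set(provided), []
    progress = True
    while progress and not requested <= known:
        progress = False
        for name, svc in SERVICES.items():
            if name not in plan and svc["in"] <= known:
                plan.append(name)
                known |= svc["out"]
                progress = True
    return plan if requested <= known else None

print(compose({"patient_id"}, {"health_record"}))
# -> ['PatientLookup', 'LabFetch', 'RecordMerge']
```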
BioSWR – Semantic Web Services Registry for Bioinformatics
Repchevsky, Dmitry; Gelpi, Josep Ll.
2014-01-01
Despite the variety of available Web services registries specifically aimed at the Life Sciences, their scope is usually restricted to a limited set of well-defined types of services. While dedicated registries are generally tied to a particular format, general-purpose ones adhere more closely to standards and usually rely on the Web Service Definition Language (WSDL). Although WSDL is flexible enough to support common Web service types, its lack of semantic expressiveness led to various initiatives to describe Web services via ontology languages. WSDL 2.0 descriptions have since gained a standard representation based on the Web Ontology Language (OWL). BioSWR is a novel Web services registry that provides standard Resource Description Framework (RDF) based Web service descriptions along with the traditional WSDL-based ones. The registry provides a Web-based interface for Web service registration, querying and annotation, and is also accessible programmatically via a Representational State Transfer (REST) API or using the SPARQL Protocol and RDF Query Language. The BioSWR server is located at http://inb.bsc.es/BioSWR/ and its code is available at https://sourceforge.net/projects/bioswr/ under the LGPL license. PMID:25233118
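Since the registry exposes RDF descriptions queryable via SPARQL, a client-side query might look like the sketch below. The endpoint path is an assumption, and the WSDL-RDF terms follow the W3C WSDL 2.0 RDF mapping rather than verified BioSWR details.

```python
# Querying a registry's SPARQL endpoint for service descriptions.
from SPARQLWrapper import SPARQLWrapper, JSON

endpoint = "http://inb.bsc.es/BioSWR/sparql"  # hypothetical endpoint path

sparql = SPARQLWrapper(endpoint)
sparql.setQuery("""
    PREFIX wsdl: <http://www.w3.org/ns/wsdl-rdf#>
    SELECT ?service WHERE { ?service a wsdl:Service } LIMIT 10
""")
sparql.setReturnFormat(JSON)

results = sparql.query().convert()
for row in results["results"]["bindings"]:
    print(row["service"]["value"])
```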
Integrating the Web and continuous media through distributed objects
NASA Astrophysics Data System (ADS)
Labajo, Saul P.; Garcia, Narciso N.
1998-09-01
The Web has rapidly grown to become the standard for document interchange on the Internet. At the same time, interest in transmitting continuous media flows on the Internet, and in associated applications like multimedia on demand, is also growing. Integrating both kinds of systems should allow building true hypermedia systems in which any media object can be linked from any other, taking into account temporal and spatial synchronization. One way to achieve this integration is the CORBA architecture, a standard for open distributed systems. There are also recent efforts to integrate Web and CORBA systems. We use this architecture to build a service for the distribution of data flows with timing restrictions. To integrate it with the Web we use, on one side, Java applets that can use the CORBA architecture and are embedded in HTML pages; on the other side, we also benefit from the efforts to integrate CORBA and the Web.
Sealife: a semantic grid browser for the life sciences applied to the study of infectious diseases.
Schroeder, Michael; Burger, Albert; Kostkova, Patty; Stevens, Robert; Habermann, Bianca; Dieng-Kuntz, Rose
2006-01-01
The objective of Sealife is the conception and realisation of a semantic Grid browser for the life sciences, which will link the existing Web to the currently emerging eScience infrastructure. The SeaLife Browser will allow users to automatically link a host of Web servers and Web/Grid services to the Web content they are visiting. This will be accomplished using eScience's growing number of Web/Grid services and its XML-based standards and ontologies. The browser will identify terms in the pages being browsed through the background knowledge held in ontologies. Through the use of Semantic Hyperlinks, which link identified ontology terms to servers and services, the SeaLife Browser will offer a new dimension of context-based information integration. In this paper, we give an overview of the different components of the browser and their interplay. The SeaLife Browser will be demonstrated within three application scenarios in evidence-based medicine, literature & patent mining, and molecular biology, all relating to the study of infectious diseases. The three applications vertically integrate the molecule/cell, the tissue/organ and the patient/population level by covering the analysis of high-throughput screening data for endocytosis (the molecular entry pathway into the cell), the expression of proteins in the spatial context of tissue and organs, and a high-level library on infectious diseases designed for clinicians and their patients. For more information see http://www.biote.ctu-dresden.de/sealife.
Reliable execution based on CPN and skyline optimization for Web service composition.
Chen, Liping; Ha, Weitao; Zhang, Guojun
2013-01-01
With the development of SOA, complex problems can be solved by combining available individual services and ordering them to best suit the user's requirements. Web service composition is widely used in business environments. Given the inherent autonomy and heterogeneity of component web services, it is difficult to predict the behavior of the overall composite service. Therefore, transactional properties and nonfunctional quality-of-service (QoS) properties are crucial when selecting the web services to take part in the composition. Transactional properties ensure the reliability of the composite Web service, and QoS properties identify the best candidate web services from a set of functionally equivalent services. In this paper we define a Colored Petri Net (CPN) model that incorporates the transactional properties of web services into the composition process. To ensure reliable and correct execution, unfolding processes of the CPN are followed. The execution of a transactional composite Web service (TCWS) is formalized by CPN properties. To identify the best services with respect to QoS properties from the candidate service sets formed in the TCWS-CPN, we use skyline computation to retrieve the dominant Web services. This avoids the significant information loss that results from reducing individual QoS scores to a single overall similarity score. We evaluate our approach experimentally using both real and synthetically generated datasets.
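A skyline over QoS vectors is simply the set of Pareto-non-dominated candidates; a compact sketch follows. The QoS values are toy numbers, with every dimension normalized so that smaller is better (e.g. price, latency, 1 - availability).

```python
# A minimal skyline (Pareto-dominance) computation for QoS-based selection.
def dominates(a, b):
    """a dominates b if it is no worse in every dimension and better in one."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def skyline(candidates):
    """Keep the services not dominated by any other candidate."""
    return {
        name: qos
        for name, qos in candidates.items()
        if not any(dominates(other, qos)
                   for other_name, other in candidates.items()
                   if other_name != name)
    }

candidates = {                # (price, latency, 1 - availability)
    "ws1": (0.2, 0.9, 0.1),
    "ws2": (0.4, 0.3, 0.2),
    "ws3": (0.5, 0.4, 0.3),   # dominated by ws2
}
print(skyline(candidates))    # ws1 and ws2 survive
```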
Aerts, Jozef
2017-01-01
RESTful web services are nowadays state-of-the-art for business transactions over the internet. They are, however, not much used in medical informatics and in clinical research, especially not in Europe. Our goals were to make an inventory of RESTful web services that can be used in medical informatics and clinical research, including those that can help in patient empowerment in the DACH region and in Europe, and to develop new RESTful web services for use in clinical research and regulatory review. A literature search on available RESTful web services was performed, and new RESTful web services were developed on an application server using the Java language. Most of the web services found originate from institutes and organizations in the USA, whereas no similar web services made available by European organizations could be found. New RESTful web services were developed for LOINC code lookup, for UCUM conversions and for use with CDISC standards. A comparison is made between "top down" and "bottom up" web services, the latter meant to answer concrete questions immediately. The lack of RESTful web services made available by European organizations in healthcare and medical informatics is striking. RESTful web services may in the near future play a major role in medical informatics and, when localized for German and other European languages, can help to considerably facilitate patient empowerment. This, however, requires an EU equivalent of the US National Library of Medicine.
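The "bottom up" style the author describes boils down to a single HTTP GET per concrete question. A hedged illustration follows; the base URL and JSON field names are placeholders rather than the paper's actual services.

```python
# One HTTP GET answering a concrete terminology question (illustrative only).
import requests

BASE = "https://example.org/clinres"  # placeholder for such a service

# e.g. look up the display name of a LOINC code
resp = requests.get(f"{BASE}/loinc/2093-3",
                    headers={"Accept": "application/json"},
                    timeout=15)
resp.raise_for_status()
record = resp.json()
print(record.get("display"))  # hypothetical field, e.g. the code's long name
```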
Development of web tools to disseminate space geodesy data-related products
NASA Astrophysics Data System (ADS)
Soudarin, Laurent; Ferrage, Pascale; Mezerette, Adrien
2015-04-01
In order to promote the products of the DORIS system, the French Space Agency CNES has developed and implemented on the web site of the International DORIS Service (IDS) a set of plot tools to interactively build and display time series of site positions, orbit residuals and terrestrial parameters (scale, geocenter). An interactive global map is also available to select sites and to access their information. Besides the products provided by the CNES Orbitography Team and the IDS components, these tools allow comparing the time evolution of coordinates for collocated DORIS and GNSS stations, thanks to a collaboration with the Terrestrial Frame Combination Center of the International GNSS Service (IGS). A database was created to improve the robustness and efficiency of the tools, with the objective of offering a complete web service to foster data exchange with the other geodetic services of the International Association of Geodesy (IAG). The possibility of visualizing and comparing position time series from the four main space geodetic techniques (DORIS, GNSS, SLR and VLBI) is already under way at the French level. A dedicated version of these web tools has been developed for the French Space Geodesy Research Group (GRGS). It will give access to position time series provided by the GRGS Analysis Centers involved in DORIS, GNSS, SLR and VLBI data processing for the realization of the International Terrestrial Reference Frame. In this presentation, we describe the functionalities of these tools and address some aspects of the time series (content, format).
Tools for Interdisciplinary Data Assimilation and Sharing in Support of Hydrologic Science
NASA Astrophysics Data System (ADS)
Blodgett, D. L.; Walker, J.; Suftin, I.; Warren, M.; Kunicki, T.
2013-12-01
Information consumed and produced in hydrologic analyses is interdisciplinary and massive. These factors put a heavy information-management burden on the hydrologic science community. The U.S. Geological Survey (USGS) Office of Water Information Center for Integrated Data Analytics (CIDA) seeks to assist hydrologic science investigators with all components of their scientific data management life cycle. Ongoing data publication and software development projects will be presented, demonstrating publicly available data access services and manipulation tools being developed with support from two Department of the Interior initiatives. The USGS-led National Water Census seeks to provide both data and tools in support of nationally consistent water availability estimates. Newly available data include national coverages of radar-indicated precipitation, actual evapotranspiration, water use estimates aggregated by county, and Southeast region estimates of streamflow for 12-digit hydrologic unit code watersheds. Web services making these data available, and applications to access them, will be demonstrated. Web-accessible processing services able to provide numerous streamflow statistics for any USGS daily flow record or model result time series, and other National Water Census processing tools, will also be demonstrated. The National Climate Change and Wildlife Science Center is a USGS center leading DOI-funded academic global change adaptation research. Part of its mission is to ensure that data used and produced by funded projects are available via web services and tools that streamline data management tasks in interdisciplinary science. For example, collections of downscaled climate projections, typically large collections of files that must be downloaded to be accessed, are being published using web services that allow access to the entire dataset via simple web-service requests and numerous processing tools. Recent progress on this front includes data web services for Climate Model Intercomparison Project Phase 5 based downscaled climate projections, EPA's Integrated Climate and Land Use Scenarios projections of population and land cover metrics, and MODIS-derived land cover parameters from NASA's Land Processes Distributed Active Archive Center. These new services, and ways to discover others, will be presented through a demonstration of a recently open-sourced project, accessed from a web application or a scripted workflow. Development and public deployment of server-based processing tools to subset and summarize these and other data is ongoing at CIDA with partner groups such as 52 Degrees North and Unidata. The latest progress on subsetting, spatial summarization to areas of interest, and temporal summarization via common statistical methods will be presented.
Business Value of Information Sharing and the Role of Emerging Technologies
ERIC Educational Resources Information Center
Kumar, Sanjeev
2009-01-01
Information Technology has brought significant benefits to organizations by allowing greater information sharing within and across firm boundaries leading to performance improvements. Emerging technologies such as Service Oriented Architecture (SOA) and Web2.0 have transformed the volume and process of information sharing. However, a comprehensive…
Development of a virtual multidisciplinary lung cancer tumor board in a community setting.
Stevenson, Marvaretta M; Irwin, Tonia; Lowry, Terry; Ahmed, Maleka Z; Walden, Thomas L; Watson, Melanie; Sutton, Linda
2013-05-01
Creating an effective platform for multidisciplinary tumor conferences can be challenging in the rural community setting. The Duke Cancer Network created an Internet-based platform for a multidisciplinary conference to enhance the care of patients with lung cancer. This conference incorporates providers from different physical locations within a rural community and affiliated providers from a university-based cancer center 2 hours away. An electronic Web conferencing tool connects providers aurally and visually. Conferences were set up using a commercially available Web conferencing platform. The video platform provides a secure Web site coupled with a secure teleconference platform to ensure patient confidentiality. Multiple disciplines are invited to participate, including radiology, radiation oncology, thoracic surgery, pathology, and medical oncology. Participants only need telephone access and Internet connection to participate. Patient histories and physicals are presented, and the Web conferencing platform allows radiologic and histologic images to be reviewed. Treatment plans for patients are discussed, allowing providers to coordinate care among the different subspecialties. Patients who need referral to the affiliated university-based cancer center for specialized services are identified. Pertinent treatment guidelines and journal articles are reviewed. On average, there are 10 participants with one to two cases presented per session. The use of a Web conferencing platform allows subspecialty providers throughout the community and hours away to discuss lung cancer patient cases. This platform increases convenience for providers, eliminating travel to a central location. Coordination of care for patients requiring multidisciplinary care is facilitated, shortening evaluation time before definitive treatment plan.
Web service module for access to g-Lite
NASA Astrophysics Data System (ADS)
Goranova, R.; Goranov, G.
2012-10-01
G-Lite is lightweight grid middleware for grid computing, installed on all clusters of the European Grid Infrastructure (EGI). The middleware is partially service-oriented and does not provide well-defined Web services for job management. The existing Web services in the environment cannot be directly used by grid users for building service compositions in the EGI. In this article we present a module of well-defined Web services for job management in the EGI. We describe the architecture of the module and the design of the developed Web services. The presented Web services are composable and can participate in service compositions (workflows). An example of using the module with service composition tools in g-Lite is shown.
Automating Visualization Service Generation with the WATT Compiler
NASA Astrophysics Data System (ADS)
Bollig, E. F.; Lyness, M. D.; Erlebacher, G.; Yuen, D. A.
2007-12-01
As tasks and workflows become increasingly complex, software developers are devoting increasing attention to automation tools. Among many examples, the Automator tool from Apple collects components of a workflow into a single script, with very little effort on the part of the user. Tasks are most often described as a series of instructions. The granularity of the tasks dictates the tools to use. Compilers translate fine-grained instructions to assembler code, while scripting languages (Ruby, Perl) are used to describe a series of tasks at a higher level. Compilers can also be viewed as transformational tools: a cross-compiler can translate executable code written on one computer to assembler code understood on another, while transformational tools can translate from one high-level language to another. We are interested in creating visualization web services automatically, starting from stand-alone VTK (Visualization Toolkit) code written in Tcl. To this end, using the OCaml programming language, we have developed a compiler that translates Tcl into C++, including all the stubs, classes and methods to interface with gSOAP, a C++ implementation of the SOAP 1.1/1.2 protocols. This compiler, referred to as the Web Automation and Translation Toolkit (WATT), is the first step towards automated creation of specialized visualization web services without input from the user. The WATT compiler seeks to automate all aspects of web service generation, including the transport layer, the division of labor and the details related to interface generation. The WATT compiler is part of ongoing efforts within the NSF-funded VLab consortium [1] to facilitate and automate time-consuming tasks for the science related to understanding planetary materials. Through examples of services produced by WATT for the VLab portal, we will illustrate features, limitations and the improvements necessary to achieve the ultimate goal of complete and transparent automation in the generation of web services. In particular, we will detail the generation of a charge density visualization service applicable to output from the quantum calculations of the VLab computation workflows, plus another service for mantle convection visualization. We also discuss WATT-LIVE [2], a web-based interface that allows users to interact with WATT. With WATT-LIVE, users submit Tcl code, retrieve its C++ translation with various files and scripts necessary to locally install the tailor-made web service, or launch the service for a limited session on our test server. This work is supported by NSF through the ITR grant NSF-0426867. [1] Virtual Laboratory for Earth and Planetary Materials, http://vlab.msi.umn.edu, September 2007. [2] WATT-LIVE website, http://vlab2.scs.fsu.edu/watt-live, September 2007.
mod_bio: Apache modules for Next-Generation sequencing data.
Lindenbaum, Pierre; Redon, Richard
2015-01-01
We describe mod_bio, a set of modules for the Apache HTTP server that allows users to access and query fastq, tabix, fasta and bam files through a Web browser. The data are made available in plain text, HTML, XML, JSON and JSON-P. A JavaScript-based genome browser using the JSON-P communication technique is provided as an example of a cross-domain Web service. Source code: https://github.com/lindenb/mod_bio.
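A sketch of fetching records from a mod_bio-served file as JSON follows. The host, file path and query parameters are placeholders; the exact URL scheme is defined by the module's configuration and documentation.

```python
# Illustrative fetch of a genomic region as JSON from a mod_bio-served file.
import requests

url = "http://localhost/genomes/sample.bam"          # hypothetical mapping
resp = requests.get(url,
                    params={"format": "json",        # hypothetical parameters
                            "region": "chr1:100-200"},
                    timeout=30)
resp.raise_for_status()
for record in resp.json().get("reads", []):          # hypothetical field
    print(record)
```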
Saada: A Generator of Astronomical Database
NASA Astrophysics Data System (ADS)
Michel, L.
2011-11-01
Saada transforms a set of heterogeneous FITS files or VOTables of various categories (images, tables, spectra, etc.) into a powerful database deployed on the Web. Databases are located on your host and stay independent of any external server. This job doesn't require writing code. Saada can mix data of various categories in multiple collections. Data collections can be linked to each other, forming relevant browsing paths and allowing data-mining-oriented queries. Saada supports four VO services (spectra, images, sources and TAP). Data collections can be published immediately after the deployment of the Web interface.
First experience with the new .cern Top Level Domain
NASA Astrophysics Data System (ADS)
Alvarez, E.; Malo de Molina, M.; Salwerowicz, M.; Silva De Sousa, B.; Smith, T.; Wagner, A.
2017-10-01
In October 2015, CERN's core website moved to a new address, http://home.cern, marking the launch of the brand new top-level domain .cern. In combination with a formal governance and registration policy, the IT infrastructure needed to be extended to accommodate the hosting of Web sites in this new top-level domain. We present the technical implementation, in the framework of the CERN Web Services, that provides virtual hosting and a reverse proxy solution, and that also includes the provisioning of SSL server certificates for secure communications.
Bravo, Carlos; Suarez, Carlos; González, Carolina; López, Diego; Blobel, Bernd
2014-01-01
Healthcare information is distributed across multiple heterogeneous and autonomous systems. Access to, and sharing of, distributed information sources is a challenging task. To contribute to meeting this challenge, this paper presents a formal, complete and semi-automatic transformation service from relational databases to the Web Ontology Language (OWL). The proposed service makes use of an algorithm that allows the transformation of several data models of different domains by deploying mainly inheritance rules. The paper emphasizes the relevance of integrating the proposed approach into an ontology-based interoperability service to achieve semantic interoperability.
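A toy fragment of the underlying idea is sketched below with rdflib: each table becomes an OWL class and each column a datatype property. The schema and naming scheme are invented; the paper's service additionally handles keys, relationships and the inheritance rules mentioned above.

```python
# Mapping a toy relational schema to OWL classes and datatype properties.
from rdflib import Graph, Namespace
from rdflib.namespace import OWL, RDF, RDFS

EX = Namespace("http://example.org/hospital#")  # hypothetical namespace
g = Graph()
g.bind("ex", EX)
g.bind("owl", OWL)

schema = {"Patient": ["patient_id", "name", "birth_date"]}

for table, columns in schema.items():
    g.add((EX[table], RDF.type, OWL.Class))          # table -> OWL class
    for col in columns:
        prop = EX[f"{table}_{col}"]
        g.add((prop, RDF.type, OWL.DatatypeProperty))  # column -> property
        g.add((prop, RDFS.domain, EX[table]))

print(g.serialize(format="turtle"))
```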
Web Services for Astronomical Databases: Connecting AIPS++ to the Virtual Observatory
NASA Astrophysics Data System (ADS)
Douthit, M. C.
2002-12-01
In the year 2010, the NRAO will be operating four of the world's most powerful radio telescopes: GBT, EVLA, VLBA, and ALMA (with international partnership). Multi-terabyte data sets will quickly accumulate, with twenty-five to fifty megabytes of data per second generated by ALMA and EVLA each. It will be imperative for scientists to possess software capable of automated data reduction, image synthesis, and archiving. With the evolution of AIPS++ and the recently developed concepts of the image pipeline, the participation of the NRAO in the virtual observatories of the future is now on the horizon, creating the need for fast archive access and web service development in AIPS++. When the software package began over 10 years ago, it was not designed for data transfer via the web. In response to the demands of the NVO, we have designed and implemented an application layer that will allow our system to communicate with others. Sponsored by the NRAO and California State University, San Marcos.
Web Services--A Buzz Word with Potentials
János T. Füstös
2006-01-01
The simplest definition of a web service is an application that provides a web API. The web API exposes the functionality of the solution to other applications. The web API relies on other Internet-based technologies to manage communications. The resulting web services are pervasive, vendor-independent, language-neutral, and very low-cost. The main purpose of a web API...
BioServices: a common Python package to access biological Web Services programmatically.
Cokelaer, Thomas; Pultz, Dennis; Harder, Lea M; Serra-Musach, Jordi; Saez-Rodriguez, Julio
2013-12-15
Web interfaces provide access to numerous biological databases, and many can be accessed programmatically thanks to Web Services. Building applications that combine several of them would benefit from a single framework. BioServices is a comprehensive Python framework that provides programmatic access to major bioinformatics Web Services (e.g. KEGG, UniProt, BioModels, ChEMBLdb). Wrapping additional Web Services, based either on Representational State Transfer or Simple Object Access Protocol/Web Services Description Language technologies, is eased by the use of object-oriented programming. BioServices releases and documentation are available at http://pypi.python.org/pypi/bioservices under a GPL-v3 license.
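A minimal usage sketch follows (requires `pip install bioservices`): querying KEGG for a human gene and retrieving its entry. The method names follow the BioServices documentation for its KEGG wrapper; check the current docs for exact signatures.

```python
# Programmatic access to KEGG through the BioServices framework.
from bioservices import KEGG

kegg = KEGG()
print(kegg.find("hsa", "zap70"))   # search the human gene catalogue
entry = kegg.get("hsa:7535")       # retrieve the full ZAP70 entry (flat text)
print(entry[:300])
```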
NASA Astrophysics Data System (ADS)
Yu, Weishui; Luo, Changshou; Zheng, Yaming; Wei, Qingfeng; Cao, Chengzhong
2017-09-01
To address the "last kilometer" problem in agricultural science and technology information services, we analyzed the feasibility, necessity and advantages of applying WebApps to agricultural information services, and discussed modes of using WebApps in such services based on a requirements analysis and the functions of WebApps. To overcome existing apps' drawbacks of difficult installation and weak compatibility across mobile operating systems, the Beijing Agricultural Sci-tech Service Hotline WebApp was developed based on HTML and Java technology. The WebApp has greater compatibility and simpler operation than a native app; moreover, it can be linked to the WeChat public platform, making it easy to disseminate and able to run directly without a setup process. The WebApp was used to provide agricultural expert consulting services and to push agricultural information, and achieved good preliminary results. Finally, we summarize the creative application of WebApps in agricultural consulting services and discuss prospects for WebApps in agricultural information services.
Kibria, Muhammad Golam; Ali, Sajjad; Jarwar, Muhammad Aslam; Kumar, Sunil; Chong, Ilyoung
2017-09-22
Due to the very large number of connected virtual objects in the surrounding environment, intelligent service features in the Internet of Things require the reuse of existing virtual objects and composite virtual objects. If a new virtual object were created for each new service request, the number of virtual objects would increase exponentially. The Web of Objects applies the principle of service modularity in terms of virtual objects and composite virtual objects. Service modularity is a key concept in the Web Objects-enabled Internet of Things (IoT) environment, which allows for the reuse of existing virtual objects and composite virtual objects across heterogeneous ontologies. When similar service requests occur at the same or different locations, the already-instantiated virtual objects and their composites that exist in the same or different ontologies can be reused. In this case, similar types of virtual objects and composite virtual objects are searched for and matched. Their reuse avoids duplication under similar circumstances and reduces the time it takes to search and instantiate them from their repositories, where similar functionalities are provided by similar types of virtual objects and their composites. Controlling and maintaining a virtual object means controlling and maintaining a real-world object in the real world. Even though the functional costs of virtual objects are just a fraction of those for deploying and maintaining real-world objects, this article focuses on reusing virtual objects and composite virtual objects, and discusses similarity matching of virtual objects and composite virtual objects. This article proposes a logistic model that supports service modularity for the promotion of reusability in the Web Objects-enabled IoT environment. Necessary functional components and a flowchart of an algorithm for reusing composite virtual objects are discussed. Also, to realize the service modularity, a use case scenario is studied and implemented.
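One simple way to picture the similarity-matching step is set overlap over capability descriptors, as in the sketch below. The descriptors, registry entries and threshold are invented for illustration; the article's actual matching procedure is richer.

```python
# Toy similarity match for virtual-object reuse via Jaccard overlap.
def jaccard(a, b):
    """Set overlap in [0, 1]."""
    return len(a & b) / len(a | b) if a | b else 0.0

def find_reusable(request, registry, threshold=0.75):
    """Return the best-matching already-instantiated VO, if similar enough;
    otherwise return None so a new VO is instantiated."""
    best = max(registry,
               key=lambda vo: jaccard(request, vo["capabilities"]),
               default=None)
    if best and jaccard(request, best["capabilities"]) >= threshold:
        return best
    return None

registry = [
    {"id": "vo-lamp-12", "capabilities": {"on_off", "dimming", "location:room1"}},
    {"id": "vo-cam-07",  "capabilities": {"stream", "pan", "location:room1"}},
]
request = {"on_off", "dimming", "location:room1"}
print(find_reusable(request, registry))   # reuses vo-lamp-12
```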
NASA Astrophysics Data System (ADS)
Boler, F.; Meertens, C.
2012-04-01
The UNAVCO Data Center in Boulder, Colorado, archives for preservation and distributes geodesy data and products in the GNSS, InSAR, and LiDAR domains to the scientific and education community. The GNSS data, which in addition to geodesy are useful for tectonic, volcanologic, ice mass, glacial isostatic adjustment, meteorological and other studies, come from 2,500 continuously operating stations and 8000 survey-mode observation points around the globe that are operated by over 100 U.S. and international members of the UNAVCO consortium. SAR data, which are in many ways complementary to the GNSS data collection, have been acquired in concert with the WInSAR Consortium activities and with EarthScope, with a focus on the western United States. UNAVCO also holds a growing collection of terrestrial laser scanning data. Several partner US geodesy data centers, along with UNAVCO, have developed and are in the process of implementing the Geodesy Seamless Archive Centers, a web-services-based technology to facilitate the exchange of metadata and delivery of data and products to users. These services utilize a repository layer implemented at each data center, and a service layer to identify and present any data center-specific services and capabilities, allowing simplified vertical federation of metadata from independent data centers. UNAVCO also has built web services for SAR data discovery and delivery, and will partner with other SAR data centers and institutions to provide InSAR scientists with access to SAR data and ancillary data sets, web services to produce interferograms, and mechanisms to archive and distribute the resulting higher-level products. Improved access to LiDAR data from space-based, airborne, and terrestrial platforms through the utilization of web services is likewise currently under development. These efforts in cyberinfrastructure, while initially aimed at intra-domain data sharing and providing products for research and education, are envisioned as potentially serving as the basis for leveraging integrated access across a broad set of Earth science domains.
NASA Astrophysics Data System (ADS)
Rinaldi, Giovanni; Gaddi, Antonio; Cicero, Arrigo; Bonsanto, Fabio; Carnevali, Lucio
Elderly care is characterized by the integration of health and social care; it is performed by several different agencies and requires a continuum of assistance rather than an episodic approach. These considerations have led us to move beyond the traditional ICT approach of separate applications connected through proprietary mechanisms, towards the new solutions proposed by Web 2.0. We think that a federative platform can support the needs of the elderly (identified through focus groups) by providing them with Web 2.0 solutions in health care. The main features of the federation are publish/subscribe, messaging and signalling. These dimensions allow the elderly to participate actively in their own healthcare through access to their own health and social record, the composition of their own health and social space, and participation in social networks for remaining socially active. Moreover, a process of dis-intermediation is also enabled. Particular emphasis is placed on the governance of the services within the federative platform, as requested by the elderly.
Information Retrieval System for Japanese Standard Disease-Code Master Using XML Web Service
Hatano, Kenji; Ohe, Kazuhiko
2003-01-01
An information retrieval system for the Japanese Standard Disease-Code Master using XML Web Services has been developed. XML Web Services constitute a distributed processing approach built on standard Internet technologies. With the seamless remote method invocation of an XML Web Service, users are able to get the latest disease-code master information from rich desktop applications or Internet web sites that refer to this service. PMID:14728364
Lyles, Courtney Rees; Harris, Lynne T; Le, Tung; Flowers, Jan; Tufano, James; Britt, Diane; Hoath, James; Hirsch, Irl B; Goldberg, Harold I; Ralston, James D
2011-05-01
Drawing on previous web-based diabetes management programs based on the Chronic Care Model, we expanded an intervention to include care management through mobile phones and a game console web browser. The pilot intervention enrolled eight diabetes patients from the University of Washington in Seattle into a collaborative care program: connecting them to a care provider specializing in diabetes, providing access to their full electronic medical record, allowing wireless glucose uploads and e-mail with providers, and connecting them to the program's web services through a game system. To evaluate the study, we conducted qualitative thematic analysis of semistructured interviews. Participants expressed frustrations with using the cell phones and the game system in their everyday lives, but liked the wireless system for collaborating with a provider on uploaded glucoses and receiving automatic feedback on their blood sugar trends. A majority of participants also expressed that their participation in the trial increased their health awareness. Mobile communication technologies showed promise within a web-based collaborative care program for type 2 diabetes. Future intervention design should focus on integrating easy-to-use applications within mobile technologies already familiar to patients and ensure the system allows for sufficient collaboration with a care provider.
The International Solid Earth Research Virtual Observatory
NASA Astrophysics Data System (ADS)
Fox, G.; Pierce, M.; Rundle, J.; Donnellan, A.; Parker, J.; Granat, R.; Lyzenga, G.; McLeod, D.; Grant, L.
2004-12-01
We describe the architecture and initial implementation of the International Solid Earth Research Virtual Observatory (iSERVO). This has been prototyped within the USA as SERVOGrid, and expansion is planned to Australia, China, Japan and other countries. We base our design on a globally scalable distributed "cyber-infrastructure", or Grid, built around a Web Services-based approach consistent with the extended Web Service Interoperability approach. The Solid Earth Science Working Group of NASA has identified several challenges for Earth Science research. In order to investigate these, we need to couple numerical simulation codes and data mining tools to observational data sets. These observational data are now available online in Internet-accessible forms, and the quantity of this data is expected to grow explosively over the next decade. We architect iSERVO as a loosely federated Grid of Grids, with each country involved supporting a national Solid Earth Research Grid. The national Grid operations, possibly with dedicated control centers, are linked together to support iSERVO, where an international Grid control center may eventually be necessary. We address the difficult multi-administrative-domain security and ownership issues by exposing capabilities as services for which the risk of abuse is minimized. We support large-scale simulations within a single domain using service-hosted tools (mesh generation, data repository and sensor access, GIS, visualization). Simulations typically involve sequential or parallel machines in a single domain supported by cross-continent services. We use Web Services to implement a Service Oriented Architecture (SOA), using WSDL for service description and SOAP for message formats. These are augmented by UDDI, WS-Security, WS-Notification/Eventing and WS-ReliableMessaging in the WS-I+ approach. Support for the latter two capabilities will be available over the next 6 months from the NaradaBrokering messaging system. We augment these specifications with the powerful portlet architecture using WSRP and JSR168, supported by such portal containers as uPortal, WebSphere, and Apache JetSpeed2. The latter portal aggregates component user interfaces for each iSERVO service, allowing flexible customization of the user interface. We exploit the portlets produced by the NSF NMI (Middleware Initiative) OGCE activity. iSERVO also uses specifications from the Open Geographical Information Systems (GIS) Consortium (OGC), which defines a number of standards for modeling earth surface feature data and services for interacting with this data. The data models are expressed in the XML-based Geography Markup Language (GML), and the OGC service framework is being adapted to use the Web Service model. The SERVO prototype includes a GIS Grid that currently includes the core WMS and WFS (Map and Feature) services. We will follow best practice in the Grid and Web Service field and will adapt our technology as appropriate. For example, we expect to support services built on WS-RF when it is finalized, and to make use of the database interfaces OGSA-DAI and its WS-I+ versions. Finally, we review advances in Web Service scripting (such as HPSearch) and workflow systems (such as GCF) and their applications to iSERVO.
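For readers unfamiliar with the message layer this stack implies, a bare-bones SOAP 1.1 call sent over plain HTTP looks like the sketch below. The endpoint, namespace and operation are placeholders; in practice a client would be generated from the service's WSDL.

```python
# A minimal hand-rolled SOAP 1.1 request (illustrative placeholders only).
import requests

ENDPOINT = "https://example.org/iservo/FaultService"   # hypothetical
envelope = """<?xml version="1.0" encoding="UTF-8"?>
<soap:Envelope xmlns:soap="http://schemas.xmlsoap.org/soap/envelope/">
  <soap:Body>
    <getFaults xmlns="http://example.org/iservo"><!-- hypothetical operation -->
      <region>California</region>
    </getFaults>
  </soap:Body>
</soap:Envelope>"""

resp = requests.post(ENDPOINT, data=envelope.encode("utf-8"),
                     headers={"Content-Type": "text/xml; charset=utf-8",
                              "SOAPAction": '""'},
                     timeout=30)
print(resp.status_code, resp.text[:300])
```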
Chemozart: a web-based 3D molecular structure editor and visualizer platform.
Mohebifar, Mohamad; Sajadi, Fatemehsadat
2015-01-01
Chemozart is a 3D molecule editor and visualizer built on top of native web components. It offers an easy-to-access service, a user-friendly graphical interface and a modular design. It is a client-centric web application which communicates with the server via a representational state transfer (REST) style web service. Both the client-side and server-side applications are written in JavaScript. A combination of JavaScript and HTML is used to draw three-dimensional structures of molecules, and a three-dimensional visualization tool is provided with the help of WebGL. Using CSS3 and HTML5, a user-friendly interface is composed. More than 30 packages are used to compose this application, which gives it enough flexibility to be extended. Molecular structures can be drawn on all types of platforms, and the application is compatible with mobile devices. No installation is required in order to use this application, and it can be accessed through the internet. The application can be extended on both the server side and the client side by implementing modules in JavaScript. Molecular compounds are drawn on the HTML5 Canvas element using a WebGL context. Chemozart is a chemical platform which is powerful, flexible, and easy to access. It provides an online web-based tool for chemical visualization along with result-oriented optimization for a cloud-based API (application programming interface). JavaScript libraries which allow the creation of web pages containing interactive three-dimensional molecular structures have also been made available. The application has been released under the Apache 2 License and is available from the project website https://chemozart.com.
Implementing a distributed intranet-based information system.
O'Kane, K C; McColligan, E E; Davis, G A
1996-11-01
The article discusses Internet and intranet technologies and describes how to install an intranet-based information system using the Merle language facility and other readily available components. Merle is a script language designed to support decentralized medical record information retrieval applications on the World Wide Web. The goal of this work is to provide a script language tool to facilitate the construction of efficient, fully functional, multipoint medical record information systems that can be accessed anywhere by low-cost Web browsers to search, retrieve, and analyze patient information. The language allows legacy MUMPS applications to function in a Web environment and to make use of the Web's graphical, sound, and video presentation services. It also permits downloading of script applets for execution on client browsers, and it can be used in standalone mode with the Unix, Windows 95, Windows NT, and OS/2 operating systems.
NASA Astrophysics Data System (ADS)
Kirsch, Peter; Breen, Paul
2013-04-01
We wish to highlight the outputs of a project conceived from a science requirement to improve discovery of, and access to, Antarctic meteorological data in near real-time. Given that the data are distributed in both the spatial and temporal domains and are accessed across several science disciplines, the creation of an interoperable, OGC-compliant web service was deemed the most appropriate approach. We demonstrate an implementation of the OGC SOS Interface Standard to discover, browse, and access Antarctic meteorological data sets. A selection of programmatic (R, Perl) and web client interfaces utilizing open technologies (e.g. jQuery, Flot, OpenLayers) will be demonstrated. In addition, we show how high-level abstractions can be constructed to allow users flexible and straightforward access to SOS-retrieved data.
Using Satellite Remote Sensing to Assist the National Weather Service (NWS) in Storm Damage Surveys
NASA Technical Reports Server (NTRS)
Schultz, Lori A.; Molthan, Andrew; McGrath, Kevin; Bell, Jordan; Cole, Tony; Burks, Jason
2016-01-01
In the United States, the National Oceanic and Atmospheric Administration (NOAA) National Weather Service (NWS) is charged with performing damage assessments when storm or tornado damage is suspected after a severe weather event. This has led to the development of the Damage Assessment Toolkit (DAT), an application for smartphones, tablets and web browsers that allows for the collection, geolocation, and aggregation of damage indicators recorded during storm surveys.
Persistence and availability of Web services in computational biology.
Schultheiss, Sebastian J; Münch, Marc-Christian; Andreeva, Gergana D; Rätsch, Gunnar
2011-01-01
We have conducted a study on the long-term availability of bioinformatics Web services: an observation of 927 Web services published in the annual Nucleic Acids Research Web Server Issues between 2003 and 2009. We found that 72% of Web sites are still available at the published addresses, only 9% of services are completely unavailable. Older addresses often redirect to new pages. We checked the functionality of all available services: for 33%, we could not test functionality because there was no example data or a related problem; 13% were truly no longer working as expected; we could positively confirm functionality only for 45% of all services. Additionally, we conducted a survey among 872 Web Server Issue corresponding authors; 274 replied. 78% of all respondents indicate their services have been developed solely by students and researchers without a permanent position. Consequently, these services are in danger of falling into disrepair after the original developers move to another institution, and indeed, for 24% of services, there is no plan for maintenance, according to the respondents. We introduce a Web service quality scoring system that correlates with the number of citations: services with a high score are cited 1.8 times more often than low-scoring services. We have identified key characteristics that are predictive of a service's survival, providing reviewers, editors, and Web service developers with the means to assess or improve Web services. A Web service conforming to these criteria receives more citations and provides more reliable service for its users. The most effective way of ensuring continued access to a service is a persistent Web address, offered either by the publishing journal, or created on the authors' own initiative, for example at http://bioweb.me. The community would benefit the most from a policy requiring any source code needed to reproduce results to be deposited in a public repository.
Exploring NASA OMI Level 2 Data With Visualization
NASA Technical Reports Server (NTRS)
Wei, Jennifer; Yang, Wenli; Johnson, James; Zhao, Peisheng; Gerasimov, Irina; Pham, Long; Vicente, Gilberto
2014-01-01
Satellite data products are important for a wide variety of applications that can bring far-reaching benefits to the science community and the broader society. These benefits can best be achieved if the satellite data are well utilized and interpreted, for example as model inputs or in characterizing extreme events such as volcanic eruptions and dust storms. Unfortunately, this is not always the case, despite the abundance and relative maturity of numerous satellite data products provided by NASA and other organizations. Such obstacles may be avoided by allowing users to visualize satellite data as "images", with accurate pixel-level (Level-2) information, including pixel coverage area delineation and science-team-recommended quality screening for individual geophysical parameters. We present a prototype service from the Goddard Earth Sciences Data and Information Services Center (GES DISC) supporting Aura OMI Level-2 data with GIS-like capabilities. Functionality includes selecting data sources (e.g., multiple parameters under the same scene, like NO2 and SO2, or the same parameter with different aggregation methods, like NO2 in the OMNO2G and OMNO2D products), user-defined area-of-interest and temporal extents, zooming, panning, overlaying, sliding, and data subsetting, reformatting, and reprojection. The system will allow any user-defined portal interface (front end) to connect to our back-end server with OGC standard-compliant Web Mapping Service (WMS) and Web Coverage Service (WCS) calls. This back-end service should greatly enhance its expandability to integrate additional outside data and map sources.
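A minimal sketch of the kind of OGC-standard call such a back-end accepts: a WMS 1.3.0 GetMap request issued from Python. The endpoint URL, layer name and time value are illustrative assumptions, not details given in the entry; only the parameter names follow the WMS 1.3.0 KVP binding.

    import requests

    # Hypothetical endpoint; the GES DISC prototype's actual URL is not given in the entry.
    WMS_URL = "https://example.gsfc.nasa.gov/wms"

    params = {
        "service": "WMS",
        "version": "1.3.0",
        "request": "GetMap",
        "layers": "OMI_NO2",        # hypothetical layer name for an OMI NO2 parameter
        "crs": "EPSG:4326",
        "bbox": "-30,-120,60,-60",  # lat/lon axis order for EPSG:4326 in WMS 1.3.0
        "width": 1024,
        "height": 768,
        "format": "image/png",
        "time": "2014-01-01",
    }

    resp = requests.get(WMS_URL, params=params, timeout=60)
    resp.raise_for_status()
    with open("omi_no2.png", "wb") as f:
        f.write(resp.content)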
Space Physics Data Facility Web Services
NASA Technical Reports Server (NTRS)
Candey, Robert M.; Harris, Bernard T.; Chimiak, Reine A.
2005-01-01
The Space Physics Data Facility (SPDF) Web services provide a distributed programming interface to a portion of the SPDF software. (A general description of Web services is available at http://www.w3.org/ and in many current software-engineering texts and articles focused on distributed programming.) The SPDF Web services distributed programming interface enables additional collaboration and integration of the SPDF software system with other software systems, in furtherance of the SPDF mission to lead collaborative efforts in the collection and utilization of space physics data and mathematical models. This programming interface conforms to all applicable Web services specifications of the World Wide Web Consortium. The interface is specified by a Web Services Description Language (WSDL) file. The SPDF Web services software consists of the following components: 1) a server program for implementation of the Web services; and 2) a software developer's kit that consists of a WSDL file, a less formal description of the interface, a Java class library (which further eases development of Java-based client software), and Java source code for an example client program that illustrates the use of the interface.
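Because the interface is specified by a WSDL file, a client in any SOAP-capable language can be generated from it, not only Java. A minimal sketch using the Python zeep library, with a placeholder WSDL location and a hypothetical operation name (the entry does not list the actual SPDF operations):

    from zeep import Client

    # Placeholder WSDL location; substitute the actual SPDF Web services WSDL.
    client = Client("https://example.nasa.gov/spdf/services?wsdl")

    # Dump the operations and types that zeep generated from the WSDL.
    client.wsdl.dump()

    # Invoke an operation by name; "getObservatories" is a hypothetical example,
    # not a confirmed SPDF operation.
    result = client.service.getObservatories()
    print(result)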
The EMBRACE web service collection
Pettifer, Steve; Ison, Jon; Kalaš, Matúš; Thorne, Dave; McDermott, Philip; Jonassen, Inge; Liaquat, Ali; Fernández, José M.; Rodriguez, Jose M.; Partners, INB-; Pisano, David G.; Blanchet, Christophe; Uludag, Mahmut; Rice, Peter; Bartaseviciute, Edita; Rapacki, Kristoffer; Hekkelman, Maarten; Sand, Olivier; Stockinger, Heinz; Clegg, Andrew B.; Bongcam-Rudloff, Erik; Salzemann, Jean; Breton, Vincent; Attwood, Teresa K.; Cameron, Graham; Vriend, Gert
2010-01-01
The EMBRACE (European Model for Bioinformatics Research and Community Education) web service collection is the culmination of a 5-year project that set out to investigate issues involved in developing and deploying web services for use in the life sciences. The project concluded that in order for web services to achieve widespread adoption, standards must be defined for the choice of web service technology, for semantically annotating both service function and the data exchanged, and a mechanism for discovering services must be provided. Building on this, the project developed: EDAM, an ontology for describing life science web services; BioXSD, a schema for exchanging data between services; and a centralized registry (http://www.embraceregistry.net) that collects together around 1000 services developed by the consortium partners. This article presents the current status of the collection and its associated recommendations and standards definitions. PMID:20462862
Drory Retwitzer, Matan; Polishchuk, Maya; Churkin, Elena; Kifer, Ilona; Yakhini, Zohar; Barash, Danny
2015-01-01
Searching for RNA sequence-structure patterns is becoming an essential tool for RNA practitioners. Novel discoveries of regulatory non-coding RNAs in targeted organisms and the motivation to find them across a wide range of organisms have prompted the use of computational RNA pattern matching as an enhancement to sequence similarity. State-of-the-art programs differ in the flexibility of patterns allowed as queries and in their simplicity of use. In particular, no existing method is available as a user-friendly web server. A general program that searches for RNA sequence-structure patterns is RNA Structator. However, it is not available as a web server and does not provide the option of a flexible gap pattern representation, with an upper bound on the gap length specified at any position in the sequence. Here, we introduce RNAPattMatch, a web-based application that is user friendly and makes sequence/structure RNA queries accessible to practitioners of various backgrounds and proficiencies. It also extends RNA Structator, allowing a more flexible representation of variable gaps, in addition to analysis of results using energy minimization methods. The RNAPattMatch service is available at http://www.cs.bgu.ac.il/rnapattmatch. A standalone version of the search tool is also available to download at the site. PMID:25940619
McDonald, Sandra A; Ryan, Benjamin J; Brink, Amy; Holtschlag, Victoria L
2012-02-01
Informatics systems, particularly those that provide capabilities for data storage, specimen tracking, retrieval, and order fulfillment, are critical to the success of biorepositories and other laboratories engaged in translational medical research. A crucial item, one easily overlooked, is an efficient way to receive and process investigator-initiated requests. A successful electronic ordering system should allow request processing in a maximally efficient manner, while also allowing streamlined tracking and mining of request data such as turnaround times and numerical categorizations (user groups, funding sources, protocols, and so on). Ideally, an electronic ordering system also facilitates the initial contact between the laboratory and customers, while still allowing for downstream communications and other steps toward scientific partnerships. We describe here the recently established Web-based ordering system for the biorepository at Washington University Medical Center, along with its benefits for workflow, tracking, and customer service. Because of the system's numerous value-added impacts, we think our experience can serve as a good model for other customer-focused biorepositories, especially those currently using manual or non-Web-based request systems. Our lessons learned also apply to the informatics developers who serve such biobanks.
Reliability Prediction of Ontology-Based Service Compositions Using Petri Net and Time Series Models
Li, Jia; Xia, Yunni; Luo, Xin
2014-01-01
OWL-S, one of the most important Semantic Web service ontologies proposed to date, provides a core ontological framework and guidelines for describing the properties and capabilities of web services in an unambiguous, computer-interpretable form. Predicting the reliability of composite service processes specified in OWL-S allows service users to decide whether a process meets their quantitative quality requirements. In this study, we consider the runtime quality of services to be fluctuating and introduce a dynamic framework to predict the runtime reliability of services specified in OWL-S, employing the Non-Markovian Stochastic Petri Net (NMSPN) and a time series model. The framework includes the following steps: obtaining the historical response-time series of individual service components; fitting these series with an autoregressive moving average (ARMA) model and predicting the future firing rates of service components; mapping the OWL-S process into an NMSPN model; and employing the predicted firing rates as the model input of the NMSPN and calculating the normal completion probability as the reliability estimate. In the case study, a comparison between the static model and our approach based on experimental data is presented, and it is shown that our approach achieves higher prediction accuracy. PMID:24688429
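For reference, the ARMA fit referred to above has the standard textbook form: a response-time series X_t is modeled as ARMA(p, q),

    X_t = c + \varepsilon_t + \sum_{i=1}^{p} \phi_i X_{t-i} + \sum_{j=1}^{q} \theta_j \varepsilon_{t-j}

where \phi_i are the autoregressive coefficients, \theta_j the moving-average coefficients, and \varepsilon_t white noise. The fitted model is extrapolated to predict future response times, from which the NMSPN firing rates are derived; note that the mapping from predicted response times to firing rates is described only qualitatively in the entry.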
IRIS Earthquake Browser with Integration to the GEON IDV for 3-D Visualization of Hypocenters.
NASA Astrophysics Data System (ADS)
Weertman, B. R.
2007-12-01
We present a new generation of web-based earthquake query tool: the IRIS Earthquake Browser (IEB). The IEB combines the DMC's large set of earthquake catalogs (provided by USGS/NEIC, ISC and the ANF) with the popular Google Maps web interface. With the IEB you can quickly and easily find earthquakes in any region of the globe. Using Google's detailed satellite images, earthquakes can be easily co-located with natural geographic features such as volcanoes as well as man-made features such as commercial mines. A set of controls allows earthquakes to be filtered by time, magnitude, and depth range as well as by catalog name, contributor name and magnitude type. Displayed events can be easily exported in NetCDF format into the GEON Integrated Data Viewer (IDV), where hypocenters may be visualized in three dimensions. Looking "under the hood", the IEB is based on AJAX technology and utilizes REST-style web services hosted at the IRIS DMC. The IEB is part of a broader effort at the DMC aimed at making our data holdings available via web services. The IEB is useful both educationally and as a research tool.
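The time, magnitude and depth filtering described above maps directly onto the event web services the IRIS DMC exposes today. A sketch against the current FDSN event service (parameter names follow the FDSN Web Services specification; the 2007-era IEB back-end predates that interface and likely differed):

    import requests

    # Present-day IRIS FDSN event endpoint.
    URL = "https://service.iris.edu/fdsnws/event/1/query"

    params = {
        "starttime": "2007-01-01",
        "endtime": "2007-12-31",
        "minmagnitude": 6.0,
        "maxdepth": 100,       # km
        "format": "text",      # pipe-separated rows; first line is a header
    }

    resp = requests.get(URL, params=params, timeout=60)
    resp.raise_for_status()
    for line in resp.text.splitlines()[1:5]:
        print(line)            # EventID|Time|Latitude|Longitude|Depth|...|Magnitude|...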
Protecting Students' Intellectual Property in the Web Plagiarism Detection Process
ERIC Educational Resources Information Center
Butakov, Sergey; Dyagilev, Vadim; Tskhay, Alexander
2012-01-01
Learning management systems (LMS) play a central role in communications in online and distance education. In the digital era, with all the information now accessible at students' fingertips, plagiarism detection services (PDS) have become a must-have part of LMS. Such integration provides a seamless experience for users, allowing PDS to check…
A Platform for Learning Internet of Things
ERIC Educational Resources Information Center
Bogdanovic, Zorica; Simic, Konstantin; Milutinovic, Miloš; Radenkovic, Božidar; Despotovic-Zrakic, Marijana
2014-01-01
This paper presents a model for conducting Internet of Things (IoT) classes based on a web-service oriented cloud platform. The goal of the designed model is to provide university students with knowledge about IoT concepts, possibilities, and business models, and allow them to develop basic system prototypes using general-purpose microdevices and…
InterMine Webservices for Phytozome
DOE Office of Scientific and Technical Information (OSTI.GOV)
Carlson, Joseph; Hayes, David; Goodstein, David
2014-01-10
A data warehousing framework for biological information provides a useful infrastructure for providers and users of genomic data. For providers, the infrastructure gives a consistent mechanism for extracting raw data; for users, the web services supported by the software allow them to make either simple and common, or complex and unique, queries of the data.
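A minimal sketch of such a query using the intermine Python client, assuming the Phytozome InterMine ("PhytoMine") service endpoint and illustrative model paths and constraint values (the exact paths depend on the deployed data model):

    from intermine.webservice import Service

    # PhytoMine endpoint assumed from the entry's context; substitute as needed.
    service = Service("https://phytozome.jgi.doe.gov/phytomine/service")

    # A simple, common query: identifiers and lengths of genes in one organism.
    query = service.new_query("Gene")
    query.add_view("primaryIdentifier", "length")
    query.add_constraint("organism.shortName", "=", "G. max", code="A")  # hypothetical value

    for row in query.rows(size=10):
        print(row["primaryIdentifier"], row["length"])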
VO-KOREL: A Fourier Disentangling Service of the Virtual Observatory
NASA Astrophysics Data System (ADS)
Škoda, Petr; Hadrava, Petr; Fuchs, Jan
2012-04-01
VO-KOREL is a web service exploiting the technology of the Virtual Observatory to provide astronomers with an intuitive graphical front-end and a distributed computing back-end running the most recent version of the Fourier disentangling code KOREL. The system integrates the ideas of the e-shop basket, conserving the privacy of every user through transfer encryption and access authentication, with the features of a laboratory notebook, allowing easy housekeeping of both input parameters and final results, and it explores the newly emerging technology of cloud computing. While the web-based front-end allows the user to submit data and parameter files, edit parameters, manage a job list, resubmit or cancel running jobs and, most importantly, watch the textual and graphical results of the disentangling process, the main part of the back-end is a simple job-queue submission system executing multiple instances of the FORTRAN code KOREL in parallel. This may be easily extended for GRID-based deployment on massively parallel computing clusters. A short introduction to the underlying technologies is given, briefly mentioning the advantages as well as the bottlenecks of the design used.
Real-time GIS data model and sensor web service platform for environmental data management.
Gong, Jianya; Geng, Jing; Chen, Zeqiang
2015-01-09
Effective environmental data management matters for human health. In the past, it typically involved developing a specific environmental data management system, an approach that often lacks real-time data retrieval and sharing/interoperation capabilities. With the development of information technology, a Geospatial Service Web method has been proposed that can be employed for environmental data management. The purpose of this study is to determine a method to realize environmental data management under the Geospatial Service Web framework. To this end, a real-time GIS (Geographic Information System) data model and a Sensor Web service platform are proposed: the real-time GIS data model manages real-time data, and the Sensor Web service platform, built on Sensor Web technologies, is implemented to support the realization of the data model. Real-time environmental data, such as meteorological, air quality, soil moisture, soil temperature, and landslide data, are managed in the Sensor Web service platform. In addition, two use cases, real-time air quality monitoring and real-time soil moisture monitoring based on the real-time GIS data model in the Sensor Web service platform, are realized and demonstrated. The total response times in the two experiments are 3.7 s and 9.2 s, respectively. The experimental results show that integrating the real-time GIS data model with the Sensor Web service platform is an effective way to manage environmental data under the Geospatial Service Web framework.
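The entry does not name the service interface, but Sensor Web platforms of this kind commonly implement the OGC Sensor Observation Service (SOS). Under that assumption, retrieving real-time observations looks like the following sketch; the endpoint, offering and observed property are hypothetical:

    import requests

    SOS_URL = "https://example.org/sos"   # hypothetical endpoint

    params = {
        "service": "SOS",
        "version": "2.0.0",
        "request": "GetObservation",
        "offering": "air_quality",            # hypothetical offering id
        "observedProperty": "PM2.5",          # hypothetical property
        # SOS 2.0 KVP temporal filter: value reference, then an ISO 8601 period.
        "temporalFilter": "om:phenomenonTime,2015-01-01T00:00:00Z/2015-01-02T00:00:00Z",
    }

    resp = requests.get(SOS_URL, params=params, timeout=60)
    resp.raise_for_status()
    print(resp.text[:500])   # O&M-encoded XML observations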
Biowep: a workflow enactment portal for bioinformatics applications
Romano, Paolo; Bartocci, Ezio; Bertolini, Guglielmo; De Paoli, Flavio; Marra, Domenico; Mauri, Giancarlo; Merelli, Emanuela; Milanesi, Luciano
2007-01-01
Background The huge amount of biological information, its distribution over the Internet and the heterogeneity of available software tools make the adoption of new data integration and analysis network tools a necessity in bioinformatics. ICT standards and tools, like Web Services and Workflow Management Systems (WMS), can support the creation and deployment of such systems. Many Web Services are already available and some WMS have been proposed. They assume that researchers know which bioinformatics resources can be reached through a programmatic interface and that they are skilled in programming and building workflows. Therefore, they are not viable for the majority of unskilled researchers. A portal enabling these researchers to benefit from new technologies is still missing. Results We designed biowep, a web-based client application that allows for the selection and execution of a set of predefined workflows. The system is available on-line. The biowep architecture includes a Workflow Manager, a User Interface and a Workflow Executor. The task of the Workflow Manager is the creation and annotation of workflows. These can be created by using either the Taverna Workbench or BioWMS. Enactment of workflows is carried out by FreeFluo for Taverna workflows and by BioAgent/Hermes, a mobile agent-based middleware, for BioWMS ones. The main processing steps of workflows are annotated on the basis of their input and output, elaboration type and application domain by using a classification of bioinformatics data and tasks. The interface supports user authentication and profiling. Workflows can be selected on the basis of users' profiles and can be searched through their annotations. Results can be saved. Conclusion We developed a web system that supports the selection and execution of predefined workflows, thus simplifying access for all researchers. The implementation of Web Services allowing specialized software to interact with an exhaustive set of biomedical databases and analysis software, and the creation of effective workflows, can significantly improve the automation of in-silico analysis. Biowep is available for interested researchers as a reference portal. They are invited to submit their workflows to the workflow repository. Biowep is being further developed in the sphere of the Laboratory of Interdisciplinary Technologies in Bioinformatics – LITBIO. PMID:17430563
Similarity Based Semantic Web Service Match
NASA Astrophysics Data System (ADS)
Peng, Hui; Niu, Wenjia; Huang, Ronghuai
Semantic web service discovery aims at returning the most closely matching advertised services to the service requester by comparing the semantics of the requested service with those of advertised services. The semantics of a web service are described in terms of inputs, outputs, preconditions and results in the Ontology Web Language for Services (OWL-S), which is formalized by the W3C. In this paper we propose an algorithm to calculate the semantic similarity of two services by taking a weighted average of the similarities of their inputs and outputs. A case study and applications show the effectiveness of our algorithm in service matching.
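The matching rule described above reduces to a weighted average of input-set and output-set similarities. A minimal sketch, assuming concept-pair similarities come from some ontology-based measure (here a placeholder lookup table) and taking set similarity as the mean of best pairwise matches; the weights are a free choice:

    def concept_sim(a, b, table):
        """Placeholder for an ontology-based concept similarity in [0, 1]."""
        return 1.0 if a == b else table.get((a, b), table.get((b, a), 0.0))

    def set_sim(req, adv, table):
        """Mean of best matches: each requested concept paired with its closest advertised one."""
        if not req:
            return 1.0
        if not adv:
            return 0.0
        return sum(max(concept_sim(r, a, table) for a in adv) for r in req) / len(req)

    def service_sim(req_in, req_out, adv_in, adv_out, table, w_in=0.5, w_out=0.5):
        """Weighted average of input and output similarities."""
        return w_in * set_sim(req_in, adv_in, table) + w_out * set_sim(req_out, adv_out, table)

    # Toy usage with a hand-made similarity table:
    table = {("City", "Location"): 0.8, ("Weather", "Forecast"): 0.7}
    print(service_sim({"City"}, {"Weather"}, {"Location"}, {"Forecast"}, table))  # 0.75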
GEO Label Web Services for Dynamic and Effective Communication of Geospatial Metadata Quality
NASA Astrophysics Data System (ADS)
Lush, Victoria; Nüst, Daniel; Bastin, Lucy; Masó, Joan; Lumsden, Jo
2014-05-01
We present demonstrations of the GEO label Web services and their integration into a prototype extension of the GEOSS portal (http://scgeoviqua.sapienzaconsulting.com/web/guest/geo_home), the GMU portal (http://gis.csiss.gmu.edu/GADMFS/) and a GeoNetwork catalog application (http://uncertdata.aston.ac.uk:8080/geonetwork/srv/eng/main.home). The GEO label is designed to communicate, and facilitate interrogation of, geospatial quality information with a view to supporting efficient and effective dataset selection on the basis of quality, trustworthiness and fitness for use. The GEO label which we propose was developed and evaluated according to a user-centred design (UCD) approach in order to maximise the likelihood of user acceptance once deployed. The resulting label is dynamically generated from producer metadata in ISO or FGDC format, and incorporates user feedback on dataset usage, ratings and discovered issues, in order to supply a highly informative summary of metadata completeness and quality. The label was easily incorporated into a community portal as part of the GEO Architecture Implementation Programme (AIP-6) and has been successfully integrated into a prototype extension of the GEOSS portal, as well as the popular metadata catalog and editor, GeoNetwork. The design of the GEO label was based on 4 user studies conducted to: (1) elicit initial user requirements; (2) investigate initial user views on the concept of a GEO label and its potential role; (3) evaluate prototype label visualizations; and (4) evaluate and validate physical GEO label prototypes. The results of these studies indicated that users and producers support the concept of a label with a drill-down interrogation facility, combining eight geospatial data informational aspects, namely: producer profile, producer comments, lineage information, standards compliance, quality information, user feedback, expert reviews, and citations information. These are delivered as eight facets of a wheel-like label, which are coloured according to metadata availability and are clickable to allow a user to engage with the original metadata and explore specific aspects in more detail. To support this graphical representation and allow for wider deployment architectures we have implemented two Web services, a PHP and a Java implementation, that generate GEO label representations by combining producer metadata (from standard catalogues or other published locations) with structured user feedback. Both services accept encoded URLs of publicly available metadata documents or metadata XML files as HTTP POST and GET requests and apply XPath and XSLT mappings to transform producer and feedback XML documents into clickable SVG GEO label representations. The label and services are underpinned by two XML-based quality models. The first is a producer model that extends ISO 19115 and 19157 to allow fuller citation of reference data, presentation of pixel- and dataset-level statistical quality information, and encoding of 'traceability' information on the lineage of an actual quality assessment. The second is a user quality model (realised as a feedback server and client) which allows reporting and query of ratings, usage reports, citations, comments and other domain knowledge. Both services are Open Source and are available on GitHub at https://github.com/lushv/geolabel-service and https://github.com/52North/GEO-label-java.
The functionality of these services can be tested using our GEO label generation demos, available online at http://www.geolabel.net/demo.html and http://geoviqua.dev.52north.org/glbservice/index.jsf.
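Since the services accept encoded URLs of metadata documents over HTTP GET, a client call is short. A sketch against the Java implementation's base URL quoted above; the exact resource path is not given in the entry, and the query-parameter name "metadata" is a hypothetical stand-in for whatever the deployed service expects:

    import requests
    from urllib.parse import quote

    # Base URL from the entry (Java implementation); resource path omitted as unknown.
    SERVICE = "http://geoviqua.dev.52north.org/glbservice"

    # The entry says the services accept *encoded* URLs of metadata documents,
    # hence the explicit quoting. "metadata" is a hypothetical parameter name.
    metadata_url = "https://example.org/dataset/iso19115.xml"
    resp = requests.get(SERVICE, params={"metadata": quote(metadata_url, safe="")}, timeout=60)
    resp.raise_for_status()

    with open("geolabel.svg", "wb") as f:
        f.write(resp.content)  # clickable SVG GEO label representation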
NASA Astrophysics Data System (ADS)
Gopu, Arvind; Hayashi, Soichi; Young, Michael D.; Harbeck, Daniel R.; Boroson, Todd; Liu, Wilson; Kotulla, Ralf; Shaw, Richard; Henschel, Robert; Rajagopal, Jayadev; Stobie, Elizabeth; Knezek, Patricia; Martin, R. Pierre; Archbold, Kevin
2014-07-01
The One Degree Imager-Portal, Pipeline, and Archive (ODI-PPA) is a web science gateway that provides astronomers with a modern web interface that acts as a single point of access to their data, and rich computational and visualization capabilities. Its goal is to support scientists in handling complex data sets, and to enhance the WIYN Observatory's scientific productivity beyond data acquisition on its 3.5m telescope. ODI-PPA is designed, with periodic user feedback, to be a compute archive with built-in frameworks including: (1) Collections, which allow an astronomer to create logical collations of data products intended for publication, further research, instructional purposes, or to execute data processing tasks; (2) Image Explorer and Source Explorer, which together enable real-time interactive visual analysis of massive astronomical data products within an HTML5-capable web browser, with overlaid standard-catalog and Source Extractor-generated source markers; and (3) a Workflow framework, which enables rapid integration of data processing pipelines on an associated compute cluster and lets users request such pipelines to be executed on their data via custom user interfaces. ODI-PPA is made up of several lightweight services connected by a message bus; the web portal is built using the Twitter/Bootstrap, AngularJS and jQuery JavaScript libraries, with backend services written in PHP (using the Zend framework) and Python; it leverages supercomputing and storage resources at Indiana University. ODI-PPA is designed to be reconfigurable for use in other science domains with large and complex datasets, including an ongoing offshoot project for electron microscopy data.
Boverhof's App Earns Honorable Mention in Amazon's Web Services Competition
Boverhof's app earned an honorable mention in a competition held by Amazon Web Services (AWS): Amazon officially announced the winners of its EC2 Spotathon on Monday.
Arnold, Roland; Goldenberg, Florian; Mewes, Hans-Werner; Rattei, Thomas
2014-01-01
The Similarity Matrix of Proteins (SIMAP, http://mips.gsf.de/simap/) database has been designed to massively accelerate computationally expensive protein sequence analysis tasks in bioinformatics. It provides pre-calculated sequence similarities interconnecting the entire known protein sequence universe, complemented by pre-calculated protein features and domains, similarity clusters and functional annotations. SIMAP covers all major public protein databases as well as many consistently re-annotated metagenomes from different repositories. As of September 2013, SIMAP contains >163 million proteins corresponding to ∼70 million non-redundant sequences. SIMAP uses the sensitive FASTA search heuristics, the Smith–Waterman alignment algorithm, the InterPro database of protein domain models and the BLAST2GO functional annotation algorithm. SIMAP assists biologists by facilitating the interactive exploration of the protein sequence universe. Web-Service and DAS interfaces allow connecting SIMAP with any other bioinformatic tool and resource. All-against-all protein sequence similarity matrices of project-specific protein collections are generated on request. Recent improvements allow SIMAP to cover the rapidly growing sequenced protein sequence universe. New Web-Service interfaces enhance the connectivity of SIMAP. Novel tools for interactive extraction of protein similarity networks have been added. Open access to SIMAP is provided through the web portal; the portal also contains instructions and links for software access and flat file downloads. PMID:24165881
Biological Web Service Repositories Review
Urdidiales-Nieto, David; Navas-Delgado, Ismael
2016-01-01
Web services play a key role in bioinformatics, enabling the integration of database access and analysis algorithms. However, Web service repositories do not usually publish information on the changes made to their registered Web services. Dynamism is directly related to the changes in the repositories (services registered or unregistered) and at the service level (annotation changes). Thus, users, software clients or workflow-based approaches lack enough relevant information to decide when they should review or re-execute a Web service or workflow to get updated or improved results. The dynamism of the repository could be a measure for workflow developers to re-check service availability and annotation changes in the services of interest to them. This paper presents a review of the most well-known Web service repositories in the life sciences, including an analysis of their dynamism. Freshness is introduced in this paper and has been used as the measure for the dynamism of these repositories. PMID:27783459
NASA Astrophysics Data System (ADS)
Du, Xiaofeng; Song, William; Munro, Malcolm
Web Services, as a new distributed system technology, have been widely adopted by industry in areas such as enterprise application integration (EAI), business process management (BPM), and virtual organisation (VO). However, the lack of semantics in the current Web Service standards has been a major barrier to service discovery and composition. In this chapter, we propose an enhanced context-based semantic service description framework (CbSSDF+) that tackles the problem and improves the flexibility of service discovery and the correctness of generated composite services. We also provide an agile transformation method to demonstrate how the various formats of Web Service descriptions on the Web can be managed and renovated step by step into CbSSDF+-based service descriptions without a large amount of engineering work. At the end of the chapter, we evaluate the applicability of the transformation method and the effectiveness of CbSSDF+ through a series of experiments.
Development of web tools to disseminate space geodesy data-related products
NASA Astrophysics Data System (ADS)
Soudarin, L.; Ferrage, P.; Mezerette, A.
2014-12-01
In order to promote the products of the DORIS system, the French Space Agency CNES has developed and implemented on the web site of the International DORIS Service (IDS) a set of plot tools to interactively build and display time series of site positions, orbit residuals and terrestrial parameters (scale, geocenter). An interactive global map is also available to select sites and to access their information. Besides the products provided by the CNES Orbitography Team and the IDS components, these tools allow comparing the time evolution of coordinates for collocated DORIS and GNSS stations, thanks to a collaboration with the Terrestrial Frame Combination Center of the International GNSS Service (IGS). The next step, currently in progress, is the creation of a database to improve the robustness and efficiency of the tools, with the objective of proposing a complete web service to foster data exchange with the other geodetic services of the International Association of Geodesy (IAG). The possibility of visualizing and comparing position time series of the four main space geodetic techniques, DORIS, GNSS, SLR and VLBI, is already under way at the French level. A dedicated version of these web tools has been developed for the French Space Geodesy Research Group (GRGS). It will give access to position time series provided by the GRGS Analysis Centers involved in DORIS, GNSS, SLR and VLBI data processing for the realization of the International Terrestrial Reference Frame. In this presentation, we describe the functionalities of these tools and address some aspects of the time series (content, format).
A Lifecycle Approach to Brokered Data Management for Hydrologic Modeling Data Using Open Standards.
NASA Astrophysics Data System (ADS)
Blodgett, D. L.; Booth, N.; Kunicki, T.; Walker, J.
2012-12-01
The U.S. Geological Survey Center for Integrated Data Analytics has formalized an information-management architecture to facilitate hydrologic modeling and subsequent decision support throughout a project's lifecycle. The architecture is based on open standards and open source software to decrease the adoption barrier and to build on existing, community-supported software. The components of this system have been developed and evaluated to support data management activities of the interagency Great Lakes Restoration Initiative, the Department of the Interior's Climate Science Centers and the WaterSmart National Water Census. Much of the research and development of this system has been in cooperation with international interoperability experiments conducted within the Open Geospatial Consortium. Community-developed standards and software, implemented to meet the unique requirements of specific disciplines, are used as a system of interoperable, discipline-specific data types and interfaces. This approach has allowed adoption of existing software that satisfies the majority of system requirements. Four major features of the system include: 1) assistance in model parameter and forcing creation from large enterprise data sources; 2) conversion of model results and calibrated parameters to standard formats, making them available via standard web services; 3) tracking a model's processes, inputs, and outputs as a cohesive metadata record, allowing provenance tracking via reference to web services; and 4) generalized decision support tools which rely on a suite of standard data types and interfaces, rather than particular manually curated model-derived datasets. Recent progress made in data and web service standards related to sensor- and/or model-derived station time series, dynamic web processing, and metadata management is central to this system's function and will be presented briefly along with a functional overview of the applications that make up the system. As the separate pieces of this system progress, they will be combined and generalized to form a sort of social network for nationally consistent hydrologic modeling.
Bare, J Christopher; Shannon, Paul T; Schmid, Amy K; Baliga, Nitin S
2007-01-01
Background Information resources on the World Wide Web play an indispensable role in modern biology. But integrating data from multiple sources is often encumbered by the need to reformat data files, convert between naming systems, or perform ongoing maintenance of local copies of public databases. Opportunities for new ways of combining and re-using data are arising as a result of the increasing use of web protocols to transmit structured data. Results The Firegoose, an extension to the Mozilla Firefox web browser, enables data transfer between web sites and desktop tools. As a component of the Gaggle integration framework, Firegoose can also exchange data with Cytoscape, the R statistical package, Multiexperiment Viewer (MeV), and several other popular desktop software tools. Firegoose adds the capability to easily use local data to query KEGG, EMBL STRING, DAVID, and other widely-used bioinformatics web sites. Query results from these web sites can be transferred to desktop tools for further analysis with a few clicks. Firegoose acquires data from the web by screen scraping, microformats, embedded XML, or web services. We define a microformat, which allows structured information compatible with the Gaggle to be embedded in HTML documents. We demonstrate the capabilities of this software by performing an analysis of the genes activated in the microbe Halobacterium salinarum NRC-1 in response to anaerobic environments. Starting with microarray data, we explore functions of differentially expressed genes by combining data from several public web resources and construct an integrated view of the cellular processes involved. Conclusion The Firegoose incorporates Mozilla Firefox into the Gaggle environment and enables interactive sharing of data between diverse web resources and desktop software tools without maintaining local copies. Additional web sites can be incorporated easily into the framework using the scripting platform of the Firefox browser. Performing data integration in the browser allows the excellent search and navigation capabilities of the browser to be used in combination with powerful desktop tools. PMID:18021453
NASA Astrophysics Data System (ADS)
Robinson, E.; Meyer, C. B.; Benedict, K. K.
2013-12-01
A critical part of effective Earth science data and information system interoperability involves collaboration across geographically and temporally distributed communities. The Federation of Earth Science Information Partners (ESIP) is a broad-based, distributed community of science, data and information technology practitioners from across science domains, economic sectors and the data lifecycle. ESIP's open, participatory structure provides a melting pot for coordinating around common areas of interest, experimenting with innovative ideas and capturing and finding best practices and lessons learned from across the network. Since much of ESIP's work is distributed, the Foundation for Earth Science was established as a non-profit home for its supportive collaboration infrastructure. The infrastructure leverages the Internet and recent advances in collaboration web services. ESIP provides neutral space for self-governed groups to emerge around common Earth science data and information issues, ebbing and flowing as the need for them arises. As a group emerges, the Foundation quickly equips the virtual workgroup with a set of 'commodity services'. These services include: web meeting technology (WebEx), a wiki and an email listserv. WebEx allows the group to work synchronously, dynamically viewing and discussing shared information in real time. The wiki is the group's primary workspace and over time creates organizational memory. The listserv provides an inclusive way to email the group and archive all messages for future reference. These three services lower the startup barrier for collaboration and enable automatic content preservation to allow for future work. While many of ESIP's consensus-building activities are discussion-based, the Foundation supports an ESIP testbed environment for exploring and evaluating prototype standards, services, protocols, and best practices. After community review of testbed proposals, the Foundation provides small seed funding and a toolbox of collaborative development resources including Amazon Web Services to quickly spin up the testbed instance and a GitHub account for maintaining testbed project code, enabling reuse. Recently, the Foundation supported development of the ESIP Commons (http://commons.esipfed.org), a Drupal-based knowledge repository for non-traditional publications to preserve community products and outcomes like white papers, posters and proceedings. The ESIP Commons adds additional structured metadata, provides attribution to contributors and allows those unfamiliar with ESIP a straightforward way to find information. The success of ESIP Federation activities is difficult to measure. The ESIP Commons is a step toward quantifying sponsor return on investment and is one dataset used in network map analysis of the ESIP community network, another success metric. Over the last 15 years, ESIP has continually grown and attracted experts in the Earth science data and informatics field, becoming a primary locus of research and development on the application and evolution of Earth science data standards and conventions. As funding agencies push toward a more collaborative approach, the lessons learned from ESIP and the collaboration services themselves are a crucial component of supporting science research.
Development of a Virtual Multidisciplinary Lung Cancer Tumor Board in a Community Setting
Stevenson, Marvaretta M.; Irwin, Tonia; Lowry, Terry; Ahmed, Maleka Z.; Walden, Thomas L.; Watson, Melanie; Sutton, Linda
2013-01-01
Purpose: Creating an effective platform for multidisciplinary tumor conferences can be challenging in the rural community setting. The Duke Cancer Network created an Internet-based platform for a multidisciplinary conference to enhance the care of patients with lung cancer. This conference incorporates providers from different physical locations within a rural community and affiliated providers from a university-based cancer center 2 hours away. An electronic Web conferencing tool connects providers aurally and visually. Methods: Conferences were set up using a commercially available Web conferencing platform. The video platform provides a secure Web site coupled with a secure teleconference platform to ensure patient confidentiality. Multiple disciplines are invited to participate, including radiology, radiation oncology, thoracic surgery, pathology, and medical oncology. Participants only need telephone access and an Internet connection to participate. Results: Patient histories and physicals are presented, and the Web conferencing platform allows radiologic and histologic images to be reviewed. Treatment plans for patients are discussed, allowing providers to coordinate care among the different subspecialties. Patients who need referral to the affiliated university-based cancer center for specialized services are identified. Pertinent treatment guidelines and journal articles are reviewed. On average, there are 10 participants, with one to two cases presented per session. Conclusion: The use of a Web conferencing platform allows subspecialty providers throughout the community and hours away to discuss lung cancer patient cases. This platform increases convenience for providers, eliminating travel to a central location. Coordination of care for patients requiring multidisciplinary care is facilitated, shortening the evaluation time before a definitive treatment plan is established. PMID:23942505
Putting FLEXPART to REST: The Provision of Atmospheric Transport Modeling Services
NASA Astrophysics Data System (ADS)
Morton, Don; Arnold, Dèlia
2015-04-01
We are developing a RESTful set of modeling services for the FLEXPART modeling system. FLEXPART (FLEXible PARTicle dispersion model) is a Lagrangian transport and dispersion model used by a growing international community. It has been used to simulate and forecast the atmospheric transport of wildfire smoke, volcanic ash and radionuclides, and may be run in backwards mode to provide information for the determination of emission sources such as nuclear emissions and greenhouse gases. This open source software is distributed in source code form, and has several compiler and library dependencies that users need to address. Although well documented, getting it compiled, set up, running, and post-processed is often tedious, making it difficult for the inexperienced or casual user. Well-designed modeling services lower the entry barrier for scientists to perform simulations, allowing them to create and execute their models from a variety of devices and programming environments. This world of Service Oriented Architectures (SOA) has progressed to the REpresentational State Transfer (REST) paradigm, in which the pervasive and mature HTTP environment is used as a foundation for providing access to model services. With such an approach, sound software engineering practices are adhered to in order to deploy service modules exhibiting very loose coupling with their clients. In short, services are accessed and controlled through the formation of properly constructed Uniform Resource Identifiers (URIs), processed in an HTTP environment. In this way, any client or combination of clients - whether a bash script, Python program, web GUI, or even the Unix command line - that can interact with an HTTP server can run the modeling environment. This loose coupling allows for the deployment of a variety of front ends, all accessing a common modeling backend system. Furthermore, it is generally accepted in the cloud computing community that RESTful designs constitute a sound approach to the successful deployment of services. Through the design of a RESTful, cloud-based modeling system, we provide ubiquitous access to FLEXPART that allows scientists to focus on modeling processes instead of tedious computational details. In this work, we describe the modeling services environment and provide examples of access via command-line, Python, and web GUI interfaces.
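A sketch of the submit-and-poll pattern such a RESTful modeling service implies. Every resource name and JSON field here is hypothetical; the entry describes the approach, not a concrete API:

    import time
    import requests

    BASE = "https://flexpart.example.org/api"   # hypothetical service root

    # Submit a run; the resource names and JSON fields are illustrative only.
    run = requests.post(f"{BASE}/runs", json={
        "species": "SO2",
        "release": {"lat": 63.6, "lon": -19.6, "start": "2010-04-14T00:00:00Z"},
        "direction": "forward",
    }, timeout=60)
    run.raise_for_status()
    run_id = run.json()["id"]

    # Poll the run resource until the model has finished, then fetch the output.
    while True:
        status = requests.get(f"{BASE}/runs/{run_id}", timeout=60).json()["status"]
        if status == "done":
            break
        time.sleep(30)

    output = requests.get(f"{BASE}/runs/{run_id}/output", timeout=60)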
The EarthServer Geology Service: web coverage services for geosciences
NASA Astrophysics Data System (ADS)
Laxton, John; Sen, Marcus; Passmore, James
2014-05-01
The EarthServer FP7 project is implementing web coverage services using the OGC WCS and WCPS standards for a range of earth science domains: cryospheric, atmospheric, oceanographic, planetary, and geological. BGS is providing the geological service (http://earthserver.bgs.ac.uk/). Geoscience has used remotely sensed data from satellites and planes for some considerable time, but other areas of the geosciences are less familiar with the use of coverage data. This is rapidly changing with the development of new sensor networks and the move from geological maps to geological spatial models. The BGS geology service is designed initially to address two coverage data use cases and three levels of data access restriction. Databases of remotely sensed data are typically very large and commonly held offline, making it time-consuming for users to assess and then download data. The service is designed to allow the spatial selection, editing and display of Landsat and aerial photographic imagery, including band selection and contrast stretching. This enables users to rapidly view data, assess its usefulness for their purposes, and then enhance and download it if it is suitable. At present the service contains six-band Landsat 7 imagery (Blue, Green, Red, NIR 1, NIR 2, MIR) and three-band false colour aerial photography (NIR, green, blue), totalling around 1 Tb. Increasingly, 3D spatial models are being produced in place of traditional geological maps. Models make explicit the spatial information that is implicit on maps and thus are seen as a better way of delivering geoscience information to non-geoscientists. However, web delivery of models, including the provision of suitable visualisation clients, has proved more challenging than delivering maps. The EarthServer geology service is delivering 35 surfaces as coverages, comprising the modelled superficial deposits of the Glasgow area. These can be viewed using a 3D web client developed in the EarthServer project by Fraunhofer. As well as remotely sensed imagery and 3D models, the geology service is also delivering DTM coverages, which can be viewed in the 3D client in conjunction with both imagery and models. The service is accessible through a web GUI which allows the imagery to be viewed against a range of background maps and DTMs, and in the 3D client; spatial selection to be carried out graphically; the results of image enhancement to be displayed; and selected data to be downloaded. The GUI also provides access to the Glasgow model in the 3D client, as well as tutorial material. In the final year of the project it is intended to increase the volume of data to 20 Tb and enhance the WCPS processing, including depth and thickness querying of 3D models. We have also investigated the use of GeoSciML, developed to describe and interchange the information on geological maps, to describe model surface coverages. EarthServer is developing a combined WCPS and XQuery query language, and we will investigate applying this to the GeoSciML-described surfaces to answer questions such as 'find all units with a predominant sand lithology within 25 m of the surface'.
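A sketch of a WCPS request against such a service, assuming a rasdaman/petascope-style KVP binding (request=ProcessCoverages) and a hypothetical coverage name; the endpoint path below is also an assumption, only the host comes from the entry:

    import requests

    # Host from the entry; the "/rasdaman/ows" path and coverage id are assumptions.
    ENDPOINT = "http://earthserver.bgs.ac.uk/rasdaman/ows"

    # WCPS: subset a (hypothetical) Landsat coverage over the Glasgow area
    # and encode the result as PNG.
    wcps = ('for c in (landsat7_glasgow) '
            'return encode(c[Lat(55.70:55.95), Long(-4.40:-4.10)], "image/png")')

    resp = requests.get(ENDPOINT, params={
        "service": "WCS",
        "version": "2.0.1",
        "request": "ProcessCoverages",
        "query": wcps,
    }, timeout=120)
    resp.raise_for_status()
    with open("glasgow_subset.png", "wb") as f:
        f.write(resp.content)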
Web service discovery among large service pools utilising semantic similarity and clustering
NASA Astrophysics Data System (ADS)
Chen, Fuzan; Li, Minqiang; Wu, Harris; Xie, Lingli
2017-03-01
With the rapid development of electronic business, Web services have attracted much attention in recent years. Enterprises can combine individual Web services to provide new value-added services. An emerging challenge is the timely discovery of close matches to service requests among large service pools. In this study, we first define a new semantic similarity measure combining functional similarity and process similarity. We then present a service discovery mechanism that utilises the new semantic similarity measure for service matching. All the published Web services are pre-grouped into functional clusters prior to the matching process. For a user's service request, the discovery mechanism first identifies matching services clusters and then identifies the best matching Web services within these matching clusters. Experimental results show that the proposed semantic discovery mechanism performs better than a conventional lexical similarity-based mechanism.
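A minimal sketch of the two-stage pattern this entry describes: advertised services are pre-grouped into functional clusters offline, and a request is routed first to a cluster and then ranked only within it. TF-IDF vectors and k-means stand in here for the paper's own semantic similarity measure:

    import numpy as np
    from sklearn.cluster import KMeans
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.metrics.pairwise import cosine_similarity

    # Toy functional descriptions of advertised services.
    services = [
        "currency conversion exchange rate",
        "weather forecast temperature city",
        "hotel booking reservation city",
        "stock price quote exchange",
    ]

    vec = TfidfVectorizer()
    X = vec.fit_transform(services)

    # Offline step: pre-group services into functional clusters.
    km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X)

    # Online step: route the request to its nearest cluster, then rank inside it.
    request = vec.transform(["convert currency to euros"])
    cluster = km.predict(request)[0]
    members = np.where(km.labels_ == cluster)[0]
    scores = cosine_similarity(request, X[members]).ravel()
    for i in members[np.argsort(-scores)]:
        print(services[i])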
A verification strategy for web services composition using enhanced stacked automata model.
Nagamouttou, Danapaquiame; Egambaram, Ilavarasan; Krishnan, Muthumanickam; Narasingam, Poonkuzhali
2015-01-01
Currently, Service-Oriented Architecture (SOA) is becoming the most popular software architecture for contemporary enterprise applications, and one crucial technique for its implementation is web services. An individual service offered by a service provider may represent only limited business functionality; however, by composing individual services from different service providers, a composite service describing the intact business process of an enterprise can be made. Many new standards have been defined to address the web service composition problem, notably the Business Process Execution Language (BPEL). BPEL provides an Extensible Markup Language (XML) specification language for defining and implementing business process workflows for web services. The problem with most realistic approaches to service composition is the verification of the composed web services, which must rely on formal verification methods to ensure the correctness of the composed services. A few research works on the verification of web services have been carried out in the literature, mostly for deterministic systems. Moreover, the existing models did not address verification properties such as dead transitions, deadlock, reachability and safety. In this paper, a new model to verify composed web services using an Enhanced Stacked Automata Model (ESAM) is proposed. The correctness properties of the non-deterministic system are evaluated based on properties such as dead transitions, deadlock, safety, liveness and reachability. Initially, web services are composed using the Business Process Execution Language for Web Services (BPEL4WS), converted into an ESAM (a combination of Muller Automata (MA) and Pushdown Automata (PDA)), and then transformed into Promela, the input language of the Simple Promela Interpreter (SPIN) tool. The model is verified using the SPIN tool, and the results revealed better performance in finding dead transitions and deadlocks compared to the existing models.
Pragmatic Computing - A Semiotic Perspective to Web Services
NASA Astrophysics Data System (ADS)
Liu, Kecheng
The web seems to have evolved from a syntactic web, through a semantic web, to a pragmatic web. This evolution conforms to the study of information and technology from the theory of semiotics. Pragmatics, concerned with the use of information in relation to context and intended purposes, is extremely important in web services and applications. Much research in pragmatics has been carried out, but at the same time, attempts and solutions have led to further questions. After reviewing the current work on the pragmatic web, the paper presents a semiotic approach to web services, particularly for request decomposition and service aggregation.
Development of a web-based intervention for the indicated prevention of depression.
Kelders, Saskia M; Pots, Wendy T M; Oskam, Maarten Jan; Bohlmeijer, Ernst T; van Gemert-Pijnen, Julia E W C
2013-02-20
To reduce the large public health burden of the high prevalence of depression, preventive interventions targeted at people at risk are essential and can be cost-effective. Web-based interventions are able to provide this care, but there is no agreement on how to best develop these applications and often the technology is seen as a given. This seems to be one of the main reasons that web-based interventions do not reach their full potential. The current study describes the development of a web-based intervention for the indicated prevention of depression, employing the CeHRes (Center for eHealth Research and Disease Management) roadmap. The goals are to create a user-friendly application which fits the values of the stakeholders and to evaluate the process of development. The employed methods are a literature scan and discussion in the contextual inquiry; interviews, rapid prototyping and a requirement session in the value specification stage; and user-based usability evaluation, expert-based usability inspection and a requirement session in the design stage. The contextual inquiry indicated that there is a need for easily accessible interventions for the indicated prevention of depression and web-based interventions are seen as potentially meeting this need. The value specification stage yielded expected needs of potential participants, comments on the usefulness of the proposed features and comments on two proposed designs of the web-based intervention. The design stage yielded valuable comments on the system, content and service of the web-based intervention. Overall, we found that by developing the technology, we successfully (re)designed the system, content and service of the web-based intervention to match the values of stakeholders. This study has shown the importance of a structured development process of a web-based intervention for the indicated prevention of depression because: (1) it allows the development team to clarify the needs that have to be met for the intervention to be of use to the target audience; and (2) it yields feedback on the design of the application that is broader than color and buttons, but encompasses comments on the quality of the service that the application offers.
Ontology-based, multi-agent support of production management
NASA Astrophysics Data System (ADS)
Meridou, Despina T.; Inden, Udo; Rückemann, Claus-Peter; Patrikakis, Charalampos Z.; Kaklamani, Dimitra-Theodora I.; Venieris, Iakovos S.
2016-06-01
Over recent years, reported incidents of failed aircraft ramp-ups or delayed small-lot production have increased substantially. In this paper, we present a production management platform that combines agent-based techniques with the Service Oriented Architecture paradigm. The platform takes advantage of the functionality offered by the semantic web language OWL, which allows the users and services of the platform to speak a common language and, at the same time, facilitates risk management and decision making.
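As a hedged illustration of how platform components might load and query a shared OWL vocabulary, the following sketch uses the third-party owlready2 library (the ontology IRI and the RiskFactor class are placeholders, not the paper's actual model):

    from owlready2 import get_ontology

    # Load a shared production-management ontology (the IRI is a placeholder).
    onto = get_ontology("http://example.org/production.owl").load()

    # Agents and services agree on terms by querying the same class hierarchy.
    for cls in onto.classes():
        print(cls.name, "subclass of", [p.name for p in cls.is_a if hasattr(p, "name")])

    # Look up individuals of a (hypothetical) RiskFactor class for decision support.
    risk_cls = onto.search_one(iri="*RiskFactor")
    if risk_cls is not None:
        print(risk_cls.instances())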
2017-01-01
Background The decision around whether to attend breast cancer screening can often involve making sense of confusing and contradictory information on its risks and benefits. The Word of Mouth Mammogram e-Network (WoMMeN) project was established to create a Web-based resource to support decision making regarding breast cancer screening. This paper presents data from our user-centered approach in engaging stakeholders (both health professionals and service users) in the design of this Web-based resource. Our novel approach involved creating a user design group within Facebook to allow them access to ongoing discussion between researchers, radiographers, and existing and potential service users. Objective This study had two objectives. The first was to examine the utility of an online user design group for generating insight for the creation of Web-based health resources. We sought to explore the advantages and limitations of this approach. The second objective was to analyze what women want from a Web-based resource for breast cancer screening. Methods We recruited a user design group on Facebook and conducted a survey within the group, asking questions about design considerations for a Web-based breast cancer screening hub. Although the membership of the Facebook group varied over time, there were 71 members in the Facebook group at the end point of analysis. We next conducted a framework analysis on 70 threads from Facebook and a thematic analysis on the 23 survey responses. We focused additionally on how the themes were discussed by the different stakeholders within the context of the design group. Results Two major themes were found across both the Facebook discussion and the survey data: (1) the power of information and (2) the hub as a place for communication and support. Information was considered as empowering but also recognized as threatening. Communication and the sharing of experiences were deemed important, but there was also recognition of potential miscommunication within online discussion. Health professionals and service users expressed the same broad concerns, but there were subtle differences in their opinions. Importantly, the themes were triangulated between the Facebook discussions and the survey data, supporting the validity of an online user design group. Conclusions Online user design groups afford a useful method for understanding stakeholder needs. In contrast to focus groups, they afford access to users from diverse geographical locations and traverse time constraints, allowing more follow-ups to responses. The use of Facebook provides a familiar and naturalistic setting for discussion. Although we acknowledge the limitations in the sample, this approach has allowed us to understand the views of stakeholders in the user-centered design of the WoMMeN hub for breast cancer screening. PMID:29079555
Towards Semantic Web Services on Large, Multi-Dimensional Coverages
NASA Astrophysics Data System (ADS)
Baumann, P.
2009-04-01
Observed and simulated data in the Earth Sciences often come as coverages, the general term for space-time varying phenomena as set forth by standardization bodies like the Open Geospatial Consortium (OGC) and ISO. Among such data are 1-D time series, 2-D surface data, 3-D surface data time series as well as x/y/z geophysical and oceanographic data, and 4-D metocean simulation results. With increasing dimensionality the data sizes grow exponentially, up to Petabyte object sizes. Open standards for exploiting coverage archives over the Web are available to a varying extent. The OGC Web Coverage Service (WCS) standard defines basic extraction operations: spatio-temporal and band subsetting, scaling, reprojection, and data format encoding of the result - a simple interoperable interface for coverage access. More processing functionality is available with products like Matlab, Grid-type interfaces, and the OGC Web Processing Service (WPS). However, these often lack properties known to be advantageous from databases: declarativeness (describe results rather than algorithms), safety in evaluation (no request can keep a server busy indefinitely), and optimizability (enable the server to rearrange the request so as to produce the same result faster). WPS defines a geo-enabled SOAP interface for remote procedure calls. This makes it possible to webify any program, but does not provide semantic interoperability: a function is identified only by its name and parameters, while the semantics is encoded only in the (human-readable) title and abstract. Hence, another desirable property is missing, namely explicit semantics allowing for machine-machine communication and reasoning à la Semantic Web. The OGC Web Coverage Processing Service (WCPS) language, adopted as an international standard by OGC in December 2008, defines a flexible interface for the navigation, extraction, and ad-hoc analysis of large, multi-dimensional raster coverages. It is abstract in that it does not anticipate any particular protocol. One such protocol is given by the OGC Web Coverage Service (WCS) Processing Extension standard, which ties WCPS into WCS. Another protocol, which makes WCPS an OGC Web Processing Service (WPS) profile, is under preparation. Thereby, WCPS bridges WCS and WPS. The conceptual model of WCPS relies on the coverage model of WCS, which in turn is based on ISO 19123. WCS currently addresses raster-type coverages, where a coverage is seen as a function mapping points from a spatio-temporal extent (its domain) into values of some cell type (its range). A retrievable coverage has an associated identifier, the CRSs supported and, for each range field (aka band, channel), the applicable interpolation methods. The WCPS language offers access to one or several such coverages via a functional, side-effect-free language. The following example, which derives the NDVI (Normalized Difference Vegetation Index) from given coverages C1, C2, and C3 within the regions identified by the binary mask R, illustrates the language concept: for c in ( C1, C2, C3 ), r in ( R ) return encode( (char) (c.nir - c.red) / (c.nir + c.red), "HDF-EOS" ). The result is a list of three HDF-EOS encoded images containing masked NDVI values. Note that the same request can operate on coverages of any dimensionality. The expressive power of WCPS includes statistics, image, and signal processing up to, but excluding, recursion, in order to maintain safe evaluation.
As both syntax and semantics of any WCPS expression are well known, the language is Semantic Web ready: clients can construct WCPS requests on the fly, servers can optimize such requests (this has been investigated extensively with the rasdaman raster database system) and automatically distribute them for processing in a WCPS-enabled computing cloud. The WCPS Reference Implementation is being finalized now that the standard is stable; it will be released as open source once ready. Among the future tasks is the extension of WCPS to general meshes, in synchronization with the WCS standard. In this talk WCPS is presented in the context of OGC standardization. The author is co-chair of OGC's WCS Working Group (WG) and Coverages WG.
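A hedged sketch of submitting a WCPS expression over HTTP: the endpoint URL, protocol version and the KVP parameter names are assumptions patterned on typical rasdaman/petascope deployments, not prescribed by the abstract:

    import requests

    # Hypothetical WCS endpoint exposing the WCPS ProcessCoverages extension.
    ENDPOINT = "http://example.org/rasdaman/ows"

    wcps_query = """
    for c in ( C1 )
    return encode( (char) (c.nir - c.red) / (c.nir + c.red), "HDF-EOS" )
    """

    resp = requests.get(ENDPOINT, params={
        "service": "WCS",
        "version": "2.0.1",
        "request": "ProcessCoverages",   # WCS Processing Extension operation
        "query": wcps_query,
    })
    resp.raise_for_status()
    with open("ndvi.hdf", "wb") as f:
        f.write(resp.content)            # server-side-computed NDVI coverage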
Web-based healthcare hand drawing management system.
Hsieh, Sheau-Ling; Weng, Yung-Ching; Chen, Chi-Huang; Hsu, Kai-Ping; Lin, Jeng-Wei; Lai, Feipei
2010-01-01
The paper addresses the architecture and implementation of a Medical Hand Drawing Management System. In the system, we developed four modules: a hand drawing management module; a patient medical records query module; a hand drawing editing and upload module; and a hand drawing query module. The system adapts Windows-based applications and encompasses web pages via the ASP.NET hosting mechanism on web services platforms. The hand drawings are implemented as files stored on an FTP server. The file names, with associated data such as patient identification, drawing physician, and access rights, are stored in a database. The modules can be conveniently embedded and integrated into any system. The system therefore possesses the hand drawing features needed to support daily medical operations and effectively improves healthcare quality. Moreover, the system includes printing capability to achieve a complete, computerized medical document process. In summary, the system allows web-based applications to facilitate graphic processes for healthcare operations.
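A minimal sketch of the storage pattern described above - drawing files on an FTP server, file names plus associated metadata in a database (host, credentials, schema and fields are all illustrative):

    import sqlite3
    from ftplib import FTP

    def store_drawing(path: str, patient_id: str, physician: str, access: str) -> None:
        # 1) Upload the drawing file to the FTP server (host/credentials are placeholders).
        with FTP("ftp.example-hospital.org") as ftp, open(path, "rb") as fh:
            ftp.login(user="drawings", passwd="secret")
            ftp.storbinary(f"STOR {path}", fh)

        # 2) Record the file name and its associated metadata in the database.
        con = sqlite3.connect("drawings.db")
        con.execute("""CREATE TABLE IF NOT EXISTS drawings
                       (file TEXT, patient_id TEXT, physician TEXT, access TEXT)""")
        con.execute("INSERT INTO drawings VALUES (?, ?, ?, ?)",
                    (path, patient_id, physician, access))
        con.commit()
        con.close()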
The Design of Modular Web-Based Collaboration
NASA Astrophysics Data System (ADS)
Intapong, Ploypailin; Settapat, Sittapong; Kaewkamnerdpong, Boonserm; Achalakul, Tiranee
Online collaborative systems are popular communication channels, as they allow people from various disciplines to interact and collaborate with ease. The systems provide communication tools and services that can be integrated on the web; consequently, they are convenient to use and easy to install. Nevertheless, most currently available systems are designed according to specific requirements and cannot be straightforwardly integrated into various applications. This paper provides the design of a new collaborative platform, which is component-based and re-configurable, called Modular Web-based Collaboration (MWC). MWC shares the same concepts as computer-supported collaborative work (CSCW) and computer-supported collaborative learning (CSCL), but it provides configurable tools for online collaboration. Each tool module can be integrated into users' web applications freely and easily. This makes the collaborative system flexible, adaptable and suitable for online collaboration.
QoS measurement of workflow-based web service compositions using Colored Petri net.
Nematzadeh, Hossein; Motameni, Homayun; Mohamad, Radziah; Nematzadeh, Zahra
2014-01-01
Workflow-based web service composition (WB-WSC) is one of the main composition categories in service-oriented architecture (SOA). Eflow, the polymorphic process model (PPM), and the business process execution language (BPEL) are the main techniques in the WB-WSC category. With the maturing of web services, measuring the quality of composite web services developed with different techniques has become one of the most important challenges in today's web environments. Businesses should try to provide good quality, with respect to customers' requirements, in a composed web service. Thus, quality of service (QoS), which refers to nonfunctional parameters, is important to measure, so that the quality degree of a given web service composition can be determined. This paper seeks a deterministic analytical method for dependability and performance measurement using Colored Petri nets (CPN) with explicit routing constructs and the application of probability theory. A computer tool called WSET was also developed for modeling and supporting QoS measurement through simulation.
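The analytical flavour of such QoS measurement can be illustrated with the standard aggregation rules for workflow constructs (a sketch under the usual independence assumptions; the paper's CPN-based method is considerably richer):

    from math import prod

    # Classic QoS aggregation for workflow constructs (independence assumed).
    def sequence_qos(services):
        """Sequential composition: times add, reliabilities multiply."""
        return {"time": sum(s["time"] for s in services),
                "reliability": prod(s["reliability"] for s in services)}

    def parallel_qos(branches):
        """AND-split/AND-join: wait for the slowest branch; all must succeed."""
        return {"time": max(b["time"] for b in branches),
                "reliability": prod(b["reliability"] for b in branches)}

    pipeline = sequence_qos([{"time": 120, "reliability": 0.99},
                             {"time": 250, "reliability": 0.95}])
    print(pipeline)   # {'time': 370, 'reliability': 0.9405}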
Mrozek, Dariusz; Małysiak-Mrozek, Bożena; Siążnik, Artur
2013-03-01
Due to the growing number of biomedical entries in data repositories of the National Center for Biotechnology Information (NCBI), it is difficult to collect, manage and process all of these entries in one place by third-party software developers without significant investment in hardware and software infrastructure, its maintenance and administration. Web services allow development of software applications that integrate in one place the functionality and processing logic of distributed software components, without integrating the components themselves and without integrating the resources to which they have access. This is achieved by appropriate orchestration or choreography of available Web services and their shared functions. After the successful application of Web services in the business sector, this technology can now be used to build composite software tools that are oriented towards biomedical data processing. We have developed a new tool for efficient and dynamic data exploration in GenBank and other NCBI databases. A dedicated search GenBank system makes use of NCBI Web services and a package of Entrez Programming Utilities (eUtils) in order to provide extended searching capabilities in NCBI data repositories. In search GenBank users can use one of three exploration paths: simple data searching based on the specified user's query, advanced data searching based on the specified user's query, and advanced data exploration with the use of macros. search GenBank orchestrates calls of particular tools available through the NCBI Web service providing the requested functionality, while users interactively browse selected records in search GenBank and traverse between NCBI databases using available links. On the other hand, by building macros in the advanced data exploration mode, users create choreographies of eUtils calls, which can lead to the automatic discovery of related data in the specified databases. search GenBank extends the standard capabilities of the NCBI Entrez search engine in querying biomedical databases. The possibility of creating and saving macros in search GenBank is a unique feature and has great potential. The potential will further grow in the future with the increasing density of networks of relationships between data stored in particular databases. search GenBank is available for public use at http://sgb.biotools.pl/.
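The eUtils calls that search GenBank orchestrates can also be issued directly; a minimal sketch of the documented esearch/efetch pattern follows (the query term is arbitrary, and the tool's macro layer is of course not reproduced):

    import requests

    EUTILS = "https://eutils.ncbi.nlm.nih.gov/entrez/eutils"

    # 1) esearch: find record IDs in a chosen NCBI database.
    ids = requests.get(f"{EUTILS}/esearch.fcgi", params={
        "db": "nucleotide", "term": "BRCA1[Gene] AND human[Organism]",
        "retmax": 3, "retmode": "json",
    }).json()["esearchresult"]["idlist"]

    # 2) efetch: retrieve the records themselves (here as GenBank flat files).
    records = requests.get(f"{EUTILS}/efetch.fcgi", params={
        "db": "nucleotide", "id": ",".join(ids),
        "rettype": "gb", "retmode": "text",
    }).text
    print(records[:500])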
Multigraph: Reusable Interactive Data Graphs
NASA Astrophysics Data System (ADS)
Phillips, M. B.
2010-12-01
There are surprisingly few good software tools available for presenting time series data on the internet. The most common practice is to use a desktop program such as Excel or Matlab to save a graph as an image which can be included in a web page like any other image. This disconnects the graph from the data in a way that makes updating a graph with new data a cumbersome manual process, and it limits the user to one particular view of the data. The Multigraph project defines an XML format for describing interactive data graphs, and software tools for creating and rendering those graphs in web pages and other internet connected applications. Viewing a Multigraph graph is extremely simple and intuitive, and requires no instructions; the user can pan and zoom by clicking and dragging, in a familiar "Google Maps" kind of way. Creating a new graph for inclusion in a web page involves writing a simple XML configuration file. Multigraph can read data in a variety of formats, and can display data from a web service, allowing users to "surf" through large data sets, downloading only those parts of the data that are needed for display. The Multigraph XML format, or "MUGL" for short, provides a concise description of the visual properties of a graph, such as axes, plot styles, data sources, labels, etc., as well as interactivity properties such as how and whether the user can pan or zoom along each axis. Multigraph reads a file in this format, draws the described graph, and allows the user to interact with it. Multigraph software currently includes a Flash application for embedding graphs in web pages, a Flex component for embedding graphs in larger Flex/Flash applications, and a plugin for creating graphs in the WordPress content management system. Plans for the future include a Java version for desktop viewing and editing, a command line version for batch and server side rendering, and possibly Android and iPhone versions. Multigraph is currently in use on several web sites including the US Drought Portal (www.drought.gov), the NOAA Climate Services Portal (www.climate.gov), the Climate Reference Network (www.ncdc.noaa.gov/crn), NCDC's State of the Climate Report (www.ncdc.noaa.gov/sotc), and the US Forest Service's Forest Change Assessment Viewer (ews.forestthreats.org/NPDE/NPDE.html). More information about Multigraph is available from the web site www.multigraph.org.
Interactive Multigraph Display of Real Time Weather Data
NASA Astrophysics Data System (ADS)
Qin, Rufu; Lin, Liangzhao
2017-06-01
Coastal seiches have become an increasingly important issue in coastal science and present many challenges, particularly when attempting to provide warning services. This paper presents the methodologies, techniques and integrated services adopted for the design and implementation of a Seiches Monitoring and Forecasting Integration Framework (SMAF-IF). SMAF-IF is an integrated system with different types of sensors and numerical models, and incorporates Geographic Information System (GIS) and web techniques; it focuses on coastal seiche event detection and early warning in the North Jiangsu shoal, China. The in situ sensors perform automatic and continuous monitoring of the marine environment status, and the numerical models provide the meteorological and physical oceanographic parameter estimates. Model output processing software was developed in the C# language using ArcGIS Engine functions, providing the capability of automatically generating visualization maps and warning information. Leveraging the ArcGIS Flex API and ASP.NET web services, a web-based GIS framework was designed to facilitate quasi-real-time data access, interactive visualization and analysis, and the provision of early warning services for end users. The integrated framework proposed in this study enables decision-makers and the public to respond quickly to emergency coastal seiche events and allows easy adaptation to other regional and scientific domains related to real-time monitoring and forecasting.
EnviroAtlas -Pittsburgh, PA- One Meter Resolution Urban Land Cover Data (2010) Web Service
This EnviroAtlas web service supports research and online mapping activities related to EnviroAtlas (https://www.epa.gov/enviroatlas). The EnviroAtlas Pittsburgh, PA land cover map was generated from United States Department of Agriculture (USDA) National Agricultural Imagery Program (NAIP) four band (red, green, blue, and near infrared) aerial photography at 1 m spatial resolution. Imagery was collected on multiple dates in June 2010. Five land cover classes were mapped: water, impervious surfaces, soil and barren land, trees and forest, and grass and herbaceous non-woody vegetation. An accuracy assessment of 500 completely random and 81 stratified random points yielded an overall accuracy of 86.57 percent. The area mapped is defined by the US Census Bureau's 2010 Urban Statistical Area for Pittsburgh, PA. This dataset was produced by the US EPA to support research and online mapping activities related to EnviroAtlas. EnviroAtlas (https://www.epa.gov/enviroatlas) allows the user to interact with a web-based, easy-to-use, mapping application to view and analyze multiple ecosystem services for the contiguous United States. The dataset is available as downloadable data (https://edg.epa.gov/data/Public/ORD/EnviroAtlas) or as an EnviroAtlas map service. Additional descriptive information about each attribute in this dataset can be found in its associated EnviroAtlas Fact Sheet (https://www.epa.gov/enviroatlas/enviroatlas-fact-sheets).
EnviroAtlas -- Austin, TX -- One Meter Resolution Urban Land Cover Data (2010) Web Service
This EnviroAtlas web service supports research and online mapping activities related to EnviroAtlas (https://www.epa.gov/enviroatlas). The Austin, TX EnviroAtlas One Meter-scale Urban Land Cover (MULC) Data were generated from United States Department of Agriculture (USDA) National Agricultural Imagery Program (NAIP) four band (red, green, blue, and near infrared) aerial photography at 1 m spatial resolution from multiple dates in May 2010. Six land cover classes were mapped: water, impervious surfaces, soil and barren land, trees, grass-herbaceous non-woody vegetation, and agriculture. An accuracy assessment of 600 completely random and 55 stratified random photo interpreted reference points yielded an overall User's fuzzy accuracy of 87 percent. The area mapped is the US Census Bureau's 2010 Urban Statistical Area for Austin, TX plus a 1 km buffer. This dataset was produced by the US EPA to support research and online mapping activities related to EnviroAtlas. EnviroAtlas (https://www.epa.gov/enviroatlas) allows the user to interact with a web-based, easy-to-use, mapping application to view and analyze multiple ecosystem services for the contiguous United States. The dataset is available as downloadable data (https://edg.epa.gov/data/Public/ORD/EnviroAtlas) or as an EnviroAtlas map service. Additional descriptive information about each attribute in this dataset can be found in its associated EnviroAtlas Fact Sheet (https://www.epa.gov/enviroatlas/enviroatlas-fact-sheets).
Advances on Sensor Web for Internet of Things
NASA Astrophysics Data System (ADS)
Liang, S.; Bermudez, L. E.; Huang, C.; Jazayeri, M.; Khalafbeigi, T.
2013-12-01
'In much the same way that HTML and HTTP enabled the WWW, the Open Geospatial Consortium (OGC) Sensor Web Enablement (SWE), envisioned in 2001 [1], will allow sensor webs to become a reality.' Due to the large number of sensor manufacturers and differing accompanying protocols, integrating diverse sensors into observation systems is not a simple task. A coherent infrastructure is needed to treat sensors in an interoperable, platform-independent and uniform way. SWE standardizes web service interfaces, sensor descriptions and data encodings as building blocks for a Sensor Web. SWE standards are now mature specifications (version 2.0) with approved OGC compliance test suites and tens of independent implementations. Many earth and space science organizations and government agencies are using the SWE standards to publish and share their sensors and observations. While SWE has been demonstrated to be very effective for scientific sensors, its complexity and computational overhead may not be suitable for resource-constrained tiny sensors. In June 2012, a new OGC Standards Working Group (SWG) was formed, called the Sensor Web Interface for Internet of Things (SWE-IoT) SWG. This SWG focuses on developing one or more OGC standards for resource-constrained sensors and actuators (e.g., Internet of Things devices) while leveraging the existing OGC SWE standards. In the near future, billions to trillions of small sensors and actuators will be embedded in real-world objects and connected to the Internet, facilitating a concept called the Internet of Things (IoT). By populating our environment with real-world sensor-based devices, the IoT is opening the door to exciting possibilities for a variety of application domains, such as environmental monitoring, transportation and logistics, urban informatics, smart cities, as well as personal and social applications. The current SWE-IoT development aims at modeling the IoT components and defining a standard web service that makes the observations captured by IoT devices easily accessible and allows users to task the actuators on the IoT devices. The SWE-IoT model links things with sensors and reuses the OGC Observations and Measurements (O&M) model to link sensors with features of interest and observed properties. Unlike most SWE standards, SWE-IoT defines a RESTful web interface for users to perform CRUD (i.e., create, read, update, and delete) functions on resources, including Things, Sensors, Actuators, Observations, Tasks, etc. Inspired by the OASIS Open Data Protocol (OData), the SWE-IoT web service provides multi-faceted query, which means that users can query different entity collections and link from one entity to other related entities. This presentation will introduce the latest developments of the OGC SWE-IoT standards. Potential applications and implications in Earth and space science will also be discussed. [1] Mike Botts, Sensor Web Enablement White Paper, Open GIS Consortium, Inc. 2002
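To make the RESTful, OData-flavoured access pattern concrete, here is a hedged sketch of the kind of requests such a service accepts; the resource paths and query options are modelled on the later OGC SensorThings API, and the exact SWE-IoT draft syntax may differ:

    import requests

    BASE = "http://example.org/swe-iot/v1.0"   # hypothetical service root

    # Read: list Things, expanding each Thing's linked datastreams.
    things = requests.get(f"{BASE}/Things", params={"$expand": "Datastreams"}).json()

    # Multi-faceted query: navigate from one entity to its related observations.
    obs = requests.get(f"{BASE}/Datastreams(1)/Observations",
                       params={"$filter": "result gt 20", "$top": 10}).json()

    # Create: register a new observation (CRUD's "C") with a JSON payload.
    new_obs = {"result": 21.5, "phenomenonTime": "2013-12-01T10:00:00Z"}
    requests.post(f"{BASE}/Datastreams(1)/Observations", json=new_obs)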
OGC and Grid Interoperability in enviroGRIDS Project
NASA Astrophysics Data System (ADS)
Gorgan, Dorian; Rodila, Denisa; Bacu, Victor; Giuliani, Gregory; Ray, Nicolas
2010-05-01
EnviroGRIDS (Black Sea Catchment Observation and Assessment System supporting Sustainable Development) [1] is a 4-year FP7 project aiming to address the subjects of ecologically unsustainable development and inadequate resource management. The project develops a Spatial Data Infrastructure for the Black Sea Catchment region. Geospatial technologies offer very specialized functionality for Earth Science oriented applications, as does the Grid-oriented technology that is able to support distributed and parallel processing. One challenge of the enviroGRIDS project is the interoperability between geospatial and Grid infrastructures, providing the basic and extended features of both technologies. Geospatial interoperability technology has been promoted as a way of dealing with large volumes of geospatial data in distributed environments through the development of interoperable Web service specifications proposed by the Open Geospatial Consortium (OGC), with applications spread across multiple fields but especially in Earth observation research. Due to the huge volumes of data available in the geospatial domain and the additional issues introduced (data management, secure data transfer, data distribution and data computation), the need for an infrastructure capable of managing all those problems becomes an important aspect. The Grid promotes and facilitates secure interoperation of heterogeneous distributed geospatial data within a distributed environment, supports the creation and management of large distributed computational jobs, and assures a security level for communication and transfer of messages based on certificates. This presentation analyses and discusses the most significant use cases for enabling OGC Web services interoperability with the Grid environment, and focuses on the description and implementation of the most promising one. In these use cases we give special attention to issues such as: the relations between the computational Grid and the OGC Web service protocols; the advantages offered by the Grid technology, such as providing secure interoperability between distributed geospatial resources; and the issues introduced by the integration of distributed geospatial data in a secure environment: data and service discovery, management, access and computation. The enviroGRIDS project proposes a new architecture which allows a flexible and scalable approach for integrating the geospatial domain, represented by the OGC Web services, with the Grid domain, represented by the gLite middleware. The parallelism offered by the Grid technology is discussed and explored at the data level, management level and computation level. The analysis is carried out for OGC Web service interoperability in general, but specific details are given for the Web Map Service (WMS), Web Feature Service (WFS), Web Coverage Service (WCS), Web Processing Service (WPS) and Catalogue Service for the Web (CSW). Issues regarding the mapping and interoperability between the OGC and Grid standards and protocols are analyzed, as they are the basis for solving the communication problems between the two environments, Grid and geospatial. The presentation mainly highlights how the Grid environment and Grid application capabilities can be extended and utilized in geospatial interoperability.
Interoperability between geospatial and Grid infrastructures provides features such as specific complex geospatial functionality together with the high-power computation and security of the Grid, high spatial model resolution and wide geographical coverage, and flexible combination and interoperability of geographical models. In accordance with Service Oriented Architecture concepts and the requirements of interoperability between geospatial and Grid infrastructures, each main functionality is visible from the enviroGRIDS Portal and, consequently, from end-user applications such as Decision Maker/Citizen oriented Applications. The enviroGRIDS portal is the single entry point for users into the system, and the portal presents a uniform graphical user interface style. Main reference for further information: [1] enviroGRIDS Project, http://www.envirogrids.net/
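Of the OGC services listed above, WMS offers the simplest illustration of the KVP request style that has to be mapped onto Grid processing; a sketch of a standard GetMap call follows (the endpoint and layer name are placeholders):

    import requests

    # Standard WMS 1.3.0 GetMap request (endpoint and layer are placeholders).
    params = {
        "service": "WMS", "version": "1.3.0", "request": "GetMap",
        "layers": "black_sea_catchment", "styles": "",
        "crs": "EPSG:4326", "bbox": "40.0,26.0,50.0,42.0",   # lat/lon bounds
        "width": 800, "height": 600, "format": "image/png",
    }
    resp = requests.get("http://example.org/geoserver/wms", params=params)
    resp.raise_for_status()
    with open("map.png", "wb") as f:
        f.write(resp.content)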
User Needs of Digital Service Web Portals: A Case Study
ERIC Educational Resources Information Center
Heo, Misook; Song, Jung-Sook; Seol, Moon-Won
2013-01-01
The authors examined the needs of digital information service web portal users. More specifically, the needs of Korean cultural portal users were examined as a case study. The conceptual framework of a web-based portal is that it is a complex, web-based service application with characteristics of information systems and service agents. In…
Compression-based aggregation model for medical web services.
Al-Shammary, Dhiah; Khalil, Ibrahim
2010-01-01
Many organizations, such as hospitals, have adopted Cloud Web services in deploying their network services to avoid investing heavily in computing infrastructure. SOAP (Simple Object Access Protocol), an XML-based protocol, is the basic communication protocol of Cloud Web services. Web services often suffer congestion and bottlenecks as a result of the high network traffic caused by the large XML overhead. At the same time, the massive load on Cloud Web services, in terms of the large volume of client requests, has resulted in the same problem. In this paper, two XML-aware aggregation techniques based on compression concepts are proposed in order to aggregate medical Web messages and achieve higher message size reduction.
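A toy sketch of the underlying idea - bundling several similar SOAP messages and compressing the bundle so that the redundant XML structure is encoded once (generic zlib stands in for the paper's XML-aware techniques, which exploit message similarity more cleverly):

    import zlib

    soap_messages = [
        f"<soap:Envelope><soap:Body><getRecord><id>{i}</id></getRecord>"
        f"</soap:Body></soap:Envelope>"
        for i in range(100)
    ]

    raw = "\n".join(soap_messages).encode()
    packed = zlib.compress(raw, 9)

    print(f"raw: {len(raw)} bytes, aggregated+compressed: {len(packed)} bytes, "
          f"ratio: {len(raw) / len(packed):.1f}x")
    # The highly repetitive envelope/tag structure compresses well, which is
    # exactly the redundancy XML-aware aggregation schemes are designed to exploit.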
Osz, Ágnes; Pongor, Lorinc Sándor; Szirmai, Danuta; Gyorffy, Balázs
2017-12-08
The long-term availability of online Web services is of utmost importance to ensure the reproducibility of analytical results. However, because of a lack of maintenance following acceptance, many servers become unavailable after a short period of time. Our aim was to monitor the accessibility and the decay rate of published Web services, as well as to determine the factors underlying these trends. We searched PubMed to identify publications containing Web server-related terms published between 1994 and 2017. Automatic and manual screening was used to check the status of each Web service. Kruskal-Wallis, Mann-Whitney and Chi-square tests were used to evaluate various parameters, including availability, accessibility, platform, origin of authors, citation, journal impact factor and publication year. We identified 3649 publications in 375 journals, of which 2522 (69%) were currently active. Over 95% of sites were running in the first 2 years, but this rate dropped to 84% in the third year and gradually declined afterwards (P < 1e-16). The mean half-life of Web services is 10.39 years. Working Web services were published in journals with higher impact factors (P = 4.8e-04). Services published before the year 2000 received minimal attention. Offline services were cited less than online ones (P = 0.022). The majority of Web services provide analytical tools, and the proportion of databases is slowly decreasing. Conclusions: almost one-third of the Web services published to date have gone out of service. We recommend continued support of Web-based services to increase the reproducibility of published results. © The Author 2017. Published by Oxford University Press.
Velonakis, E; Mantas, J; Mavrikakis, I
2006-01-01
Occupational health and safety management constitutes a field of increasing interest. Institutions, in cooperation with enterprises, make synchronized efforts to introduce quality management systems in this field. Computer networks can offer such services via TCP/IP, which is a reliable protocol for workflow management between enterprises and institutions. The design of such a network is based on several factors in order to achieve defined criteria and connectivity with other networks. The network will consist of certain nodes responsible for informing executive personnel on occupational health and safety. A web database has been planned for inserting and searching documents, and for answering and processing questionnaires. The submission of files to a server and the answers to questionnaires through the web help the experts to make corrections and improvements to their activities. Based on the requirements of enterprises, we have constructed a web file server, to which files are submitted so that users can retrieve the files they need. Access is limited to authorized users, and digital watermarks authenticate and protect digital objects. The health and safety management system follows ISO 18001, and its implementation through the web site is an aim. The whole application has been developed and implemented on a pilot basis for the health services sector. It is already installed within a hospital, supporting health and safety management among different departments of the hospital and allowing communication through the web with other hospitals.
A Flexible Monitoring Infrastructure for the Simulation Requests
NASA Astrophysics Data System (ADS)
Spinoso, V.; Missiato, M.
2014-06-01
Running and monitoring simulations usually involves several different aspects of the entire workflow: the configuration of the job, site issues, the software deployment at the site, the file catalogue, and the transfers of the simulated data. In addition, the final product of the simulation is often the result of several sequential steps. This project tries a different approach to monitoring simulation requests. All the necessary data are collected from the central services which drive the submission of the requests and the data management, and are stored by a backend in a NoSQL-based data cache; those data can be queried through a Web Service interface which returns JSON responses, allowing users, sites and physics groups to easily create their own web frontends, aggregating only the needed information. As an example, it is shown how the CMS services (ReqMgr, DAS/DBS, PhEDEx) can be monitored using a central backend and multiple customized cross-language frontends.
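A hedged sketch of the pattern described - a backend exposing the cached monitoring data as JSON, and a thin custom frontend aggregating only the fields it needs (the endpoint and field names are invented; the real CMS service APIs are not modelled here):

    import requests

    # Hypothetical monitoring backend exposing the NoSQL cache as JSON.
    API = "http://example.org/simmon/api"

    def my_site_view(site: str):
        """Custom 'frontend' query: progress of all requests running at one site."""
        reqs = requests.get(f"{API}/requests", params={"site": site}).json()
        # Aggregate only what this particular frontend cares about.
        return [{"name": r["name"],
                 "step": r["current_step"],
                 "done": r["events_done"] / max(r["events_requested"], 1)}
                for r in reqs]

    for row in my_site_view("T2_IT_Bari"):
        print(f"{row['name']}: step {row['step']}, {row['done']:.0%} complete")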
Data Integration Using SOAP in the VSO
NASA Astrophysics Data System (ADS)
Tian, K. Q.; Bogart, R. S.; Davey, A.; Dimitoglou, G.; Gurman, J. B.; Hill, F.; Martens, P. C.; Wampler, S.
2003-05-01
The Virtual Solar Observatory (VSO) project has implemented a time interval search for all four participating data archives. The back-end query services are implemented as web services, and are accessible via SOAP. SOAP (Simple Object Access Protocol) defines an RPC (Remote Procedure Call) mechanism that employs HTTP as its transport and encodes the client-server interactions (request and response messages) in XML (eXtensible Markup Language) documents. In addition to its core function of identifying relevant datasets in the local archive, the SOAP server at each data provider acts as a "wrapper" that maps descriptions in an abstract data model to those in the provider-specific data model, and vice versa. It is in this way that VSO integrates heterogeneous data services and allows access to them using a common interface. Our experience with SOAP has been fruitful. It has proven to be a better alternative to traditional web access methods, namely POST and GET, because of its flexibility and interoperability.
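For illustration, the wire format of such a SOAP call is simply an XML envelope POSTed over HTTP; a minimal hand-rolled sketch follows (the endpoint, namespace and operation are placeholders, not the actual VSO service signature):

    import requests

    # Hand-rolled SOAP 1.1 request (endpoint/namespace/operation are placeholders).
    envelope = """<?xml version="1.0" encoding="UTF-8"?>
    <soap:Envelope xmlns:soap="http://schemas.xmlsoap.org/soap/envelope/">
      <soap:Body>
        <Query xmlns="http://example.org/vso">
          <startTime>2003-01-01T00:00:00</startTime>
          <endTime>2003-01-02T00:00:00</endTime>
        </Query>
      </soap:Body>
    </soap:Envelope>"""

    resp = requests.post("http://example.org/vso-service",
                         data=envelope.encode("utf-8"),
                         headers={"Content-Type": "text/xml; charset=utf-8",
                                  "SOAPAction": "http://example.org/vso/Query"})
    print(resp.status_code, resp.text[:300])   # XML response from the SOAP server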
Benefits and Challenges of Architecture Frameworks
2011-06-01
• … systems and identify emerging and obsolete standards.
• The NATO Capability View (NCV) serves the analysis and optimization of military capabilities… NCVs show the dependencies between different capabilities and allow detecting gaps and overlaps of capabilities. NCVs deliver indirectly requirements…
• Email (possibly with vendor-specific extensions/modifications)
• Proprietary, and possibly not well-documented, message formats
• Web services
Developing Swedish Spelling Exercises on the ICALL Platform Lärka
ERIC Educational Resources Information Center
Pijetlovic, Dijana; Volodina, Elena
2013-01-01
In this project we developed web services on the ICALL platform Lärka for automatic generation of Swedish spelling exercises using Text-To-Speech (TTS) technology which allows L2 learners to train their spelling and listening individually at home. The spelling exercises contain five different linguistic levels, whereby the language learner has the…
Biological Web Service Repositories Review.
Urdidiales-Nieto, David; Navas-Delgado, Ismael; Aldana-Montes, José F
2017-05-01
Web services play a key role in bioinformatics, enabling the integration of database access and analysis algorithms. However, Web service repositories do not usually publish information on the changes made to their registered Web services. Dynamism is directly related to changes at the repository level (services registered or unregistered) and at the service level (annotation changes). Thus, users, software clients and workflow-based approaches lack relevant information for deciding when they should review or re-execute a Web service or workflow to get updated or improved results. The dynamism of a repository could serve as a measure for workflow developers to re-check service availability and annotation changes in the services of interest to them. This paper presents a review of the most well-known Web service repositories in the life sciences, including an analysis of their dynamism. Freshness is introduced in this paper and is used as the measure of the dynamism of these repositories. © 2017 The Authors. Published by Wiley-VCH Verlag GmbH & Co. KGaA.
The value of Web-based library services at Cedars-Sinai Health System.
Halub, L P
1999-07-01
Cedars-Sinai Medical Library/Information Center has maintained Web-based services since 1995 on the Cedars-Sinai Health System network. In that time, the librarians have found the provision of Web-based services to be a very worthwhile endeavor. Library users value the services that they access from their desktops because the services save time. They also appreciate being able to access services at their convenience, without restriction by the library's hours of operation. The library values its Web site because it brings increased visibility within the health system, and it enables library staff to expand services when budget restrictions have forced reduced hours of operation. In creating and maintaining the information center Web site, the librarians have learned the following lessons: consider the design carefully; offer what services you can, but weigh the advantages of providing the services against the time required to maintain them; make the content as accessible as possible; promote your Web site; and make friends in other departments, especially information services.
Van Hoecke, Sofie; Steurbaut, Kristof; Taveirne, Kristof; De Turck, Filip; Dhoedt, Bart
2010-01-01
We designed a broker platform for e-homecare services using web service technology. The broker allows efficient data communication and guarantees quality requirements such as security, availability and cost-efficiency by dynamic selection of services, minimizing user interactions and simplifying authentication through a single user sign-on. A prototype was implemented, with several e-homecare services (alarm, telemonitoring, audio diary and video-chat). It was evaluated by patients with diabetes and multiple sclerosis. The patients found that the start-up time and overhead imposed by the platform was satisfactory. Having all e-homecare services integrated into a single application, which required only one login, resulted in a high quality of experience for the patients.
MedlinePlus Connect: How it Works
… it looks depends on how it is implemented. Web Application: The Web application returns a formatted response … for more examples of Web Application response pages. Web Service: The MedlinePlus Connect REST-based Web service …
NASA Astrophysics Data System (ADS)
Iwatsuki, Masami; Kato, Yoriyuki; Yonekawa, Akira
State-of-the-art Internet technologies allow us to provide advanced and interactive distance education services. In engineering education, however, students have still had to gather on site for experiments and exercises, because large-scale equipment and expensive software are required. On the other hand, teleoperation systems that control a robot manipulator or vehicle via the Internet have been developed in the field of robotics. By fusing these two techniques, we can realize remote experiment and exercise systems for engineering education based on the World Wide Web. This paper presents how to construct a remote environment that allows students to take courses of experiments and exercises independently of their locations. By using the proposed system, users can remotely practice controlling a manipulator and a robot vehicle, as well as programming image processing.
OpenSearch technology for geospatial resources discovery
NASA Astrophysics Data System (ADS)
Papeschi, Fabrizio; Enrico, Boldrini; Mazzetti, Paolo
2010-05-01
In 2005, the term Web 2.0 was coined by Tim O'Reilly to describe a quickly growing set of Web-based applications that share a common philosophy of "mutually maximizing collective intelligence and added value for each participant by formalized and dynamic information sharing". Around the same period, OpenSearch, a new Web 2.0 technology, was developed. More properly, OpenSearch is a collection of technologies that allow the publishing of search results in a format suitable for syndication and aggregation. It is a way for websites and search engines to publish search results in a standard and accessible format. Due to its strong impact on the way the Web is perceived by users, and also due to its relevance for businesses, Web 2.0 has attracted the attention of both the mass media and the scientific community. This explosive growth in popularity of Web 2.0 technologies like OpenSearch, and practical applications of Service Oriented Architecture (SOA), resulted in an increased interest in the similarities, convergence, and potential synergy of these two concepts. SOA can be considered the philosophy of encapsulating application logic in services with a uniformly defined interface and making these publicly available via discovery mechanisms. Service consumers may then retrieve these services, and compose and use them according to their current needs. The great degree of similarity between SOA and Web 2.0 may be leading to a convergence between the two paradigms. They also expose divergent elements, such as Web 2.0's support for human interaction as opposed to SOA's typical machine-to-machine interaction. According to these considerations, the Geospatial Information (GI) domain is also taking its first steps towards a new approach to data publishing and discovery, in particular taking advantage of the OpenSearch technology. A specific GI niche is represented by the OGC Catalogue Service for the Web (CSW), which is part of the OGC Web Services (OWS) specifications suite and provides a set of services for discovery, access, and processing of geospatial resources in a SOA framework. GI-cat is a distributed CSW framework implementation developed by the ESSI Lab of the Italian National Research Council (CNR-IMAA) and the University of Florence. It provides brokering and mediation functionalities towards heterogeneous resources and inventories, exposing several standard interfaces for query distribution. This work focuses on a new GI-cat interface which allows the catalog to be queried according to the OpenSearch syntax specification, thus filling the gap between the SOA architectural design of the CSW and Web 2.0. At the moment there is no OGC standard specification on this topic, but an official change request has been proposed in order to enable OGC catalogues to support OpenSearch queries. In this change request, an OpenSearch extension is proposed, providing a standard mechanism to query a resource based on temporal and geographic extents. Two new catalogue operations are also proposed, in order to publish a suitable OpenSearch interface. This extended interface is implemented by the modular GI-cat architecture by adding a new profiling module called the "OpenSearch profiler". Since GI-cat also acts as a clearinghouse catalog, another component called the "OpenSearch accessor" is added in order to access OpenSearch compliant services. An important role in the GI-cat extension is played by the adopted mapping strategy. Two different kinds of mapping are required: query mapping and response element mapping.
Query mapping is provided in order to fit the simple OpenSearch query syntax to the complex CSW query expressed in the OGC Filter syntax. The GI-cat internal data model is based on the ISO 19115 profile, which is more complex than the simple XML syndication formats, such as RSS 2.0 and Atom 1.0, suggested by OpenSearch. Once response elements are available, in order to be presented they need to be translated from the GI-cat internal data model to the above-mentioned syndication formats; the mapping process is bidirectional. When GI-cat is used to access OpenSearch compliant services, the CSW query must be mapped to the OpenSearch query, and the response elements must be translated according to the GI-cat internal data model. As a result of these extensions, GI-cat provides a user-friendly facade to the complex CSW interface, enabling it to be queried, for example, using a browser toolbar.
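To ground the discussion: an OpenSearch description document advertises a URL template whose placeholders a client substitutes; the sketch below fills the core {searchTerms} parameter plus the Geo and Time extension parameters (the endpoint is hypothetical):

    # Filling an OpenSearch URL template ({searchTerms} is from the core spec;
    # {geo:box} and {time:start}/{time:end} are from the Geo and Time extensions).
    template = ("http://example.org/gi-cat/opensearch?"
                "q={searchTerms}&bbox={geo:box}&ts={time:start}&te={time:end}")

    def fill(template: str, **params: str) -> str:
        for key, value in params.items():
            template = template.replace("{" + key + "}", value)
        return template

    url = fill(template,
               **{"searchTerms": "sea+surface+temperature",
                  "geo:box": "-10,35,30,47",            # west,south,east,north
                  "time:start": "2009-01-01", "time:end": "2009-12-31"})
    print(url)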
A Ubiquitous Sensor Network Platform for Integrating Smart Devices into the Semantic Sensor Web
de Vera, David Díaz Pardo; Izquierdo, Álvaro Sigüenza; Vercher, Jesús Bernat; Gómez, Luis Alfonso Hernández
2014-01-01
Ongoing Sensor Web developments make a growing amount of heterogeneous sensor data available to smart devices. This is generating an increasing demand for homogeneous mechanisms to access, publish and share real-world information. This paper discusses, first, an architectural solution based on Next Generation Networks: a pilot Telco Ubiquitous Sensor Network (USN) Platform that embeds several OGC® Sensor Web services. This platform has already been deployed in large scale projects. Second, the USN-Platform is extended to explore a first approach to Semantic Sensor Web principles and technologies, so that smart devices can access Sensor Web data, allowing them also to share richer (semantically interpreted) information. An experimental scenario is presented: a smart car that consumes and produces real-world information which is integrated into the Semantic Sensor Web through a Telco USN-Platform. Performance tests revealed that observation publishing times with our experimental system were well within limits compatible with the adequate operation of smart safety assistance systems in vehicles. On the other hand, response times for complex queries on large repositories may be inappropriate for rapid reaction needs. PMID:24945678
Informatics in radiology: radiology gamuts ontology: differential diagnosis for the Semantic Web.
Budovec, Joseph J; Lam, Cesar A; Kahn, Charles E
2014-01-01
The Semantic Web is an effort to add semantics, or "meaning," to Web-based information in order to empower automated searching and processing. The overarching goal of the Semantic Web is to enable users to more easily find, share, and combine information. Critical to this vision are knowledge models called ontologies, which define a set of concepts and formalize the relations between them. Ontologies have been developed to manage and exploit the large and rapidly growing volume of information in biomedical domains. In diagnostic radiology, lists of differential diagnoses of imaging observations, called gamuts, provide an important source of knowledge. The Radiology Gamuts Ontology (RGO) is a formal knowledge model of differential diagnoses in radiology that includes 1674 differential diagnoses, 19,017 terms, and 52,976 links between terms. Its knowledge is used to provide an interactive, freely available online reference of radiology gamuts (www.gamuts.net). A Web service allows its content to be discovered and consumed by other information systems. The RGO integrates radiologic knowledge with other biomedical ontologies as part of the Semantic Web. © RSNA, 2014.
Unifying Access to National Hydrologic Data Repositories via Web Services
NASA Astrophysics Data System (ADS)
Valentine, D. W.; Jennings, B.; Zaslavsky, I.; Maidment, D. R.
2006-12-01
The CUAHSI hydrologic information system (HIS) is designed to be a live, multiscale web portal system for accessing, querying, visualizing, and publishing distributed hydrologic observation data and models for any location or region in the United States. The HIS design follows the principles of open service oriented architecture, i.e. system components are represented as web services with well defined standard service APIs. WaterOneFlow web services are the main component of the design. The currently available services have been completely re-written compared to the previous version, and provide programmatic access to USGS NWIS (stream flow, groundwater, and water quality repositories), DAYMET daily observations, NASA MODIS, and Unidata NAM streams, with several additional web service wrappers being added (EPA STORET, NCDC, and others). Different repositories of hydrologic data use different vocabularies, and support different types of query access. Resolving semantic and structural heterogeneities across different hydrologic observation archives and distilling a generic set of service signatures is one of the main scalability challenges in this project, and a requirement in our web service design. To achieve a uniform web services API, data repositories are modeled following the CUAHSI Observation Data Model. The web service responses are document-based, and use an XML schema to express the semantics in a standard format. Access to station metadata is provided via the web service methods GetSites, GetSiteInfo, and GetVariableInfo. These methods form the foundation of the CUAHSI HIS discovery interface and may execute over locally stored metadata or request the information from remote repositories directly. Observation values are retrieved via a generic GetValues method, which is executed against national data repositories. The service is implemented in ASP.Net, and other providers are implementing WaterOneFlow services in Java. A reference implementation of the WaterOneFlow web services is available. More information about the ongoing development of CUAHSI HIS is available from http://www.cuahsi.org/his/.
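The WaterOneFlow methods named above are ordinary SOAP operations, so any SOAP-capable client can consume them. The following is a minimal sketch using the Python zeep library; the WSDL URL, site code, and variable code are placeholders, and the exact argument names should be taken from the service's WSDL:

    # Sketch: calling a WaterOneFlow-style GetValues SOAP operation with zeep.
    # The endpoint URL and the codes below are illustrative placeholders.
    from zeep import Client

    client = Client("http://example.org/cuahsi_1_1.asmx?WSDL")  # hypothetical WSDL
    response = client.service.GetValues(
        location="NWIS:01646500",   # illustrative site code
        variable="NWIS:00060",      # illustrative variable code (stream flow)
        startDate="2006-01-01",
        endDate="2006-01-31",
    )
    print(response)  # a WaterML XML document describing the observations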
A Privacy Access Control Framework for Web Services Collaboration with Role Mechanisms
NASA Astrophysics Data System (ADS)
Liu, Linyuan; Huang, Zhiqiu; Zhu, Haibin
With the popularity of Internet technology, web services are becoming the most promising paradigm for distributed computing. This increased use of web services has meant that more and more personal information of consumers is being shared with web service providers, leading to the need to guarantee the privacy of consumers. This paper proposes a role-based privacy access control framework for Web services collaboration. It utilizes roles to specify the privacy privileges of services, and considers the impact of a service's historical experience in playing roles on its reputation degree. Compared with traditional privacy access control approaches, this framework can make fine-grained authorization decisions, thus efficiently protecting consumers' privacy.
New GES DISC Services Shortening the Path in Science Data Discovery
NASA Technical Reports Server (NTRS)
Li, Angela; Shie, Chung-Lin; Petrenko, Maksym; Hegde, Mahabaleshwa; Teng, William; Liu, Zhong; Bryant, Keith; Shen, Suhung; Hearty, Thomas; Wei, Jennifer;
2017-01-01
The currently available GES DISC services only allow users to select variables from a single dataset at a time, and because too many variables from each dataset are displayed, choosing among them is hard. At the American Geophysical Union (AGU) 2016 Fall Meeting, the Goddard Earth Sciences Data and Information Services Center (GES DISC) unveiled a new service: the Datalist. A Datalist is a collection of predefined or user-defined data variables from one or more archived datasets. Our science support team curates predefined Datalists that provide value to the user community. Imagine a novice user who wants to study hurricanes and types "hurricane" into the search box. The first item in the search results is the GES DISC-provided Hurricane Datalist, which contains scientist-recommended variables from multiple datasets such as TRMM, GPM, and MERRA. Datalists use the same architecture as our new website, which provides one-stop shopping for data, metadata, citation, documentation, visualization, and other available services. We implemented the Datalist with the new GES DISC web architecture: a single web page that unifies all user interfaces. From that page, users can find data either by typing in a keyword or by browsing by category, and they receive a sophisticated integrated data and services package, including metadata, citation, documentation, visualization, and data-specific services, all available from one-stop shopping.
NASA Astrophysics Data System (ADS)
Davis, L.; Weatherley, J.; Bhushan, S.; Khan, H.; de La Chica, S.; Deardorff, R.
2004-12-01
An exciting pilot program took place this summer, pioneering the development of Digital Library for Earth System Education (DLESE) Teaching Boxes with the Univ. of California Berkeley Museum of Paleontology, SF State Univ., USGS, and 7 middle/high school teachers from the San Francisco area. This session will share the DLESE Teaching Box concept, explain the pilot program, and explore the tremendous opportunities for expanding this notion to embrace interdisciplinary approaches to learning about the Earth in the undergraduate science and pre-service teaching arenas. A Teaching Box is a metaphor for an online assembly of interrelated learning concepts, digital resources, and cohesive narration that bridges the gap between discrete resources and understanding. Within a Teaching Box, an instructor or student can pick a topic and see the concepts that build an understanding of that topic, explore online resources that support learning of those concepts, and benefit from the narration (the glue) that weaves concepts, activities, and background information together into a complete teaching/learning story. In this session, we will demonstrate the emerging Teaching Box prototypes and explore how this platform may promote STEM learning by utilizing DLESE tools and services in ways that begin to blur traditional disciplinary boundaries, overcome limitations of discipline-specific vocabularies, and foster collaboration. We will show ways in which new DLESE Web Services could support learning in this highly contextualized environment. We will see glimpses of how learners and educators will be able to modify or create their own Teaching Boxes specific to a unit of study or course, and perhaps share them with the Earth Science Education community. We will see ways to stay abreast of current Earth events, emerging research, and real-time data and incorporate such dynamic information into one learning environment. Services will be described and demonstrated in the context of Teaching Boxes:
- DLESE Web Services provide a programmatic interface that allows the Teaching Box (or any web page) to have the same DLESE search, bookmarking features, and data management that are found at the DLESE web site.
- DLESE Smart Links are hyperlinks that can be created by anyone and implemented as easily as defining a specific query. Clicking a Smart Link displays a list of resources that corresponds to the specific query. We'll talk about how this service can help to bridge the gap between vocabularies and disciplines and the interesting possibilities it presents for contextualizing searches and building custom topical menus.
- The Really Simple Syndication (RSS) service delivers online information immediately, and allows end-users to subscribe to receive regular news, events, and data on a given Teaching Box topic. This opens the door to event-based learning.
- Strand Maps, developed by the AAAS, are diagrams of interconnected learning concepts across a range of science, technology, engineering, and mathematics disciplines. The University of Colorado and its project partners are developing the Strand Map Service (SMS) to provide an interactive interface to interrelated learning goals, content knowledge (including student misconceptions), and educational resources in the National Science Digital Library and DLESE.
SIDECACHE: Information access, management and dissemination framework for web services.
Doderer, Mark S; Burkhardt, Cory; Robbins, Kay A
2011-06-14
Many bioinformatics algorithms and data sets are deployed using web services so that the results can be explored via the Internet and easily integrated into other tools and services. These services often include data from other sites that is accessed either dynamically or through file downloads. Developers of these services face several problems because of the dynamic nature of the information from the upstream services. Many publicly available repositories of bioinformatics data frequently update their information. When such an update occurs, the developers of the downstream service may also need to update. For file downloads, this process is typically performed manually, followed by a web service restart. Requests for information obtained by dynamic access to upstream sources are sometimes subject to rate restrictions. SideCache provides a framework for deploying web services that integrate information extracted from other databases and from web sources that are periodically updated. This situation occurs frequently in biotechnology, where new information is being continuously generated and the latest information is important. SideCache provides several types of services, including proxy access and rate control, local caching, and automatic web service updating. We have used the SideCache framework to automate the deployment and updating of a number of bioinformatics web services and tools that extract information from remote primary sources such as NCBI, NCIBI, and Ensembl. The SideCache framework also has been used to share research results through the use of a SideCache derived web service.
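The proxy-access, rate-control, and local-caching services described above follow a common pattern that can be sketched in a few lines. This is an illustration of the idea under assumed names and limits, not SideCache's actual code or API:

    # Sketch of the proxy / rate-limit / cache pattern (not SideCache's API).
    import time
    import requests

    _cache = {}          # url -> (fetch_time, body)
    _last_request = 0.0
    TTL = 3600.0         # seconds a cached copy stays fresh (illustrative)
    MIN_INTERVAL = 0.34  # e.g. an upstream limit of ~3 requests per second

    def cached_fetch(url):
        """Return upstream content, preferring a fresh local copy."""
        global _last_request
        now = time.time()
        if url in _cache and now - _cache[url][0] < TTL:
            return _cache[url][1]
        wait = MIN_INTERVAL - (now - _last_request)
        if wait > 0:              # honour the upstream rate limit
            time.sleep(wait)
        body = requests.get(url, timeout=30).content
        _last_request = time.time()
        _cache[url] = (_last_request, body)
        return body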
Design for Connecting Spatial Data Infrastructures with Sensor Web (sensdi)
NASA Astrophysics Data System (ADS)
Bhattacharya, D.; M., M.
2016-06-01
Integrating Sensor Web With Spatial Data Infrastructures (SENSDI) aims to extend SDIs with sensor web enablement, converging geospatial and built infrastructure, and to implement test cases with sensor data and SDI. The research seeks to harness the sensed environment by utilizing domain-specific sensor data to create a generalized sensor web framework. The challenges are semantic enablement for Spatial Data Infrastructures and connecting the interfaces of the SDI with those of the Sensor Web. The proposed research plan is to identify sensor data sources, set up an open source SDI, match the APIs and functions between the Sensor Web and the SDI, and carry out case studies such as hazard and urban applications. We take up cooperative development of SDI best practices to enable a new realm of a location-enabled and semantically enriched World Wide Web - the "Geospatial Web" or "Geosemantic Web" - by setting up a one-to-one correspondence between WMS, WFS, WCS, and Metadata on the SDI side and, on the Sensor Web side, the 'Sensor Observation Service' (SOS), the 'Sensor Planning Service' (SPS), the 'Sensor Alert Service' (SAS), and the 'Web Notification Service' (WNS), a service that facilitates asynchronous message interchange between users and services, and between two OGC-SWE services. In conclusion, it is of importance to geospatial studies to integrate SDI with the Sensor Web. The integration can be done by merging the common OGC interfaces of SDI and Sensor Web. Multi-usability studies to validate the integration have to be undertaken as future research.
Proteus - A Free and Open Source Sensor Observation Service (SOS) Client
NASA Astrophysics Data System (ADS)
Henriksson, J.; Satapathy, G.; Bermudez, L. E.
2013-12-01
The Earth's 'electronic skin' is becoming ever more sophisticated, with a growing number of sensors measuring everything from seawater salinity levels to atmospheric pressure. To further the scientific application of this data collection effort, it is important to make the data easily available to anyone who wants to use it. Making Earth Science data readily available will allow the data to be used in new and potentially groundbreaking ways. The US National Science and Technology Council made this clear in its most recent National Strategy for Civil Earth Observations report, when it remarked that Earth observations 'are often found to be useful for additional purposes not foreseen during the development of the observation system'. On the road to this goal, the Open Geospatial Consortium (OGC) is defining uniform data formats and service interfaces to facilitate the discovery and access of sensor data. This is being done through the Sensor Web Enablement (SWE) stack of standards, which includes the Sensor Observation Service (SOS), Sensor Model Language (SensorML), Observations & Measurements (O&M), and Catalog Service for the Web (CSW). End-users do not have to use these standards directly, but can use smart tools that leverage and implement them. We have developed such a tool, named Proteus. Proteus is an open-source sensor data discovery client. The goal of Proteus is to be a general-purpose client that can be used by anyone for discovering and accessing sensor data via OGC-based services. Proteus is a desktop client and supports a straightforward workflow for finding sensor data. The workflow takes the user through the process of selecting appropriate services, bounding boxes, observed properties, time periods, and other search facets. NASA World Wind is used to display the matching sensor offerings on a map. Data from any sensor offering can be previewed in a time series. The user can download data from a single sensor offering, or download data in bulk from all matching sensor offerings. Proteus leverages NASA World Wind's WMS capabilities and allows overlaying sensor offerings on top of any map. Specific search criteria (i.e., user discoveries) can be saved and later restored. Proteus supports two user types: 1) the researcher/scientist interested in discovering and downloading specific sensor data as input to research processes, and 2) the data manager who is responsible for maintaining sensor data services (e.g., SOSs) and wants to ensure proper data and metadata delivery, verify sensor data, and receive sensor data alerts. Proteus has a Web-based companion product named the Community Hub that is used to generate sensor data alerts. Alerts can be received via an RSS feed, viewed in a Web browser, or displayed directly in Proteus via a Web-based API. To advance the vision of making Earth Science data easily discoverable and accessible to end-users, professional or laymen, Proteus is available as open source on GitHub (https://github.com/intelligentautomation/proteus).
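Under the hood, a client like the one described issues standard OGC requests. A minimal sketch of an SOS 1.0-style GetObservation call over HTTP KVP; the endpoint, offering, and observed property are placeholders:

    # Sketch: fetching observations from an OGC SOS endpoint (placeholder URL).
    import requests

    params = {
        "service": "SOS",
        "version": "1.0.0",
        "request": "GetObservation",
        "offering": "SEA_WATER_SALINITY",          # illustrative offering id
        "observedProperty": "sea_water_salinity",  # illustrative property name
        "responseFormat": 'text/xml;subtype="om/1.0.0"',
    }
    resp = requests.get("http://example.org/sos", params=params, timeout=60)
    resp.raise_for_status()
    print(resp.text[:500])  # an Observations & Measurements (O&M) XML document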
GIS Technologies For The New Planetary Science Archive (PSA)
NASA Astrophysics Data System (ADS)
Docasal, R.; Barbarisi, I.; Rios, C.; Macfarlane, A. J.; Gonzalez, J.; Arviset, C.; De Marchi, G.; Martinez, S.; Grotheer, E.; Lim, T.; Besse, S.; Heather, D.; Fraga, D.; Barthelemy, M.
2015-12-01
Geographical information systems (GIS) are becoming increasingly used for planetary science. GIS are computerised systems for the storage, retrieval, manipulation, analysis, and display of geographically referenced data. Some data stored in the Planetary Science Archive (PSA), for instance a set of Mars Express/Venus Express data, have spatial metadata associated with them. To facilitate users in handling and visualising spatial data in GIS applications, the new PSA should support interoperability with interfaces implementing the standards approved by the Open Geospatial Consortium (OGC). These standards are followed in order to develop open interfaces and encodings that allow data to be exchanged with GIS client applications, well-known examples of which are Google Earth and NASA World Wind, as well as open source tools such as OpenLayers. The technology already exists within PostgreSQL databases to store searchable geometrical data, in the form of the PostGIS extension. GeoServer is an existing open source map server; an instance of it has been deployed for the new PSA and uses the OGC standards to allow, among other things, the sharing, processing, and editing of spatial data through the Web Feature Service (WFS) standard, as well as the serving of georeferenced map images through the Web Map Service (WMS). The final goal of the new PSA, being developed by the European Space Astronomy Centre (ESAC) Science Data Centre (ESDC), is to create an archive which enables science exploitation of ESA's planetary mission datasets. This can be facilitated through the GIS framework, offering interfaces (both web GUIs and scriptable APIs) that can be used more easily and scientifically by the community, and that will also enable the community to build added-value services on top of the PSA.
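The PostGIS capability mentioned above amounts to spatially indexed SQL. As a hedged sketch (the connection string, table, and column names are invented for illustration), a bounding-box search over archived product footprints might look like:

    # Sketch: a PostGIS footprint search; table/column names are illustrative.
    import psycopg2

    conn = psycopg2.connect("dbname=psa user=reader")  # hypothetical credentials
    with conn, conn.cursor() as cur:
        cur.execute(
            """
            SELECT product_id
            FROM products
            WHERE ST_Intersects(
                footprint,                             -- indexed geometry column
                ST_MakeEnvelope(%s, %s, %s, %s, 4326)  -- lon/lat bounding box
            )
            """,
            (-10.0, 30.0, 5.0, 45.0),  # west, south, east, north
        )
        for (product_id,) in cur.fetchall():
            print(product_id)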
Dynamic Generation of Reduced Ontologies to Support Resource Constraints of Mobile Devices
ERIC Educational Resources Information Center
Schrimpsher, Dan
2011-01-01
As Web Services and the Semantic Web become more important, enabling technologies such as web service ontologies will grow larger. At the same time, use of mobile devices to access web services has doubled in the last year. The ability of these resource constrained devices to download and reason across these ontologies to support service discovery…
Making large amounts of meteorological plots easily accessible to users
NASA Astrophysics Data System (ADS)
Lamy-Thepaut, Sylvie; Siemen, Stephan; Sahin, Cihan; Raoult, Baudouin
2015-04-01
The European Centre for Medium-Range Weather Forecasts (ECMWF) is an international organisation providing its member organisations with forecasts in the medium time range of 3 to 15 days, and some longer-range forecasts for up to a year ahead, with varying degrees of detail. As part of its mission, ECMWF generates an increasing number of forecast data products for its users. To support the work of forecasters and researchers and to let them make best use of ECMWF forecasts, the Centre also provides tools and interfaces to visualise its products. This allows users to explore forecasts without having to transfer large amounts of raw data. This is especially true for products based on ECMWF's 50-member ensemble forecast, where specific processing and visualisation are applied to extract information. Every day, thousands of raw data fields are pushed to ECMWF's interactive web charts application, called ecCharts, and thousands of products are processed and pushed to ECMWF's institutional web site. ecCharts provides a highly interactive application to display and manipulate recent numerical forecasts for forecasters in national weather services and ECMWF's commercial customers. With ecCharts, forecasters are able to explore ECMWF's medium-range forecasts in far greater detail than has previously been possible on the web, as soon as the forecast becomes available. All ecCharts products are also available through a machine-to-machine web map service based on the OGC Web Map Service (WMS) standard. The ECMWF institutional web site provides access to a large number of graphical products. It was entirely redesigned last year; it now shares the same infrastructure as ecCharts and can benefit from some ecCharts functionalities, for example the dashboard. The dashboard, initially developed for ecCharts, allows users to organise their own collection of products depending on their workflow, and is being further developed. In its first implementation, it presents the user's products in a single interface with fast access to the original product and possibilities of synchronous animations between them. Its functionalities are being extended to give users the freedom to collect not only ecCharts 2D maps and graphs, but also other ECMWF Web products such as monthly and seasonal products, scores, and observation monitoring. The dashboard will play a key role in helping the user interpret the large amount of information that ECMWF provides. This talk will present examples of how the new user interface can organise complex meteorological maps and graphs and will show the new possibilities users have gained by using the web as a medium.
Gogaert, Stefan; Vande Veegaete, Axel; Scholliers, Annelies; Vandekerckhove, Philippe
2016-10-01
First aid (FA) services are provisioned on-site as a preventive measure at most public events. In Flanders, Belgium, the Belgian Red Cross-Flanders (BRCF) is the major provider of these FA services, with volunteers being deployed at approximately 10,000 public events annually. The BRCF has systematically registered information on the patients being treated in FA posts at major events and mass gatherings during the last 10 years. This information has been collected in a web-based client server system called "MedTRIS" (Medical Triage and Registration Informatics System). MedTRIS contains data on more than 200,000 patients at 335 mass events. This report describes the MedTRIS architecture, the data collected, and how the system operates in the field. This database consolidates different types of information with regard to FA interventions in a standardized way for a variety of public events. MedTRIS allows close monitoring in "real time" of the situation at mass gatherings and immediate intervention when necessary; allows more accurate prediction of the resources needed; allows validation of conceptual and predictive models for medical resources at (mass) public events; and can contribute to the definition of a standardized minimum data set (MDS) for mass-gathering health research and evaluation. Gogaert S, Vande Veegaete A, Scholliers A, Vandekerckhove P. "MedTRIS" (Medical Triage and Registration Informatics System): a web-based client server system for the registration of patients being treated in first aid posts at public events and mass gatherings. Prehosp Disaster Med. 2016;31(5):557-562.
NASA Astrophysics Data System (ADS)
Raup, B. H.; Khalsa, S. S.; Armstrong, R.
2007-12-01
The Global Land Ice Measurements from Space (GLIMS) project has built a geospatial and temporal database of glacier data, composed of glacier outlines and various scalar attributes. These data are derived primarily from satellite imagery, such as from ASTER and Landsat. Each "snapshot" of a glacier is from a specific time, and the database is designed to store multiple snapshots representative of different times. We have implemented two web-based interfaces to the database; one enables exploration of the data via interactive maps (web map server), while the other allows searches based on text-field constraints. The web map server is an Open Geospatial Consortium (OGC) compliant Web Map Server (WMS) and Web Feature Server (WFS). This means that other web sites can display glacier layers from our site over the Internet, or retrieve glacier features in vector format. All components of the system are implemented using Open Source software: Linux, PostgreSQL, PostGIS (geospatial extensions to the database), MapServer (WMS and WFS), and several supporting components such as Proj.4 (a geographic projection library) and PHP. These tools are robust and provide a flexible and powerful framework for web mapping applications. As a service to the GLIMS community, the database contains metadata on all ASTER imagery acquired over glacierized terrain. Reduced-resolution versions of the images (browse imagery) can be viewed either as a layer in the MapServer application, or overlaid on the virtual globe within Google Earth. The interactive map application allows the user to constrain by time what data appear on the map. For example, ASTER imagery or glacier outlines from 2002 only, or from Autumn in any year, can be displayed. The system allows users to download their selected glacier data in a choice of formats. The results of a query based on spatial selection (using a mouse) or text-field constraints can be downloaded in any of these formats: ESRI shapefiles, KML (Google Earth), MapInfo, GML (Geography Markup Language), and GMT (Generic Mapping Tools). This "clip-and-ship" function allows users to download only the data they are interested in. Our flexible web interfaces to the database, which include various support layers (e.g., a layer to help collaborators identify satellite imagery over their region of expertise), will facilitate enhanced analysis of glacier systems, their distribution, and their impacts on other Earth systems.
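Because the glacier layers are exposed through a standard WFS, outlines can also be retrieved programmatically by any OGC-aware client. A sketch (the endpoint, feature type name, and bounding box are illustrative):

    # Sketch: pulling glacier outlines from a WFS endpoint as GML.
    import requests

    params = {
        "service": "WFS",
        "version": "1.1.0",
        "request": "GetFeature",
        "typename": "glims:glacier_outlines",  # illustrative feature type
        "bbox": "60.0,-145.0,61.0,-140.0",     # region (axis order is CRS-dependent)
        "outputFormat": "text/xml; subtype=gml/3.1.1",
    }
    resp = requests.get("http://example.org/glims/wfs", params=params, timeout=120)
    resp.raise_for_status()
    with open("glacier_outlines.gml", "wb") as f:
        f.write(resp.content)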
Managing the Web-Enhanced Geographic Information Service.
ERIC Educational Resources Information Center
Stephens, Denise
1997-01-01
Examines key management issues involved in delivering geographic information services on the World Wide Web, using the Geographic Information Center (GIC) program at the University of Virginia Library as a reference. Highlights include integrating the Web into services; building collections for Web delivery; and evaluating spatial information…
Goldberg, Howard S; Paterno, Marilyn D; Grundmeier, Robert W; Rocha, Beatriz H; Hoffman, Jeffrey M; Tham, Eric; Swietlik, Marguerite; Schaeffer, Molly H; Pabbathi, Deepika; Deakyne, Sara J; Kuppermann, Nathan; Dayan, Peter S
2016-03-01
To evaluate the architecture, integration requirements, and execution characteristics of a remote clinical decision support (CDS) service used in a multicenter clinical trial. The trial tested the efficacy of implementing brain injury prediction rules for children with minor blunt head trauma. We integrated the Epic® electronic health record (EHR) with the Enterprise Clinical Rules Service (ECRS), a web-based CDS service, at two emergency departments. Patterns of CDS review included either a delayed, near-real-time review, where the physician viewed CDS recommendations generated by the nursing assessment, or a real-time review, where the physician viewed recommendations generated by their own documentation. A backstopping, vendor-based CDS triggered with zero delay when no recommendation was available in the EHR from the web service. We assessed the execution characteristics of the integrated system and the source of the generated recommendations viewed by physicians. The ECRS mean execution time was 0.74 ± 0.72 s. Overall execution time was substantially different at the two sites, with mean total transaction times of 19.67 and 3.99 s. Of 1930 analyzed transactions from the two sites, 60% (310/521) of all physician documentation-initiated recommendations and 99% (1390/1409) of all nurse documentation-initiated recommendations originated from the remote web service. The remote CDS system was the source of recommendations in more than half of the real-time cases and virtually all of the near-real-time cases. Comparisons are limited by allowable variation in user workflow and the resolution of the EHR clock. With the maturation and adoption of standards for CDS services, remote CDS shows promise to decrease time-to-trial for multicenter evaluations of candidate decision support interventions. Copyright © 2015 Elsevier Ireland Ltd. All rights reserved.
Customer Decision Making in Web Services with an Integrated P6 Model
NASA Astrophysics Data System (ADS)
Sun, Zhaohao; Sun, Junqing; Meredith, Grant
Customer decision making (CDM) is an indispensable factor for web services. This article examines CDM in web services with a novel P6 model, which consists of the 6 Ps: privacy, perception, propensity, preference, personalization, and promised experience. This model integrates the existing 6 P elements of the marketing mix as the system environment of CDM in web services. The new integrated P6 model deals with the inner world of the customer and incorporates what the customer thinks during the decision-making process. The proposed approach will facilitate the research and development of web services and decision support systems.
CoryneRegNet 4.0 – A reference database for corynebacterial gene regulatory networks
Baumbach, Jan
2007-01-01
Background Detailed information on DNA-binding transcription factors (the key players in the regulation of gene expression) and on transcriptional regulatory interactions of microorganisms, deduced from literature-derived knowledge, computer predictions and global DNA microarray hybridization experiments, has opened the way for the genome-wide analysis of transcriptional regulatory networks. The large-scale reconstruction of these networks allows the in silico analysis of cell behavior in response to changing environmental conditions. We previously published CoryneRegNet, an ontology-based data warehouse of corynebacterial transcription factors and regulatory networks. Initially, it was designed to provide methods for the analysis and visualization of the gene regulatory network of Corynebacterium glutamicum. Results Now we introduce CoryneRegNet release 4.0, which integrates data on the gene regulatory networks of 4 corynebacteria, 2 mycobacteria and the model organism Escherichia coli K12. As in the previous versions, CoryneRegNet provides a web-based user interface to access the database content, to allow various queries, and to support the reconstruction, analysis and visualization of regulatory networks at different hierarchical levels. In this article, we present the further improved database content of CoryneRegNet along with novel analysis features. The network visualization feature GraphVis now allows inter-species comparisons of reconstructed gene regulatory networks and the projection of gene expression levels onto those networks. Therefore, we added stimulon data directly into the database, and also provide Web Service access to the DNA microarray analysis platform EMMA. Additionally, CoryneRegNet now provides a SOAP-based Web Service server, which can easily be consumed by other bioinformatics software systems. Stimulons (imported from the database, or uploaded by the user) can be analyzed in the context of known transcriptional regulatory networks to predict putative contradictions or further gene regulatory interactions. Furthermore, it integrates protein clusters by means of heuristically solving the weighted graph cluster editing problem. In addition, it provides Web Service based access to up-to-date gene annotation data from GenDB. Conclusion Release 4.0 of CoryneRegNet is a comprehensive system for the integrated analysis of prokaryotic gene regulatory networks. It is a versatile systems biology platform to support the efficient and large-scale analysis of transcriptional regulation of gene expression in microorganisms. It is publicly available at . PMID:17986320
Paterson, Trevor; Law, Andy
2009-08-14
Genomic analysis, particularly for less well-characterized organisms, is greatly assisted by performing comparative analyses between different types of genome maps and across species boundaries. Various providers publish a plethora of on-line resources collating genome mapping data from a multitude of species. Datasources range in scale and scope from small bespoke resources for particular organisms, through larger web-resources containing data from multiple species, to large-scale bioinformatics resources providing access to data derived from genome projects for model and non-model organisms. The heterogeneity of information held in these resources reflects both the technologies used to generate the data and the target users of each resource. Currently there is no common information exchange standard or protocol to enable access and integration of these disparate resources. Consequently data integration and comparison must be performed in an ad hoc manner. We have developed a simple generic XML schema (GenomicMappingData.xsd - GMD) to allow export and exchange of mapping data in a common lightweight XML document format. This schema represents the various types of data objects commonly described across mapping datasources and provides a mechanism for recording relationships between data objects. The schema is sufficiently generic to allow representation of any map type (for example genetic linkage maps, radiation hybrid maps, sequence maps and physical maps). It also provides mechanisms for recording data provenance and for cross referencing external datasources (including, for example, ENSEMBL, PubMed and GenBank). The schema is extensible via the inclusion of additional datatypes, which can be achieved by importing further schemas, e.g. a schema defining relationship types. We have built demonstration web services that export data from our ArkDB database according to the GMD schema, facilitating the integration of data retrieval into Taverna workflows. The data exchange standard we present here provides a useful generic format for transfer and integration of genomic and genetic mapping data. The extensibility of our schema allows for inclusion of additional data and provides a mechanism for typing mapping objects via third party standards. Web services retrieving GMD-compliant mapping data demonstrate that use of this exchange standard provides a practical mechanism for achieving data integration, by facilitating syntactically and semantically-controlled access to the data.
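To make the idea of the exchange format concrete, the sketch below emits a small GMD-style document with Python's standard library. The element and attribute names are simplified illustrations; the authoritative structure is defined by GenomicMappingData.xsd itself:

    # Sketch: writing a simplified GMD-style mapping-data document.
    # Element/attribute names are illustrative, not the actual schema.
    import xml.etree.ElementTree as ET

    root = ET.Element("GenomicMappingData")
    gmap = ET.SubElement(root, "Map", {"type": "linkage", "species": "Gallus gallus"})
    marker = ET.SubElement(gmap, "Marker", {"name": "MCW0058", "position": "12.5"})
    ET.SubElement(marker, "CrossReference",
                  {"database": "GenBank", "accession": "X00000"})  # illustrative
    ET.ElementTree(root).write("arkdb_export.xml",
                               encoding="utf-8", xml_declaration=True)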
Change and Anomaly Detection in Real-Time GPS Data
NASA Astrophysics Data System (ADS)
Granat, R.; Pierce, M.; Gao, X.; Bock, Y.
2008-12-01
The California Real-Time Network (CRTN) is currently generating real-time GPS position data at a rate of 1-2 Hz at over 80 locations. The CRTN data presents the possibility of studying dynamical solid earth processes in a way that complements existing seismic networks. To realize this possibility we have developed a prototype system for detecting changes and anomalies in the real-time data. Through this system, we can correlate changes across multiple stations in order to detect signals with geographical extent. Our approach involves developing a statistical model for each GPS station in the network, and then using those models to segment the time series into a number of discrete states described by the model. We use a hidden Markov model (HMM) to describe the behavior of each station; fitting the model to the data requires neither labeled training examples nor a priori information about the system. As such, HMMs are well suited to this problem domain, in which the data remains largely uncharacterized. There are two main components to our approach. The first is the model fitting algorithm, regularized deterministic annealing expectation-maximization (RDAEM), which provides robust, high-quality results. The second is a web service infrastructure that connects the data to the statistical modeling analysis and allows us to easily present the results of that analysis through a web portal interface. This web service approach facilitates the automatic updating of station models to keep pace with dynamical changes in the data. Our web portal interface is critical to the process of interpreting the data. A Google Maps interface allows users to visually interpret state changes not only on individual stations but across the entire network. Users can drill down from the map interface to inspect detailed results for individual stations, download the time series data, and inspect fitted models. Alternatively, users can use the web portal to look at the evolution of changes on the network by moving backwards and forwards in time.
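The station-level segmentation step can be illustrated with an off-the-shelf HMM library. The sketch below uses plain EM fitting from hmmlearn on synthetic displacements; the RDAEM algorithm described in the abstract is a more robust variant of this fitting step:

    # Sketch: segmenting a (synthetic) GPS series into discrete states with a
    # Gaussian HMM. Uses hmmlearn's plain EM, not the RDAEM algorithm.
    import numpy as np
    from hmmlearn.hmm import GaussianHMM

    rng = np.random.default_rng(0)
    quiet = rng.normal(0.0, 0.002, size=(500, 3))    # east/north/up, metres
    offset = rng.normal(0.01, 0.002, size=(500, 3))  # simulated step change
    series = np.vstack([quiet, offset])

    model = GaussianHMM(n_components=2, covariance_type="diag", n_iter=100)
    model.fit(series)
    states = model.predict(series)  # one discrete state label per epoch
    change_points = np.flatnonzero(np.diff(states)) + 1
    print("state changes at epochs:", change_points)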
NASA Astrophysics Data System (ADS)
Teng, W.; Chiu, L.; Kempler, S.; Liu, Z.; Nadeau, D.; Rui, H.
2006-12-01
Using NASA satellite remote sensing data from multiple sources for hydrologic applications can be a daunting task and requires a detailed understanding of the data's internal structure and physical implementation. Gaining this understanding and applying it to data reduction is a time-consuming task that must be undertaken before the core investigation can begin. In order to facilitate such investigations, the NASA Goddard Earth Sciences Data and Information Services Center (GES DISC) has developed the GES-DISC Interactive Online Visualization ANd aNalysis Infrastructure or "Giovanni," which supports a family of Web interfaces (instances) that allow users to perform interactive visualization and analysis online without downloading any data. Two such Giovanni instances are particularly relevant to hydrologic applications: the Tropical Rainfall Measuring Mission (TRMM) Online Visualization and Analysis System (TOVAS) and the Agricultural Online Visualization and Analysis System (AOVAS), both highly popular and widely used for a variety of applications, including those related to several NASA Applications of National Priority, such as Agricultural Efficiency, Disaster Management, Ecological Forecasting, Homeland Security, and Public Health. Dynamic, context-sensitive Web services provided by TOVAS and AOVAS enable users to seamlessly access NASA data from within, and deeply integrate the data into, their local client environments. One example is between TOVAS and Florida International University's TerraFly, a Web-enabled system that serves a broad segment of the research and applications community, by facilitating access to various textual, remotely sensed, and vector data. Another example is between AOVAS and the U.S. Department of Agriculture Foreign Agricultural Service (USDA FAS)'s Crop Explorer, the primary decision support tool used by FAS to monitor the production, supply, and demand of agricultural commodities worldwide. AOVAS is also part of GES DISC's Agricultural Information System (AIS), which can operationally provide satellite remote sensing data products (e.g., near-real-time rainfall) and analysis services to agricultural users. AIS enables the remote, interoperable access to distributed data, by using the GrADS-Data Server (GDS) and the Open Geospatial Consortium (OGC)-compliant MapServer. The latter allows the access of AIS data from any OGC-compliant client, such as the Earth-Sun System Gateway (ESG) or Google Earth. The Giovanni system is evolving towards a Service-Oriented Architecture and is highly customizable (e.g., adding new products or services), thus availing the hydrologic applications user community of Giovanni's simple-to-use and powerful capabilities to improve decision-making.
NASA Astrophysics Data System (ADS)
Smith, D.; Barnes, R. J.; Morrison, D.; Talaat, E. R.; Potter, M.; Patrone, D.; Weiss, M.; Sarris, T.
2013-12-01
Virtual Observatories are more than data portals that span multiple missions and data sets. They need to provide a system that is usable by a broad swath of people with different backgrounds. The great promise of Virtual Observatories is the ability to perform complex search operations on a large variety of different data sets. This allows the researcher to isolate and select the relevant measurements for their topic of study. The Virtual ITM Observatory (VITMO) is unique in having many diverse datasets that cover a large temporal and spatial range, which presents a unique search problem. VITMO provides many methods by which the user can search for and select data of interest, including restricting selections based on geophysical conditions (solar wind speed, Kp, etc.) as well as finding those datasets that overlap in time and/or space. We are developing a series of lightweight web services that will provide a new data search capability for VITMO and other VxOs. The services will consist of a database of spacecraft ephemerides and instrument fields of view; an overlap calculator to find times when the fields of view of different instruments intersect; and a magnetic field line tracing service that will map in situ and ground-based measurements to the equatorial plane in magnetic coordinates for a number of field models and geophysical conditions. Each service on its own provides a useful new capability for virtual observatories; operating together they will provide a powerful new search tool. The ephemerides service is being built using the Navigation and Ancillary Information Facility (NAIF) SPICE toolkit (http://naif.jpl.nasa.gov/naif/index.html), allowing it to be extended to support any Earth-orbiting satellite with the addition of the appropriate SPICE kernels or two-line element sets (TLE). An instrument kernel (IK) file will be used to describe the observational geometry of the instrument (e.g., field-of-view size, shape, and orientation). The overlap calculator uses techniques borrowed from computer graphics to identify overlapping measurements in space and time. The calculator will allow a user-defined uncertainty to be selected to allow 'near misses' to be found. The magnetic field tracing service will feature a database of pre-calculated field line tracings of ground stations but will also allow dynamic tracing of arbitrary coordinates. These services will allow the non-specialist user of VITMO to select data that they were previously unable to locate, opening up analysis opportunities beyond the instrument teams and making it much easier for future students who come into the field.
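At its core, the overlap calculator reduces to interval intersection with a tolerance, sketched below for the purely temporal case (the real service also intersects fields of view geometrically, via SPICE):

    # Sketch: index pairs of time windows that overlap, or come within
    # `tolerance` seconds of overlapping ("near misses"). Inputs are sorted.
    def overlapping_pairs(windows_a, windows_b, tolerance=0.0):
        pairs, i, j = [], 0, 0
        while i < len(windows_a) and j < len(windows_b):
            a0, a1 = windows_a[i]
            b0, b1 = windows_b[j]
            if min(a1, b1) + tolerance >= max(a0, b0):
                pairs.append((i, j))
            if a1 < b1:   # advance whichever window ends first
                i += 1
            else:
                j += 1
        return pairs

    print(overlapping_pairs([(0, 10), (20, 30)], [(8, 12), (31, 40)], tolerance=2))
    # -> [(0, 0), (1, 1)]: one true overlap and one near miss within 2 s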
Choi, Okkyung; Han, SangYong
2007-01-01
Ubiquitous Computing makes it possible to determine in real time the location and situation of service requesters in a web service environment, as it enables access to computers at any time and in any place. Though research on various aspects of ubiquitous commerce is progressing at enterprises and research centers, both domestically and overseas, analysis of customers' personal preferences based on the semantic web and on rule-based services using semantics is not currently being conducted. This paper proposes a Ubiquitous Computing Services System that enables rule-based search as well as semantics-based search, so that the electronic space and the physical space can be combined into one, making real-time search for web services and the construction of efficient web services possible.
NASA Astrophysics Data System (ADS)
Gupta, V.; Gupta, N.; Gupta, S.; Field, E.; Maechling, P.
2003-12-01
Modern laptop computers, and personal computers, can provide capabilities that are, in many ways, comparable to workstations or departmental servers. However, this doesn't mean we should run all computations on our local computers. We have identified several situations in which it is preferable to implement our seismological application programs in a distributed, server-based, computing model. In this model, application programs on the user's laptop, or local computer, invoke programs that run on an organizational server, and the results are returned to the invoking system. Situations in which a server-based architecture may be preferred include: (a) a program is written in a language, or written for an operating environment, that is unsupported on the local computer, (b) software libraries or utilities required to execute a program are not available on the user's computer, (c) a computational program is physically too large, or computationally too expensive, to run on a user's computer, (d) a user community wants to enforce a consistent method of performing a computation by standardizing on a single implementation of a program, and (e) the computational program may require current information that is not available to all client computers. Until recently, distributed, server-based, computational capabilities were implemented using client/server architectures. In these architectures, client programs were often written in the same language, and they executed in the same computing environment, as the servers. Recently, a new distributed computational model, called Web Services, has been developed. Web Services are based on Internet standards such as XML, SOAP, WSDL, and UDDI. Web Services offer the promise of platform- and language-independent distributed computing. To investigate this new computational model, and to provide useful services to the SCEC Community, we have implemented several computational and utility programs using a Web Service architecture. We have hosted these Web Services as a part of the SCEC Community Modeling Environment (SCEC/CME) ITR Project (http://www.scec.org/cme). We have implemented Web Services for several of the reasons cited previously. For example, we implemented a FORTRAN-based Earthquake Rupture Forecast (ERF) as a Web Service for use by client computers that don't support a FORTRAN runtime environment. We implemented a Generic Mapping Tool (GMT) Web Service for use by systems that don't have local access to GMT. We implemented a Hazard Map Calculator Web Service to execute hazard calculations that are too computationally intensive to run on a local system. We implemented a Coordinate Conversion Web Service to enforce a standard and consistent method for converting between UTM and Lat/Lon. Our experience developing these services indicates both strengths and weaknesses in current Web Service technology. Client programs that utilize Web Services typically need network access, a significant disadvantage at times. Programs with simple input and output parameters were the easiest to implement as Web Services, while programs with complex parameter types required a significant amount of additional development. We also noted that Web Services are very data-oriented, and adapting object-oriented software into the Web Service model proved problematic. Also, the Web Service approach of converting data types into XML format for network transmission has significant inefficiencies for some data sets.
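For instance, the conversion that the Coordinate Conversion Web Service standardizes can be expressed on the client side as follows. This sketch uses the pyproj library locally and is not the SCEC/CME service's API:

    # Sketch: UTM <-> lat/lon conversion with pyproj (not the SCEC service API).
    from pyproj import Transformer

    # WGS84 geographic -> UTM zone 11N (covers much of Southern California).
    to_utm = Transformer.from_crs("EPSG:4326", "EPSG:32611", always_xy=True)
    lon, lat = -118.25, 34.05  # illustrative point near Los Angeles
    easting, northing = to_utm.transform(lon, lat)
    print(f"UTM 11N: {easting:.1f} E, {northing:.1f} N")

    to_geo = Transformer.from_crs("EPSG:32611", "EPSG:4326", always_xy=True)
    print(to_geo.transform(easting, northing))  # back to (~-118.25, ~34.05)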
Challenges in Visualizing Satellite Level 2 Atmospheric Data with GIS approach
NASA Astrophysics Data System (ADS)
Wei, J. C.; Yang, W.; Zhao, P.; Pham, L.; Meyer, D. J.
2017-12-01
Satellite data products are important for a wide variety of applications that can bring far-reaching benefits to the science community and the broader society. These benefits can best be achieved if the satellite data are well utilized and interpreted. Unfortunately, this is not always the case, despite the abundance and relative maturity of numerous satellite data products provided by NASA and other organizations. One way to help users better understand the satellite data is to provide data along with 'Images', including accurate pixel coverage area delineation, and science-team-recommended quality screening for individual geophysical parameters. However, there are challenges in visualizing remotely sensed non-gridded products: (1) different geodetics of space-borne instruments; (2) data often arranged in "along-track" and "across-track" axes; (3) spatially and temporally continuous data chunked into granule files, i.e., data for a portion (or all) of a satellite orbit; (4) no general rule for resampling or interpolation to a grid; (5) geophysical retrievals based only on pixel center location, without shape information. In this presentation, we will unveil a new Goddard Earth Sciences Data and Information Services Center (GES DISC) Level 2 (L2) visualization on-demand service. The service's front end provides various visualization and data access capabilities, such as overlay and swipe of multiple variables and subsetting and download of data in different formats. The backend of the service consists of Open Geospatial Consortium (OGC) standard-compliant Web Map Service (WMS) and Web Coverage Service (WCS). The infrastructure allows inclusion of outside data sources served in OGC-compliant protocols and allows other interoperable clients, such as ArcGIS clients, to connect to our L2 WCS/WMS.
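An OGC-compliant client can exercise such an endpoint with an ordinary GetMap request. A sketch with a placeholder endpoint and layer name:

    # Sketch: requesting a rendered Level 2 layer from a WMS endpoint.
    # The URL and layer name are illustrative placeholders.
    import requests

    params = {
        "service": "WMS",
        "version": "1.3.0",
        "request": "GetMap",
        "layers": "AIRS_L2_temperature",  # illustrative layer name
        "crs": "EPSG:4326",
        "bbox": "-90,-180,90,180",        # WMS 1.3.0 axis order: lat,lon
        "width": "1024",
        "height": "512",
        "format": "image/png",
    }
    resp = requests.get("https://example.org/wms", params=params, timeout=120)
    resp.raise_for_status()
    with open("l2_layer.png", "wb") as f:
        f.write(resp.content)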
The Organizational Role of Web Services
ERIC Educational Resources Information Center
Mitchell, Erik
2011-01-01
The workload of Web librarians is already split between Web-related and other library tasks. But today's technological environment has created new implications for existing services and new demands for staff time. It is time to reconsider how libraries can best allocate resources to provide effective Web services. Delivering high-quality services…
Federal Register 2010, 2011, 2012, 2013, 2014
2013-10-01
...-NEW] Agency Information Collection Activities: Online Survey of Web Services Employers; New... Web site at http://www.Regulations.gov under e-Docket ID number USCIS-2013- 0003. When submitting... information collection. (2) Title of the Form/Collection: Online Survey of Web Services Employers. (3) Agency...
Federal Register 2010, 2011, 2012, 2013, 2014
2013-07-16
...-NEW] Agency Information Collection Activities: Online Survey of Web Services Employers; New... Information Collection: New information collection. (2) Title of the Form/Collection: Online Survey of Web... sector. It is necessary that USCIS obtains data on the E-Verify Program Web Services. Gaining an...
Protecting Database Centric Web Services against SQL/XPath Injection Attacks
NASA Astrophysics Data System (ADS)
Laranjeiro, Nuno; Vieira, Marco; Madeira, Henrique
Web services represent a powerful interface for back-end database systems and are increasingly being used in business critical applications. However, field studies show that a large number of web services are deployed with security flaws (e.g., having SQL Injection vulnerabilities). Although several techniques for the identification of security vulnerabilities have been proposed, developing non-vulnerable web services is still a difficult task. In fact, security-related concerns are hard to apply, as they involve adding complexity to already complex code. This paper proposes an approach to secure web services against SQL and XPath Injection attacks by transparently detecting and aborting service invocations that try to take advantage of potential vulnerabilities. Our mechanism was applied to secure several web services specified by the TPC-App benchmark, showing it to be 100% effective in stopping attacks, non-intrusive, and very easy to use.
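The core idea of aborting invocations whose query structure deviates from the service's normal behavior can be sketched by normalizing literals out of each query and comparing the result against a learned whitelist of query shapes. This illustrates the general technique only, not the paper's actual mechanism:

    # Sketch of structure-based SQL injection detection: strip literals, then
    # allow only query "shapes" seen during a learning phase. Illustrative only.
    import re

    def normalize(sql):
        sql = re.sub(r"'(?:[^']|'')*'", "?", sql)  # string literals -> ?
        sql = re.sub(r"\b\d+\b", "?", sql)         # numeric literals -> ?
        return " ".join(sql.lower().split())

    learned = {normalize("SELECT * FROM users WHERE id = 42")}

    def check(sql):
        if normalize(sql) not in learned:
            raise PermissionError("aborting suspicious query: %r" % sql)

    check("SELECT * FROM users WHERE id = 7")             # same shape: allowed
    check("SELECT * FROM users WHERE id = 7 OR '1'='1'")  # injected: aborted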
A Web service substitution method based on service cluster nets
NASA Astrophysics Data System (ADS)
Du, YuYue; Gai, JunJing; Zhou, MengChu
2017-11-01
Service substitution is an important research topic in the fields of Web services and service-oriented computing. This work presents a novel method to analyse and substitute Web services. A new concept, called a Service Cluster Net Unit, is proposed based on Web service clusters; a service cluster is converted into a Service Cluster Net Unit, which is then used to analyse whether the services in the cluster can satisfy given service requests. Substitution methods for both atomic and composite services are also proposed. The correctness of the proposed method is proved, and its effectiveness is shown via an experiment and compared with a state-of-the-art method. The approach can readily be applied to e-commerce service substitution to meet business automation needs.
Aryanto, K Y E; Broekema, A; Langenhuysen, R G A; Oudkerk, M; van Ooijen, P M A
2015-05-01
To develop and test a fast and easy rule-based web environment with optional de-identification of imaging data to facilitate data distribution within a hospital environment. A web interface was built using Hypertext Preprocessor (PHP), an open-source scripting language for web development, and Java with SQL Server to handle the database. The system allows for the selection of patient data and for de-identifying these when necessary. Using the services provided by the RSNA Clinical Trial Processor (CTP), the selected images were pushed to the appropriate services using a protocol based on the module created for the associated task. Five pipelines, each performing a different task, were set up in the server. Over a 75-month period, more than 2,000,000 images were transferred and properly de-identified, while 20,000,000 images were moved from one node to another without de-identification. While maintaining a high level of security and stability, the proposed system is easy to set up, integrates well with our clinical and research practice, and provides a fast and accurate vendor-neutral process of transferring, de-identifying, and storing DICOM images. Its ability to run different de-identification processes in parallel pipelines is a major advantage in both clinical and research settings.
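The system described relies on RSNA CTP pipelines for the de-identification step; purely as a stand-in illustration of what such a rule does, the following sketch uses the pydicom library to blank identifying attributes before a DICOM file is forwarded. The tag list is illustrative, not the authors' rule set.

```python
# Stand-in sketch of a rule-based DICOM de-identification step using pydicom
# (the described system uses RSNA CTP modules instead). Tag choices are
# illustrative only; real profiles cover many more elements.
import pydicom

IDENTIFYING_TAGS = ["PatientName", "PatientID", "PatientBirthDate",
                    "PatientAddress", "ReferringPhysicianName"]

def deidentify(path_in: str, path_out: str):
    ds = pydicom.dcmread(path_in)
    for tag in IDENTIFYING_TAGS:
        if tag in ds:
            ds.data_element(tag).value = ""   # blank identifying values
    ds.remove_private_tags()                   # drop vendor-specific elements
    ds.save_as(path_out)

deidentify("incoming/img0001.dcm", "outgoing/img0001.dcm")
```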
NEXUS - Resilient Intelligent Middleware
NASA Astrophysics Data System (ADS)
Kaveh, N.; Hercock, R. Ghanea
Service-oriented computing, a composition of distributed-object, component-based, and Web-based computing concepts, is becoming the widespread choice for developing dynamic heterogeneous software assets available as services across a network. One of the major strengths of service-oriented technologies is the high abstraction layer and large granularity level at which software assets are viewed, compared to traditional object-oriented technologies. Collaboration through encapsulated and separately defined service interfaces creates a service-oriented environment whereby multiple services can be linked together through their interfaces to compose a functional system. This approach enables better integration of legacy and non-legacy services, via wrapper interfaces, and allows for service composition at a more abstract level, especially in cases such as vertical market stacks. The heterogeneous nature of service-oriented technologies and the granularity of their software components make them a suitable computing model in the pervasive domain.
NASA Astrophysics Data System (ADS)
Ross, A.; Stackhouse, P. W.; Tisdale, B.; Tisdale, M.; Chandler, W.; Hoell, J. M., Jr.; Kusterer, J.
2014-12-01
The NASA Langley Research Center Science Directorate and Atmospheric Science Data Center have initiated a pilot program to utilize Geographic Information System (GIS) tools that enable, generate and store climatological averages using spatial queries and calculations in a spatial database, resulting in greater accessibility of data for government agencies, industry and private sector individuals. The major objectives of this effort include 1) processing and reformulating current data to be consistent with ESRI and OpenGIS tools; 2) developing functions to improve capability and analysis that produce "on-the-fly" data products, extending these beyond single locations to regional and global scales; 3) updating the current web sites to enable both web-based and mobile application displays, optimized for mobile platforms; 4) interacting with user communities in government and industry to test formats and usage; and 5) developing a series of metrics that allow for monitoring of progressive performance. Significant project results will include the development of Open Geospatial Consortium (OGC) compliant web services (WMS, WCS, WFS, WPS) that serve renewable energy and agricultural application products to users using GIS software and tools. Each data product and OGC service will be registered within ECHO, the Common Metadata Repository, the Geospatial Platform, and Data.gov to ensure the data are easily discoverable and provide data users with enhanced access to SSE data, parameters, services, and applications. This effort supports cross-agency, cross-organization interoperability of SSE data products and services by collaborating with DOI, NRCan, NREL, NCAR, and HOMER for requirements vetting and test-bed users before making the services available to the wider public.
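A hedged sketch of how a GIS client could consume one of the planned OGC-compliant services is shown below with OWSLib; the service URL and layer name are invented placeholders.

```python
# Sketch: retrieving a rendered map from an OGC WMS endpoint with OWSLib.
# The URL and layer name are placeholders; real endpoints would be found
# via their GetCapabilities documents or registry entries.
from owslib.wms import WebMapService

wms = WebMapService("https://example.nasa.gov/ogc/wms", version="1.3.0")

img = wms.getmap(
    layers=["SSE_insolation_monthly"],   # hypothetical layer name
    srs="EPSG:4326",
    bbox=(-180, -90, 180, 90),
    size=(1024, 512),
    format="image/png",
    transparent=True,
)
with open("insolation.png", "wb") as f:
    f.write(img.read())
```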
Abdulrehman, Dário; Monteiro, Pedro Tiago; Teixeira, Miguel Cacho; Mira, Nuno Pereira; Lourenço, Artur Bastos; dos Santos, Sandra Costa; Cabrito, Tânia Rodrigues; Francisco, Alexandre Paulo; Madeira, Sara Cordeiro; Aires, Ricardo Santos; Oliveira, Arlindo Limede; Sá-Correia, Isabel; Freitas, Ana Teresa
2011-01-01
The YEAst Search for Transcriptional Regulators And Consensus Tracking (YEASTRACT) information system (http://www.yeastract.com) was developed to support the analysis of transcription regulatory associations in Saccharomyces cerevisiae. Last updated in June 2010, this database contains over 48 200 regulatory associations between transcription factors (TFs) and target genes, including 298 specific DNA-binding sites for 110 characterized TFs. All regulatory associations stored in the database were revisited, and detailed information on the experimental evidence that sustains those associations was added and classified as direct or indirect. The inclusion of this new data, gathered in response to the requests of YEASTRACT users, allows users to restrict their queries to subsets of the data based on whether experimental evidence exists for the direct action of the TFs in the promoter region of their target genes. Another new feature of this release is the availability of all data through a machine-readable web-service interface. Users are no longer restricted to the set of queries made available through the existing web interface, and can use the web service interface to query, retrieve and exploit the YEASTRACT data using their own implementation of additional functionalities. The YEASTRACT information system is further complemented with several computational tools that facilitate the use of the curated data when answering a number of important biological questions. Since its first release in 2006, YEASTRACT has been extensively used by hundreds of researchers from all over the world. We expect that by making the new data and services available, the system will continue to be instrumental for yeast biologists and systems biology researchers. PMID:20972212
2018-01-01
Background Structural and functional brain images are essential imaging modalities for medical experts to study brain anatomy. These images are typically visually inspected by experts. To analyze images without any bias, they must first be converted to numeric values. Many software packages are available to process the images, but they are complex and difficult to use. The software packages are also hardware intensive. The results obtained after processing vary depending on the native operating system used and its associated software libraries; data processed in one system cannot typically be combined with data on another system. Objective The aim of this study was to fulfill the neuroimaging community’s need for a common platform to store, process, explore, and visualize their neuroimaging data and results using Neuroimaging Web Services Interface: a series of processing pipelines designed as a cyber physical system for neuroimaging and clinical data in brain research. Methods Neuroimaging Web Services Interface accepts magnetic resonance imaging, positron emission tomography, diffusion tensor imaging, and functional magnetic resonance imaging. These images are processed using existing and custom software packages. The output is then stored as image files, tabulated files, and MySQL tables. The system, made up of a series of interconnected servers, is password-protected, securely accessible through a Web interface, and allows (1) visualization of results and (2) downloading of tabulated data. Results All results were obtained using our processing servers in order to maintain data validity and consistency. The design is responsive and scalable. The processing pipeline started from a FreeSurfer reconstruction of structural magnetic resonance imaging images. The FreeSurfer and regional standardized uptake value ratio calculations were validated using Alzheimer’s Disease Neuroimaging Initiative input images, and the results were posted at the Laboratory of Neuro Imaging data archive. Notable leading researchers in the fields of Alzheimer’s Disease and epilepsy have used the interface to access and process the data and visualize the results. Tabulated results with unique visualization mechanisms help guide more informed diagnosis and expert rating, providing a truly unique multimodal imaging platform that combines magnetic resonance imaging, positron emission tomography, diffusion tensor imaging, and resting state functional magnetic resonance imaging. A quality control component was reinforced through expert visual rating involving at least 2 experts. Conclusions To our knowledge, there is no validated Web-based system offering all the services that Neuroimaging Web Services Interface offers. The intent of Neuroimaging Web Services Interface is to create a tool for clinicians and researchers with a keen interest in multimodal neuroimaging. More importantly, Neuroimaging Web Services Interface significantly augments the Alzheimer’s Disease Neuroimaging Initiative data, especially since our data contain a large cohort of Hispanic normal controls and Alzheimer’s Disease patients. The obtained results could be scrutinized visually or through the tabulated forms, informing researchers of subtle changes that characterize the different stages of the disease. PMID:29699962
Web Services as Public Services: Are We Supporting Our Busiest Service Point?
ERIC Educational Resources Information Center
Riley-Huff, Debra A.
2009-01-01
This article is an analysis of academic library organizational culture, patterns, and processes as they relate to Web services. Data gathered in a research survey is examined in an attempt to reveal current departmental and administrative attitudes, practices, and support for Web services in the library research environment. (Contains 10 tables.)
GenomeD3Plot: a library for rich, interactive visualizations of genomic data in web applications.
Laird, Matthew R; Langille, Morgan G I; Brinkman, Fiona S L
2015-10-15
A simple static image of genomes and associated metadata is very limiting, as researchers expect rich, interactive tools similar to the web applications found in the post-Web 2.0 world. GenomeD3Plot is a lightweight visualization library written in JavaScript using the D3 library. GenomeD3Plot provides a rich API to allow the rapid visualization of complex genomic data using a convenient standards-based JSON configuration file. When integrated into existing web services, GenomeD3Plot allows researchers to interact with data, dynamically alter the view, or even resize or reposition the visualization in their browser window. In addition, GenomeD3Plot has built-in functionality to export any resulting genome visualization in PNG or SVG format for easy inclusion in manuscripts or presentations. GenomeD3Plot is being utilized in the recently released IslandViewer 3 (www.pathogenomics.sfu.ca/islandviewer/) to visualize predicted genomic islands with other genome annotation data. However, its features enable it to be more widely applicable for dynamic visualization of genomic data in general. GenomeD3Plot is licensed under the GNU-GPL v3 at https://github.com/brinkmanlab/GenomeD3Plot/. brinkman@sfu.ca. © The Author 2015. Published by Oxford University Press.
Sward, Katherine A; Newth, Christopher JL; Khemani, Robinder G; Cryer, Martin E; Thelen, Julie L; Enriquez, Rene; Shaoyu, Su; Pollack, Murray M; Harrison, Rick E; Meert, Kathleen L; Berg, Robert A; Wessel, David L; Shanley, Thomas P; Dalton, Heidi; Carcillo, Joseph; Jenkins, Tammara L; Dean, J Michael
2015-01-01
Objectives To examine the feasibility of deploying a virtual web service for sharing data within a research network, and to evaluate the impact on data consistency and quality. Material and Methods Virtual machines (VMs) encapsulated an open-source, semantically and syntactically interoperable secure web service infrastructure along with a shadow database. The VMs were deployed to 8 Collaborative Pediatric Critical Care Research Network Clinical Centers. Results Virtual web services could be deployed in hours. The interoperability of the web services reduced format misalignment from 56% to 1% and demonstrated that 99% of the data transferred consistently using the data dictionary while 1% needed human curation. Conclusions Use of virtualized open-source secure web service technology could enable direct electronic abstraction of data from hospital databases for research purposes. PMID:25796596
An Architecture for Autonomic Web Service Process Planning
NASA Astrophysics Data System (ADS)
Moore, Colm; Xue Wang, Ming; Pahl, Claus
Web service composition is a technology that has received considerable attention in recent years. Languages and tools that aid in the process of creating composite Web services have received specific attention. Web service composition is the process of linking single Web services together in order to accomplish more complex tasks. One area of Web service composition that has not received as much attention is dynamic error handling and re-planning, enabling autonomic composition. Given a repository of service descriptions and a task to complete, it is possible for AI planners to automatically create a plan that will achieve this goal. If, however, a service in the plan is unavailable or erroneous, the plan will fail. Motivated by this problem, this paper suggests autonomous re-planning as a means to overcome dynamic problems. Our solution involves automatically recovering from faults and creating a context-dependent alternate plan. We present an architecture that serves as a basis for the central activities of autonomous composition, monitoring and fault handling.
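As a minimal sketch of the re-planning idea (not the paper's architecture), the following Python example chains services by matching outputs to inputs and, when an invocation fails, excludes the failed service and generates an alternative plan. The registry and service names are illustrative.

```python
# Minimal sketch of plan-then-replan composition: a greedy forward search
# links services whose inputs are satisfied; on invocation failure the failed
# service is excluded and a context-dependent alternative plan is generated.
REGISTRY = {
    "geocode_v1": {"in": {"address"}, "out": {"coords"}},
    "geocode_v2": {"in": {"address"}, "out": {"coords"}},   # alternate provider
    "weather":    {"in": {"coords"},  "out": {"forecast"}},
}

def plan(have, want, excluded=frozenset()):
    """Greedy forward search for a service chain producing `want` from `have`."""
    chain, have = [], set(have)
    while want - have:
        step = next((n for n, s in REGISTRY.items()
                     if n not in excluded and s["in"] <= have
                     and s["out"] - have), None)
        if step is None:
            raise RuntimeError("no plan found")
        chain.append(step)
        have |= REGISTRY[step]["out"]
    return chain

def execute_with_replanning(have, want, invoke):
    excluded = set()
    while True:
        for step in plan(have, want, frozenset(excluded)):
            if not invoke(step):      # invocation failed or erroneous
                excluded.add(step)    # exclude the faulty service and re-plan
                break
        else:
            return "goal reached"
```

For example, if `invoke("geocode_v1")` fails, the next planning pass produces the chain `["geocode_v2", "weather"]` and execution continues without manual intervention.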
A web portal for hydrodynamical, cosmological simulations
NASA Astrophysics Data System (ADS)
Ragagnin, A.; Dolag, K.; Biffi, V.; Cadolle Bel, M.; Hammer, N. J.; Krukau, A.; Petkova, M.; Steinborn, D.
2017-07-01
This article describes a data centre hosting a web portal for accessing and sharing the output of large, cosmological, hydro-dynamical simulations with a broad scientific community. It also allows users to receive related scientific data products by directly processing the raw simulation data on a remote computing cluster. The data centre has a multi-layer structure: a web portal, a job control layer, a computing cluster and an HPC storage system. The outer layer enables users to choose an object from the simulations. Objects can be selected by visually inspecting 2D maps of the simulation data, by performing complex compound queries, or graphically by plotting arbitrary combinations of properties. The user can then run analysis tools on a chosen object; these tools operate directly on the raw simulation data. The job control layer is responsible for handling and performing the analysis jobs, which are executed on the computing cluster. The innermost layer is formed by an HPC storage system which hosts the large, raw simulation data. The following services are available for the users: (I) CLUSTERINSPECT visualizes properties of member galaxies of a selected galaxy cluster; (II) SIMCUT returns the raw data of a sub-volume around a selected object from a simulation, containing all the original, hydro-dynamical quantities; (III) SMAC creates idealized 2D maps of various physical quantities and observables of a selected object; (IV) PHOX generates virtual X-ray observations with specifications of various current and upcoming instruments.
Domain-specific Web Service Discovery with Service Class Descriptions
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rocco, D; Caverlee, J; Liu, L
2005-02-14
This paper presents DynaBot, a domain-specific web service discovery system. The core idea of the DynaBot service discovery system is to use domain-specific service class descriptions powered by an intelligent Deep Web crawler. In contrast to current registry-based service discovery systems--like the several available UDDI registries--DynaBot promotes focused crawling of the Deep Web of services and discovers candidate services that are relevant to the domain of interest. It uses intelligent filtering algorithms to match services found by focused crawling with the domain-specific service class descriptions. We demonstrate the capability of DynaBot through the BLAST service discovery scenario and describe our initial experience with DynaBot.
Cloud-based Web Services for Near-Real-Time Web access to NPP Satellite Imagery and other Data
NASA Astrophysics Data System (ADS)
Evans, J. D.; Valente, E. G.
2010-12-01
We are building a scalable, cloud computing-based infrastructure for Web access to near-real-time data products synthesized from the U.S. National Polar-Orbiting Environmental Satellite System (NPOESS) Preparatory Project (NPP) and other geospatial and meteorological data. Given recent and ongoing changes in the NPP and NPOESS programs (now Joint Polar Satellite System), the need for timely delivery of NPP data is urgent. We propose an alternative to a traditional, centralized ground segment, using distributed Direct Broadcast facilities linked to industry-standard Web services by a streamlined processing chain running in a scalable cloud computing environment. Our processing chain, currently implemented on Amazon.com's Elastic Compute Cloud (EC2), retrieves raw data from NASA's Moderate Resolution Imaging Spectroradiometer (MODIS) and synthesizes data products such as Sea-Surface Temperature, Vegetation Indices, etc. The cloud computing approach lets us grow and shrink computing resources to meet large and rapid fluctuations (twice daily) in both end-user demand and data availability from polar-orbiting sensors. Early prototypes have delivered various data products to end-users with latencies between 6 and 32 minutes. We have begun to replicate machine instances in the cloud, so as to reduce latency and maintain near-real-time data access regardless of increased data input rates or user demand -- all at quite moderate monthly costs. Our service-based approach (in which users invoke software processes on a Web-accessible server) facilitates access into datasets of arbitrary size and resolution, and allows users to request and receive tailored and composite (e.g., false-color multiband) products on demand. To facilitate broad impact and adoption of our technology, we have emphasized open, industry-standard software interfaces and open source software. Through our work, we envision the widespread establishment of similar, derived, or interoperable systems for processing and serving near-real-time data from NPP and other sensors. A scalable architecture based on cloud computing ensures cost-effective, real-time processing and delivery of NPP and other data. Access via standard Web services maximizes its interoperability and usefulness.
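A hedged sketch of the elastic-scaling step follows, using boto3 (a modern successor to the EC2 tooling available at the time this work was done); the AMI ID, instance type, and tags are placeholders.

```python
# Sketch of the elastic-scaling idea on EC2 with boto3. All identifiers are
# placeholders; the point is growing the worker fleet with demand.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

def scale_processing_fleet(desired: int):
    """Launch enough tagged worker instances to meet the demand estimate."""
    running = ec2.describe_instances(
        Filters=[{"Name": "tag:role", "Values": ["npp-worker"]},
                 {"Name": "instance-state-name", "Values": ["running"]}]
    )
    count = sum(len(r["Instances"]) for r in running["Reservations"])
    if count < desired:
        ec2.run_instances(
            ImageId="ami-0123456789abcdef0",   # placeholder processing AMI
            InstanceType="c5.large",
            MinCount=desired - count,
            MaxCount=desired - count,
            TagSpecifications=[{
                "ResourceType": "instance",
                "Tags": [{"Key": "role", "Value": "npp-worker"}],
            }],
        )
```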
Available, intuitive and free! Building e-learning modules using web 2.0 services.
Tam, Chun Wah Michael; Eastwood, Anne
2012-01-01
E-learning is part of the mainstream in medical education and often provides the most efficient and effective means of engaging learners in a particular topic. However, translating design and content ideas into a useable product can be technically challenging, especially in the absence of information technology (IT) support. There is little published literature on the use of web 2.0 services to build e-learning activities. To describe the web 2.0 tools and solutions employed to build the GP Synergy evidence-based medicine and critical appraisal online course. We used and integrated a number of free web 2.0 services including: Prezi, a web-based presentation platform; YouTube, a video sharing service; Google Docs, an online document platform; Tiny.cc, a URL shortening service; and Wordpress, a blogging platform. The course, consisting of five multimedia-rich, tutorial-like modules, was built without IT specialist assistance or specialised software. The web 2.0 services used were free. The course can be accessed with a modern web browser. Modern web 2.0 services remove many of the technical barriers for creating and sharing content on the internet. When used synergistically, these services can be a flexible and low-cost platform for building e-learning activities. They were a pragmatic solution in our context.
Federal Register 2010, 2011, 2012, 2013, 2014
2011-05-17
...; Comment Request; NCI Cancer Genetics Services Directory Web-Based Application Form and Update Mailer... currently valid OMB control number. Proposed Collection: Title: NCI Cancer Genetics Services Directory Web... application form and the Web-based update mailer is to collect information about genetics professionals to be...
Subotic-Kerry, Mirjana; King, Catherine; O'Moore, Kathleen; Achilles, Melinda; O'Dea, Bridianne
2018-03-23
Anxiety disorders and depression are prevalent among youth. General practitioners (GPs) are often the first point of professional contact for treating health problems in young people. A Web-based mental health service delivered in partnership with schools may facilitate increased access to psychological care among adolescents. However, for such a model to be implemented successfully, GPs' views need to be measured. This study aimed to examine the needs and attitudes of GPs toward a Web-based mental health service for adolescents, and to identify the factors that may affect the provision of this type of service and likelihood of integration. Findings will inform the content and overall service design. GPs were interviewed individually about the proposed Web-based service. Qualitative analysis of transcripts was performed using thematic coding. A short follow-up questionnaire was delivered to assess background characteristics, level of acceptability, and likelihood of integration of the Web-based mental health service. A total of 13 GPs participated in the interview and 11 completed a follow-up online questionnaire. Findings suggest strong support for the proposed Web-based mental health service. A wide range of factors were found to influence the likelihood of GPs integrating a Web-based service into their clinical practice. Coordinated collaboration with parents, students, school counselors, and other mental health care professionals were considered important by nearly all GPs. Confidence in Web-based care, noncompliance of adolescents and GPs, accessibility, privacy, and confidentiality were identified as potential barriers to adopting the proposed Web-based service. GPs were open to a proposed Web-based service for the monitoring and management of anxiety and depression in adolescents, provided that a collaborative approach to care is used, the feedback regarding the client is clear, and privacy and security provisions are assured. ©Mirjana Subotic-Kerry, Catherine King, Kathleen O'Moore, Melinda Achilles, Bridianne O'Dea. Originally published in JMIR Human Factors (http://humanfactors.jmir.org), 23.03.2018.
Plug Your Users into Library Resources with OpenSearch Plug-Ins
ERIC Educational Resources Information Center
Baker, Nicholas C.
2007-01-01
To bring the library catalog and other online resources right into users' workspace quickly and easily without needing much more than a short XML file, the author, a reference and Web services librarian at Williams College, learned to build and use OpenSearch plug-ins. OpenSearch is a set of simple technologies and standards that allows the…
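The "short XML file" the article refers to is an OpenSearch description document; the sketch below emits a minimal one from Python, with a placeholder catalog URL. The {searchTerms} token is substituted by the browser with the user's query.

```python
# Minimal OpenSearch 1.1 description document, emitted from Python for
# convenience. The catalog search URL is a placeholder.
OPENSEARCH_XML = """<?xml version="1.0" encoding="UTF-8"?>
<OpenSearchDescription xmlns="http://a9.com/-/spec/opensearch/1.1/">
  <ShortName>Library Catalog</ShortName>
  <Description>Search the library catalog from your browser</Description>
  <Url type="text/html"
       template="https://library.example.edu/search?q={searchTerms}"/>
</OpenSearchDescription>
"""

with open("opensearch.xml", "w", encoding="utf-8") as f:
    f.write(OPENSEARCH_XML)
```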
COLE: A Web-based Tool for Interfacing with Forest Inventory Data
Patrick Proctor; Linda S. Heath; Paul C. Van Deusen; Jeffery H. Gove; James E. Smith
2005-01-01
We are developing an online computer program to provide forest carbon related estimates for the conterminous United States (COLE). Version 1.0 of the program features carbon estimates based on data from the USDA Forest Service Eastwide Forest Inventory database. The program allows the user to designate an area of interest, and currently provides area, growing-stock...
Shadow netWorkspace: An Open Source Intranet for Learning Communities
ERIC Educational Resources Information Center
Laffey, James M.; Musser, Dale
2006-01-01
Shadow netWorkspace (SNS) is a web application system that allows a school or any type of community to establish an intranet with network workspaces for all members and groups. The goal of SNS has been to make it easy for schools and other educational organizations to provide network services in support of implementing a learning community. SNS is…
PaaS for web applications with OpenShift Origin
NASA Astrophysics Data System (ADS)
Lossent, A.; Rodriguez Peon, A.; Wagner, A.
2017-10-01
The CERN Web Frameworks team has deployed OpenShift Origin to facilitate deployment of web applications and to improve efficiency in terms of computing resource usage. OpenShift leverages Docker containers and Kubernetes orchestration to provide a Platform-as-a-Service solution oriented toward web applications. We review use cases and how OpenShift was integrated with other services such as source control, web site management and authentication services.
SABIO-RK: an updated resource for manually curated biochemical reaction kinetics
Rey, Maja; Weidemann, Andreas; Kania, Renate; Müller, Wolfgang
2018-01-01
Abstract SABIO-RK (http://sabiork.h-its.org/) is a manually curated database containing data about biochemical reactions and their reaction kinetics. The data are primarily extracted from scientific literature and stored in a relational database. The content comprises both naturally occurring and alternatively measured biochemical reactions and is not restricted to any organism class. The data are made available to the public by a web-based search interface and by web services for programmatic access. In this update we describe major improvements and extensions of SABIO-RK since our last publication in the database issue of Nucleic Acids Research (2012). (i) The website has been completely revised and (ii) now also allows free-text search for kinetics data. (iii) Additional interlinkages with other databases in our field have been established; this enables users to gain directly comprehensive knowledge about the properties of enzymes and kinetics beyond SABIO-RK. (iv) Vice versa, direct access to SABIO-RK data has been implemented in several systems biology tools and workflows. (v) On request of our experimental users, the data can now additionally be exported in spreadsheet formats. (vi) The newly established SABIO-RK Curation Service allows us to respond to specific data requirements. PMID:29092055
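A hedged sketch of programmatic access to the SABIO-RK web services follows; the endpoint path and query-field syntax are written from the service's published examples as best recalled, and should be verified against the documentation at http://sabiork.h-its.org/.

```python
# Sketch of a RESTful SABIO-RK query with the `requests` library. The path
# and field names are assumptions to check against current documentation.
import requests

BASE = "http://sabiork.h-its.org/sabioRestWebServices"

resp = requests.get(
    f"{BASE}/searchKineticLaws/entryIDs",
    params={"q": 'Organism:"Homo sapiens" AND Enzymename:"hexokinase"'},
    timeout=30,
)
resp.raise_for_status()
print(resp.text)  # XML list of matching kinetic-law entry IDs
```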
Corredor, Iván; Metola, Eduardo; Bernardos, Ana M; Tarrío, Paula; Casar, José R
2014-04-29
In the last few years, many health monitoring systems have been designed to fulfill the needs of a large range of scenarios. Although many of those systems provide good ad hoc solutions, most of them lack mechanisms that allow them to be easily reused. This paper is focused on describing an open platform, the micro Web of Things Open Platform (µWoTOP), which has been conceived to improve the connectivity and reusability of context data to deliver different kinds of health, wellness and ambient home care services. µWoTOP is based on a resource-oriented architecture which may be embedded in mobile and resource-constrained devices, enabling access to biometric, ambient or activity sensors and actuator resources through uniform interfaces defined in a RESTful fashion. Additionally, µWoTOP manages two communication modes which allow delivering user context information according to different methods, depending on the requirements of the consumer application. It also generates alert messages based on standards related to health care and risk management, such as the Common Alerting Protocol, in order to make its outputs compatible with existing systems. PMID:24785542
A Query Integrator and Manager for the Query Web
Brinkley, James F.; Detwiler, Landon T.
2012-01-01
We introduce two concepts: the Query Web as a layer of interconnected queries over the document web and the semantic web, and a Query Web Integrator and Manager (QI) that enables the Query Web to evolve. QI permits users to write, save and reuse queries over any web accessible source, including other queries saved in other installations of QI. The saved queries may be in any language (e.g. SPARQL, XQuery); the only condition for interconnection is that the queries return their results in some form of XML. This condition allows queries to chain off each other, and to be written in whatever language is appropriate for the task. We illustrate the potential use of QI for several biomedical use cases, including ontology view generation using a combination of graph-based and logical approaches, value set generation for clinical data management, image annotation using terminology obtained from an ontology web service, ontology-driven brain imaging data integration, small-scale clinical data integration, and wider-scale clinical data integration. Such use cases illustrate the current range of applications of QI and lead us to speculate about the potential evolution from smaller groups of interconnected queries into a larger query network that layers over the document and semantic web. The resulting Query Web could greatly aid researchers and others who now have to manually navigate through multiple information sources in order to answer specific questions. PMID:22531831
Federal Register 2010, 2011, 2012, 2013, 2014
2011-03-15
... Request; NCI Cancer Genetics Services Directory Web-Based Application Form and Update Mailer Summary: In... Cancer Genetics Services Directory Web-based Application Form and Update Mailer.
NASA Astrophysics Data System (ADS)
Arkhipkin, D.; Lauret, J.
2017-10-01
One of the STAR experiment’s modular Messaging Interface and Reliable Architecture framework (MIRA) integration goals is to provide seamless and automatic connections with the existing control systems. After an initial proof of concept and operation of the MIRA system as a parallel data collection system for online use and real-time monitoring, the STAR Software and Computing group is now working on the integration of the Experimental Physics and Industrial Control System (EPICS) with MIRA’s interfaces. The integration goals are to allow functional interoperability and, later on, to replace the existing/legacy Detector Control System components at the service level. In this report, we describe the evolutionary integration process and, as an example, discuss the EPICS Alarm Handler conversion. We review the complete upgrade procedure, starting with the integration of EPICS-originated alarm signal propagation into MIRA, followed by the replacement of the existing operator interface based on the Motif Editor and Display Manager (MEDM) with a modern, portable, web-based Alarm Handler interface. To achieve this aim, we have built an EPICS-to-MQTT [8] bridging service, and recreated the functionality of the original Alarm Handler using low-latency web messaging technologies. The integration of EPICS alarm handling into our messaging framework allowed STAR to improve the DCS alarm awareness of existing STAR DAQ and RTS services, which use MIRA as a primary source of experiment control information.
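As an illustrative sketch of the bridging idea (not STAR's actual MIRA code), the following Python example subscribes to an EPICS process variable with pyepics and republishes updates over MQTT with paho-mqtt; the PV name, broker address, and topic layout are hypothetical.

```python
# Sketch of an EPICS-to-MQTT bridge: subscribe to a PV and republish each
# value/severity update as a JSON MQTT message for web-based consumers.
import json
import epics
import paho.mqtt.client as mqtt

client = mqtt.Client()   # paho-mqtt 1.x constructor; 2.x also takes a callback API version
client.connect("mqtt.example.org", 1883)
client.loop_start()

def on_update(pvname=None, value=None, severity=None, timestamp=None, **kw):
    """Forward each EPICS update; pyepics passes these fields as kwargs."""
    payload = json.dumps({"pv": pvname, "value": value,
                          "severity": severity, "time": timestamp})
    client.publish(f"dcs/alarms/{pvname}", payload, qos=1)

pv = epics.PV("STAR:TPC:anodeVoltage", auto_monitor=True)  # hypothetical PV
pv.add_callback(on_update)
```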
PubChem promiscuity: a web resource for gathering compound promiscuity data from PubChem
Canny, Stephanie A.; Cruz, Yasel; Southern, Mark R.; Griffin, Patrick R.
2012-01-01
Summary: Promiscuity counts allow for a better understanding of a compound's assay activity profile and drug potential. Although PubChem contains a vast amount of compound and assay data, it currently does not have a convenient or efficient method to obtain in-depth promiscuity counts for compounds. PubChem promiscuity fills this gap. It is a Java servlet that uses NCBI Entrez (eUtils) web services to interact with PubChem and provide promiscuity counts in a variety of categories along with compound descriptors, including PAINS-based functional group detection. Availability: http://chemutils.florida.scripps.edu/pcpromiscuity Contact: southern@scripps.edu PMID:22084255
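A hedged sketch of the kind of eUtils call such a servlet performs is shown below: an ESearch query against the PubChem BioAssay database returning a record count. The query term is a generic example, not the servlet's own syntax.

```python
# Sketch: NCBI E-utilities ESearch against the PubChem BioAssay database.
# The term is illustrative; real promiscuity counts combine several such queries.
import requests

EUTILS = "https://eutils.ncbi.nlm.nih.gov/entrez/eutils/esearch.fcgi"

resp = requests.get(EUTILS, params={
    "db": "pcassay",      # PubChem BioAssay
    "term": "aspirin",    # illustrative query term
    "retmode": "json",
}, timeout=30)
resp.raise_for_status()
count = resp.json()["esearchresult"]["count"]
print(f"matching BioAssay records: {count}")
```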
Software aspects of the Geant4 validation repository
NASA Astrophysics Data System (ADS)
Dotti, Andrea; Wenzel, Hans; Elvira, Daniel; Genser, Krzysztof; Yarba, Julia; Carminati, Federico; Folger, Gunter; Konstantinov, Dmitri; Pokorski, Witold; Ribon, Alberto
2017-10-01
The Geant4, GeantV and GENIE collaborations regularly perform validation and regression tests for simulation results. DoSSiER (Database of Scientific Simulation and Experimental Results) is being developed as a central repository to store the simulation results as well as the experimental data used for validation. DoSSiER is easily accessible via a web application. In addition, a web service allows for programmatic access to the repository to extract records in JSON or XML exchange formats. In this article, we describe the functionality and the current status of various components of DoSSiER as well as the technology choices we made.
Software Aspects of the Geant4 Validation Repository
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dotti, Andrea; Wenzel, Hans; Elvira, Daniel
2016-01-01
The Geant4, GeantV and GENIE collaborations regularly perform validation and regression tests for simulation results. DoSSiER (Database of Scientific Simulation and Experimental Results) is being developed as a central repository to store the simulation results as well as the experimental data used for validation. DoSSiER is easily accessible via a web application. In addition, a web service allows for programmatic access to the repository to extract records in JSON or XML exchange formats. In this article, we describe the functionality and the current status of various components of DoSSiER as well as the technology choices we made.
NASA Astrophysics Data System (ADS)
Hirst, Paul; Cardenes, Ricardo
2016-08-01
We have developed and deployed a new data archive for the Gemini Observatory. Focused on simplicity and ease of use, the archive provides a number of powerful and novel features, including automatic association of calibration data with the science data and the ability to bookmark searches. A simple but powerful API allows programmatic search and download of data. The archive is hosted on Amazon Web Services, which provides us with excellent internet connectivity and significant cost savings, in both operations and development, over more traditional deployment options. The code is written in Python, utilizing a PostgreSQL database and Apache web server.
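A hedged sketch of a programmatic search against the archive's JSON API follows; the /jsonsummary URL pattern and field names are written from memory of the archive's documented style and should be treated as assumptions to verify.

```python
# Sketch: querying the Gemini archive's JSON search API with `requests`.
# Path segments (instrument, program ID) and record fields are assumptions.
import requests

url = "https://archive.gemini.edu/jsonsummary/GMOS-N/GN-2016A-Q-1"
resp = requests.get(url, timeout=30)
resp.raise_for_status()

for rec in resp.json():
    print(rec.get("name"), rec.get("observation_class"), rec.get("ut_datetime"))
```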
DoSSiER: Database of scientific simulation and experimental results
Wenzel, Hans; Yarba, Julia; Genser, Krzystof; ...
2016-08-01
The Geant4, GeantV and GENIE collaborations regularly perform validation and regression tests for simulation results. DoSSiER (Database of Scientific Simulation and Experimental Results) is being developed as a central repository to store the simulation results as well as the experimental data used for validation. DoSSiER can be easily accessed via a web application. In addition, a web service allows for programmatic access to the repository to extract records in json or xml exchange formats. In this paper, we describe the functionality and the current status of various components of DoSSiER as well as the technology choices we made.
Fradgley, Elizabeth A; Paul, Christine L; Bryant, Jamie; Roos, Ian A; Henskens, Frans A; Paul, David J
2014-12-19
With increasing attention given to the quality of chronic disease care, a measurement approach that empowers consumers to participate in improving quality of care and enables health services to systematically introduce patient-centered initiatives is needed. A Web-based survey with complex adaptive questioning and interactive survey items would allow consumers to easily identify and prioritize detailed service initiatives. The aim was to develop and test a Web-based survey capable of identifying and prioritizing patient-centered initiatives in chronic disease outpatient services. Testing included (1) test-retest reliability, (2) patient-perceived acceptability of the survey content and delivery mode, and (3) average completion time, completion rates, and Flesch-Kincaid reading score. In Phase I, the Web-based Consumer Preferences Survey was developed based on a structured literature review and iterative feedback from expert groups of service providers and consumers. The touchscreen survey contained 23 general initiatives, 110 specific initiatives available through adaptive questioning, and a relative prioritization exercise. In Phase II, a pilot study was conducted within 4 outpatient clinics to evaluate the reliability properties, patient-perceived acceptability, and feasibility of the survey. Eligible participants were approached to complete the survey while waiting for an appointment or receiving intravenous therapy. The age and gender of nonconsenters was estimated to ascertain consent bias. Participants with a subsequent appointment within 14 days were asked to complete the survey for a second time. A total of 741 of 1042 individuals consented to participate (71.11% consent), 529 of 741 completed all survey content (78.9% completion), and 39 of 68 completed the test-retest component. Substantial or moderate reliability (Cohen's kappa>0.4) was reported for 16 of 20 general initiatives with observed percentage agreement ranging from 82.1%-100.0%. The majority of participants indicated the Web-based survey was easy to complete (97.9%, 531/543) and comprehensive (93.1%, 505/543). Participants also reported the interactive relative prioritization exercise was easy to complete (97.0%, 189/195) and helped them to decide which initiatives were of most importance (84.6%, 165/195). Average completion time was 8.54 minutes (SD 3.91) and the Flesch-Kincaid reading level was 6.8. Overall, 84.6% (447/529) of participants indicated a willingness to complete a similar survey again. The Web-based Consumer Preferences Survey is sufficiently reliable and highly acceptable to patients. Based on completion times and reading level, this tool could be integrated in routine clinical practice and allows consumers to easily participate in quality evaluation. Results provide a comprehensive list of patient-prioritized initiatives for patients with major chronic conditions and delivers practice-ready evidence to guide improvements in patient-centered care.
Detection and Prevention of Insider Threats in Database Driven Web Services
NASA Astrophysics Data System (ADS)
Chumash, Tzvi; Yao, Danfeng
In this paper, we take the first step to address the gap between the security needs in outsourced hosting services and the protection provided in the current practice. We consider both insider and outsider attacks in the third-party web hosting scenarios. We present SafeWS, a modular solution that is inserted between server side scripts and databases in order to prevent and detect website hijacking and unauthorized access to stored data. To achieve the required security, SafeWS utilizes a combination of lightweight cryptographic integrity and encryption tools, software engineering techniques, and security data management principles. We also describe our implementation of SafeWS and its evaluation. The performance analysis of our prototype shows the overhead introduced by security verification is small. SafeWS will allow business owners to significantly reduce the security risks and vulnerabilities of outsourcing their sensitive customer data to third-party providers.
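SafeWS's internals are not reproduced here; as a minimal illustration of one ingredient the abstract names, lightweight cryptographic integrity, the following sketch seals each stored row with an HMAC so that later tampering is detectable. Key handling is deliberately simplified.

```python
# Minimal row-integrity sketch: write each row with an HMAC over its fields;
# on read, a tag mismatch reveals unauthorized modification of stored data.
import hmac
import hashlib

SECRET_KEY = b"server-side-secret"   # in practice: managed outside the web tier

def seal(fields: dict) -> str:
    msg = "|".join(f"{k}={fields[k]}" for k in sorted(fields)).encode()
    return hmac.new(SECRET_KEY, msg, hashlib.sha256).hexdigest()

def verify(fields: dict, tag: str) -> bool:
    return hmac.compare_digest(seal(fields), tag)

row = {"customer": "alice", "card_last4": "1234"}
tag = seal(row)                       # store `tag` alongside the row

row["card_last4"] = "9999"            # simulated unauthorized modification
assert not verify(row, tag)           # detection succeeds
```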
The AstroBID: Searching through the Italian Astronomical Heritage
NASA Astrophysics Data System (ADS)
Cirella, E. O.; Gargano, M.; Gasperini, A.; Mandrino, A.; Randazzo, D.; Zanini, V.
2015-04-01
The scientific heritage held in the National Institute for Astrophysics (INAF), made up of rare and modern books, instruments, and archival documents spanning from the 15th to the early 20th century, marks the milestones in the history of astronomy in Italy. To promote this historical collection, the Libraries and Historical Archives Service and the Museums Service of INAF have developed a project aimed at creating a single web portal: Polvere di stelle. I beni culturali dell'astronomia italiana (Stardust. The cultural heritage of Italian astronomy). This portal searches data coming from the libraries, the instrument collections and the historical archives regarding the heritage of the Italian Observatories. The BID (Books, Instruments, Documents) of the project is a multimedia web facility that allows the public to search simultaneously across the three different types of materials.
Tetko, Igor V; Maran, Uko; Tropsha, Alexander
2017-03-01
Thousands of (Quantitative) Structure-Activity Relationship (Q)SAR models have been described in peer-reviewed publications; however, this way of sharing seldom makes models available for use by the research community outside of the developer's laboratory. Conversely, on-line models allow broad dissemination and application, representing the most effective way of sharing scientific knowledge. Approaches for sharing and providing on-line access to models range from web services created by individual users and laboratories to integrated modeling environments and model repositories. This emerging transition from the descriptive and informative, but "static" and for the most part non-executable, print format to interactive, transparent and functional delivery of "living" models is expected to have a transformative effect on modern experimental research in areas of scientific and regulatory use of (Q)SAR models. © 2017 Wiley-VCH Verlag GmbH & Co. KGaA, Weinheim.
Thompson, Jolinda L; Obrig, Kathe S; Abate, Laura E
2013-01-01
Funds made available at the close of the 2010-11 fiscal year allowed purchase of the EBSCO Discovery Service (EDS) for a year-long trial. The appeal of this web-scale discovery product that offers a Google-like interface to library resources was counter-balanced by concerns about quality of search results in an academic health science setting and the challenge of configuring an interface that serves the needs of a diverse group of library users. After initial configuration, usability testing with library users revealed the need for further work before general release. Of greatest concern were continuing issues with the relevance of items retrieved, appropriateness of system-supplied facet terms, and user difficulties with navigating the interface. EBSCO has worked with the library to better understand and identify problems and solutions. External roll-out to users occurred in June 2012.
Building asynchronous geospatial processing workflows with web services
NASA Astrophysics Data System (ADS)
Zhao, Peisheng; Di, Liping; Yu, Genong
2012-02-01
Geoscience research and applications often involve a geospatial processing workflow. This workflow includes a sequence of operations that use a variety of tools to collect, translate, and analyze distributed heterogeneous geospatial data. Asynchronous mechanisms, by which clients initiate a request and then resume their processing without waiting for a response, are very useful for complicated workflows that take a long time to run. Geospatial contents and capabilities are increasingly becoming available online as interoperable Web services. This online availability significantly enhances the ability to use Web service chains to build distributed geospatial processing workflows. This paper focuses on how to orchestrate Web services for implementing asynchronous geospatial processing workflows. The theoretical bases for asynchronous Web services and workflows, including asynchrony patterns and message transmission, are examined to explore different asynchronous approaches to and architecture of workflow code for the support of asynchronous behavior. A sample geospatial processing workflow, issued by the Open Geospatial Consortium (OGC) Web Service, Phase 6 (OWS-6), is provided to illustrate the implementation of asynchronous geospatial processing workflows and the challenges in using Web Services Business Process Execution Language (WS-BPEL) to develop them.
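A hedged sketch of the client side of this asynchronous pattern follows: a WPS 1.0 Execute request is submitted with stored, asynchronous status reporting enabled, and the client polls the returned statusLocation instead of blocking. The endpoint, process identifier, and inputs are placeholders.

```python
# Sketch of asynchronous WPS 1.0 invocation: submit Execute with
# storeExecuteResponse/status enabled, then poll the statusLocation.
import time
import requests
import xml.etree.ElementTree as ET

WPS = "https://example.org/wps"                 # placeholder endpoint
NS = {"wps": "http://www.opengis.net/wps/1.0.0"}

resp = requests.get(WPS, params={
    "service": "WPS", "version": "1.0.0", "request": "Execute",
    "Identifier": "BufferFeatures",             # hypothetical process
    "storeExecuteResponse": "true", "status": "true",
    "DataInputs": "distance=100",
}, timeout=30)
status_url = ET.fromstring(resp.content).attrib["statusLocation"]

while True:                                     # poll instead of blocking
    doc = ET.fromstring(requests.get(status_url, timeout=30).content)
    if doc.find(".//wps:ProcessSucceeded", NS) is not None:
        print("done")
        break
    if doc.find(".//wps:ProcessFailed", NS) is not None:
        raise RuntimeError("process failed")
    time.sleep(5)
```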
NASA Astrophysics Data System (ADS)
Yang, Keon Ho; Jung, Haijo; Kang, Won-Suk; Jang, Bong Mun; Kim, Joong Il; Han, Dong Hoon; Yoo, Sun-Kook; Yoo, Hyung-Sik; Kim, Hee-Joung
2006-03-01
The wireless mobile service with a high bit rate using CDMA-1X EVDO is now widely used in Korea, and mobile devices are increasingly being used as a conventional communication mechanism. We have developed a web-based mobile system that communicates patient information and images over CDMA-1X EVDO for emergency diagnosis. It is composed of a mobile web application system using the Microsoft Windows 2003 server and Internet Information Services, with a mobile web PACS database for managing patient information and images developed using Microsoft Access 2003. The wireless mobile emergency patient information and imaging communication system was developed using Microsoft Visual Studio .NET, and a JPEG 2000 ActiveX control for the PDA phone was developed using Microsoft Embedded Visual C++. CDMA-1X EVDO is used for connections between the mobile web servers and the PDA phone. This system allows fast access to the patient information database, storing both medical images and patient information, anytime and anywhere. In particular, images are compressed into the JPEG2000 format and transmitted from a mobile web PACS inside the hospital to a radiologist using a PDA phone outside the hospital. The system also shows radiological images as well as physiological signal data, including blood pressure and vital signs, in the web browser of the PDA phone so radiologists can diagnose more effectively. We obtained good results with an RW-6100 PDA phone in the university hospital system of the Sinchon Severance Hospital in Korea.
NASA Astrophysics Data System (ADS)
Bermudez, L. E.; Percivall, G.; Idol, T. A.
2015-12-01
Experts in climate modeling, remote sensing of the Earth, and cyberinfrastructure must work together in order to make climate predictions available to decision makers. Such experts and decision makers worked together in the Open Geospatial Consortium's (OGC) Testbed 11 to address a scenario of population displacement by coastal inundation due to the predicted sea level rise. In a Policy Fact Sheet, "Harnessing Climate Data to Boost Ecosystem & Water Resilience", issued by the White House Office of Science and Technology Policy (OSTP) in December 2014, OGC committed to increasing access to climate change information using open standards. In July 2015, the OGC Testbed 11 Urban Climate Resilience activity delivered on that commitment with open-standards-based support for climate-change preparedness. Using open standards such as the OGC Web Coverage Service and Web Processing Service and the NetCDF and GMLJP2 encoding standards, Testbed 11 deployed an interoperable high-resolution flood model to bring climate model outputs together with global change assessment models and other remote sensing data for decision support. Methods to confirm model predictions and to allow "what-if" scenarios included in-situ sensor webs and crowdsourcing. The scenario was set in two locations: the San Francisco Bay Area and Mozambique. The scenarios demonstrated interoperation and capabilities of open geospatial specifications in supporting data services and processing services. The resultant High Resolution Flood Information System addressed access and control of simulation models and high-resolution data in an open, worldwide, collaborative Web environment. The scenarios examined the feasibility and capability of existing OGC geospatial Web service specifications in supporting the on-demand, dynamic serving of flood information from models with forecasting capacity. Results of this testbed included the identification of standards and best practices that help researchers and cities deal with climate-related issues; these results will now be deployed in pilot applications. The testbed also identified areas of additional development needed to help identify scientific investments and cyberinfrastructure approaches that would improve the application of climate science research results to urban climate resilience.
Research of three level match method about semantic web service based on ontology
NASA Astrophysics Data System (ADS)
Xiao, Jie; Cai, Fang
2011-10-01
An important step in Web service application is the discovery of useful services. Keywords are used for service discovery in traditional technologies like UDDI and WSDL, with the disadvantages of required user intervention, lack of semantic description, and low accuracy. To cope with these problems, OWL-S is introduced and extended with QoS attributes to describe the attributes and functions of Web services. A three-level service matching algorithm based on ontology and QoS is proposed in this paper. Our algorithm can match Web services by utilizing the service profile and QoS parameters together with the input and output of the service. Simulation results show that it greatly enhances the speed of service matching while also guaranteeing high accuracy.
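A minimal sketch in the spirit of the proposed three-level match (not the paper's exact algorithm) is given below: candidates are filtered by profile category, scored on QoS, and scored on input/output overlap. The weights and record fields are assumptions.

```python
# Illustrative three-level service match: profile filter, QoS score, and
# input/output overlap, combined with assumed weights.
def match(request, services, w_qos=0.5, w_io=0.5):
    results = []
    for s in services:
        # Level 1: profile/category match (coarse filter)
        if s["category"] != request["category"]:
            continue
        # Level 2: QoS score (higher reliability, lower latency preferred)
        qos = s["reliability"] / (1.0 + s["latency_ms"] / 100.0)
        # Level 3: degree of input/output overlap with the request
        io = (len(set(s["inputs"]) & set(request["inputs"])) +
              len(set(s["outputs"]) & set(request["outputs"]))) / (
              len(request["inputs"]) + len(request["outputs"]))
        results.append((w_qos * qos + w_io * io, s["name"]))
    return sorted(results, reverse=True)

services = [
    {"name": "GeoA", "category": "geocoding", "reliability": 0.99,
     "latency_ms": 120, "inputs": ["address"], "outputs": ["coords"]},
    {"name": "GeoB", "category": "geocoding", "reliability": 0.90,
     "latency_ms": 40, "inputs": ["address"], "outputs": ["coords"]},
]
request = {"category": "geocoding", "inputs": ["address"], "outputs": ["coords"]}
print(match(request, services))   # GeoB ranks first: equal I/O fit, better latency
```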
Interoperability And Value Added To Earth Observation Data
NASA Astrophysics Data System (ADS)
Gasperi, J.
2012-04-01
Geospatial web services technology has provided a new means for geospatial data interoperability. Open Geospatial Consortium (OGC) services such as Web Map Service (WMS) to request maps on the Internet, Web Feature Service (WFS) to exchange vectors or Catalog Service for the Web (CSW) to search for geospatialized data have been widely adopted in the Geosciences community in general and in the remote sensing community in particular. These services make Earth Observation data available to a wider range of public users than ever before. The mapshup web client offers an innovative and efficient user interface that takes advantage of the power of interoperability. This presentation will demonstrate how mapshup can be effectively used in the context of natural disasters management.
Symmetric Key Services Markup Language (SKSML)
NASA Astrophysics Data System (ADS)
Noor, Arshad
Symmetric Key Services Markup Language (SKSML) is the eXtensible Markup Language (XML) being standardized by the OASIS Enterprise Key Management Infrastructure Technical Committee for requesting and receiving symmetric encryption cryptographic keys within a Symmetric Key Management System (SKMS). This protocol is designed to be used between clients and servers within an Enterprise Key Management Infrastructure (EKMI) to secure data, independent of the application and platform. Building on many security standards such as XML Signature, XML Encryption, Web Services Security and PKI, SKSML provides standards-based capability to allow any application to use symmetric encryption keys, while maintaining centralized control. This article describes the SKSML protocol and its capabilities.
39 CFR 3001.12 - Service of documents.
Code of Federal Regulations, 2010 CFR
2010-07-01
... or presiding officer has determined is unable to receive service through the Commission's Web site... presiding officer has determined is unable to receive service through the Commission Web site shall be by... service list for each current proceeding will be available on the Commission's Web site http://www.prc.gov...
Food Web Designer: a flexible tool to visualize interaction networks.
Sint, Daniela; Traugott, Michael
Species are embedded in complex networks of ecological interactions, and assessing these networks provides a powerful approach to understanding the consequences of these interactions for ecosystem functioning and services. This is mandatory to develop and evaluate strategies for the management and control of pests. Graphical representations of networks can help recognize patterns that might be overlooked otherwise. However, there is a lack of software which allows visualizing these complex interaction networks. Food Web Designer is a stand-alone, highly flexible and user-friendly software tool to quantitatively visualize trophic and other types of bipartite and tripartite interaction networks. It is offered free of charge for use on Microsoft Windows platforms. Food Web Designer is easy to use without the need to learn a specific syntax due to its graphical user interface. Up to three (trophic) levels can be connected using links cascading from or pointing towards the taxa within each level to illustrate top-down and bottom-up connections. Link width/strength and abundance of taxa can be quantified, allowing fully quantitative networks to be generated. Network datasets can be imported, saved for later adjustment, and the interaction webs can be exported as pictures for graphical display in different file formats. We show how Food Web Designer can be used to draw predator-prey and host-parasitoid food webs, demonstrating that this software is a simple and straightforward tool for graphically displaying interaction networks when assessing pest control or any other type of interaction in both managed and natural ecosystems from an ecological network perspective.
ChemCalc: a building block for tomorrow's chemical infrastructure.
Patiny, Luc; Borel, Alain
2013-05-24
Web services, as an aspect of cloud computing, are becoming an important part of the general IT infrastructure, and scientific computing is no exception to this trend. We propose a simple approach to developing chemical Web services, through which servers can expose the essential data manipulation functionality that students and researchers need for chemical calculations. These services return their results as JSON (JavaScript Object Notation) objects, which facilitates their use in Web applications. The ChemCalc project (http://www.chemcalc.org) demonstrates this approach: we present three Web services related to mass spectrometry, namely isotopic distribution simulation, peptide fragmentation simulation, and molecular formula determination. We also developed a complete Web application based on these three Web services, taking advantage of modern HTML5 and JavaScript libraries (ChemDoodle and jQuery).
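Because the services return JSON over HTTP, they can be consumed with a few lines of client code. The sketch below is illustrative only: the endpoint path and the "mf" parameter name are assumptions and should be checked against the current ChemCalc documentation.

```python
# Hedged sketch of calling a JSON-returning chemistry web service; the
# endpoint path and the "mf" parameter name are assumptions for illustration.
import requests

resp = requests.get("https://www.chemcalc.org/chemcalc/mf",  # assumed endpoint
                    params={"mf": "C6H12O6"},                 # molecular formula
                    timeout=30)
resp.raise_for_status()
result = resp.json()   # JSON object, directly usable from an HTML5/JavaScript app
print(result)
```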
Can They Plan to Teach with Web 2.0? Future Teachers' Potential Use of the Emerging Web
ERIC Educational Resources Information Center
Kale, Ugur
2014-01-01
This study examined pre-service teachers' potential use of Web 2.0 technologies for teaching. A coding scheme incorporating the Technological Pedagogical Content Knowledge (TPACK) framework guided the analysis of pre-service teachers' Web 2.0-enhanced learning activity descriptions. The results indicated that while pre-service teachers were able…
EnviroAtlas -Durham, NC- One Meter Resolution Urban Area Land Cover Map (2010) Web Service
This EnviroAtlas web service supports research and online mapping activities related to EnviroAtlas (https://www.epa.gov/enviroatlas). The EnviroAtlas Durham, NC land cover map was generated from USDA NAIP (National Agricultural Imagery Program) four band (red, green, blue and near infrared) aerial photography from July 2010 at 1 m spatial resolution. Five land cover classes were mapped: impervious surface, soil and barren, grass and herbaceous, trees and forest, and water. An accuracy assessment using a stratified random sampling of 500 samples yielded an overall accuracy of 83 percent using a minimum mapping unit of 9 pixels (3x3 pixel window). The area mapped is defined by the US Census Bureau's 2010 Urban Statistical Area for Durham, and includes the cities of Durham, Chapel Hill, Carrboro and Hillsborough, NC. This dataset was produced by the US EPA to support research and online mapping activities related to EnviroAtlas. EnviroAtlas (https://www.epa.gov/enviroatlas) allows the user to interact with a web-based, easy-to-use, mapping application to view and analyze multiple ecosystem services for the contiguous United States. The dataset is available as downloadable data (https://edg.epa.gov/data/Public/ORD/EnviroAtlas) or as an EnviroAtlas map service. Additional descriptive information about each attribute in this dataset can be found in its associated EnviroAtlas Fact Sheet (https://www.epa.gov/enviroatlas/enviroatlas-fact-sheets).
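For readers unfamiliar with the accuracy assessments quoted in this and the following EnviroAtlas records, the sketch below shows the standard overall-accuracy computation from a confusion matrix of reference samples; the class labels follow the Durham legend, but the counts are invented for illustration.

```python
# Sketch (with made-up numbers) of the overall-accuracy computation used in
# land cover accuracy assessments: the fraction of reference samples whose
# mapped class matches their reference class, i.e. the trace of the
# confusion matrix divided by the total number of samples.
import numpy as np

# Rows = mapped class, columns = reference class, for a 5-class legend.
confusion = np.array([
    [120,   3,   2,   0,  1],   # impervious
    [  4,  95,   6,   1,  0],   # soil/barren
    [  2,   7, 110,   5,  0],   # grass/herbaceous
    [  0,   2,   4, 118,  1],   # trees/forest
    [  1,   0,   0,   1, 17],   # water
])
overall_accuracy = np.trace(confusion) / confusion.sum()
print(f"overall accuracy: {overall_accuracy:.1%}")   # 92.0% for these fake counts
```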
EnviroAtlas -Milwaukee, WI- One Meter Resolution Urban Land Cover Data (2010) Web Service
This EnviroAtlas web service supports research and online mapping activities related to EnviroAtlas (https://www.epa.gov/enviroatlas). The EnviroAtlas Milwaukee, WI land cover data and map were generated from USDA NAIP (National Agricultural Imagery Program) four band (red, green, blue and near infrared) aerial photography from Late Summer 2010 at 1 m spatial resolution. Nine land cover classes were mapped: water, impervious surfaces (dark and light), soil and barren land, trees and forest, grass and herbaceous non-woody vegetation, agriculture, and wetlands (woody and emergent). An accuracy assessment using a completely random sampling of 600 samples yielded an overall accuracy of 85.39 percent using a minimum mapping unit of 9 pixels (3x3 pixel window). The area mapped is defined by the US Census Bureau's 2010 Urban Statistical Area for Milwaukee. This dataset was produced by the US EPA to support research and online mapping activities related to EnviroAtlas. EnviroAtlas (https://www.epa.gov/enviroatlas) allows the user to interact with a web-based, easy-to-use, mapping application to view and analyze multiple ecosystem services for the contiguous United States. The dataset is available as downloadable data (https://edg.epa.gov/data/Public/ORD/EnviroAtlas) or as an EnviroAtlas map service. Additional descriptive information about each attribute in this dataset can be found in its associated EnviroAtlas Fact Sheet (https://www.epa.gov/enviroatlas/enviroatlas-fact-sheets).
EnviroAtlas -- Woodbine, IA -- One Meter Resolution Urban Land Cover Data (2011) Web Service
This EnviroAtlas web service supports research and online mapping activities related to EnviroAtlas (https://www.epa.gov/enviroatlas). The EnviroAtlas Woodbine, IA land cover (LC) data and map were generated from USDA NAIP (National Agricultural Imagery Program) four band (red, green, blue and near infrared) aerial photography from Late Summer 2011 at 1 m spatial resolution. Six land cover classes were mapped: water, impervious surfaces (dark and light), soil and barren land, trees and forest, grass and herbaceous non-woody vegetation, and agriculture. An accuracy assessment using a completely random sampling of 600 samples yielded an overall accuracy of 87.03 percent using a minimum mapping unit of 9 pixels (3x3 pixel window). The area mapped is defined by the US Census Bureau's 2010 Urban Statistical Area for Woodbine. This dataset was produced by the US EPA to support research and online mapping activities related to EnviroAtlas. EnviroAtlas (https://www.epa.gov/enviroatlas) allows the user to interact with a web-based, easy-to-use, mapping application to view and analyze multiple ecosystem services for the contiguous United States. The dataset is available as downloadable data (https://edg.epa.gov/data/Public/ORD/EnviroAtlas) or as an EnviroAtlas map service. Additional descriptive information about each attribute in this dataset can be found in its associated EnviroAtlas Fact Sheet (https://www.epa.gov/enviroatlas/enviroatlas-fact-sheets).
EnviroAtlas -Portland, ME- One Meter Resolution Urban Land Cover (2010) Web Service
This EnviroAtlas web service supports research and online mapping activities related to EnviroAtlas (https://www.epa.gov/enviroatlas). The Portland, ME land cover map was generated from USDA NAIP (National Agricultural Imagery Program) four band (red, green, blue and near infrared) aerial photography from Late Summer 2010 at 1 m spatial resolution. Nine land cover classes were mapped: water, impervious surfaces (dark and light), soil and barren land, trees and forest, grass and herbaceous non-woody vegetation, agriculture, and wetlands (woody and emergent). An accuracy assessment using a stratified random sampling of 600 samples yielded an overall accuracy of 87.5 percent using a minimum mapping unit of 9 pixels (3x3 pixel window). The area mapped is defined by the US Census Bureau's 2010 Urban Statistical Area for Portland. This dataset was produced by the US EPA to support research and online mapping activities related to EnviroAtlas. EnviroAtlas (https://www.epa.gov/enviroatlas) allows the user to interact with a web-based, easy-to-use, mapping application to view and analyze multiple ecosystem services for the contiguous United States. The dataset is available as downloadable data (https://edg.epa.gov/data/Public/ORD/EnviroAtlas) or as an EnviroAtlas map service. Additional descriptive information about each attribute in this dataset can be found in its associated EnviroAtlas Fact Sheet (https://www.epa.gov/enviroatlas/enviroatlas-fact-sheets).
Mobile Cloud Computing with SOAP and REST Web Services
NASA Astrophysics Data System (ADS)
Ali, Mushtaq; Fadli Zolkipli, Mohamad; Mohamad Zain, Jasni; Anwar, Shahid
2018-05-01
Mobile computing in conjunction with mobile web services is a strong approach by which the limitations of mobile devices may be tackled. Mobile Web services are based on two technologies, SOAP and REST, which work with existing protocols to develop Web services. Both approaches have their own distinct features, but given the resource constraints of mobile devices, the better of the two is the one that minimizes computation and transmission overhead while offloading. Transferring load from a mobile device to remote servers for execution is called computational offloading. There are numerous approaches that make computational offloading a viable solution for easing the resource constraints of mobile devices, yet a dynamic method of computational offloading is always required for a smooth and simple migration of complex tasks. The intention of this work is to present a distinctive approach that does not engage mobile resources for long periods. The concept of web services is utilized in our work to delegate computationally intensive tasks for remote execution. We tested both the SOAP and REST Web services approaches for mobile computing, considering two parameters in our lab experiments: execution time and energy consumption. The results show that RESTful Web service execution is far better than executing the same application through the SOAP Web services approach, in terms of both execution time and energy consumption. In experiments with the developed prototype matrix multiplication app, REST execution time was about 200% better than the SOAP approach; in terms of energy consumption, REST execution was about 250% better.
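The contrast between the two invocation styles can be made concrete with a short sketch; the endpoints below are hypothetical stand-ins, not the authors' testbed.

```python
# Sketch (hypothetical service, not the authors' prototype) contrasting the
# two invocation styles compared in the study for offloading a matrix
# multiplication task.
import requests

matrices = {"a": [[1, 2], [3, 4]], "b": [[5, 6], [7, 8]]}

# REST style: plain HTTP + JSON payload, minimal per-message overhead.
rest = requests.post("https://example.org/api/multiply", json=matrices, timeout=30)
print(rest.json())

# SOAP style: the same call wrapped in an XML envelope, which is what adds
# parsing and transmission overhead on a constrained mobile device.
envelope = """<?xml version="1.0"?>
<soap:Envelope xmlns:soap="http://www.w3.org/2003/05/soap-envelope">
  <soap:Body>
    <Multiply xmlns="http://example.org/offload">
      <A>1 2 3 4</A><B>5 6 7 8</B>
    </Multiply>
  </soap:Body>
</soap:Envelope>"""
soap = requests.post("https://example.org/soap/multiply",
                     data=envelope,
                     headers={"Content-Type": "application/soap+xml"},
                     timeout=30)
print(soap.text)
```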
Adopting and adapting a commercial view of web services for the Navy
NASA Astrophysics Data System (ADS)
Warner, Elizabeth; Ladner, Roy; Katikaneni, Uday; Petry, Fred
2005-05-01
Web Services are being adopted as the enabling technology to provide net-centric capabilities for many Department of Defense operations. The Navy Enterprise Portal, for example, is Web Services-based, and the Department of the Navy is promulgating guidance for developing Web Services. Web Services, however, only constitute a baseline specification that provides the foundation on which users, under current approaches, write specialized applications in order to retrieve data over the Internet. Application development may increase dramatically as the number of different available Web Services increases. Reasons for specialized application development include XML schema versioning differences, adoption/use of diverse business rules, security access issues, and time/parameter naming constraints, among others. We are currently developing for the US Navy a system which will improve delivery of timely and relevant meteorological and oceanographic (MetOc) data to the warfighter. Our objective is to develop an Advanced MetOc Broker (AMB) that leverages Web Services technology to identify, retrieve and integrate relevant MetOc data in an automated manner. The AMB will utilize a Mediator, which will be developed by applying ontological research and schema matching techniques to MetOc forms of data. The AMB, using the Mediator, will support a new, advanced approach to the use of Web Services; namely, the automated identification, retrieval and integration of MetOc data. Systems based on this approach will then not require extensive end-user application development for each Web Service from which data can be retrieved. Users anywhere on the globe will be able to receive timely environmental data that fits their particular needs.
Discovery Mechanisms for the Sensor Web
Jirka, Simon; Bröring, Arne; Stasch, Christoph
2009-01-01
This paper addresses the discovery of sensors within the OGC Sensor Web Enablement framework. Whereas services like the OGC Web Map Service or Web Coverage Service are already well supported through catalogue services, the field of sensor networks and the corresponding discovery mechanisms still poses a challenge. The focus of this article is on the use of existing OGC Sensor Web components for realizing a discovery solution. After discussing the requirements for a Sensor Web discovery mechanism, an approach is presented that was developed within the EU-funded project “OSIRIS”. This solution offers mechanisms to search for sensors, exploit basic semantic relationships, harvest sensor metadata and integrate sensor discovery into already existing catalogues. PMID:22574038
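A minimal sketch of the catalogue-based discovery step, using the open-source OWSLib Python library against a hypothetical CSW endpoint:

```python
# Minimal sketch of discovering sensor-related services through an OGC
# catalogue (CSW) with OWSLib; the catalogue URL is a placeholder.
from owslib.csw import CatalogueServiceWeb
from owslib.fes import PropertyIsLike

csw = CatalogueServiceWeb("https://example.org/csw")  # hypothetical catalogue
query = PropertyIsLike("csw:AnyText", "%Sensor Observation Service%")
csw.getrecords2(constraints=[query], maxrecords=10)

for rec_id, rec in csw.records.items():
    print(rec_id, rec.title)   # matching catalogue records
```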
AdaFF: Adaptive Failure-Handling Framework for Composite Web Services
NASA Astrophysics Data System (ADS)
Kim, Yuna; Lee, Wan Yeon; Kim, Kyong Hoon; Kim, Jong
In this paper, we propose a novel Web service composition framework which dynamically accommodates various failure recovery requirements. In the proposed framework, called Adaptive Failure-handling Framework (AdaFF), failure-handling submodules are prepared during the design of a composite service, and some of them are systematically selected and automatically combined with the composite Web service at service instantiation, in accordance with the requirements of individual users. In contrast, existing frameworks cannot adapt failure-handling behaviors to users' requirements. AdaFF rapidly delivers a composite service supporting the requirement-matched failure handling without manual development, and contributes to flexible composite Web service design in that service architects need not concern themselves with failure handling or the variable requirements of users. For proof of concept, we implement a prototype system of AdaFF, which automatically generates a composite service instance with Web Services Business Process Execution Language (WS-BPEL) according to the user's requirements specified in XML format and executes the generated instance on the ActiveBPEL engine.
Enhancements to TauDEM to support Rapid Watershed Delineation Services
NASA Astrophysics Data System (ADS)
Sazib, N. S.; Tarboton, D. G.
2015-12-01
Watersheds are widely recognized as the basic functional unit for water resources management studies and are important for a variety of problems in hydrology, ecology, and geomorphology. Nevertheless, delineating a watershed spread across a large region is still cumbersome due to the processing burden of working with large Digital Elevation Models. Terrain Analysis Using Digital Elevation Models (TauDEM) software supports the delineation of watersheds and stream networks from within desktop Geographic Information Systems, and computes a rich set of watershed and stream network attributes. However, the TauDEM desktop tools are limited in that (1) they support only one raster data type (the tiff format), (2) they require installation of software for parallel processing, and (3) data have to be in a projected coordinate system. This paper presents enhancements to TauDEM that extend its generality and support web-based watershed delineation services. The enhancements include (1) reading and writing raster data with the open-source Geospatial Data Abstraction Library (GDAL), no longer limited to the tiff format, and (2) support for both geographic and projected coordinates. To support web services for rapid watershed delineation, a procedure has been developed for subsetting the domain based on sub-catchments, with preprocessed data stored for each catchment. This allows the watershed delineation to function locally while extending to the full extent of watersheds using the preprocessed information. Additional capabilities include the computation of average watershed properties and of geomorphic and channel network variables such as drainage density, shape factor, relief ratio and stream ordering. The updated version of TauDEM increases its practical applicability in terms of raster data type, size and coordinate system. The watershed delineation web service functionality is useful for web-based software-as-a-service deployments that alleviate the need for users to install and work with desktop GIS software.
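The GDAL enhancement means the same read path works for any GDAL-supported raster format, not just tiff. A minimal sketch (the file name is a placeholder):

```python
# Sketch of the format-agnostic raster access GDAL enables: the same call
# opens a GeoTIFF, an IMG file, or any other GDAL-supported DEM format.
from osgeo import gdal

ds = gdal.Open("dem.img")            # any GDAL-readable format, not just tiff
band = ds.GetRasterBand(1)
elev = band.ReadAsArray()            # elevation grid as a NumPy array
print(ds.RasterXSize, ds.RasterYSize)
print(ds.GetProjection()[:60])       # works for geographic or projected CRS
```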
Performances and recent evolutions of EMSC Real Time Information services
NASA Astrophysics Data System (ADS)
Mazet-Roux, G.; Godey, S.; Bossu, R.
2009-04-01
The EMSC (http://www.emsc-csem.org) operates Real Time Earthquake Information services for the public and the scientific community, which aim at providing rapid and reliable information on the seismicity of the Euro-Mediterranean region and on significant earthquakes worldwide. These services are based on parametric data rapidly provided by 66 seismological networks, which are automatically merged and processed at EMSC. A web page updated every minute displays a list and a map of the latest earthquakes, as well as additional information such as location maps, moment tensor solutions and past regional seismicity. Since 2004, the performance and popularity of these services have dramatically increased. The number of messages received from contributors and the number of published events have been multiplied by 2 since 2004 and by 1.6 since 2005, respectively. The web traffic and the number of users of the Earthquake Notification Service (ENS) have been multiplied by 15 and 7, respectively. In terms of ENS performance, the median dissemination time for Euro-Med events in 2008 is of the order of minutes. In order to further improve its performance, and especially the speed and robustness of real-time data reception, EMSC has recently implemented software named QWIDS (Quake Watch Information Distribution System), which provides a quick and robust data exchange system through permanent TCP connections. Unlike emails, which can sometimes be delayed or lost, QWIDS is an actual real-time communication system that ensures data delivery. In terms of hardware, EMSC implemented a high-availability, dynamically load-balanced, redundant and scalable web server infrastructure, composed of two SUN T2000 servers and one F5 BIG-IP switch. This allows coping with constantly increasing web traffic and with the huge peaks of traffic that occur after widely felt earthquakes.
A New Approach for Semantic Web Matching
NASA Astrophysics Data System (ADS)
Zamanifar, Kamran; Heidary, Golsa; Nematbakhsh, Naser; Mardukhi, Farhad
In this work we propose a new approach to semantic web matching to improve the performance of Web service replacement. Because automatic systems should ensure self-healing, self-configuration, self-optimization and self-management, all services should always be available, and if one of them crashes it should be replaced with the most similar one. Candidate services are advertised in Universal Description, Discovery and Integration (UDDI), all in the Web Ontology Language (OWL). With the help of a bipartite graph, we perform the matching between the crashed service and a candidate one, then choose the best service, the one with the maximum rate of matching. In effect, we compare two services' functionalities and capabilities to see how well they match. We found that the best way to match two web services is to compare their functionalities.
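The matching step can be sketched as a maximum-weight bipartite assignment over a capability-similarity matrix; the implementation below uses SciPy's assignment solver with invented similarity scores, as an illustration of the idea rather than the authors' code.

```python
# Sketch of the matching idea: score each capability of the crashed service
# against each capability of a candidate, then find the assignment that
# maximizes total similarity (a maximum-weight bipartite matching).
import numpy as np
from scipy.optimize import linear_sum_assignment

# similarity[i, j]: how well candidate capability j matches crashed-service
# capability i (values here are illustrative).
similarity = np.array([
    [0.9, 0.1, 0.3],
    [0.2, 0.8, 0.4],
    [0.5, 0.3, 0.7],
])
rows, cols = linear_sum_assignment(similarity, maximize=True)
match_rate = similarity[rows, cols].sum() / len(rows)
print("matched pairs:", list(zip(rows, cols)), "match rate:", round(match_rate, 2))
```

Repeating this for every candidate service and keeping the one with the highest match rate yields the replacement choice described above.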
Climatological Data Option in My Weather Impacts Decision Aid (MyWIDA) Overview
2017-07-18
...rules. It consists of two databases (including ClimoDB), a data service (with data requestor, data decoder, post processor, and job scheduler components), a collection of web services, and web applications that show weather impacts on selected...
Galpin, Adam; Meredith, Joanne; Ure, Cathy; Robinson, Leslie
2017-10-27
The decision around whether to attend breast cancer screening can often involve making sense of confusing and contradictory information on its risks and benefits. The Word of Mouth Mammogram e-Network (WoMMeN) project was established to create a Web-based resource to support decision making regarding breast cancer screening. This paper presents data from our user-centered approach in engaging stakeholders (both health professionals and service users) in the design of this Web-based resource. Our novel approach involved creating a user design group within Facebook to allow them access to ongoing discussion between researchers, radiographers, and existing and potential service users. This study had two objectives. The first was to examine the utility of an online user design group for generating insight for the creation of Web-based health resources. We sought to explore the advantages and limitations of this approach. The second objective was to analyze what women want from a Web-based resource for breast cancer screening. We recruited a user design group on Facebook and conducted a survey within the group, asking questions about design considerations for a Web-based breast cancer screening hub. Although the membership of the Facebook group varied over time, there were 71 members in the Facebook group at the end point of analysis. We next conducted a framework analysis on 70 threads from Facebook and a thematic analysis on the 23 survey responses. We focused additionally on how the themes were discussed by the different stakeholders within the context of the design group. Two major themes were found across both the Facebook discussion and the survey data: (1) the power of information and (2) the hub as a place for communication and support. Information was considered as empowering but also recognized as threatening. Communication and the sharing of experiences were deemed important, but there was also recognition of potential miscommunication within online discussion. Health professionals and service users expressed the same broad concerns, but there were subtle differences in their opinions. Importantly, the themes were triangulated between the Facebook discussions and the survey data, supporting the validity of an online user design group. Online user design groups afford a useful method for understanding stakeholder needs. In contrast to focus groups, they afford access to users from diverse geographical locations and traverse time constraints, allowing more follow-ups to responses. The use of Facebook provides a familiar and naturalistic setting for discussion. Although we acknowledge the limitations in the sample, this approach has allowed us to understand the views of stakeholders in the user-centered design of the WoMMeN hub for breast cancer screening. ©Adam Galpin, Joanne Meredith, Cathy Ure, Leslie Robinson. Originally published in JMIR Cancer (http://cancer.jmir.org), 27.10.2017.
Enabling Tools and Methods for International, Inter-disciplinary and Educational Collaboration
NASA Astrophysics Data System (ADS)
Robinson, E. M.; Hoijarvi, K.; Falke, S.; Fialkowski, E.; Kieffer, M.; Husar, R. B.
2008-05-01
In the past, collaboration has taken place in tightly-knit workgroups whose members had direct connections to each other. Such collaboration was confined to small workgroups and person-to-person communication. Recent developments on the Internet foster virtual workgroups and organizations where dynamic, 'just-in-time' collaboration can take place on a much larger scale. The emergence of virtual workgroups has strongly influenced international, interdisciplinary, and educational activities. In this paper we present an array of enabling tools and methods that incorporate new technologies, including web services, software mashups, tag-based structuring and searching, and wikis for collaborative writing and content organization. Large monolithic, 'do-it-all' software tools are giving way to web service modules combined through service chaining, and application software can now be created using a Service Oriented Architecture (SOA). In the air quality community, data providers and users are distributed in space and time, creating barriers to data access. Exposing the data on the internet lessens these space and time barriers. The federated data system DataFed, developed at Washington University, accesses data from autonomous, distributed providers. Through data "wrappers", DataFed provides uniform, standards-based access services to heterogeneous, distributed data. Service orientation not only lowers the entry resistance for service providers, but also allows the creation of user-defined applications and mashups. For example, Google Earth's open API allowed many groups to mash their content with Google Earth. Ad hoc tagging gives a rich description of internet resources, but it has the disadvantage of a fuzzy schema. The semantic uniformity of internet resources can be improved by controlled tagging, which applies a consistent namespace and tag combinations to diverse objects. One example is the photo-sharing web application Flickr. Just like data, photos exposed through the internet can be reused in ways unknown and unanticipated by the provider. For air quality applications, Flickr allowed a rich collection of images of forest fire smoke, windblown dust and haze events to be tagged with controlled tags and used for evaluating subtle features of the events. Wikis, originally used just for collaboratively writing and discussing documents, now also act as social workflow managers. For air quality data, wikis provide the means to collaboratively create rich metadata. Wikis become a virtual meeting place to discuss ideas before a workshop or conference, to display tagged internet resources, and to collaboratively work on documents. Wikis are also useful in the classroom: in class projects, the wiki displays harvested resources, maintains collaborative documents and discussions, and serves as the organizational memory for the project.
Design and implementation of CUAHSI WaterML and WaterOneFlow Web Services
NASA Astrophysics Data System (ADS)
Valentine, D. W.; Zaslavsky, I.; Whitenack, T.; Maidment, D.
2007-12-01
WaterOneFlow is a term for a group of web services created by and for the Consortium of Universities for the Advancement of Hydrologic Science, Inc. (CUAHSI) community. CUAHSI web services facilitate the retrieval of hydrologic observations information from online data sources using the SOAP protocol. CUAHSI Water Markup Language (below referred to as WaterML) is an XML schema defining the format of messages returned by the WaterOneFlow web services.
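A hedged sketch of invoking a WaterOneFlow service from Python with the zeep SOAP client; the WSDL URL is a placeholder, and the method and argument names follow WaterOneFlow conventions but should be verified against the actual service WSDL.

```python
# Hedged sketch of a WaterOneFlow SOAP call with zeep; the WSDL URL and the
# site/variable codes are illustrative placeholders, and argument names
# should be checked against the service's WSDL.
from zeep import Client

client = Client("https://example.org/cuahsi_1_1.asmx?WSDL")  # hypothetical WSDL
wml = client.service.GetValues(
    location="NWISDV:10109000",        # site code (illustrative)
    variable="NWISDV:00060",           # variable code (illustrative)
    startDate="2007-01-01",
    endDate="2007-01-31",
    authToken="",
)
print(wml[:500])   # WaterML-formatted observations returned as XML text
```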
Processing biological literature with customizable Web services supporting interoperable formats.
Rak, Rafal; Batista-Navarro, Riza Theresa; Carter, Jacob; Rowley, Andrew; Ananiadou, Sophia
2014-01-01
Web services have become a popular means of interconnecting solutions for processing a body of scientific literature. This has fuelled research on high-level data exchange formats suitable for a given domain and ensuring the interoperability of Web services. In this article, we focus on the biological domain and consider four interoperability formats, BioC, BioNLP, XMI and RDF, that represent domain-specific and generic representations and include well-established as well as emerging specifications. We use the formats in the context of customizable Web services created in our Web-based, text-mining workbench Argo that features an ever-growing library of elementary analytics and capabilities to build and deploy Web services straight from a convenient graphical user interface. We demonstrate a 2-fold customization of Web services: by building task-specific processing pipelines from a repository of available analytics, and by configuring services to accept and produce a combination of input and output data interchange formats. We provide qualitative evaluation of the formats as well as quantitative evaluation of automatic analytics. The latter was carried out as part of our participation in the fourth edition of the BioCreative challenge. Our analytics built into Web services for recognizing biochemical concepts in BioC collections achieved the highest combined scores out of 10 participating teams. Database URL: http://argo.nactem.ac.uk. © The Author(s) 2014. Published by Oxford University Press.
PMID:25006225
Exploring NASA GES DISC Data with Interoperable Services
NASA Technical Reports Server (NTRS)
Zhao, Peisheng; Yang, Wenli; Hegde, Mahabal; Wei, Jennifer C.; Kempler, Steven; Pham, Long; Teng, William; Savtchenko, Andrey
2015-01-01
Overview of NASA GES DISC (NASA Goddard Earth Science Data and Information Services Center) data with interoperable services. Open-standard, interoperable services improve data discoverability, accessibility, and usability through metadata, catalogue, and portal standards, and achieve data, information, and knowledge sharing across applications through standardized interfaces and protocols. The Open Geospatial Consortium (OGC) data services and specifications used include the Web Coverage Service (WCS) for data, the Web Map Service (WMS) for pictures of data, the Web Map Tile Service (WMTS) for pictures of data tiles, and Styled Layer Descriptors (SLD) for rendered styles.
31 CFR 515.578 - Exportation of certain services incident to Internet-based communications.
Code of Federal Regulations, 2010 CFR
2010-07-01
... Internet, such as instant messaging, chat and email, social networking, sharing of photos and movies, web... direct or indirect exportation of web-hosting services that are for purposes other than personal communications (e.g., web-hosting services for commercial endeavors) or of domain name registration services. (4...
31 CFR 515.578 - Exportation of certain services incident to Internet-based communications.
Code of Federal Regulations, 2011 CFR
2011-07-01
... Internet, such as instant messaging, chat and email, social networking, sharing of photos and movies, web... direct or indirect exportation of web-hosting services that are for purposes other than personal communications (e.g., web-hosting services for commercial endeavors) or of domain name registration services. (4...
jORCA: easily integrating bioinformatics Web Services.
Martín-Requena, Victoria; Ríos, Javier; García, Maximiliano; Ramírez, Sergio; Trelles, Oswaldo
2010-02-15
Web services technology is becoming the option of choice to deploy bioinformatics tools that are universally available. One of the major strengths of this approach is that it supports machine-to-machine interoperability over a network. However, a weakness of this approach is that various Web Services differ in their definition and invocation protocols, as well as their communication and data formats, and this presents a barrier to service interoperability. jORCA is a desktop client aimed at facilitating seamless integration of Web Services. It does so by making a uniform representation of the different web resources, supporting scalable service discovery, and automatic composition of workflows. Usability is at the top of the jORCA agenda; thus it is a highly customizable and extensible application that accommodates a broad range of user skills, featuring double-click invocation of services in conjunction with advanced execution control, on-the-fly data standardization, extensibility of viewer plug-ins, drag-and-drop editing capabilities, plus a file-based browsing style and organization of favourite tools. The integration of bioinformatics Web Services is made easier to support a wider range of users.
Component, Context, and Manufacturing Model Library (C2M2L)
2012-11-01
...building the models, and implementing web services that provide semantically aware programmatic access to the models, including implementing the MS&T...
WebGLORE: a web service for Grid LOgistic REgression.
Jiang, Wenchao; Li, Pinghao; Wang, Shuang; Wu, Yuan; Xue, Meng; Ohno-Machado, Lucila; Jiang, Xiaoqian
2013-12-15
WebGLORE is a free web service that enables privacy-preserving construction of a global logistic regression model from sensitive distributed datasets. It only transfers aggregated local statistics (from participants) through Hypertext Transfer Protocol Secure to a trusted server, where the global model is synthesized. WebGLORE seamlessly integrates AJAX, JAVA Applet/Servlet and PHP technologies to provide an easy-to-use web service for biomedical researchers to break down policy barriers during information exchange. WebGLORE is available at http://dbmi-engine.ucsd.edu/webglore3/ and can be used under the terms of the GNU General Public License as published by the Free Software Foundation.
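The privacy-preserving idea, with sites exchanging only aggregate statistics while a server performs the global Newton-Raphson update, can be sketched as follows; this is a plain illustration of distributed logistic regression on synthetic data, not WebGLORE's actual code.

```python
# Sketch of the idea behind Grid LOgistic REgression: each site shares only
# aggregate statistics (gradient and Hessian contributions), never row-level
# data, and a server combines them in a Newton-Raphson update. Because the
# log-likelihood sums over samples, summing site aggregates is exact.
import numpy as np

def local_stats(X, y, beta):
    """Per-site aggregate statistics for one Newton step."""
    p = 1.0 / (1.0 + np.exp(-X @ beta))
    grad = X.T @ (y - p)                       # local gradient contribution
    hess = X.T @ (X * (p * (1 - p))[:, None])  # local Hessian contribution
    return grad, hess

def global_update(beta, site_stats):
    """Server side: sum the aggregates and take one Newton-Raphson step."""
    grad = sum(g for g, _ in site_stats)
    hess = sum(h for _, h in site_stats)
    return beta + np.linalg.solve(hess, grad)

rng = np.random.default_rng(0)
sites = [(rng.normal(size=(50, 3)), rng.integers(0, 2, size=50)) for _ in range(3)]
beta = np.zeros(3)
for _ in range(10):  # iterate to convergence in practice
    beta = global_update(beta, [local_stats(X, y, beta) for X, y in sites])
print("fitted coefficients:", beta)
```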
Wilkinson, Mark D; Vandervalk, Benjamin; McCarthy, Luke
2011-10-24
The complexity and inter-related nature of biological data poses a difficult challenge for data and tool integration. There has been a proliferation of interoperability standards and projects over the past decade, none of which has been widely adopted by the bioinformatics community. Recent attempts have focused on the use of semantics to assist integration, and Semantic Web technologies are being welcomed by this community. SADI - Semantic Automated Discovery and Integration - is a lightweight set of fully standards-compliant Semantic Web service design patterns that simplify the publication of services of the type commonly found in bioinformatics and other scientific domains. Using Semantic Web technologies at every level of the Web services "stack", SADI services consume and produce instances of OWL Classes following a small number of very straightforward best-practices. In addition, we provide codebases that support these best-practices, and plug-in tools to popular developer and client software that dramatically simplify deployment of services by providers, and the discovery and utilization of those services by their consumers. SADI Services are fully compliant with, and utilize only foundational Web standards; are simple to create and maintain for service providers; and can be discovered and utilized in a very intuitive way by biologist end-users. In addition, the SADI design patterns significantly improve the ability of software to automatically discover appropriate services based on user-needs, and automatically chain these into complex analytical workflows. We show that, when resources are exposed through SADI, data compliant with a given ontological model can be automatically gathered, or generated, from these distributed, non-coordinating resources - a behaviour we have not observed in any other Semantic system. Finally, we show that, using SADI, data dynamically generated from Web services can be explored in a manner very similar to data housed in static triple-stores, thus facilitating the intersection of Web services and Semantic Web technologies.
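The SADI interaction pattern, where a client POSTs an RDF instance of the service's input class and reads back the same instance decorated with output properties, can be sketched with rdflib and requests; the service URL and vocabulary below are hypothetical, not a real SADI service.

```python
# Hedged sketch of the SADI interaction pattern: POST RDF describing an
# input instance, receive RDF decorating that instance with outputs. The
# service URL and the example vocabulary are illustrative assumptions.
import requests
from rdflib import Graph, Namespace, URIRef, Literal

EX = Namespace("http://example.org/vocab#")
g = Graph()
protein = URIRef("http://example.org/data/protein1")
g.add((protein, EX.hasSequence, Literal("MKTAYIAKQR")))   # input property

resp = requests.post("http://example.org/sadi/analyze",   # hypothetical service
                     data=g.serialize(format="xml"),
                     headers={"Content-Type": "application/rdf+xml"},
                     timeout=60)
out = Graph().parse(data=resp.text, format="xml")
for s, p, o in out:
    print(s, p, o)   # same subject, now carrying output properties
```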
Sarkar, Subhra; Witham, Shawn; Zhang, Jie; Zhenirovskyy, Maxim; Rocchia, Walter; Alexov, Emil
2011-01-01
Here we report the DelPhi web server, which uses the DelPhi program to calculate electrostatic energies and the corresponding electrostatic potential, ionic distributions, and dielectric maps. The server provides extra services to fix structural defects, such as missing atoms in the structural file, and allows for the generation of missing hydrogen atoms. The hydrogen placement and the corresponding DelPhi calculations can be done with user-selected force field parameters: Charmm22, Amber98 or OPLS. Upon completion of the calculations, the user is given the option to download the fixed and protonated structural file, together with the parameter and DelPhi output files, for further analysis. Using the Jmol viewer, the user can view the corresponding structural file, manipulate it, and change the presentation. In addition, if the potential map is requested to be calculated, the potential can be mapped onto the molecular surface. The DelPhi web server is available from http://compbio.clemson.edu/delphi_webserver. PMID:24683424
Setti, E; Musumeci, R
2001-06-01
The world wide web is an exciting service that allows one to publish electronic documents made of text and images on the internet. Client software, called a web browser, can access these documents and display and print them. The most popular browsers are currently Microsoft Internet Explorer (Microsoft, Redmond, WA) and Netscape Communicator (Netscape Communications, Mountain View, CA). These browsers can display text in hypertext markup language (HTML) format and images in Joint Photographic Expert Group (JPEG) and Graphic Interchange Format (GIF) formats. Currently, neither browser can display radiologic images in the native Digital Imaging and Communications in Medicine (DICOM) format. With the aim of publishing radiologic images on the internet, we wrote a dedicated Java applet. Our software can display radiologic and histologic images in DICOM, JPEG, and GIF formats, and provides a number of functions such as windowing and a magnification lens. The applet is compatible with several web browsers, even older versions. The software is free and available from the author.
2009-01-01
Background The study of biological networks has led to the development of increasingly large and detailed models. Computer tools are essential for the simulation of the dynamical behavior of the networks from the model. However, as the size of the models grows, it becomes infeasible to manually verify the predictions against experimental data or identify interesting features in a large number of simulation traces. Formal verification based on temporal logic and model checking provides promising methods to automate and scale the analysis of the models. However, a framework that tightly integrates modeling and simulation tools with model checkers is currently missing, on both the conceptual and the implementational level. Results We have developed a generic and modular web service, based on a service-oriented architecture, for integrating the modeling and formal verification of genetic regulatory networks. The architecture has been implemented in the context of the qualitative modeling and simulation tool GNA and the model checkers NUSMV and CADP. GNA has been extended with a verification module for the specification and checking of biological properties. The verification module also allows the display and visual inspection of the verification results. Conclusions The practical use of the proposed web service is illustrated by means of a scenario involving the analysis of a qualitative model of the carbon starvation response in E. coli. The service-oriented architecture allows modelers to define the model and proceed with the specification and formal verification of the biological properties by means of a unified graphical user interface. This guarantees a transparent access to formal verification technology for modelers of genetic regulatory networks. PMID:20042075
NASA Astrophysics Data System (ADS)
Petrie, C.; Margaria, T.; Lausen, H.; Zaremba, M.
This volume explores trade-offs among existing approaches, reveals strengths and weaknesses of proposed approaches as well as which aspects of the problem are not yet covered, and introduces a software engineering approach to evaluating semantic web services. Service-Oriented Computing is one of the most promising software engineering trends because of the potential to reduce the programming effort for future distributed industrial systems. However, only a small part of this potential rests on the standardization of tools offered by the web services stack. The larger part rests upon the development of sufficient semantics to automate service orchestration. Currently there are many different approaches to semantic web service descriptions and many frameworks built around them. A common understanding, evaluation scheme, and test bed to compare and classify these frameworks in terms of their capabilities and shortcomings is necessary to make progress in developing the full potential of Service-Oriented Computing. The Semantic Web Services Challenge is an open source initiative that provides a public evaluation and certification of multiple frameworks on common industrially-relevant problem sets. This edited volume reports on the first results in developing a common understanding of the various technologies intended to facilitate the automation of mediation, choreography and discovery for Web Services using semantic annotations. Semantic Web Services Challenge: Results from the First Year is designed for a professional audience composed of practitioners and researchers in industry. Professionals can use this book to evaluate SWS technology for its potential practical use. The book is also suitable for advanced-level students in computer science.
Web-services-based spatial decision support system to facilitate nuclear waste siting
NASA Astrophysics Data System (ADS)
Huang, L. Xinglai; Sheng, Grant
2006-10-01
The availability of spatial web services enables data sharing among managers, decision and policy makers, and other stakeholders in much simpler ways than before, and has subsequently created completely new opportunities in the process of spatial decision making. Though generally designed for a certain problem domain, web-services-based spatial decision support systems (WSDSS) can provide a flexible problem-solving environment to explore the decision problem, understand and refine the problem definition, and generate and evaluate multiple alternatives for decision. This paper presents a new framework for the development of a web-services-based spatial decision support system. The WSDSS comprises distributed web services that either have their own functions or provide different geospatial data and may reside in different computers and locations. WSDSS includes six key components, namely: database management system, catalog, analysis functions and models, GIS viewers and editors, report generators, and graphical user interfaces. In this study, the architecture of a web-services-based spatial decision support system to facilitate nuclear waste siting is described as an example. The theoretical, conceptual and methodological challenges and issues associated with developing a web-services-based spatial decision support system are described.
NASA Astrophysics Data System (ADS)
Spinuso, A.; Trani, L.; Rives, S.; Thomy, P.; Euchner, F.; Schorlemmer, D.; Saul, J.; Heinloo, A.; Bossu, R.; van Eck, T.
2009-04-01
The Network of Research Infrastructures for European Seismology (NERIES) is a European Commission (EC) project focused on networking seismological observatories and research institutes into one integrated European infrastructure that provides access to data and data products for research. Seismological institutes and organizations in European and Mediterranean countries maintain large, geographically distributed data archives; this scenario suggested a design approach based on the concept of an internet service oriented architecture (SOA) to establish a cyberinfrastructure for distributed and heterogeneous data streams and services. Moreover, one of the goals of NERIES is to design and develop a Web portal that acts as the uppermost layer of the infrastructure and provides rendering capabilities for the underlying sets of data. The Web services that are currently being designed and implemented will deliver data adapted to appropriate formats. The parametric information about a seismic event is delivered using a seismology-specific Extensible Markup Language (XML) format called QuakeML (https://quake.ethz.ch/quakeml), which has been formalized and implemented in coordination with global earthquake-information agencies. Uniform Resource Identifiers (URIs) are used to assign identifiers to (1) seismic-event parameters described by QuakeML, and (2) generic resources, for example authorities, location providers, location methods, adopted software, and so on, described by use of a data model constructed with the Resource Description Framework (RDF) and accessible as a service. The European-Mediterranean Seismological Center (EMSC) has implemented a unique event identifier (UNID) that will create the seismic-event URI used by the QuakeML data model. Access to data such as broadband waveforms, accelerometric data and station inventories will also be provided through a set of Web services that wrap the middleware used by the seismological observatory or institute supplying the data. Each single application of the portal consists of a Java-based JSR-168-standard portlet (often provided with interactive maps for data discovery). In specific cases, it will be possible to distribute the deployment of the portlets among the data providers, such as seismological agencies, thanks to the adoption, within the distributed architecture of the NERIES portal, of the Web Services for Remote Portlets (WSRP) standard for presentation-oriented web services. The purpose of the portal is to provide users with their own environment where they can browse and retrieve the data of interest, offering a set of shopping carts with storage and management facilities. This approach involves having the user interact with dedicated tools in order to compose personalized datasets that can be downloaded or combined with other information available either through the NERIES network of Web services or through the user's own carts. Administrative applications are also provided to perform monitoring tasks such as retrieving service statistics or scheduling submitted data requests. An administrative tool is included that allows the RDF model to be extended, within certain constraints, with new classes and properties.
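As an illustration of consuming the QuakeML delivered by such services, the sketch below parses a QuakeML document with the open-source ObsPy library (not part of the NERIES portal itself); the file path is a placeholder.

```python
# Sketch of consuming QuakeML with ObsPy; "event.xml" stands in for any
# QuakeML document retrieved from an event web service.
from obspy import read_events

catalog = read_events("event.xml")   # parses QuakeML into a Catalog object
for event in catalog:
    origin = event.preferred_origin() or event.origins[0]
    print(event.resource_id, origin.time, origin.latitude, origin.longitude)
```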
Availability of the OGC geoprocessing standard: March 2011 reality check
NASA Astrophysics Data System (ADS)
Lopez-Pellicer, Francisco J.; Rentería-Agualimpia, Walter; Béjar, Rubén; Muro-Medrano, Pedro R.; Zarazaga-Soria, F. Javier
2012-10-01
This paper presents an investigation of the servers available in March 2011 conforming to the Web Processing Service interface specification published by the geospatial standards organization Open Geospatial Consortium (OGC) in 2007. This interface specification supports standardized Web-based geoprocessing. The data used in this research were collected using a focused crawler configured for finding OGC Web services. The research goals are (i) to provide a reality check of the availability of Web Processing Service servers, (ii) to provide quantitative data about the use of different features defined in the standard that are relevant for a scalable Geoprocessing Web (e.g. long-running processes, Web-accessible data outputs), and (iii) to test whether the advances in the use of search engines and focused crawlers for finding Web services can be applied to finding geoscience processing systems. Research results show the feasibility of the discovery approach and provide data about the implementation of the Web Processing Service specification. These results also show extensive use of features related to scalability, except for those related to technical and semantic interoperability.
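The kind of probing performed for each discovered server can be sketched with the OWSLib Python library; the endpoint URL is a placeholder.

```python
# Sketch of probing a discovered WPS server: fetch its capabilities and list
# the processes it offers. The endpoint URL is an illustrative placeholder.
from owslib.wps import WebProcessingService

wps = WebProcessingService("https://example.org/wps")  # hypothetical endpoint
print(wps.identification.title)          # server self-description
for proc in wps.processes:               # processes advertised in capabilities
    print(proc.identifier, "-", proc.title)
```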
Maintenance and Exchange of Learning Objects in a Web Services Based e-Learning System
ERIC Educational Resources Information Center
Vossen, Gottfried; Westerkamp, Peter
2004-01-01
"Web services" enable partners to exploit applications via the Internet. Individual services can be composed to build new and more complex ones with additional and more comprehensive functionality. In this paper, we apply the Web service paradigm to electronic learning, and show how to exchange and maintain learning objects is a…
NASA Astrophysics Data System (ADS)
Manuaba, I. B. P.; Rudiastini, E.
2018-01-01
Assessment of lecturers is a tool used to measure lecturer performance. Lecturer assessment variables can be measured from three aspects: teaching activities, research, and community service. The broad range of aspects required to measure lecturer performance calls for a special framework, so that the system can be developed in a sustainable manner. The aim of this research is to create an API web service data tool, so that the lecturer assessment system can be developed in various frameworks. The research was developed with web services and the PHP programming language, with JSON data as output. The conclusion of this research is that the API web service application can be developed using several platforms, such as web and mobile applications.
Building a print on demand web service
NASA Astrophysics Data System (ADS)
Reddy, Prakash; Rozario, Benedict; Dudekula, Shariff; V, Anil Dev
2011-03-01
There is considerable effort underway to digitize all books that have ever been printed, and there is need for a service that can take raw book scans and convert them into Print on Demand (POD) books. Such a service augments the digitization effort and enables broader access for a wider audience. To make this service practical we identified three key challenges that needed to be addressed: a) produce high quality images by eliminating artifacts that exist due to the age of the document or that are introduced during the scanning process; b) develop an efficient automated system to process book scans with minimum human intervention; and c) build an ecosystem which allows the target audience to discover these books.
EnviroAtlas -Phoenix, AZ- One Meter Resolution Urban Land Cover Data (2010) Web Service
This EnviroAtlas web service supports research and online mapping activities related to EnviroAtlas (https://www.epa.gov/enviroatlas). The EnviroAtlas Phoenix, AZ land cover data and map were generated from USDA NAIP (National Agricultural Imagery Program) four-band (red, green, blue, and near-infrared) aerial photography taken from June through September 2010 at 1 m spatial resolution. Seven land cover classes were mapped: water, impervious surfaces, soil and barren land, trees and forest, shrubland, grass and herbaceous non-woody vegetation, and agriculture. An accuracy assessment using a completely random sampling of 598 land cover reference points yielded an overall accuracy of 69.2%. The area mapped includes the entirety of the Central Arizona-Phoenix Long-Term Ecological Research (CAP-LTER) area, which was classified by the Environmental Remote Sensing and Geoinformatics Lab (ERSG) at Arizona State University. The land cover dataset also includes an area of approximately 625 square kilometers located north of Phoenix; this section was classified by the EPA land cover classification team. This dataset was produced by the US EPA to support research and online mapping activities related to EnviroAtlas, which allows the user to interact with a web-based, easy-to-use mapping application to view and analyze multiple ecosystem services for the contiguous United States. The dataset is available as downloadable data.
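A hedged sketch of one simple analysis a user might run on the downloadable land cover raster: tallying the pixel fraction of each mapped class with rasterio and NumPy. The filename is a hypothetical placeholder, and the class codes would need to be looked up in the dataset's metadata.

```python
import numpy as np
import rasterio

with rasterio.open("phoenix_landcover_2010.tif") as src:  # hypothetical filename
    band = src.read(1)

values, counts = np.unique(band, return_counts=True)
total = counts.sum()
for value, count in zip(values, counts):
    # Class codes map to the seven land cover classes per the dataset metadata.
    print(f"class {value}: {100 * count / total:.1f}% of pixels")
```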
Advances in a distributed approach for ocean model data interoperability
Signell, Richard P.; Snowden, Derrick P.
2014-01-01
An infrastructure for earth science data is emerging across the globe based on common data models and web services. As we evolve from custom file formats and web sites to standards-based web services and tools, data is becoming easier to distribute, find, and retrieve, leaving more time for science. We describe recent advances that make it easier for ocean model providers to share their data, and for users to search, access, analyze, and visualize ocean data using MATLAB® and Python. These include a technique for modelers to create aggregated, Climate and Forecast (CF) metadata convention datasets from collections of non-standard Network Common Data Form (NetCDF) output files, the capability to remotely access data from CF-1.6-compliant NetCDF files using the Open Geospatial Consortium (OGC) Sensor Observation Service (SOS), a metadata standard for unstructured grid model output (UGRID), and tools that utilize both CF and UGRID standards to allow interoperable data search, browse, and access. We use examples from the U.S. Integrated Ocean Observing System (IOOS®) Coastal and Ocean Modeling Testbed, a project in which modelers using both structured and unstructured grid model output needed to share their results, to compare their results with other models, and to compare models with observed data. The same techniques used here for ocean modeling output can be applied to atmospheric and climate model output, remote sensing data, digital terrain, and bathymetric data.
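A sketch of the kind of remote, standards-based access the paper describes: opening a CF-compliant aggregated dataset over OPeNDAP and locating a variable by its CF standard_name rather than a file-specific name. The URL is a hypothetical placeholder for a real THREDDS/OPeNDAP endpoint.

```python
import xarray as xr

url = "http://example.org/thredds/dodsC/ocean_model_aggregation"  # hypothetical
ds = xr.open_dataset(url)  # lazy remote access; only requested slices move over the wire

# CF conventions let tools find variables by standard_name, not provider-specific names.
sst = ds.filter_by_attrs(standard_name="sea_water_temperature")
print(list(sst.data_vars))
```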
Zbikowski, Susan M; Jack, Lisa M; McClure, Jennifer B; Deprey, Mona; Javitz, Harold S; McAfee, Timothy A; Catz, Sheryl L; Richards, Julie; Bush, Terry; Swan, Gary E
2011-05-01
Phone counseling has become standard for behavioral smoking cessation treatment. Newer options include Web and integrated phone-Web treatment. No prior research, to our knowledge, has systematically compared the effectiveness of these three treatment modalities in a randomized trial. Understanding how utilization varies by mode, the impact of utilization on outcomes, and predictors of utilization across each mode could lead to improved treatments. One thousand two hundred and two participants were randomized to phone, Web, or combined phone-Web cessation treatment. Services varied by modality and were tracked using automated systems. All participants received 12 weeks of varenicline, printed guides, an orientation call, and access to a phone support line. Self-report data were collected at baseline and 6-month follow-up. Overall, participants utilized phone services more often than Web-based services. Among treatment groups with Web access, a significant proportion logged in only once (37% phone-Web, 41% Web), and those in the phone-Web group logged in less often than those in the Web group (mean = 2.4 vs. 3.7, p = .0001). Use of the phone was also correlated with increased use of the Web. In multivariate analyses, greater use of the phone- or Web-based services was associated with higher cessation rates. Finally, older age and the belief that certain treatments could improve success were consistent predictors of greater utilization across groups. Other predictors varied by treatment group. Opportunities for enhancing treatment utilization exist, particularly for Web-based programs. Increasing utilization more broadly could result in better overall treatment effectiveness for all intervention modalities.
Implementation of Sensor Twitter Feed Web Service Server and Client
2016-12-01
ARL-TN-0807 ● DEC 2016 ● US Army Research Laboratory. Implementation of Sensor Twitter Feed Web Service Server and Client, by Bhagyashree V Kulkarni (University of Maryland) and Michael H Lee (Computational…)
Advanced Strategic and Tactical Relay Request Management for the Mars Relay Operations Service
NASA Technical Reports Server (NTRS)
Allard, Daniel A.; Wallick, Michael N.; Gladden, Roy E.; Wang, Paul; Hy, Franklin H.
2013-01-01
This software provides a new set of capabilities for the Mars Relay Operations Service (MaROS) in support of Strategic and Tactical relay, including a highly interactive relay request Web user interface, mission control over relay planning time periods, and mission management of allowed strategic vs. tactical request parameters. Together, these new capabilities expand the scope of the system to include all elements critical for Tactical relay operations. Planning of relay activities spans a time period that is split into two distinct phases. The first phase is called Strategic, which begins at the time that relay opportunities are identified and concludes at the point that the orbiter generates the flight sequences for on-board execution. Any relay request changes from this point on are called Tactical. Tactical requests, otherwise called Orbiter Relay State Changes (ORSC), are highly restricted in terms of what types of changes can be made, and the types of parameters that can be changed may differ from one orbiter to the next. For example, one orbiter may be able to delay the start of a relay request, while another may not. The legacy approach to ORSC management involves exchanges of e-mail with "requests for change" and "acknowledgement of approval," with no other tracking of changes outside of e-mail folders. MaROS Phases 1 and 2 provided the infrastructure for strategic relay for all supported missions. This new version, 3.0, introduces several capabilities that fully expand the scope of the system to include tactical relay. One new feature allows orbiter users to manage and "lock" Planning Periods, which allows the orbiter team to formalize the changeover from Strategic to Tactical operations. Another major feature allows users to interactively submit tactical request changes via a Web user interface. A third new feature allows orbiter missions to specify allowed tactical updates, which are automatically incorporated into the tactical change process. This software update is significant in that it provides the only centralized service for tactical request management available for relay missions.
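A hypothetical sketch of the per-orbiter tactical-change rule described above: each orbiter declares which request fields may still change once the Strategic phase has been locked. The names and fields are illustrative, not MaROS internals.

```python
from dataclasses import dataclass
from enum import Enum

class Phase(Enum):
    STRATEGIC = "strategic"
    TACTICAL = "tactical"

# Per-orbiter whitelist of fields an ORSC may modify (illustrative values).
ALLOWED_TACTICAL_UPDATES = {
    "orbiter_a": {"start_time"},   # may delay a request's start
    "orbiter_b": set(),            # permits no tactical changes
}

@dataclass
class RelayRequest:
    orbiter: str
    phase: Phase
    fields: dict

    def apply_change(self, field: str, value) -> None:
        """Reject tactical edits to fields the orbiter has not whitelisted."""
        if self.phase is Phase.TACTICAL and field not in ALLOWED_TACTICAL_UPDATES[self.orbiter]:
            raise PermissionError(f"{field} is not tactically updatable on {self.orbiter}")
        self.fields[field] = value

req = RelayRequest("orbiter_a", Phase.TACTICAL, {"start_time": "12:00"})
req.apply_change("start_time", "12:05")   # allowed: delay start
# req.apply_change("duration", 600)       # would raise PermissionError
```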
Frey, Lewis J; Sward, Katherine A; Newth, Christopher J L; Khemani, Robinder G; Cryer, Martin E; Thelen, Julie L; Enriquez, Rene; Shaoyu, Su; Pollack, Murray M; Harrison, Rick E; Meert, Kathleen L; Berg, Robert A; Wessel, David L; Shanley, Thomas P; Dalton, Heidi; Carcillo, Joseph; Jenkins, Tammara L; Dean, J Michael
2015-11-01
To examine the feasibility of deploying a virtual web service for sharing data within a research network, and to evaluate the impact on data consistency and quality. Virtual machines (VMs) encapsulated an open-source, semantically and syntactically interoperable, secure web service infrastructure along with a shadow database. The VMs were deployed to 8 Collaborative Pediatric Critical Care Research Network Clinical Centers. Virtual web services could be deployed in hours. The interoperability of the web services reduced format misalignment from 56% to 1%: 99% of the data transferred consistently using the data dictionary, and 1% needed human curation. Use of virtualized, open-source, secure web service technology could enable direct electronic abstraction of data from hospital databases for research purposes. © The Author 2015. Published by Oxford University Press on behalf of the American Medical Informatics Association. All rights reserved. For Permissions, please email: journals.permissions@oup.com.
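A hedged sketch of data-dictionary-driven format checking, the mechanism the study credits with reducing format misalignment; the dictionary entries and field names below are invented for illustration.

```python
import re

# Toy data dictionary: each field declares its type and constraints.
DATA_DICTIONARY = {
    "heart_rate": {"type": int, "min": 0, "max": 300},
    "admit_date": {"type": str, "pattern": r"^\d{4}-\d{2}-\d{2}$"},
}

def conforms(field: str, value) -> bool:
    """Check one value against its data-dictionary entry."""
    spec = DATA_DICTIONARY.get(field)
    if spec is None or not isinstance(value, spec["type"]):
        return False
    if "min" in spec and not (spec["min"] <= value <= spec["max"]):
        return False
    if "pattern" in spec and not re.match(spec["pattern"], value):
        return False
    return True

print(conforms("heart_rate", 72))            # True: transfers automatically
print(conforms("admit_date", "2015/11/01"))  # False: flagged for human curation
```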
Web services as applications' integration tool: QikProp case study.
Laoui, Abdel; Polyakov, Valery R
2011-07-15
Web services are a new technology that enables the integration of applications running on different platforms, primarily by using XML for communication among different computers over the Internet. A large number of applications were designed as stand-alone systems before the concept of Web services was introduced, and it is a challenge to integrate them into larger computational networks. A generally applicable method of wrapping stand-alone applications into Web services was developed and is described. To test the technology, it was applied to QikProp for DOS (Windows). Although the performance of the application did not change when it was delivered as a Web service, this form of deployment offered several advantages, such as simplified and centralized maintenance, a smaller number of licenses, and practically no training for the end user. Because almost any legacy application can be wrapped as a Web service using the described approach, this form of delivery may be recommended as a general alternative to traditional deployment solutions. Copyright © 2011 Wiley Periodicals, Inc.
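A generic sketch of the wrapping approach described: a thin web service that shells out to a legacy stand-alone executable and returns its output. The command "legacy_app" and its flags are placeholders, not the actual QikProp command line.

```python
import subprocess
from flask import Flask, request, jsonify

app = Flask(__name__)

@app.route("/run", methods=["POST"])
def run_legacy_app():
    payload = request.get_json(force=True)
    proc = subprocess.run(
        ["legacy_app", "--input", "-"],      # hypothetical command line
        input=payload.get("data", ""),
        capture_output=True, text=True, timeout=60,
    )
    # Centralized deployment: one installed copy, many remote callers.
    return jsonify({"returncode": proc.returncode, "output": proc.stdout})

if __name__ == "__main__":
    app.run(port=8080)
```

One design consequence the abstract highlights: because the executable runs only on the server, maintenance, licensing, and user training all concentrate in one place.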
Lizarraga, Gabriel; Li, Chunfei; Cabrerizo, Mercedes; Barker, Warren; Loewenstein, David A; Duara, Ranjan; Adjouadi, Malek
2018-04-26
Structural and functional brain images are essential imaging modalities for medical experts to study brain anatomy. These images are typically inspected visually by experts; to analyze them without bias, they must first be converted to numeric values. Many software packages are available to process the images, but they are complex, difficult to use, and hardware intensive. The results obtained after processing vary depending on the native operating system used and its associated software libraries; data processed on one system typically cannot be combined with data from another. The aim of this study was to fulfill the neuroimaging community's need for a common platform to store, process, explore, and visualize neuroimaging data and results using the Neuroimaging Web Services Interface: a series of processing pipelines designed as a cyber-physical system for neuroimaging and clinical data in brain research. The Neuroimaging Web Services Interface accepts magnetic resonance imaging, positron emission tomography, diffusion tensor imaging, and functional magnetic resonance imaging. These images are processed using existing and custom software packages. The output is then stored as image files, tabulated files, and MySQL tables. The system, made up of a series of interconnected servers, is password-protected, is securely accessible through a Web interface, and allows (1) visualization of results and (2) downloading of tabulated data. All results were obtained using our processing servers in order to maintain data validity and consistency. The design is responsive and scalable. The processing pipeline starts from a FreeSurfer reconstruction of structural magnetic resonance images. The FreeSurfer and regional standardized uptake value ratio calculations were validated using Alzheimer's Disease Neuroimaging Initiative input images, and the results were posted at the Laboratory of Neuro Imaging data archive. Leading researchers in the fields of Alzheimer's Disease and epilepsy have used the interface to access and process data and visualize results. Tabulated results with unique visualization mechanisms help guide more informed diagnosis and expert rating, providing a truly unique multimodal imaging platform that combines magnetic resonance imaging, positron emission tomography, diffusion tensor imaging, and resting-state functional magnetic resonance imaging. A quality control component was reinforced through expert visual rating involving at least 2 experts. To our knowledge, there is no validated Web-based system offering all the services that the Neuroimaging Web Services Interface offers. The intent of the Neuroimaging Web Services Interface is to create a tool for clinicians and researchers with a keen interest in multimodal neuroimaging. More importantly, the Neuroimaging Web Services Interface significantly augments the Alzheimer's Disease Neuroimaging Initiative data, especially since our data contain a large cohort of Hispanic normal controls and Alzheimer's Disease patients. The obtained results can be scrutinized visually or through the tabulated forms, informing researchers about subtle changes that characterize the different stages of the disease. ©Gabriel Lizarraga, Chunfei Li, Mercedes Cabrerizo, Warren Barker, David A Loewenstein, Ranjan Duara, Malek Adjouadi. Originally published in JMIR Medical Informatics (http://medinform.jmir.org), 26.04.2018.
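A hypothetical sketch of the client side of such a system: authenticating against a password-protected Web interface and downloading tabulated results. The base URL, endpoints, and field names are illustrative placeholders, not the actual Neuroimaging Web Services Interface API.

```python
import requests

BASE = "https://example.org/nwsi"  # hypothetical deployment
session = requests.Session()

# Authenticate once; the session cookie covers subsequent requests.
session.post(f"{BASE}/login", data={"user": "researcher", "password": "..."})

# Download tabulated FreeSurfer results for one subject (hypothetical endpoint).
resp = session.get(f"{BASE}/results/subject_001/freesurfer.csv")
resp.raise_for_status()
with open("subject_001_freesurfer.csv", "wb") as f:
    f.write(resp.content)
```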