An Automatic Web Service Composition Framework Using QoS-Based Web Service Ranking Algorithm
Mallayya, Deivamani; Ramachandran, Baskaran; Viswanathan, Suganya
2015-01-01
Web services have become the technology of choice for service oriented computing to meet the interoperability demands of web applications. In the Internet era, the exponential addition of web services makes "quality of service" an essential parameter for discriminating among web services. In this paper, a user preference based web service ranking (UPWSR) algorithm is proposed to rank web services based on user preferences and the QoS aspects of the web service. When the user's request cannot be fulfilled by a single atomic service, several existing services must be composed and delivered as a composition. The proposed framework allows the user to specify local and global constraints for composite web services, which improves flexibility. The UPWSR algorithm identifies the best-fit services for each task in the user request and, by restricting the number of candidate services for each task, reduces the time needed to generate composition plans. To tackle the problem of web service composition, the QoS aware automatic web service composition (QAWSC) algorithm proposed in this paper is based on the QoS aspects of the web services and user preferences. The proposed framework also allows the user to provide feedback about the composite service, which improves the reputation of the constituent services. PMID:26504894
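For intuition, here is a minimal sketch of QoS-based ranking by weighted sum, a common baseline for this kind of algorithm. It is not the paper's UPWSR algorithm; the attribute names, weights, and normalization rule are assumptions for illustration.

```python
# Illustrative sketch only: a generic weighted-sum QoS ranking,
# not the UPWSR algorithm from the paper. Attribute names and
# weights are assumptions for the example.

def normalize(values, benefit=True):
    """Scale values to [0, 1]; 'benefit' attributes are better when larger."""
    lo, hi = min(values), max(values)
    if hi == lo:
        return [1.0] * len(values)
    if benefit:
        return [(v - lo) / (hi - lo) for v in values]
    return [(hi - v) / (hi - lo) for v in values]

def rank_services(services, weights):
    """services: list of dicts with QoS attributes; weights: attr -> (weight, benefit)."""
    columns = {attr: normalize([s[attr] for s in services], benefit)
               for attr, (_, benefit) in weights.items()}
    scored = []
    for i, s in enumerate(services):
        score = sum(w * columns[attr][i] for attr, (w, _) in weights.items())
        scored.append((score, s["name"]))
    return sorted(scored, reverse=True)

candidates = [
    {"name": "S1", "response_time": 120, "availability": 0.99, "cost": 5},
    {"name": "S2", "response_time": 80,  "availability": 0.95, "cost": 9},
]
# User preference: response time and cost are "lower is better" attributes.
prefs = {"response_time": (0.5, False), "availability": (0.3, True), "cost": (0.2, False)}
print(rank_services(candidates, prefs))
```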
AdaFF: Adaptive Failure-Handling Framework for Composite Web Services
NASA Astrophysics Data System (ADS)
Kim, Yuna; Lee, Wan Yeon; Kim, Kyong Hoon; Kim, Jong
In this paper, we propose a novel Web service composition framework which dynamically accommodates various failure recovery requirements. In the proposed framework, called Adaptive Failure-handling Framework (AdaFF), failure-handling submodules are prepared during the design of a composite service, and some of them are systematically selected and automatically combined with the composite Web service at service instantiation, in accordance with the requirements of individual users. In contrast, existing frameworks cannot adapt their failure-handling behaviors to users' requirements. AdaFF rapidly delivers a composite service supporting requirement-matched failure handling without manual development, and contributes to flexible composite Web service design in that service architects need not concern themselves with failure handling or the varying requirements of users. For proof of concept, we implemented a prototype system of AdaFF, which automatically generates a composite service instance in Web Services Business Process Execution Language (WS-BPEL) according to the user's requirements specified in XML format and executes the generated instance on the ActiveBPEL engine.
A Privacy Access Control Framework for Web Services Collaboration with Role Mechanisms
NASA Astrophysics Data System (ADS)
Liu, Linyuan; Huang, Zhiqiu; Zhu, Haibin
With the popularity of Internet technology, web services are becoming the most promising paradigm for distributed computing. This increased use of web services means that more and more personal information of consumers is being shared with web service providers, leading to the need to guarantee consumer privacy. This paper proposes a role-based privacy access control framework for Web services collaboration. It utilizes roles to specify the privacy privileges of services, and considers the impact of services' historical experience in playing roles on their reputation degree. Compared to traditional privacy access control approaches, this framework can make fine-grained authorization decisions, thus efficiently protecting consumers' privacy.
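The following toy sketch illustrates the general shape of a role-based privacy check with a reputation threshold. The class names, attribute sets, and reputation rule are illustrative assumptions, not the paper's design.

```python
# Minimal sketch of a role-based privacy check with a reputation
# threshold; names and the reputation rule are invented for illustration.

from dataclasses import dataclass

@dataclass
class Role:
    name: str
    allowed_attributes: set  # privacy attributes this role may read

@dataclass
class Service:
    name: str
    roles: list
    reputation: float = 1.0  # degraded by past misuse of roles (assumed rule)

def may_access(service: Service, attribute: str, min_reputation: float = 0.7) -> bool:
    """Grant access only if some role permits the attribute and the
    service's reputation from past role-playing is high enough."""
    if service.reputation < min_reputation:
        return False
    return any(attribute in r.allowed_attributes for r in service.roles)

billing = Role("billing", {"name", "address"})
svc = Service("shop-svc", [billing], reputation=0.9)
print(may_access(svc, "address"))  # True
print(may_access(svc, "ssn"))      # False
```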
Framework for Supporting Web-Based Collaborative Applications
NASA Astrophysics Data System (ADS)
Dai, Wei
The article proposes an intelligent framework for supporting Web-based applications. The framework focuses on innovative use of existing resources and technologies in the form of services, and leverages the theoretical foundations of services science and research from services computing. The main focus of the framework is to deliver benefits to users in various roles, such as service requesters, service providers, and business owners, maximizing their productivity when engaging with each other via the Web. The article opens with research motivations and questions, analyses the existing state of research in the field, and describes the approach taken in implementing the proposed framework. Finally, an e-health application is discussed to evaluate the effectiveness of the framework, in which participants such as general practitioners (GPs), patients, and health-care workers collaborate via the Web.
A Smart Modeling Framework for Integrating BMI-enabled Models as Web Services
NASA Astrophysics Data System (ADS)
Jiang, P.; Elag, M.; Kumar, P.; Peckham, S. D.; Liu, R.; Marini, L.; Hsu, L.
2015-12-01
Service-oriented computing provides an opportunity to couple web service models using semantic web technology. Through this approach, models that are exposed as web services can be maintained in their own local environments, making it easy for modelers to maintain and update them. In integrated modeling, the service-oriented loose-coupling approach requires (1) a set of models exposed as web services, (2) model metadata describing the external features of a model (e.g., variable name, unit, computational grid) and (3) a model integration framework. We present an architecture for coupling self-describing web service models by utilizing a smart modeling framework. We expose models that are encapsulated with CSDMS (Community Surface Dynamics Modeling System) Basic Model Interfaces (BMI) as web services. The BMI-enabled models are self-describing, exposing their metadata through BMI functions. After a BMI-enabled model is deployed as a service, a client can initialize, execute and retrieve the meta-information of the model by calling its BMI functions over the web. Furthermore, a revised version of EMELI (Peckham, 2014), an Experimental Modeling Environment for Linking and Interoperability, is chosen as the framework for coupling BMI-enabled web service models. EMELI allows users to combine a set of component models into a complex model by standardizing model interfaces using BMI, and provides a set of utilities that smooth the integration process (e.g., temporal interpolation). We modify the original EMELI so that the revised modeling framework is able to initialize, execute and resolve the dependencies of BMI-enabled web service models. Using the revised EMELI, we present an example of integrating a set of TopoFlow model components that are BMI-enabled and exposed as web services. Reference: Peckham, S.D. (2014) EMELI 1.0: An experimental smart modeling framework for automatic coupling of self-describing models, Proceedings of HIC 2014, 11th International Conf. on Hydroinformatics, New York, NY.
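To make the self-describing idea concrete, here is a toy model exposed through a small subset of BMI-style methods. The method names follow the published BMI convention (initialize/update/finalize plus metadata getters), but the model and its variables are invented.

```python
# Sketch of the flavor of a BMI wrapper that makes a model self-describing;
# method names follow the BMI spec, but this toy "model" is invented.

class LinearReservoirBMI:
    """Toy storage model exposed through (a subset of) BMI."""

    def initialize(self, config_file=None):
        self.k = 0.1          # outflow coefficient per step (assumed)
        self.storage = 100.0  # initial storage (assumed)
        self.time = 0

    def update(self):
        self.storage -= self.k * self.storage
        self.time += 1

    def finalize(self):
        pass

    # Self-description: a coupling framework like EMELI can query these.
    def get_component_name(self):
        return "LinearReservoir"

    def get_output_var_names(self):
        return ("reservoir__storage",)

    def get_var_units(self, name):
        return {"reservoir__storage": "m3"}[name]

    def get_value(self, name):
        return {"reservoir__storage": self.storage}[name]

model = LinearReservoirBMI()
model.initialize()
for _ in range(3):
    model.update()
print(model.get_value("reservoir__storage"), model.get_var_units("reservoir__storage"))
model.finalize()
```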
BioServices: a common Python package to access biological Web Services programmatically.
Cokelaer, Thomas; Pultz, Dennis; Harder, Lea M; Serra-Musach, Jordi; Saez-Rodriguez, Julio
2013-12-15
Web interfaces provide access to numerous biological databases, and many can be accessed programmatically via Web Services. Applications that combine several of them would benefit from a single framework. BioServices is a comprehensive Python framework that provides programmatic access to major bioinformatics Web Services (e.g. KEGG, UniProt, BioModels, ChEMBLdb). Wrapping additional Web Services based on either Representational State Transfer or Simple Object Access Protocol/Web Services Description Language technologies is eased by the use of object-oriented programming. BioServices releases and documentation are available at http://pypi.python.org/pypi/bioservices under a GPL-v3 license.
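A brief usage sketch, following the BioServices documentation of that period; exact method signatures may have changed in later releases, so treat the calls as indicative.

```python
# Brief usage sketch of BioServices (per its documentation at the time
# of the paper; signatures may have changed in later releases).

from bioservices import KEGG, UniProt

kegg = KEGG()
entry = kegg.get("hsa:7535")       # retrieve the KEGG entry for human ZAP70
print(entry[:200])

u = UniProt()
hits = u.search("zap70", limit=5)  # free-text query against UniProt
print(hits)
```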
SIDECACHE: Information access, management and dissemination framework for web services.
Doderer, Mark S; Burkhardt, Cory; Robbins, Kay A
2011-06-14
Many bioinformatics algorithms and data sets are deployed using web services so that the results can be explored via the Internet and easily integrated into other tools and services. These services often include data from other sites that is accessed either dynamically or through file downloads. Developers of these services face several problems because of the dynamic nature of the information from the upstream services. Many publicly available repositories of bioinformatics data frequently update their information. When such an update occurs, the developers of the downstream service may also need to update. For file downloads, this process is typically performed manually, followed by a web service restart. Requests for information obtained by dynamic access of upstream sources are sometimes subject to rate restrictions. SideCache provides a framework for deploying web services that integrate information extracted from other databases and from web sources that are periodically updated. This situation occurs frequently in biotechnology, where new information is being continuously generated and the latest information is important. SideCache provides several types of services, including proxy access and rate control, local caching, and automatic web service updating. We have used the SideCache framework to automate the deployment and updating of a number of bioinformatics web services and tools that extract information from remote primary sources such as NCBI, NCIBI, and Ensembl. The SideCache framework has also been used to share research results through the use of a SideCache-derived web service.
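The sketch below captures the proxy-with-local-cache-and-rate-control idea in generic form; it is not SideCache's implementation, and the interval, TTL, and URL are placeholders.

```python
# Generic sketch of a caching, rate-limited proxy in the spirit of
# SideCache; not its actual implementation. Endpoint and timings invented.

import time
import urllib.request

class CachingProxy:
    def __init__(self, min_interval=1.0, ttl=3600):
        self.min_interval = min_interval  # rate control: seconds between upstream hits
        self.ttl = ttl                    # cache lifetime in seconds
        self.cache = {}                   # url -> (fetched_at, body)
        self.last_fetch = 0.0

    def get(self, url):
        now = time.time()
        hit = self.cache.get(url)
        if hit and now - hit[0] < self.ttl:
            return hit[1]                 # serve the local copy
        wait = self.min_interval - (now - self.last_fetch)
        if wait > 0:
            time.sleep(wait)              # respect the upstream rate limit
        body = urllib.request.urlopen(url).read()
        self.last_fetch = time.time()
        self.cache[url] = (self.last_fetch, body)
        return body

proxy = CachingProxy()
data = proxy.get("https://example.org/upstream/resource")  # placeholder URL
```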
Automatic Earth observation data service based on reusable geo-processing workflow
NASA Astrophysics Data System (ADS)
Chen, Nengcheng; Di, Liping; Gong, Jianya; Yu, Genong; Min, Min
2008-12-01
A common Sensor Web data service framework for Geo-Processing Workflow (GPW) is presented as part of the NASA Sensor Web project. The framework consists of a data service node, a data processing node, a data presentation node, a Catalogue Service node and a BPEL engine. An abstract model designer is used to design the top-level GPW model, a model instantiation service is used to generate the concrete BPEL, and a BPEL execution engine is adopted. The framework is used to generate several kinds of data: raw data from live sensors, coverage or feature data, geospatial products, and sensor maps. A scenario of an EO-1 Sensor Web data service for fire classification is used to test the feasibility of the proposed framework, and the execution time and influence of the service framework are evaluated. The experiments show that this framework can improve the quality of services for sensor data retrieval and processing.
A Framework for Sharing and Integrating Remote Sensing and GIS Models Based on Web Service
Chen, Zeqiang; Lin, Hui; Chen, Min; Liu, Deer; Bao, Ying; Ding, Yulin
2014-01-01
Sharing and integrating Remote Sensing (RS) and Geographic Information System/Science (GIS) models are critical for developing practical application systems. Facilitating model sharing and model integration is a problem for model publishers and model users, respectively. To address this problem, a Web service-based framework for sharing and integrating RS and GIS models is proposed in this paper. The fundamental idea of the framework is to publish heterogeneous RS and GIS models as standard Web services for sharing and interoperation, and then to integrate the RS and GIS models using those Web services. For the former, a "black box" method and a visual method are employed to facilitate publishing the models as Web services. For the latter, model integration based on geospatial workflow and a semantics-supported matching method is introduced. Under this framework, model sharing and integration are applied to developing the Pearl River Delta water environment monitoring system. The results show that the framework can facilitate model sharing and model integration for model publishers and model users. PMID:24901016
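As a rough illustration of the "black box" publishing method, the sketch below wraps a hypothetical model executable behind an HTTP endpoint. Flask is used only for brevity; the endpoint, executable name, and parameters are assumptions, not the paper's implementation.

```python
# Minimal sketch of the "black box" idea: wrap an existing model
# executable behind a web endpoint. The model command and its
# parameters below are invented for illustration.

import subprocess
from flask import Flask, request, jsonify

app = Flask(__name__)

@app.route("/models/ndvi", methods=["POST"])
def run_ndvi_model():
    params = request.get_json()
    # Treat the model as a black box: pass inputs through, return outputs.
    result = subprocess.run(
        ["./ndvi_model", params["scene_id"]],  # hypothetical executable
        capture_output=True, text=True, check=True,
    )
    return jsonify({"scene_id": params["scene_id"], "output": result.stdout})

if __name__ == "__main__":
    app.run(port=8080)
```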
The Service Environment for Enhanced Knowledge and Research (SEEKR) Framework
NASA Astrophysics Data System (ADS)
King, T. A.; Walker, R. J.; Weigel, R. S.; Narock, T. W.; McGuire, R. E.; Candey, R. M.
2011-12-01
The Service Environment for Enhanced Knowledge and Research (SEEKR) Framework is a configurable service-oriented framework that enables the discovery, access and analysis of data shared in a community. The SEEKR framework integrates many existing independent services through the use of web technologies and standard metadata. Services are hosted using an application server and are callable via REpresentational State Transfer (REST) protocols. Messages and metadata are transferred with eXtensible Markup Language (XML) encoding conforming to a published XML schema. Space Physics Archive Search and Extract (SPASE) metadata is central to utilizing the services: resources (data, documents, software, etc.) are described with SPASE, and the associated Resource Identifier is used to access and exchange resources. The configurable options for the services can be set through a web interface. Services are packaged as web application resource (WAR) files for direct deployment on application servers such as Tomcat or Jetty. We discuss the composition of the SEEKR framework, how new services can be integrated, and the steps necessary to deploy the framework. The SEEKR Framework emerged from NASA's Virtual Magnetospheric Observatory (VMO) and other systems, and we present an overview of these systems from a SEEKR Framework perspective.
NASA Astrophysics Data System (ADS)
Du, Xiaofeng; Song, William; Munro, Malcolm
Web Services, as a new distributed system technology, have been widely adopted by industry in areas such as enterprise application integration (EAI), business process management (BPM), and virtual organisations (VO). However, the lack of semantics in the current Web Service standards has been a major barrier to service discovery and composition. In this chapter, we propose an enhanced context-based semantic service description framework (CbSSDF+) that tackles this problem and improves the flexibility of service discovery and the correctness of generated composite services. We also provide an agile transformation method to demonstrate how the various formats of Web Service descriptions on the Web can be managed and migrated step by step into CbSSDF+-based service descriptions without a large amount of engineering work. At the end of the chapter, we evaluate the applicability of the transformation method and the effectiveness of CbSSDF+ through a series of experiments.
Real-time GIS data model and sensor web service platform for environmental data management.
Gong, Jianya; Geng, Jing; Chen, Zeqiang
2015-01-09
Effective environmental data management is meaningful for human health. In the past, environmental data management involved developing a specific environmental data management system, but this method often lacks real-time data retrieval and sharing/interoperating capability. With the development of information technology, a Geospatial Service Web method has been proposed that can be employed for environmental data management. The purpose of this study is to determine a method to realize environmental data management under the Geospatial Service Web framework. A real-time GIS (Geographic Information System) data model and a Sensor Web service platform are proposed for this purpose. The real-time GIS data model manages real-time data; the Sensor Web service platform, built on Sensor Web technologies, is implemented to support the realization of the real-time GIS data model. Real-time environmental data, such as meteorological data, air quality data, soil moisture data, soil temperature data, and landslide data, are managed in the Sensor Web service platform. In addition, two use cases of real-time air quality monitoring and real-time soil moisture monitoring based on the real-time GIS data model in the Sensor Web service platform are realized and demonstrated. The total processing times in the two experiments are 3.7 s and 9.2 s, respectively. The experimental results show that integrating a real-time GIS data model with a Sensor Web service platform is an effective way to manage environmental data under the Geospatial Service Web framework.
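For reference, retrieving observations from an OGC SOS endpoint uses standard key-value-pair parameters like those below; the endpoint URL, offering, and observed property are placeholders rather than the platform's actual identifiers.

```python
# Sketch of an OGC SOS GetObservation request via standard KVP
# parameters; endpoint, offering, and property names are placeholders.

import urllib.parse
import urllib.request

params = {
    "service": "SOS",
    "version": "2.0.0",
    "request": "GetObservation",
    "offering": "air_quality",                  # assumed offering name
    "observedProperty": "pm2_5_concentration",  # assumed property name
}
url = "https://example.org/sos?" + urllib.parse.urlencode(params)
with urllib.request.urlopen(url) as resp:
    xml_payload = resp.read()                   # O&M-encoded observations
print(xml_payload[:200])
```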
NASA Astrophysics Data System (ADS)
Manuaba, I. B. P.; Rudiastini, E.
2018-01-01
Assessment of lecturers is a tool used to measure lecturer performance. Lecturer assessment can be measured from three aspects: teaching activities, research, and community service. The broad range of aspects needed to measure lecturer performance requires a special framework, so that the system can be developed in a sustainable manner. The aim of this research is to create an API web service data tool, so that the lecturer assessment system can be developed in various frameworks. The system was developed as a web service in the PHP programming language with JSON output. The conclusion of this research is that the API web service application can be consumed from several platforms, such as web and mobile applications.
Wollbrett, Julien; Larmande, Pierre; de Lamotte, Frédéric; Ruiz, Manuel
2013-04-15
In recent years, a large amount of "-omics" data have been produced. However, these data are stored in many different species-specific databases that are managed by different institutes and laboratories. Biologists often need to find and assemble data from disparate sources to perform certain analyses. Searching for these data and assembling them is a time-consuming task. The Semantic Web helps to facilitate interoperability across databases. A common approach involves the development of wrapper systems that map a relational database schema onto existing domain ontologies. However, few attempts have been made to automate the creation of such wrappers. We developed a framework, named BioSemantic, for the creation of Semantic Web Services that are applicable to relational biological databases. This framework makes use of both Semantic Web and Web Services technologies and can be divided into two main parts: (i) the generation and semi-automatic annotation of an RDF view; and (ii) the automatic generation of SPARQL queries and their integration into Semantic Web Services backbones. We have used our framework to integrate genomic data from different plant databases. BioSemantic is a framework designed to speed up the integration of relational databases. We present how it can be used to speed up the development of Semantic Web Services for existing relational biological databases. Currently, it creates and annotates RDF views that enable the automatic generation of SPARQL queries. Web Services are also created and deployed automatically, and the semantic annotations of our Web Services are added automatically using SAWSDL attributes. BioSemantic is downloadable at http://southgreen.cirad.fr/?q=content/Biosemantic. PMID:23586394
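The flavor of the automatically generated queries can be sketched with the SPARQLWrapper package; the endpoint and ontology terms below are placeholders, not BioSemantic's actual output.

```python
# Sketch of the kind of SPARQL query a BioSemantic-style service might
# issue against a generated RDF view; endpoint and terms are placeholders.

from SPARQLWrapper import SPARQLWrapper, JSON

sparql = SPARQLWrapper("https://example.org/biosemantic/sparql")  # placeholder endpoint
sparql.setQuery("""
    PREFIX up: <http://purl.uniprot.org/core/>
    SELECT ?gene ?name WHERE {
        ?gene a up:Gene ;
              up:name ?name .
    } LIMIT 10
""")
sparql.setReturnFormat(JSON)
results = sparql.query().convert()
for row in results["results"]["bindings"]:
    print(row["gene"]["value"], row["name"]["value"])
```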
Enhancing the AliEn Web Service Authentication
NASA Astrophysics Data System (ADS)
Zhu, Jianlin; Saiz, Pablo; Carminati, Federico; Betev, Latchezar; Zhou, Daicui; Mendez Lorenzo, Patricia; Grigoras, Alina Gabriela; Grigoras, Costin; Furano, Fabrizio; Schreiner, Steffen; Vladimirovna Datskova, Olga; Sankar Banerjee, Subho; Zhang, Guoping
2011-12-01
Web Services are an XML-based technology that allows applications to communicate with each other across disparate systems, and they are becoming the de facto standard for interoperability between heterogeneous processes and systems. AliEn2 is a grid environment based on web services. The AliEn2 services can be divided into three categories: central services, deployed once per organization; site services, deployed at each of the participating centers; and Job Agents, running automatically on the worker nodes. A security model to protect these services is essential for the whole system. Current web server implementations, such as Apache, are not directly suitable for use within the grid environment: Apache with mod_ssl and OpenSSL supports only X.509 certificates, whereas in the grid environment the common credential is the proxy certificate, used for restricted proxying and delegation. An authentication framework was developed for the AliEn2 web services to give the Apache Web Server the ability to accept both X.509 certificates and proxy certificates from the client side. The authentication framework also allows the generation of access control policies to limit access to the AliEn2 web services.
A Case Study in Web 2.0 Application Development
NASA Astrophysics Data System (ADS)
Marganian, P.; Clark, M.; Shelton, A.; McCarty, M.; Sessoms, E.
2010-12-01
Recent web technologies focusing on languages, frameworks, and tools are discussed, using the Robert C. Byrd Green Bank Telescope's (GBT) new Dynamic Scheduling System as the primary example. Within that example, we use a popular Python web framework, Django, to build the extensive web services for our users. We also use a second, complementary server, written in Haskell, to incorporate the core scheduling algorithms. We provide a desktop-quality experience across all the popular browsers for our users with the Google Web Toolkit and judicious use of jQuery in Django templates. Single sign-on and authentication throughout all NRAO web services are accomplished via the Central Authentication Service (CAS) protocol.
NASA Astrophysics Data System (ADS)
Roccatello, E.; Nozzi, A.; Rumor, M.
2013-05-01
This paper illustrates the key concepts behind the design and development of a framework, based on OGC services, capable of visualizing large-scale 3D geospatial data streamed over the web. WebGISes are traditionally bound to a two-dimensional, simplified representation of reality, and though they successfully address the lack of flexibility and simplicity of traditional desktop clients, a lot of effort is still needed to reach desktop GIS features such as 3D visualization. The motivation behind this work lies in the widespread availability of OGC Web Services inside government organizations and in web browsers' support for the HTML5 and WebGL standards. This delivers an improved user experience, similar to desktop applications, thereby allowing traditional WebGIS features to be augmented with a 3D visualization framework. This work can be seen as an extension of the Cityvu project, started in 2008 with the aim of a plug-in-free OGC CityGML viewer. The resulting framework has also been integrated into existing 3D GIS software products and will be made available in the coming months.
The GBT Dynamic Scheduling System: Powered by the Web
NASA Astrophysics Data System (ADS)
Marganian, P.; Clark, M.; McCarty, M.; Sessoms, E.; Shelton, A.
2009-09-01
The web technologies utilized for the Robert C. Byrd Green Bank Telescope's (GBT) new Dynamic Scheduling System are discussed, focusing on languages, frameworks, and tools. We use a popular Python web framework, TurboGears, to build the extensive web services the system provides. TurboGears is a model-view-controller framework aggregating SQLAlchemy (model), Genshi (view), and CherryPy (controller). On top of this framework, JavaScript libraries (Prototype, script.aculo.us, and jQuery) and cascading style sheets (Blueprint) are used to provide desktop-quality web pages.
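A toy sketch of the CherryPy (controller) layer at the bottom of such a stack, with an invented handler and data:

```python
# Toy sketch of a CherryPy controller exposed as a web endpoint; the
# handler name and its data are invented for illustration.

import cherrypy

class SchedulerService:
    @cherrypy.expose
    @cherrypy.tools.json_out()
    def upcoming(self):
        # In the real system this would come from the model layer (SQLAlchemy).
        return [{"session": "AGBT10A_001", "start": "2009-09-01T03:00:00"}]

if __name__ == "__main__":
    cherrypy.quickstart(SchedulerService(), "/")
    # GET http://localhost:8080/upcoming -> JSON list of sessions
```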
User Needs of Digital Service Web Portals: A Case Study
ERIC Educational Resources Information Center
Heo, Misook; Song, Jung-Sook; Seol, Moon-Won
2013-01-01
The authors examined the needs of digital information service web portal users. More specifically, the needs of Korean cultural portal users were examined as a case study. The conceptual framework of a web-based portal is that it is a complex, web-based service application with characteristics of information systems and service agents. In…
A Framework for Integrating Oceanographic Data Repositories
NASA Astrophysics Data System (ADS)
Rozell, E.; Maffei, A. R.; Beaulieu, S. E.; Fox, P. A.
2010-12-01
Oceanographic research covers a broad range of science domains and requires a tremendous amount of cross-disciplinary collaboration. Advances in cyberinfrastructure are making it easier to share data across disciplines through the use of web services and community vocabularies. Best practices in the design of web services and vocabularies to support interoperability amongst science data repositories are only starting to emerge. Strategic design decisions in these areas are crucial to the creation of end-user data and application integration tools. We present S2S, a novel framework for deploying customizable user interfaces to support the search and analysis of data from multiple repositories. Our research methods follow the Semantic Web methodology and technology development process developed by Fox et al. This methodology stresses the importance of close scientist-technologist interactions when developing scientific use cases, keeping the project well scoped and ensuring the result meets a real scientific need. The S2S framework motivates the development of standardized web services with well-described parameters, as well as the integration of existing web services and applications in the search and analysis of data. S2S also encourages the use and development of community vocabularies and ontologies to support federated search and reduce the amount of domain expertise required in the data discovery process. S2S utilizes the Web Ontology Language (OWL) to describe the components of the framework, including web service parameters, and OpenSearch as a standard description for web services, particularly search services for oceanographic data repositories. We have created search services for an oceanographic metadata database, a large set of quality-controlled ocean profile measurements, and a biogeographic search service. S2S provides an application programming interface (API) that can be used to generate custom user interfaces, supporting data and application integration across these repositories and other web resources. Although initially targeted towards a general oceanographic audience, the S2S framework shows promise in many science domains, inspired in part by the broad disciplinary coverage of oceanography. This presentation will cover the challenges addressed by the S2S framework, the research methods used in its development, and the resulting architecture for the system. It will demonstrate how S2S is remarkably extensible, and can be generalized to many science domains. Given these characteristics, the framework can simplify the process of data discovery and analysis for the end user, and can help to shift the responsibility of search interface development away from data managers.
ERIC Educational Resources Information Center
Gomez, Fabinton Sotelo; Ordóñez, Armando
2016-01-01
Previously, a framework for integrating web resources that provide educational services into dotLRN was presented. The present paper describes the application of this framework in a rural school in Cauca, Colombia. The case study includes two web resources about the topic of waves (physics), which is taught in secondary education. Web classes and…
NASA Astrophysics Data System (ADS)
Petrie, C.; Margaria, T.; Lausen, H.; Zaremba, M.
Explores trade-offs among existing approaches. Reveals strengths and weaknesses of proposed approaches, as well as which aspects of the problem are not yet covered. Introduces a software engineering approach to evaluating semantic web services. Service-Oriented Computing is one of the most promising software engineering trends because of its potential to reduce the programming effort for future distributed industrial systems. However, only a small part of this potential rests on the standardization of tools offered by the web services stack. The larger part rests upon the development of sufficient semantics to automate service orchestration. Currently there are many different approaches to semantic web service descriptions and many frameworks built around them. A common understanding, evaluation scheme, and test bed to compare and classify these frameworks in terms of their capabilities and shortcomings is necessary to make progress in developing the full potential of Service-Oriented Computing. The Semantic Web Services Challenge is an open source initiative that provides a public evaluation and certification of multiple frameworks on common, industrially relevant problem sets. This edited volume reports on the first results in developing a common understanding of the various technologies intended to facilitate the automation of mediation, choreography and discovery for Web Services using semantic annotations. Semantic Web Services Challenge: Results from the First Year is designed for a professional audience composed of practitioners and researchers in industry. Professionals can use this book to evaluate SWS technology for its potential practical use. The book is also suitable for advanced-level students in computer science.
A flexible geospatial sensor observation service for diverse sensor data based on Web service
NASA Astrophysics Data System (ADS)
Chen, Nengcheng; Di, Liping; Yu, Genong; Min, Min
Achieving a flexible and efficient geospatial Sensor Observation Service (SOS) is difficult, given the diversity of sensor networks, the heterogeneity of sensor data storage, and the differing requirements of users. This paper describes the development of a service-oriented, multi-purpose SOS framework. The goal is to create a single method of access to the data by integrating the sensor observation service with other Open Geospatial Consortium (OGC) services — Catalogue Service for the Web (CSW), Transactional Web Feature Service (WFS-T) and Transactional Web Coverage Service (WCS-T). The framework includes an extensible sensor data adapter, an OGC-compliant geospatial SOS, a geospatial catalogue service, a WFS-T and a WCS-T for the SOS, and a geospatial sensor client. The extensible sensor data adapter finds, stores, and manages sensor data from live sensors, sensor models, and simulation systems. Abstract factory design patterns are used during design and implementation. A sensor observation service compatible with the OGC Sensor Web Enablement (SWE) framework is designed, following the OGC "core" and "transaction" specifications. It is implemented using Java servlet technology, can be easily deployed in any Java servlet container, and is automatically exposed for discovery using Web Service Description Language (WSDL). Interaction sequences between a Sensor Web data consumer and an SOS, between a producer and an SOS, and between an SOS and a CSW are described in detail. The framework has been successfully demonstrated in application scenarios for EO-1 observations, weather observations, and water height gauge observations.
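The abstract factory idea mentioned above can be sketched as follows: one factory per sensor data source family, all producing a common adapter interface. The class names and sources are illustrative, not the paper's Java design.

```python
# Sketch of the abstract-factory pattern for an extensible sensor data
# adapter; class names and data sources are invented for illustration.

from abc import ABC, abstractmethod

class SensorDataAdapter(ABC):
    @abstractmethod
    def read_observations(self):
        """Return observations in a common internal format."""

class LiveSensorAdapter(SensorDataAdapter):
    def read_observations(self):
        return [{"source": "live", "value": 21.5}]

class SimulationAdapter(SensorDataAdapter):
    def read_observations(self):
        return [{"source": "simulation", "value": 20.9}]

class AdapterFactory(ABC):
    @abstractmethod
    def create_adapter(self) -> SensorDataAdapter: ...

class LiveSensorFactory(AdapterFactory):
    def create_adapter(self):
        return LiveSensorAdapter()

class SimulationFactory(AdapterFactory):
    def create_adapter(self):
        return SimulationAdapter()

def ingest(factory: AdapterFactory):
    # The SOS layer depends only on the abstract interfaces.
    return factory.create_adapter().read_observations()

print(ingest(LiveSensorFactory()))
print(ingest(SimulationFactory()))
```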
Pragmatic service development and customisation with the CEDA OGC Web Services framework
NASA Astrophysics Data System (ADS)
Pascoe, Stephen; Stephens, Ag; Lowe, Dominic
2010-05-01
The CEDA OGC Web Services framework (COWS) emphasises rapid service development by providing a lightweight layer of OGC web service logic on top of Pylons, a mature web application framework for the Python language. This approach gives developers a flexible web service development environment without compromising access to the full range of web application tools and patterns: the Model-View-Controller paradigm, XML templating, Object-Relational-Mapper integration and authentication/authorization. We have found this approach useful for exploring evolving standards and implementing protocol extensions to meet the requirements of operational deployments. This paper outlines how COWS is being used to implement customised WMS, WCS, WFS and WPS services in a variety of web applications, from experimental prototypes to load-balanced cluster deployments serving 10-100 simultaneous users. In particular we cover: 1) the use of Climate Science Modeling Language (CSML) in complex-feature-aware WMS, WCS and WFS services; 2) extending WMS to support applications with features specific to earth system science; and 3) a cluster-enabled Web Processing Service (WPS) supporting asynchronous data processing. The COWS WPS underpins all backend services in the UK Climate Projections User Interface, where users can extract, plot and further process outputs from a multi-dimensional probabilistic climate model dataset. The COWS WPS supports cluster job execution, result caching, execution time estimation and user management. The COWS WMS and WCS components drive the project-specific NCEO and QESDI portals developed by the British Atmospheric Data Centre. These portals use CSML as a backend description format and implement features such as multiple WMS layer dimensions and climatology axes that are beyond the scope of general-purpose GIS tools, yet vital for atmospheric science applications.
Semantics-enabled service discovery framework in the SIMDAT pharma grid.
Qu, Cangtao; Zimmermann, Falk; Kumpf, Kai; Kamuzinzi, Richard; Ledent, Valérie; Herzog, Robert
2008-03-01
We present the design and implementation of a semantics-enabled service discovery framework in the SIMDAT Pharma Grid (data Grids for process and product development using numerical simulation and knowledge discovery), an industry-oriented Grid environment for integrating thousands of Grid-enabled biological data services and analysis services. The framework consists of three major components: a biological domain ontology based on the Web Ontology Language (OWL) with description logic (DL), service annotation based on the OWL Web service ontology (OWL-S), and a semantic matchmaker based on ontology reasoning. Built upon this framework, workflow technologies are extensively exploited in SIMDAT to assist biologists in (semi-)automatically performing in silico experiments. We present a typical usage scenario through the case study of a biological workflow: IXodus.
Can They Plan to Teach with Web 2.0? Future Teachers' Potential Use of the Emerging Web
ERIC Educational Resources Information Center
Kale, Ugur
2014-01-01
This study examined pre-service teachers' potential use of Web 2.0 technologies for teaching. A coding scheme incorporating the Technological Pedagogical Content Knowledge (TPACK) framework guided the analysis of pre-service teachers' Web 2.0-enhanced learning activity descriptions. The results indicated that while pre-service teachers were able…
Web Services Provide Access to SCEC Scientific Research Application Software
NASA Astrophysics Data System (ADS)
Gupta, N.; Gupta, V.; Okaya, D.; Kamb, L.; Maechling, P.
2003-12-01
Web services offer scientific communities a new paradigm for sharing research codes and communicating results. While there are formal technical definitions of what constitutes a web service, for a user community such as the Southern California Earthquake Center (SCEC) we may conceptually consider a web service to be functionality provided on demand by an application run on a remote computer located elsewhere on the Internet. The value of a web service is that it can (1) run a scientific code without the user needing to install and learn the intricacies of running the code; (2) provide the technical framework which allows a user's computer to talk to the remote computer which performs the service; (3) provide the computational resources to run the code; and (4) bundle several analysis steps and provide the end results in digital or (post-processed) graphical form. Within an NSF-sponsored ITR project coordinated by SCEC, we are constructing web services using architectural protocols and programming languages (e.g., Java). However, because the SCEC community has a rich pool of scientific research software (written in traditional languages such as C and FORTRAN), we also emphasize making existing scientific codes available by constructing web service frameworks which wrap around and directly run these codes, thereby broadening community usage. Web service wrapping of a scientific code can be done using a "web servlet" construction or a SOAP/WSDL-based framework; the latter approach is widely adopted in IT circles, although it is subject to rapid evolution. Our wrapping framework attempts to "honor" the original codes with as little modification as possible. For versatility we identify three methods of user access: (A) a web-based GUI (written in HTML and/or Java applets); (B) a Linux/OSX/UNIX command-line "initiator" utility (shell-scriptable); and (C) direct access from within any Java application (and, with the correct API interface, from within C++ and/or C/Fortran). This poster presentation will provide descriptions of the following selected web services and their origins as scientific application codes: 3D community velocity models for Southern California, geocoordinate conversions (latitude/longitude to UTM), execution of GMT graphical scripts, data format conversions (Gocad to Matlab format), and implementation of Seismic Hazard Analysis application programs that calculate hazard curve and hazard map data sets.
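As an illustration of calling such a SOAP/WSDL-wrapped code from a client, the sketch below uses the Python zeep library against a hypothetical WSDL; the service URL and operation name are assumptions, not SCEC's actual interface.

```python
# Hypothetical client for a SOAP/WSDL-wrapped scientific code; the WSDL
# URL and the ConvertLatLonToUTM operation are invented for illustration.

from zeep import Client

client = Client("https://example.org/geocoord?wsdl")  # placeholder WSDL
# zeep builds Python methods from the WSDL's operations automatically.
result = client.service.ConvertLatLonToUTM(lat=34.05, lon=-118.25)
print(result)
```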
[Research on tumor information grid framework].
Zhang, Haowei; Qin, Zhu; Liu, Ying; Tan, Jianghao; Cao, Haitao; Chen, Youping; Zhang, Ke; Ding, Yuqing
2013-10-01
In order to realize tumor disease information sharing and unified management, we utilized grid technology to effectively integrate the data and software resources distributed across various medical institutions, making these heterogeneous resources consistent and interoperable in both semantic and syntactic aspects. This article describes the tumor grid framework, in which service types are described in Web Service Description Language (WSDL) and XML Schema Definition (XSD), and clients use the serialized documents to operate on the distributed resources. The service objects are modeled in Unified Modeling Language (UML) as middleware to create application programming interfaces. All of the grid resources are registered in the index and released in the form of Web Services based on the Web Services Resource Framework (WSRF). Using the system, we can build a multi-center, large-sample, networked tumor disease resource-sharing framework to improve the level of development of medical scientific research institutions and patients' quality of life.
PaaS for web applications with OpenShift Origin
NASA Astrophysics Data System (ADS)
Lossent, A.; Rodriguez Peon, A.; Wagner, A.
2017-10-01
The CERN Web Frameworks team has deployed OpenShift Origin to facilitate the deployment of web applications and to improve efficiency in terms of computing resource usage. OpenShift leverages Docker containers and Kubernetes orchestration to provide a Platform-as-a-Service solution oriented toward web applications. We will review use cases and how OpenShift was integrated with other services such as source control, web site management and authentication services.
Next generation of weather generators on web service framework
NASA Astrophysics Data System (ADS)
Chinnachodteeranun, R.; Hung, N. D.; Honda, K.; Ines, A. V. M.
2016-12-01
A weather generator is a statistical model that synthesizes possible realizations of long-term historical weather for the future. It stochastically generates several tens to hundreds of realizations based on statistical analysis. Realizations are essential inputs to crop models for simulating crop growth and yield. Moreover, they contribute to analyzing the uncertainty that weather introduces into crop development stages and into decision support systems for, e.g., water management and fertilizer management. Performing crop modeling requires multidisciplinary skills, which has limited the usage of a weather generator to the research group that developed it and poses a barrier to newcomers. To improve the procedure of running weather generators, and to standardize the way realizations are acquired, we implemented a framework that provides weather generators as web services supporting service interoperability. Legacy weather generator programs were wrapped in the web service framework. The service interfaces were implemented based on an international standard, the Sensor Observation Service (SOS) defined by the Open Geospatial Consortium (OGC). Clients can request realizations generated by the model through the SOS Web service. The hierarchical data preparation processes required for the weather generator are also implemented as web services and seamlessly wired together. Analysts and applications can easily invoke the services over a network. The services facilitate the development of agricultural applications, reduce the workload of analysts on iterative data preparation, and handle legacy weather generator programs. This architectural design and implementation can serve as a prototype for constructing further services on top of an interoperable sensor network system. The framework opens an opportunity for other sectors, such as application developers and scientists in other fields, to utilize weather generators.
A study of an adaptive replication framework for orchestrated composite web services.
Mohamed, Marwa F; Elyamany, Hany F; Nassar, Hamed M
2013-01-01
Replication is considered one of the most important techniques to improve the Quality of Service (QoS) of published Web Services. It has achieved impressive success in managing resource sharing and usage in order to moderate the energy consumed in IT environments. For a robust and successful replication process, attention should be paid to the suitable time as well as the constraints and capabilities under which the process runs. The replication process is time-consuming, since outsourcing new replicas to other hosts is lengthy. Furthermore, most of the business processes implemented over the Web today are composed of multiple Web services working together in two main styles: orchestration and choreography. Accomplishing replication over such business processes is a further challenge due to the complexity and flexibility involved. In this paper, we present an adaptive replication framework for regular and orchestrated composite Web services. The suggested framework includes a number of components for detecting unexpected and undesirable events that might occur when consuming the original published web services, including failure or overloading. It also includes a replication controller to manage the replication process and select the best host on which to place a new replica. In addition, it includes a component for predicting the incoming load in order to decrease the time needed for outsourcing new replicas, greatly enhancing performance. A simulation environment has been created to measure the performance of the suggested framework. The results indicate that adaptive replication with the prediction scenario is the best option for enhancing the performance of the replication process in an online business environment.
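A toy sketch of the load-prediction component's role: forecast incoming load (here with a simple moving average, an assumption rather than the paper's predictor) and provision replicas ahead of demand.

```python
# Toy sketch of load prediction driving replication; the thresholds and
# the moving-average forecast are assumptions, not the paper's design.

from collections import deque

class ReplicationController:
    def __init__(self, window=5, capacity=100.0):
        self.samples = deque(maxlen=window)  # recent requests/sec samples
        self.capacity = capacity             # what one replica can serve

    def observe(self, requests_per_sec):
        self.samples.append(requests_per_sec)

    def predicted_load(self):
        return sum(self.samples) / len(self.samples) if self.samples else 0.0

    def replicas_needed(self):
        # Provision ahead of demand based on the forecast.
        return max(1, int(self.predicted_load() // self.capacity) + 1)

ctrl = ReplicationController()
for load in [40, 80, 150, 210, 260]:
    ctrl.observe(load)
print(ctrl.predicted_load(), ctrl.replicas_needed())
```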
A web service framework for astronomical remote observation in Antarctica by using satellite link
NASA Astrophysics Data System (ADS)
Jia, M.-h.; Chen, Y.-q.; Zhang, G.-y.; Jiang, P.; Zhang, H.; Wang, J.
2018-07-01
Many telescopes are deployed in Antarctica because it offers excellent astronomical observation conditions. However, because Antarctica's environment is harsh to humans, remote operation of telescopes is necessary for observation. Furthermore, communication with devices in Antarctica over a satellite link with low bandwidth and high latency limits the effectiveness of remote observation. This paper introduces a web service framework for remote astronomical observation in Antarctica. The framework is based on Python Tornado. RTS2-HTTPD and REDIS are used as the access interfaces to the telescope control system in Antarctica. The web service provides real-time updates through WebSocket. To improve the user experience and control effectiveness under poor satellite link conditions, an agent server is deployed on the mainland to synchronize the Antarctic server's data and serve it to domestic users in China; the agent server forwards domestic users' requests to the Antarctic master server. The web service was deployed and tested on the Bright Star Survey Telescope (BSST) in Antarctica. Results show that the service meets the demands of real-time, multi-user remote observation and that domestic users have a better remote operation experience.
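A minimal Tornado sketch of the real-time push mechanism described above, assuming a WebSocket handler that broadcasts telescope status to connected clients; the endpoint and message content are invented.

```python
# Minimal sketch of a Tornado WebSocket push channel; the /ws endpoint
# and the status payload are invented for illustration.

import tornado.ioloop
import tornado.web
import tornado.websocket

class StatusSocket(tornado.websocket.WebSocketHandler):
    clients = set()

    def open(self):
        StatusSocket.clients.add(self)

    def on_close(self):
        StatusSocket.clients.discard(self)

    @classmethod
    def broadcast(cls, message):
        for client in cls.clients:
            client.write_message(message)

if __name__ == "__main__":
    app = tornado.web.Application([(r"/ws", StatusSocket)])
    app.listen(8888)
    # Push a fake status update to every connected client every 5 seconds.
    tornado.ioloop.PeriodicCallback(
        lambda: StatusSocket.broadcast('{"telescope": "BSST", "status": "tracking"}'),
        5000,
    ).start()
    tornado.ioloop.IOLoop.current().start()
```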
Security and Dependability Solutions for Web Services and Workflows
NASA Astrophysics Data System (ADS)
Kokolakis, Spyros; Rizomiliotis, Panagiotis; Benameur, Azzedine; Sinha, Smriti Kumar
In this chapter we present an innovative approach towards the design and application of Security and Dependability (S&D) solutions for Web services and service-based workflows. Recently, several standards have been published that prescribe S&D solutions for Web services, e.g. OASIS WS-Security. However, the application of these solutions in specific contexts has proven problematic. We propose a new framework for the application of such solutions based on the SERENITY S&D Pattern concept. An S&D Pattern comprises all the necessary information for the implementation, verification, deployment, and active monitoring of an S&D solution. Thus, system developers may rely on proven solutions that are dynamically deployed and monitored by the Serenity Runtime Framework. Finally, we further extend this approach to cover the case of executable workflows which are realised through the orchestration of Web services.
Designing Crop Simulation Web Service with Service Oriented Architecture Principle
NASA Astrophysics Data System (ADS)
Chinnachodteeranun, R.; Hung, N. D.; Honda, K.
2015-12-01
Crop simulation models are efficient tools for simulating crop growth processes and yield. Running crop models requires data from various sources as well as time-consuming data processing, such as data quality checking and data formatting, before those data can be input to the model. This has limited the use of crop modeling to crop modelers. We aim to make running crop models convenient for a wider range of users so that the utilization of crop models expands, directly improving agricultural applications. As a first step, we developed a prototype that runs DSSAT on the Web, called Tomorrow's Rice (v. 1). It predicts rice yields based on a planting date, rice variety and soil characteristics using the DSSAT crop model. A user only needs to select a planting location on the Web GUI; the system then queries historical weather data from available sources and returns the expected yield. Currently, we are working on weather data connections via the Sensor Observation Service (SOS) interface defined by the Open Geospatial Consortium (OGC). Weather data can be automatically connected to a weather generator for generating weather scenarios for running the crop model. To expand these services further, we are designing a web service framework consisting of layers of web services to support the composition and execution of crop simulations. This framework allows a third-party application to call and cascade each service as needed for data preparation and for running the DSSAT model, using a dynamic web service mechanism. The framework has a module to manage data format conversion, which means users do not need to spend their time curating data inputs. Dynamic linking of data sources and services is implemented using the Service Component Architecture (SCA). This agricultural web service platform demonstrates interoperability of weather data through the SOS interface, convenient connections between weather data sources and a weather generator, and the chaining of various services for running crop models for decision support.
AMBIT RESTful web services: an implementation of the OpenTox application programming interface.
Jeliazkova, Nina; Jeliazkov, Vedrin
2011-05-16
The AMBIT web services package is one of several existing independent implementations of the OpenTox Application Programming Interface and is built according to the principles of the Representational State Transfer (REST) architecture. The Open Source Predictive Toxicology Framework, developed by the partners in the EC FP7 OpenTox project, aims at providing unified access to toxicity data and predictive models, as well as validation procedures. This is achieved by i) an information model, based on a common OWL-DL ontology; ii) links to related ontologies; and iii) data and algorithms, available through a standardized REST web services interface, where every compound, data set or predictive method has a unique web address, used to retrieve its Resource Description Framework (RDF) representation or initiate the associated calculations. The AMBIT web services package has been developed as an extension of AMBIT modules, adding the ability to create (Quantitative) Structure-Activity Relationship (QSAR) models and providing an OpenTox API compliant interface. The representation of data and processing resources in the W3C Resource Description Framework facilitates integrating the resources as Linked Data. By uploading datasets with chemical structures and an arbitrary set of properties, they become automatically available online in several formats. The services provide unified interfaces to several descriptor calculation, machine learning and similarity searching algorithms, as well as to applicability domain and toxicity prediction models. All Toxtree modules for predicting the toxicological hazard of chemical compounds are also integrated within this package. The complexity and diversity of the processing is reduced to the simple paradigm "read data from a web address, perform processing, write to a web address". The online service allows users to easily run predictions without installing any software, as well as to share online datasets and models. The downloadable web application allows researchers to set up an arbitrary number of service instances for specific purposes and at suitable locations. These services can be used as a distributed framework for processing resource-intensive tasks and sharing data, or in a fully independent way, according to specific needs. The advantage of exposing the functionality via the OpenTox API is seamless interoperability, not only within a single web application but also in a network of distributed services. Last but not least, the services provide a basis for building web mashups and end-user applications with friendly GUIs, as well as for embedding the functionalities in existing workflow systems. PMID:21575202
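The "read, process, write" paradigm can be sketched with plain HTTP calls; the host and resource paths below are placeholders, not a live AMBIT instance.

```python
# Sketch of the "read from a web address, process, write to a web
# address" paradigm of an OpenTox-style REST API; the host and exact
# resource paths are placeholders, not a live AMBIT instance.

import urllib.parse
import urllib.request

BASE = "https://ambit.example.org"  # placeholder service root

# Read: every compound, dataset, or model has its own web address.
req = urllib.request.Request(
    BASE + "/compound/1",
    headers={"Accept": "chemical/x-mdl-sdfile"},  # content negotiation
)
sdf = urllib.request.urlopen(req).read()
print(sdf[:120])

# Process: POSTing a dataset URI to a model URI starts a calculation
# and returns the web address of the result.
form = urllib.parse.urlencode({"dataset_uri": BASE + "/dataset/1"}).encode()
result_uri = urllib.request.urlopen(BASE + "/model/1", data=form).read()
print(result_uri)
```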
Nunes, David; Tran, Thanh-Dien; Raposo, Duarte; Pinto, André; Gomes, André; Silva, Jorge Sá
2012-01-01
As the Internet has evolved, social networks (such as Facebook) have bloomed and brought together an astonishing number of users. Mashing up mobile phones and sensors with these social environments enables the creation of people-centric sensing systems which have great potential for expanding our current social networking usage. However, such systems also have many associated technical challenges, such as privacy concerns, activity detection mechanisms or intermittent connectivity, as well as limitations due to the heterogeneity of sensor nodes and networks. Considering the openness of the Web 2.0, good technical solutions for these cases consist of frameworks that expose sensing data and functionalities as common Web-Services. This paper presents our RESTful Web Service-based model for people-centric sensing frameworks, which uses sensors and mobile phones to detect users’ activities and locations, sharing this information amongst the user’s friends within a social networking site. We also present some screenshot results of our experimental prototype. PMID:22438732
NASA Astrophysics Data System (ADS)
Yang, C.; Wong, D. W.; Phillips, T.; Wright, R. A.; Lindsey, S.; Kafatos, M.
2005-12-01
As a teamed partnership of the Center for Earth Observing and Space Research (CEOSR) at George Mason University (GMU), the Virginia Department of Transportation (VDOT), the Bureau of Transportation Statistics at the Department of Transportation (BTS/DOT), and Intergraph, we established Transportation Framework Data Services using the Open Geospatial Consortium (OGC) Web Feature Service (WFS) Specification to enable the sharing of transportation data at the federal level with data from BTS/DOT, at the state level through VDOT, and in industry through Intergraph. CEOSR develops WFS solutions using Intergraph software. Relevant technical documents are also developed and disseminated through the partners. The WFS is integrated with operational geospatial systems at CEOSR and VDOT. CEOSR works with Intergraph on developing WFS solutions and technical documents. The GeoMedia WebMap WFS toolkit is used with software and technical support from Intergraph. The ESRI ArcIMS WFS connector is used with GMU's campus license of ESRI products. Tested solutions are integrated with framework data service operational systems, including 1) CEOSR's interoperable geospatial information services, FGDC clearinghouse node, Geospatial One Stop (GOS) portal, and WMS services, 2) VDOT's state transportation data and GIS infrastructure, and 3) BTS/DOT's national transportation data. The project: 1) develops and deploys an operational OGC WFS 1.1 interface at CEOSR for registering with the FGDC/GOS Portal and responding to Web "POST" requests for transportation framework data as listed in Table 1; 2) builds a WFS service that can return data conforming to the drafted ANSI/INCITS L1 Standard (when available) for each identified theme in the format given by OGC Geography Markup Language (GML) Version 3.0 or higher; 3) integrates the OGC WFS with CEOSR's clearinghouse nodes; 4) establishes a formal partnership to develop and share WFS-based geospatial interoperability technology among GMU, VDOT, BTS/DOT, and Intergraph; and 5) develops WFS-based solutions and technical documents using the GeoMedia WebMap WFS toolkit. The geospatial Web Feature Service is demonstrated to be more efficient in sharing vector data and supports direct Internet access to transportation data. The developed WFS solutions also enhanced the interoperable service provided by CEOSR through the FGDC clearinghouse node and the GOS Portal.
NASA Astrophysics Data System (ADS)
Dumitrescu, Catalin; Nowack, Andreas; Padhi, Sanjay; Sarkar, Subir
2010-04-01
This paper presents a web-based Job Monitoring framework for individual Grid sites that allows users to follow their jobs in detail in quasi-real time. The framework consists of several independent components: (a) a set of sensors that run on the site CE and worker nodes and update a database, (b) a simple yet extensible web services framework and (c) an Ajax-powered web interface having a look-and-feel and control similar to a desktop application. The monitoring framework supports LSF, Condor and PBS-like batch systems. This is one of the first monitoring systems where an X.509 authenticated web interface can be seamlessly accessed by both end-users and site administrators. While a site administrator has access to all the possible information, a user can only view the jobs for the Virtual Organizations (VO) he/she is a part of. The monitoring framework design supports several possible deployment scenarios. For a site running a supported batch system, the system may be deployed as a whole, or existing site sensors can be adapted and reused with the web services components. A site may even prefer to build the web server independently and choose to use only the Ajax-powered web interface. Finally, the system is being used to monitor a glideinWMS instance. This broadens the scope significantly, allowing it to monitor jobs over multiple sites.
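As a rough illustration of the sensor component described above, the sketch below polls an HTCondor batch system and pushes job states to a collector web service. The condor_q invocation follows standard HTCondor usage, but the collector URL and JSON payload are assumptions, not the paper's actual protocol.

    import subprocess
    import requests

    # Hypothetical collector endpoint; in the paper's framework, sensors update a site database.
    MONITOR_URL = "http://ce.example.org/jobmon/update"

    def condor_jobs():
        """List (cluster_id, owner, status) tuples from the Condor batch system."""
        out = subprocess.run(
            ["condor_q", "-format", "%d ", "ClusterId",
             "-format", "%s ", "Owner",
             "-format", "%d\n", "JobStatus"],
            capture_output=True, text=True, check=True).stdout
        return [line.split() for line in out.splitlines() if line.strip()]

    jobs = [{"id": j[0], "owner": j[1], "status": j[2]} for j in condor_jobs()]
    requests.post(MONITOR_URL, json=jobs, timeout=10)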
A resource-oriented web service for environmental modeling
NASA Astrophysics Data System (ADS)
Ferencik, Ioan
2013-04-01
Environmental modeling is a widely adopted practice in the study of natural phenomena. Environmental models can be difficult to build and use, and thus sharing them within the community is an important aspect. The most common approach to sharing a model is to expose it as a web service. In practice, the interaction with such a web service is cumbersome due to the lack of a standardized contract and the complexity of the model being exposed. In this work we investigate the use of a resource-oriented approach to exposing environmental models as web services. We view a model as a layered resource built atop the object concept from Object Oriented Programming, augmented with persistence capabilities provided by an embedded object database to keep track of its state, and implementing the four basic principles of resource-oriented architectures: addressability, statelessness, representation and uniform interface. For implementation we use exclusively open source software: the Django framework, the dyBase object-oriented database and the Python programming language. We developed a generic framework of resources structured into a hierarchy of types and extended this typology with resources specific to the domain of environmental modeling. To test our web service we used cURL, a robust command-line web client.
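The four principles named above can be sketched in a few lines. The example below uses Flask rather than the authors' Django/dyBase stack, and all resource names are hypothetical; it only illustrates addressability (each model has its own URL), a uniform interface (plain GET/PUT), representations (JSON) and statelessness (each request is self-contained).

    from flask import Flask, jsonify, request

    app = Flask(__name__)
    # In-memory stand-in for the embedded object database.
    models = {"runoff": {"state": "idle", "parameters": {}}}

    @app.get("/models/<name>")             # addressability + uniform interface (GET)
    def representation(name):
        return jsonify(models[name])       # representation (JSON)

    @app.put("/models/<name>/parameters")  # uniform interface (PUT)
    def configure(name):
        models[name]["parameters"] = request.get_json()
        return "", 204                     # statelessness: all context is in the request

    if __name__ == "__main__":
        app.run()

Such a resource can then be exercised with cURL, as the authors did, e.g. curl http://localhost:5000/models/runoff.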
BioSWR – Semantic Web Services Registry for Bioinformatics
Repchevsky, Dmitry; Gelpi, Josep Ll.
2014-01-01
Despite the variety of available Web services registries specifically aimed at Life Sciences, their scope is usually restricted to a limited set of well-defined types of services. While dedicated registries are generally tied to a particular format, general-purpose ones are more adherent to standards and usually rely on the Web Service Definition Language (WSDL). Although WSDL is flexible enough to support common Web service types, its lack of semantic expressiveness led to various initiatives to describe Web services via ontology languages. Nevertheless, WSDL 2.0 descriptions gained a standard representation based on the Web Ontology Language (OWL). BioSWR is a novel Web services registry that provides standard Resource Description Framework (RDF) based Web services descriptions along with the traditional WSDL based ones. The registry provides a Web-based interface for Web service registration, querying and annotation, and is also accessible programmatically via a Representational State Transfer (REST) API or via SPARQL Protocol and RDF Query Language. The BioSWR server is located at http://inb.bsc.es/BioSWR/ and its code is available at https://sourceforge.net/projects/bioswr/ under the LGPL license. PMID:25233118
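Since the registry is queryable via SPARQL, a client script might look like the following sketch. The endpoint path is an assumption (only the server root is given above), and the prefix follows the W3C WSDL 2.0 RDF mapping vocabulary; treat both as illustrative.

    import requests

    # The SPARQL endpoint path under the registry root is an assumption.
    ENDPOINT = "http://inb.bsc.es/BioSWR/sparql"

    QUERY = """
    PREFIX wsdl: <http://www.w3.org/ns/wsdl-rdf#>
    SELECT ?service WHERE { ?service a wsdl:Description } LIMIT 10
    """

    resp = requests.get(ENDPOINT, params={"query": QUERY},
                        headers={"Accept": "application/sparql-results+json"})
    for row in resp.json()["results"]["bindings"]:
        print(row["service"]["value"])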
NASA Astrophysics Data System (ADS)
Zhang, Dengrong; Yu, Le
2006-10-01
An approach to constructing a data node in a spatial information grid (SIG) based on the Web Service Resource Framework (WSRF) and Web Service Notification (WSN) is described in this paper. Attention is paid to constructing and implementing the SIG's resource layer, which is the most important part. A study of this layer finds that persistent interaction with service clients is impossible in the common SIG architecture, because it inherits the "stateless" and "non-persistent" limitations of Web Services. A WSRF/WSN-based data node is designed to overcome these shortcomings. Three different access modes are employed to test the availability of this node. Experimental results demonstrate that this service node can successfully respond to standard OGC requests and return specific spatial data in different network environments, and that it is stateful, dynamic and persistent.
The OGC Sensor Web Enablement framework
NASA Astrophysics Data System (ADS)
Cox, S. J.; Botts, M.
2006-12-01
Sensor observations are at the core of natural sciences. Improvements in data-sharing technologies offer the promise of much greater utilisation of observational data. A key to this is interoperable data standards. The Open Geospatial Consortium's (OGC) Sensor Web Enablement initiative (SWE) is developing open standards for web interfaces for the discovery, exchange and processing of sensor observations, and tasking of sensor systems. The goal is to support the construction of complex sensor applications through real-time composition of service chains from standard components. The framework is based around a suite of standard interfaces, and standard encodings for the messages transferred between services. The SWE interfaces include: Sensor Observation Service (SOS)-parameterized observation requests (by observation time, feature of interest, property, sensor); Sensor Planning Service (SPS)-tasking a sensor system to undertake future observations; Sensor Alert Service (SAS)-subscription to an alert, usually triggered by a sensor result exceeding some value. The interface design generally follows the pattern established in the OGC Web Map Service (WMS) and Web Feature Service (WFS) interfaces, where the interaction between a client and service follows a standard sequence of requests and responses. The first obtains a general description of the service capabilities, followed by obtaining the detail required to formulate a data request, and finally a request for a data instance or stream. These may be implemented in a stateless "REST" idiom, or using conventional "web-services" (SOAP) messaging. In a deployed system, the SWE interfaces are supplemented by Catalogue, data (WFS) and portrayal (WMS) services, as well as authentication and rights management. The standard SWE data formats are Observations and Measurements (O&M), which encodes observation metadata and results, Sensor Model Language (SensorML), which describes sensor systems, Transducer Model Language (TML), which covers low-level data streams, and domain-specific GML Application Schemas for definitions of the target feature types. The SWE framework has been demonstrated in several interoperability testbeds. These were based around emergency management, security, contamination and environmental monitoring scenarios.
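The capabilities-then-data request sequence described above can be sketched for an SOS as follows; the endpoint, offering and property identifiers are invented, and the key-value-pair parameters follow the SOS 1.0-style pattern.

    import requests

    SOS = "http://sensors.example.org/sos"  # hypothetical SOS endpoint

    # 1. Obtain a general description of the service capabilities.
    caps = requests.get(SOS, params={"service": "SOS",
                                     "request": "GetCapabilities"})

    # 2. Request observations for one offering/property (KVP encoding).
    obs = requests.get(SOS, params={
        "service": "SOS",
        "version": "1.0.0",
        "request": "GetObservation",
        "offering": "TEMPERATURE",
        "observedProperty": "urn:ogc:def:property:OGC::Temperature",
        "responseFormat": "text/xml;subtype=\"om/1.0.0\"",
    })
    print(obs.text)  # an O&M document with observation metadata and results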
Neuroimaging, Genetics, and Clinical Data Sharing in Python Using the CubicWeb Framework
Grigis, Antoine; Goyard, David; Cherbonnier, Robin; Gareau, Thomas; Papadopoulos Orfanos, Dimitri; Chauvat, Nicolas; Di Mascio, Adrien; Schumann, Gunter; Spooren, Will; Murphy, Declan; Frouin, Vincent
2017-01-01
In neurosciences or psychiatry, the emergence of large multi-center population imaging studies raises numerous technological challenges. From distributed data collection, across different institutions and countries, to final data publication service, one must handle the massive, heterogeneous, and complex data from genetics, imaging, demographics, or clinical scores. These data must be both efficiently obtained and downloadable. We present a Python solution, based on the CubicWeb open-source semantic framework, aimed at building population imaging study repositories. In addition, we focus on the tools developed around this framework to overcome the challenges associated with data sharing and collaborative requirements. We describe a set of three highly adaptive web services that transform the CubicWeb framework into a (1) multi-center upload platform, (2) collaborative quality assessment platform, and (3) publication platform endowed with massive-download capabilities. Two major European projects, IMAGEN and EU-AIMS, are currently supported by the described framework. We also present a Python package that enables end users to remotely query neuroimaging, genetics, and clinical data from scripts. PMID:28360851
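The abstract does not name the scripting package's API, so the following is a purely hypothetical sketch of what a remote query script could look like, using CubicWeb's RQL query language over HTTP; the portal URL, endpoint path, payload shape and entity names are all invented.

    import requests

    PORTAL = "https://imaging-study.example.org"  # invented repository URL

    # RQL is CubicWeb's query language; the entity and relation names here
    # are invented, as is the endpoint path.
    rql = "Any S, A WHERE S is Subject, S age A, S handedness 'right'"

    resp = requests.post(PORTAL + "/query",
                         json={"rql": rql},
                         auth=("user", "token"))
    print(resp.json())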
Discovery Mechanisms for the Sensor Web
Jirka, Simon; Bröring, Arne; Stasch, Christoph
2009-01-01
This paper addresses the discovery of sensors within the OGC Sensor Web Enablement framework. Whereas services like the OGC Web Map Service or Web Coverage Service are already well supported through catalogue services, the field of sensor networks and the corresponding discovery mechanisms is still a challenge. The focus of this article is on the use of existing OGC Sensor Web components for realizing a discovery solution. After discussing the requirements for a Sensor Web discovery mechanism, an approach is presented that was developed within the EU-funded project “OSIRIS”. This solution offers mechanisms to search for sensors, exploit basic semantic relationships, harvest sensor metadata and integrate sensor discovery into already existing catalogues. PMID:22574038
Web Service Architecture Framework for Embedded Devices
ERIC Educational Resources Information Center
Yanzick, Paul David
2009-01-01
The use of Service Oriented Architectures, namely web services, has become a widely adopted method for transfer of data between systems across the Internet as well as the Enterprise. Adopting a similar approach to embedded devices is also starting to emerge as personal devices and sensor networks are becoming more common in the industry. This…
Blodgett, David L.; Booth, Nathaniel L.; Kunicki, Thomas C.; Walker, Jordan I.; Viger, Roland J.
2011-01-01
Interest in sharing interdisciplinary environmental modeling results and related data is increasing among scientists. The U.S. Geological Survey Geo Data Portal project enables data sharing by assembling open-standard Web services into an integrated data retrieval and analysis Web application design methodology that streamlines time-consuming and resource-intensive data management tasks. Data-serving Web services allow Web-based processing services to access Internet-available data sources. The Web processing services developed for the project create commonly needed derivatives of data in numerous formats. Coordinate reference system manipulation and spatial statistics calculation components implemented for the Web processing services were confirmed using ArcGIS 9.3.1, a geographic information science software package. Outcomes of the Geo Data Portal project support the rapid development of user interfaces for accessing and manipulating environmental data.
Restful API Architecture Based on Laravel Framework
NASA Astrophysics Data System (ADS)
Chen, Xianjun; Ji, Zhoupeng; Fan, Yu; Zhan, Yongsong
2017-10-01
Web services have become an industry-standard technology for message communication and integration between heterogeneous systems. RESTful APIs have become the mainstream web service development paradigm after SOAP, but how to construct RESTful APIs effectively remains a research hotspot. This paper presents a development model for RESTful API construction based on the PHP language and the LARAVEL framework. The key technical problems that need to be solved during the construction of a RESTful API are discussed, and implementation details based on LARAVEL are given.
A service-based framework for pharmacogenomics data integration
NASA Astrophysics Data System (ADS)
Wang, Kun; Bai, Xiaoying; Li, Jing; Ding, Cong
2010-08-01
Data are central to scientific research and practice. The advance of experimental methods and information retrieval technologies has led to explosive growth of scientific data and databases. However, due to heterogeneity in data formats, structures and semantics, it is hard to integrate the diversified data that grow explosively and to analyse them comprehensively. As more and more public databases become accessible through standard protocols like programmable interfaces and Web portals, Web-based data integration has become a major trend for managing and synthesising data that are stored in distributed locations. Mashup, a Web 2.0 technique, presents a new way to compose content and software from multiple resources. The paper proposes a layered framework for integrating pharmacogenomics data in a service-oriented approach using mashup technology. The framework separates the integration concerns from three perspectives: data, process and Web-based user interface. Each layer encapsulates the heterogeneity issues of one aspect. To facilitate the mapping and convergence of data, an ontology mechanism is introduced to provide consistent conceptual models across different databases and experiment platforms. To support user-interactive and iterative service orchestration, a context model is defined to capture information about users, tasks and services, which can be used for service selection and recommendation during a dynamic service composition process. A prototype system is implemented and case studies are presented to illustrate the promising capabilities of the proposed approach.
Exploring Pre-Service Teachers' Beliefs about Using Web 2.0 Technologies in K-12 Classroom
ERIC Educational Resources Information Center
Sadaf, Ayesha; Newby, Timothy J.; Ertmer, Peggy A.
2012-01-01
This qualitative study explored pre-service teachers' behavioral, normative, and control beliefs regarding their intentions to use Web 2.0 technologies in their future classrooms. The Theory of Planned Behavior (TPB) was used as the theoretical framework (Ajzen, 1991) to understand these beliefs and pre-service teachers' intentions for why they…
A Crowdsensing Based Analytical Framework for Perceptional Degradation of OTT Web Browsing.
Li, Ke; Wang, Hai; Xu, Xiaolong; Du, Yu; Liu, Yuansheng; Ahmad, M Omair
2018-05-15
Service perception analysis is crucial for understanding both user experience and network quality, as well as for the maintenance and optimization of mobile networks. Given the rapid development of the mobile Internet and over-the-top (OTT) services, the conventional network-centric mode of network operation and maintenance is no longer effective. Therefore, developing an approach for evaluating and optimizing users' service perception has become increasingly important. Meanwhile, the development of a new sensing paradigm, mobile crowdsensing (MCS), makes it possible to evaluate and analyze users' OTT service perception from the end-user's point of view rather than from the network side. In this paper, the key factors that impact users' end-to-end OTT web browsing service perception are analyzed by monitoring crowdsourced user perceptions. The intrinsic relationships among the key factors and the interactions between key quality indicators (KQI) are evaluated from several perspectives. Moreover, an analytical framework of perceptional degradation and a detailed algorithm are proposed whose goal is to identify the major factors that impact the perceptional degradation of web browsing service, as well as the significance of their contributions. Finally, a case study is presented to show the effectiveness of the proposed method using a dataset crowdsensed from a large number of smartphone users in a real mobile network. The proposed analytical framework forms a valuable solution for mobile network maintenance and optimization and can help improve web browsing service perception and network quality.
The deegree framework - Spatial Data Infrastructure solution for end-users and developers
NASA Astrophysics Data System (ADS)
Kiehle, Christian; Poth, Andreas
2010-05-01
The open source software framework deegree is a comprehensive implementation of standards as defined by ISO and the Open Geospatial Consortium (OGC). It has been developed with two goals in mind: to provide a uniform framework for implementing Spatial Data Infrastructures (SDI) and to adhere to standards as strictly as possible. Although it is open source software (Lesser GNU Public License, LGPL), deegree has been developed with a business model in mind: providing the general building blocks of SDIs without license fees while offering customization, consulting and tailoring by specialized companies. The core of deegree is a comprehensive Java Application Programming Interface (API) offering access to spatial features, analysis, metadata and coordinate reference systems. As a library, deegree can be, and has been, integrated as a core module inside spatial information systems. It is the reference implementation for several OGC standards and is based on an ISO 19107 geometry model. For end users, deegree is shipped as a web application providing easy-to-set-up components for web mapping and spatial analysis. Since 2000, deegree has been the backbone of many productive SDIs, first and foremost for governmental stakeholders (e.g. the Federal Agency for Cartography and Geodesy in Germany, the Ministry of Housing, Spatial Planning and the Environment in the Netherlands, etc.) as well as for research and development projects, with early adoption of standards, drafts and discussion papers. Besides mature standards like Web Map Service, Web Feature Service and Catalogue Services, deegree also implements rather new standards like the Sensor Observation Service, the Web Processing Service and the Web Coordinate Transformation Service (WCTS). While a robust background in standardization (knowledge and implementation) is a must for consultancy, standard-compliant services and encodings alone do not provide solutions for customers. The added value comes from a sophisticated set of client software for desktop and web environments. One focus is on dedicated client solutions for specific standards like the Web Processing Service and the Web Coordinate Transformation Service. On the other hand, complex geoportal solutions comprising multiple standards and enhanced by components for user management, security and map client functionality show the demanding requirements of real-world solutions. The XPlan-GML standard as defined by the German spatial planning authorities is a good example of how complex real-world requirements can get. XPlan-GML is intended to provide a framework for digital spatial planning documents and requires complex Geography Markup Language (GML) features along with Symbology Encoding (SE), Filter Encoding (FE), Web Map Services (WMS) and Web Feature Services (WFS). This complex infrastructure should be usable by urban and spatial planners and therefore requires a user-friendly graphical interface hiding the complexity of the underlying infrastructure. Based on challenges faced in customer projects, the importance of easy-to-use software components is emphasized. SDI solutions should be built upon ISO/OGC standards but, more importantly, should be user-friendly and support users in spatial data management and analysis.
Description of the U.S. Geological Survey Geo Data Portal data integration framework
Blodgett, David L.; Booth, Nathaniel L.; Kunicki, Thomas C.; Walker, Jordan I.; Lucido, Jessica M.
2012-01-01
The U.S. Geological Survey has developed an open-standard data integration framework for working efficiently and effectively with large collections of climate and other geoscience data. A web interface accesses catalog datasets to find data services. Data resources can then be rendered for mapping and dataset metadata are derived directly from these web services. Algorithm configuration and information needed to retrieve data for processing are passed to a server where all large-volume data access and manipulation takes place. The data integration strategy described here was implemented by leveraging existing free and open source software. Details of the software used are omitted; rather, emphasis is placed on how open-standard web services and data encodings can be used in an architecture that integrates common geographic and atmospheric data.
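A client's first two steps against such open-standard processing services can be sketched with plain OGC WPS 1.0 key-value-pair requests; the endpoint and the process identifier below are invented for illustration.

    import requests

    WPS = "https://gdp.example.org/wps"  # hypothetical Web Processing Service endpoint

    # Discover the processing algorithms the server offers.
    caps = requests.get(WPS, params={"service": "WPS", "version": "1.0.0",
                                     "request": "GetCapabilities"})

    # Ask for the input/output signature of one process
    # (the identifier below is invented for illustration).
    desc = requests.get(WPS, params={"service": "WPS", "version": "1.0.0",
                                     "request": "DescribeProcess",
                                     "identifier": "SummarizeGridOverPolygon"})
    print(desc.text)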
Semantic Search of Web Services
ERIC Educational Resources Information Center
Hao, Ke
2013-01-01
This dissertation addresses semantic search of Web services using natural language processing. We first survey various existing approaches, focusing on the fact that the expensive costs of current semantic annotation frameworks result in limited use of semantic search for large scale applications. We then propose a vector space model based service…
Academic Research Integration System
ERIC Educational Resources Information Center
Surugiu, Iula; Velicano, Manole
2008-01-01
This paper summarizes the results of the research activity done so far regarding enhanced web services and system integration. The objective of the paper is to define the software architecture for a coherent framework and methodology for enhancing existing web services into an integrated system. This document presents the research work that has…
An Automated End-To Multi-Agent Qos Based Architecture for Selection of Geospatial Web Services
NASA Astrophysics Data System (ADS)
Shah, M.; Verma, Y.; Nandakumar, R.
2012-07-01
Over the past decade, Service-Oriented Architecture (SOA) and Web services have gained wide popularity and acceptance from researchers and industries all over the world. SOA makes it easy to build business applications with common services, and it provides benefits such as reduced integration expense, better asset reuse, higher business agility, and reduced business risk. Building a framework for acquiring useful geospatial information for potential users is a crucial problem faced by the GIS domain, and geospatial Web services address this problem. With the help of web service technology, geospatial web services can provide useful geospatial information to potential users in a better way than traditional geographic information systems (GIS). A geospatial Web service is a modular application designed to enable the discovery, access, and chaining of geospatial information and services across the web, which are often both computation- and data-intensive and involve diverse sources of data and complex processing functions. With the proliferation of web services published over the internet, multiple web services may provide similar functionality, but with different non-functional properties. Thus, Quality of Service (QoS) offers a metric to differentiate the services and their service providers. In a quality-driven selection of web services, it is important to consider the non-functional properties of a web service so as to satisfy the constraints or requirements of the end users. The main intent of this paper is to build an automated end-to-end multi-agent based solution that provides the best-fit web service to the service requester based on QoS.
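A common baseline for QoS-driven selection (not this paper's algorithm) is a weighted sum over min-max normalized attributes, inverting "cost" attributes such as response time so that higher scores are always better. A minimal sketch with invented candidate services:

    # Benefit attributes (higher is better): availability.
    # Cost attributes (lower is better): response_time, price.
    candidates = {
        "WMS-A": {"response_time": 120, "availability": 0.99, "price": 0.02},
        "WMS-B": {"response_time": 240, "availability": 0.97, "price": 0.00},
        "WMS-C": {"response_time": 180, "availability": 0.95, "price": 0.01},
    }
    weights = {"response_time": 0.5, "availability": 0.3, "price": 0.2}
    benefit = {"availability"}

    def normalize(attr, value):
        values = [c[attr] for c in candidates.values()]
        lo, hi = min(values), max(values)
        if hi == lo:
            return 1.0
        score = (value - lo) / (hi - lo)
        return score if attr in benefit else 1.0 - score

    def qos_score(svc):
        return sum(weights[a] * normalize(a, v) for a, v in candidates[svc].items())

    best = max(candidates, key=qos_score)
    print(best, round(qos_score(best), 3))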
Scalable web services for the PSIPRED Protein Analysis Workbench.
Buchan, Daniel W A; Minneci, Federico; Nugent, Tim C O; Bryson, Kevin; Jones, David T
2013-07-01
Here, we present the new UCL Bioinformatics Group's PSIPRED Protein Analysis Workbench. The Workbench unites all of our previously available analysis methods into a single web-based framework. The new web portal provides a greatly streamlined user interface with a number of new features to allow users to better explore their results. We offer a number of additional services to enable computationally scalable execution of our prediction methods; these include SOAP and XML-RPC web server access and new HADOOP packages. All software and services are available via the UCL Bioinformatics Group website at http://bioinf.cs.ucl.ac.uk/.
Jiang, Guoqian; Solbrig, Harold R; Chute, Christopher G
2011-01-01
A source of semantically coded Adverse Drug Event (ADE) data can be useful for identifying common phenotypes related to ADEs. We proposed a comprehensive framework for building a standardized ADE knowledge base (called ADEpedia) by combining an ontology-based approach with semantic web technology. The framework comprises four primary modules: 1) an XML2RDF transformation module; 2) a data normalization module based on the NCBO Open Biomedical Annotator; 3) an RDF store based persistence module; and 4) a front-end module based on a Semantic Wiki for review and curation. A prototype is successfully implemented to demonstrate the capability of the system to integrate multiple drug data and ontology resources and open web services for ADE data standardization. A preliminary evaluation is performed to demonstrate the usefulness of the system, including the performance of the NCBO annotator. In conclusion, semantic web technology provides a highly scalable framework for ADE data source integration and standard query services.
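The XML2RDF transformation module's role can be illustrated with a minimal sketch using the rdflib library; the ADE record, namespace and property names are invented and do not reflect ADEpedia's actual schema.

    import xml.etree.ElementTree as ET
    from rdflib import Graph, Literal, Namespace, RDF

    # Hypothetical minimal ADE record; element and property names are invented.
    xml_record = "<ade><drug>warfarin</drug><event>hemorrhage</event></ade>"

    ADE = Namespace("http://example.org/adepedia#")
    g = Graph()

    root = ET.fromstring(xml_record)
    subject = ADE["record/1"]
    g.add((subject, RDF.type, ADE.AdverseDrugEvent))
    g.add((subject, ADE.drug, Literal(root.findtext("drug"))))
    g.add((subject, ADE.event, Literal(root.findtext("event"))))

    print(g.serialize(format="turtle"))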
The EMBL-EBI bioinformatics web and programmatic tools framework.
Li, Weizhong; Cowley, Andrew; Uludag, Mahmut; Gur, Tamer; McWilliam, Hamish; Squizzato, Silvano; Park, Young Mi; Buso, Nicola; Lopez, Rodrigo
2015-07-01
Since 2009 the EMBL-EBI Job Dispatcher framework has provided free access to a range of mainstream sequence analysis applications. These include sequence similarity search services (https://www.ebi.ac.uk/Tools/sss/) such as BLAST, FASTA and PSI-Search, multiple sequence alignment tools (https://www.ebi.ac.uk/Tools/msa/) such as Clustal Omega, MAFFT and T-Coffee, and other sequence analysis tools (https://www.ebi.ac.uk/Tools/pfa/) such as InterProScan. Through these services users can search mainstream sequence databases such as ENA, UniProt and Ensembl Genomes, utilising a uniform web interface or systematically through Web Services interfaces (https://www.ebi.ac.uk/Tools/webservices/) using common programming languages, and obtain enriched results with novel visualisations. Integration with EBI Search (https://www.ebi.ac.uk/ebisearch/) and the dbfetch retrieval service (https://www.ebi.ac.uk/Tools/dbfetch/) further expands the usefulness of the framework. New tools and updates such as NCBI BLAST+, InterProScan 5 and PfamScan, new categories such as RNA analysis tools (https://www.ebi.ac.uk/Tools/rna/), new databases such as ENA non-coding, WormBase ParaSite, Pfam and Rfam, and new workflow methods, together with the retirement of deprecated services, ensure that the framework remains relevant to today's biological community. © The Author(s) 2015. Published by Oxford University Press on behalf of Nucleic Acids Research.
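A typical run/status/result interaction with the Job Dispatcher REST services looks roughly like the sketch below. The URL pattern and status value follow the REST convention documented for these services, but form field names and result-type identifiers vary by tool, so consult the service documentation rather than treating this as authoritative.

    import time
    import requests

    BASE = "https://www.ebi.ac.uk/Tools/services/rest/clustalo"  # Clustal Omega tool

    # Submit a job; the form field names below are typical but tool-specific.
    job_id = requests.post(BASE + "/run", data={
        "email": "user@example.org",
        "sequence": ">a\nMKTAYIAKQR\n>b\nMKTAYIAKQL\n",
    }).text

    # Poll until the job leaves the RUNNING state.
    while requests.get(f"{BASE}/status/{job_id}").text == "RUNNING":
        time.sleep(5)

    # List the available result types, then fetch one of them
    # (the identifier "aln-clustal_num" is an example value).
    print(requests.get(f"{BASE}/resulttypes/{job_id}").text)
    print(requests.get(f"{BASE}/result/{job_id}/aln-clustal_num").text)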
NASA Astrophysics Data System (ADS)
Samadzadegan, F.; Saber, M.; Zahmatkesh, H.; Joze Ghazi Khanlou, H.
2013-09-01
Rapidly discovering, sharing, integrating and applying geospatial information are key issues in the domain of emergency response and disaster management. Due to the distributed nature of data and processing resources in disaster management, utilizing a Service Oriented Architecture (SOA) to take advantage of workflows of services provides efficient, flexible and reliable implementations for encountering different hazardous situations. The implementation specification of the Web Processing Service (WPS) has guided geospatial data processing in a Service Oriented Architecture (SOA) platform to become a widely accepted solution for processing remotely sensed data on the web. This paper presents an architecture design based on OGC web services for an automated workflow for acquiring and processing remotely sensed data, detecting fire and sending notifications to the authorities. A basic architecture and its building blocks for an automated fire detection early warning system are represented using web-based processing of remote sensing imagery utilizing MODIS data. A composition of WPS processes is proposed as a WPS service to extract fire events from MODIS data. Subsequently, the paper highlights the role of WPS as a middleware interface in the domain of geospatial web service technology that can be used to invoke a large variety of geoprocessing operations and to chain other web services as an engine of composition. The applicability of the proposed architecture is evaluated with a real-world fire event detection and notification use case. A GeoPortal client was developed with open-source software to manage data, metadata, processes, and authorities. Investigation of the feasibility and benefits of the proposed framework shows that it can be used for a wide range of geospatial applications, especially disaster management and environmental monitoring.
Global polar geospatial information service retrieval based on search engine and ontology reasoning
Chen, Nengcheng; E, Dongcheng; Di, Liping; Gong, Jianya; Chen, Zeqiang
2007-01-01
In order to improve the access precision of polar geospatial information services on the web, a new methodology for retrieving global spatial information services based on geospatial service search and ontology reasoning is proposed: geospatial service search is used to find coarse services on the web, and ontology reasoning is designed to derive refined services from the coarse ones. The proposed framework includes standardized distributed geospatial web services, a geospatial service search engine, an extended UDDI registry, and a multi-protocol geospatial information service client. Key technologies addressed include service discovery based on a search engine, and service ontology modeling and reasoning in the Antarctic geospatial context. Finally, an Antarctic multi-protocol OWS portal prototype based on the proposed methodology is introduced.
A Semantic Grid Oriented to E-Tourism
NASA Astrophysics Data System (ADS)
Zhang, Xiao Ming
With the increasing complexity of tourism business models and tasks, there is a clear need for a next-generation e-Tourism infrastructure to support flexible automation, integration, computation, storage, and collaboration. Several enabling technologies such as the semantic Web, Web services, agents and grid computing have been applied in different e-Tourism applications; however, there is no unified framework able to integrate all of them. This paper therefore presents a promising e-Tourism framework based on the emerging semantic grid, in which a number of key design issues are discussed, including architecture, ontology structure, semantic reconciliation, service and resource discovery, role-based authorization and intelligent agents. The paper finally provides the implementation of the framework.
A Framework for Transparently Accessing Deep Web Sources
ERIC Educational Resources Information Center
Dragut, Eduard Constantin
2010-01-01
An increasing number of Web sites expose their content via query interfaces, many of them offering the same type of products/services (e.g., flight tickets, car rental/purchasing). They constitute the so-called "Deep Web". Accessing the content on the Deep Web has been a long-standing challenge for the database community. For a user interested in…
Evaluation of Web Accessibility of Consumer Health Information Websites
Zeng, Xiaoming; Parmanto, Bambang
2003-01-01
The objectives of the study are to construct a comprehensive framework for web accessibility evaluation, to evaluate the current status of web accessibility of consumer health information websites and to investigate the relationship between web accessibility and properties of the websites. We selected 108 consumer health information websites from the directory service of a Web search engine. We used Web accessibility specifications to construct a framework for the measurement of Web Accessibility Barriers (WAB) of websites. We found that none of the websites is completely accessible to people with disabilities, but governmental and educational health information websites exhibit better performance on web accessibility than other categories of websites. We also found that the correlation between the WAB score and the popularity of a website is statistically significant. PMID:14728272
BP-Broker use-cases in the UncertWeb framework
NASA Astrophysics Data System (ADS)
Roncella, Roberto; Bigagli, Lorenzo; Schulz, Michael; Stasch, Christoph; Proß, Benjamin; Jones, Richard; Santoro, Mattia
2013-04-01
The UncertWeb framework is a distributed, Web-based Information and Communication Technology (ICT) system to support scientific data modeling in the presence of uncertainty. We designed and prototyped a core component of the UncertWeb framework: the Business Process Broker. The BP-Broker implements several functionalities, such as: discovery of available processes/BPs, preprocessing of a BP into its executable form (EBP), publication of EBPs and their execution through a workflow-engine. According to the Composition-as-a-Service (CaaS) approach, the BP-Broker supports discovery and chaining of modeling resources (and processing resources in general), providing the necessary interoperability services for creating, validating, editing, storing, publishing, and executing scientific workflows. The UncertWeb project targeted several scenarios, which were used to evaluate and test the BP-Broker. The scenarios cover the following environmental application domains: biodiversity and habitat change, land use and policy modeling, local air quality forecasting, and individual activity in the environment. This work reports on the study of a number of use-cases by means of the BP-Broker, namely: - eHabitat use-case: implements a Monte Carlo simulation performed on a deterministic ecological model; an extended use-case supports inter-comparison of model outputs; - FERA use-case: is composed of a set of models for predicting land-use and crop yield response to climatic and economic change; - NILU use-case: is composed of a Probabilistic Air Quality Forecasting model for predicting concentrations of air pollutants; - Albatross use-case: includes two model services for simulating activity-travel patterns of individuals in time and space; - Overlay use-case: integrates the NILU scenario with the Albatross scenario to calculate the exposure to air pollutants of individuals. Our aim was to prove the feasibility of describing composite modeling processes with a high-level, abstract notation (i.e. BPMN 2.0), and delegating the resolution of technical issues (e.g. I/O matching) as much as possible to an external service. The results of the experiments indicate that this approach facilitates the integration of environmental model workflows into the standard geospatial Web Services framework (e.g. the GEOSS Common Infrastructure), mitigating its inherent complexity. The research leading to these results has received funding from the European Community's Seventh Framework Programme (FP7/2007-2013) under Grant Agreement n° 248488.
Information Tailoring Enhancements for Large Scale Social Data
2016-03-15
Work performed within this reporting period included the following tasks: implemented temporal analysis algorithms for advanced analytics in Scraawl; implemented the backend web service design for temporal analysis; created a prototype GUI web service for the Scraawl analytics dashboard; upgraded the Scraawl computational framework to increase …
Design and Implementation of Distributed Crawler System Based on Scrapy
NASA Astrophysics Data System (ADS)
Fan, Yuhao
2018-01-01
At present, some large-scale search engines at home and abroad only provide users with non-customizable search services, and a single-machine web crawler cannot solve such demanding crawling tasks. In this paper, through study of the original Scrapy framework, the framework is improved by combining Scrapy and Redis: a distributed crawler system based on the Scrapy framework is designed and implemented, and the Bloom Filter algorithm is applied to the dupefilter module to reduce memory consumption. The movie information crawled from douban is stored in MongoDB, so that the data can be processed and analyzed. The results show that the distributed crawler system based on the Scrapy framework is more efficient and stable than a single-machine web crawler system.
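The memory argument for a Bloom-filter dupefilter can be illustrated with a self-contained sketch (not the paper's implementation): membership is tested against k hash positions in a fixed bit array, so memory use stays constant no matter how many URLs have been seen, at the cost of occasional false positives. In Scrapy, such a structure could back a custom duplicate filter registered through the DUPEFILTER_CLASS setting.

    import hashlib

    class BloomFilter:
        """Fixed-size probabilistic set for URL de-duplication (illustrative sketch)."""

        def __init__(self, size_bits=1 << 20, num_hashes=5):
            self.size = size_bits
            self.num_hashes = num_hashes
            self.bits = 0  # one big integer used as a bit array

        def _positions(self, url):
            # Derive k bit positions from k salted hashes of the URL.
            for i in range(self.num_hashes):
                digest = hashlib.md5(f"{i}:{url}".encode()).hexdigest()
                yield int(digest, 16) % self.size

        def add(self, url):
            for pos in self._positions(url):
                self.bits |= 1 << pos

        def __contains__(self, url):
            # False positives are possible; false negatives are not.
            return all((self.bits >> pos) & 1 for pos in self._positions(url))

    seen = BloomFilter()
    seen.add("https://movie.douban.com/subject/1292052/")
    print("https://movie.douban.com/subject/1292052/" in seen)  # True
    print("https://movie.douban.com/subject/999/" in seen)      # almost certainly False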
NASA Astrophysics Data System (ADS)
Qin, Rufu; Lin, Liangzhao
2017-06-01
Coastal seiches have become an increasingly important issue in coastal science and present many challenges, particularly when attempting to provide warning services. This paper presents the methodologies, techniques and integrated services adopted for the design and implementation of a Seiches Monitoring and Forecasting Integration Framework (SMAF-IF). The SMAF-IF is an integrated system with different types of sensors and numerical models that incorporates Geographic Information System (GIS) and web techniques, focusing on coastal seiche event detection and early warning in the North Jiangsu shoal, China. The in situ sensors perform automatic and continuous monitoring of the marine environment status, and the numerical models provide the meteorological and physical oceanographic parameter estimates. Model output processing software was developed in C# using ArcGIS Engine functions, which provides the capability of automatically generating visualization maps and warning information. Leveraging the ArcGIS Flex API and ASP.NET web services, a web-based GIS framework was designed to facilitate quasi-real-time data access, interactive visualization and analysis, and the provision of early warning services for end users. The integrated framework proposed in this study enables decision-makers and the public to respond quickly to emergency coastal seiche events and allows easy adaptation to other regional and scientific domains related to real-time monitoring and forecasting.
Failure Analysis for Composition of Web Services Represented as Labeled Transition Systems
NASA Astrophysics Data System (ADS)
Nadkarni, Dinanath; Basu, Samik; Honavar, Vasant; Lutz, Robyn
The Web service composition problem involves the creation of a choreographer that provides the interaction between a set of component services to realize a goal service. Several methods have been proposed and developed to address this problem. In this paper, we consider those scenarios where the composition process may fail due to incomplete specification of goal service requirements or due to the fact that the user is unaware of the functionality provided by the existing component services. In such cases, it is desirable to have a composition algorithm that can provide feedback to the user regarding the cause of failure in the composition process. Such feedback will help guide the user to re-formulate the goal service and iterate the composition process. We propose a failure analysis technique for composition algorithms that views Web service behavior as multiple sequences of input/output events. Our technique identifies the possible cause of composition failure and suggests possible recovery options to the user. We discuss our technique using a simple e-Library Web service in the context of the MoSCoE Web service composition framework.
Li, Jia; Xia, Yunni; Luo, Xin
2014-01-01
OWL-S, one of the most important Semantic Web service ontologies proposed to date, provides a core ontological framework and guidelines for describing the properties and capabilities of web services in an unambiguous, computer-interpretable form. Predicting the reliability of composite service processes specified in OWL-S allows service users to decide whether a process meets their quantitative quality requirements. In this study, we consider the runtime quality of services to be fluctuating and introduce a dynamic framework to predict the runtime reliability of services specified in OWL-S, employing the Non-Markovian stochastic Petri net (NMSPN) and a time series model. The framework includes the following steps: obtaining the historical response time series of individual service components; fitting these series with an autoregressive moving-average (ARMA) model and predicting the future firing rates of service components; mapping the OWL-S process into an NMSPN model; and employing the predicted firing rates as the model input of the NMSPN and calculating the normal completion probability as the reliability estimate. In the case study, a comparison between the static model and our approach based on experimental data is presented, and it is shown that our approach achieves higher prediction accuracy.
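The ARMA-fitting step can be sketched with the statsmodels library: an ARMA(p, q) model is an ARIMA(p, 0, q), fitted to the historical response-time series and used to forecast the next values, which are then inverted into firing rates. The data and model order below are invented for illustration.

    import numpy as np
    from statsmodels.tsa.arima.model import ARIMA

    # Invented historical response times (ms) of one service component.
    history = np.array([118.0, 122, 130, 125, 141, 150, 147, 160, 155, 158,
                        152, 149, 163, 170, 166, 172, 168, 175, 181, 177])

    # An ARMA(p, q) model is an ARIMA model with differencing order d = 0;
    # the order (2, 0, 1) is an arbitrary illustrative choice.
    result = ARIMA(history, order=(2, 0, 1)).fit()
    forecast = result.forecast(steps=5)  # predicted future response times (ms)

    # A predicted response time maps to a transition firing rate (1/time)
    # used as input to the NMSPN reliability model.
    firing_rates = 1000.0 / forecast  # transitions per second
    print(forecast)
    print(firing_rates)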
Web-services-based spatial decision support system to facilitate nuclear waste siting
NASA Astrophysics Data System (ADS)
Huang, L. Xinglai; Sheng, Grant
2006-10-01
The availability of spatial web services enables data sharing among managers, decision and policy makers and other stakeholders in much simpler ways than before, and has subsequently created completely new opportunities in the process of spatial decision making. Though generally designed for a certain problem domain, web-services-based spatial decision support systems (WSDSS) can provide a flexible problem-solving environment to explore the decision problem, understand and refine the problem definition, and generate and evaluate multiple alternatives for a decision. This paper presents a new framework for the development of a web-services-based spatial decision support system. The WSDSS comprises distributed web services that either provide their own functions or serve different geospatial data, and these services may reside on different computers and at different locations. The WSDSS includes six key components, namely: a database management system, a catalog, analysis functions and models, GIS viewers and editors, report generators, and graphical user interfaces. In this study, the architecture of a web-services-based spatial decision support system to facilitate nuclear waste siting is described as an example. The theoretical, conceptual and methodological challenges and issues associated with developing a web-services-based spatial decision support system are described.
NASA Astrophysics Data System (ADS)
Pierce, M. E.; Aktas, M. S.; Aydin, G.; Fox, G. C.; Gadgil, H.; Sayar, A.
2005-12-01
We examine the application of Web Service Architectures and Grid-based distributed computing technologies to geophysics and geo-informatics. We are particularly interested in the integration of Geographical Information System (GIS) services with distributed data mining applications. GIS services provide the general-purpose framework for building archival data services, real-time streaming data services, and map-based visualization services that may be integrated with data mining and other applications through the use of distributed messaging systems and Web Service orchestration tools. Building upon our previous work in these areas, we present our current research efforts. These include fundamental investigations into increasing XML-based Web service performance, supporting real-time data streams, and integrating GIS mapping tools with audio/video collaboration systems for shared display and annotation.
Integrating UIMA annotators in a web-based text processing framework.
Chen, Xiang; Arnold, Corey W
2013-01-01
The Unstructured Information Management Architecture (UIMA) [1] framework is a growing platform for natural language processing (NLP) applications. However, such applications may be difficult for non-technical users to deploy. This project presents a web-based framework that wraps UIMA-based annotator systems into a graphical user interface for researchers and clinicians, and a web service for developers. An annotator that extracts data elements from lung cancer radiology reports is presented to illustrate the use of the system. Annotation results from the web system can be exported to multiple formats for users to utilize in other aspects of their research and workflow. This project demonstrates the benefits of a lay-user interface for complex NLP applications. Efforts such as this can lead to increased interest in and support for NLP work in the clinical domain.
Service-Oriented Architecture for NVO and TeraGrid Computing
NASA Technical Reports Server (NTRS)
Jacob, Joseph; Miller, Craig; Williams, Roy; Steenberg, Conrad; Graham, Matthew
2008-01-01
The National Virtual Observatory (NVO) Extensible Secure Scalable Service Infrastructure (NESSSI) is a Web service architecture and software framework that enables Web-based astronomical data publishing and processing on grid computers such as the National Science Foundation's TeraGrid. Characteristics of this architecture include the following: (1) Services are created, managed, and upgraded by their developers, who are trusted users of computing platforms on which the services are deployed. (2) Service jobs can be initiated by means of Java or Python client programs run on a command line or with Web portals. (3) Access is granted within a graduated security scheme in which the size of a job that can be initiated depends on the level of authentication of the user.
Architecture of the local spatial data infrastructure for regional climate change research
NASA Astrophysics Data System (ADS)
Titov, Alexander; Gordov, Evgeny
2013-04-01
Georeferenced datasets (meteorological databases, modeling and reanalysis results, etc.) are actively used in modeling and analysis of climate change at various spatial and temporal scales. Due to the inherent heterogeneity of environmental datasets, as well as their size, which can reach tens of terabytes for a single dataset, studies in the area of climate and environmental change require special software support based on the SDI approach. A dedicated architecture of a local spatial data infrastructure aimed at regional climate change analysis using modern web mapping technologies is presented. A geoportal is a key element of any SDI, allowing searching of geoinformation resources (datasets and services) using metadata catalogs, producing geospatial data selections by their parameters (data access functionality), and managing services and applications for cartographical visualization. It should be noted that, due to objective reasons such as large dataset volumes, the complexity of the data models used, and syntactic and semantic differences between datasets, the development of environmental geodata access, processing and visualization services turns out to be quite a complex task. These circumstances were taken into account while developing the architecture of the local spatial data infrastructure as a universal framework providing geodata services. The architecture presented includes: 1. A model of storing big sets of regional georeferenced data that is effective in terms of search, access, retrieval and subsequent statistical processing, allowing in particular the storage of frequently used values (such as monthly and annual climate change indices), thus providing different temporal views of the datasets; 2. The general architecture of the corresponding software components handling geospatial datasets within the storage model; 3. A metadata catalog describing in detail, using the ISO 19115 and CF-convention standards, the datasets used in climate research, as a basic element of the spatial data infrastructure, published according to the OGC CSW (Catalog Service Web) specification; 4. Computational and mapping web services to work with geospatial datasets based on OWS (OGC Web Services) standards: WMS, WFS, WPS; 5. A geoportal as a key element of the thematic regional spatial data infrastructure, also providing a software framework for dedicated web application development. To realize web mapping services, GeoServer software is used, since it provides a natural WPS implementation as a separate software module. To provide geospatial metadata services, the GeoNetwork Opensource (http://geonetwork-opensource.org) product is planned to be used, for it supports the ISO 19115/ISO 19119/ISO 19139 metadata standards as well as the ISO CSW 2.0 profile for both client and server. To implement thematic applications based on geospatial web services within the framework of the local SDI geoportal, the following open source software has been selected: 1. the OpenLayers JavaScript library, providing basic web mapping functionality for a thin client such as a web browser; 2. the GeoExt/ExtJS JavaScript libraries for building client-side web applications working with geodata services. The web interface developed will be similar to the interface of popular desktop GIS applications such as uDig and QuantumGIS. The work is partially supported by RF Ministry of Education and Science grant 8345, SB RAS Program VIII.80.2.1 and IP 131.
SAS- Semantic Annotation Service for Geoscience resources on the web
NASA Astrophysics Data System (ADS)
Elag, M.; Kumar, P.; Marini, L.; Li, R.; Jiang, P.
2015-12-01
There is a growing need for increased integration across the data and model resources that are disseminated on the web to advance their reuse across different earth science applications. Meaningful reuse of resources requires semantic metadata to realize the semantic web vision of pragmatic linkage and integration among resources. Semantic metadata associates standard metadata with resources to turn them into semantically-enabled resources on the web. However, the lack of a common standardized metadata framework, as well as the uncoordinated use of metadata fields across different geo-information systems, has led to a situation in which standards and related Standard Names abound. To address this need, we have designed SAS to provide a bridge between the core ontologies required to annotate resources and information systems, in order to enable queries and analysis over annotations from a single (web) environment. SAS is one of the services provided by the Geosemantic framework, a decentralized semantic framework that supports integration between models and data and allows semantically heterogeneous resources to interact with minimum human intervention. Here we present the design of SAS and demonstrate its application for annotating data and models. First we describe how predicates and their attributes are extracted from standards and ingested into the knowledge base of the Geosemantic framework. Then we illustrate the application of SAS in annotating data managed by SEAD and annotating simulation models that have web interfaces. SAS is a step in a broader approach to raise the quality of geoscience data and models published on the web and to allow users to better search, access, and use existing resources based on standard vocabularies that are encoded and published using semantic technologies.
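To make the annotation step concrete, here is a minimal RDF sketch in Python using rdflib; the namespaces, predicate and standard name below are invented placeholders, not SAS's actual vocabulary.

```python
from rdflib import Graph, Literal, Namespace, URIRef
from rdflib.namespace import RDF, RDFS

# Hypothetical namespaces standing in for the framework's knowledge base
GEO = Namespace("http://example.org/geosemantic#")
CSN = Namespace("http://example.org/standard-names#")

g = Graph()
dataset = URIRef("http://example.org/data/discharge-2015")
g.add((dataset, RDF.type, GEO.Dataset))
g.add((dataset, GEO.hasStandardName, CSN.channel_water__volume_flow_rate))
g.add((dataset, RDFS.label, Literal("Daily stream discharge, 2015")))

print(g.serialize(format="turtle"))  # the annotation as Turtle triples
```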
Combining Domain-driven Design and Mashups for Service Development
NASA Astrophysics Data System (ADS)
Iglesias, Carlos A.; Fernández-Villamor, José Ignacio; Del Pozo, David; Garulli, Luca; García, Boni
This chapter presents the Romulus project approach to service development using Java-based web technologies. Romulus aims at improving the productivity of service development by providing a tool-supported model to conceive Java-based web applications. This model follows a Domain-Driven Design approach, which states that the primary focus of software projects should be the core domain and domain logic. Romulus proposes a tool-supported model, the Roma Metaframework, that provides an abstraction layer on top of existing web frameworks and automates application generation from the domain model. This metaframework follows an object-centric approach and complements Domain-Driven Design by identifying the most common cross-cutting concerns (security, service, view, ...) of web applications. The metaframework uses annotations for enriching the domain model with these cross-cutting concerns, so-called aspects. In addition, the chapter presents the usage of mashup technology in the metaframework for service composition, using the web mashup editor MyCocktail. This approach is applied to a scenario from the Mobile Phone Service Portability case study for the development of a new service.
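Romulus itself does this with Java annotations; purely to illustrate the general idea of enriching a domain model with cross-cutting aspect metadata that a framework can later read, here is a tiny Python sketch using a hypothetical decorator (not the Roma Metaframework's real API).

```python
# Hypothetical 'aspect' decorator attaching cross-cutting metadata to a class
def aspect(**concerns):
    def wrap(cls):
        cls._aspects = concerns  # metadata a framework could introspect later
        return cls
    return wrap

@aspect(view="form", security="role:customer", service="rest")
class PhoneContract:
    def __init__(self, number, operator):
        self.number = number
        self.operator = operator

print(PhoneContract._aspects)  # {'view': 'form', 'security': ..., 'service': ...}
```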
Integrating geo web services for a user driven exploratory analysis
NASA Astrophysics Data System (ADS)
Moncrieff, Simon; Turdukulov, Ulanbek; Gulland, Elizabeth-Kate
2016-04-01
In data exploration, several online data sources may need to be dynamically aggregated or summarised over a spatial region, time interval, or set of attributes. With respect to thematic data, web services are mainly used to present results, leading to a supplier-driven service model that limits exploration of the data. In this paper we propose a user-need-driven service model based on geo web processing services. The aim of the framework is to provide a method for scalable and interactive access to various geographic data sources on the web. The architecture combines a data query, a processing technique and a visualisation methodology to rapidly integrate and visually summarise properties of a dataset. We illustrate the environment on a health-related use case that derives the Age Standardised Rate - a dynamic index that requires integrating existing interoperable web services of demographic data with standalone non-spatial secure database servers used in health research. Although the example is specific to the health field, the architecture and the proposed approach are relevant and applicable to other fields that require integration and visualisation of geo datasets from various web services, and thus, we believe, the approach is generic.
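A client-side sketch of the kind of processing-service call involved, using OWSLib's WPS client in Python; the endpoint, process identifier and input references are hypothetical stand-ins for an Age Standardised Rate process, not the paper's actual service.

```python
from owslib.wps import WebProcessingService, monitorExecution

# Hypothetical WPS endpoint and process for deriving an Age Standardised Rate
wps = WebProcessingService('http://example.org/wps')
inputs = [('cases', 'http://example.org/wfs/cancer_cases'),        # hypothetical
          ('population', 'http://example.org/wfs/demographics')]   # hypothetical
execution = wps.execute('demo:AgeStandardisedRate', inputs, output='asr')
monitorExecution(execution)   # poll until the asynchronous process finishes
print(execution.status)
```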
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dowson, Scott T.; Bruce, Joseph R.; Best, Daniel M.
2009-04-14
This paper presents key components of the Law Enforcement Information Framework (LEIF) that provides communications, situational awareness, and visual analytics tools in a service-oriented architecture supporting web-based desktop and handheld device users. LEIF simplifies interfaces and visualizations of well-established visual analytical techniques to improve usability. Advanced analytics capability is maintained by enhancing the underlying processing to support the new interface. LEIF development is driven by real-world user feedback gathered through deployments at three operational law enforcement organizations in the US. LEIF incorporates a robust information ingest pipeline supporting a wide variety of information formats. LEIF also insulates interface and analytical components from information sources, making it easier to adapt the framework for many different data repositories.
Enhanced reproducibility of SADI web service workflows with Galaxy and Docker.
Aranguren, Mikel Egaña; Wilkinson, Mark D
2015-01-01
Semantic Web technologies have been widely applied in the life sciences, for example by data providers such as OpenLifeData and through web services frameworks such as SADI. The recently reported OpenLifeData2SADI project offers access to the vast OpenLifeData data store through SADI services. This article describes how to merge data retrieved from OpenLifeData2SADI with other SADI services using the Galaxy bioinformatics analysis platform, thus making this semantic data more amenable to complex analyses. This is demonstrated using a working example, which is made distributable and reproducible through a Docker image that includes SADI tools, along with the data and workflows that constitute the demonstration. The combination of Galaxy and Docker offers a solution for faithfully reproducing and sharing complex data retrieval and analysis workflows based on the SADI Semantic web service design patterns.
Reliability Prediction of Ontology-Based Service Compositions Using Petri Net and Time Series Models
Li, Jia; Xia, Yunni; Luo, Xin
2014-01-01
OWL-S, one of the most important Semantic Web service ontologies proposed to date, provides a core ontological framework and guidelines for describing the properties and capabilities of web services in an unambiguous, computer-interpretable form. Predicting the reliability of composite service processes specified in OWL-S allows service users to decide whether the process meets the quantitative quality requirement. In this study, we consider the runtime quality of services to be fluctuating and introduce a dynamic framework to predict the runtime reliability of services specified in OWL-S, employing the Non-Markovian stochastic Petri net (NMSPN) and a time series model. The framework includes the following steps: obtaining the historical response-time series of individual service components; fitting these series with an autoregressive moving average (ARMA) model and predicting the future firing rates of service components; mapping the OWL-S process into an NMSPN model; and employing the predicted firing rates as the model input of the NMSPN and calculating the normal completion probability as the reliability estimate. In the case study, a comparison between the static model and our approach based on experimental data is presented, and it is shown that our approach achieves higher prediction accuracy. PMID:24688429
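The ARMA fitting step can be sketched in Python with statsmodels; the response-time series below is fabricated for illustration, and the mapping from forecast response times to an NMSPN firing rate is shown only in the simplest possible form, not as the paper's exact procedure.

```python
import numpy as np
from statsmodels.tsa.arima.model import ARIMA

# Hypothetical historical response times (seconds) of one service component
response_times = np.array([0.82, 0.79, 0.91, 0.85, 0.88, 0.93, 0.90, 0.86,
                           0.95, 0.89, 0.92, 0.97, 0.94, 0.91, 0.99, 0.96])

# ARMA(1,1) is ARIMA with d=0; fit and forecast the next 5 response times
model = ARIMA(response_times, order=(1, 0, 1)).fit()
forecast = model.forecast(steps=5)

# One simple reading: a firing rate as the reciprocal of mean response time
print("forecast response times:", forecast)
print("predicted firing rate:", 1.0 / forecast.mean())
```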
Saleh, Mohammed A.; Abdul Manaf, Azizah
2015-01-01
The growth of web technology has brought convenience to our life, since it has become the most important communication channel. However, now this merit is threatened by complicated network-based attacks, such as denial of service (DoS) and distributed denial of service (DDoS) attacks. Despite many researchers' efforts, no optimal solution that addresses all sorts of HTTP DoS/DDoS attacks is on offer. Therefore, this research aims to fix this gap by designing an alternative solution called a flexible, collaborative, multilayer, DDoS prevention framework (FCMDPF). The innovative design of the FCMDPF framework handles all aspects of HTTP-based DoS/DDoS attacks through the following three subsequent framework's schemes (layers). Firstly, an outer blocking (OB) scheme blocks attacking IP source if it is listed on the black list table. Secondly, the service traceback oriented architecture (STBOA) scheme is to validate whether the incoming request is launched by a human or by an automated tool. Then, it traces back the true attacking IP source. Thirdly, the flexible advanced entropy based (FAEB) scheme is to eliminate high rate DDoS (HR-DDoS) and flash crowd (FC) attacks. Compared to the previous researches, our framework's design provides an efficient protection for web applications against all sorts of DoS/DDoS attacks. PMID:26065015
Web servers and services for electrostatics calculations with APBS and PDB2PQR
Unni, Samir; Huang, Yong; Hanson, Robert; Tobias, Malcolm; Krishnan, Sriram; Li, Wilfred W.; Nielsen, Jens E.; Baker, Nathan A.
2011-01-01
APBS and PDB2PQR are widely utilized free software packages for biomolecular electrostatics calculations. Using the Opal toolkit, we have developed a Web services framework for these software packages that enables the use of APBS and PDB2PQR by users who do not have local access to the necessary amount of computational capabilities. This not only increases accessibility of the software to a wider range of scientists, educators, and students but it also increases the availability of electrostatics calculations on portable computing platforms. Users can access this new functionality in two ways. First, an Opal-enabled version of APBS is provided in current distributions, available freely on the web. Second, we have extended the PDB2PQR web server to provide an interface for the setup, execution, and visualization electrostatics potentials as calculated by APBS. This web interface also uses the Opal framework which ensures the scalability needed to support the large APBS user community. Both of these resources are available from the APBS/PDB2PQR website: http://www.poissonboltzmann.org/. PMID:21425296
NASA Astrophysics Data System (ADS)
Gopu, Arvind; Hayashi, Soichi; Young, Michael D.; Harbeck, Daniel R.; Boroson, Todd; Liu, Wilson; Kotulla, Ralf; Shaw, Richard; Henschel, Robert; Rajagopal, Jayadev; Stobie, Elizabeth; Knezek, Patricia; Martin, R. Pierre; Archbold, Kevin
2014-07-01
The One Degree Imager-Portal, Pipeline, and Archive (ODI-PPA) is a web science gateway that provides astronomers a modern web interface that acts as a single point of access to their data, and rich computational and visualization capabilities. Its goal is to support scientists in handling complex data sets, and to enhance WIYN Observatory's scientific productivity beyond data acquisition on its 3.5m telescope. ODI-PPA is designed, with periodic user feedback, to be a compute archive that has built-in frameworks including: (1) Collections that allow an astronomer to create logical collations of data products intended for publication, further research, instructional purposes, or to execute data processing tasks (2) Image Explorer and Source Explorer, which together enable real-time interactive visual analysis of massive astronomical data products within an HTML5 capable web browser, and overlaid standard catalog and Source Extractor-generated source markers (3) Workflow framework which enables rapid integration of data processing pipelines on an associated compute cluster and users to request such pipelines to be executed on their data via custom user interfaces. ODI-PPA is made up of several light-weight services connected by a message bus; the web portal built using Twitter/Bootstrap, AngularJS and jQuery JavaScript libraries, and backend services written in PHP (using the Zend framework) and Python; it leverages supercomputing and storage resources at Indiana University. ODI-PPA is designed to be reconfigurable for use in other science domains with large and complex datasets, including an ongoing offshoot project for electron microscopy data.
Open semantic annotation of scientific publications using DOMEO.
Ciccarese, Paolo; Ocana, Marco; Clark, Tim
2012-04-24
Our group has developed a useful shared software framework for performing, versioning, sharing and viewing Web annotations of a number of kinds, using an open representation model. The Domeo Annotation Tool was developed in tandem with this open model, the Annotation Ontology (AO). Development of both the Annotation Framework and the open model was driven by requirements of several different types of alpha users, including bench scientists and biomedical curators from university research labs, online scientific communities, publishing and pharmaceutical companies.Several use cases were incrementally implemented by the toolkit. These use cases in biomedical communications include personal note-taking, group document annotation, semantic tagging, claim-evidence-context extraction, reagent tagging, and curation of textmining results from entity extraction algorithms. We report on the Domeo user interface here. Domeo has been deployed in beta release as part of the NIH Neuroscience Information Framework (NIF, http://www.neuinfo.org) and is scheduled for production deployment in the NIF's next full release.Future papers will describe other aspects of this work in detail, including Annotation Framework Services and components for integrating with external textmining services, such as the NCBO Annotator web service, and with other textmining applications using the Apache UIMA framework.
The Open Ed Tech: Never Mind the Edupunks; or, The Great Web 2.0 Swindle
ERIC Educational Resources Information Center
Lamb, Brian; Groom, Jim
2010-01-01
Has the wave of the open web crested? What does "open educational technology" look like, and does it stand for anything? In this article, the authors discuss the rise of open educational technology. The present range of Web 2.0 service providers offers a self-evident strategic technology framework. Without much effort, online teachers and learners…
Presence Management and Merging Presence Information for NGN Services
NASA Astrophysics Data System (ADS)
Schumann, Sebastian; Mikoczy, Eugen; Podhradsky, Pavol; Muruchi, Feliciano; Maruschke, Michael
This paper describes an approach for interworking scenarios between Session Initiation Protocol (SIP)-based and non-SIP-based frameworks (e.g. web services) in the case of the presence management service. The characteristics of the concept of centralized presence management are introduced.
A Security Architecture for Grid-enabling OGC Web Services
NASA Astrophysics Data System (ADS)
Angelini, Valerio; Petronzio, Luca
2010-05-01
In the proposed presentation we describe an architectural solution for enabling secure access to Grids, and possibly other large-scale on-demand processing infrastructures, through OGC (Open Geospatial Consortium) Web Services (OWS). This work has been carried out in the context of the security thread of the G-OWS Working Group. G-OWS (gLite enablement of OGC Web Services) is an international open initiative started in 2008 by the European CYCLOPS, GENESI-DR, and DORII Project Consortia in order to collect and coordinate experiences in the enablement of OWSs on top of the gLite Grid middleware. G-OWS investigates the development of Spatial Data and Information Infrastructures (SDI and SII) based on Grid/Cloud capacity in order to enable Earth Science applications and tools. Concerning security issues, the integration of OWS-compliant infrastructures and gLite Grids needs to address relevant challenges, due to their respective design principles. In fact, OWSs are part of a Web-based architecture that delegates security aspects to other specifications, whereas the gLite middleware implements the Grid paradigm with a strong security model (the gLite Grid Security Infrastructure: GSI). In our work we propose a Security Architectural Framework allowing the seamless use of Grid-enabled OGC Web Services through the federation of existing security systems (mostly web-based) with the gLite GSI. This is made possible by mediating between different security realms, whose mutual trust is established in advance during the deployment of the system itself. Our architecture is composed of three different security tiers: the user's security system, a specific G-OWS security system, and the gLite Grid Security Infrastructure. Applying the separation-of-concerns principle, each of these tiers is responsible for controlling access to a well-defined resource set, respectively: the user's organization resources, the geospatial resources and services, and the Grid resources. While the gLite middleware is tied to a consolidated security approach based on X.509 certificates, our system is able to support different kinds of user security infrastructures. Our central component, the G-OWS Security Framework, is based on the OASIS WS-Trust specifications and on the OGC GeoRM architectural framework. This makes it possible to satisfy advanced requirements such as the enforcement of specific geospatial policies and complex secure web service chained requests. The typical use case is represented by a scientist belonging to a given organization who issues a request to a G-OWS Grid-enabled Web Service. The system initially asks the user to authenticate to his/her organization's security system and, after verification of the user's security credentials, translates the user's digital identity into a G-OWS identity. This identity is linked to a set of attributes describing the user's access rights to the G-OWS services and resources. Inside the G-OWS security system, access restrictions are applied making use of the enhanced geospatial capabilities specified by OGC GeoXACML. If the required action needs to make use of the Grid environment, the system checks whether the user is entitled to access a Grid infrastructure. In that case his/her identity is translated into a temporary Grid security token using the Short Lived Credential Services (IGTF standard).
In our case, for the specific gLite Grid infrastructure, some information (VOMS attributes) is plugged into the Grid security token to grant access to the user's Virtual Organization Grid resources. The resulting token is used to submit the request to the Grid and is also used by the various gLite middleware elements to verify the user's grants. Based on the presented framework, the G-OWS Security Working Group developed a prototype enabling the execution of OGC Web Services on the EGEE Production Grid through federation with a Shibboleth-based security infrastructure. Future plans aim to integrate other Web authentication services such as OpenID, Kerberos and WS-Federation.
Web scraping technologies in an API world.
Glez-Peña, Daniel; Lourenço, Anália; López-Fernández, Hugo; Reboiro-Jato, Miguel; Fdez-Riverola, Florentino
2014-09-01
Web services are the de facto standard in biomedical data integration. However, there are data integration scenarios that cannot be fully covered by Web services. A number of Web databases and tools do not support Web services, and existing Web services do not cover all possible user data demands. As a consequence, Web data scraping, one of the oldest techniques for extracting Web contents, is still in a position to offer a valid and valuable service to a wide range of bioinformatics applications, ranging from simple extraction robots to online meta-servers. This article reviews existing scraping frameworks and tools, identifying their strengths and limitations in terms of extraction capabilities. The main focus is set on showing how straightforward it is today to set up a data scraping pipeline, with minimal programming effort, and answer a number of practical needs. For exemplification purposes, we introduce a biomedical data extraction scenario where the desired data sources, well known in clinical microbiology and similar domains, do not offer programmatic interfaces yet. Moreover, we describe the operation of WhichGenes and PathJam, two bioinformatics meta-servers that use scraping as a means to cope with gene set enrichment analysis.
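As an illustration of how little code such a pipeline needs today, here is a minimal Python sketch using requests and BeautifulSoup; the URL, table class and column layout are hypothetical, not any real database's schema.

```python
import requests
from bs4 import BeautifulSoup

# Hypothetical target page; the CSS selector below is illustrative only
url = "http://example.org/strain-database/listing"
html = requests.get(url, timeout=30).text
soup = BeautifulSoup(html, "html.parser")

records = []
for row in soup.select("table.results tr")[1:]:  # skip the header row
    cells = [td.get_text(strip=True) for td in row.find_all("td")]
    if cells:
        records.append(cells)

print(records[:5])  # first few extracted rows
```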
NASA Astrophysics Data System (ADS)
Mobasheri, A.; Vahidi, H.; Guan, Q.
2014-04-01
In developing countries, the number of experts and students in the geo-informatics domain is very limited compared to the experts and students of sciences that could benefit from geo-informatics. In this research, we study the possibility of providing an online education system for teaching geo-informatics at the undergraduate level. The hypothesis is that in developing countries, such as Iran, a web-based geo-education system can greatly improve the quantity and quality of knowledge of students at the undergraduate level, which is an important step to be made in regard to the famous "Geo for all" motto. As a technology for conducting natural and social studies, geo-informatics offers new ways of viewing, representing and analysing information for transformative learning and teaching. Therefore, we design and present a conceptual framework for an education system and elaborate its components, as well as the free and open-source services and software packages that could be used in this framework for a specific case study: the Web GIS course. The goal of the proposed framework is to develop experimental GI-services in a service-oriented platform for education purposes. Finally, the paper ends with concluding remarks and some tips for future research directions.
Seahawk: moving beyond HTML in Web-based bioinformatics analysis.
Gordon, Paul M K; Sensen, Christoph W
2007-06-18
Traditional HTML interfaces for input to and output from Bioinformatics analysis on the Web are highly variable in style, content and data formats. Combining multiple analyses can therefore be an onerous task for biologists. Semantic Web Services allow automated discovery of conceptual links between remote data analysis servers. A shared data ontology and service discovery/execution framework is particularly attractive in Bioinformatics, where data and services are often both disparate and distributed. Instead of biologists copying, pasting and reformatting data between various Web sites, Semantic Web Service protocols such as MOBY-S hold out the promise of seamlessly integrating multi-step analysis. We have developed a program (Seahawk) that allows biologists to intuitively and seamlessly chain together Web Services using a data-centric, rather than the customary service-centric approach. The approach is illustrated with a ferredoxin mutation analysis. Seahawk concentrates on lowering entry barriers for biologists: no prior knowledge of the data ontology, or relevant services is required. In stark contrast to other MOBY-S clients, in Seahawk users simply load Web pages and text files they already work with. Underlying the familiar Web-browser interaction is an XML data engine based on extensible XSLT style sheets, regular expressions, and XPath statements which import existing user data into the MOBY-S format. As an easily accessible applet, Seahawk moves beyond standard Web browser interaction, providing mechanisms for the biologist to concentrate on the analytical task rather than on the technical details of data formats and Web forms. As the MOBY-S protocol nears a 1.0 specification, we expect more biologists to adopt these new semantic-oriented ways of doing Web-based analysis, which empower them to do more complicated, ad hoc analysis workflow creation without the assistance of a programmer.
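The XML-engine idea - XSLT plus XPath lifting existing user data into a service format - can be sketched in Python with lxml; the stylesheet and element names below are illustrative only, not Seahawk's actual MOBY-S mappings.

```python
from lxml import etree

# Illustrative only: a trivial stylesheet extracting accession IDs from user XML,
# loosely mimicking an XSLT-driven import engine
xslt_root = etree.XML(b"""\
<xsl:stylesheet version="1.0"
                xmlns:xsl="http://www.w3.org/1999/XSL/Transform">
  <xsl:template match="/">
    <ids>
      <xsl:for-each select="//sequence">
        <id><xsl:value-of select="@accession"/></id>
      </xsl:for-each>
    </ids>
  </xsl:template>
</xsl:stylesheet>
""")
transform = etree.XSLT(xslt_root)
doc = etree.XML(b'<data><sequence accession="P12345"/></data>')
print(transform(doc))  # <ids><id>P12345</id></ids>
```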
Benchmark Intelligent Agent Systems for Distributed Battle Tracking
2008-06-20
… services in the military and other domains; each entity in the benchmark system exposes a standard set of Web services. Jess (Java Expert Shell System) is a rule engine for the Java platform and an interpreter for the Jess rule language. It is used here to implement policies that maintain the distributed battle tracking system (DBTS), maintaining distributed situation awareness. The Java Agent DEvelopment (JADE) framework is a software framework …
Cryptography for a High-Assurance Web-Based Enterprise
2013-10-01
2. Other cryptographic services - Java provides many cryptographic services through the Java Cryptography Architecture (JCA) framework. …
Executing SADI services in Galaxy.
Aranguren, Mikel Egaña; González, Alejandro Rodríguez; Wilkinson, Mark D
2014-01-01
In recent years Galaxy has become a popular workflow management system in bioinformatics, due to its ease of installation, use and extension. The availability of Semantic Web-oriented tools in Galaxy, however, is limited. This is also the case for Semantic Web Services such as those provided by the SADI project, i.e. services that consume and produce RDF. Here we present SADI-Galaxy, a tool generator that deploys selected SADI services as typical Galaxy tools. SADI-Galaxy is a Galaxy tool generator: through SADI-Galaxy, any SADI-compliant service becomes a Galaxy tool that can participate in other outstanding features of Galaxy such as data storage, history, workflow creation, and publication. Galaxy can also be used to execute and combine SADI services as it does with other Galaxy tools. Finally, we have semi-automated the packing and unpacking of data into RDF such that other Galaxy tools can easily be combined with SADI services, plugging the rich SADI Semantic Web Service environment into the popular Galaxy ecosystem. SADI-Galaxy bridges the gap between Galaxy, an easy-to-use but "static" workflow system with a wide user base, and SADI, a sophisticated, semantic, discovery-based framework for Web Services, thus benefiting both user communities.
GeoNetwork powered GI-cat: a geoportal hybrid solution
NASA Astrophysics Data System (ADS)
Baldini, Alessio; Boldrini, Enrico; Santoro, Mattia; Mazzetti, Paolo
2010-05-01
For the aim of setting up a Spatial Data Infrastructure (SDI), the creation of a system for metadata management and discovery plays a fundamental role. An effective solution is the use of a geoportal (e.g. the FAO/ESA geoportal), which has the important benefit of being accessible from a web browser. With this work we present a solution based on integrating two of the available frameworks: GeoNetwork and GI-cat. GeoNetwork is open-source software designed to improve the accessibility of a wide variety of data together with the associated ancillary information (metadata), at different scales and from multidisciplinary sources; data are organized and documented in a standard and consistent way. GeoNetwork implements both the Portal and Catalog components of a Spatial Data Infrastructure (SDI) as defined in the OGC Reference Architecture. It provides tools for managing and publishing metadata on spatial data and related services. GeoNetwork allows harvesting of various types of web data sources, e.g. OGC Web Services (CSW, WCS, WMS). GI-cat is a distributed catalog based on a service-oriented framework of modular components and can be customized and tailored to support different deployment scenarios. It can federate a multiplicity of catalog services, as well as inventory and access services, in order to discover and access heterogeneous ESS resources. The federated resources are exposed by GI-cat through several standard catalog interfaces (e.g. OGC CSW AP ISO, OpenSearch, etc.) and by the GI-cat extended interface. Specific components, called Accessors, implement mediation services for interfacing heterogeneous service providers, each of which exposes a specific standard specification. These mediating components resolve the providers' data model multiplicity by mapping the models onto the GI-cat internal data model, which implements the ISO 19115 Core profile. Accessors also implement the query protocol mapping: they translate query requests expressed according to the interface protocols exposed by GI-cat into the multiple query dialects spoken by the resource service providers. Currently, a number of well-accepted catalog and inventory services are supported, including several OGC Web Services, THREDDS Data Server, SeaDataNet Common Data Index, GBIF and OpenSearch engines. A GeoNetwork-powered GI-cat has been developed in order to exploit the best of the two frameworks. The new system uses a modified version of the GeoNetwork web interface in order to add the capability of querying the specified GI-cat catalog and not only the GeoNetwork internal database. The resulting system consists of a geoportal in which GI-cat plays the role of the search engine. This new system allows the query to be distributed over the different types of data sources linked to a GI-cat. The metadata results of the query are then visualized by the GeoNetwork web interface. This configuration was tested in the framework of GIIDA, a project of the Italian National Research Council (CNR) focused on data accessibility and interoperability. A second advantage of this solution is achieved by setting up a GeoNetwork catalog among the accessors of the GI-cat instance. Such a configuration will in turn allow GI-cat to run queries against the internal GeoNetwork database. This makes both the harvesting and metadata-editing functionalities provided by GeoNetwork and the distributed search functionality of GI-cat available in a consistent way through the same web interface.
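From the client side, the kind of distributed catalog query that the portal issues can be sketched with OWSLib's CSW client in Python; the endpoint and the search term are hypothetical, not GIIDA's actual configuration.

```python
from owslib.csw import CatalogueServiceWeb
from owslib.fes import PropertyIsLike

# Hypothetical GI-cat CSW endpoint federated behind the GeoNetwork portal
csw = CatalogueServiceWeb('http://example.org/gi-cat/services/csw')
query = PropertyIsLike('csw:AnyText', '%temperature%')   # illustrative full-text filter
csw.getrecords2(constraints=[query], maxrecords=10)

for rec_id, rec in csw.records.items():
    print(rec_id, rec.title)
```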
Jin, Wenquan; Kim, DoHyeun
2018-05-26
The Internet of Things comprises heterogeneous devices, applications, and platforms using multiple communication technologies to connect to the Internet in order to provide seamless services ubiquitously. With the requirement of developing Internet of Things products, many protocols, program libraries, frameworks, and standard specifications have been proposed. Therefore, providing a consistent interface to access services from those environments is difficult. Moreover, bridging existing web services to sensor and actuator networks is also important for providing Internet of Things services in various industry domains. In this paper, an Internet of Things proxy is proposed that is based on virtual resources to bridge heterogeneous web services from the Internet to the Internet of Things network. The proxy enables clients to have transparent access to Internet of Things devices and web services in the network. The proxy comprises a server and a client that forward messages between different communication environments using the virtual resources, which include a server for the message sender and a client for the message receiver. We design the proxy for the Open Connectivity Foundation network, where the virtual resources are discovered by clients as Open Connectivity Foundation resources. The virtual resources represent resources that expose services on the Internet by web service providers. Although the services are provided by web service providers from the Internet, the client can access them using the consistent communication protocol of the Open Connectivity Foundation network. For discovering the resources needed to access services, the client also uses the consistent discovery interface to discover Open Connectivity Foundation devices and virtual resources.
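A toy sketch of the virtual-resource idea: an HTTP proxy that exposes an OCF-style resource path and forwards requests to a mapped external web service. The framework choice (Flask), the resource path and the upstream endpoint are all hypothetical, chosen only to illustrate the forwarding pattern.

```python
from flask import Flask, jsonify
import requests

app = Flask(__name__)

# Hypothetical mapping from virtual-resource URIs to external web services
VIRTUAL_RESOURCES = {
    "/oic/weather": "http://example.org/api/weather",
}

@app.route("/oic/weather")
def weather_proxy():
    # Forward the OCF-style request to the mapped web service and relay the reply
    upstream = requests.get(VIRTUAL_RESOURCES["/oic/weather"], timeout=10)
    return jsonify(upstream.json())

if __name__ == "__main__":
    app.run(port=8080)
```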
NASA Astrophysics Data System (ADS)
Elag, M.; Kumar, P.
2016-12-01
Hydrologists today have to integrate resources such as data and models, which originate and reside in multiple autonomous and heterogeneous repositories on the Web. Several resource management systems have emerged within geoscience communities for sharing long-tail data, which are collected by individuals or small research groups, and long-tail models, which are developed by scientists or small modeling communities. While these systems have increased the availability of resources within geoscience domains, deficiencies remain due to the heterogeneity of the methods used to describe, encode, and publish information about resources over the Web. This heterogeneity limits our ability to access the right information in the right context so that it can be efficiently retrieved and understood without the hydrologist's mediation. A primary challenge of the Web today is the lack of semantic interoperability among the massive number of resources that already exist and that are continually being generated at rapid rates. To address this challenge, we have developed a decentralized GeoSemantic (GS) framework, which provides three sets of micro-web services to support (i) semantic annotation of resources, (ii) semantic alignment between the metadata of two resources, and (iii) semantic mediation among Standard Names. Here we present the design of the framework and demonstrate its application for semantic integration between data and models used in the IML-CZO. First we show how the IML-CZO data are annotated using the Semantic Annotation Services. Then we illustrate how the Resource Alignment Services and Knowledge Integration Services are used to create a semantic workflow between the TopoFlow model, a spatially distributed hydrologic model, and the annotated data. Results of this work are (i) a demonstration of how the GS framework advances the integration of heterogeneous data and models of water-related disciplines by seamlessly handling their semantic heterogeneity, (ii) a new paradigm for reusing existing and new standards, as well as tools and models, without the need to implement them in the cyberinfrastructures of water-related disciplines, and (iii) a methodology by which distributed models can be coupled in a workflow using the GS services.
Adding Processing Functionality to the Sensor Web
NASA Astrophysics Data System (ADS)
Stasch, Christoph; Pross, Benjamin; Jirka, Simon; Gräler, Benedikt
2017-04-01
The Sensor Web allows discovering, accessing and tasking different kinds of environmental sensors on the Web, ranging from simple in-situ sensors to remote sensing systems. However, (geo-)processing functionality needs to be applied to integrate data from different sensor sources and to generate higher-level information products. Yet a common standardized approach for processing sensor data in the Sensor Web is still missing, and the integration differs from application to application. Standardizing not only the provision of sensor data but also the processing facilitates sharing and re-use of processing modules, enables reproducibility of processing results, and provides a common way to integrate external scalable processing facilities or legacy software. In this presentation, we provide an overview of ongoing research projects that develop concepts for coupling standardized geoprocessing technologies with Sensor Web technologies. First, different architectures for coupling sensor data services with geoprocessing services are presented. Afterwards, profiles of the OGC Web Processing Service for linear regression and spatio-temporal interpolation are introduced that consume sensor data coming from, and upload predictions to, Sensor Observation Services. The profiles are implemented in processing services for the hydrological domain. Finally, we illustrate how the R software can be coupled with existing OGC Sensor Web and geoprocessing services and present an example of how a Web app can be built that allows exploring the results of environmental models in an interactive way using the R Shiny framework. All of the software presented is available as open source software.
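Fetching observations from a Sensor Observation Service, as the WPS profiles above consume, can be sketched with OWSLib in Python; the endpoint, offering and observed property are hypothetical, and response parsing is omitted.

```python
from owslib.sos import SensorObservationService

# Hypothetical SOS endpoint, offering and observed property
sos = SensorObservationService('http://example.org/52n-sos/service')
xml = sos.get_observation(
    offerings=['WATER_LEVEL'],
    observedProperties=['http://example.org/properties/waterlevel'],
    responseFormat='text/xml;subtype="om/1.0.0"')

print(xml[:300])  # raw O&M XML; a real client would parse this further
```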
Ultrabroadband photonic internet: safety aspects
NASA Astrophysics Data System (ADS)
Kalicki, Arkadiusz; Romaniuk, Ryszard
2008-11-01
Web applications have become the most popular medium on the Internet. Their popularity and the ease of use of web application frameworks, combined with careless development, result in a high number of vulnerabilities and attacks. Several types of attacks are possible because of improper input validation. SQL injection is the ability to execute arbitrary SQL queries in a database through an existing application. Cross-site scripting is a vulnerability that allows malicious web users to inject code into the web pages viewed by other users. Cross-Site Request Forgery (CSRF) is an attack that tricks the victim into loading a page that contains a malicious request. Web spam in blogs is a further nuisance. There are several techniques to mitigate these attacks. Most important are strong web application design, correct input validation, defined data types for each field, and parameterized statements in SQL queries. Server hardening with a firewall, modern security policy systems and a safe web framework interpreter configuration are essential. It is advised to keep a proper security level on the client side, keep software updated and install personal web firewalls or IDS/IPS systems. Good habits include logging out from services just after finishing work and even using a separate web browser for the most important sites, such as e-banking.
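To make the parameterized-statement advice concrete, here is a small Python/sqlite3 sketch; the table and the injection string are illustrative only.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, email TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'alice@example.org')")

# Unsafe pattern: string concatenation lets "' OR '1'='1" escape the query
# query = "SELECT * FROM users WHERE name = '" + user_input + "'"

# Safe pattern: a parameterized statement keeps the input as data, never as SQL
user_input = "alice' OR '1'='1"
rows = conn.execute("SELECT * FROM users WHERE name = ?", (user_input,)).fetchall()
print(rows)  # [] -- the injection attempt matches no user
```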
ERIC Educational Resources Information Center
Rattanarungrot, Sasithorn; White, Martin; Newbury, Paul
2014-01-01
This paper describes the design of our service-oriented architecture to support mobile multiple object tracking augmented reality applications applied to education and learning scenarios. The architecture is composed of a mobile multiple object tracking augmented reality client, a web service framework, and dynamic content providers. Tracking of…
SCIMITAR: Scalable Stream-Processing for Sensor Information Brokering
2013-11-01
… (IaaS) cloud frameworks including Amazon Web Services and Eucalyptus. For load testing, we used The Grinder [9], a Java load testing framework that … internal Eucalyptus cluster, which we could not scale as large as the Amazon environment due to a lack of computation resources. We recreated our …
Kinjo, Akira R.; Bekker, Gert-Jan; Suzuki, Hirofumi; Tsuchiya, Yuko; Kawabata, Takeshi; Ikegawa, Yasuyo; Nakamura, Haruki
2017-01-01
The Protein Data Bank Japan (PDBj, http://pdbj.org), a member of the worldwide Protein Data Bank (wwPDB), accepts and processes the deposited data of experimentally determined macromolecular structures. While maintaining the archive in collaboration with other wwPDB partners, PDBj also provides a wide range of services and tools for analyzing structures and functions of proteins. We herein outline the updated web user interfaces together with RESTful web services and the backend relational database that support the former. To enhance the interoperability of the PDB data, we have previously developed PDB/RDF, PDB data in the Resource Description Framework (RDF) format, which is now a wwPDB standard called wwPDB/RDF. We have enhanced the connectivity of the wwPDB/RDF data by incorporating various external data resources. Services for searching, comparing and analyzing the ever-increasing large structures determined by hybrid methods are also described. PMID:27789697
Consuming Web Services: A Yahoo! Newsfeed Reader
ERIC Educational Resources Information Center
Dadashzadeh, Mohammad
2010-01-01
Service Oriented Architecture (SOA) shows demonstrable signs of simplifying software integration. It provides the necessary framework for building applications that can be integrated and can reduce the cost of integration significantly. Organizations are beginning to architect new integration solutions following the SOA approach. As such,…
MAPI: towards the integrated exploitation of bioinformatics Web Services.
Ramirez, Sergio; Karlsson, Johan; Trelles, Oswaldo
2011-10-27
Bioinformatics is commonly featured as a well-assorted list of available web resources. Although diversity of services is positive in general, the proliferation of tools, and their dispersion and heterogeneity, complicate the integrated exploitation of such data processing capacity. To facilitate the construction of software clients and make integrated use of this variety of tools, we present a modular programmatic application interface (MAPI) that provides the necessary functionality for the uniform representation of Web Services metadata descriptors, including the management and invocation protocols of the services they represent. This document describes the main functionality of the framework and how it can be used to facilitate the deployment of new software under a unified structure of bioinformatics Web Services. A notable feature of MAPI is the modular organization of the functionality into different modules associated with specific tasks. This means that only the modules needed for the client have to be installed, and that the module functionality can be extended without the need to re-write the software client. The potential utility and versatility of the software library have been demonstrated by the implementation of several currently available clients that cover different aspects of integrated data processing, ranging from service discovery to service invocation, with advanced features such as workflow composition and asynchronous service calls to multiple types of Web Services, including those registered in repositories (e.g. GRID-based, SOAP, BioMOBY, R-bioconductor, and others).
An automated and integrated framework for dust storm detection based on ogc web processing services
NASA Astrophysics Data System (ADS)
Xiao, F.; Shea, G. Y. K.; Wong, M. S.; Campbell, J.
2014-11-01
Dust storms are known to have adverse effects on public health. Atmospheric dust loading is also one of the major uncertainties in global climate modelling, as it is known to have a significant impact on the radiation budget and atmospheric stability. The complexity of building scientific dust storm models is coupled with advances in scientific computation, ongoing computing platform development, and the development of heterogeneous Earth Observation (EO) networks. It is a challenging task to develop an integrated and automated scheme for dust storm detection that combines geo-processing frameworks, scientific models and EO data to enable dust storm detection and tracking in a dynamic and timely manner. This study develops an automated and integrated framework for dust storm detection and tracking based on the Web Processing Service (WPS) initiated by the Open Geospatial Consortium (OGC). The presented WPS framework consists of EO data retrieval components, a dust storm detecting and tracking component, and a service chain orchestration engine. The EO data processing component is implemented based on the OPeNDAP standard. The dust storm detecting and tracking component combines three earth scientific models: the SBDART model (for computing the aerosol optical depth (AOT) of dust particles), the WRF model (for simulating meteorological parameters) and the HYSPLIT model (for simulating dust storm transport processes). The service chain orchestration engine is implemented based on the Business Process Execution Language for Web Services (BPEL4WS) using open-source software. The output results, including the horizontal and vertical AOT distribution of dust particles as well as their transport paths, were represented using KML/XML and displayed in Google Earth. A serious dust storm, which occurred over East Asia from 26 to 28 Apr 2012, is used to test the applicability of the proposed WPS framework. Our aim here is to solve a specific instance of a complex EO data and scientific model integration problem by using a framework and scientific workflow approach together. The experimental results show that this newly automated and integrated framework can be used to give advance near real-time warning of dust storms to both environmental authorities and the public. The methods presented in this paper might also be generalized to other types of Earth system models, leading to improved ease of use and flexibility.
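The OPeNDAP-based EO data retrieval step can be sketched from the client side in Python with netCDF4, which accepts OPeNDAP URLs when built with DAP support; the endpoint and variable name below are hypothetical placeholders, not the study's actual data sources.

```python
from netCDF4 import Dataset

# Hypothetical OPeNDAP endpoint serving an aerosol field used by the chain
ds = Dataset("http://example.org/opendap/modis/aot_20120427.nc")
aot = ds.variables["aot_550"]        # hypothetical AOT variable (time, y, x)
window = aot[0, 100:200, 300:400]    # only this subset crosses the network
print(window.shape, float(window.mean()))
ds.close()
```

Requesting slices rather than whole files is the main point of OPeNDAP here: the subsetting happens server-side, which matters when the detection chain runs repeatedly over large archives.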
de la Calle, Guillermo; García-Remesal, Miguel; Chiesa, Stefano; de la Iglesia, Diana; Maojo, Victor
2009-10-07
The rapid evolution of Internet technologies and the collaborative approaches that dominate the field have stimulated the development of numerous bioinformatics resources. To address this new framework, several initiatives have tried to organize these services and resources. In this paper, we present the BioInformatics Resource Inventory (BIRI), a new approach for automatically discovering and indexing available public bioinformatics resources using information extracted from the scientific literature. The index generated can be automatically updated by adding additional manuscripts describing new resources. We have developed web services and applications to test and validate our approach. It has not been designed to replace current indexes but to extend their capabilities with richer functionalities. We developed a web service to provide a set of high-level query primitives to access the index. The web service can be used by third-party web services or web-based applications. To test the web service, we created a pilot web application to access a preliminary knowledge base of resources. We tested our tool using an initial set of 400 abstracts. Almost 90% of the resources described in the abstracts were correctly classified. More than 500 descriptions of functionalities were extracted. These experiments suggest the feasibility of our approach for automatically discovering and indexing current and future bioinformatics resources. Given the domain-independent characteristics of this tool, it is currently being applied by the authors in other areas, such as medical nanoinformatics. BIRI is available at http://edelman.dia.fi.upm.es/biri/.
The Climate Data Analytic Services (CDAS) Framework.
NASA Astrophysics Data System (ADS)
Maxwell, T. P.; Duffy, D.
2016-12-01
Faced with unprecedented growth in climate data volume and demand, NASA has developed the Climate Data Analytic Services (CDAS) framework. This framework enables scientists to execute data processing workflows combining common analysis operations in a high performance environment close to the massive data stores at NASA. The data is accessed in standard (NetCDF, HDF, etc.) formats in a POSIX file system and processed using vetted climate data analysis tools (ESMF, CDAT, NCO, etc.). A dynamic caching architecture enables interactive response times. CDAS utilizes Apache Spark for parallelization and a custom array framework for processing huge datasets within limited memory spaces. CDAS services are accessed via a WPS API being developed in collaboration with the ESGF Compute Working Team to support server-side analytics for ESGF. The API can be accessed using either direct web service calls, a python script, a unix-like shell client, or a javascript-based web application. Client packages in python, scala, or javascript contain everything needed to make CDAS requests. The CDAS architecture brings together the tools, data storage, and high-performance computing required for timely analysis of large-scale data sets, where the data resides, to ultimately produce societal benefits. It is is currently deployed at NASA in support of the Collaborative REAnalysis Technical Environment (CREATE) project, which centralizes numerous global reanalysis datasets onto a single advanced data analytics platform. This service permits decision makers to investigate climate changes around the globe, inspect model trends and variability, and compare multiple reanalysis datasets.
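A direct web service call of the kind mentioned can be sketched with Python's requests; the endpoint, operation identifier and input encoding below are hypothetical placeholders illustrating a generic WPS key-value Execute request, not the actual CDAS API.

```python
import requests

# Hypothetical endpoint and identifiers; the real CDAS/ESGF WPS API may differ
endpoint = "https://example.nasa.gov/cdas/wps"
params = {
    "service": "WPS",
    "request": "Execute",
    "identifier": "CDSpark.average",  # hypothetical averaging operation
    "datainputs": ('[domain=[{"name":"d0","lat":{"start":-90,"end":90}}];'
                   'variable=[{"uri":"collection://merra2","name":"tas"}]]'),
}
response = requests.get(endpoint, params=params, timeout=120)
print(response.text[:400])  # WPS replies with an XML status/result document
```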
Java bioinformatics analysis web services for multiple sequence alignment--JABAWS:MSA.
Troshin, Peter V; Procter, James B; Barton, Geoffrey J
2011-07-15
JABAWS is a web services framework that simplifies the deployment of web services for bioinformatics. JABAWS:MSA provides services for five multiple sequence alignment (MSA) methods (Probcons, T-coffee, Muscle, Mafft and ClustalW), and is the system employed by the Jalview multiple sequence analysis workbench since version 2.6. A fully functional, easy-to-set-up server is provided as a Virtual Appliance (VA), which can be run on most operating systems that support a virtualization environment such as VMware or Oracle VirtualBox. JABAWS is also distributed as a Web Application aRchive (WAR) and can be configured to run on a single computer and/or a cluster managed by Grid Engine, LSF or other queuing systems that support DRMAA. JABAWS:MSA provides clients full access to each application's parameters, and allows administrators to specify named parameter preset combinations and execution limits for each application through simple configuration files. The JABAWS command-line client allows integration of JABAWS services into conventional scripts. JABAWS is made freely available under the Apache 2 license and can be obtained from: http://www.compbio.dundee.ac.uk/jabaws.
Christin, Zachary; Bagstad, Kenneth J.; Verdone, Michael
2016-01-01
Restoring degraded forests and agricultural lands has become a global conservation priority. A growing number of tools can quantify ecosystem service tradeoffs associated with forest restoration. This evolving “tools landscape” presents a dilemma: more tools are available, but selecting appropriate tools has become more challenging. We present a Restoration Ecosystem Service Tool Selector (RESTS) framework that describes key characteristics of 13 ecosystem service assessment tools. Analysts enter information about their decision context, services to be analyzed, and desired outputs. Tools are filtered and presented based on five evaluative criteria: scalability, cost, time requirements, handling of uncertainty, and applicability to benefit-cost analysis. RESTS uses a spreadsheet interface but a web-based interface is planned. Given the rapid evolution of ecosystem services science, RESTS provides an adaptable framework to guide forest restoration decision makers toward tools that can help quantify ecosystem services in support of restoration.
Web Service Distributed Management Framework for Autonomic Server Virtualization
NASA Astrophysics Data System (ADS)
Solomon, Bogdan; Ionescu, Dan; Litoiu, Marin; Mihaescu, Mircea
Virtualization for the x86 platform has imposed itself recently as a new technology that can improve the usage of machines in data centers and decrease the cost and energy of running a high number of servers. Similar to virtualization, autonomic computing and more specifically self-optimization, aims to improve server farm usage through provisioning and deprovisioning of instances as needed by the system. Autonomic systems are able to determine the optimal number of server machines - real or virtual - to use at a given time, and add or remove servers from a cluster in order to achieve optimal usage. While provisioning and deprovisioning of servers is very important, the way the autonomic system is built is also very important, as a robust and open framework is needed. One such management framework is the Web Service Distributed Management (WSDM) system, which is an open standard of the Organization for the Advancement of Structured Information Standards (OASIS). This paper presents an open framework built on top of the WSDM specification, which aims to provide self-optimization for applications servers residing on virtual machines.
Migration of the ATLAS Metadata Interface (AMI) to Web 2.0 and cloud
NASA Astrophysics Data System (ADS)
Odier, J.; Albrand, S.; Fulachier, J.; Lambert, F.
2015-12-01
The ATLAS Metadata Interface (AMI), a mature application with more than 10 years of existence, is currently being adapted to recently available technologies. The web interfaces, which previously manipulated XML documents using XSL transformations, are being migrated to Asynchronous JavaScript (AJAX). Web development is considerably simplified by the introduction of a framework based on JQuery and Twitter Bootstrap. Finally, the AMI services are being migrated to an OpenStack cloud infrastructure.
Creating User-Generated Content for Location-Based Learning: An Authoring Framework
ERIC Educational Resources Information Center
FitzGerald, E.
2012-01-01
Two recent emerging trends are Web 2.0, where users actively create content and publish it on the Web, and location awareness, where a digital device utilizes a person's physical location as the context to provide specific services and/or information. This paper examines how these two phenomena can be brought together so that…
Matos, Ely Edison; Campos, Fernanda; Braga, Regina; Palazzi, Daniele
2010-02-01
The amount of information generated by biological research has led to an intensive use of models. Mathematical and computational modeling needs accurate description to share, reuse and simulate models as formulated by original authors. In this paper, we introduce the Cell Component Ontology (CelO), expressed in OWL-DL. This ontology captures both the structure of a cell model and the properties of functional components. We use this ontology in a Web project (CelOWS) to describe, query and compose CellML models, using semantic web services. It aims to improve reuse and composition of existing components and to allow semantic validation of new models.
Web of Objects Based Ambient Assisted Living Framework for Emergency Psychiatric State Prediction
Alam, Md Golam Rabiul; Abedin, Sarder Fakhrul; Al Ameen, Moshaddique; Hong, Choong Seon
2016-01-01
Ambient assisted living can facilitate optimum health and wellness by aiding physical, mental and social well-being. In this paper, patients' psychiatric symptoms are collected through lightweight biosensors and web-based psychiatric screening scales in a smart home environment and then analyzed through machine learning algorithms to provide ambient intelligence in a psychiatric emergency. The psychiatric states are modeled through a Hidden Markov Model (HMM), and the model parameters are estimated using a Viterbi path counting and scalable Stochastic Variational Inference (SVI)-based training algorithm. The most likely psychiatric state sequence of the corresponding observation sequence is determined, and an emergency psychiatric state is predicted through the proposed algorithm. Moreover, to enable personalized psychiatric emergency care, a web-of-objects-based service framework is proposed for a smart-home environment. In this framework, the biosensor observations and the psychiatric rating scales are objectified and virtualized in the web space. Then, the web of objects of sensor observations and psychiatric rating scores are used to assess the dweller's mental health status and to predict an emergency psychiatric state. The proposed psychiatric state prediction algorithm reported 83.03 percent prediction accuracy in an empirical performance study. PMID:27608023
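The most-likely-state-sequence computation referred to above is standard Viterbi decoding. Here is a minimal sketch in Python; the toy states, observations and probabilities are assumptions for illustration, not the paper's trained model.

```python
# Minimal sketch of Viterbi decoding for a discrete HMM, one building block of
# psychiatric-state sequence estimation; the toy probabilities are assumptions.
def viterbi(obs, states, start_p, trans_p, emit_p):
    # V[t][s] = (probability of best path ending in state s at time t, predecessor)
    V = [{s: (start_p[s] * emit_p[s][obs[0]], None) for s in states}]
    for t in range(1, len(obs)):
        V.append({})
        for s in states:
            prob, prev = max(
                (V[t - 1][p][0] * trans_p[p][s] * emit_p[s][obs[t]], p)
                for p in states)
            V[t][s] = (prob, prev)
    # Backtrack from the most probable final state.
    last = max(V[-1], key=lambda s: V[-1][s][0])
    path = [last]
    for t in range(len(obs) - 1, 0, -1):
        path.insert(0, V[t][path[0]][1])
    return path

states = ("stable", "emergency")
print(viterbi(("calm", "agitated", "agitated"), states,
              {"stable": 0.8, "emergency": 0.2},
              {"stable": {"stable": 0.7, "emergency": 0.3},
               "emergency": {"stable": 0.4, "emergency": 0.6}},
              {"stable": {"calm": 0.9, "agitated": 0.1},
               "emergency": {"calm": 0.2, "agitated": 0.8}}))
```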
Software Framework for Peer Data-Management Services
NASA Technical Reports Server (NTRS)
Hughes, John; Hardman, Sean; Crichton, Daniel; Hyon, Jason; Kelly, Sean; Tran, Thuy
2007-01-01
Object Oriented Data Technology (OODT) is a software framework for creating a Web-based system for exchange of scientific data that are stored in diverse formats on computers at different sites under the management of scientific peers. OODT software consists of a set of cooperating, distributed peer components that provide distributed peer-to-peer (P2P) services that enable one peer to search and retrieve data managed by another peer. In effect, computers running OODT software at different locations become parts of an integrated data-management system.
An Innovation Framework Applied to a Military Cyber Professionals Association
2013-09-01
…of innovation generation described in the book The Innovator's Way (IW) was selected as the framework for the skill sets required to establish the… fees nonexistent for service members. …distilled in the IW framework. The wisdom found in the pages of the IW book is a product of years of thoughtful contemplation of experiences and…
miRMaid: a unified programming interface for microRNA data resources
2010-01-01
Background MicroRNAs (miRNAs) are endogenous small RNAs that play a key role in post-transcriptional regulation of gene expression in animals and plants. The number of known miRNAs has increased rapidly over the years. The current release (version 14.0) of miRBase, the central online repository for miRNA annotation, comprises over 10,000 miRNA precursors from 115 different species. Furthermore, a large number of decentralized online resources are now available, each contributing with important miRNA annotation and information. Results We have developed a software framework, designated here as miRMaid, with the goal of integrating miRNA data resources in a uniform web service interface that can be accessed and queried by researchers and, most importantly, by computers. miRMaid is built around data from miRBase and is designed to follow the official miRBase data releases. It exposes miRBase data as inter-connected web services. Third-party miRNA data resources can be modularly integrated as miRMaid plugins or they can loosely couple with miRMaid as individual entities in the World Wide Web. miRMaid is available as a public web service but is also easily installed as a local application. The software framework is freely available under the LGPL open source license for academic and commercial use. Conclusion miRMaid is an intuitive and modular software platform designed to unify miRBase and independent miRNA data resources. It enables miRNA researchers to computationally address complex questions involving the multitude of miRNA data resources. Furthermore, miRMaid constitutes a basic framework for further programming in which microRNA-interested bioinformaticians can readily develop their own tools and data sources. PMID:20074352
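Since miRMaid exposes miRBase entities as inter-connected RESTful web services, a client can in principle fetch a resource and follow its links with plain HTTP. A minimal sketch; the host, route and JSON field names are assumptions, not documented miRMaid routes.

```python
# Minimal sketch of querying a miRMaid-style REST resource over HTTP.
# The host, route, and response fields are assumptions for illustration.
import requests

resp = requests.get("http://mirmaid.example.org/precursors/hsa-mir-21.json",
                    timeout=30)
resp.raise_for_status()
precursor = resp.json()
print(precursor.get("name"), precursor.get("sequence"))
```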
A new web-based framework development for fuzzy multi-criteria group decision-making.
Hanine, Mohamed; Boutkhoum, Omar; Tikniouine, Abdessadek; Agouti, Tarik
2016-01-01
Fuzzy multi-criteria group decision making (FMCGDM) process is usually used when a group of decision-makers faces imprecise data or linguistic variables to solve the problems. However, this process contains many methods that require many time-consuming calculations depending on the number of criteria, alternatives and decision-makers in order to reach the optimal solution. In this study, a web-based FMCGDM framework that offers decision-makers a fast and reliable response service is proposed. The proposed framework includes commonly used tools for multi-criteria decision-making problems such as fuzzy Delphi, fuzzy AHP and fuzzy TOPSIS methods. The integration of these methods makes it possible to exploit the strengths of each method while complementing its weaknesses. Finally, a case study of location selection for landfill waste in Morocco is performed to demonstrate how this framework can facilitate the decision-making process. The results demonstrate that the proposed framework can successfully accomplish the goal of this study.
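To make the ranking step concrete, here is a minimal sketch of classical (crisp) TOPSIS; the fuzzy variant used in the paper replaces crisp scores with linguistic/fuzzy numbers, but the closeness-coefficient idea is the same. The decision matrix and weights below are assumed toy data.

```python
# Minimal sketch of crisp TOPSIS ranking; all criteria treated as benefits.
import numpy as np

X = np.array([[7.0, 9.0, 9.0],      # alternative 1 scored on 3 criteria (assumed)
              [8.0, 7.0, 8.0],
              [9.0, 6.0, 8.0]])
w = np.array([0.5, 0.3, 0.2])       # criteria weights (assumed)

R = X / np.linalg.norm(X, axis=0)   # vector-normalize each criterion column
V = R * w                           # weighted normalized decision matrix
ideal, anti = V.max(axis=0), V.min(axis=0)
d_pos = np.linalg.norm(V - ideal, axis=1)   # distance to the ideal solution
d_neg = np.linalg.norm(V - anti, axis=1)    # distance to the anti-ideal solution
closeness = d_neg / (d_pos + d_neg)
print("ranking (best first):", np.argsort(-closeness) + 1)
```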
The Earth Data Analytic Services (EDAS) Framework
NASA Astrophysics Data System (ADS)
Maxwell, T. P.; Duffy, D.
2017-12-01
Faced with unprecedented growth in earth data volume and demand, NASA has developed the Earth Data Analytic Services (EDAS) framework, a high performance big data analytics framework built on Apache Spark. This framework enables scientists to execute data processing workflows combining common analysis operations close to the massive data stores at NASA. The data is accessed in standard (NetCDF, HDF, etc.) formats in a POSIX file system and processed using vetted earth data analysis tools (ESMF, CDAT, NCO, etc.). EDAS utilizes a dynamic caching architecture, a custom distributed array framework, and a streaming parallel in-memory workflow for efficiently processing huge datasets within limited memory spaces with interactive response times. EDAS services are accessed via a WPS API being developed in collaboration with the ESGF Compute Working Team to support server-side analytics for ESGF. The API can be accessed using direct web service calls, a Python script, a Unix-like shell client, or a JavaScript-based web application. New analytic operations can be developed in Python, Java, or Scala (with support for other languages planned). Client packages in Python, Java/Scala, or JavaScript contain everything needed to build and submit EDAS requests. The EDAS architecture brings together the tools, data storage, and high-performance computing required for timely analysis of large-scale data sets, where the data resides, to ultimately produce societal benefits. It is currently deployed at NASA in support of the Collaborative REAnalysis Technical Environment (CREATE) project, which centralizes numerous global reanalysis datasets onto a single advanced data analytics platform. This service enables decision makers to compare multiple reanalysis datasets and investigate trends, variability, and anomalies in earth system dynamics around the globe.
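Since EDAS exposes its analytics through a WPS API, a client can discover the available operations with a standard OGC GetCapabilities request before scripting against them. A minimal sketch; the service URL is a placeholder assumption.

```python
# Minimal sketch of a standard OGC WPS GetCapabilities request, of the kind an
# EDAS-style WPS endpoint would answer; the URL below is a placeholder.
import requests

params = {"service": "WPS", "version": "1.0.0", "request": "GetCapabilities"}
resp = requests.get("https://edas.example.nasa.gov/wps", params=params, timeout=60)
resp.raise_for_status()
print(resp.text[:500])  # the capabilities document lists the available processes
```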
SADI, SHARE, and the in silico scientific method
2010-01-01
Background The emergence and uptake of Semantic Web technologies by the Life Sciences provides exciting opportunities for exploring novel ways to conduct in silico science. Web Service Workflows are already becoming first-class objects in “the new way”, and serve as explicit, shareable, referenceable representations of how an experiment was done. In turn, Semantic Web Service projects aim to facilitate workflow construction by biological domain-experts such that workflows can be edited, re-purposed, and re-published by non-informaticians. However the aspects of the scientific method relating to explicit discourse, disagreement, and hypothesis generation have remained relatively impervious to new technologies. Results Here we present SADI and SHARE - a novel Semantic Web Service framework, and a reference implementation of its client libraries. Together, SADI and SHARE allow the semi- or fully-automatic discovery and pipelining of Semantic Web Services in response to ad hoc user queries. Conclusions The semantic behaviours exhibited by SADI and SHARE extend the functionalities provided by Description Logic Reasoners such that novel assertions can be automatically added to a data-set without logical reasoning, but rather by analytical or annotative services. This behaviour might be applied to achieve the “semantification” of those aspects of the in silico scientific method that are not yet supported by Semantic Web technologies. We support this suggestion using an example in the clinical research space. PMID:21210986
3D medical volume reconstruction using web services.
Kooper, Rob; Shirk, Andrew; Lee, Sang-Chul; Lin, Amy; Folberg, Robert; Bajcsy, Peter
2008-04-01
We address the problem of 3D medical volume reconstruction using web services. The use of the proposed web services is motivated by the fact that the problem of 3D medical volume reconstruction requires significant computer resources and human expertise in medical and computer science areas. Web services are implemented as an additional layer to a dataflow framework called Data to Knowledge. In the collaboration between UIC and NCSA, pre-processed input images at NCSA are made accessible to medical collaborators for registration. Every time the UIC medical collaborators inspect images and select corresponding features for registration, the web service at NCSA is contacted and the registration processing query is executed using the Image to Knowledge library of registration methods. Co-registered frames are returned for verification by medical collaborators in a new window. In this paper, we present the 3D volume reconstruction problem requirements and the architecture of the developed prototype system at http://isda.ncsa.uiuc.edu/MedVolume. We also explain the tradeoffs of our system design and provide experimental data to support our system implementation. The prototype system has been used for multiple 3D volume reconstructions of blood vessels and vasculogenic mimicry patterns in histological sections of uveal melanoma studied by fluorescent confocal laser scanning microscope.
Dmitrieva, Olga; Michalakidis, Georgios; Mason, Aaron; Jones, Simon; Chan, Tom; de Lusignan, Simon
2012-01-01
A new distributed model of health care management is being introduced in England. Family practitioners have new responsibilities for the management of health care budgets and commissioning of services. There are national datasets available about health care providers and the geographical areas they serve. These data could be better used to assist family practitioners turned health service commissioners. Unfortunately these data are not in a form that is readily usable by these fledgling family commissioning groups. We therefore web-enabled all the national hospital dermatology treatment data in England, combining it with locality data to provide a smart commissioning tool for local communities. We used open-source software including the Ruby on Rails Web framework and MySQL. The system has a Web front-end, which uses hypertext markup language cascading style sheets (HTML/CSS) and JavaScript to deliver and present data provided by the database. A combination of advanced caching and schema structures allows for faster data retrieval on every execution. The system provides an intuitive environment for data analysis and processing across a large health system dataset. Web-enablement has enabled data about inpatients, day cases and outpatients to be readily grouped, viewed, and linked to other data. The combination of web-enablement, consistent data collection from all providers, readily available locality data, and a registration-based primary care system enables the creation of data which can be used to commission dermatology services in small areas. Standardized datasets collected across large health enterprises, when web-enabled, can readily benchmark local services and inform commissioning decisions.
A Formal Semantics for the WS-BPEL Recovery Framework
NASA Astrophysics Data System (ADS)
Dragoni, Nicola; Mazzara, Manuel
While current studies on Web services composition are mostly focused - from the technical viewpoint - on standards and protocols, this work investigates the adoption of formal methods for dependable composition. The Web Services Business Process Execution Language (WS-BPEL) - an OASIS standard widely adopted in both academic and industrial environments - is considered as a touchstone for concrete composition languages, and an analysis of its ambiguous Recovery Framework specification is offered. In order to show the use of formal methods, a precise and unambiguous description of its (simplified) mechanisms is provided by means of a conservative extension of the π-calculus. This is intended as a well-known case study providing methodological arguments for the adoption of formal methods in software specification. The aspect of verification is not the main topic of the paper, but some hints are given.
NASA Astrophysics Data System (ADS)
Zhu, Z.; Bi, J.; Wang, X.; Zhu, W.
2014-02-01
As an important sub-topic in the construction of a public information platform for carbon emission data from natural processes, the WebGIS system for carbon emissions from coalfield spontaneous combustion has become an important object of study. Given the characteristics of coalfield spontaneous combustion carbon emission data (a wide range of rich, complex data) and their geospatial nature, the data are divided into attribute data and spatial data. Based on a full analysis of the data, we completed a detailed design of the Oracle database and stored the data in it. Through Silverlight rich-client technology and WCF service extensions, we implemented dynamic web query, retrieval, statistics and analysis functions for the attribute data. For spatial data, we used ArcGIS Server and the Silverlight-based API to invoke the map services, GP services, image services and other services published by the GIS server, implementing the display and analysis of remote sensing imagery and web map data for coalfield spontaneous combustion, as well as thematic map production. The study found that Silverlight rich-client technology, combined with an object-oriented WCF service framework, can be used to construct a WebGIS system efficiently. Combining this with the ArcGIS Silverlight API to achieve interactive querying of the attribute and spatial data of coalfield spontaneous combustion emissions can greatly improve the performance of the WebGIS system. At the same time, it provides a strong foundation for the construction of a public information platform for China's carbon emission data.
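Attribute queries of the kind described are typically issued against the ArcGIS Server REST endpoint that backs the published map services. A minimal sketch; the service URL, layer index and attribute field are assumptions for illustration.

```python
# Minimal sketch of an attribute query against an ArcGIS Server REST endpoint,
# the kind of service a Silverlight client consumes; URL and field are assumed.
import requests

url = "http://gis.example.org/arcgis/rest/services/coalfield/MapServer/0/query"
params = {"where": "emission_t > 1000",   # hypothetical attribute filter
          "outFields": "*",
          "f": "json"}                    # ask for a JSON response
features = requests.get(url, params=params, timeout=30).json().get("features", [])
print(len(features), "matching features")
```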
EnviroAtlas: A New Geospatial Tool to Foster Ecosystem Services Science and Resource Management
In this article we present EnviroAtlas, a web-based, open access tool that seeks to meet a range of needs by bringing together environmental, economic and demographic data in an ecosystem services framework. Within EnviroAtlas, there are three primary types of geospatial data: r...
An active monitoring method for flood events
NASA Astrophysics Data System (ADS)
Chen, Zeqiang; Chen, Nengcheng; Du, Wenying; Gong, Jianya
2018-07-01
Timely and active detection and monitoring of a flood event are critical for a quick response, effective decision-making and disaster reduction. To achieve this purpose, this paper proposes an active service framework for flood monitoring based on Sensor Web services, and an active model for the concrete implementation of the active service framework. The framework consists of two core components: active warning and active planning. The active warning component is based on a publish-subscribe mechanism implemented by the Sensor Event Service. The active planning component employs the Sensor Planning Service to control the execution of the schemes and models and plans the model input data. The active model, called SMDSA, defines the quantitative calculation method for five elements (scheme, model, data, sensor, and auxiliary information) as well as their associations. Experimental monitoring of the Liangzi Lake flood in the summer of 2010 was conducted to test the proposed framework and model. The results show that 1) the proposed active service framework is efficient for timely and automated flood monitoring; 2) the active model, SMDSA, provides a quantitative calculation method that moves flood monitoring from manual intervention to automatic computation; and 3) as much preliminary work as possible should be done to take full advantage of the active service framework and the active model.
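The publish-subscribe idea behind the active-warning component can be sketched in a few lines; a Sensor Event Service adds standardized filter expressions and web-service transport on top of the same pattern. The threshold and event fields below are assumptions.

```python
# Minimal sketch of publish-subscribe event filtering for active flood warning.
class EventBus:
    def __init__(self):
        self.subscribers = []

    def subscribe(self, predicate, handler):
        # Register a filter predicate with the handler to fire when it matches.
        self.subscribers.append((predicate, handler))

    def publish(self, event):
        for predicate, handler in self.subscribers:
            if predicate(event):
                handler(event)

bus = EventBus()
bus.subscribe(lambda e: e["water_level_m"] > 21.5,            # assumed threshold
              lambda e: print("FLOOD WARNING:", e))
bus.publish({"station": "Liangzi Lake", "water_level_m": 21.8})
```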
NASA Astrophysics Data System (ADS)
Signell, Richard P.; Camossi, Elena
2016-05-01
Work over the last decade has resulted in standardised web services and tools that can significantly improve the efficiency and effectiveness of working with meteorological and ocean model data. While many operational modelling centres have enabled query and access to data via common web services, most small research groups have not. The penetration of this approach into the research community, where IT resources are limited, can be dramatically improved by (1) making it simple for providers to enable web service access to existing output files; (2) using free technologies that are easy to deploy and configure; and (3) providing standardised, service-based tools that work in existing research environments. We present a simple, local brokering approach that lets modellers continue to use their existing files and tools, while serving virtual data sets that can be used with standardised tools. The goal of this paper is to convince modellers that a standardised framework is not only useful but can be implemented with modest effort using free software components. We use NetCDF Markup language for data aggregation and standardisation, the THREDDS Data Server for data delivery, pycsw for data search, NCTOOLBOX (MATLAB®) and Iris (Python) for data access, and Open Geospatial Consortium Web Map Service for data preview. We illustrate the effectiveness of this approach with two use cases involving small research modelling groups at NATO and USGS.
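In the approach described, analysts read the virtual aggregations served by THREDDS over OPeNDAP instead of copying files. A minimal sketch using the netCDF4-python library (Iris would work similarly); the catalog URL and variable name are placeholder assumptions.

```python
# Minimal sketch: reading a THREDDS-served virtual aggregation over OPeNDAP.
# The dataset URL and variable name are placeholder assumptions.
from netCDF4 import Dataset

url = "http://thredds.example.org/thredds/dodsC/roms/hindcast_agg.ncml"
ds = Dataset(url)              # netCDF4 opens DAP URLs when built with DAP support
temp = ds.variables["temp"]    # assumed variable name
print(temp.shape, getattr(temp, "units", "no units attribute"))
ds.close()
```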
Rafael Moreno-Sanchez
2006-01-01
The aim of this paper is to provide a conceptual framework for the session: "The role of web-based Geographic Information Systems in supporting sustainable management." The concepts of sustainability, sustainable forest management, Web Services, Distributed Geographic Information Systems, interoperability, Open Specifications, and Open Source Software are defined...
ERIC Educational Resources Information Center
Lim, Billy; Hosack, Bryan; Vogt, Paul
2012-01-01
This paper describes a framework for measuring student learning gains and engagement in a Computer Science 1 (CS 1) / Information Systems 1 (IS 1) course. The framework is designed for a CS1/IS1 course as it has been traditionally taught over the years as well as when it is taught using a new pedagogical approach with Web services. It enables the…
MAPI: a software framework for distributed biomedical applications
2013-01-01
Background The amount of web-based resources (databases, tools etc.) in biomedicine has increased, but the integrated usage of those resources is complex due to differences in access protocols and data formats. However, distributed data processing is becoming inevitable in several domains, in particular in biomedicine, where researchers face rapidly increasing data sizes. This big data is difficult to process locally because of the large processing, memory and storage capacity required. Results This manuscript describes a framework, called MAPI, which provides a uniform representation of resources available over the Internet, in particular for Web Services. The framework enhances their interoperability and collaborative use by enabling a uniform and remote access. The framework functionality is organized in modules that can be combined and configured in different ways to fulfil concrete development requirements. Conclusions The framework has been tested in the biomedical application domain where it has been a base for developing several clients that are able to integrate different web resources. The MAPI binaries and documentation are freely available at http://www.bitlab-es.com/mapi under the Creative Commons Attribution-No Derivative Works 2.5 Spain License. The MAPI source code is available by request (GPL v3 license). PMID:23311574
Web service activities at the IRIS DMC to support federated and multidisciplinary access
NASA Astrophysics Data System (ADS)
Trabant, Chad; Ahern, Timothy K.
2013-04-01
At the IRIS Data Management Center (DMC) we have developed a suite of web service interfaces to access our large archive of, primarily seismological, time series data and related metadata. The goals of these web services include providing: a) next-generation and easily used access interfaces for our current users, b) access to data holdings in a form usable for non-seismologists, c) programmatic access to facilitate integration into data processing workflows and d) a foundation for participation in federated data discovery and access systems. To support our current users, our services provide access to the raw time series data and metadata or conversions of the raw data to commonly used formats. Our services also support simple, on-the-fly signal processing options that are common first steps in many workflows. Additionally, high-level data products derived from raw data are available via service interfaces. To support data access by researchers unfamiliar with seismic data we offer conversion of the data to broadly usable formats (e.g. ASCII text) and data processing to convert the data to Earth units. By their very nature, web services are programmatic interfaces. Combined with ubiquitous support for web technologies in programming & scripting languages and support in many computing environments, web services are very well suited for integrating data access into data processing workflows. As programmatic interfaces that can return data in both discipline-specific and broadly usable formats, our services are also well suited for participation in federated and brokered systems either specific to seismology or multidisciplinary. Working within the International Federation of Digital Seismograph Networks, the DMC collaborated on the specification of standardized web service interfaces for use at any seismological data center. These data access interfaces, when supported by multiple data centers, will form a foundation on which to build discovery and access mechanisms for data sets spanning multiple centers. To promote the adoption of these standardized services the DMC has developed portable implementations of the software needed to host these interfaces, minimizing the work required at each data center. Within the COOPEUS project framework, the DMC is working with EU partners to install web services implementations at multiple data centers in Europe.
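For example, the DMC's standardized FDSN web services can be called from processing scripts with almost no boilerplate. A minimal sketch using the ObsPy FDSN client; the network, station and time window are arbitrary example values.

```python
# Minimal sketch: requesting waveforms through the FDSN web services hosted at
# the IRIS DMC, using the ObsPy client; station codes and times are examples.
from obspy import UTCDateTime
from obspy.clients.fdsn import Client

client = Client("IRIS")
t0 = UTCDateTime("2012-04-11T08:38:00")
stream = client.get_waveforms(network="IU", station="ANMO", location="00",
                              channel="BHZ", starttime=t0, endtime=t0 + 600)
print(stream)  # a Stream of Trace objects ready for further processing
```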
A framework for integration of scientific applications into the OpenTopography workflow
NASA Astrophysics Data System (ADS)
Nandigam, V.; Crosby, C.; Baru, C.
2012-12-01
The NSF-funded OpenTopography facility provides online access to Earth science-oriented high-resolution LIDAR topography data, online processing tools, and derivative products. The underlying cyberinfrastructure employs a multi-tier service oriented architecture that is comprised of an infrastructure tier, a processing services tier, and an application tier. The infrastructure tier consists of storage and compute resources as well as supporting databases. The services tier consists of the set of processing routines, each deployed as a Web service. The applications tier provides client interfaces to the system (e.g. the Portal). We propose a "pluggable" infrastructure design that will allow new scientific algorithms and processing routines developed and maintained by the community to be integrated into the OpenTopography system, so that the wider earth science community can benefit from their availability. All core components in OpenTopography are available as Web services using a customized open-source Opal toolkit. The Opal toolkit provides mechanisms to manage and track job submissions, with the help of a back-end database. It allows monitoring of job and system status by providing charting tools. All core components in OpenTopography have been developed, maintained and wrapped as Web services using Opal by OpenTopography developers. However, as the scientific community develops new processing and analysis approaches, this integration approach does not scale efficiently. Most of the new scientific applications will have their own active development teams performing regular updates, maintenance and other improvements. It would be optimal to have each application co-located where its developers can continue to actively work on it while still making it accessible within the OpenTopography workflow for processing capabilities. We will utilize a software framework for remote integration of these scientific applications into the OpenTopography system. This will be accomplished by virtually extending the OpenTopography service over the various infrastructures running these scientific applications and processing routines. This involves packaging and distributing a customized instance of the Opal toolkit that will wrap the software application as an Opal-based web service and integrate it into the OpenTopography framework. We plan to make this as automated as possible. A structured specification of service inputs and outputs, along with metadata annotations encoded in XML, can be utilized to automate the generation of user interfaces, with appropriate tool tips and user help features, and the generation of other internal software. The OpenTopography Opal toolkit will also include the customizations that will enable security authentication, authorization and the ability to write application usage and job statistics back to the OpenTopography databases. This usage information could then be reported to the original service providers and used for auditing and performance improvements. This pluggable framework will enable application developers to continue to work on enhancing their applications while making the latest iteration available in a timely manner to the earth sciences community. This will also help us establish an overall framework that other scientific application providers will be able to use going forward.
Progress toward a Semantic eScience Framework; building on advanced cyberinfrastructure
NASA Astrophysics Data System (ADS)
McGuinness, D. L.; Fox, P. A.; West, P.; Rozell, E.; Zednik, S.; Chang, C.
2010-12-01
Development of the configurable and extensible semantic eScience framework (SESF) has begun, with implementation of several semantic application components. Extensions and improvements to several ontologies have been made based on distinct interdisciplinary use cases ranging from solar physics to biological and chemical oceanography. Importantly, these semantic representations mediate access to a diverse set of existing and emerging cyberinfrastructure. Among the advances are the population of triple stores with web accessible query services. A triple store is akin to a relational data store where the basic stored unit is a subject-predicate-object tuple. Access via a query is provided by the W3C Recommendation language specification SPARQL. Upon this middle tier of semantic cyberinfrastructure, we have developed several forms of semantic faceted search, including provenance-awareness. We report on the rapid advances in semantic technologies and tools and how we are sustaining the software path for the required technical advances as well as the ontology improvements and increased functionality of the semantic applications, including how they are integrated into web-based portals (e.g. Drupal) and web services. Lastly, we indicate future work directions and opportunities for collaboration.
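Concretely, such a web-accessible query service answers SPARQL over HTTP. A minimal sketch with the SPARQLWrapper library; the endpoint URL is a placeholder and the query is a generic triple pattern, not one of the project's ontology-specific queries.

```python
# Minimal sketch of querying a triple store's SPARQL endpoint over HTTP.
# The endpoint URL is a placeholder assumption.
from SPARQLWrapper import SPARQLWrapper, JSON

sparql = SPARQLWrapper("http://sesf.example.edu/sparql")  # hypothetical endpoint
sparql.setQuery("""
    SELECT ?s ?p ?o WHERE { ?s ?p ?o } LIMIT 10
""")
sparql.setReturnFormat(JSON)
for row in sparql.query().convert()["results"]["bindings"]:
    print(row["s"]["value"], row["p"]["value"], row["o"]["value"])
```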
NASA Astrophysics Data System (ADS)
Fox, P.; McGuinness, D.; Cinquini, L.; West, P.; Garcia, J.; Zednik, S.; Benedict, J.
2008-05-01
This presentation will demonstrate how users and other data providers can utilize the Virtual Solar-Terrestrial Observatory (VSTO) to find, access and use diverse data holdings from the disciplines of solar, solar-terrestrial and space physics. VSTO provides a web portal, web services and a native applications programming interface for various levels of users. Since these access methods are based on semantic web technologies and refer to the VSTO ontology, users also have the option of taking advantage of value-added services when accessing and using the data. We present examples of both conventional use of VSTO and advanced semantic use. Finally, we present our future directions for VSTO and semantic data frameworks in general.
A web service for service composition to aid geospatial modelers
NASA Astrophysics Data System (ADS)
Bigagli, L.; Santoro, M.; Roncella, R.; Mazzetti, P.
2012-04-01
The identification of appropriate mechanisms for process reuse, chaining and composition is considered a key enabler for the effective uptake of a global Earth Observation infrastructure, currently pursued by the international geospatial research community. In the Earth and Space Sciences, such a facility could primarily enable integrated and interoperable modeling, for which several approaches have been proposed and developed over the last years. In fact, GEOSS is specifically tasked with the development of the so-called "Model Web". At increasing levels of abstraction and generalization, the initial stove-pipe software tools have evolved to community-wide modeling frameworks, to Component-Based Architecture solutions, and, more recently, have started to embrace Service-Oriented Architecture technologies, such as the OGC WPS specification and the WS-* stack of W3C standards for service composition. However, so far, the level of abstraction seems too low for implementing the Model Web vision, and far too complex technological aspects must still be addressed by both providers and users, resulting in limited usability and, eventually, difficult uptake. In line with the recent ICT trend of resource virtualization, it has been suggested that users in need of a particular processing capability, required by a given modeling workflow, may benefit from outsourcing the composition activities to an external first-class service, according to the Composition as a Service (CaaS) approach. A CaaS system provides the necessary interoperability service framework for adaptation, reuse and complementation of existing processing resources (including models and geospatial services in general) in the form of executable workflows. This work introduces the architecture of a CaaS system, as a distributed information system for creating, validating, editing, storing, publishing, and executing geospatial workflows. This way, users can be freed from the need of a composition infrastructure and alleviated from the technicalities of workflow definitions (type matching, identification of external service endpoints, binding issues, etc.) and focus on their intended application. Moreover, a user may submit an incomplete workflow definition, and leverage CaaS recommendations (that may derive from an aggregated knowledge base of user feedback, underpinned by Web 2.0 technologies) to execute it. This is of particular interest for multidisciplinary scientific contexts, where different communities may benefit from each other's knowledge through model chaining. Indeed, the CaaS approach is presented as an attempt to combine the recent advances in service-oriented computing with collaborative research principles, and social network information in general. Arguably, it may be considered a fundamental capability of the Model Web. The CaaS concept is being investigated in several application scenarios identified in the FP7 UncertWeb and EuroGEOSS projects. Key aspects of the described CaaS solution are: it provides a standard WPS interface for invoking Business Processes and allows on-the-fly recursive compositions of Business Processes into other Composite Processes; it is designed according to the extended SOA (broker-based) and the System-of-Systems approach, to support the reuse and integration of existing resources, in compliance with the GEOSS Model Web architecture. The research leading to these results has received funding from the European Community's Seventh Framework Programme (FP7/2007-2013) under Grant Agreement n° 248488.
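Because the CaaS system fronts its composite workflows with a standard WPS interface, a client would first ask the service to describe a composite process before invoking it. A minimal sketch using standard OGC WPS key-value parameters; the endpoint URL and process identifier are assumptions.

```python
# Minimal sketch: asking a CaaS-style WPS to describe a composite process.
# The service URL and process identifier are placeholder assumptions.
import requests

params = {"service": "WPS", "version": "1.0.0",
          "request": "DescribeProcess",
          "identifier": "CompositeRunoffModel"}   # hypothetical workflow name
resp = requests.get("http://caas.example.eu/wps", params=params, timeout=60)
resp.raise_for_status()
print(resp.text[:400])  # inputs/outputs of the workflow, usable for type matching
```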
From Web accessibility to Web adaptability.
Kelly, Brian; Nevile, Liddy; Sloan, David; Fanou, Sotiris; Ellison, Ruth; Herrod, Lisa
2009-07-01
This article asserts that current approaches to enhance the accessibility of Web resources fail to provide a solid foundation for the development of a robust and future-proofed framework. In particular, they fail to take advantage of new technologies and technological practices. The article introduces a framework for Web adaptability, which encourages the development of Web-based services that can be resilient to the diversity of uses of such services, the target audience, available resources, technical innovations, organisational policies and relevant definitions of 'accessibility'. The article refers to a series of author-focussed approaches to accessibility through which the authors and others have struggled to find ways to promote accessibility for people with disabilities. These approaches depend upon the resource author's determination of the anticipated users' needs and their provision. Through approaches labelled as 1.0, 2.0 and 3.0, the authors have widened their focus to account for contexts and individual differences in target audiences. Now, the authors want to recognise the role of users in determining their engagement with resources (including services). To distinguish this new approach, the term 'adaptability' has been used to replace 'accessibility'; new definitions of accessibility have been adopted, and the authors have reviewed their previous work to clarify how it is relevant to the new approach. Accessibility 1.0 is here characterised as a technical approach in which authors are told how to construct resources for a broadly defined audience. This is known as universal design. Accessibility 2.0 was introduced to point to the need to account for the context in which resources would be used, to help overcome inadequacies identified in the purely technical approach. Accessibility 3.0 moved the focus on users from a homogenised universal definition to recognition of the idiosyncratic needs and preferences of individuals and to cater for them. All of these approaches placed responsibility within the authoring/publishing domain without recognising the role the user might want to play, or the roles that other users in social networks, or even Web services might play. Adaptability shifts the emphasis and calls for greater freedom for the users to facilitate individual accessibility in the open Web environment.
A UIMA wrapper for the NCBO annotator.
Roeder, Christophe; Jonquet, Clement; Shah, Nigam H; Baumgartner, William A; Verspoor, Karin; Hunter, Lawrence
2010-07-15
The Unstructured Information Management Architecture (UIMA) framework and web services are emerging as useful tools for integrating biomedical text mining tools. This note describes our work, which wraps the National Center for Biomedical Ontology (NCBO) Annotator, an ontology-based annotation service, to make it available as a component in UIMA workflows. This wrapper is freely available on the web at http://bionlp-uima.sourceforge.net/ as part of the UIMA tools distribution from the Center for Computational Pharmacology (CCP) at the University of Colorado School of Medicine. It has been implemented in Java for support on Mac OS X, Linux and MS Windows.
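Underneath the UIMA wrapper, the Annotator is an HTTP service. A minimal sketch of calling it directly; the URL reflects the current BioPortal REST API rather than the endpoint the wrapper targeted at the time, and the API key is a placeholder.

```python
# Minimal sketch of calling the NCBO Annotator REST service directly.
# The endpoint reflects the current BioPortal API; the API key is a placeholder.
import requests

resp = requests.get("https://data.bioontology.org/annotator",
                    params={"text": "Melanoma is a malignant tumor of melanocytes",
                            "apikey": "YOUR_BIOPORTAL_API_KEY"},
                    timeout=60)
resp.raise_for_status()
for annotation in resp.json():
    print(annotation["annotatedClass"]["@id"])  # ontology class matched in the text
```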
Comprehensive multiplatform collaboration
NASA Astrophysics Data System (ADS)
Singh, Kundan; Wu, Xiaotao; Lennox, Jonathan; Schulzrinne, Henning G.
2003-12-01
We describe the architecture and implementation of our comprehensive multi-platform collaboration framework known as Columbia InterNet Extensible Multimedia Architecture (CINEMA). It provides a distributed architecture for collaboration using synchronous communications like multimedia conferencing, instant messaging, shared web-browsing, and asynchronous communications like discussion forums, shared files, voice and video mails. It allows seamless integration with various communication means like telephones, IP phones, web and electronic mail. In addition, it provides value-added services such as call handling based on location information and presence status. The paper discusses the media services needed for collaborative environment, the components provided by CINEMA and the interaction among those components.
Virtual Sensor Web Architecture
NASA Astrophysics Data System (ADS)
Bose, P.; Zimdars, A.; Hurlburt, N.; Doug, S.
2006-12-01
NASA envisions the development of smart sensor webs, an intelligent and integrated observation network that harnesses distributed sensing assets, their associated continuous and complex data sets, and predictive observation processing mechanisms for timely, collaborative hazard mitigation and enhanced science productivity and reliability. This paper presents the Virtual Sensor Web Infrastructure for Collaborative Science (VSICS) Architecture for sustained coordination of (numerical and distributed) model-based processing, closed-loop resource allocation, and observation planning. VSICS's key ideas include i) rich descriptions of sensors as services based on semantic markup languages like OWL and SensorML; ii) service-oriented workflow composition and repair for simple and ensemble models; iii) event-driven workflow execution based on event-based and distributed workflow management mechanisms; and iv) development of autonomous model interaction management capabilities providing closed-loop control of collection resources driven by competing targeted observation needs. We present results from initial work on collaborative science processing involving distributed services (COSEC framework) that is being extended to create VSICS.
Deploying and sharing U-Compare workflows as web services.
Kontonatsios, Georgios; Korkontzelos, Ioannis; Kolluru, Balakrishna; Thompson, Paul; Ananiadou, Sophia
2013-02-18
U-Compare is a text mining platform that allows the construction, evaluation and comparison of text mining workflows. U-Compare contains a large library of components that are tuned to the biomedical domain. Users can rapidly develop biomedical text mining workflows by mixing and matching U-Compare's components. Workflows developed using U-Compare can be exported and sent to other users who, in turn, can import and re-use them. However, the resulting workflows are standalone applications, i.e., software tools that run and are accessible only via a local machine, and that can only be run with the U-Compare platform. We address the above issues by extending U-Compare to convert standalone workflows into web services automatically, via a two-click process. The resulting web services can be registered on a central server and made publicly available. Alternatively, users can make web services available on their own servers, after installing the web application framework, which is part of the extension to U-Compare. We have performed a user-oriented evaluation of the proposed extension, by asking users who have tested the enhanced functionality of U-Compare to complete questionnaires that assess its functionality, reliability, usability, efficiency and maintainability. The results obtained reveal that the new functionality is well received by users. The web services produced by U-Compare are built on top of open standards, i.e., REST and SOAP protocols, and therefore, they are decoupled from the underlying platform. Exported workflows can be integrated with any application that supports these open standards. We demonstrate how the newly extended U-Compare enhances the cross-platform interoperability of workflows, by seamlessly importing a number of text mining workflow web services exported from U-Compare into Taverna, i.e., a generic scientific workflow construction platform.
Planning and Evaluation of New Academic Library Services by Means of Web-Based Conjoint Analysis
ERIC Educational Resources Information Center
Decker, Reinhold; Hermelbracht, Antonia
2006-01-01
New product development is an omnipresent challenge to modern libraries in the information age. Therefore, we present the design and selected results of a comprehensive research project aiming at the systematic and user-oriented planning of academic library services by means of conjoint analysis. The applicability of the analytical framework used…
Scientific & Intelligence Exascale Visualization Analysis System
DOE Office of Scientific and Technical Information (OSTI.GOV)
Money, James H.
SIEVAS provides an immersive visualization framework for connecting multiple systems in real time for data science. SIEVAS provides the ability to connect multiple COTS and GOTS products in a seamless fashion for data fusion, data analysis, and viewing. It provides this capability by using a combination of microservices, real-time messaging, and a web-service-compliant back-end system.
MALINA: a web service for visual analytics of human gut microbiota whole-genome metagenomic reads.
Tyakht, Alexander V; Popenko, Anna S; Belenikin, Maxim S; Altukhov, Ilya A; Pavlenko, Alexander V; Kostryukova, Elena S; Selezneva, Oksana V; Larin, Andrei K; Karpova, Irina Y; Alexeev, Dmitry G
2012-12-07
MALINA is a web service for bioinformatic analysis of whole-genome metagenomic data obtained from human gut microbiota sequencing. As input data, it accepts metagenomic reads of various sequencing technologies, including long reads (such as Sanger and 454 sequencing) and next-generation reads (including SOLiD and Illumina). It is the first metagenomic web service that is capable of processing SOLiD color-space reads, to the authors' knowledge. The web service allows phylogenetic and functional profiling of metagenomic samples using coverage depth resulting from the alignment of the reads to the catalogue of reference sequences which are built into the pipeline and contain prevalent microbial genomes and genes of human gut microbiota. The obtained metagenomic composition vectors are processed by the statistical analysis and visualization module containing methods for clustering, dimension reduction and group comparison. Additionally, the MALINA database includes vectors of bacterial and functional composition for human gut microbiota samples from a large number of existing studies, allowing their comparative analysis together with user samples, namely datasets from the Russian Metagenome project, MetaHIT and the Human Microbiome Project (downloaded from http://hmpdacc.org). MALINA is made freely available on the web at http://malina.metagenome.ru. The website is implemented in JavaScript (using Ext JS), Microsoft .NET Framework, MS SQL and Python, with all major browsers supported.
Emerging Technologies: Web 2.0.
Holt, Claire
2011-03-01
Web 2.0 has brought a change to how we communicate and disseminate information with the use of Twitter, Facebook, YouTube, instant messaging and blogging. This technology is beginning to be used in the health field for public awareness campaigns, emergency health alerts, medical education and remote healthcare services. Australian Health Information Managers will be called upon to reconcile their organisations' policies and procedures regarding the use of Web 2.0 technologies within the existing legal framework of privacy, confidentiality and consent. This article explores various applications of Web 2.0, their benefits and some of their potential legal and ethical implications when implemented in Australia.
The CARMEN software as a service infrastructure.
Weeks, Michael; Jessop, Mark; Fletcher, Martyn; Hodge, Victoria; Jackson, Tom; Austin, Jim
2013-01-28
The CARMEN platform allows neuroscientists to share data, metadata, services and workflows, and to execute these services and workflows remotely via a Web portal. This paper describes how we implemented a service-based infrastructure into the CARMEN Virtual Laboratory. A Software as a Service framework was developed to allow generic new and legacy code to be deployed as services on a heterogeneous execution framework. Users can submit analysis code typically written in Matlab, Python, C/C++ and R as non-interactive standalone command-line applications and wrap them as services in a form suitable for deployment on the platform. The CARMEN Service Builder tool enables neuroscientists to quickly wrap their analysis software for deployment to the CARMEN platform, as a service without knowledge of the service framework or the CARMEN system. A metadata schema describes each service in terms of both system and user requirements. The search functionality allows services to be quickly discovered from the many services available. Within the platform, services may be combined into more complicated analyses using the workflow tool. CARMEN and the service infrastructure are targeted towards the neuroscience community; however, it is a generic platform, and can be targeted towards any discipline.
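The Service Builder pattern of wrapping a non-interactive command-line analysis behind a web endpoint can be illustrated in a few lines. This is a minimal sketch, not the CARMEN implementation; the tool name and request fields are assumptions.

```python
# Minimal sketch of the software-as-a-service idea: exposing a standalone
# command-line analysis as a web service; tool name and fields are assumed.
import subprocess
from flask import Flask, request, jsonify

app = Flask(__name__)

@app.route("/run", methods=["POST"])
def run_analysis():
    data_path = request.json["input"]                   # path to uploaded data (assumed)
    proc = subprocess.run(["spike_sorter", data_path],  # hypothetical CLI analysis tool
                          capture_output=True, text=True)
    return jsonify(status=proc.returncode, output=proc.stdout)

if __name__ == "__main__":
    app.run(port=8080)
```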
Troshin, Peter V; Procter, James B; Sherstnev, Alexander; Barton, Daniel L; Madeira, Fábio; Barton, Geoffrey J
2018-06-01
JABAWS 2.2 is a computational framework that simplifies the deployment of web services for Bioinformatics. In addition to the five multiple sequence alignment (MSA) algorithms in JABAWS 1.0, JABAWS 2.2 includes three additional MSA programs (Clustal Omega, MSAprobs, GLprobs), four protein disorder prediction methods (DisEMBL, IUPred, Ronn, GlobPlot), 18 measures of protein conservation as implemented in AACon, and RNA secondary structure prediction by the RNAalifold program. JABAWS 2.2 can be deployed on a variety of in-house or hosted systems. JABAWS 2.2 web services may be accessed from the Jalview multiple sequence analysis workbench (Version 2.8 and later), as well as directly via the JABAWS command line interface (CLI) client. JABAWS 2.2 can be deployed on a local virtual server as a Virtual Appliance (VA) or simply as a Web Application Archive (WAR) for private use. Improvements in JABAWS 2.2 also include simplified installation and a range of utility tools for usage statistics collection, and web services querying and monitoring. The JABAWS CLI client has been updated to support all the new services and allow integration of JABAWS 2.2 services into conventional scripts. A public JABAWS 2 server has been in production since December 2011 and served over 800 000 analyses for users worldwide. JABAWS 2.2 is made freely available under the Apache 2 license and can be obtained from: http://www.compbio.dundee.ac.uk/jabaws. g.j.barton@dundee.ac.uk.
Use of Open Standards and Technologies at the Lunar Mapping and Modeling Project
NASA Astrophysics Data System (ADS)
Law, E.; Malhotra, S.; Bui, B.; Chang, G.; Goodale, C. E.; Ramirez, P.; Kim, R. M.; Sadaqathulla, S.; Rodriguez, L.
2011-12-01
The Lunar Mapping and Modeling Project (LMMP), led by the Marshall Space Flight Center (MSFC), is tasked by NASA. The project is responsible for the development of an information system to support lunar exploration activities. It provides lunar explorers a set of tools and lunar map and model products that are predominantly derived from present lunar missions (e.g., the Lunar Reconnaissance Orbiter (LRO)) and from historical missions (e.g., Apollo). At Jet Propulsion Laboratory (JPL), we have built the LMMP interoperable geospatial information system's underlying infrastructure and a single point of entry - the LMMP Portal - by employing a number of open standards and technologies. The Portal exposes a set of services to users to allow search, visualization, subset, and download of lunar data managed by the system. Users also have access to a set of tools that visualize, analyze and annotate the data. The infrastructure and Portal are based on web service oriented architecture. We designed the system to support solar system bodies in general, including asteroids, earth and planets. We employed a combination of custom software, commercial and open-source components, off-the-shelf hardware and pay-by-use cloud computing services. The use of open standards and web service interfaces facilitates platform- and application-independent access to the services and data, offering, for instance, iPad and Android mobile applications and large screen multi-touch with 3-D terrain viewing functions, for a rich browsing and analysis experience from a variety of platforms. The web services made use of open standards including: Representational State Transfer (REST); and Open Geospatial Consortium (OGC)'s Web Map Service (WMS), Web Coverage Service (WCS), Web Feature Service (WFS). Its data management services have been built on top of a set of open technologies including: Object Oriented Data Technology (OODT) - open source data catalog, archive, file management, data grid framework; openSSO - open source access management and federation platform; solr - open source enterprise search platform; redmine - open source project collaboration and management framework; GDAL - open source geospatial data abstraction library; and others. Its data products are compliant with the Federal Geographic Data Committee (FGDC) metadata standard. This standardization allows users to access the data products via custom written applications or off-the-shelf applications such as Google Earth. We will demonstrate this ready-to-use system for data discovery and visualization by walking through the data services provided through the portal such as browse, search, and other tools. We will further demonstrate image viewing and layering of lunar map images from the Internet, via mobile devices such as Apple's iPad.
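Since the portal's map products are exposed through OGC WMS, any standards-aware client can retrieve a rendered map with a GetMap request. A minimal sketch; the endpoint and layer name are placeholder assumptions, while the query parameters are standard WMS 1.1.1.

```python
# Minimal sketch of a standard OGC WMS GetMap request against an LMMP-style
# endpoint; the URL and layer name are placeholder assumptions.
import requests

params = {"service": "WMS", "version": "1.1.1", "request": "GetMap",
          "layers": "LRO_WAC_Mosaic",            # assumed layer name
          "srs": "EPSG:4326", "bbox": "-180,-90,180,90",
          "width": 1024, "height": 512, "format": "image/png"}
png = requests.get("https://lmmp.example.nasa.gov/wms", params=params, timeout=120)
png.raise_for_status()
with open("moon.png", "wb") as f:
    f.write(png.content)
```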
A Mediator-Based Approach to Resolving Interface Heterogeneity of Web Services
NASA Astrophysics Data System (ADS)
Leitner, Philipp; Rosenberg, Florian; Michlmayr, Anton; Huber, Andreas; Dustdar, Schahram
In theory, service-oriented architectures are based on the idea of increasing flexibility in the selection of internal and external business partners using loosely-coupled services. In practice, however, this flexibility is limited by the fact that partners need not only to provide the same service, but to do so via virtually the same interface in order to be easily interchangeable. Invocation-level mediation may be used to overcome this issue: by using mediation, interface differences can be resolved transparently at runtime. In this chapter we discuss the basic ideas of mediation, with a focus on interface-level mediation. We show how interface mediation is integrated into our dynamic Web service invocation framework DAIOS, and present three different mediation strategies: one based on structural message similarity, one based on semantically annotated WSDL, and one embedded into the VRESCo SOA runtime, a larger research project with explicit support for service mediation.
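To make the idea concrete, the sketch below shows invocation-level mediation in its simplest structural form: a mediator renames the fields of a client message to match whichever provider interface is targeted, so client code never changes. This is an illustration of the general technique only, with hypothetical field mappings, not the DAIOS implementation.

```python
# Illustrative sketch of invocation-level interface mediation (not DAIOS):
# a mediator translates a client message into the structure each provider
# expects, so providers stay interchangeable.
from typing import Callable

# Hypothetical field mappings, e.g. derived from structural message similarity.
MAPPINGS = {
    "provider_a": {"customer": "clientName", "zip": "postalCode"},
    "provider_b": {"customer": "name", "zip": "zip_code"},
}

def mediate(message: dict, provider: str) -> dict:
    """Rename client-side fields to the provider's interface at runtime."""
    mapping = MAPPINGS[provider]
    return {mapping.get(k, k): v for k, v in message.items()}

def invoke(provider: str, call: Callable[[dict], dict], message: dict) -> dict:
    """Mediate the message, then invoke the provider-specific call."""
    return call(mediate(message, provider))

# The same client message works against either provider interface:
msg = {"customer": "ACME", "zip": "1040"}
print(mediate(msg, "provider_a"))  # {'clientName': 'ACME', 'postalCode': '1040'}
print(mediate(msg, "provider_b"))  # {'name': 'ACME', 'zip_code': '1040'}
```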
NASA Astrophysics Data System (ADS)
Tisdale, M.
2016-12-01
NASA's Atmospheric Science Data Center (ASDC) is operationally using the Esri ArcGIS Platform to improve data discoverability, accessibility and interoperability, meeting requirements driven by diverse government, private, public and academic communities. The ASDC is actively working to provide its mission-essential datasets as ArcGIS Image Services, Open Geospatial Consortium (OGC) Web Mapping Services (WMS) and OGC Web Coverage Services (WCS), leveraging the ArcGIS multidimensional mosaic dataset structure. Science teams and the ASDC are utilizing these services, developing applications with the Web AppBuilder for ArcGIS and the ArcGIS API for JavaScript, and evaluating the restructuring of their data production and access scripts within the ArcGIS Python Toolbox framework and Geoprocessing service environment. These capabilities yield greater usage and exposure of ASDC data holdings and provide improved geospatial analytical tools for mission-critical understanding of the Earth's radiation budget, clouds, aerosols, and tropospheric chemistry.
Modern Data Center Services Supporting Science
NASA Astrophysics Data System (ADS)
Varner, J. D.; Cartwright, J.; McLean, S. J.; Boucher, J.; Neufeld, D.; LaRocque, J.; Fischman, D.; McQuinn, E.; Fugett, C.
2011-12-01
The National Oceanic and Atmospheric Administration's National Geophysical Data Center (NGDC) World Data Center for Geophysics and Marine Geology provides scientific stewardship, products and services for geophysical data, including bathymetry, gravity, magnetics, seismic reflection, data derived from sediment and rock samples, as well as historical natural hazards data (tsunamis, earthquakes, and volcanoes). Although NGDC has long made many of its datasets available through map and other web services, it has now developed a second generation of services to improve the discovery of and access to data. These new services use off-the-shelf commercial and open-source software, and take advantage of modern JavaScript and web application frameworks. Services are accessible using both RESTful and SOAP queries as well as Open Geospatial Consortium (OGC) standard protocols such as WMS, WFS, WCS, and KML. These new map services (implemented using ESRI ArcGIS Server) are finer-grained than their predecessors, feature improved cartography, and offer dramatic speed improvements through the use of map caches. Using standards-based interfaces allows customers to incorporate the services without having to coordinate with the provider. Providing fine-grained services increases flexibility for customers building custom applications. The Integrated Ocean and Coastal Mapping program and the Coastal and Marine Spatial Planning program are two examples of national initiatives that require common data inventories from multiple sources and benefit from these modern data services. NGDC is also consuming its own services, providing a set of new browser-based mapping applications which allow the user to quickly visualize and search for data. One example is a new interactive mapping application to search and display information about historical natural hazards. NGDC continues to increase the amount of its data holdings that are accessible and is augmenting these capabilities with modern web technologies such as Groovy and Grails. Data discovery is being improved and simplified by leveraging ISO metadata standards along with the ESRI Geoportal Server.
Payao: a community platform for SBML pathway model curation
Matsuoka, Yukiko; Ghosh, Samik; Kikuchi, Norihiro; Kitano, Hiroaki
2010-01-01
Summary: Payao is a community-based, collaborative web service platform for gene-regulatory and biochemical pathway model curation. The system combines Web 2.0 technologies and online model visualization functions to enable a collaborative community to annotate and curate biological models. Payao reads models in Systems Biology Markup Language format, displays them with CellDesigner, a process diagram editor which complies with the Systems Biology Graphical Notation, and provides an interface for model enrichment (adding tags and comments to the models) for access-controlled community members. Availability and implementation: Freely available for model curation service at http://www.payaologue.org. Web site implemented in the Seasar Framework 2.0 with S2Flex2, MySQL 5.0 and Tomcat 5.5, with all major browsers supported. Contact: kitano@sbi.jp PMID:20371497
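Since Payao's interchange format is SBML, any standard SBML library can consume models curated on the platform. The snippet below is a small sketch using the python-libsbml bindings, not Payao's own code; the file name is a hypothetical example.

```python
# Sketch: read an SBML pathway model with python-libsbml and list its species.
import libsbml

doc = libsbml.readSBML("pathway_model.xml")  # hypothetical file name
if doc.getNumErrors() > 0:
    doc.printErrors()  # report parse/consistency problems

model = doc.getModel()
if model is not None:
    print(model.getId(), model.getNumSpecies(), model.getNumReactions())
    for i in range(model.getNumSpecies()):
        species = model.getSpecies(i)
        print(species.getId(), species.getName())
```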
Web-based Traffic Noise Control Support System for Sustainable Transportation
NASA Astrophysics Data System (ADS)
Fan, Lisa; Dai, Liming; Li, Anson
Traffic noise is considered one of the major forms of pollution that will affect our communities in the future. This paper presents a framework for a web-based traffic noise control support system (WTNCSS) for sustainable transportation. WTNCSS provides decision makers, engineers and the public a platform to efficiently access information and effectively make decisions related to traffic control. The system is based on a Service Oriented Architecture (SOA), which takes advantage of the convenience of the World Wide Web together with the XML data format. The whole system is divided into different modules such as the prediction module, the ontology-based expert module and the dynamic online survey module. Each module of the system provides a distinct information service to the decision support center through the HTTP protocol.
User-Centric Secure Cross-Site Interaction Framework for Online Social Networking Services
ERIC Educational Resources Information Center
Ko, Moo Nam
2011-01-01
Social networking services are one of the major technological phenomena of Web 2.0. Hundreds of millions of users post messages, photos, and videos on their profiles and interact with other users, but sharing and interaction are limited to within the same social networking site. Although users can share some content on a social networking site…
Software architecture and design of the web services facilitating climate model diagnostic analysis
NASA Astrophysics Data System (ADS)
Pan, L.; Lee, S.; Zhang, J.; Tang, B.; Zhai, C.; Jiang, J. H.; Wang, W.; Bao, Q.; Qi, M.; Kubar, T. L.; Teixeira, J.
2015-12-01
Climate model diagnostic analysis is a computationally- and data-intensive task because it involves multiple numerical model outputs and satellite observation data that can both be high resolution. We have built an online tool that facilitates this process. The tool is called Climate Model Diagnostic Analyzer (CMDA). It employs web service technology and provides a web-based user interface. The benefits of these choices include: (1) no installation of any software other than a browser, hence platform compatibility; (2) co-location of computation and big data on the server side, with only small results and plots downloaded on the client side, hence high data efficiency; (3) a multi-threaded implementation to achieve parallel performance on multi-core servers; and (4) cloud deployment so that each user has a dedicated virtual machine. In this presentation, we will focus on the computer science aspects of this tool, namely the architectural design, the infrastructure of the web services, the implementation of the web-based user interface, the mechanism of provenance collection, the approach to virtualization, and the Amazon Cloud deployment. As an example, we will describe our methodology to transform an existing science application code into a web service using a Python wrapper interface and Python web service frameworks (i.e., Flask, Gunicorn, and Tornado). Another example is the use of Docker, a light-weight virtualization container, to distribute and deploy CMDA onto an Amazon EC2 instance. Our CMDA tool has been successfully used in the 2014 Summer School hosted by the JPL Center for Climate Science. Students gave positive feedback in general, and we will report their comments. An enhanced version of CMDA with several new features, some requested by the 2014 students, will be used in the 2015 Summer School soon.
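The wrapping approach the authors describe (a Python interface layered over an existing science routine) can be sketched in a few lines of Flask. Everything below is a minimal illustration under assumed names: run_diagnostic stands in for an existing analysis routine, and the route and parameters are hypothetical, not CMDA's actual API.

```python
# Minimal sketch: expose an existing science routine as a web service with Flask.
from flask import Flask, jsonify, request

app = Flask(__name__)

def run_diagnostic(variable: str, start: str, end: str) -> dict:
    """Placeholder standing in for an existing science application routine."""
    return {"variable": variable, "start": start, "end": end, "mean": 0.0}

@app.route("/svc/diagnostic", methods=["GET"])
def diagnostic():
    # Query parameters select the variable and time range to analyze.
    result = run_diagnostic(
        request.args.get("variable", "T2m"),
        request.args.get("start", "2000-01"),
        request.args.get("end", "2005-12"),
    )
    return jsonify(result)

if __name__ == "__main__":
    # The authors mention Gunicorn/Tornado for production; Flask's built-in
    # development server is enough for a local sketch.
    app.run(port=8080)
```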
Efficient Authorization of Rich Presence Using Secure and Composed Web Services
NASA Astrophysics Data System (ADS)
Li, Li; Chou, Wu
This paper presents an extended Role-Based Access Control (RBAC) model for efficient authorization of rich presence using secure web services composed with an abstract presence data model. Following the information symmetry principle, the standard RBAC model is extended to support context sensitive social relations and cascaded authority. In conjunction with the extended RBAC model, we introduce an extensible presence architecture prototype using WS-Security and WS-Eventing to secure rich presence information exchanges based on PKI certificates. Applications and performance measurements of our presence system are presented to show that the proposed RBAC framework for presence and collaboration is well suited for real-time communication and collaboration.
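As a concrete illustration of the extended RBAC idea (access decisions that depend on the watcher's social relation to the presentity and on context), consider the toy policy check below. The relations, attributes and contexts are hypothetical; this is a sketch of the general pattern, not the paper's model or code.

```python
# Toy context-sensitive RBAC check for presence attributes (illustrative only).
from dataclasses import dataclass

@dataclass(frozen=True)
class Permission:
    attribute: str  # e.g. "location", "activity"
    context: str    # e.g. "work_hours", or "any" for unconditional access

# Hypothetical role assignments keyed by social relation to the presentity.
ROLE_PERMISSIONS = {
    "colleague": {Permission("activity", "work_hours")},
    "family":    {Permission("location", "any"), Permission("activity", "any")},
}

def may_view(relation: str, attribute: str, context: str) -> bool:
    """Grant access only if the relation holds a matching, context-valid permission."""
    perms = ROLE_PERMISSIONS.get(relation, set())
    return any(p.attribute == attribute and p.context in (context, "any")
               for p in perms)

print(may_view("colleague", "activity", "work_hours"))  # True
print(may_view("colleague", "location", "work_hours"))  # False
```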
The JANA calibrations and conditions database API
NASA Astrophysics Data System (ADS)
Lawrence, David
2010-04-01
Calibrations and conditions databases can be accessed from within the JANA Event Processing framework through the API defined in its JCalibration base class. The API is designed to support everything from databases to web services to flat files for the backend. A Web Service backend using the gSOAP toolkit has been implemented, which is particularly interesting since it addresses many modern cybersecurity issues including support for SSL. The API allows constants to be retrieved through a single line of C++ code, with most of the context, including the transport mechanism, being implied by the run currently being analyzed and the environment, relieving developers from implementing such details.
NASA Astrophysics Data System (ADS)
Fulker, D. W.; Gallagher, J. H. R.
2015-12-01
OPeNDAP's Hyrax data server is an open-source framework fostering interoperability via easily-deployed Web services. Compatible with the solutions listed in the (PA001) session description (federation, rigid standards and brokering/mediation), the framework can support tight or loose coupling, even with dependence on community-contributed software. Hyrax is a Web-services framework with a middleware-like design and a handler-style architecture that together reduce the interoperability challenge (for N datatypes and M user contexts) to an O(N+M) problem, similar to brokering. Combined with an open-source ethos, this reduction makes Hyrax a community tool for gaining interoperability. For example, in its response to the Big Earth Data Initiative (BEDI), NASA references OPeNDAP-based interoperability. Assuming its suitability, the question becomes: how sustainable is OPeNDAP, a small not-for-profit that produces open-source software, i.e., has no software sales? In other words, if geoscience interoperability depends on OPeNDAP and similar organizations, are those entities in turn sustainable? Jim Collins (in Good to Great) highlights three questions that successful companies can answer (paraphrased here): What is your passion? Where is your world-class excellence? What drives your economic engine? We attempt to shed light on OPeNDAP sustainability by examining these. Passion: OPeNDAP has a focused passion for improving the effectiveness of scientific data sharing and use, as deeply-cooperative community endeavors. Excellence: OPeNDAP has few peers in remote, scientific data access. Skills include computer science with experience in data science, (operational, secure) Web services, and software design (for servers and clients, where the latter vary from Web pages to standalone apps and end-user programs). Economic Engine: OPeNDAP is an engineering services organization more than a product company, despite software being key to OPeNDAP's reputation. In essence, the provision of engineering expertise, via contracts and grants, is the economic engine. Hence sustainability, as needed to address global grand challenges in geoscience, depends on agencies' and others' abilities and willingness to offer grants and let contracts for continually upgrading open-source software from OPeNDAP and others.
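The practical payoff of the handler architecture is that any DAP-speaking client can talk to a Hyrax server. As a sketch, the pydap client below opens a remote dataset and subsets a variable server-side; the dataset URL and variable name are hypothetical stand-ins for a real Hyrax endpoint.

```python
# Sketch: open a remote DAP dataset with pydap and subset it server-side.
from pydap.client import open_url

# Hypothetical Hyrax endpoint and variable name.
dataset = open_url("http://example.org/opendap/hyrax/sst.nc")
print(list(dataset.keys()))     # variables exposed by the server

sst = dataset["sst"]
print(sst.shape)
# Slicing expresses a DAP constraint, so only the subset crosses the network.
subset = sst[0, 10:20, 10:20]
```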
NASA Astrophysics Data System (ADS)
Hagemeier-Klose, M.; Wagner, K.
2009-04-01
Flood risk communication with the general public and the population at risk is becoming increasingly important for flood risk management, especially as a precautionary measure. This is also underlined by the EU Flood Directive. The responsible authorities therefore have to develop adjusted information tools which meet the demands of different user groups. This article presents the formative evaluation of flood hazard maps and web mapping services according to the specific requirements and needs of the general public, using the dynamic-transactional approach as a theoretical framework. The evaluation was done with a mixture of different methods: an analysis of existing tools, a creative workshop with experts and laymen, and an online survey. Currently existing flood hazard maps, web mapping services and web GIS still lack a good balance between simplicity and complexity, with adequate readability and usability for the public. Well-designed and associative maps (e.g. using blue colours for water depths), which can be compared with past local flood events and which can create empathy in viewers, can help to raise awareness, heighten the activity and knowledge level, or lead to further information seeking. Concerning web mapping services, a linkage between general flood information, such as flood extents of different scenarios and corresponding water depths, and real-time information, such as gauge levels, is an important demand of users. Gauge levels of these scenarios are easier to understand than the scientifically correct return periods or annualities. The recently developed Bavarian web mapping service tries to integrate these requirements.
Leveraging Open Standard Interfaces in Accessing and Processing NASA Data Model Outputs
NASA Astrophysics Data System (ADS)
Falke, S. R.; Alameh, N. S.; Hoijarvi, K.; de La Beaujardiere, J.; Bambacus, M. J.
2006-12-01
An objective of NASA's Earth Science Division is to develop advanced information technologies for processing, archiving, accessing, visualizing, and communicating Earth Science data. To this end, NASA and other federal agencies have collaborated with the Open Geospatial Consortium (OGC) to research, develop, and test interoperability specifications within projects and testbeds benefiting the government, industry, and the public. This paper summarizes the results of a recent effort under the auspices of the OGC Web Services testbed phase 4 (OWS-4) to explore standardization approaches for accessing and processing the outputs of NASA models of physical phenomena. Within the OWS-4 context, experiments were designed to leverage the emerging OGC Web Processing Service (WPS) and Web Coverage Service (WCS) specifications to access, filter and manipulate the outputs of the NASA Goddard Earth Observing System (GEOS) and Goddard Chemistry Aerosol Radiation and Transport (GOCART) forecast models. In OWS-4, the intent is to provide users with more control over the subsets of data that they can extract from the model results, as well as over the final portrayal of that data. To meet that goal, experiments were designed to test the suitability of OGC's Web Processing Service (WPS) and Web Coverage Service (WCS) for filtering, processing and portraying the model results (including slices by height or by time), and to identify any enhancements to the specifications needed to meet the desired objectives. This paper summarizes the findings of the experiments, highlighting the value of the Web Processing Service in providing standard interfaces for accessing and manipulating model data within spatial and temporal frameworks. The paper also points out the key shortcomings of the WPS, especially in comparison with a SOAP/WSDL approach towards solving the same problem.
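The WPS Execute operation that such experiments exercise can be invoked with a plain key-value-pair GET request, sketched below. The endpoint, process identifier and inputs are hypothetical placeholders, not the actual OWS-4 services.

```python
# Sketch of an OGC WPS 1.0.0 Execute request via key-value-pair GET.
import requests

WPS_URL = "https://example.gov/wps"  # hypothetical endpoint

params = {
    "service": "WPS",
    "version": "1.0.0",
    "request": "Execute",
    "Identifier": "SliceModelOutput",  # hypothetical process name
    # WPS 1.0.0 KVP syntax: semicolon-separated input=value pairs.
    "DataInputs": "field=GOCART_aerosol;time=2006-10-01T00:00:00Z;level=500",
}

r = requests.get(WPS_URL, params=params, timeout=60)
r.raise_for_status()
print(r.text[:500])  # ExecuteResponse XML with status and output references
```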
Semantically-enabled sensor plug & play for the sensor web.
Bröring, Arne; Maúe, Patrick; Janowicz, Krzysztof; Nüst, Daniel; Malewski, Christian
2011-01-01
Environmental sensors have continuously improved by becoming smaller, cheaper, and more intelligent over the past years. As a consequence of these technological advancements, sensors are increasingly deployed to monitor our environment. The large variety of available sensor types with often incompatible protocols complicates the integration of sensors into observing systems. The standardized Web service interfaces and data encodings defined within OGC's Sensor Web Enablement (SWE) framework make sensors available over the Web and hide the heterogeneous sensor protocols from applications. So far, the SWE framework does not describe how to integrate sensors on-the-fly with minimal human intervention. The driver software which enables access to sensors has to be implemented, and the measured sensor data has to be manually mapped to the SWE models. In this article we introduce a Sensor Plug & Play infrastructure for the Sensor Web by combining (1) semantic matchmaking functionality, (2) a publish/subscribe mechanism underlying the Sensor Web, and (3) a model for the declarative description of sensor interfaces which serves as a generic driver mechanism. We implement and evaluate our approach by applying it to an oil spill scenario. The matchmaking is realized using existing ontologies and reasoning engines and provides a strong case for the semantic integration capabilities provided by Semantic Web research. PMID:22164033
Eckart, J Dana; Sobral, Bruno W S
2003-01-01
The emergent needs of the bioinformatics community challenge current information systems. The pace of biological data generation far outstrips Moore's Law. Therefore, a gap continues to widen between the capability to produce biological (molecular and cell) data sets and the capability to manage and analyze these data sets. As a result, Federal investments in large data set generation produce diminishing returns in terms of the community's capability to understand biology and leverage that understanding to make scientific and technological advances that improve society. We are building an open framework to address various data management issues including data and tool interoperability, nomenclature and data communication standardization, and database integration. PathPort, short for Pathogen Portal, employs a generic, web-services based framework to deal with some of the problems identified by the bioinformatics community. The motivating research goal of a scalable system to provide data management and analysis for key pathosystems, especially relating to molecular data, has resulted in a generic framework using two major components. On the server side, we employ web services. On the client side, a Java application called ToolBus acts as a client-side "bus" for contacting data and tools and viewing results through a single, consistent user interface.
Cloud Based Web 3d GIS Taiwan Platform
NASA Astrophysics Data System (ADS)
Tsai, W.-F.; Chang, J.-Y.; Yan, S. Y.; Chen, B.
2011-09-01
This article presents the status of the web 3D GIS platform which has been developed at the National Applied Research Laboratories. The purpose is to develop a global earth observation 3D GIS platform for applications to disaster monitoring and assessment in Taiwan. For quick response to preliminary and detailed assessment after a natural disaster occurs, the web 3D GIS platform is useful for accessing, transferring, integrating, displaying and analyzing huge multi-scale data sets following the international OGC standards. The framework of the cloud service for data warehouse management and efficiency enhancement using VMware is also illustrated in this article.
Computers in Small Libraries: Learning Server-Side Scripting
ERIC Educational Resources Information Center
Roberts, Gary
2005-01-01
In this column, the author compares and contrasts the most popular scripting languages that are used to create truly dynamic service-oriented Web sites, building a conceptual framework that can be used as a starting point for specific server-side library projects.
COEUS: “semantic web in a box” for biomedical applications
Lopes, Pedro; Oliveira, José Luís
2012-01-01
Background As the “omics” revolution unfolds, the growth in data quantity and diversity is bringing about the need for pioneering bioinformatics software, capable of significantly improving the research workflow. To cope with these computer science demands, biomedical software engineers are adopting emerging semantic web technologies that better suit the life sciences domain. The latter’s complex relationships are easily mapped into semantic web graphs, enabling a superior understanding of collected knowledge. Despite increased awareness of semantic web technologies in bioinformatics, their use is still limited. Results COEUS is a new semantic web framework, aiming at a streamlined application development cycle and following a “semantic web in a box” approach. The framework provides a single package including advanced data integration and triplification tools, base ontologies, a web-oriented engine and a flexible exploration API. Resources can be integrated from heterogeneous sources, including CSV and XML files or SQL and SPARQL query results, and mapped directly to one or more ontologies. Advanced interoperability features include REST services, a SPARQL endpoint and LinkedData publication. These enable the creation of multiple applications for web, desktop or mobile environments, and empower a new knowledge federation layer. Conclusions The platform, targeted at biomedical application developers, provides a complete skeleton ready for rapid application deployment, enhancing the creation of new semantic information systems. COEUS is available as open source at http://bioinformatics.ua.pt/coeus/. PMID:23244467
COEUS: "semantic web in a box" for biomedical applications.
Lopes, Pedro; Oliveira, José Luís
2012-12-17
As the "omics" revolution unfolds, the growth in data quantity and diversity is bringing about the need for pioneering bioinformatics software, capable of significantly improving the research workflow. To cope with these computer science demands, biomedical software engineers are adopting emerging semantic web technologies that better suit the life sciences domain. The latter's complex relationships are easily mapped into semantic web graphs, enabling a superior understanding of collected knowledge. Despite increased awareness of semantic web technologies in bioinformatics, their use is still limited. COEUS is a new semantic web framework, aiming at a streamlined application development cycle and following a "semantic web in a box" approach. The framework provides a single package including advanced data integration and triplification tools, base ontologies, a web-oriented engine and a flexible exploration API. Resources can be integrated from heterogeneous sources, including CSV and XML files or SQL and SPARQL query results, and mapped directly to one or more ontologies. Advanced interoperability features include REST services, a SPARQL endpoint and LinkedData publication. These enable the creation of multiple applications for web, desktop or mobile environments, and empower a new knowledge federation layer. The platform, targeted at biomedical application developers, provides a complete skeleton ready for rapid application deployment, enhancing the creation of new semantic information systems. COEUS is available as open source at http://bioinformatics.ua.pt/coeus/.
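Because COEUS exposes a SPARQL endpoint, downstream applications can federate its knowledge with a generic client. The sketch below uses the SPARQLWrapper library against a hypothetical deployment URL; the query is a generic label listing, not a COEUS-specific vocabulary.

```python
# Sketch: query a (hypothetical) COEUS SPARQL endpoint with SPARQLWrapper.
from SPARQLWrapper import SPARQLWrapper, JSON

sparql = SPARQLWrapper("http://example.org/coeus/sparql")  # hypothetical endpoint
sparql.setQuery("""
    PREFIX rdfs: <http://www.w3.org/2000/01/rdf-schema#>
    SELECT ?item ?label WHERE {
        ?item rdfs:label ?label .
    } LIMIT 10
""")
sparql.setReturnFormat(JSON)

results = sparql.query().convert()
for binding in results["results"]["bindings"]:
    print(binding["item"]["value"], binding["label"]["value"])
```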
A Modular Framework for Transforming Structured Data into HTML with Machine-Readable Annotations
NASA Astrophysics Data System (ADS)
Patton, E. W.; West, P.; Rozell, E.; Zheng, J.
2010-12-01
There is a plethora of web-based Content Management Systems (CMS) available for maintaining projects and data, among other content. However, each system varies in its capabilities, and often content is stored separately and accessed via non-uniform web interfaces. Moving from one CMS to another (e.g., MediaWiki to Drupal) can be cumbersome, especially if a large quantity of data must be adapted to the new system. To standardize the creation, display, management, and sharing of project information, we have assembled a framework that uses existing web technologies to transform data provided by any service that supports SPARQL Protocol and RDF Query Language (SPARQL) queries into HTML fragments, allowing it to be embedded in any existing website. The framework utilizes a two-tier XML Stylesheet Transformation (XSLT) that uses existing ontologies (e.g., Friend-of-a-Friend, Dublin Core) to interpret query results and render them as HTML documents. These ontologies can be used in conjunction with custom ontologies suited to individual needs (e.g., domain-specific ontologies for describing data records). Furthermore, this transformation process encodes machine-readable annotations, namely the Resource Description Framework in attributes (RDFa), into the resulting HTML, so that capable parsers and search engines can extract the relationships between entities (e.g., people, organizations, datasets). To facilitate editing of content, the framework provides a web-based form system, mapping each query to a dynamically generated form that can be used to modify and create entities, while keeping the native data store up-to-date. This open framework makes it easy to duplicate data across many different sites, allowing researchers to distribute their data in many different online forums. In this presentation we will outline the structure of queries and the stylesheets used to transform them, followed by a brief walkthrough that follows the data from storage to human- and machine-accessible web pages. We conclude with a discussion on content caching and steps toward performing queries across multiple domains.
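The transformation step (XSLT applied to query results, emitting HTML with RDFa attributes) can be illustrated with a toy example. The stylesheet and input below are deliberately simplified stand-ins, not the framework's actual two-tier stylesheets.

```python
# Toy XSLT transformation with lxml: render simplified query results as HTML
# carrying an RDFa attribute (property="foaf:name").
from lxml import etree

xslt = etree.XML(b"""\
<xsl:stylesheet version="1.0"
    xmlns:xsl="http://www.w3.org/1999/XSL/Transform">
  <xsl:template match="/results">
    <div>
      <xsl:for-each select="result">
        <span property="foaf:name"><xsl:value-of select="name"/></span>
      </xsl:for-each>
    </div>
  </xsl:template>
</xsl:stylesheet>
""")

data = etree.XML(b"<results><result><name>Jane Doe</name></result></results>")
transform = etree.XSLT(xslt)
html_fragment = transform(data)
print(etree.tostring(html_fragment, pretty_print=True).decode())
# -> <div><span property="foaf:name">Jane Doe</span></div>
```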
GEMSS: grid-infrastructure for medical service provision.
Benkner, S; Berti, G; Engelbrecht, G; Fingberg, J; Kohring, G; Middleton, S E; Schmidt, R
2005-01-01
The European GEMSS Project is concerned with the creation of medical Grid service prototypes and their evaluation in a secure service-oriented infrastructure for distributed on-demand supercomputing. Key aspects of the GEMSS Grid middleware include negotiable QoS support for time-critical service provision, flexible support for business models, and security at all levels in order to ensure privacy of patient data as well as compliance with EU law. The GEMSS Grid infrastructure is based on a service-oriented architecture and is being built on top of existing standard Grid and Web technologies. The GEMSS infrastructure offers a generic Grid service provision framework that hides the complexity of transforming existing applications into Grid services. For the development of client-side applications or portals, a pluggable component framework has been developed, providing developers with full control over business processes, service discovery, QoS negotiation, and workflow, while keeping their underlying implementation hidden from view. A first version of the GEMSS Grid infrastructure is operational and has been used for the set-up of a Grid test-bed deploying six medical Grid service prototypes including maxillo-facial surgery simulation, neuro-surgery support, radio-surgery planning, inhaled drug-delivery simulation, cardiovascular simulation and advanced image reconstruction. The GEMSS Grid infrastructure is based on standard Web Services technology with an anticipated future transition path towards the OGSA standard proposed by the Global Grid Forum. GEMSS demonstrates that the Grid can be used to provide medical practitioners and researchers with access to advanced simulation and image processing services for improved preoperative planning and near real-time surgical support.
Shaping the Electronic Library--The UW-Madison Approach.
ERIC Educational Resources Information Center
Dean, Charles W., Ed.; Frazier, Ken; Pope, Nolan F.; Gorman, Peter C.; Dentinger, Sue; Boston, Jeanne; Phillips, Hugh; Daggett, Steven C.; Lundquist, Mitch; McClung, Mark; Riley, Curran; Allan, Craig; Waugh, David
1998-01-01
This special theme section describes the University of Wisconsin-Madison's experience building its Electronic Library. Highlights include integrating resources and services; the administrative framework; the public electronic library, including electronic publishing capability and access to World Wide Web-based and other electronic resources;…
MAGI: a Node.js web service for fast microRNA-Seq analysis in a GPU infrastructure.
Kim, Jihoon; Levy, Eric; Ferbrache, Alex; Stepanowsky, Petra; Farcas, Claudiu; Wang, Shuang; Brunner, Stefan; Bath, Tyler; Wu, Yuan; Ohno-Machado, Lucila
2014-10-01
MAGI is a web service for fast microRNA-Seq data analysis in a graphics processing unit (GPU) infrastructure. Using just a browser, users have access to results as web reports in just a few hours, a >600% end-to-end performance improvement over the state of the art. MAGI's salient features are (i) transfer of large input files in native FASTA with Qualities (FASTQ) format through drag-and-drop operations, (ii) rapid prediction of microRNA target genes leveraging parallel computing with GPU devices, (iii) all-in-one analytics with novel feature extraction, statistical tests for differential expression and diagnostic plot generation for quality control and (iv) interactive visualization and exploration of results in web reports that are readily available for publication. MAGI relies on the Node.js JavaScript framework, along with NVIDIA CUDA C, PHP: Hypertext Preprocessor (PHP), Perl and R. It is freely available at http://magi.ucsd.edu. © The Author 2014. Published by Oxford University Press.
Performance measurement integrated information framework in e-Manufacturing
NASA Astrophysics Data System (ADS)
Teran, Hilaida; Hernandez, Juan Carlos; Vizán, Antonio; Ríos, José
2014-11-01
The implementation of Internet technologies has led to e-Manufacturing technologies becoming more widely used and to the development of tools for compiling, transforming and synchronising manufacturing data through the Web. In this context, a potential area for development is the extension of virtual manufacturing to performance measurement (PM) processes, a critical area for decision making and implementing improvement actions in manufacturing. This paper proposes a PM information framework to integrate decision support systems in e-Manufacturing. Specifically, the proposed framework offers a homogeneous PM information exchange model that can be applied to decision support in an e-Manufacturing environment. Its application improves the interoperability necessary in decision-making data processing tasks. It comprises three sub-systems: a data model, a PM information platform and a PM-Web services architecture. A practical example of data exchange for measurement processes in the area of equipment maintenance is shown to demonstrate the utility of the model.
NASA Astrophysics Data System (ADS)
Arkhipkin, D.; Lauret, J.
2017-10-01
One of the STAR experiment’s modular Messaging Interface and Reliable Architecture framework (MIRA) integration goals is to provide seamless and automatic connections with the existing control systems. After an initial proof of concept and operation of the MIRA system as a parallel data collection system for online use and real-time monitoring, the STAR Software and Computing group is now working on the integration of Experimental Physics and Industrial Control System (EPICS) with MIRA’s interfaces. This integration goals are to allow functional interoperability and, later on, to replace the existing/legacy Detector Control System components at the service level. In this report, we describe the evolutionary integration process and, as an example, will discuss the EPICS Alarm Handler conversion. We review the complete upgrade procedure starting with the integration of EPICS-originated alarm signals propagation into MIRA, followed by the replacement of the existing operator interface based on Motif Editor and Display Manager (MEDM) with modern portable web-based Alarm Handler interface. To achieve this aim, we have built an EPICS-to-MQTT [8] bridging service, and recreated the functionality of the original Alarm Handler using low-latency web messaging technologies. The integration of EPICS alarm handling into our messaging framework allowed STAR to improve the DCS alarm awareness of existing STAR DAQ and RTS services, which use MIRA as a primary source of experiment control information.
GeneFisher-P: variations of GeneFisher as processes in Bio-jETI
Lamprecht, Anna-Lena; Margaria, Tiziana; Steffen, Bernhard; Sczyrba, Alexander; Hartmeier, Sven; Giegerich, Robert
2008-01-01
Background PCR primer design is an everyday but not trivial task requiring state-of-the-art software. We describe the popular tool GeneFisher and explain its recent restructuring using workflow techniques. We apply a service-oriented approach to model and implement GeneFisher-P, a process-based version of the GeneFisher web application, as a part of the Bio-jETI platform for service modeling and execution. We show how to introduce a flexible process layer to meet the growing demand for improved user-friendliness and flexibility. Results Within Bio-jETI, we model the process using the jABC framework, a mature model-driven, service-oriented process definition platform. We encapsulate remote legacy tools and integrate web services using jETI, an extension of the jABC for seamless integration of remote resources as basic services, ready to be used in the process. Some of the basic services used by GeneFisher are in fact already provided as individual web services at BiBiServ and can be directly accessed. Others are legacy programs, and are made available to Bio-jETI via the jETI technology. The full power of service-based process orientation is required when more bioinformatics tools, available as web services or via jETI, lead to easy extensions or variations of the basic process. This concerns, for instance, variations of data retrieval or alignment tools as provided by the European Bioinformatics Institute (EBI). Conclusions The resulting service- and process-oriented GeneFisher-P demonstrates how basic services from heterogeneous sources can be easily orchestrated in the Bio-jETI platform and lead to a flexible family of specialized processes tailored to specific tasks. PMID:18460174
Maity, Maitreya; Dhane, Dhiraj; Mungle, Tushar; Maiti, A K; Chakraborty, Chandan
2017-10-26
Web-enabled e-healthcare systems, or computer-assisted disease diagnosis, have the potential to improve the quality and service of the conventional healthcare delivery approach. The article describes the design and development of a web-based distributed healthcare management system for medical information and for quantitative evaluation of microscopic images using a machine learning approach for malaria. In the proposed study, all the health-care centres are connected in a distributed computer network. Each peripheral centre manages its own health-care service independently and communicates with the central server for remote assistance. The proposed methodology for automated evaluation of parasites includes pre-processing of blood smear microscopic images followed by erythrocyte segmentation. To differentiate between different parasites, a total of 138 quantitative features characterising colour, morphology, and texture are extracted from segmented erythrocytes. An integrated pattern classification framework is designed in which four feature selection methods, viz. Correlation-based Feature Selection (CFS), Chi-square, Information Gain, and RELIEF, are employed with three different classifiers, i.e. Naive Bayes', C4.5, and Instance-Based Learning (IB1), individually. The optimal feature subset with the best classifier is selected for achieving maximum diagnostic precision. The proposed method achieved 99.2% sensitivity and 99.6% specificity by combining CFS and C4.5, outperforming the other method combinations. Moreover, the web-based tool is entirely designed using open standards, like Java for the web application, ImageJ for image processing, and WEKA for data mining, considering its feasibility in rural places with minimal health care facilities.
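The authors' pipeline is built on WEKA, but the feature-selection-plus-classifier pattern they evaluate can be sketched with scikit-learn as an analogue: chi-square univariate selection feeding a decision tree (a close relative of C4.5). The data below are synthetic stand-ins for the 138 extracted features, so the numbers mean nothing; only the structure of the pipeline is the point.

```python
# Analogous sketch (not the authors' WEKA pipeline): chi-square feature
# selection followed by a decision tree classifier, on synthetic data.
import numpy as np
from sklearn.feature_selection import SelectKBest, chi2
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)
X = rng.random((200, 138))    # 138 colour/morphology/texture features (synthetic)
y = rng.integers(0, 2, 200)   # parasite vs. non-parasite labels (synthetic)

# chi2 requires non-negative features, which holds for values in [0, 1).
model = make_pipeline(SelectKBest(chi2, k=30), DecisionTreeClassifier())
scores = cross_val_score(model, X, y, cv=5)
print(scores.mean())
```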
Automated X-ray and Optical Analysis of the Virtual Observatory and Grid Computing
NASA Astrophysics Data System (ADS)
Ptak, A.; Krughoff, S.; Connolly, A.
2011-07-01
We are developing a system to combine the Web Enabled Source Identification with X-Matching (WESIX) web service, which emphasizes source detection on optical images, with the XAssist program that automates the analysis of X-ray data. XAssist is continuously processing archival X-ray data in several pipelines. We have established a workflow in which FITS images and/or (in the case of X-ray data) an X-ray field can be input to WESIX. Intelligent services return available data (if requested fields have been processed) or submit job requests to a queue to be performed asynchronously. These services will be available via web services (for non-interactive use by Virtual Observatory portals and applications) and through web applications (written in the Django web application framework). We are adding web services for specific XAssist functionality such as determining the exposure and limiting flux for a given position on the sky and extracting spectra and images for a given region. We are improving the queuing system in XAssist to allow for "watch lists" to be specified by users, and when X-ray fields in a user's watch list become publicly available they will be automatically added to the queue. XAssist is being expanded to be used as a survey planning tool when coupled with simulation software, including functionality for NuStar, eRosita, IXO, and the Wide-Field Xray Telescope (WFXT), as part of an end-to-end simulation/analysis system. We are also investigating the possibility of a dedicated iPhone/iPad app for querying pipeline data, requesting processing, and administrative job control. This work was funded by AISRP grant NNG06GE59G.
Integration of Grid and Sensor Web for Flood Monitoring and Risk Assessment from Heterogeneous Data
NASA Astrophysics Data System (ADS)
Kussul, Nataliia; Skakun, Sergii; Shelestov, Andrii
2013-04-01
Over the last decades we have witnessed an upward global trend in natural disaster occurrence. Hydrological and meteorological disasters such as floods are the main contributors to this pattern. In recent years flood management has shifted from protection against floods to managing the risks of floods (the European Flood Risk Directive). In order to enable operational flood monitoring and assessment of flood risk, it is required to provide an infrastructure with standardized interfaces and services. Grid and Sensor Web technologies can meet these requirements. In this paper we present a general approach to flood monitoring and risk assessment based on heterogeneous geospatial data acquired from multiple sources. To enable operational flood risk assessment, the integration of Grid and Sensor Web approaches is proposed [1]. The Grid represents a distributed environment that integrates heterogeneous computing and storage resources administered by multiple organizations. The Sensor Web is an emerging paradigm for integrating heterogeneous satellite and in situ sensors and data systems into a common informational infrastructure that produces products on demand. The basic Sensor Web functionality includes sensor discovery, triggering events by observed or predicted conditions, remote data access, and processing capabilities to generate and deliver data products. The Sensor Web is governed by a set of standards, called Sensor Web Enablement (SWE), developed by the Open Geospatial Consortium (OGC). Different practical issues regarding the integration of the Sensor Web with Grids are discussed in the study. We show how the Sensor Web can benefit from using Grids and vice versa. For example, Sensor Web services such as SOS, SPS and SAS can benefit from integration with a Grid platform like the Globus Toolkit. The proposed approach is implemented within a Sensor Web framework for flood monitoring and risk assessment, and a case study of exploiting this framework, namely the Namibia SensorWeb Pilot Project, is described. The project was created as a testbed for evaluating and prototyping key technologies for rapid acquisition and distribution of data products for decision support systems to monitor floods and enable flood risk assessment. The system provides access to real-time products on rainfall estimates and flood potential forecasts derived from the Tropical Rainfall Measuring Mission (TRMM) with a lag time of 6 h, alerts from the Global Disaster Alert and Coordination System (GDACS) with a lag time of 4 h, and the Coupled Routing and Excess STorage (CREST) model to generate alerts. These alerts are used to trigger satellite observations. With the SPS service deployed for NASA's EO-1 satellite, it is possible to automatically task the sensor and re-image an area in less than 8 h. Therefore, with the computational and storage services provided by the Grid and cloud infrastructure, it was possible to generate flood maps within 24-48 h after a trigger was alerted. To enable interoperability between system components and services, OGC-compliant standards are utilized. [1] Hluchy L., Kussul N., Shelestov A., Skakun S., Kravchenko O., Gripich Y., Kopp P., Lupian E., "The Data Fusion Grid Infrastructure: Project Objectives and Achievements," Computing and Informatics, 2010, vol. 29, no. 2, pp. 319-334.
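Pull-based access to such in situ observations goes through the OGC Sensor Observation Service named above. The sketch below issues a SOS 1.0.0 GetObservation request over HTTP key-value pairs; the endpoint, offering and observed property are hypothetical placeholders, not the project's actual services.

```python
# Sketch of an OGC SOS 1.0.0 GetObservation request via key-value-pair GET.
import requests

SOS_URL = "https://example.org/sos"  # hypothetical endpoint

params = {
    "service": "SOS",
    "version": "1.0.0",
    "request": "GetObservation",
    "offering": "RAINFALL",  # hypothetical offering
    "observedProperty": "urn:ogc:def:phenomenon:precipitation",
    "responseFormat": 'text/xml;subtype="om/1.0.0"',
}

r = requests.get(SOS_URL, params=params, timeout=60)
r.raise_for_status()
print(r.text[:400])  # O&M-encoded observations
```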
The Joy of Playing with Oceanographic Data
NASA Astrophysics Data System (ADS)
Smith, A. T.; Xing, Z.; Armstrong, E. M.; Thompson, C. K.; Huang, T.
2013-12-01
The web is no longer just an afterthought. It is no longer just a presentation layer filled with HTML, CSS, JavaScript, frameworks, 3D, and more. It has become the medium of our communication. It is the database of all databases. It is the computing platform of all platforms. It has transformed the way we do science. Web services are the de facto method for communication between machines over the web. Representational State Transfer (REST) has standardized the way we architect services and their interfaces. In the Earth Science domain, we are familiar with tools and services such as the Open-source Project for a Network Data Access Protocol (OPeNDAP), Thematic Realtime Environmental Distributed Data Services (THREDDS), and the Live Access Server (LAS). We are also familiar with various data formats such as NetCDF3/4, HDF4/5, GRIB, TIFF, etc. One of the challenges for the Earth Science community is accessing information within these data. There are community-accepted readers that our users can download and install. However, the Application Programming Interface (API) between these readers is not standardized, which leads to non-portable applications. Webification (w10n) is an emerging technology, developed at the Jet Propulsion Laboratory, which exploits the hierarchical nature of a science data artifact (e.g., a granule file) to assign a URL to each element within the artifact. By embracing standards such as JSON, XML, and HTML5 and predictable URLs, w10n provides a simple interface that enables tool-builders and researchers to develop portable tools and applications to interact with artifacts of various formats. The NASA Physical Oceanography Distributed Active Archive Center (PO.DAAC) is the designated data center for observational products relevant to the physical state of the ocean. Over the past year PO.DAAC has been evaluating w10n technology by webifying its archive holdings to provide simplified access to oceanographic science artifacts and as a service to enable future tools and services development. In this talk, we will focus on a w10n-based system called the Distributed Oceanographic Webification Service (DOWS), being developed at PO.DAAC to provide a newer and simpler method for working with observational data artifacts. As part of a continued effort at PO.DAAC to provide better tools and services to visualize our data, the talk will also discuss the latest in web-based data visualization tools and frameworks (such as d3.js, Three.js, Leaflet.js, and more) and techniques for working with webified oceanographic science data in both 2D and 3D web approaches.
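The core idea of webification is that every element inside a granule gets its own predictable URL. The snippet below illustrates that idea only; the URL pattern and query parameter are hypothetical stand-ins, not the exact w10n grammar or a real PO.DAAC address.

```python
# Illustrative only: fetch metadata for one variable of a webified granule.
import requests

# Hypothetical webified granule and variable path.
base = "https://example.org/w10n/sst_granule.nc/sea_surface_temp"

meta = requests.get(base + "/", params={"output": "json"}, timeout=30)
meta.raise_for_status()
print(meta.json())  # machine-readable description of the variable
```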
Design, Implementation and Applications of 3d Web-Services in DB4GEO
NASA Astrophysics Data System (ADS)
Breunig, M.; Kuper, P. V.; Dittrich, A.; Wild, P.; Butwilowski, E.; Al-Doori, M.
2013-09-01
The object-oriented database architecture DB4GeO was originally designed to support sub-surface applications in the geosciences. This is reflected in DB4GeO's geometric data model as well as in its import and export functions. Initially, these functions were designed for communication with 3D geological modeling and visualization tools such as GOCAD or MeshLab. However, it soon became clear that DB4GeO was suitable for a much wider range of applications. Therefore it is natural to move away from a standalone solution and to open access to DB4GeO data through standardized OGC web services. Though REST and OGC services seem incompatible at first sight, the implementation in DB4GeO shows that an OGC-based implementation of web services may use parts of the DB4GeO REST implementation. Starting with initial solutions in the history of DB4GeO, this paper introduces the design, adaptation (i.e. model transformation), and first steps in the implementation of OGC Web Feature Services (WFS) and Web Processing Services (WPS) as new interfaces to DB4GeO data and operations. Among its capabilities, DB4GeO can provide data in different formats like GML, GOCAD, or DB3D XML through a WFS, and can run operations like a 3D-to-2D service or mesh simplification (Progressive Meshes) through a WPS. We then demonstrate an Android-based mobile 3D augmented reality viewer for DB4GeO that uses the Web Feature Service to visualize 3D geo-database query results. Finally, we explore future research work considering DB4GeO in the framework of the research group "Computer-Aided Collaborative Subway Track Planning in Multi-Scale 3D City and Building Models".
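A WFS interface means standard OGC clients can retrieve DB4GeO data without custom code. Below is a sketch using the OWSLib library; the endpoint URL and feature type name are hypothetical placeholders, not DB4GeO's actual deployment.

```python
# Sketch: fetch features from a (hypothetical) WFS endpoint with OWSLib.
from owslib.wfs import WebFeatureService

wfs = WebFeatureService("https://example.org/db4geo/wfs", version="1.1.0")
print(list(wfs.contents))  # feature types advertised by the server

response = wfs.getfeature(typename=["db4geo:GeologicalSurface"], maxfeatures=5)
data = response.read()
with open("surfaces.gml", "wb") as f:
    # OWSLib may return bytes or str depending on version; normalize to bytes.
    f.write(data if isinstance(data, bytes) else data.encode("utf-8"))
```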
EOforge: Generic Open Framework for Earth Observation Data Processing Systems
2006-09-01
The EOforge framework is designed to: allow the use of existing interfaces, e.g. MUIS, the ESA multimission catalogue for EO products; support the latest EO system technologies, e.g. MASS (Multi-Application Support Service System), the ESA web services technology standard; provide extensibility and configurability to allow customisation and the inclusion of new functionality; and support multi-instrument and multi-mission processing.
2010-08-01
…the public and first responders to access disaster information and services provided by government agencies and non-governmental organizations…thereby reducing the performance gap for a single federal disaster-management site. DMIS provides government and non-governmental organizations…National Incident Management System (NIMS) and National Response Framework (NRF): (a) first responders, (b) local governments and agencies, (c) regional and federal agencies.
NASA Astrophysics Data System (ADS)
Jirka, Simon; del Rio, Joaquin; Toma, Daniel; Martinez, Enoc; Delory, Eric; Pearlman, Jay; Rieke, Matthes; Stasch, Christoph
2017-04-01
With the rapidly evolving technology for building Web-based (spatial) information infrastructures and Sensor Webs, there are new opportunities to improve the process by which ocean data is collected and managed. A central element in this development is the suite of Sensor Web Enablement (SWE) standards specified by the Open Geospatial Consortium (OGC). This framework of standards comprises, on the one hand, data models and formats for measurement data (ISO/OGC Observations and Measurements, O&M) and metadata describing measurement processes and sensors (OGC Sensor Model Language, SensorML). On the other hand, the SWE standards comprise (Web service) interface specifications for pull-based access to observation data (OGC Sensor Observation Service, SOS) and for controlling or configuring sensors (OGC Sensor Planning Service, SPS). Within the European INSPIRE framework, too, the SWE standards play an important role, as the SOS is the recommended download service interface for O&M-encoded observation data sets. In the context of the EU-funded Oceans of Tomorrow initiative, the NeXOS (Next generation, Cost-effective, Compact, Multifunctional Web Enabled Ocean Sensor Systems Empowering Marine, Maritime and Fisheries Management) project is developing a new generation of in-situ sensors that make use of the SWE standards to facilitate the data publication process and the integration into Web-based information infrastructures. This includes the development of a dedicated firmware for instruments and sensor platforms (SEISI, Smart Electronic Interface for Sensors and Instruments) maintained by the Universitat Politècnica de Catalunya (UPC). Among other features, SEISI makes use of OGC SWE standards such as OGC PUCK to enable a plug-and-play mechanism for sensors based on SensorML-encoded metadata. Thus, if a new instrument is attached to a SEISI-based platform, the platform automatically configures the connection to the instrument, automatically generates data files compliant with the ISO/OGC Observations and Measurements standard, and initiates the data transmission into the NeXOS Sensor Web infrastructure. Besides these platform-related developments, NeXOS has realised the full path of data transmission from the sensor to the end user application. The conceptual architecture design is implemented by a series of open-source SWE software packages provided by 52°North. This comprises different SWE server components (i.e. the OGC Sensor Observation Service), tools for data visualisation (e.g. the 52°North Helgoland SOS viewer), and an editor for providing SensorML-based metadata (52°North smle). As a result, NeXOS has demonstrated how the SWE standards help to improve marine observation data collection. Within this presentation, we will present the experiences and findings of the NeXOS project and provide recommendations for future work directions.
ASCOT: A Collaborative Platform for the Virtual Observatory
NASA Astrophysics Data System (ADS)
Marcos, D.; Connolly, A. J.; Krughoff, K. S.; Smith, I.; Wallace, S. C.
2012-09-01
Digital networks are changing the way that knowledge is created, structured, curated, consumed, archived and referenced. Projects like Wikipedia, GitHub or Galaxy Zoo have shown the potential of online communities to develop and communicate ideas. ASCOT is a web-based framework that facilitates collaboration among astronomers, providing a simple way to share, explore, interact with and analyze large amounts of data from a broad range of sources available through the Virtual Observatory (VO). Designed with a strong emphasis on usability, ASCOT takes advantage of the latest generation of web standards and cloud technologies to implement an extendable and customizable stack of web tools and services.
Web-based GIS for spatial pattern detection: application to malaria incidence in Vietnam.
Bui, Thanh Quang; Pham, Hai Minh
2016-01-01
There is great concern about how to build an interoperable health information system spanning public health and health information technology within the development of public information and health surveillance programmes. Technically, some major issues remain regarding health data visualization, spatial processing of health data, health information dissemination, data sharing and the access of local communities to health information. In combination with GIS, we propose a technical framework for web-based health data visualization and spatial analysis. Data were collected from open map servers and geocoded with the Open Data Kit package and data geocoding tools. The web-based system is designed on open-source frameworks and libraries. The system provides a web-based analysis tool for pattern detection through three spatial tests: nearest neighbour, K function, and spatial autocorrelation. The result is a web-based GIS through which end users can detect disease patterns by selecting the area and the spatial test parameters, and contribute the results to managers and decision makers. The end users can be health practitioners, educators, local communities, health sector authorities and decision makers. This web-based system allows for the improvement of health-related services to public sector users as well as citizens in a secure manner. The combination of spatial statistics and web-based GIS can be a solution that helps empower health practitioners in direct and specific intersectional actions, and thus provides for better analysis, control and decision-making.
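Of the three spatial tests, spatial autocorrelation is commonly measured with Moran's I. The sketch below is an analogous computation using the PySAL stack (libpysal and esda) on synthetic data, not the authors' own implementation; the coordinates and case counts are made up.

```python
# Analogous sketch: global spatial autocorrelation (Moran's I) with PySAL.
import numpy as np
from libpysal.weights import KNN
from esda.moran import Moran

rng = np.random.default_rng(1)
coords = rng.random((50, 2))    # health-centre locations (synthetic)
incidence = rng.poisson(5, 50)  # malaria case counts (synthetic)

w = KNN.from_array(coords, k=4)  # spatial weights from nearest neighbours
w.transform = "r"                # row-standardise the weights

mi = Moran(incidence, w)
print(mi.I, mi.p_sim)            # statistic and permutation-based pseudo p-value
```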
An ontological knowledge framework for adaptive medical workflow.
Dang, Jiangbo; Hedayati, Amir; Hampel, Ken; Toklu, Candemir
2008-10-01
As emerging technologies, the semantic Web and SOA (Service-Oriented Architecture) allow a BPMS (Business Process Management System) to automate business processes that can be described as services, which in turn can be used to wrap existing enterprise applications. A BPMS provides tools and methodologies to compose Web services that can be executed as business processes and monitored by BPM (Business Process Management) consoles. Ontologies are a formal, declarative knowledge representation model; they provide a foundation upon which machine-understandable knowledge can be obtained and, as a result, make machine intelligence possible. Healthcare systems can adopt these technologies to become ubiquitous, adaptive, and intelligent, and thereby serve patients better. This paper presents an ontological knowledge framework that covers the healthcare domains a hospital encompasses, from medical and administrative tasks to hospital assets, medical insurance, patient records, drugs, and regulations. Our ontology thus makes our vision of personalized healthcare possible by capturing all the knowledge necessary for a complex personalized healthcare scenario involving patient care, insurance policies, drug prescriptions, and compliance. For example, our ontology enables a workflow management system to allow users, from physicians to administrative assistants, to manage, and even create, context-aware medical workflows and execute them on the fly.
Interacting With A Near Real-Time Urban Digital Watershed Using Emerging Geospatial Web Technologies
NASA Astrophysics Data System (ADS)
Liu, Y.; Fazio, D. J.; Abdelzaher, T.; Minsker, B.
2007-12-01
The value of real-time hydrologic data dissemination, including river stage, streamflow, and precipitation, for operational stormwater management efforts is particularly high for communities where flash flooding is common and costly. Ideally, such data would be presented within a watershed-scale geospatial context to portray a holistic view of the watershed. Local hydrologic sensor networks usually lack comprehensive integration with sensor networks managed by other agencies sharing the same watershed due to administrative, political, but mostly technical barriers. Recent efforts on providing unified access to hydrological data have concentrated on creating new SOAP-based web services and common data formats (e.g. WaterML and the Observations Data Model) for users to access the data (e.g. HIS and HydroSeek). Geospatial Web technology, including OGC Sensor Web Enablement (SWE), GeoRSS, geotags, geospatial browsers such as Google Earth and Microsoft Virtual Earth, and other location-based service tools, makes it possible to interact with a digital watershed in near real time. OGC SWE proposes a revolutionary concept towards web-connected and controllable sensor networks. However, these efforts have not provided the capability for dynamic data integration/fusion among heterogeneous sources, data filtering, or support for workflows or domain-specific applications where both push and pull modes of retrieving data may be needed. We propose a lightweight integration framework that extends SWE with an open source Enterprise Service Bus (e.g., Mule) as a backbone component to dynamically transform, transport, and integrate both heterogeneous sensor data sources and simulation model outputs. We will report our progress on building such a framework, in which multi-agency sensor data and hydro-model outputs (with map layers) will be integrated and disseminated in a geospatial browser (e.g. Microsoft Virtual Earth). This is a collaborative project among NCSA, the USGS Illinois Water Science Center, and the Computer Science Department at UIUC, funded by the Adaptive Environmental Infrastructure Sensing and Information Systems initiative at UIUC.
Kobayashi, Norio; Ishii, Manabu; Takahashi, Satoshi; Mochizuki, Yoshiki; Matsushima, Akihiro; Toyoda, Tetsuro
2011-07-01
Global cloud frameworks for bioinformatics research databases have become huge and heterogeneous; solutions face various diametric challenges comprising cross-integration, retrieval, security and openness. To address this, as of March 2011 organizations including RIKEN published 192 mammalian, plant and protein life sciences databases containing 8.2 million data records, integrated as Linked Open or Private Data (LOD/LPD) using SciNetS.org, the Scientists' Networking System. The huge quantity of linked data covered by this database integration framework is based on the Semantic Web, where researchers collaborate by managing metadata across public and private databases in a secured data space. This outstripped the data query capacity of existing interface tools like SPARQL. Actual research also requires specialized tools for data analysis using raw original data. To solve these challenges, in December 2009 we developed the lightweight Semantic-JSON interface to access each fragment of linked and raw life sciences data securely, under the control of programming languages popularly used by bioinformaticians such as Perl and Ruby. Researchers have successfully used the interface across 28 million semantic relationships for biological applications including genome design, sequence processing, inference over phenotype databases, full-text search indexing and human-readable contents like ontology and LOD tree viewers. Semantic-JSON services of SciNetS.org are provided at http://semanticjson.org.
Semantic Integration for Marine Science Interoperability Using Web Technologies
NASA Astrophysics Data System (ADS)
Rueda, C.; Bermudez, L.; Graybeal, J.; Isenor, A. W.
2008-12-01
The Marine Metadata Interoperability Project, MMI (http://marinemetadata.org) promotes the exchange, integration, and use of marine data through enhanced data publishing, discovery, documentation, and accessibility. A key effort is the definition of an Architectural Framework and Operational Concept for Semantic Interoperability (http://marinemetadata.org/sfc), which is complemented by the development of tools that realize critical use cases in semantic interoperability. In this presentation, we describe a set of such Semantic Web tools that allow users to perform important interoperability tasks, ranging from the creation of controlled vocabularies and the mapping of terms across multiple ontologies, to the online registration, storage, and search services needed to work with the ontologies (http://mmisw.org). This set of services uses Web standards and technologies, including the Resource Description Framework (RDF), the Web Ontology Language (OWL), Web services, and toolkits for Rich Internet Application development. We will describe the following components: MMI Ontology Registry: The MMI Ontology Registry and Repository provides registry and storage services for ontologies. Entries in the registry are associated with projects defined by the registered users. Sophisticated search functions, for example over metadata items and vocabulary terms, are also provided. Client applications can submit search requests using the W3C SPARQL Query Language for RDF. Voc2RDF: This component converts an ASCII comma-delimited set of terms and definitions into an RDF file. Voc2RDF facilitates the creation of controlled vocabularies through a simple form-based user interface. Created vocabularies and their descriptive metadata can be submitted to the MMI Ontology Registry for versioning and community access. VINE: The Vocabulary Integration Environment component allows the user to map vocabulary terms across multiple ontologies. Various relationships can be established, for example exactMatch, narrowerThan, and subClassOf. VINE can compute inferred mappings based on the given associations. Attributes about each mapping, such as comments and a confidence level, can also be included. VINE also supports registering and storing the resulting mapping files in the Ontology Registry. The presentation will describe the application of semantic technologies in general, and our planned applications in particular, to solve data management problems in the marine and environmental sciences.
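Since the registry accepts SPARQL over HTTP, a client-side search might look like the sketch below; the endpoint URL and the assumption that entries carry SKOS labels are illustrative, not the registry's actual schema.

```python
from SPARQLWrapper import SPARQLWrapper, JSON

# Hypothetical SPARQL endpoint of an ontology registry.
sparql = SPARQLWrapper("https://example.org/registry/sparql")

# Find vocabulary terms whose preferred label mentions "salinity".
sparql.setQuery("""
    PREFIX skos: <http://www.w3.org/2004/02/skos/core#>
    SELECT ?term ?label WHERE {
        ?term skos:prefLabel ?label .
        FILTER(CONTAINS(LCASE(STR(?label)), "salinity"))
    } LIMIT 10
""")
sparql.setReturnFormat(JSON)

for row in sparql.query().convert()["results"]["bindings"]:
    print(row["term"]["value"], "-", row["label"]["value"])
```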
Applications of Dynamic Deployment of Services in Industrial Automation
NASA Astrophysics Data System (ADS)
Candido, Gonçalo; Barata, José; Jammes, François; Colombo, Armando W.
Service-Oriented Architecture (SOA) is becoming a de facto paradigm for business and enterprise integration. SOA is expanding into several application domains, envisioning a unified solution suitable across all layers of an enterprise infrastructure. The application of SOA based on open web standards can significantly enhance the interoperability and openness of devices. By embedding a dynamic deployment service even into small field devices, it becomes possible both to let machine builders ship built-in services and to let the integrator deploy, at run time, the services that best fit the current application. This approach allows developers to keep their preferred development language and still deliver a SOA-compliant application. A dynamic deployment service is envisaged as a fundamental framework to support more complex applications, reducing deployment delays while increasing overall system agility. As a use-case scenario, a dynamic deployment service was implemented over the DPWS and WS-Management specifications, allowing an automation application to be designed and programmed using IEC 61131 languages and these components to be deployed as web services into devices.
Currently, many policy and management decisions are made without considering the goods and services humans derive from ecosystems and the costs associated with protecting them. This approach is unlikely to be sustainable. Conceptual frameworks provide a tool for capturing, visual...
75 FR 27118 - Additional Designations, Foreign Narcotics Kingpin Designation Act
Federal Register 2010, 2011, 2012, 2013, 2014
2010-05-13
... OFAC's Web site ( http://www.treas.gov/ofac ) or via facsimile through a 24-hour fax-on demand service... establishes a program targeting the activities of significant foreign narcotics traffickers and their organizations on a worldwide basis. It provides a statutory framework for the President to impose sanctions...
Stakeholders can use the tool to accomplish their sustainability goals by: Understanding interactions and feedback loops within human-environmental systems; Identifying areas of the system not previously considered and avoiding unintended consequences; Identifying metrics, indica...
MDWeb and MDMoby: an integrated web-based platform for molecular dynamics simulations.
Hospital, Adam; Andrio, Pau; Fenollosa, Carles; Cicin-Sain, Damjan; Orozco, Modesto; Gelpí, Josep Lluís
2012-05-01
MDWeb and MDMoby constitute a web-based platform to ease access to molecular dynamics (MD) simulations in both the standard and high-throughput regimes. The platform provides tools to prepare systems from PDB structures, mimicking the procedures followed by human experts. It provides inputs for, and can launch simulations with, three of the most popular MD packages (Amber, NAMD and Gromacs). Tools for the analysis of trajectories, either provided by the user or retrieved from our MoDEL database (http://mmb.pcb.ub.es/MoDEL), are also incorporated. The platform has two modes of access: a programmatically accessible set of web services based on the BioMoby framework (MDMoby), and a web portal (MDWeb). http://mmb.irbbarcelona.org/MDWeb; additional information and methodology details can be found at the web site (http://mmb.irbbarcelona.org/MDWeb/help.php).
Modeling Adaptable Business Service for Enterprise Collaboration
NASA Astrophysics Data System (ADS)
Boukadi, Khouloud; Vincent, Lucien; Burlat, Patrick
Nowadays, Service Oriented Architecture (SOA) seems to be one of the most promising paradigms for leveraging enterprise information systems. SOA creates opportunities for enterprises to provide value-added services tailored for on-demand enterprise collaboration. With the emergence and rapid development of Web services technologies, SOA is receiving increasing attention and has become widespread. In spite of the popularity of SOA, a standardized framework for modeling and implementing business services is still in progress. To support these service-oriented solutions, we adopt a model-driven development approach. This paper outlines the Contextual Service Oriented Modeling and Analysis (CSOMA) methodology and presents UML profiles for PIM-level service-oriented architectural modeling, as well as the corresponding meta-models. The proposed PIM (Platform Independent Model) describes the business SOA at a high level of abstraction, regardless of the techniques involved in its deployment. In addition, all essential service-specific concerns required for delivering quality and context-aware services are covered. Among the advantages of this approach are that it is generic, and thus not closely tied to Web service technology, and that it treats service adaptability explicitly during the design stage.
Integrated web system of geospatial data services for climate research
NASA Astrophysics Data System (ADS)
Okladnikov, Igor; Gordov, Evgeny; Titov, Alexander
2016-04-01
Georeferenced datasets are currently actively used for modeling, interpretation and forecasting of climatic and ecosystem changes on different spatial and temporal scales. Due to the inherent heterogeneity of environmental datasets, as well as their huge size (up to tens of terabytes for a single dataset), special software supporting studies in the climate and environmental change areas is required. An approach for integrated analysis of georeferenced climatological data sets, based on a combination of web and GIS technologies within the spatial data infrastructure paradigm, is presented. Following this approach, a dedicated data-processing web system for integrated analysis of heterogeneous georeferenced climatological and meteorological data is being developed. It is based on Open Geospatial Consortium (OGC) standards and involves many modern solutions such as an object-oriented programming model, modular composition, and JavaScript libraries based on the GeoExt library, the ExtJS framework and OpenLayers software. This work is supported by the Ministry of Education and Science of the Russian Federation, Agreement #14.613.21.0037.
Framework for ReSTful Web Services in OSGi
NASA Technical Reports Server (NTRS)
Shams, Khawaja S.; Norris, Jeffrey S.; Powell, Mark W.; Crockett, Thomas M.; Mittman, David S.; Fox, Jason M.; Joswig, Joseph C.; Wallick, Michael N.; Torres, Recaredo J.; Rabe, Kenneth
2009-01-01
Ensemble ReST is a software system that eases the development, deployment, and maintenance of server-side application programs to perform functions that would otherwise be performed by client software. Ensemble ReST takes advantage of the proven disciplines of ReST (Representational State Transfer). ReST leverages the standardized HTTP protocol to enable developers to offer services to a diverse variety of clients, from shell scripts to sophisticated Java application suites.
New framework of NGN web-based management system
NASA Astrophysics Data System (ADS)
Nian, Zhou; Jie, Yin; Qian, Mao
2007-11-01
This paper introduces the basic concepts and key technologies of Ajax and some popular frameworks in the J2EE architecture, and attempts to integrate these frameworks into a new one. Developers can build web applications much more conveniently with this framework, and the resulting web applications can provide a friendlier, more interactive platform to end users. Finally, an example is given to explain how to use the new framework to build a web-based management system for the softswitch network.
Doukas, Charalampos; Goudas, Theodosis; Fischer, Simon; Mierswa, Ingo; Chatziioannou, Aristotle; Maglogiannis, Ilias
2010-01-01
This paper presents an open image-mining framework that provides access to tools and methods for the characterization of medical images. Several image processing and feature extraction operators have been implemented and exposed through Web services. RapidMiner, an open source data mining system, has been utilized for applying classification operators and creating the essential processing workflows. The proposed framework has been applied to the detection of salient objects in obstructive nephropathy microscopy images. Initial classification results are quite promising, demonstrating the feasibility of automated characterization of kidney biopsy images.
Hongsermeier, Tonya; Maviglia, Saverio; Tsurikova, Lana; Bogaty, Dan; Rocha, Roberto A; Goldberg, Howard; Meltzer, Seth; Middleton, Blackford
2011-01-01
The goal of the CDS Consortium (CDSC) is to assess, define, demonstrate, and evaluate best practices for knowledge management and clinical decision support in healthcare information technology at scale, across multiple ambulatory care settings and Electronic Health Record technology platforms. In the course of the CDSC research effort, it became evident that a sound legal foundation was required for knowledge sharing and clinical decision support services in order to address data sharing, intellectual property, accountability, and liability concerns. This paper outlines the framework utilized for developing agreements in support of sharing, accessing, and publishing content via the CDSC Knowledge Management Portal, as well as an agreement in support of deployment and consumption of CDSC-developed web services in the context of a research project under IRB oversight.
Ontology driven integration platform for clinical and translational research
Mirhaji, Parsa; Zhu, Min; Vagnoni, Mattew; Bernstam, Elmer V; Zhang, Jiajie; Smith, Jack W
2009-01-01
Semantic Web technologies offer a promising framework for integration of disparate biomedical data. In this paper we present the semantic information integration platform under development at the Center for Clinical and Translational Sciences (CCTS) at the University of Texas Health Science Center at Houston (UTHSC-H) as part of our Clinical and Translational Science Award (CTSA) program. We utilize the Semantic Web technologies not only for integrating, repurposing and classification of multi-source clinical data, but also to construct a distributed environment for information sharing, and collaboration online. Service Oriented Architecture (SOA) is used to modularize and distribute reusable services in a dynamic and distributed environment. Components of the semantic solution and its overall architecture are described. PMID:19208190
PAVICS: A Platform for the Analysis and Visualization of Climate Science
NASA Astrophysics Data System (ADS)
Gauvin St-Denis, B.; Landry, T.; Huard, D. B.; Byrns, D.; Chaumont, D.; Foucher, S.
2016-12-01
Climate service providers are boundary organizations working at the interface of climate science research and users of climate information. Users include academics in other disciplines looking for credible, customized future climate scenarios, government planners, resource managers, asset owners, as well as service utilities. These users are looking for relevant information regarding the impacts of climate change as well as for input to decisions regarding adaptation options. As climate change concerns become mainstream, the pressure on climate service providers to deliver tailored, high quality information in a timely manner increases rapidly. To meet this growing demand, Ouranos, a climate service center located in Montreal, is collaborating with the Centre de recherche informatique de Montreal (CRIM) to develop a climate data analysis web-based platform interacting with RESTful services covering data access and retrieval, geospatial analysis, bias correction, distributed climate indicator computing and results visualization. The project, financed by CANARIE, relies on the experience of the UV-CDAT and ESGF-CWT teams, as well as on the Birdhouse framework developed by the German Climate Computing Center (DKRZ) and the French IPSL. Climate data are accessed through OPeNDAP, while computations are carried out through WPS. Regions such as watersheds or user-defined polygons, used as spatial selections for computations, are managed by GeoServer, which also provides WMS, WFS and WPS capabilities. The services are hosted on independent servers communicating over a high-throughput network. Deployment, maintenance and collaboration with other development teams are eased by the use of Docker and OpenStack VMs. Web-based tools are developed with modern web frameworks such as React-Redux, OpenLayers 3, Cesium and Plotly. Although the main objective of the project is to build a functioning, usable data analysis pipeline within two years, time is also devoted to exploring emerging technologies and assessing their potential. For instance, sandbox environments will store climate data in HDFS, process it with Apache Spark and allow interaction through Jupyter Notebooks. Data streaming of observational data with OpenGL and Cesium is also considered.
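To give a flavour of the WPS-based computation path, the sketch below invokes a processing service with the OWSLib client library; the endpoint URL, process identifier and input names are hypothetical placeholders rather than the actual PAVICS/Birdhouse interface.

```python
from owslib.wps import WebProcessingService

# Hypothetical WPS endpoint exposing climate-indicator processes.
wps = WebProcessingService("https://example.org/wps")

# List the processes the service advertises.
for process in wps.processes:
    print(process.identifier, "-", process.title)

# Execute a (hypothetical) indicator process on an OPeNDAP-served dataset.
execution = wps.execute(
    "frost_days",  # placeholder process identifier
    inputs=[("dataset", "https://example.org/thredds/dodsC/tasmin.nc")],
)
print(execution.status)
```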
Services for Emodnet-Chemistry Data Products
NASA Astrophysics Data System (ADS)
Santinelli, Giorgio; Hendriksen, Gerrit; Barth, Alexander
2016-04-01
In the framework of the Emodnet Chemistry lot, data products from regional leaders were made available in order to transform the information into a database. This has been done using functions and scripts that read so-called enriched ODV files and insert the data directly into a cloud relational geodatabase. The main table is the observations table, which contains the main data and metadata associated with the enriched ODV files. A particular implementation of data loading is used to improve on-the-fly computational speed. Data from the Baltic Sea, North Sea, Mediterranean, Black Sea and part of the Atlantic region have been entered into the geodatabase and are consequently instantly available from the OceanBrowser Emodnet portal. Furthermore, Deltares has developed an application that provides additional visualisation services for the aggregated and validated data collections. The visualisations are produced by making use of part of the OpenEarthTools stack (http://www.openearth.eu), by the integration of Web Feature Services and by the implementation of Web Processing Services. The goal is the generation of server-side plots of time series, profiles, time profiles and maps of selected parameters from the data sets of selected stations. Regional data collections are retrieved using the Emodnet Chemistry cloud relational geodatabase. The resolution in time and the intensity of data availability for selected parameters are shown using Web service requests via the OceanBrowser Emodnet web portal. OceanBrowser also shows station reference codes, which are used to establish a link for additional metadata, further data shopping and download.
Towards an e-Health Cloud Solution for Remote Regions at Bahia-Brazil.
Sarinho, V T; Mota, A O; Silva, E P
2017-12-19
This paper presents CloudMedic, an e-Health Cloud solution that manages health care services in remote regions of Bahia, Brazil. To this end, six main modules (Clinic, Hospital, Supply, Administrative, Billing and Health Business Intelligence) were developed to control the health flow among health actors at health institutions. They provide a database model and procedures for health business rules, a standard gateway for data maintenance between web views and the database layer, and a multi-front-end framework based on web view and web command configurations. These resources were used by 2042 health actors in 261 health posts covering health demands from 118 municipalities in Bahia state. They also managed approximately 2.4 million health service orders and approximately 13.5 million health exams for more than 1.3 million registered patients. As a result, a collection of health functionalities available in a cloud infrastructure was successfully developed, deployed and validated in more than 28% of Bahia municipalities: a viable e-Health Cloud solution that, despite the limitations of municipalities in remote regions, has decentralized and improved access to health care services in Bahia state.
A Prototype Web-based system for GOES-R Space Weather Data
NASA Astrophysics Data System (ADS)
Sundaravel, A.; Wilkinson, D. C.
2010-12-01
The Geostationary Operational Environmental Satellite-R Series (GOES-R) makes use of advanced instruments and technologies to monitor the Earth's surface and provide accurate space weather data. The first GOES-R series satellite is scheduled to be launched in 2015. The data from the satellite will be widely used by scientists for space weather modeling and predictions. This project looks into how these datasets can be made available to scientists on the Web and how to assist them in their research. We are developing a prototype web-based system that allows users to browse, search and download these data. The GOES-R datasets will be archived in NetCDF (Network Common Data Form) and CSV (Comma Separated Values) formats. NetCDF is a self-describing data format that contains both the metadata information and the data, with the data stored in an array-oriented fashion. The web-based system will offer services in two ways: via a web application (portal) and via web services. Using the web application, users can download data in NetCDF or CSV format and can also plot a graph of the data. The web page displays the various categories of data and the time intervals for which the data are available. The web application (client) sends the user query to the server, which then connects to the data sources to retrieve the data and delivers it to the users. Data access will also be provided via SOAP (Simple Object Access Protocol) and REST (Representational State Transfer) web services. These provide functions which can be used by other applications to fetch data and use them for further processing. To build the prototype system, we are making use of proxy data from existing GOES and POES space weather datasets. Java is the programming language used to develop the tools that format data to NetCDF and CSV. For the web technology we have chosen Grails to develop both the web application and the services; Grails is an open source web application framework based on the Groovy language. We are also making use of the THREDDS (Thematic Realtime Environmental Distributed Data Services) server to publish and access the NetCDF files. We have completed software tools to generate NetCDF and CSV data files, as well as tools to translate NetCDF to CSV. The current phase of the project involves designing and developing the web interface.
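As a sketch of the NetCDF/CSV tooling described above, the snippet below writes a small NetCDF file with the netCDF4 Python library and then translates the same series to CSV; the variable names, units and values are illustrative placeholders, not the GOES-R product schema.

```python
import csv
from netCDF4 import Dataset

# Write a minimal self-describing NetCDF file (names and values are placeholders).
with Dataset("flux.nc", "w") as nc:
    nc.createDimension("time", 3)
    t = nc.createVariable("time", "f8", ("time",))
    flux = nc.createVariable("proton_flux", "f4", ("time",))
    t.units = "seconds since 2010-01-01 00:00:00"
    t[:] = [0.0, 60.0, 120.0]
    flux[:] = [1.2, 1.5, 1.1]

# Translate the same variables to CSV, mirroring a NetCDF-to-CSV tool.
with Dataset("flux.nc") as nc, open("flux.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["time", "proton_flux"])
    for ti, fi in zip(nc["time"][:], nc["proton_flux"][:]):
        writer.writerow([float(ti), float(fi)])
```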
Abstracting application deployment on Cloud infrastructures
NASA Astrophysics Data System (ADS)
Aiftimiei, D. C.; Fattibene, E.; Gargana, R.; Panella, M.; Salomoni, D.
2017-10-01
Deploying a complex application on a Cloud-based infrastructure can be a challenging task. In this contribution we present an approach for Cloud-based deployment of applications and its present or future implementation in the framework of several projects, such as “!CHAOS: a cloud of controls” [1], a project funded by MIUR (Italian Ministry of Research and Education) to create a Cloud-based deployment of a control system and data acquisition framework, “INDIGO-DataCloud” [2], an EC H2020 project targeting, among other things, high-level deployment of applications on hybrid Clouds, and “Open City Platform” [3], an Italian project aiming to provide open Cloud solutions for Italian Public Administrations. We opted to use an orchestration service to hide the complex deployment of the application components, and to build an abstraction layer on top of the orchestration one. Through the Heat [4] orchestration service, we prototyped a dynamic, on-demand, scalable platform of software components, based on OpenStack infrastructures. On top of the orchestration service we developed a prototype of a web interface exploiting the Heat APIs. The user can start an instance of the application without any knowledge of the underlying Cloud infrastructure and services. Moreover, the platform instance can be customized by choosing parameters related to the application, such as the size of a file system or the number of instances of a NoSQL DB cluster. As soon as the desired platform is running, the web interface offers the possibility to scale some infrastructure components. In this contribution we describe the solution design and implementation, based on the application requirements, the details of the development of both the Heat templates and the web interface, together with possible exploitation strategies of this work in Cloud data centers.
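A rough sketch of the pattern described above, using the python-heatclient API to launch a stack from a minimal HOT template whose flavor is exposed as a user-visible parameter; all endpoints, credentials and resource names are placeholders, and the template is a toy example rather than one of the project's templates.

```python
from keystoneauth1.identity import v3
from keystoneauth1 import session
from heatclient import client as heat_client

# Placeholder credentials; a real deployment supplies its own Keystone endpoint.
auth = v3.Password(
    auth_url="https://cloud.example.org:5000/v3",
    username="demo", password="secret", project_name="demo",
    user_domain_id="default", project_domain_id="default",
)
heat = heat_client.Client("1", session=session.Session(auth=auth))

# Toy HOT template: one server whose flavor is a user-visible parameter,
# mirroring the customization the web interface exposes.
template = {
    "heat_template_version": "2016-04-08",
    "parameters": {"flavor": {"type": "string", "default": "m1.small"}},
    "resources": {
        "server": {
            "type": "OS::Nova::Server",
            "properties": {"image": "centos-7", "flavor": {"get_param": "flavor"}},
        }
    },
}

heat.stacks.create(stack_name="demo-platform", template=template,
                   parameters={"flavor": "m1.medium"})
```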
Big Data Discovery and Access Services through NOAA OneStop
NASA Astrophysics Data System (ADS)
Casey, K. S.; Neufeld, D.; Ritchey, N. A.; Relph, J.; Fischman, D.; Baldwin, R.
2017-12-01
The NOAA OneStop Project was created as a pathfinder effort to improve the discovery of, access to, and usability of NOAA's vast and diverse collection of big data. OneStop is led by the NOAA/NESDIS National Centers for Environmental Information (NCEI) and is seen as a key NESDIS contribution to NOAA's open data and data stewardship efforts. OneStop consists of an entire framework of services, from storage and interoperable access services at the base, through metadata and catalog services in the middle, to a modern user interface experience at the top. Importantly, it is an open framework to which external tools and services can connect at whichever level is most appropriate. Since the beta release of the OneStop user interface at the 2016 Fall AGU meeting, significant progress has been made in improving and modernizing many NOAA data collections to optimize their use within the framework. In addition, OneStop has made progress implementing robust metadata management and catalog systems at the collection and granule level and improving the user experience with the web interface. This progress will be summarized, and the results of extensive user testing, including professional usability studies, will be reviewed. Key big data technologies supporting the framework will be presented, and community input will be sought on the future directions of the OneStop Project.
NASA Astrophysics Data System (ADS)
Mantovani, Simone; Barboni, Damiano; Natali, Stefano; Evans, Ben; Steer, Adam; Hogan, Patrik; Baumann, Peter
2017-04-01
Globally, billions of dollars are invested annually in Earth observations that support public services, commercial activity, and scientific inquiry. The Common Data Framework [1] for Earth Observation data summarises the current standards for the international community to adopt a common approach so that this significant data can be readily accessible. Concurrently, the "Copernicus Cooperation Arrangement" between the European Commission and the Australian Government is just one of a number of recent agreements signed to facilitate satellite Earth observation data sharing among user communities. The typical approach implemented in these initiatives is the establishment of a regional data access hub, managed by the regional entity, to collect data at full scale or over the local region, improve access services and provide a high-performance environment in which all the data can be analysed. Furthermore, a number of datacube-aware platforms and services have emerged that enable a new collaborative approach for analysing the vast quantities of satellite imagery and other Earth observations, making it quicker and easier to explore a time series of image data. In this context, the H2020-funded EarthServer-2 project brings together multiple organisations in Europe, Australia and the United States to allow federated data holdings to be analysed using web-based access to petabytes of multidimensional geospatial datasets. The aim is to ensure that these large spatial data sources can be accessed based on OGC standards, namely the Web Coverage Service (WCS) and the Web Coverage Processing Service (WCPS), which provide efficient and timely retrieval of large volumes of geospatial data as well as on-the-fly processing. In this study, we provide an overview of the existing European Space Agency and Australian Geoscience Landsat datacubes, how the regional datacube structures differ, how interoperability is enabled through standards, and finally how the datacubes can be visualized on a virtual globe (NASA-ESA WebWorldWind) from a WC(P)S query via any standard internet browser. The current study is co-financed by the European Space Agency under the MaaS project (ESRIN Contract No. 4000114186/15/I-LG) and the European Union's Horizon 2020 research and innovation programme under the EarthServer-2 project (Grant Agreement No. 654367). [1] Common framework for Earth-Observation data, March 23, 2016 (https://www.whitehouse.gov/sites/default/files/microsites/ostp/common_framework_for_earth_observation_data.pdf)
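To illustrate the on-the-fly processing WCPS enables, the sketch below posts a WCPS expression to a coverage server using the KVP ProcessCoverages encoding (the style used, e.g., by rasdaman-backed endpoints); the server URL and coverage name are hypothetical placeholders.

```python
import requests

# Hypothetical WCPS-capable endpoint (e.g. a rasdaman-backed WCS server).
ENDPOINT = "https://example.org/rasdaman/ows"

# Average a (hypothetical) temperature coverage over one year; the result
# of avg() is a single scalar, returned in the response body.
query = """
for $c in (MeanSurfaceTemp)
return avg($c[ansi("2015-01-01":"2015-12-31")])
"""

response = requests.get(ENDPOINT, params={
    "service": "WCS", "version": "2.0.1",
    "request": "ProcessCoverages", "query": query,
}, timeout=60)
print(response.text)
```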
A secure and easy-to-implement web-based communication framework for caregiving robot teams
NASA Astrophysics Data System (ADS)
Tuna, G.; Daş, R.; Tuna, A.; Örenbaş, H.; Baykara, M.; Gülez, K.
2016-03-01
In recent years, robots have started to become more commonplace in our lives, from factory floors to museums, festivals and shows. They have started to change how we work and play. With an increase in the elderly population, they have also begun to be used for caregiving services, and hence many countries have been investing in robot development. The advancements in robotics and wireless communications have led to the emergence of autonomous caregiving robot teams which cooperate to accomplish a set of tasks assigned by human operators. Although wireless communications and devices are flexible and convenient, they are vulnerable to many risks compared to traditional wired networks. Since robots with wireless communication capability transmit all data types, including sensory, coordination, and control, through radio frequencies, they are open to intruders and attackers unless protected, and their openness may lead to many security issues such as data theft, passive listening, and service interruption. In this paper, a secure web-based communication framework is proposed to address potential security threats due to wireless communication in robot-robot and human-robot interaction. The proposed framework is simple and practical, and can be used by caregiving robot teams in the exchange of sensory data as well as coordination and control data.
First experience with the new .cern Top Level Domain
NASA Astrophysics Data System (ADS)
Alvarez, E.; Malo de Molina, M.; Salwerowicz, M.; Silva De Sousa, B.; Smith, T.; Wagner, A.
2017-10-01
In October 2015, CERN’s core website was moved to a new address, http://home.cern, marking the launch of the brand new top-level domain .cern. In combination with a formal governance and registration policy, the IT infrastructure needed to be extended to accommodate the hosting of websites in this new top-level domain. We will present the technical implementation within the framework of the CERN Web Services that provides virtual hosting and a reverse proxy solution, and that also includes the provisioning of SSL server certificates for secure communications.
Connecting long-tail scientists with big data centers using SaaS
NASA Astrophysics Data System (ADS)
Percivall, G. S.; Bermudez, L. E.
2012-12-01
Big data centers and long-tail scientists represent two extremes in the geoscience research community. Interoperability and inter-use based on software-as-a-service (SaaS) increase access to big data holdings by this underserved community of scientists. Large, institutional data centers have long been recognized as vital resources in the geoscience community. Permanent data archiving and dissemination centers provide "access to the data and (are) a critical source of people who have experience in the use of the data and can provide advice and counsel for new applications." [NRC] The "long tail of science" comprises the geoscience researchers that work separately from institutional data centers [Heidorn]. Long-tail scientists need to be efficient consumers of data from large, institutional data centers. Discussions in NSF EarthCube capture the challenges: "Like the vast majority of NSF-funded researchers, Alice (a long-tail scientist) works with limited resources. In the absence of suitable expertise and infrastructure, the apparently simple task that she assigns to her graduate student becomes an information discovery and management nightmare. Downloading and transforming datasets takes weeks." [Foster et al.] The long-tail metaphor points to methods to bridge the gap, i.e., the Web. A decade ago, OGC began building a geospatial information space using open web standards for geoprocessing [ORM]. Recently, [Foster et al.] accurately observed that "by adopting, adapting, and applying semantic web and SaaS technologies, we can make the use of geoscience data as easy and convenient as consumption of online media." SaaS places web services into cloud computing. SaaS for geospatial is emerging rapidly, building on the first-generation geospatial web, e.g., the OGC Web Coverage Service [WCS] and the Data Access Protocol [DAP]. Several recent examples show progress in applying SaaS to the geosciences: - NASA's Earth Data Coherent Web has a goal to improve the science user experience using Web services (e.g. W*S, SOAP, RESTful) to reduce barriers to using EOSDIS data [ECW]. - NASA's LANCE provides direct access to vast amounts of satellite data using the OGC Web Map Tile Service (WMTS). - NOAA's Unified Access Framework for Gridded Data (UAF Grid) is a web-service-based capability for direct access to a variety of datasets using netCDF, OPeNDAP, THREDDS, WMS and WCS. [UAF] Tools to access SaaS offerings are many and varied: some proprietary, others open source; some run in browsers, others are stand-alone applications. What is required is interoperability using the web interfaces offered by the data centers. NOAA's UAF service stack supports Matlab, ArcGIS, Ferret, GrADS, Google Earth, IDV, and LAS. Any SaaS that offers OGC Web Services (WMS, WFS, WCS) can be accessed by scores of clients [OGC]. While there has been much progress in recent years toward offering web services for the long tail of scientists, more needs to be done. Web services offer data access, but more than access is needed for inter-use of data, e.g. defining data schemas that allow for data fusion, addressing coordinate systems, spatial geometry, and semantics for observations. Connecting long-tail scientists with large data centers using SaaS and, in the future, the semantic web will address this large and currently underserved user community.
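As one concrete example of the client side of these OGC web services, the sketch below retrieves a map image from a WMS endpoint with the OWSLib library; the server URL and layer name are hypothetical placeholders.

```python
from owslib.wms import WebMapService

# Hypothetical WMS endpoint published by a data center.
wms = WebMapService("https://example.org/wms", version="1.3.0")
print(list(wms.contents))  # layers the server advertises

# Request one layer as a PNG covering the whole globe.
img = wms.getmap(
    layers=["sea_surface_temperature"],  # placeholder layer name
    srs="EPSG:4326",
    bbox=(-180, -90, 180, 90),
    size=(800, 400),
    format="image/png",
)
with open("sst.png", "wb") as f:
    f.write(img.read())
```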
Bravo, Carlos; Suarez, Carlos; González, Carolina; López, Diego; Blobel, Bernd
2014-01-01
Healthcare information is distributed through multiple heterogeneous and autonomous systems. Access to, and sharing of, distributed information sources is a challenging task. To contribute to meeting this challenge, this paper presents a formal, complete and semi-automatic transformation service from relational databases to the Web Ontology Language. The proposed service makes use of an algorithm that can transform data models from different domains, mainly by applying inheritance rules. The paper emphasizes the relevance of integrating the proposed approach into an ontology-based interoperability service to achieve semantic interoperability.
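The snippet below sketches the general flavour of a relational-to-OWL transformation with rdflib: each table becomes an owl:Class, each column a datatype property, and each foreign key an object property. The toy schema is invented for illustration and the rules shown are a simplification, not the algorithm from the paper.

```python
from rdflib import Graph, Namespace, RDF, RDFS, OWL

EX = Namespace("http://example.org/hospital#")

# Toy relational schema: table -> (columns, foreign keys to other tables).
schema = {
    "Patient": (["name", "birth_date"], {}),
    "Encounter": (["date"], {"patient_id": "Patient"}),
}

g = Graph()
g.bind("ex", EX)
g.bind("owl", OWL)

for table, (columns, fkeys) in schema.items():
    g.add((EX[table], RDF.type, OWL.Class))       # table -> class
    for col in columns:                           # column -> datatype property
        prop = EX[f"{table}_{col}"]
        g.add((prop, RDF.type, OWL.DatatypeProperty))
        g.add((prop, RDFS.domain, EX[table]))
    for fk, target in fkeys.items():              # foreign key -> object property
        prop = EX[f"{table}_{fk}"]
        g.add((prop, RDF.type, OWL.ObjectProperty))
        g.add((prop, RDFS.domain, EX[table]))
        g.add((prop, RDFS.range, EX[target]))

print(g.serialize(format="turtle"))
```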
Making decisions to increase community or regional sustainability requires a comprehensive understanding of the linkages between environmental, social, and economic systems. We present a visualization tool that can improve decision processes by enhancing understanding of system c...
An Ontology Infrastructure for an E-Learning Scenario
ERIC Educational Resources Information Center
Guo, Wen-Ying; Chen, De-Ren
2007-01-01
Selecting appropriate learning services for a learner from a large number of heterogeneous knowledge sources is a complex and challenging task. This article illustrates and discusses how Semantic Web technologies such as RDF [resource description framework] and ontology can be applied to e-learning systems to help the learner in selecting an…
InterMine Webservices for Phytozome
DOE Office of Scientific and Technical Information (OSTI.GOV)
Carlson, Joseph; Hayes, David; Goodstein, David
2014-01-10
A data warehousing framework for biological information provides a useful infrastructure for both providers and users of genomic data. For providers, the infrastructure gives them a consistent mechanism for extracting raw data. For users, the web services supported by the software allow them to make either simple and common, or complex and unique, queries of the data.
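A sketch of how InterMine web services are commonly queried from Python with the intermine client library; the PhytoMine service URL and the organism constraint are illustrative assumptions.

```python
from intermine.webservice import Service

# PhytoMine web service endpoint for Phytozome (assumed URL).
service = Service("https://phytozome.jgi.doe.gov/phytomine/service")

# A simple query: identifiers and lengths of genes for one organism.
query = service.new_query("Gene")
query.add_view("primaryIdentifier", "length")
query.add_constraint("organism.shortName", "=", "A. thaliana")  # placeholder value

for row in query.rows(size=10):
    print(row["primaryIdentifier"], row["length"])
```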
iDEAS: A web-based system for dry eye assessment.
Remeseiro, Beatriz; Barreira, Noelia; García-Resúa, Carlos; Lira, Madalena; Giráldez, María J; Yebra-Pimentel, Eva; Penedo, Manuel G
2016-07-01
Dry eye disease is a public health problem whose multifactorial etiology challenges clinicians and researchers, making collaboration between different experts and centers necessary. The evaluation of the interference patterns observed in the tear film lipid layer is a common clinical test used for dry eye diagnosis. However, it is a time-consuming task with a high degree of intra- as well as inter-observer variability, which makes the use of a computer-based analysis system highly desirable. This work introduces iDEAS (Dry Eye Assessment System), a web-based application to support dry eye diagnosis. iDEAS provides a framework for eye care experts to work collaboratively using image-based services in a distributed environment. It is composed of three main components: the web client for user interaction, the web application server for request processing, and the service module for image analysis. Specifically, this manuscript presents two automatic services: tear film classification, which classifies an image into one interference pattern, and tear film map, which illustrates the distribution of the patterns over the entire tear film. iDEAS has been evaluated by specialists from different institutions to test its performance. Both services have been evaluated in terms of a set of performance metrics using the annotations of different experts. The processing time of both services has also been measured for efficiency purposes. iDEAS is a web-based application which provides a fast, reliable environment for dry eye assessment. The system allows practitioners to share images, clinical information and automatic assessments between remote computers. Additionally, it saves time for experts, diminishes inter-expert variability and can be used in both clinical and research settings.
New Generation Sensor Web Enablement
Bröring, Arne; Echterhoff, Johannes; Jirka, Simon; Simonis, Ingo; Everding, Thomas; Stasch, Christoph; Liang, Steve; Lemmens, Rob
2011-01-01
Many sensor networks have been deployed to monitor Earth’s environment, and more will follow in the future. Environmental sensors have improved continuously by becoming smaller, cheaper, and more intelligent. Due to the large number of sensor manufacturers and differing accompanying protocols, integrating diverse sensors into observation systems is not straightforward. A coherent infrastructure is needed to treat sensors in an interoperable, platform-independent and uniform way. The concept of the Sensor Web reflects such a kind of infrastructure for sharing, finding, and accessing sensors and their data across different applications. It hides the heterogeneous sensor hardware and communication protocols from the applications built on top of it. The Sensor Web Enablement initiative of the Open Geospatial Consortium standardizes web service interfaces and data encodings which can be used as building blocks for a Sensor Web. This article illustrates and analyzes the recent developments of the new generation of the Sensor Web Enablement specification framework. Further, we relate the Sensor Web to other emerging concepts such as the Web of Things and point out challenges and resulting future work topics for research on Sensor Web Enablement. PMID:22163760
Sharing Human-Generated Observations by Integrating HMI and the Semantic Sensor Web
Sigüenza, Álvaro; Díaz-Pardo, David; Bernat, Jesús; Vancea, Vasile; Blanco, José Luis; Conejero, David; Gómez, Luis Hernández
2012-01-01
Current “Internet of Things” concepts point to a future where connected objects gather meaningful information about their environment and share it with other objects and people. In particular, objects embedding Human Machine Interaction (HMI), such as mobile devices and, increasingly, connected vehicles, home appliances, urban interactive infrastructures, etc., may not only be conceived as sources of sensor information, but, through interaction with their users, they can also produce highly valuable context-aware human-generated observations. We believe that the great promise offered by combining and sharing all of the different sources of information available can be realized through the integration of HMI and Semantic Sensor Web technologies. This paper presents a technological framework that harmonizes two of the most influential HMI and Sensor Web initiatives: the W3C's Multimodal Architecture and Interfaces (MMI) and the Open Geospatial Consortium (OGC) Sensor Web Enablement (SWE) with its semantic extension, respectively. Although the proposed framework is general enough to be applied in a variety of connected objects integrating HMI, a particular development is presented for a connected car scenario where drivers' observations about the traffic or their environment are shared across the Semantic Sensor Web. For implementation and evaluation purposes an on-board OSGi (Open Services Gateway Initiative) architecture was built, integrating several available HMI, Sensor Web and Semantic Web technologies. A technical performance test and a conceptual validation of the scenario with potential users are reported, with results suggesting the approach is sound. PMID:22778643
Design and realization of intelligent tourism service system based on voice interaction
NASA Astrophysics Data System (ADS)
Hu, Lei-di; Long, Yi; Qian, Cheng-yang; Zhang, Ling; Lv, Guo-nian
2008-10-01
Voice technology is one of the important means of improving the intelligence and user-friendliness of tourism service systems. Focusing on application needs and system composition, the paper presents an overall framework for an intelligent tourism service system combining voice technology, consisting of a presentation layer, a Web services layer, and a tourism application service layer. On this basis, the paper further elaborates the implementation of the system and its key technologies, including intelligent voice interaction, seamless integration of multiple data sources, location-aware guide services, and tourism safety control. Finally, based on the situation of Nanjing tourism, a prototype of the tourism service system is realized.
An Interoperable Architecture for Air Pollution Early Warning System Based on Sensor Web
NASA Astrophysics Data System (ADS)
Samadzadegan, F.; Zahmatkesh, H.; Saber, M.; Ghazi khanlou, H. J.
2013-09-01
Environmental monitoring systems deal with time-sensitive issues which require quick responses in emergency situations. Handling sensor observations in near real time and obtaining valuable information from them are challenging issues in these systems from a technical and scientific point of view. Ever-increasing population growth in urban areas has caused problems in developing countries that have a direct or indirect impact on human life. One applicable solution for controlling and managing air quality in mega cities is to use real-time, up-to-date air quality information gathered by spatially distributed sensors, employing Sensor Web technology to develop monitoring and early warning systems. Urban air quality monitoring systems use the functionality of geospatial information systems as a platform for analysing, processing, and visualizing data, in combination with the Sensor Web, to support decision support systems in disaster management and emergency situations. This system uses the Sensor Web Enablement (SWE) framework of the Open Geospatial Consortium (OGC), which offers a standard framework that allows the integration of sensors and sensor data into spatial data infrastructures. The SWE framework introduces standards for services to access sensor data and discover events from sensor data streams, as well as a set of standards for the description of sensors and the encoding of measurements. The presented system provides capabilities to collect, transfer, share, and process air quality sensor data and to disseminate air quality status in real time. Interoperability challenges can be overcome by using this standard framework. In a routine scenario, air quality data measured by in-situ sensors are communicated to a central station where the data are analysed and processed. The extracted air quality status is examined to discover emergency situations and, if necessary, air quality reports are sent to the authorities. This research proposes an architecture showing how to integrate air quality sensor data streams into a geospatial data infrastructure, presenting an interoperable air quality monitoring system that supports disaster management systems with real-time information. The developed system was tested on Tehran air pollution sensors, calculating the Air Quality Index (AQI) for the CO pollutant and subsequently notifying registered users by warning e-mails in emergency cases. An air quality monitoring portal is used to retrieve and visualize sensor observations through the interoperable framework. The system provides capabilities to retrieve SOS observations using WPS in a cascaded service-chaining pattern for monitoring trends in timely sensor observations.
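The AQI computation mentioned above is conventionally a piecewise-linear interpolation between pollutant breakpoints; a minimal sketch for 8-hour CO concentrations, assuming the US EPA breakpoint table (ppm), is shown below.

```python
# US EPA breakpoints for 8-hour CO in ppm: (C_lo, C_hi, I_lo, I_hi).
CO_BREAKPOINTS = [
    (0.0, 4.4, 0, 50),
    (4.5, 9.4, 51, 100),
    (9.5, 12.4, 101, 150),
    (12.5, 15.4, 151, 200),
    (15.5, 30.4, 201, 300),
    (30.5, 40.4, 301, 400),
    (40.5, 50.4, 401, 500),
]

def co_aqi(concentration_ppm: float) -> int:
    """Linear interpolation within the breakpoint interval containing C."""
    for c_lo, c_hi, i_lo, i_hi in CO_BREAKPOINTS:
        if c_lo <= concentration_ppm <= c_hi:
            return round((i_hi - i_lo) / (c_hi - c_lo)
                         * (concentration_ppm - c_lo) + i_lo)
    raise ValueError("concentration outside the AQI scale")

print(co_aqi(10.8))  # 123, in the 'unhealthy for sensitive groups' band
```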
Comparison of Physics Frameworks for WebGL-Based Game Engine
NASA Astrophysics Data System (ADS)
Yogya, Resa; Kosala, Raymond
2014-03-01
Recently, a new technology called WebGL has shown a lot of potential for developing games. However, since this technology is still new, many possibilities in the game development area remain unexplored. This paper tries to uncover the potential of integrating physics frameworks with WebGL technology in a game engine for developing 2D or 3D games. Specifically, we integrated three open source physics frameworks, Bullet, Cannon, and JigLib, into a WebGL-based game engine. Through experiments, we assessed these frameworks in terms of their correctness or accuracy, performance, completeness and compatibility. The results show that it is possible to integrate open source physics frameworks into a WebGL-based game engine, and that Bullet is the best of the three to integrate into a WebGL-based game engine.
The Live Access Server - A Web-Services Framework for Earth Science Data
NASA Astrophysics Data System (ADS)
Schweitzer, R.; Hankin, S. C.; Callahan, J. S.; O'Brien, K.; Manke, A.; Wang, X. Y.
2005-12-01
The Live Access Server (LAS) is a general-purpose web server for delivering services related to geoscience data sets. Data providers can use the LAS architecture to build custom web interfaces to their scientific data. Users and client programs can then access the LAS site to search the provider's online data holdings, make plots of data, create subsets in a variety of formats, compare data sets and perform analysis on the data. The Live Access Server software has continued to evolve by expanding the types of data it can serve (in-situ observations and curvilinear grids) and by taking advantage of advances in software infrastructure both in the earth sciences community (THREDDS, the GrADS Data Server, the Anagram framework and Java netCDF 2.2) and in the web community (the Java Servlet and Apache Jakarta frameworks). This presentation will explore the continued evolution of the LAS architecture towards a complete web-services-based framework. Additionally, we will discuss the redesign and modernization of some of the support tools available to LAS installers. Soon after the initial implementation, the LAS architecture was redesigned to separate the components responsible for user interaction (the User Interface Server) from the components responsible for interacting with the data and producing the output requested by the user (the Product Server). During this redesign, we changed the implementation of the User Interface Server from CGI and JavaScript to the Java Servlet specification using Apache Jakarta Velocity, backed by a database store holding the user interface widget components. The User Interface Server is now quite flexible and highly configurable because we modernized the components used for the implementation. Meanwhile, the implementation of the Product Server has remained a Perl CGI-based system. Clearly, the time has come to modernize this part of the LAS architecture. Before undertaking such a modernization it is important to understand what we hope to gain. Specifically, we would like to make it even easier to add new output products into our core system based on the Ferret analysis and visualization package. By carefully factoring the tasks needed to create a product, we will be able to create new products simply by adding a description of the product to the configuration and writing the Ferret script needed to create the product. No code will need to be added to the Product Server to bring a new product online. The new architecture should be faster at extracting and processing the configuration information needed to address each request. Finally, the new Product Server architecture should make it even easier to pass specialized configuration information to the Product Server to deal with unanticipated special data structures or processing requirements.
NASA Astrophysics Data System (ADS)
Wang, Xiaodong; Zhang, Xiaoyu; Cai, Hongming; Xu, Boyi
Enacting a supply-chain process involves various partners and different IT systems. REST is receiving increasing attention for distributed systems with loosely coupled resources. Nevertheless, resource model incompatibilities and conflicts prevent effective process modeling and deployment in resource-centric Web service environments. In this paper, a Petri-net based framework for supply-chain process integration is proposed. A resource meta-model is constructed to represent the basic information of resources. Then, based on the resource meta-model, XML schemas and documents are derived, which represent resources and their states in the Petri net. Thereafter, XML-net, a high-level Petri net, is employed for modeling the control and data flow of the process. From the process model in XML-net, RESTful services and choreography descriptions are deduced. In this way, a unified resource representation and RESTful service descriptions are proposed for more effective cross-system integration. A case study is given to illustrate the approach, and its desirable features are discussed.
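To make the Petri-net machinery concrete, here is a minimal token-game sketch: a transition fires when every input place holds a token, the basic execution rule that higher-level formalisms such as XML-net build on. The place and transition names are invented for illustration.

```python
# Minimal Petri net: the marking maps places to token counts.
marking = {"order_received": 1, "stock_checked": 0, "order_shipped": 0}

# Transitions: name -> (input places, output places).
transitions = {
    "check_stock": (["order_received"], ["stock_checked"]),
    "ship_order": (["stock_checked"], ["order_shipped"]),
}

def enabled(name: str) -> bool:
    """A transition is enabled when every input place holds a token."""
    inputs, _ = transitions[name]
    return all(marking[p] > 0 for p in inputs)

def fire(name: str) -> None:
    """Consume one token from each input place, produce one in each output."""
    if not enabled(name):
        raise RuntimeError(f"{name} is not enabled")
    inputs, outputs = transitions[name]
    for p in inputs:
        marking[p] -= 1
    for p in outputs:
        marking[p] += 1

fire("check_stock")
fire("ship_order")
print(marking)  # {'order_received': 0, 'stock_checked': 0, 'order_shipped': 1}
```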
Leveraging the NLM map from SNOMED CT to ICD-10-CM to facilitate adoption of ICD-10-CM.
Cartagena, F Phil; Schaeffer, Molly; Rifai, Dorothy; Doroshenko, Victoria; Goldberg, Howard S
2015-05-01
Develop and test web services to retrieve and identify the most precise ICD-10-CM code(s) for a given clinical encounter. Facilitate creation of user interfaces that 1) provide an initial shortlist of candidate codes, ideally visible on a single screen; and 2) enable code refinement. To satisfy our high-level use cases, the analysis and design process involved reviewing available maps and crosswalks, designing the rule adjudication framework, determining necessary metadata, retrieving related codes, and iteratively improving the code refinement algorithm. The Partners ICD-10-CM Search and Mapping Services (PI-10 Services) are SOAP web services written using Microsoft's .NET 4.0 Framework, Windows Communication Foundation, and SQL Server 2012. The services cover 96% of the Partners problem list subset of SNOMED CT codes that map to ICD-10-CM codes and can return up to 76% of the 69,823 billable ICD-10-CM codes prior to creation of custom mapping rules. We consider ways to increase 1) the coverage ratio of the Partners problem list subset of SNOMED CT codes and 2) the upper bound of returnable ICD-10-CM codes by creating custom mapping rules. Future work will investigate the utility of the transitive closure of SNOMED CT codes and other methods to assist in custom rule creation and, ultimately, to provide more complete coverage of ICD-10-CM codes. ICD-10-CM will be easier for clinicians to manage if applications display short lists of candidate codes from which clinicians can subsequently select a code for further refinement. The PI-10 Services support ICD-10 migration by implementing this paradigm and enabling users to consistently and accurately find the best ICD-10-CM code(s) without translation from ICD-9-CM. © The Author 2015. Published by Oxford University Press on behalf of the American Medical Informatics Association. All rights reserved. For Permissions, please email: journals.permissions@oup.com.
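A hypothetical client-side sketch of how such SOAP services might be called follows; the WSDL URL, operation name, and field names are assumptions for illustration only, since the abstract does not publish the PI-10 interface.

```python
# Hypothetical client sketch for a SOAP code-search service like PI-10.
# The WSDL URL, operation name, and fields are invented for illustration;
# the actual PI-10 Services interface is not published in this abstract.
from zeep import Client

client = Client("https://example.partners.org/pi10/search?wsdl")  # hypothetical

# Ask for candidate ICD-10-CM codes for a SNOMED CT concept, then refine.
candidates = client.service.GetCandidateCodes(snomedConceptId="44054006")
for code in candidates:
    print(code.icd10cm, code.description)
```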
NASA Astrophysics Data System (ADS)
Vahidi, H.; Mobasheri, A.; Alimardani, M.; Guan, Q.; Bakillah, M.
2014-04-01
Providing early mental health services during a disaster is a great challenge in the disaster response phase. Lack of access to adequate mental-health professionals in the early stages of large-scale disasters dramatically influences the success of mental health aid. In this paper, a conceptual framework has been suggested for adopting cellphone-type tele-operated android robots in the early stages of disasters to provide early mental health services to disaster survivors through a location-based and participatory approach. The techniques of enabling GI-services in a Peer-to-Peer (P2P) environment were studied to overcome the limitations of current centralized services. Therefore, the aim of this research study is to add more flexibility and autonomy to GI web services (WMS, WFS, WPS, etc.) and alleviate to some degree the inherent limitations of these centralized systems. A P2P system architecture is presented for the location-based service using minimalistic tele-operated android robots, and some key techniques for implementing this service using BestPeer were studied in developing this framework.
Breaking and Fixing Origin-Based Access Control in Hybrid Web/Mobile Application Frameworks.
Georgiev, Martin; Jana, Suman; Shmatikov, Vitaly
2014-02-01
Hybrid mobile applications (apps) combine the features of Web applications and "native" mobile apps. Like Web applications, they are implemented in portable, platform-independent languages such as HTML and JavaScript. Like native apps, they have direct access to local device resources: file system, location, camera, contacts, etc. Hybrid apps are typically developed using hybrid application frameworks such as PhoneGap. The purpose of the framework is twofold. First, it provides an embedded Web browser (for example, WebView on Android) that executes the app's Web code. Second, it supplies "bridges" that allow Web code to escape the browser and access local resources on the device. We analyze the software stack created by hybrid frameworks and demonstrate that it does not properly compose the access-control policies governing Web code and local code, respectively. Web code is governed by the same origin policy, whereas local code is governed by the access-control policy of the operating system (for example, user-granted permissions in Android). The bridges added by the framework to the browser have the same local access rights as the entire application, but are not correctly protected by the same origin policy. This opens the door to fracking attacks, which allow foreign-origin Web content included into a hybrid app (e.g., ads confined in iframes) to drill through the layers and directly access device resources. Fracking vulnerabilities are generic: they affect all hybrid frameworks, all embedded Web browsers, all bridge mechanisms, and all platforms on which these frameworks are deployed. We study the prevalence of fracking vulnerabilities in free Android apps based on the PhoneGap framework. Each vulnerability exposes sensitive local resources (the ability to read and write contacts list, local files, etc.) to dozens of potentially malicious Web domains. We also analyze the defenses deployed by hybrid frameworks to prevent resource access by foreign-origin Web content and explain why they are ineffectual. We then present NoFrak, a capability-based defense against fracking attacks. NoFrak is platform-independent, compatible with any framework and embedded browser, requires no changes to the code of the existing hybrid apps, and does not break their advertising-supported business model.
Carrió, Pau; López, Oriol; Sanz, Ferran; Pastor, Manuel
2015-01-01
Computational models based on Quantitative Structure-Activity Relationship (QSAR) methodologies are widely used tools for predicting the biological properties of new compounds. In many instances, such models are used as a routine in the industry (e.g. food, cosmetic or pharmaceutical industry) for the early assessment of the biological properties of new compounds. However, most of the tools currently available for developing QSAR models are not well suited for supporting the whole QSAR model life cycle in production environments. We have developed eTOXlab, an open source modeling framework designed to be used at the core of a self-contained virtual machine that can be easily deployed in production environments, providing predictions as web services. eTOXlab consists of a collection of object-oriented Python modules with methods mapping common tasks of standard modeling workflows. This framework allows building and validating QSAR models as well as predicting the properties of new compounds using either a command line interface or a graphic user interface (GUI). Simple models can be easily generated by setting a few parameters, while more complex models can be implemented by overriding pieces of the original source code. eTOXlab benefits from the object-oriented capabilities of Python for providing high flexibility: any model implemented using eTOXlab inherits the features implemented in the parent model, like common tools and services or the automatic exposure of the models as prediction web services. The particular eTOXlab architecture as a self-contained, portable prediction engine allows building models with confidential information within corporate facilities, which can be safely exported and used for prediction without disclosing the structures of the training series. The software presented here provides full support to the specific needs of users that want to develop, use and maintain predictive models in corporate environments. The technologies used by eTOXlab (web services, VM, object-oriented programming) provide an elegant solution to common practical issues; the system can be installed easily in heterogeneous environments and integrates well with other software. Moreover, the system provides a simple and safe solution for building models with confidential structures that can be shared without disclosing sensitive information.
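The inheritance-based flexibility described above can be sketched as follows; the class and method names are invented, and eTOXlab's actual API may differ.

```python
# Sketch of the inheritance pattern described above, with invented names:
# eTOXlab's real base class and method names may differ.
class Model:
    """Parent model: common services every model inherits."""
    def build(self, structures, activities):
        raise NotImplementedError

    def predict(self, structures):
        raise NotImplementedError

    def as_web_service(self):
        # Common feature inherited by every child model: automatic
        # exposure of the model as a prediction web service.
        print(f"exposing {type(self).__name__} at /predict")

class SimpleQSARModel(Model):
    """Simple model: configured by a few parameters only."""
    def __init__(self, descriptor_set="default", method="PLS"):
        self.descriptor_set = descriptor_set
        self.method = method

class CustomQSARModel(SimpleQSARModel):
    """Complex model: overrides a piece of the parent's behavior."""
    def predict(self, structures):
        # Custom prediction logic replaces the inherited implementation,
        # while web-service exposure is still inherited unchanged.
        return [0.0 for _ in structures]
```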
Tools for Integrating Data Access from the IRIS DMC into Research Workflows
NASA Astrophysics Data System (ADS)
Reyes, C. G.; Suleiman, Y. Y.; Trabant, C.; Karstens, R.; Weertman, B. R.
2012-12-01
Web service interfaces at the IRIS Data Management Center (DMC) provide access to a vast archive of seismological and related geophysical data. These interfaces are designed to easily incorporate data access into data processing workflows. Examples of data that may be accessed include: time series data, related metadata, and earthquake information. The DMC has developed command line scripts, MATLAB® interfaces and a Java library to support a wide variety of data access needs. Users of these interfaces do not need to concern themselves with web service details, networking, or even (in most cases) data conversion. Fetch scripts allow access to the DMC archive and are a comfortable fit for command line users. These scripts are written in Perl and are well suited for automation and integration into existing workflows on most operating systems. For metadata and event information, the Fetch scripts even parse the returned data into simple text summaries. The IRIS Java Web Services Library (IRIS-WS Library) allows Java developers to create programs that access the DMC archives seamlessly. By returning the data and information as native Java objects the Library insulates the developer from data formats, network programming and web service details. The MATLAB interfaces leverage this library to allow users access to the DMC archive directly from within MATLAB (r2009b or newer), returning data into variables for immediate use. Data users and research groups are developing other toolkits that use the DMC's web services. Notably, the ObsPy framework developed at LMU Munich is a Python Toolbox that allows seamless access to data and information via the DMC services. Another example is the MATLAB-based GISMO and Waveform Suite developments that can now access data via web services. In summary, there now exist a host of ways that researchers can bring IRIS DMC data directly into their workflows. MATLAB users can use irisFetch.m, command line users can use the various Fetch scripts, Java users can use the IRIS-WS library, and Python users may request data through ObsPy. To learn more about any of these clients see http://www.iris.edu/ws/wsclients/.
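As a concrete example of the ObsPy route mentioned above, the following minimal sketch requests waveforms and event information from the DMC; it uses ObsPy's current FDSN client, whose module layout postdates the clients described here.

```python
# A minimal ObsPy sketch of requesting waveforms and events from IRIS.
# This uses ObsPy's FDSN client; exact module paths have evolved since
# the services described above were introduced.
from obspy import UTCDateTime
from obspy.clients.fdsn import Client

client = Client("IRIS")
t0 = UTCDateTime("2012-01-01T00:00:00")

# Time series data: one hour of broadband data from station ANMO.
stream = client.get_waveforms("IU", "ANMO", "00", "BHZ", t0, t0 + 3600)
stream.plot()

# Event information: magnitude 6+ earthquakes in the first week of 2012.
catalog = client.get_events(starttime=t0, endtime=t0 + 7 * 86400, minmagnitude=6)
print(catalog)
```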
Kinjo, Akira R.; Suzuki, Hirofumi; Yamashita, Reiko; Ikegawa, Yasuyo; Kudou, Takahiro; Igarashi, Reiko; Kengaku, Yumiko; Cho, Hasumi; Standley, Daron M.; Nakagawa, Atsushi; Nakamura, Haruki
2012-01-01
The Protein Data Bank Japan (PDBj, http://pdbj.org) is a member of the worldwide Protein Data Bank (wwPDB) and accepts and processes the deposited data of experimentally determined macromolecular structures. While maintaining the archive in collaboration with other wwPDB partners, PDBj also provides a wide range of services and tools for analyzing structures and functions of proteins, which are summarized in this article. To enhance the interoperability of the PDB data, we have recently developed PDB/RDF, PDB data in the Resource Description Framework (RDF) format, along with its ontology in the Web Ontology Language (OWL) based on the PDB mmCIF Exchange Dictionary. Being in the standard format for the Semantic Web, the PDB/RDF data provide a means to integrate the PDB with other biological information resources. PMID:21976737
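A short sketch of how PDB/RDF might be queried with SPARQL from Python follows; the endpoint URL and predicate are illustrative assumptions, not PDBj's documented interface.

```python
# Sketch of querying PDB/RDF with SPARQL from Python. The endpoint URL
# and predicate names are illustrative; consult PDBj for the real ones.
from SPARQLWrapper import SPARQLWrapper, JSON

sparql = SPARQLWrapper("https://pdbj.org/sparql")  # assumed endpoint
sparql.setQuery("""
    PREFIX dcterms: <http://purl.org/dc/terms/>
    SELECT ?entry ?title WHERE {
        ?entry dcterms:title ?title .
    } LIMIT 10
""")
sparql.setReturnFormat(JSON)
for row in sparql.query().convert()["results"]["bindings"]:
    print(row["entry"]["value"], row["title"]["value"])
```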
Kobayashi, Norio; Ishii, Manabu; Takahashi, Satoshi; Mochizuki, Yoshiki; Matsushima, Akihiro; Toyoda, Tetsuro
2011-01-01
As global cloud frameworks for bioinformatics research databases become huge and heterogeneous, solutions face various diametric challenges comprising cross-integration, retrieval, security and openness. To address this, as of March 2011 organizations including RIKEN published 192 mammalian, plant and protein life sciences databases having 8.2 million data records, integrated as Linked Open or Private Data (LOD/LPD) using SciNetS.org, the Scientists' Networking System. The huge quantity of linked data this database integration framework covers is based on the Semantic Web, where researchers collaborate by managing metadata across public and private databases in a secured data space. This outstripped the data query capacity of existing interface tools like SPARQL. Actual research also requires specialized tools for data analysis using raw original data. To solve these challenges, in December 2009 we developed the lightweight Semantic-JSON interface to access each fragment of linked and raw life sciences data securely under the control of programming languages popularly used by bioinformaticians such as Perl and Ruby. Researchers successfully used the interface across 28 million semantic relationships for biological applications including genome design, sequence processing, inference over phenotype databases, full-text search indexing and human-readable contents like ontology and LOD tree viewers. Semantic-JSON services of SciNetS.org are provided at http://semanticjson.org. PMID:21632604
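A rough illustration of the Semantic-JSON access pattern, written in Python rather than the Perl or Ruby mentioned above; the query parameters are invented, so consult semanticjson.org for the real API.

```python
# Illustrative only: a Semantic-JSON style request from Python. The
# query parameters below are invented; see semanticjson.org for the API.
import requests

resp = requests.get(
    "http://semanticjson.org/api",                   # service named in the abstract
    params={"id": "ria:sample-record", "depth": 1},  # hypothetical parameters
    timeout=30,
)
record = resp.json()  # each fragment of linked data arrives as plain JSON
print(record)
```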
NASA Astrophysics Data System (ADS)
Ninsawat, Sarawut; Yamamoto, Hirokazu; Kamei, Akihide; Nakamura, Ryosuke; Tsuchida, Satoshi; Maeda, Takahisa
2010-05-01
With the availability of network enabled sensing devices, the volume of information being collected by networked sensors has increased dramatically in recent years. Over 100 physical, chemical and biological properties can be sensed using in-situ or remote sensing technology. A collection of these sensor nodes forms a sensor network, which is easily deployable to provide a high degree of visibility into real-world physical processes as events unfold. The sensor observation network could allow gathering of diverse types of data at greater spatial and temporal resolution through the use of wired or wireless network infrastructure; thus real-time or near-real-time data from the sensor observation network allow researchers and decision-makers to respond speedily to events. However, in the case of environmental monitoring, the capability to acquire in-situ data periodically is not sufficient by itself; the management and proper utilization of the data also need careful consideration. This requires the implementation of database and IT solutions that are robust, scalable and able to interoperate among different and distributed stakeholders to provide lucid, timely and accurate updates to researchers, planners and citizens. The GEO (Global Earth Observation) Grid aims primarily at providing an e-Science infrastructure for the earth science community. The GEO Grid is designed to integrate various kinds of data related to earth observation using grid technology, which is developed for sharing data, storage, and the computational power of high performance computing, and is accessible as a set of services. A comprehensive web-based system for integrating field sensor data and satellite imagery, based on various open standards of the OGC (Open Geospatial Consortium), has been developed. Web Processing Service (WPS), which is most likely the future direction of Web-GIS, performs the computation of spatial data from distributed data sources and returns the outcome in a standard format. The interoperability capabilities and Service Oriented Architecture (SOA) of web services allow combining sensor network measurements available from a Sensor Observation Service (SOS) with satellite remote sensing data from a Web Mapping Service (WMS) as distributed data sources for WPS. Various applications have been developed to demonstrate the efficacy of integrating heterogeneous data sources. For example, the validation of the MODIS aerosol products (MOD08_D3, the Level-3 MODIS Atmosphere Daily Global Product) by ground-based measurements using the sunphotometer (skyradiometer, Prede POM-02) installed at Phenological Eyes Network (PEN) sites in Japan. Furthermore, a web-based framework system for studying the relationship between the Vegetation Index calculated from MODIS surface reflectance (MOD09GA, the Surface Reflectance Daily L2G Global 1km and 500m Product) and Gross Primary Production (GPP) field measurements at flux tower sites in Thailand and Japan has also been developed. The success of both applications will help maximize data utilization and improve information accuracy by validating MODIS satellite products against highly accurate, high-temporal-resolution field measurements.
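The standards-based access pattern underlying this integration can be sketched with OWSLib; the service URLs, offering, and layer names below are placeholders, and the SOS call shown is approximate.

```python
# Sketch: pulling in-situ observations (SOS) and a satellite map layer (WMS)
# as inputs for a WPS-style computation. Service URLs are placeholders, and
# the offering/property names are hypothetical.
from owslib.sos import SensorObservationService
from owslib.wms import WebMapService

sos = SensorObservationService("https://example.org/sos")  # placeholder URL
obs = sos.get_observation(
    offerings=["temperature"],                             # hypothetical offering
    observedProperties=["urn:ogc:def:property:temperature"],
    responseFormat="text/xml",
)

wms = WebMapService("https://example.org/wms", version="1.1.1")  # placeholder
img = wms.getmap(layers=["MOD09GA"], srs="EPSG:4326",
                 bbox=(95.0, 5.0, 106.0, 21.0), size=(512, 512),
                 format="image/png")
# A WPS would combine both distributed sources server-side; here they are
# simply fetched to illustrate the standards-based access pattern.
```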
Tao, Yuan; Liu, Juan
2005-01-01
The Internet has shrunk our world of working and living into a very small scope, bringing about the concept of the Earth Village, in which people can communicate and co-work even though they are thousands of miles apart. This paper describes a prototype, an Earth Lab for bioinformatics, based on a Web services framework that builds up a network architecture for bioinformatics research, enabling biologists worldwide to easily implement enormous, complex processes and to effectively share and access computing resources and data, regardless of how heterogeneous the data formats are and how decentralized and distributed these resources are around the world. A small, simplified example scenario is then given to illustrate the prototype.
NASA Astrophysics Data System (ADS)
Sabeur, Zoheir; Middleton, Stuart; Veres, Galina; Zlatev, Zlatko; Salvo, Nicola
2010-05-01
The advancement of smart sensor technology in the last few years has led to an increase in the deployment of affordable sensors for monitoring the environment around Europe. This is generating large amounts of sensor observation information and inevitably leading to problems about how to manage large volumes of data as well as making sense out of the data for decision-making. In addition, the various European Directives (Water Framework Directive, Bathing Water Directive, Habitats Directive, etc.) which regulate human activities in the environment, and the INSPIRE Directive on spatial information management regulations, have implicitly led the designated European Member States' environment agencies and authorities to put in place new sensor monitoring infrastructure and share information about environmental regions under their statutory responsibilities. They will need to work cross-border and collectively reach environmental quality standards. They will also need to regularly report to the EC on the quality of the environments for which they are responsible and make such information accessible to members of the public. In recent years, early pioneering work on the design of service oriented architectures using sensor networks has been achieved. Information web-services infrastructure using existing data catalogues and web-GIS map services can now be enriched with the deployment of new sensor observation and data fusion and modelling services using OGC standards. New services describing sensor observations and intelligent data processing using data fusion techniques can now be implemented and provide added-value information with spatial-temporal uncertainties to the next generation of decision support service systems. Deploying these new decision support service systems across Europe has become key to complying with EU environmental regulations and INSPIRE. In this paper, data fusion services using OGC standards with sensor observation data streams are described in the context of a geo-distributed service infrastructure specialising in multiple environmental risk management and decision-support. The sensor data fusion services are deployed and validated in two use cases. These are respectively concerned with: 1) microbial risk forecasts in bathing waters; and 2) geohazards in urban zones during underground tunneling activities. This research was initiated in the SANY Integrated Project (www.sany-ip.org) and funded by the European Commission under the 6th Framework Programme.
Do Clouds Compute? A Framework for Estimating the Value of Cloud Computing
NASA Astrophysics Data System (ADS)
Klems, Markus; Nimis, Jens; Tai, Stefan
On-demand provisioning of scalable and reliable compute services, along with a cost model that charges consumers based on actual service usage, has been an objective in distributed computing research and industry for a while. Cloud Computing promises to deliver on this objective: consumers are able to rent infrastructure in the Cloud as needed, deploy applications and store data, and access them via Web protocols on a pay-per-use basis. The acceptance of Cloud Computing, however, depends on the ability for Cloud Computing providers and consumers to implement a model for business value co-creation. Therefore, a systematic approach to measure costs and benefits of Cloud Computing is needed. In this paper, we discuss the need for valuation of Cloud Computing, identify key components, and structure these components in a framework. The framework assists decision makers in estimating Cloud Computing costs and in comparing these costs to those of conventional IT solutions. We demonstrate by means of representative use cases how our framework can be applied to real world scenarios.
Benefits and Challenges of Architecture Frameworks
2011-06-01
systems and identify emerging and obsolete standards. • The NATO Capability View (NCV) serves the analysis and optimization of military capabilities... NCVs show the dependencies between different capabilities and allow detecting gaps and overlaps of capabilities. NCVs deliver indirectly requirements... • Email (possibly with vendor-specific extensions/modifications) • Proprietary, and possibly not well-documented, message formats • Web services
NASA Astrophysics Data System (ADS)
dias, S. B.; Yang, C.; Li, Z.; XIA, J.; Liu, K.; Gui, Z.; Li, W.
2013-12-01
Global climate change has become one of the biggest concerns for humankind in the 21st century due to its broad impacts on society and ecosystems across the world. The Arctic has been observed to be one of the regions most vulnerable to climate change. In order to understand the impacts of climate change on the natural environment, ecosystems, biodiversity and others in the Arctic region, and thus to better support the planning and decision making process, cross-disciplinary research is required to monitor and analyze changes of Arctic regions such as water, sea level, biodiversity and so on. Conducting such research demands the efficient utilization of various geospatially referenced data, web services and information related to the Arctic region. In this paper, we propose a cloud-enabled and service-oriented Spatial Web Portal (SWP) to support the discovery, integration and utilization of Arctic related geospatial resources, serving as a building block of polar CI. This SWP leverages the following techniques: 1) a hybrid searching mechanism combining centralized local search, distributed catalogue search and specialized Internet search for effectively discovering Arctic data and web services from multiple sources; 2) a service-oriented quality-enabled framework for seamless integration and utilization of various geospatial resources; and 3) a cloud-enabled parallel spatial index building approach to facilitate near-real time resource indexing and searching. A proof-of-concept prototype is developed to demonstrate the feasibility of the proposed SWP, using an example of analyzing the Arctic snow cover change over the past 50 years.
The GeoDataPortal: A Standards-based Environmental Modeling Data Access and Manipulation Toolkit
NASA Astrophysics Data System (ADS)
Blodgett, D. L.; Kunicki, T.; Booth, N.; Suftin, I.; Zoerb, R.; Walker, J.
2010-12-01
Environmental modelers from fields of study such as climatology, hydrology, geology, and ecology rely on many data sources and processing methods that are common across these disciplines. Interest in inter-disciplinary, loosely coupled modeling and data sharing is increasing among scientists from the USGS, other agencies, and academia. For example, hydrologic modelers need downscaled climate change scenarios and land cover data summarized for the watersheds they are modeling. Subsequently, ecological modelers are interested in soil moisture information for a particular habitat type as predicted by the hydrologic modeler. The USGS Center for Integrated Data Analytics Geo Data Portal (GDP) project seeks to facilitate this loose model coupling data sharing through broadly applicable open-source web processing services. These services simplify and streamline the time consuming and resource intensive tasks that are barriers to inter-disciplinary collaboration. The GDP framework includes a catalog describing projects, models, data, processes, and how they relate. Using newly introduced data, or sources already known to the catalog, the GDP facilitates access to sub-sets and common derivatives of data in numerous formats on disparate web servers. The GDP performs many of the critical functions needed to summarize data sources into modeling units regardless of scale or volume. A user can specify their analysis zones or modeling units as an Open Geospatial Consortium (OGC) standard Web Feature Service (WFS). Utilities to cache Shapefiles and other common GIS input formats have been developed to aid in making the geometry available for processing via WFS. Dataset access in the GDP relies primarily on the Unidata NetCDF-Java library’s common data model. Data transfer relies on methods provided by Unidata’s Thematic Real-time Environmental Data Distribution System Data Server (TDS). TDS services of interest include the Open-source Project for a Network Data Access Protocol (OPeNDAP) standard for gridded time series, the OGC’s Web Coverage Service for high-density static gridded data, and Unidata’s CDM-remote for point time series. OGC WFS and Sensor Observation Service (SOS) are being explored as mechanisms to serve and access static or time series data attributed to vector geometry. A set of standardized XML-based output formats allows easy transformation into a wide variety of “model-ready” formats. Interested users will have the option of submitting custom transformations to the GDP or transforming the XML output as a post-process. The GDP project aims to support simple, rapid development of thin user interfaces (like web portals) to commonly needed environmental modeling-related data access and manipulation tools. Standalone, service-oriented components of the GDP framework provide the metadata cataloging, data subset access, and spatial-statistics calculations needed to support interdisciplinary environmental modeling.
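A minimal sketch of supplying analysis zones as a WFS feature collection, the mechanism described above, using OWSLib; the endpoint and type name are placeholders.

```python
# Sketch of supplying analysis zones to a Geo Data Portal style workflow as
# an OGC WFS feature collection. The URL and type name are placeholders.
from owslib.wfs import WebFeatureService

wfs = WebFeatureService("https://example.usgs.gov/wfs", version="1.1.0")  # placeholder
response = wfs.getfeature(typename="watersheds:huc8")  # hypothetical layer
with open("zones.gml", "wb") as f:
    f.write(response.read())
# The GDP would then summarize gridded data (e.g., via OPeNDAP from a TDS)
# over each feature in this collection.
```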
Linked data and provenance in biological data webs.
Zhao, Jun; Miles, Alistair; Klyne, Graham; Shotton, David
2009-03-01
The Web is now being used as a platform for publishing and linking life science data. The Web's linking architecture can be exploited to join heterogeneous data from multiple sources. However, as data are frequently being updated in a decentralized environment, provenance information becomes critical to providing reliable and trustworthy services to scientists. This article presents design patterns for representing and querying provenance information relating to mapping links between heterogeneous data from sources in the domain of functional genomics. We illustrate the use of named resource description framework (RDF) graphs at different levels of granularity to make provenance assertions about linked data, and demonstrate that these assertions are sufficient to support requirements including data currency, integrity, evidential support and historical queries.
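The named-graph pattern described here can be sketched with rdflib: mapping links live in one named graph, and provenance assertions are made about that graph as a whole. The vocabulary URIs are illustrative.

```python
# Sketch of the named-graph pattern: provenance assertions attached to a
# graph of mapping links. Vocabulary URIs here are illustrative.
from rdflib import Dataset, Literal, Namespace, URIRef
from rdflib.namespace import DCTERMS, XSD

EX = Namespace("http://example.org/")
ds = Dataset()

# The mapping links live in their own named graph...
mapping = ds.graph(URIRef("http://example.org/graphs/mapping-2009-03"))
mapping.add((EX.geneA, EX.mapsTo, EX.probe42))

# ...so provenance can be asserted about the graph itself, at the
# granularity of the whole link set.
ds.add((URIRef("http://example.org/graphs/mapping-2009-03"),
        DCTERMS.created,
        Literal("2009-03-01", datatype=XSD.date)))
```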
Bare, J Christopher; Shannon, Paul T; Schmid, Amy K; Baliga, Nitin S
2007-01-01
Background: Information resources on the World Wide Web play an indispensable role in modern biology. But integrating data from multiple sources is often encumbered by the need to reformat data files, convert between naming systems, or perform ongoing maintenance of local copies of public databases. Opportunities for new ways of combining and re-using data are arising as a result of the increasing use of web protocols to transmit structured data. Results: The Firegoose, an extension to the Mozilla Firefox web browser, enables data transfer between web sites and desktop tools. As a component of the Gaggle integration framework, Firegoose can also exchange data with Cytoscape, the R statistical package, Multiexperiment Viewer (MeV), and several other popular desktop software tools. Firegoose adds the capability to easily use local data to query KEGG, EMBL STRING, DAVID, and other widely-used bioinformatics web sites. Query results from these web sites can be transferred to desktop tools for further analysis with a few clicks. Firegoose acquires data from the web by screen scraping, microformats, embedded XML, or web services. We define a microformat, which allows structured information compatible with the Gaggle to be embedded in HTML documents. We demonstrate the capabilities of this software by performing an analysis of the genes activated in the microbe Halobacterium salinarum NRC-1 in response to anaerobic environments. Starting with microarray data, we explore functions of differentially expressed genes by combining data from several public web resources and construct an integrated view of the cellular processes involved. Conclusion: The Firegoose incorporates Mozilla Firefox into the Gaggle environment and enables interactive sharing of data between diverse web resources and desktop software tools without maintaining local copies. Additional web sites can be incorporated easily into the framework using the scripting platform of the Firefox browser. Performing data integration in the browser allows the excellent search and navigation capabilities of the browser to be used in combination with powerful desktop tools. PMID:18021453
Bare, J Christopher; Shannon, Paul T; Schmid, Amy K; Baliga, Nitin S
2007-11-19
Information resources on the World Wide Web play an indispensable role in modern biology. But integrating data from multiple sources is often encumbered by the need to reformat data files, convert between naming systems, or perform ongoing maintenance of local copies of public databases. Opportunities for new ways of combining and re-using data are arising as a result of the increasing use of web protocols to transmit structured data. The Firegoose, an extension to the Mozilla Firefox web browser, enables data transfer between web sites and desktop tools. As a component of the Gaggle integration framework, Firegoose can also exchange data with Cytoscape, the R statistical package, Multiexperiment Viewer (MeV), and several other popular desktop software tools. Firegoose adds the capability to easily use local data to query KEGG, EMBL STRING, DAVID, and other widely-used bioinformatics web sites. Query results from these web sites can be transferred to desktop tools for further analysis with a few clicks. Firegoose acquires data from the web by screen scraping, microformats, embedded XML, or web services. We define a microformat, which allows structured information compatible with the Gaggle to be embedded in HTML documents. We demonstrate the capabilities of this software by performing an analysis of the genes activated in the microbe Halobacterium salinarum NRC-1 in response to anaerobic environments. Starting with microarray data, we explore functions of differentially expressed genes by combining data from several public web resources and construct an integrated view of the cellular processes involved. The Firegoose incorporates Mozilla Firefox into the Gaggle environment and enables interactive sharing of data between diverse web resources and desktop software tools without maintaining local copies. Additional web sites can be incorporated easily into the framework using the scripting platform of the Firefox browser. Performing data integration in the browser allows the excellent search and navigation capabilities of the browser to be used in combination with powerful desktop tools.
A WPS Based Architecture for Climate Data Analytic Services (CDAS) at NASA
NASA Astrophysics Data System (ADS)
Maxwell, T. P.; McInerney, M.; Duffy, D.; Carriere, L.; Potter, G. L.; Doutriaux, C.
2015-12-01
Faced with unprecedented growth in the Big Data domain of climate science, NASA has developed the Climate Data Analytic Services (CDAS) framework. This framework enables scientists to execute trusted and tested analysis operations in a high performance environment close to the massive data stores at NASA. The data is accessed in standard (NetCDF, HDF, etc.) formats in a POSIX file system and processed using trusted climate data analysis tools (ESMF, CDAT, NCO, etc.). The framework is structured as a set of interacting modules allowing maximal flexibility in deployment choices. The current set of module managers includes: Staging Manager: Runs the computation locally on the WPS server or remotely using tools such as celery or SLURM. Compute Engine Manager: Runs the computation serially or distributed over nodes using a parallelization framework such as celery or spark. Decomposition Manager: Manages strategies for distributing the data over nodes. Data Manager: Handles the import of domain data from long term storage and manages the in-memory and disk-based caching architectures. Kernel Manager: A kernel is an encapsulated computational unit which executes a processor's compute task. Each kernel is implemented in python exploiting existing analysis packages (e.g. CDAT) and is compatible with all CDAS compute engines and decompositions. CDAS services are accessed via a WPS API being developed in collaboration with the ESGF Compute Working Team to support server-side analytics for ESGF. The API can be executed using either direct web service calls, a python script or application, or a javascript-based web application. Client packages in python or javascript contain everything needed to make CDAS requests. The CDAS architecture brings together the tools, data storage, and high-performance computing required for timely analysis of large-scale data sets, where the data resides, to ultimately produce societal benefits. It is currently deployed at NASA in support of the Collaborative REAnalysis Technical Environment (CREATE) project, which centralizes numerous global reanalysis datasets onto a single advanced data analytics platform. This service permits decision makers to investigate climate changes around the globe, inspect model trends, compare multiple reanalysis datasets, and examine variability.
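The kernel abstraction can be sketched as follows; the class names are invented, since the abstract does not expose CDAS's actual kernel API, but the separation between kernel, decomposition, and engine mirrors the description above.

```python
# Sketch of the kernel abstraction described above: an encapsulated unit of
# computation that any compute engine/decomposition can execute. Names are
# invented; CDAS's actual kernel API is not shown in this abstract.
import numpy as np

class Kernel:
    """Encapsulated compute task, independent of the execution engine."""
    def execute(self, chunk: np.ndarray) -> np.ndarray:
        raise NotImplementedError

class TimeMeanKernel(Kernel):
    """Average over the time axis of a decomposed data chunk."""
    def execute(self, chunk: np.ndarray) -> np.ndarray:
        return chunk.mean(axis=0)

# A serial 'engine' maps the kernel over chunks produced by a decomposition;
# celery- or Spark-backed engines would distribute the same calls.
chunks = [np.random.rand(12, 4, 4) for _ in range(3)]
partials = [TimeMeanKernel().execute(c) for c in chunks]
```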
Climate Model Diagnostic Analyzer Web Service System
NASA Astrophysics Data System (ADS)
Lee, S.; Pan, L.; Zhai, C.; Tang, B.; Jiang, J. H.
2014-12-01
We have developed a cloud-enabled web-service system that empowers physics-based, multi-variable model performance evaluations and diagnoses through the comprehensive and synergistic use of multiple observational data, reanalysis data, and model outputs. We have developed a methodology to transform an existing science application code into a web service using a Python wrapper interface and Python web service frameworks. The web-service system, called Climate Model Diagnostic Analyzer (CMDA), currently supports (1) all the observational datasets from Obs4MIPs and a few ocean datasets from NOAA and Argo, which can serve as observation-based reference data for model evaluation, (2) many of the CMIP5 model outputs covering a broad range of atmosphere, ocean, and land variables from the CMIP5 specific historical runs and AMIP runs, and (3) ECMWF reanalysis outputs for several environmental variables in order to supplement observational datasets. Analysis capabilities currently supported by CMDA are (1) the calculation of annual and seasonal means of physical variables, (2) the calculation of time evolution of the means in any specified geographical region, (3) the calculation of correlation between two variables, (4) the calculation of difference between two variables, and (5) the conditional sampling of one physical variable with respect to another variable. A web user interface is chosen for CMDA because it not only lowers the learning curve and removes the adoption barrier of the tool but also enables instantaneous use, avoiding the hassle of local software installation and environment incompatibility. CMDA will be used as an educational tool for the summer school organized by JPL's Center for Climate Science in 2014. In order to support 30+ simultaneous users during the school, we have deployed CMDA to the Amazon cloud environment. The cloud-enabled CMDA will provide each student with a virtual machine while the user interaction with the system will remain the same through web-browser interfaces. The summer school will serve as a valuable testbed for the tool development, preparing CMDA to serve its target community: the Earth-science modeling and model-analysis community.
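A minimal sketch of the wrapping methodology described above, using Flask as a stand-in web framework (the abstract does not name the specific Python frameworks used): an existing analysis routine is exposed as an HTTP endpoint.

```python
# A minimal sketch of the wrapping methodology: an existing analysis
# function exposed as a web service via a Python web framework (Flask is
# used here as a stand-in; CMDA's actual framework is not named above).
from flask import Flask, jsonify, request

app = Flask(__name__)

def seasonal_mean(variable: str, season: str) -> float:
    """Stand-in for an existing science application routine."""
    return 42.0  # placeholder result

@app.route("/seasonal_mean")
def seasonal_mean_service():
    # The wrapper translates HTTP query parameters into function arguments.
    variable = request.args.get("variable", "tas")
    season = request.args.get("season", "DJF")
    return jsonify(variable=variable, season=season,
                   value=seasonal_mean(variable, season))

if __name__ == "__main__":
    app.run()
```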
Breaking and Fixing Origin-Based Access Control in Hybrid Web/Mobile Application Frameworks
Georgiev, Martin; Jana, Suman; Shmatikov, Vitaly
2014-01-01
Hybrid mobile applications (apps) combine the features of Web applications and “native” mobile apps. Like Web applications, they are implemented in portable, platform-independent languages such as HTML and JavaScript. Like native apps, they have direct access to local device resources—file system, location, camera, contacts, etc. Hybrid apps are typically developed using hybrid application frameworks such as PhoneGap. The purpose of the framework is twofold. First, it provides an embedded Web browser (for example, WebView on Android) that executes the app's Web code. Second, it supplies “bridges” that allow Web code to escape the browser and access local resources on the device. We analyze the software stack created by hybrid frameworks and demonstrate that it does not properly compose the access-control policies governing Web code and local code, respectively. Web code is governed by the same origin policy, whereas local code is governed by the access-control policy of the operating system (for example, user-granted permissions in Android). The bridges added by the framework to the browser have the same local access rights as the entire application, but are not correctly protected by the same origin policy. This opens the door to fracking attacks, which allow foreign-origin Web content included into a hybrid app (e.g., ads confined in iframes) to drill through the layers and directly access device resources. Fracking vulnerabilities are generic: they affect all hybrid frameworks, all embedded Web browsers, all bridge mechanisms, and all platforms on which these frameworks are deployed. We study the prevalence of fracking vulnerabilities in free Android apps based on the PhoneGap framework. Each vulnerability exposes sensitive local resources—the ability to read and write contacts list, local files, etc.—to dozens of potentially malicious Web domains. We also analyze the defenses deployed by hybrid frameworks to prevent resource access by foreign-origin Web content and explain why they are ineffectual. We then present NoFrak, a capability-based defense against fracking attacks. NoFrak is platform-independent, compatible with any framework and embedded browser, requires no changes to the code of the existing hybrid apps, and does not break their advertising-supported business model. PMID:25485311
UNH Data Cooperative: A Cyber Infrastructure for Earth System Studies
NASA Astrophysics Data System (ADS)
Braswell, B. H.; Fekete, B. M.; Prusevich, A.; Gliden, S.; Magill, A.; Vorosmarty, C. J.
2007-12-01
Earth system scientists and managers have a continuously growing demand for a wide array of earth observations derived from various data sources including (a) modern satellite retrievals, (b) "in-situ" records, (c) various simulation outputs, and (d) assimilated data products combining model results with observational records. The sheer quantity of data and formatting inconsistencies make it difficult for users to take full advantage of this important information resource. Thus the system could benefit from a thorough retooling of our current data processing procedures and infrastructure. Emerging technologies, like OPeNDAP and OGC map services, open standard data formats (NetCDF, HDF), and data cataloging systems (NASA-Echo, Global Change Master Directory, etc.) are providing the basis for a new approach in data management and processing, where web-services are increasingly designed to serve computer-to-computer communications without human interactions and complex analysis can be carried out over distributed computer resources interconnected via cyber infrastructure. The UNH Earth System Data Collaborative is designed to utilize the aforementioned emerging web technologies to offer new means of access to earth system data. While the UNH Data Collaborative serves a wide array of data ranging from weather station data (Climate Portal) to ocean buoy records and ship tracks (Portsmouth Harbor Initiative) to land cover characteristics, etc., the underlying data architecture shares common components for data mining and data dissemination via web-services. Perhaps the most unique element of the UNH Data Cooperative's IT infrastructure is its prototype modeling environment for regional ecosystem surveillance over the Northeast corridor, which allows the integration of complex earth system model components with the Cooperative's data services. While the complexity of the IT infrastructure to perform complex computations is continuously increasing, scientists are often forced to spend a considerable amount of time solving basic data management and preprocessing tasks and dealing with low-level computational design problems like the parallelization of model codes. Our modeling infrastructure is designed to take care of the bulk of the common tasks found in complex earth system models like I/O handling, computational domain and time management, and parallel execution of the modeling tasks. The modeling infrastructure allows scientists to focus on the numerical implementation of the physical processes on single computational objects (typically grid cells) while the framework takes care of the preprocessing of input data, establishing the data exchange between computational objects and executing the science code. In our presentation, we will discuss the key concepts of our modeling infrastructure. We will demonstrate integration of our modeling framework with data services offered by the UNH Earth System Data Collaborative via web interfaces. We will lay out the road map to turn our prototype modeling environment into a true community framework for a wide range of earth system scientists and environmental managers.
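The division of labor described above, science code per grid cell and framework code for domain and execution, can be sketched as follows; all names are invented.

```python
# Sketch of the division of labor described above: the scientist writes the
# per-grid-cell physics, and the framework handles I/O, the computational
# domain, and (potentially parallel) execution. All names are invented.
import numpy as np

def cell_process(cell_state: float, forcing: float) -> float:
    """Science code: physics for a single computational object (grid cell)."""
    return cell_state + 0.1 * forcing

def run_domain(state: np.ndarray, forcing: np.ndarray, steps: int) -> np.ndarray:
    """Framework code: time management and execution over the whole domain."""
    for _ in range(steps):
        # A parallel framework would shard this loop across workers.
        state = np.vectorize(cell_process)(state, forcing)
    return state

state = run_domain(np.zeros((180, 360)), np.ones((180, 360)), steps=10)
```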
OntoTrader: An Ontological Web Trading Agent Approach for Environmental Information Retrieval
Iribarne, Luis; Padilla, Nicolás; Ayala, Rosa; Asensio, José A.; Criado, Javier
2014-01-01
Modern Web-based Information Systems (WIS) are becoming increasingly necessary to provide support for users who are in different places with different types of information, by facilitating their access to the information, decision making, workgroups, and so forth. Design of these systems requires the use of standardized methods and techniques that enable a common vocabulary to be defined to represent the underlying knowledge. Thus, mediation elements such as traders enrich the interoperability of web components in open distributed systems. These traders must operate with other third-party traders and/or agents in the system, which must also use a common vocabulary for communication between them. This paper presents the OntoTrader architecture, an Ontological Web Trading agent based on the OMG ODP trading standard. It also presents the ontology needed by some system agents to communicate with the trading agent and the behavioral framework for the SOLERES OntoTrader agent, an Environmental Management Information System (EMIS). This framework implements a “Query-Searching/Recovering-Response” information retrieval model using a trading service, SPARQL notation, and the JADE platform. The paper also presents reflection, delegation, and federation mediation models and describes formalization, an experimental testing environment in three scenarios, and a tool which allows our proposal to be evaluated and validated. PMID:24977211
OntoTrader: an ontological Web trading agent approach for environmental information retrieval.
Iribarne, Luis; Padilla, Nicolás; Ayala, Rosa; Asensio, José A; Criado, Javier
2014-01-01
Modern Web-based Information Systems (WIS) are becoming increasingly necessary to provide support for users who are in different places with different types of information, by facilitating their access to the information, decision making, workgroups, and so forth. Design of these systems requires the use of standardized methods and techniques that enable a common vocabulary to be defined to represent the underlying knowledge. Thus, mediation elements such as traders enrich the interoperability of web components in open distributed systems. These traders must operate with other third-party traders and/or agents in the system, which must also use a common vocabulary for communication between them. This paper presents the OntoTrader architecture, an Ontological Web Trading agent based on the OMG ODP trading standard. It also presents the ontology needed by some system agents to communicate with the trading agent and the behavioral framework for the SOLERES OntoTrader agent, an Environmental Management Information System (EMIS). This framework implements a "Query-Searching/Recovering-Response" information retrieval model using a trading service, SPARQL notation, and the JADE platform. The paper also presents reflection, delegation, and federation mediation models and describes formalization, an experimental testing environment in three scenarios, and a tool which allows our proposal to be evaluated and validated.
DTMiner: identification of potential disease targets through biomedical literature mining
Xu, Dong; Zhang, Meizhuo; Xie, Yanping; Wang, Fan; Chen, Ming; Zhu, Kenny Q.; Wei, Jia
2016-01-01
Motivation: Biomedical researchers often search through massive catalogues of literature to look for potential relationships between genes and diseases. Given the rapid growth of biomedical literature, automatic relation extraction, a crucial technology in biomedical literature mining, has shown great potential to support research of gene-related diseases. Existing work in this field has produced datasets that are limited both in scale and accuracy. Results: In this study, we propose a reliable and efficient framework that takes large biomedical literature repositories as inputs, identifies credible relationships between diseases and genes, and presents possible genes related to a given disease and possible diseases related to a given gene. The framework incorporates name entity recognition (NER), which identifies occurrences of genes and diseases in texts, association detection whereby we extract and evaluate features from gene–disease pairs, and ranking algorithms that estimate how closely the pairs are related. The F1-score of the NER phase is 0.87, which is higher than existing studies. The association detection phase takes drastically less time than previous work while maintaining a comparable F1-score of 0.86. The end-to-end result achieves a 0.259 F1-score for the top 50 genes associated with a disease, which performs better than previous work. In addition, we released a web service for public use of the dataset. Availability and Implementation: The implementation of the proposed algorithms is publicly available at http://gdr-web.rwebox.com/public_html/index.php?page=download.php. The web service is available at http://gdr-web.rwebox.com/public_html/index.php. Contact: jenny.wei@astrazeneca.com or kzhu@cs.sjtu.edu.cn Supplementary information: Supplementary data are available at Bioinformatics online. PMID:27506226
DTMiner: identification of potential disease targets through biomedical literature mining.
Xu, Dong; Zhang, Meizhuo; Xie, Yanping; Wang, Fan; Chen, Ming; Zhu, Kenny Q; Wei, Jia
2016-12-01
Biomedical researchers often search through massive catalogues of literature to look for potential relationships between genes and diseases. Given the rapid growth of biomedical literature, automatic relation extraction, a crucial technology in biomedical literature mining, has shown great potential to support research of gene-related diseases. Existing work in this field has produced datasets that are limited both in scale and accuracy. In this study, we propose a reliable and efficient framework that takes large biomedical literature repositories as inputs, identifies credible relationships between diseases and genes, and presents possible genes related to a given disease and possible diseases related to a given gene. The framework incorporates name entity recognition (NER), which identifies occurrences of genes and diseases in texts, association detection whereby we extract and evaluate features from gene-disease pairs, and ranking algorithms that estimate how closely the pairs are related. The F1-score of the NER phase is 0.87, which is higher than existing studies. The association detection phase takes drastically less time than previous work while maintaining a comparable F1-score of 0.86. The end-to-end result achieves a 0.259 F1-score for the top 50 genes associated with a disease, which performs better than previous work. In addition, we released a web service for public use of the dataset. The implementation of the proposed algorithms is publicly available at http://gdr-web.rwebox.com/public_html/index.php?page=download.php. The web service is available at http://gdr-web.rwebox.com/public_html/index.php. Contact: jenny.wei@astrazeneca.com or kzhu@cs.sjtu.edu.cn. Supplementary information: Supplementary data are available at Bioinformatics online. © The Author 2016. Published by Oxford University Press.
Development of a Coordinated National Soil Moisture Network: A Pilot Study
NASA Astrophysics Data System (ADS)
Lucido, J. M.; Quiring, S. M.; Verdin, J. P.; Pulwarty, R. S.; Baker, B.; Cosgrove, B.; Escobar, V. M.; Strobel, M.
2014-12-01
Soil moisture data are critical for accurate drought prediction, flood forecasting, climate modeling, prediction of crop yields and water budgeting. However, soil moisture data are collected by many agencies and organizations in the United States using a variety of instruments and methods for varying applications. These data are often distributed and represented in disparate formats, posing significant challenges for use. In recognition of these challenges, the President's Climate Action Plan articulated the need for a coordinated national soil moisture network. In response to this action plan, a team led by the National Integrated Drought Information System has begun to develop a framework for this network and has instituted a proof-of-concept pilot study. This pilot is located in the south-central plains of the US, and will serve as a reference architecture for the requisite data systems and inform the design of the national network. The pilot comprises both in-situ and modeled soil moisture datasets (historical and real-time) and will serve the following use cases: operational drought monitoring, experimental land surface modeling, and operational hydrological modeling. The pilot will be implemented using a distributed network design in order to serve dispersed data in real-time directly from data providers. Standard service protocols will be used to enable future integration with external clients. The pilot network will additionally contain a catalog of data sets and web service endpoints, which will be used to broker web service calls. A mediation and aggregation service will then intelligently request, compile, and transform the distributed datasets from their native formats into a standardized output. This mediation framework allows data to be hosted and maintained locally by the data owners while simplifying access through a single service interface. These data services will then be used to create visualizations, for example, views of the current soil moisture conditions compared to historical baselines via a map-based web application. This talk will comprise an overview of the pilot design and implementation, a discussion of strategies for integrating in-situ and modeled soil moisture data sets, as well as lessons learned during the course of the pilot.
A Query Language for Handling Big Observation Data Sets in the Sensor Web
NASA Astrophysics Data System (ADS)
Autermann, Christian; Stasch, Christoph; Jirka, Simon; Koppe, Roland
2017-04-01
The Sensor Web provides a framework for the standardized Web-based sharing of environmental observations and sensor metadata. While the issue of varying data formats and protocols is addressed by these standards, the fast-growing size of observational data is imposing new challenges for the application of these standards. Most solutions for handling big observational datasets currently focus on remote sensing applications, while big in-situ datasets relying on vector features still lack a solid approach. Conventional Sensor Web technologies may not be adequate, as the sheer size of the data transmitted and the amount of metadata accumulated may render traditional OGC Sensor Observation Services (SOS) unusable. Besides novel approaches to store and process observation data in place, e.g. by harnessing big data technologies from mainstream IT, the access layer has to be amended to utilize and integrate these large observational data archives into applications and to enable analysis. For this, an extension to the SOS will be discussed that establishes a query language to dynamically process and filter observations at storage level, similar to the OGC Web Coverage Service (WCS) and its Web Coverage Processing Service (WCPS) extension. This will enable applications to request, e.g., spatially or temporally aggregated data sets at the resolution they can display or require. The approach will be developed and implemented in cooperation with the Alfred Wegener Institute, Helmholtz Centre for Polar and Marine Research, whose catalogue of data comprises marine observations of physical, chemical and biological phenomena from a wide variety of sensors, including mobile (like research vessels, aircraft or underwater vehicles) and stationary (like buoys or research stations). Observations are made with a high temporal resolution and the resulting time series may span multiple decades.
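For illustration, the kind of storage-level operation such a query language would express, a temporal aggregation that reduces a decades-long series to a displayable resolution, is shown below client-side with pandas; the proposed SOS extension itself is not implemented here.

```python
# The kind of storage-level operation such a query language would express:
# temporally aggregating a decade-spanning time series down to a resolution
# a client can display. Shown here client-side with pandas for illustration.
import numpy as np
import pandas as pd

index = pd.date_range("1990-01-01", "2016-12-31", freq="H")
series = pd.Series(np.random.randn(len(index)), index=index)

# Request-style parameters: temporal aggregate + bounding interval.
monthly_means = series["2000":"2009"].resample("MS").mean()
print(monthly_means.head())
```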
Sharing environmental models: An Approach using GitHub repositories and Web Processing Services
NASA Astrophysics Data System (ADS)
Stasch, Christoph; Nuest, Daniel; Pross, Benjamin
2016-04-01
The GLUES (Global Assessment of Land Use Dynamics, Greenhouse Gas Emissions and Ecosystem Services) project established a spatial data infrastructure for scientific geospatial data and metadata (http://geoportal-glues.ufz.de), where different regional collaborative projects researching the impacts of climate and socio-economic changes on sustainable land management can share their underlying base scenarios and datasets. One goal of the project is to ease the sharing of computational models between institutions and to make them easily executable in Web-based infrastructures. In this work, we present such an approach for sharing computational models relying on GitHub repositories (http://github.com) and Web Processing Services. At first, model providers upload their model implementations to GitHub repositories in order to share them with others. The GitHub platform allows users to submit changes to the model code. The changes can be discussed and reviewed before merging them. However, while GitHub allows sharing and collaborating of model source code, it does not actually allow running these models, which requires efforts to transfer the implementation to a model execution framework. We thus have extended an existing implementation of the OGC Web Processing Service standard (http://www.opengeospatial.org/standards/wps), the 52°North Web Processing Service (http://52north.org/wps) platform to retrieve all model implementations from a git (http://git-scm.com) repository and add them to the collection of published geoprocesses. The current implementation is restricted to models implemented as R scripts using WPS4R annotations (Hinz et al.) and to Java algorithms using the 52°North WPS Java API. The models hence become executable through a standardized Web API by multiple clients such as desktop or browser GIS and modelling frameworks. If the model code is changed on the GitHub platform, the changes are retrieved by the service and the processes will be updated accordingly. The admin tool of the 52°North WPS was extended to support automated retrieval and deployment of computational models from GitHub repositories. Once the R code is available in the GitHub repo, the contained process can be easily deployed and executed by simply defining the GitHub repository URL in the WPS admin tool. We illustrate the usage of the approach by sharing and running a model for land use system archetypes developed by the Helmholtz Centre for Environmental Research (UFZ, see Vaclavik et al.). The original R code was extended and published in the 52°North WPS using both, public and non-public datasets (Nüst et al., see also https://github.com/52North/glues-wps). Hosting the analysis in a Git repository now allows WPS administrators, client developers, and modelers to easily work together on new versions or completely new web processes using the powerful GitHub collaboration platform. References: Hinz, M. et. al. (2013): Spatial Statistics on the Geospatial Web. In: The 16th AGILE International Conference on Geographic Information Science, Short Papers. http://www.agile-online.org/Conference_Paper/CDs/agile_2013/Short_Papers/SP_S3.1_Hinz.pdf Nüst, D. et. al.: (2015): Open and reproducible global land use classification. In: EGU General Assembly Conference Abstracts . Vol. 17. European Geophysical Union, 2015, p. 9125, http://meetingorganizer.copernicus. org/EGU2015/EGU2015- 9125.pdf Vaclavik, T., et. al. (2013): Mapping global land system archetypes. Global Environmental Change 23(6): 1637-1647. 
Available online: October 9, 2013. DOI: 10.1016/j.gloenvcha.2013.09.004
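As a minimal illustration of the 52°North WPS Java API mentioned in the preceding abstract, the following sketch shows how a process can be published via annotations; the class name, identifiers, and stand-in logic are illustrative placeholders, not the GLUES implementation.

    import org.n52.wps.algorithm.annotation.Algorithm;
    import org.n52.wps.algorithm.annotation.Execute;
    import org.n52.wps.algorithm.annotation.LiteralDataInput;
    import org.n52.wps.algorithm.annotation.LiteralDataOutput;
    import org.n52.wps.server.AbstractAnnotatedAlgorithm;

    // A minimal annotated process of the kind the extended 52°North WPS could
    // publish after fetching it from a GitHub repository (names are illustrative).
    @Algorithm(version = "1.0.0", title = "Land use archetype classifier (sketch)")
    public class ArchetypeProcess extends AbstractAnnotatedAlgorithm {

        private String region;
        private String archetype;

        // Input parameter exposed in the WPS DescribeProcess document
        @LiteralDataInput(identifier = "region")
        public void setRegion(String region) { this.region = region; }

        // Output returned in the WPS ExecuteResponse
        @LiteralDataOutput(identifier = "archetype")
        public String getArchetype() { return archetype; }

        // Invoked by the WPS runtime when an Execute request arrives
        @Execute
        public void run() {
            archetype = "archetype-for-" + region; // stand-in for the real model logic
        }
    }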
Raza, Muhammad Taqi; Yoo, Seung-Wha; Kim, Ki-Hyung; Joo, Seong-Soon; Jeong, Wun-Cheol
2009-01-01
Web portals function as a single point of access to information on the World Wide Web (WWW). A web portal always contacts the portal's gateway for information flow, which causes network traffic over the Internet. Moreover, it provides real-time/dynamic access to stored information, but not access to real-time information itself. This inherent functionality of web portals limits their role for resource-constrained digital devices in the Ubiquitous era (U-era). This paper presents a framework for the web portal in the U-era. We have introduced the concept of Local Regions in the proposed framework, so that local queries can be resolved locally rather than being routed over the Internet. Moreover, our framework enables one-to-one device communication for real-time information flow. To provide an in-depth analysis, we first give an analytical model for query processing at the servers for our framework-oriented web portal. Finally, we deployed a testbed, one of the world's largest IP-based wireless sensor network testbeds, and observed real-time measurements that demonstrate the efficacy and workability of the proposed framework. PMID:22346693
Common Web Mapping and Mobile Device Framework for Display of NASA Real-time Data
NASA Astrophysics Data System (ADS)
Burks, J. E.
2013-12-01
Scientists have strategic goals to deliver their unique datasets and research to both collaborative partners and more broadly to the public. These datasets can have a significant impact locally and globally as has been shown by the success of the NASA Short-term Prediction Research and Transition (SPoRT) Center and SERVIR programs at Marshall Space Flight Center. Each of these respective organizations provides near real-time data at the best resolution possible to address concerns of the operational weather forecasting community (SPoRT) and to support environmental monitoring and disaster assessment (SERVIR). However, one of the biggest struggles to delivering the data to these and other Earth science community partners is formatting the product to fit into an end user's Decision Support System (DSS). The problem of delivering the data to the end-user's DSS can be a significant impediment to transitioning research to operational environments especially for disaster response where the deliver time is critical. The decision makers, in addition to the DSS, need seamless access to these same datasets from a web browser or a mobile phone for support when they are away from their DSS or for personnel out in the field. A framework has been developed for MSFC Earth Science program that can be used to easily enable seamless delivery of scientific data to end users in multiple formats. The first format is an open geospatial format, Web Mapping Service (WMS), which is easily integrated into most DSSs. The second format is a web browser display, which can be embedded within any MSFC Science web page with just a few lines of web page coding. The third format is accessible in the form of iOS and Android native mobile applications that could be downloaded from an 'app store'. The framework developed has reduced the level of effort needed to bring new and existing NASA datasets to each of these end user platforms and help extend the reach of science data.
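Since WMS is the integration path this framework offers to DSSs, a minimal sketch of a standard WMS 1.3.0 GetMap request may help; the endpoint and layer identifier below are hypothetical placeholders, not the actual MSFC services.

    import java.net.URI;

    // Building a standard WMS 1.3.0 GetMap request such as a DSS client might issue.
    public class WmsGetMapExample {
        public static void main(String[] args) {
            String endpoint = "https://example.nasa.gov/wms"; // hypothetical endpoint
            String query = String.join("&",
                "SERVICE=WMS",
                "VERSION=1.3.0",
                "REQUEST=GetMap",
                "LAYERS=sport_sst_composite",   // hypothetical layer identifier
                "CRS=EPSG:4326",
                "BBOX=25,-100,35,-80",          // lat/lon axis order for EPSG:4326 in WMS 1.3.0
                "WIDTH=1024", "HEIGHT=512",
                "FORMAT=image/png");
            System.out.println(URI.create(endpoint + "?" + query));
        }
    }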
NASA Astrophysics Data System (ADS)
Barak, Miri
2017-10-01
The new guidelines for science education emphasize the need to introduce computers and digital technologies as a means of enabling visualization and data collection and analysis. This requires science teachers to bring advanced technologies into the classroom and use them wisely. Hence, the goal of this study was twofold: to examine the application of web-based technologies in science teacher preparation courses and to examine pre-service teachers' perceptions of "cloud pedagogy", an instructional framework that applies technologies for the promotion of social constructivist learning. The study included university teachers (N = 48) and pre-service science teachers (N = 73). Data were collected from an online survey, written reflections, and interviews. The findings indicated that university teachers use technologies mainly for information management and the distribution of learning materials and less for applying social constructivist pedagogy. University teachers expect their students (i.e., pre-service science teachers) to use digital tools in their future classrooms to a greater extent than they themselves do. The findings also indicated that the "cloud pedagogy" was perceived as an appropriate instructional framework for contemporary science education. The application of the cloud pedagogy fosters four attributes: the ability to adapt to frequent changes and uncertain situations, the ability to collaborate and communicate in decentralized environments, the ability to generate and manage data, and the ability to explore new venues.
Flexible Web services integration: a novel personalised social approach
NASA Astrophysics Data System (ADS)
Metrouh, Abdelmalek; Mokhati, Farid
2018-05-01
Dynamic composition or integration remains one of the key objectives of Web services technology. This paper proposes an innovative approach to dynamic Web service composition based on functional and non-functional attributes and individual preferences. In this approach, social networks of Web services are used to maintain interactions between Web services in order to select and compose the Web services most closely related to users' preferences. We use the concept of a Web service community within a social network of Web services to considerably reduce the search space. These communities are created with the direct involvement of Web service providers.
Parmanto, Bambang
2004-01-01
Background The World Wide Web (WWW) has become an increasingly essential resource for health information consumers. The ability to obtain accurate medical information online quickly, conveniently and privately provides health consumers with the opportunity to make informed decisions and participate actively in their personal care. Little is known, however, about whether the content of this online health information is equally accessible to people with disabilities, who must rely on special devices or technologies to process online information due to their visual, hearing, mobility, or cognitive limitations. Objective To construct a framework for automated Web accessibility evaluation; to evaluate the state of accessibility of consumer health information Web sites; and to investigate possible relationships between accessibility and other features of the Web sites, including function, popularity and importance. Methods We carried out a cross-sectional study of the accessibility of health information Web sites to people with disabilities. We selected 108 consumer health information Web sites from the directory service of a Web search engine. A measurement framework was constructed to automatically measure the level of Web Accessibility Barriers (WAB) of Web sites following Web accessibility specifications. We investigated whether WAB scores differed across functional categories of the Web sites, and also evaluated the correlation between the WAB score and the Alexa traffic rank and Google PageRank of the Web sites. Results We found that none of the Web sites examined was completely accessible to people with disabilities, i.e., no site was free of violations of Web accessibility rules. However, governmental and educational health information Web sites do exhibit better Web accessibility than the other categories of Web sites (P < 0.001). We also found that the correlation between the WAB score and the popularity of a Web site is statistically significant (r = 0.28, P < 0.05), although there is no correlation between the WAB score and the importance of the Web sites (r = 0.15, P = 0.111). Conclusions Evaluation of health information Web sites shows that no Web site scrupulously abides by Web accessibility specifications, even for entities mandated under relevant laws and regulations. Government and education Web sites perform better than Web sites in other categories. Accessibility of a Web site may have a positive impact on its popularity in general, but Web accessibility may not have a significant relationship with a site's importance on the Web. PMID:15249268
[Internet presence of psychiatrists in private practice. Status, chances and legal framework].
Kuhnigk, O; Ramuschkat, M; Schreiner, J; Schäfer, I; Reimer, J
2013-05-01
Physicians increasingly use home pages to call attention to their practices. Based on predefined criteria, this study examines the web presence of psychiatrists and medical psychotherapists in private practice. All registered psychiatrists and psychotherapists of six northern German states were examined in May 2010 with regard to the existence and quality of their web presence. Homepages were evaluated by means of a standardized criteria catalogue with 42 items. Statistical analysis comprised descriptive and analytic approaches (ANOVA, linear models). The analysis included 956 physicians, 168 of whom (17.6%) had a web presence. More physicians in the city states had a web presence than in the territorial states; however, there was no difference between eastern and western states. Male physicians possessed an Internet presence more often than female physicians. The average score was 19 (± 5.2) out of 42 items, with practices of more than one physician scoring higher than single-physician practices. Websites often contained general information about the practice, medical services and diseases, and rarely online services, professional information about the physician, accessibility information for the disabled, emergency services and holiday cover. Legal requirements were not sufficiently considered by more than half of the physicians. Only a minority of psychiatrists and psychotherapists in private practice make use of their own web presence, and the quality of information varies. The criteria catalogue used in this study may offer a guideline for developing a good-quality Internet presence. A consensus Internet guideline developed with the participation of physician chambers and medical societies would help establish quality standards.
Distance Learning Courses on the Web: The Authoring Approach.
ERIC Educational Resources Information Center
Santos, Neide; Diaz, Alicia; Bibbo, Luis Mariano
This paper proposes a framework for supporting the authoring process of distance learning courses. An overview of distance learning courses and the World Wide Web is presented. The proposed framework is then described, including: (1) components of the framework--a hypermedia design methodology for authoring the course, links to related Web sites,…
NASA Astrophysics Data System (ADS)
Eberle, J.; Hüttich, C.; Schmullius, C.
2014-12-01
Spatial time series data have been freely available around the globe from earth observation satellites and meteorological stations for many years. They provide useful and important information for detecting ongoing changes of the environment, but for end users it is often too complex to extract this information from the original time series datasets. This issue led to the development of the Earth Observation Monitor (EOM), an operational framework and research project providing simple access, analysis and monitoring tools for global spatial time series data. A multi-source data processing middleware in the backend is linked to MODIS data from the Land Processes Distributed Active Archive Center (LP DAAC) and Google Earth Engine as well as daily climate station data from the NOAA National Climatic Data Center. OGC Web Processing Services are used to integrate datasets from linked data providers or external OGC-compliant interfaces into the EOM. Users can use either the web portal (webEOM) or the mobile application (mobileEOM) to execute these processing services and to retrieve the requested data for a given point or polygon in user-friendly file formats (CSV, GeoTIFF). Besides data access tools, users can also run further time series analyses such as trend calculation, breakpoint detection or the derivation of phenological parameters from vegetation time series data. Furthermore, data from climate stations can be aggregated over a given time interval. Calculated results can be visualized in the client and downloaded for offline usage. Automated monitoring and alerting for the time series data integrated by the user is provided by an OGC Sensor Observation Service coupled with an OGC Web Notification Service. Users can decide which datasets and parameters are monitored with a given filter expression (e.g., precipitation value higher than x millimeters per day, occurrence of a MODIS fire point, detection of a time series anomaly). Datasets integrated in the SOS service are updated in near real time based on the linked data providers mentioned above. An alert is automatically pushed to the user if the new data meet the conditions of the registered filter expression. This monitoring service is available on the web portal with alerting by email, and within the mobile app with alerting by email and push notification.
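A minimal Java sketch of the kind of filter check such an alerting service performs when new observations arrive; the method names, threshold, and notification step are illustrative assumptions, not the EOM implementation.

    import java.util.List;
    import java.util.function.DoublePredicate;

    public class AlertCheck {
        // Returns true if any new observation satisfies the user's filter expression,
        // e.g. "precipitation value higher than x millimeters per day".
        static boolean triggersAlert(List<Double> newObservations, DoublePredicate filter) {
            return newObservations.stream().anyMatch(filter::test);
        }

        public static void main(String[] args) {
            List<Double> dailyPrecipMm = List.of(0.0, 3.2, 41.7);
            boolean alert = triggersAlert(dailyPrecipMm, v -> v > 30.0); // x = 30 mm/day
            if (alert) System.out.println("send email / push notification to subscriber");
        }
    }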
Integrating Space Communication Network Capabilities via Web Portal Technologies
NASA Technical Reports Server (NTRS)
Johnston, Mark D.; Lee, Carlyn-Ann; Lau, Chi-Wung; Cheung, Kar-Ming; Levesque, Michael; Carruth, Butch; Coffman, Adam; Wallace, Mike
2014-01-01
We have developed a service portal prototype as part of an investigation into the feasibility of using Java portlet technology as a means of providing integrated access to NASA communications network services. Portal servers provide an attractive platform for this role due to the various built-in collaboration applications they can provide, combined with the possibility of developing custom inter-operating portlets to extend their functionality while preserving common presentation and behavior. This paper describes various options for the integration of network services related to planning and scheduling, and results based on the use of a popular open-source portal framework. Plans are underway to develop an operational SCaN Service Portal, building on the experiences reported here.
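For readers unfamiliar with Java portlet technology, the following is a minimal JSR 168 portlet sketch of the kind a portal container aggregates; the class name and rendered content are hypothetical.

    import java.io.IOException;
    import java.io.PrintWriter;
    import javax.portlet.GenericPortlet;
    import javax.portlet.PortletException;
    import javax.portlet.RenderRequest;
    import javax.portlet.RenderResponse;

    // Minimal JSR 168 portlet: the container calls doView() to render this
    // portlet's fragment alongside the other portlets on the portal page.
    public class ScheduleSummaryPortlet extends GenericPortlet {
        @Override
        protected void doView(RenderRequest request, RenderResponse response)
                throws PortletException, IOException {
            response.setContentType("text/html");
            PrintWriter out = response.getWriter();
            out.println("<p>Upcoming contacts: (fetched from the scheduling service)</p>");
        }
    }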
Information System through ANIS at CeSAM
NASA Astrophysics Data System (ADS)
Moreau, C.; Agneray, F.; Gimenez, S.
2015-09-01
ANIS (AstroNomical Information System) is a generic web tool developed at CeSAM to facilitate and standardize the implementation of astronomical data of various kinds through private and/or public dedicated information systems. The architecture of ANIS is composed of a database server containing the project data, a web user interface template providing high-level services (searching, extracting and displaying imaging and spectroscopic data using a combination of criteria, an object list, an SQL query module or a cone-search interface), a framework composed of several packages, and a metadata database managed by a web administration entity. The process of implementing a new ANIS instance at CeSAM is easy and fast: the scientific project submits data or secure access to the data, the CeSAM team installs the new instance (web interface template and metadata database), and the project administrator configures the instance with the ANIS web administration entity. Currently, CeSAM offers through ANIS web access to VO-compliant information systems for different projects (HeDaM, HST-COSMOS, CFHTLS-ZPhots, ExoDAT, ...).
An Invitation to Collaborate: The SPIRIT Open Source Health Care Portal
Bray, Brian; Molin, Joseph Dal
2001-01-01
The SPIRIT portal is a web site resulting from a joint project of the European Commission 5th Framework Research Programme for Information Society Technologies, Minoru Development (France), Conecta srl (Italy), and Sistema Information Systems (Italy). The portal indexes and disseminates free software, serves as a meeting point for health care informatics researchers, and provides collaboration services to health care innovators. This poster session describes the services of the portal and invites researchers to join a worldwide collaborative community developing evidence based health care solutions.
Proposal for a Web Encoding Service (WES) for Spatial Data Transaction
NASA Astrophysics Data System (ADS)
Siew, C. B.; Peters, S.; Rahman, A. A.
2015-10-01
The use of web services in Spatial Data Infrastructures (SDIs) is well established and standardized by the Open Geospatial Consortium (OGC). Similar web services for 3D SDIs have also been established in recent years, with extended capabilities to handle 3D spatial data. The increasing popularity of the City Geography Markup Language (CityGML) for 3D city modelling applications leads to the need to handle large spatial datasets for data delivery. This paper revisits the available OGC Web Services (OWS) and proposes the background concepts and requirements for encoding spatial data via a Web Encoding Service (WES). Furthermore, the paper discusses the data flow of the encoder within the web service, e.g., possible integration with the Web Processing Service (WPS) or the Web 3D Service (W3DS). The integration could be extended to other available web services for efficient handling of spatial data, especially 3D spatial data.
A UIMA wrapper for the NCBO annotator
Roeder, Christophe; Jonquet, Clement; Shah, Nigam H.; Baumgartner, William A.; Verspoor, Karin; Hunter, Lawrence
2010-01-01
Summary: The Unstructured Information Management Architecture (UIMA) framework and web services are emerging as useful tools for integrating biomedical text mining tools. This note describes our work, which wraps the National Center for Biomedical Ontology (NCBO) Annotator—an ontology-based annotation service—to make it available as a component in UIMA workflows. Availability: This wrapper is freely available on the web at http://bionlp-uima.sourceforge.net/ as part of the UIMA tools distribution from the Center for Computational Pharmacology (CCP) at the University of Colorado School of Medicine. It has been implemented in Java for support on Mac OS X, Linux and MS Windows. Contact: chris.roeder@ucdenver.edu PMID:20505005
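To make the wrapping concrete, here is a minimal skeleton of a UIMA analysis engine of the kind described above; the class name is hypothetical, and the remote call and annotation types are deliberately left as comments since they depend on the wrapped service.

    import org.apache.uima.analysis_component.JCasAnnotator_ImplBase;
    import org.apache.uima.analysis_engine.AnalysisEngineProcessException;
    import org.apache.uima.jcas.JCas;

    // Skeleton of a UIMA component wrapping a remote annotator: process() would
    // send the document text to the annotation web service and add the returned
    // ontology terms to the CAS as annotation feature structures.
    public class RemoteAnnotatorWrapper extends JCasAnnotator_ImplBase {
        @Override
        public void process(JCas jcas) throws AnalysisEngineProcessException {
            String documentText = jcas.getDocumentText();
            // 1. call the remote annotation service with documentText
            // 2. for each returned hit: create an annotation with the matched
            //    span and ontology term, then addToIndexes() so downstream
            //    components in the UIMA workflow can consume it
        }
    }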
Bridging the gap between Hydrologic and Atmospheric communities through a standard based framework
NASA Astrophysics Data System (ADS)
Boldrini, E.; Salas, F.; Maidment, D. R.; Mazzetti, P.; Santoro, M.; Nativi, S.; Domenico, B.
2012-04-01
Data interoperability in the study of Earth sciences is essential to performing interdisciplinary, multi-scale, multi-dimensional analyses (e.g., hydrologic impacts of global warming, regional urbanization, global population growth). This research aims to bridge the existing gap between the hydrologic and atmospheric communities at both the semantic and technological levels. Within the context of hydrology, scientists are usually concerned with data organized as time series: a time series can be seen as a variable measured at a particular point in space over a period of time (e.g., the stream flow values periodically measured by a buoy sensor in a river); atmospheric scientists instead usually organize their data as coverages: a coverage can be seen as a multidimensional data array (e.g., satellite images acquired through time). These differences make the setup of a common framework for data discovery and access non-trivial. A set of web service specifications and implementations is already in place in both scientific communities to allow data discovery and access in the different domains. The CUAHSI Hydrologic Information System (HIS) service stack comprises several service types and implementations: (1) a metacatalog (implemented as a CSW) used to discover metadata services by distributing the query to a set of catalogs; (2) time series catalogs (implemented as CSW) used to discover datasets published by the feature services; (3) feature services (implemented as WFS) containing features with data access links; and (4) sensor observation services (implemented as SOS) enabling access to the stream of acquisitions. Within the Unidata framework there lies a similar service stack for atmospheric data: (1) the broker service (implemented as a CSW) distributes a user query to a set of heterogeneous services (i.e., catalog services, but also inventory and access services); (2) the catalog service (implemented as a CSW) harvests the available metadata offered by THREDDS services and executes complex queries against them; and (3) the inventory service (implemented as THREDDS) hierarchically organizes and publishes a local collection of multi-dimensional arrays (e.g., NetCDF, GRIB files), as well as publishing auxiliary standard services to realize the actual data access and visualization (e.g., WCS, OPeNDAP, WMS). The approach followed in this research is to build on top of the existing standards and implementations by setting up a standards-aware interoperable framework able to deal with the existing heterogeneity in an organic way. As a methodology, interoperability tests against real services were performed; existing problems were thus highlighted and, where possible, solved. The use of flexible tools able to deal intelligently with heterogeneity has proven successful; in particular, experiments were carried out with both the GI-cat broker and the ESRI GeoPortal frameworks. The GI-cat discovery broker proved successful at implementing the CSW interface, as well as at federating heterogeneous resources such as the THREDDS and WCS services published by Unidata and the HydroServer, WFS and SOS services published by CUAHSI. Experiments with the ESRI GeoPortal were also successful: the GeoPortal was used to deploy a web interface able to distribute searches among catalog implementations from both the hydrologic and the atmospheric communities, including HydroServers and GI-cat, combining results from both domains in a seamless way.
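To make the brokering concrete, here is a hedged Java sketch of a metacatalog distributing a single CSW 2.0.2 GetRecords query (KVP binding with a CQL constraint) across federated catalogs; the endpoint URLs and constraint are hypothetical placeholders.

    import java.io.IOException;
    import java.net.URI;
    import java.net.URLEncoder;
    import java.net.http.HttpClient;
    import java.net.http.HttpRequest;
    import java.net.http.HttpResponse;
    import java.nio.charset.StandardCharsets;
    import java.util.List;

    public class CswBroker {
        public static void main(String[] args) throws IOException, InterruptedException {
            // Hypothetical federated catalogs from the two communities
            List<String> catalogs = List.of(
                "https://his.example.org/csw", "https://unidata.example.org/csw");
            String constraint = URLEncoder.encode(
                "AnyText like '%streamflow%'", StandardCharsets.UTF_8);
            HttpClient client = HttpClient.newHttpClient();
            for (String base : catalogs) {
                // Standard CSW 2.0.2 GetRecords request, KVP binding
                String url = base + "?service=CSW&version=2.0.2&request=GetRecords"
                    + "&typeNames=csw:Record&resultType=results"
                    + "&constraintLanguage=CQL_TEXT&constraint=" + constraint;
                HttpResponse<String> resp = client.send(
                    HttpRequest.newBuilder(URI.create(url)).GET().build(),
                    HttpResponse.BodyHandlers.ofString());
                System.out.println(base + " -> HTTP " + resp.statusCode());
                // merge the csw:SearchResults from each catalog into one result set
            }
        }
    }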
Client-controlled case information: a general system theory perspective.
Fitch, Dale
2004-07-01
The author proposes a model for client control of case information via the World Wide Web built on principles of general system theory. It incorporates the client into the design, resulting in an information structure that differs from traditional human services information-sharing practices. Referencing general system theory, the concepts of controller and controlled system, as well as entropy and negentropy, are applied to the information flow and autopoietic behavior as they relate to the boundary-maintaining functions of today's organizations. The author's conclusions synthesize general system theory and human services values to lay the foundation for an information-sharing framework for human services in the 21st century.
A Prototype Land Information Sensor Web: Design, Implementation and Implication for the SMAP Mission
NASA Astrophysics Data System (ADS)
Su, H.; Houser, P.; Tian, Y.; Geiger, J. K.; Kumar, S. V.; Gates, L.
2009-12-01
Land Surface Model (LSM) predictions are regular in time and space, but these predictions are influenced by errors in model structure, input variables and parameters, and by inadequate treatment of sub-grid-scale spatial variability. Consequently, LSM predictions are significantly improved through observation constraints applied in a data assimilation framework. Several multi-sensor satellites are currently operating which provide multiple global observations of the land surface and its related near-atmospheric properties. However, these observations are not optimal for addressing current and future land surface environmental problems. To meet future earth system science challenges, NASA will develop constellations of smart satellites in sensor web configurations which provide timely on-demand data and analysis to users, and which can be reconfigured based on the changing needs of science and available technology. A sensor web is more than a collection of satellite sensors: it is a system composed of multiple platforms interconnected by a communication network for the purpose of performing specific observations and processing the data required to support specific science goals. Sensor webs can eclipse the value of disparate sensor components by reducing response time and increasing scientific value, especially when two-way interaction between the model and the sensor web is enabled. The study of a prototype Land Information Sensor Web (LISW), sponsored by NASA, integrates the Land Information System (LIS) in a sensor web framework which allows optimal two-way information flow that enhances land surface modeling using sensor web observations and, in turn, allows sensor web reconfiguration to minimize overall system uncertainty. This prototype is based on a simulated interactive sensor web, which is then used to exercise and optimize the sensor web modeling interfaces. The Land Information Sensor Web Service-Oriented Architecture (LISW-SOA) has been developed; it is the first sensor web framework developed especially for land surface studies. Synthetic experiments based on the LISW-SOA and the virtual sensor web provide a controlled environment in which to examine the end-to-end performance of the prototype, the impact of various sensor web design trade-offs, and the eventual value of sensor webs for a particular prediction or decision support task. In this paper, the design and implementation of the LISW-SOA and the implications for the Soil Moisture Active Passive (SMAP) mission are presented. Particular attention is focused on examining the relationship between the economic investment in a sensor web (space- and air-borne, ground-based) and the accuracy of the model-predicted soil moisture that can be achieved by using such sensor observations. The study of the virtual Land Information Sensor Web (LISW) is expected to provide necessary a priori knowledge for designing and deploying the next-generation Global Earth Observation System of Systems (GEOSS).
Semantic Web Infrastructure Supporting NextFrAMES Modeling Platform
NASA Astrophysics Data System (ADS)
Lakhankar, T.; Fekete, B. M.; Vörösmarty, C. J.
2008-12-01
Emerging modeling frameworks give modelers new ways to develop model applications by providing a wide range of software components to handle common modeling tasks such as managing space and time, distributing computational tasks in a parallel processing environment, performing input/output, and providing diagnostic facilities. NextFrAMES, the next-generation update to the Framework for Aquatic Modeling of the Earth System, originally developed at the University of New Hampshire and currently hosted at The City College of New York, takes a step further by hiding most of these services from the modeler behind a platform-agnostic modeling platform. It allows scientists to focus on the implementation of scientific concepts in the form of a new modeling markup language, and a minimalist application programming interface provides the means to implement model processes. At the core of the NextFrAMES modeling platform is a run-time engine that interprets the modeling markup language, loads the module plugins, establishes the model I/O, and executes the model defined by the modeling XML and the accompanying plugins. The current implementation of the run-time engine is designed for single-processor or symmetric multiprocessing (SMP) systems, but future implementations of the run-time engine optimized for different hardware architectures are anticipated. The modeling XML and the accompanying plugins define the model structure and the computational processes in a highly abstract manner, which is not only suitable for the run-time engine but also has the potential to integrate into semantic web infrastructure, where intelligent parsers can extract information about the model configuration, such as input/output requirements, applicable space and time scales, and the underlying modeling processes. The NextFrAMES run-time engine itself is also designed to tap into web-enabled data services directly, so it can be incorporated into complex workflows to implement end-to-end applications from observation to the delivery of highly aggregated information. Our presentation will discuss the web services, ranging from OPeNDAP and WaterOneFlow data services to metadata provided through catalog services, that could serve NextFrAMES modeling applications. We will also discuss the support infrastructure needed to streamline the integration of NextFrAMES into an end-to-end application delivering highly processed information to end users. The end-to-end application will be demonstrated through examples from the State of the Global Water System effort, which builds on data services provided through WMO's Global Terrestrial Network for Hydrology to deliver water-resources-related information to policy makers for better water management. Key components of this E2E system are promoted as Community of Practice examples for the Global Earth Observation System of Systems, so the State of the Global Water System can be viewed as a test case for the interoperability of the incorporated web service components.
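The NextFrAMES plugin API itself is not spelled out in the abstract, so the following Java interface is purely a hypothetical illustration of what a minimalist contract between the run-time engine and a model process plugin declared in the modeling XML could look like.

    import java.util.Map;

    // Hypothetical sketch only: a minimalist plugin API through which a run-time
    // engine could drive model processes declared in a modeling markup language.
    public interface ModelProcessPlugin {
        void initialize(Map<String, String> config); // bind XML-declared parameters
        void execute(double[] inputs, double[] outputs); // one computational step
        void finalizeRun(); // flush outputs and release resources
    }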
The International Solid Earth Research Virtual Observatory
NASA Astrophysics Data System (ADS)
Fox, G.; Pierce, M.; Rundle, J.; Donnellan, A.; Parker, J.; Granat, R.; Lyzenga, G.; McLeod, D.; Grant, L.
2004-12-01
We describe the architecture and initial implementation of the International Solid Earth Research Virtual Observatory (iSERVO). This has been prototyped within the USA as SERVOGrid, and expansion is planned to Australia, China, Japan and other countries. We base our design on a globally scalable distributed "cyber-infrastructure", or Grid, built around a Web Services-based approach consistent with the extended Web Service Interoperability approach. The Solid Earth Science Working Group of NASA has identified several challenges for Earth Science research. In order to investigate these, we need to couple numerical simulation codes and data mining tools to observational data sets. These observational data are now available online in internet-accessible forms, and the quantity of data is expected to grow explosively over the next decade. We architect iSERVO as a loosely federated Grid of Grids, with each country involved supporting a national Solid Earth Research Grid. The national Grid operations, possibly with dedicated control centers, are linked together to support iSERVO, where an international Grid control center may eventually be necessary. We address the difficult multi-administrative-domain security and ownership issues by exposing capabilities as services for which the risk of abuse is minimized. We support large-scale simulations within a single domain using service-hosted tools (mesh generation, data repository and sensor access, GIS, visualization). Simulations typically involve sequential or parallel machines in a single domain supported by cross-continent services. We use Web Services to implement a Service Oriented Architecture (SOA), using WSDL for service description and SOAP for message formats. These are augmented by UDDI, WS-Security, WS-Notification/Eventing and WS-ReliableMessaging in the WS-I+ approach. Support for the latter two capabilities will be available over the next 6 months from the NaradaBrokering messaging system. We augment these specifications with the powerful portlet architecture using WSRP and JSR168, supported by portal containers such as uPortal, WebSphere, and Apache JetSpeed2. The latter portal aggregates component user interfaces for each iSERVO service, allowing flexible customization of the user interface. We exploit the portlets produced by the NSF NMI (Middleware Initiative) OGCE activity. iSERVO also uses specifications from the Open Geospatial Consortium (OGC), which defines a number of standards for modeling earth surface feature data and services for interacting with these data. The data models are expressed in the XML-based Geography Markup Language (GML), and the OGC service framework is being adapted to use the Web Service model. The SERVO prototype includes a GIS Grid that currently includes the core WMS and WFS (Map and Feature) services. We will follow best practice in the Grid and Web Service field and will adapt our technology as appropriate. For example, we expect to support services built on WS-RF when it is finalized and to make use of the OGSA-DAI database interfaces and their WS-I+ versions. Finally, we review advances in Web Service scripting (such as HPSearch) and workflow systems (such as GCF) and their applications to iSERVO.
IRRIMET: a web 2.0 advisory service for irrigation water management
NASA Astrophysics Data System (ADS)
De Michele, Carlo; Anzano, Enrico; Colandrea, Marco; Marotta, Luigi; Mula, Ileana; Pelosi, Anna; D'Urso, Guido; Battista Chirico, Giovanni
2016-04-01
Irrigated agriculture is one of the biggest consumers of water in Europe, especially in southern regions, where it accounts for up to 70% of total water consumption. The EU Common Agricultural Policy, combined with the Water Framework Directive, requires farmers and irrigation managers to substantially increase the efficiency of water use in agriculture over the next decade. Irrigating according to reliable crop water requirement estimates is one of the most convincing solutions for decreasing agricultural water use. Here we present an innovative irrigation advisory service applied in the Campania region (Southern Italy), where a satellite-assisted irrigation advisory service has been operating since 2006. The advisory service is based on the optimal combination of VIS-NIR high-resolution satellite images (Landsat, Deimos, RapidEye) to map crop vigour with high-resolution numerical weather prediction for assessing the meteorological variables driving crop water needs in the short-medium range. The advisory service is delivered through a simple and intuitive web app interface that makes daily real-time irrigation and evapotranspiration maps and customized weather forecasts (based on the COSMO-LEPS model) accessible from desktop computers, tablets and smartphones.
Consumer Health Informatics Aspects of Direct-to-Consumer Personal Genomic Testing.
Gray, Kathleen; Stephen, Remya; Terrill, Bronwyn; Wilson, Brenda; Middleton, Anna; Tytherleigh, Rigan; Turbitt, Erin; Gaff, Clara; Savard, Jacqueline; Hickerton, Chriselle; Newson, Ainsley; Metcalfe, Sylvia
2017-01-01
This paper uses consumer health informatics as a framework to explore whether and how direct-to-consumer personal genomic testing can be regarded as a form of information that assists consumers in managing their health. It presents findings from a qualitative content analysis of web sites that offer testing services, and of transcripts from focus groups conducted as part of a study of the Australian public's expectations of personal genomics. The content analysis showed that service offerings have some features of consumer health information but lack consistency. Focus group participants were mostly unfamiliar with the specifics of test reports and related information services. Some of their ideas about aids to knowledge were in line with the benefits described on provider web sites, but some expectations were inflated. People were ambivalent about whether these services would address consumers' health needs, interests and contexts, and whether they would support consumers' health self-management decisions and outcomes. There is scope for consumer health informatics approaches to refine the usage and utility of direct-to-consumer personal genomic testing. Further research may focus on how uptake is affected by consumers' health literacy or by services' engagement with consumers about what they really want.
Dynamic selection mechanism for quality of service aware web services
NASA Astrophysics Data System (ADS)
D'Mello, Demian Antony; Ananthanarayana, V. S.
2010-02-01
A web service is an interface to a software component that can be accessed by standard Internet protocols. Web service technology enables application-to-application communication and interoperability. The increasing number of web service providers throughout the globe has produced numerous web services providing the same or similar functionality. This necessitates tools and techniques for searching for suitable services available over the Web. UDDI (universal description, discovery and integration) was the first initiative for finding suitable web services based on the requester's functional demands. However, the requester's requirements may also include non-functional aspects such as quality of service (QoS). In this paper, the authors define a QoS model for QoS-aware and business-driven web service publishing and selection. The authors propose a QoS requirement format that lets requesters specify their complex demands on QoS for web service selection, and define a tree structure called the quality constraint tree (QCT) to represent the requester's varied requirements on QoS properties with differing preferences. The paper proposes a QoS-broker-based architecture for web service selection, which facilitates requesters in specifying their QoS requirements in order to select a qualitatively optimal web service. A web service selection algorithm is presented which ranks functionally similar web services based on the degree of satisfaction of the requester's QoS requirements and preferences. The paper defines web service provider qualities to distinguish qualitatively competitive web services, and also presents the modelling and selection mechanism for the requester's alternative constraints defined on QoS. The authors implement the QoS-broker-based system to demonstrate the correctness of the proposed web service selection mechanism.
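As a minimal sketch of such degree-of-satisfaction ranking, the following Java example assumes a simple weighted-sum scoring rule; the property names, weights, and scoring function are illustrative, not the authors' exact algorithm.

    import java.util.Comparator;
    import java.util.List;
    import java.util.Map;

    public class QosRanker {
        record Candidate(String name, Map<String, Double> qos) {}

        // Score a candidate against the requester's minimum QoS constraints,
        // weighting each property's degree of satisfaction by its preference.
        static double score(Candidate c, Map<String, Double> minRequired,
                            Map<String, Double> weights) {
            double s = 0.0;
            for (var e : minRequired.entrySet()) {
                double value = c.qos().getOrDefault(e.getKey(), 0.0);
                double degree = Math.min(value / e.getValue(), 1.0); // 1 = fully satisfied
                s += weights.getOrDefault(e.getKey(), 1.0) * degree;
            }
            return s;
        }

        public static void main(String[] args) {
            var required = Map.of("availability", 0.99, "throughput", 100.0);
            var weights = Map.of("availability", 0.7, "throughput", 0.3);
            var candidates = List.of(
                new Candidate("A", Map.of("availability", 0.999, "throughput", 80.0)),
                new Candidate("B", Map.of("availability", 0.95, "throughput", 150.0)));
            candidates.stream()
                .sorted(Comparator.comparingDouble(
                    (Candidate c) -> score(c, required, weights)).reversed())
                .forEach(c -> System.out.println(c.name())); // best-ranked first
        }
    }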
EO Domain Specific Knowledge Enabled Services (KES-B)
NASA Astrophysics Data System (ADS)
Varas, J.; Busto, J.; Torguet, R.
2004-09-01
This paper presents a number of major statements with respect to the vision, mission and technological approach of the Technological Research Project (TRP) "EO Domain Specific Knowledge Enabled Services" (project acronym KES-B), currently under development at the European Space Research Institute (ESRIN) under contract 16397/02/I-SB. Building on the ongoing R&D activities, the KES-B project aims to demonstrate with a prototype system the feasibility of applying innovative knowledge-based technologies to provide services for easy, scheduled and controlled exploitation of EO resources (e.g., data, algorithms, procedures, storage, processors, ...), to automate the generation of products, and to support users in easily identifying and accessing the required information or products by using their own vocabulary, domain knowledge and preferences. The ultimate goals of KES-B are summarized in the provision of two main types of KES services: first, the Search service (also referred to as Product Exploitation or Information Retrieval); and second, the Production service (also referred to as Information Extraction), with the strategic advantage that they are enabled by knowledge consolidated (formalized) within the system. The KES-B technical solution approach is driven by a strong commitment to the adoption of industry (XML-based) language standards, aiming at an interoperable, scalable and flexible operational prototype. In that sense, the Search KES services build on the adoption of consolidated and/or emerging W3C semantic web standards. Notably, the languages/models Dublin Core (DC), Uniform Resource Identifier (URI), Resource Description Framework (RDF) and Web Ontology Language (OWL), and COTS such as Protege [1] and JENA [2], are being integrated into the system as building blocks for the construction of the KES-based Search services. On the other hand, the Production KES services build on top of workflow management standards and tools: the Business Process Execution Language (BPEL), the Web Services Definition Language (WSDL), and the Collaxa [3] COTS tool for workflow management are being integrated for the construction of the KES-B Production services. The KES-B platform (web portal and web server) architecture is built on the J2EE reference architecture. These languages are the means of codifying the different types of knowledge to be formalized in the system, constituting its ontological architecture. This in fact enables interoperability with other KES-based systems committed to the same standards. The motivation behind this vision points towards the construction of a semantic-web-based GRID supply-chain infrastructure for EO services, in line with the suggestions of the INSPIRE initiative.
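Since JENA is named as one of the building blocks, a minimal sketch using the (modern) Apache Jena API may help illustrate the RDF/Dublin Core side; the resource URI and literal values are illustrative assumptions, not the KES-B data model.

    import org.apache.jena.rdf.model.Model;
    import org.apache.jena.rdf.model.ModelFactory;
    import org.apache.jena.rdf.model.Resource;
    import org.apache.jena.vocabulary.DC;

    // Describing an EO product with Dublin Core terms in an RDF model,
    // the kind of formalized knowledge a semantic search service queries.
    public class EoProductRdf {
        public static void main(String[] args) {
            Model model = ModelFactory.createDefaultModel();
            Resource product = model.createResource("http://example.org/eo/product/42")
                .addProperty(DC.title, "Sample EO product")
                .addProperty(DC.subject, "land cover");
            model.write(System.out, "RDF/XML"); // serialize for exchange
        }
    }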
WebDISCO: a web service for distributed cox model learning without patient-level data sharing.
Lu, Chia-Lun; Wang, Shuang; Ji, Zhanglong; Wu, Yuan; Xiong, Li; Jiang, Xiaoqian; Ohno-Machado, Lucila
2015-11-01
The Cox proportional hazards model is a widely used method for analyzing survival data. To achieve sufficient statistical power, a survival analysis usually requires a large amount of data, and data sharing across institutions could be a potential workaround for providing this added power. The authors develop a web service for distributed Cox model learning (WebDISCO), which focuses on the proof of concept and algorithm development for federated survival analysis. The sensitive patient-level data can be processed locally, and only the less sensitive intermediate statistics are exchanged to build a global Cox model. Mathematical derivation shows that the proposed distributed algorithm is identical to the centralized Cox model. The authors evaluated the proposed framework at the University of California, San Diego (UCSD), Emory, and Duke. The experimental results show that both distributed and centralized models result in near-identical model coefficients, with differences in the range [Formula: see text] to [Formula: see text]. The results confirm the mathematical derivation and show that the implementation of the distributed model can achieve the same results as the centralized implementation. The proposed method serves as a proof of concept, in which a publicly available dataset was used to evaluate the performance. The authors do not intend to suggest that this method can resolve policy and engineering issues related to the federated use of institutional data, but it should serve as evidence of the technical feasibility of the proposed approach. Conclusions: WebDISCO (Web-based Distributed Cox Regression Model; https://webdisco.ucsd-dbmi.org:8443/cox/) provides a proof-of-concept web service that implements a distributed algorithm to conduct distributed survival analysis without sharing patient-level data.
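A brief sketch of the mathematics behind this decomposition, assuming the standard Cox partial likelihood notation (event indicator \(\delta_i\), covariates \(x_i\), risk set \(R(t_i)\)); the per-site aggregate reading is our paraphrase of the federated idea, not the authors' exact derivation:

    \[
      \ell(\beta) \;=\; \sum_{i:\,\delta_i = 1} \Big[\, x_i^{\top}\beta
        \;-\; \log \sum_{j \in R(t_i)} \exp\!\big(x_j^{\top}\beta\big) \Big]
    \]

Because the inner sums range over the risk set, with horizontally partitioned data they split into per-site sums: at each event time, site \(k\) only needs to report local aggregates such as \(\sum_{j \in R_k(t)} e^{x_j^{\top}\beta}\) and the corresponding first- and second-moment sums, from which a coordinating server can assemble the global gradient and Hessian for Newton-Raphson iterations without ever seeing patient-level data.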
Borderless Geospatial Web (bolegweb)
NASA Astrophysics Data System (ADS)
Cetl, V.; Kliment, T.; Kliment, M.
2016-06-01
Effective access to and use of geospatial information (GI) resources is of critical importance in the modern knowledge-based society. Standard web services defined by the Open Geospatial Consortium (OGC) are frequently used within implementations of spatial data infrastructures (SDIs) to facilitate the discovery and use of geospatial data. These data are stored in databases located in a layer called the invisible web and are thus ignored by search engines. An SDI uses a catalogue (discovery) service for the web as a gateway to the GI world through metadata defined by ISO standards, which are structurally different from OGC metadata. Therefore, a crosswalk needs to be implemented to bridge the OGC resources discovered on the mainstream web with those documented by metadata in an SDI, enriching its information extent. A public, globally available and user-friendly portal of OGC resources available on the web ensures and enhances the use of GI in a multidisciplinary context and bridges the geospatial web from the end-user perspective, thus opening its borders to everybody. The project "Crosswalking the layers of geospatial information resources to enable a borderless geospatial web", with the acronym BOLEGWEB, is ongoing as a postdoctoral research project at the Faculty of Geodesy, University of Zagreb, Croatia (http://bolegweb.geof.unizg.hr/). The research leading to the results of the project has received funding from the European Union Seventh Framework Programme (FP7 2007-2013) under Marie Curie FP7-PEOPLE-2011-COFUND. The project started in November 2014 and is planned to finish by the end of 2016. This paper provides an overview of the project, the research questions and methodology, the results achieved so far, and future steps.
Realising the Uncertainty Enabled Model Web
NASA Astrophysics Data System (ADS)
Cornford, D.; Bastin, L.; Pebesma, E. J.; Williams, M.; Stasch, C.; Jones, R.; Gerharz, L.
2012-12-01
The FP7 funded UncertWeb project aims to create the "uncertainty enabled model web". The central concept here is that geospatial models and data resources are exposed via standard web service interfaces, such as the Open Geospatial Consortium (OGC) suite of encodings and interface standards, allowing the creation of complex workflows combining both data and models. The focus of UncertWeb is on the issue of managing uncertainty in such workflows, and providing the standards, architecture, tools and software support necessary to realise the "uncertainty enabled model web". In this paper we summarise the developments in the first two years of UncertWeb, illustrating several key points with examples taken from the use case requirements that motivate the project. Firstly we address the issue of encoding specifications. We explain the usage of UncertML 2.0, a flexible encoding for representing uncertainty based on a probabilistic approach. This is designed to be used within existing standards such as Observations and Measurements (O&M) and data quality elements of ISO19115 / 19139 (geographic information metadata and encoding specifications) as well as more broadly outside the OGC domain. We show profiles of O&M that have been developed within UncertWeb and how UncertML 2.0 is used within these. We also show encodings based on NetCDF and discuss possible future directions for encodings in JSON. We then discuss the issues of workflow construction, considering discovery of resources (both data and models). We discuss why a brokering approach to service composition is necessary in a world where the web service interfaces remain relatively heterogeneous, including many non-OGC approaches, in particular the more mainstream SOAP and WSDL approaches. We discuss the trade-offs between delegating uncertainty management functions to the service interfaces themselves and integrating the functions in the workflow management system. We describe two utility services to address conversion between uncertainty types, and between the spatial / temporal support of service inputs / outputs. Finally we describe the tools being generated within the UncertWeb project, considering three main aspects: i) Elicitation of uncertainties on model inputs. We are developing tools to enable domain experts to provide judgements about input uncertainties from UncertWeb model components (e.g. parameters in meteorological models) which allow panels of experts to engage in the process and reach a consensus view on the current knowledge / beliefs about that parameter or variable. We are developing systems for continuous and categorical variables as well as stationary spatial fields. ii) Visualisation of the resulting uncertain outputs from the end of the workflow, but also at intermediate steps. At this point we have prototype implementations driven by the requirements from the use cases that motivate UncertWeb. iii) Sensitivity and uncertainty analysis on model outputs. Here we show the design of the overall system we are developing, including the deployment of an emulator framework to allow computationally efficient approaches. We conclude with a summary of the open issues and remaining challenges we are facing in UncertWeb, and provide a brief overview of how we plan to tackle these.
The peer review system (PRS) for quality assurance and treatment improvement in radiation therapy
NASA Astrophysics Data System (ADS)
Le, Anh H. T.; Kapoor, Rishabh; Palta, Jatinder R.
2012-02-01
Peer reviews are needed across all disciplines of medicine to address complex medical challenges in disease care, medical safety, insurance coverage handling, and public safety. Radiation therapy utilizes technologically advanced imaging for treatment planning, often with excellent efficacy. Since planning data requirements are substantial, patients are at risk of repeat diagnostic procedures or suboptimal therapeutic intervention due to a lack of knowledge regarding previous treatments. The Peer Review System (PRS) makes this critical radiation therapy information readily available on demand via Web technology. The PRS has been developed with current Web technology, the .NET framework, and an in-house DICOM library. With the advantages of a Web server-client architecture, including the IIS web server, SOAP Web Services, and Silverlight on the client side, the patient data can be visualized through a web browser and distributed across multiple locations over the local area network and the Internet. This PRS will significantly improve the quality, safety, and accessibility of treatment plans in cancer therapy. Furthermore, the secure Web-based PRS with DICOM-RT compliance provides flexible utilities for the organization, sorting, and retrieval of imaging studies and treatment plans to optimize patient treatment and ultimately improve patient safety and treatment quality.
NASA Astrophysics Data System (ADS)
Juanle, Wang; Shuang, Li; Yunqiang, Zhu
2005-10-01
To meet the requirements of the China National Scientific Data Sharing Program (NSDSP), the web-oriented RS Image Publication System (RSIPS) was researched and developed based on Java Servlet technology. The RSIPS framework is designed in three tiers: the Presentation Tier, the Application Service Tier, and the Data Resource Tier. The Presentation Tier provides the user interface for data query, review and download; for the convenience of users, a visual spatial query interface is included. Serving as the middle tier, the Application Service Tier controls all actions between users and databases. The Data Resource Tier stores RS images in files and relational databases. RSIPS is developed with cross-platform programming based on Java Servlets, one of the advanced techniques of the J2EE architecture. A prototype of RSIPS has been developed and applied in geosciences clearinghouse practice, one of the experimental units of the NSDSP in China.
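A minimal sketch of an Application Service Tier servlet in the style described; the parameter name, query logic, and output are illustrative assumptions, not the RSIPS code.

    import java.io.IOException;
    import java.io.PrintWriter;
    import javax.servlet.http.HttpServlet;
    import javax.servlet.http.HttpServletRequest;
    import javax.servlet.http.HttpServletResponse;

    // Middle-tier servlet handling a spatial query from the Presentation Tier
    // and (in a real system) looking up matching images in the Data Resource Tier.
    public class ImageQueryServlet extends HttpServlet {
        @Override
        protected void doGet(HttpServletRequest request, HttpServletResponse response)
                throws IOException {
            String bbox = request.getParameter("bbox"); // e.g. "110,30,120,40"
            response.setContentType("text/html");
            PrintWriter out = response.getWriter();
            out.println("<p>Images intersecting " + bbox
                + " (retrieved from the Data Resource Tier)</p>");
        }
    }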
Toxicology ontology perspectives.
Hardy, Barry; Apic, Gordana; Carthew, Philip; Clark, Dominic; Cook, David; Dix, Ian; Escher, Sylvia; Hastings, Janna; Heard, David J; Jeliazkova, Nina; Judson, Philip; Matis-Mitchell, Sherri; Mitic, Dragana; Myatt, Glenn; Shah, Imran; Spjuth, Ola; Tcheremenskaia, Olga; Toldo, Luca; Watson, David; White, Andrew; Yang, Chihae
2012-01-01
The field of predictive toxicology requires the development of open, public, computable, standardized toxicology vocabularies and ontologies to support the applications required by in silico, in vitro, and in vivo toxicology methods and related analysis and reporting activities. In this article we review ontology developments based on a set of perspectives showing how ontologies are being used in predictive toxicology initiatives and applications. Perspectives on resources and initiatives reviewed include OpenTox, eTOX, Pistoia Alliance, ToxWiz, Virtual Liver, EU-ADR, BEL, ToxML, and Bioclipse. We also review existing ontology developments in neighboring fields that can contribute to establishing an ontological framework for predictive toxicology. A significant set of resources is already available to provide a foundation for an ontological framework for 21st century mechanistic-based toxicology research. Ontologies such as ToxWiz provide a basis for application to toxicology investigations, whereas other ontologies under development in the biological, chemical, and biomedical communities could be incorporated in an extended future framework. OpenTox has provided a semantic web framework for the implementation of such ontologies into software applications and linked data resources. Bioclipse developers have shown the benefit of interoperability obtained through ontology by being able to link their workbench application with remote OpenTox web services. Although these developments are promising, an increased international coordination of efforts is greatly needed to develop a more unified, standardized, and open toxicology ontology framework.
2012-01-01
Background The OpenTox Framework, developed by the partners in the OpenTox project (http://www.opentox.org), aims at providing unified access to toxicity data, predictive models and validation procedures. Interoperability of resources is achieved using a common information model, based on the OpenTox ontologies, describing predictive algorithms, models and toxicity data. As toxicological data may come from different, heterogeneous sources, a deployed ontology, unifying the terminology and the resources, is critical for the rational and reliable organization of the data, and its automatic processing. Results The following related ontologies have been developed for OpenTox: a) Toxicological ontology – listing the toxicological endpoints; b) Organs system and Effects ontology – addressing organs, targets/examinations and effects observed in in vivo studies; c) ToxML ontology – representing semi-automatic conversion of the ToxML schema; d) OpenTox ontology – representation of OpenTox framework components: chemical compounds, datasets, types of algorithms, models and validation web services; e) ToxLink–ToxCast assays ontology and f) OpenToxipedia community knowledge resource on toxicology terminology. OpenTox components are made available through standardized REST web services, where every compound, data set, and predictive method has a unique resolvable address (URI), used to retrieve its Resource Description Framework (RDF) representation, or to initiate the associated calculations and generate new RDF-based resources. The services support the integration of toxicity and chemical data from various sources, the generation and validation of computer models for toxic effects, and seamless integration of new algorithms and scientifically sound validation routines, and provide a flexible framework which allows building an arbitrary number of applications tailored to solving different problems by end users (e.g. toxicologists). Availability The OpenTox toxicological ontology projects may be accessed via the OpenTox ontology development page http://www.opentox.org/dev/ontology; the OpenTox ontology is available as OWL at http://opentox.org/api/1.1/opentox.owl; the ToxML–OWL conversion utility is an open source resource available at http://ambit.svn.sourceforge.net/viewvc/ambit/branches/toxml-utils/ PMID:22541598
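As a concrete illustration of the resolvable-URI design, the sketch below dereferences a resource URI with an RDF Accept header and loads the result into an rdflib graph. The example URI is hypothetical and stands in for any OpenTox compound or dataset address.

```python
# Sketch: dereference an OpenTox-style resource URI and parse its RDF
# representation. The URI below is a hypothetical example; real services
# follow the resolvable-address pattern described in the abstract.
import requests
from rdflib import Graph

uri = "https://apps.ideaconsult.net/ambit2/compound/1"  # hypothetical example

resp = requests.get(uri, headers={"Accept": "application/rdf+xml"}, timeout=30)
resp.raise_for_status()

g = Graph()
g.parse(data=resp.text, format="xml")  # RDF/XML representation
print(f"{len(g)} triples retrieved")
for s, p, o in list(g)[:10]:
    print(s, p, o)
```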
Tcheremenskaia, Olga; Benigni, Romualdo; Nikolova, Ivelina; Jeliazkova, Nina; Escher, Sylvia E; Batke, Monika; Baier, Thomas; Poroikov, Vladimir; Lagunin, Alexey; Rautenberg, Micha; Hardy, Barry
2012-04-24
The OpenTox Framework, developed by the partners in the OpenTox project (http://www.opentox.org), aims at providing unified access to toxicity data, predictive models and validation procedures. Interoperability of resources is achieved using a common information model, based on the OpenTox ontologies, describing predictive algorithms, models and toxicity data. As toxicological data may come from different, heterogeneous sources, a deployed ontology, unifying the terminology and the resources, is critical for the rational and reliable organization of the data, and its automatic processing. The following related ontologies have been developed for OpenTox: a) Toxicological ontology - listing the toxicological endpoints; b) Organs system and Effects ontology - addressing organs, targets/examinations and effects observed in in vivo studies; c) ToxML ontology - representing semi-automatic conversion of the ToxML schema; d) OpenTox ontology - representation of OpenTox framework components: chemical compounds, datasets, types of algorithms, models and validation web services; e) ToxLink-ToxCast assays ontology and f) OpenToxipedia community knowledge resource on toxicology terminology. OpenTox components are made available through standardized REST web services, where every compound, data set, and predictive method has a unique resolvable address (URI), used to retrieve its Resource Description Framework (RDF) representation, or to initiate the associated calculations and generate new RDF-based resources. The services support the integration of toxicity and chemical data from various sources, the generation and validation of computer models for toxic effects, and seamless integration of new algorithms and scientifically sound validation routines, and provide a flexible framework which allows building an arbitrary number of applications tailored to solving different problems by end users (e.g. toxicologists). The OpenTox toxicological ontology projects may be accessed via the OpenTox ontology development page http://www.opentox.org/dev/ontology; the OpenTox ontology is available as OWL at http://opentox.org/api/1.1/opentox.owl; the ToxML-OWL conversion utility is an open source resource available at http://ambit.svn.sourceforge.net/viewvc/ambit/branches/toxml-utils/
The EuroGEOSS Advanced Operating Capacity
NASA Astrophysics Data System (ADS)
Nativi, S.; Vaccari, L.; Stock, K.; Diaz, L.; Santoro, M.
2012-04-01
The concept of multidisciplinary interoperability for managing societal issues is a major challenge presently faced by the Earth and Space Science Informatics community. With this in mind, the EuroGEOSS project was launched on May 1st, 2009 for a three-year period, aiming to demonstrate the added value to the scientific community and society of making existing Earth observing systems and applications available in an interoperable manner within the GEOSS and INSPIRE frameworks. In the first period, the project built an Initial Operating Capability (IOC) in the three strategic areas of Drought, Forestry and Biodiversity; this was then enhanced into an Advanced Operating Capacity (AOC) for multidisciplinary interoperability. Finally, the project extended the infrastructure to other scientific domains (geology, hydrology, etc.). The EuroGEOSS multidisciplinary AOC is based on the Brokering Approach. This approach aims to achieve multidisciplinary interoperability by developing an extended SOA (Service Oriented Architecture) in which a new type of "expert" component is introduced: the Broker. Brokers implement all the mediation and distribution functionalities needed to interconnect the distributed and heterogeneous resources characterizing a System of Systems (SoS) environment. The EuroGEOSS AOC comprises the following components:
• EuroGEOSS Discovery Broker: providing harmonized discovery functionalities by mediating and distributing user queries against tens of heterogeneous services;
• EuroGEOSS Access Broker: enabling users to seamlessly access and use heterogeneous remote resources via a unique and standard service;
• EuroGEOSS Web 2.0 Broker: enhancing the capabilities of the Discovery Broker with queries towards new Web 2.0 services;
• EuroGEOSS Semantic Discovery Broker: enhancing the capabilities of the Discovery Broker with semantic query expansion;
• EuroGEOSS Natural Language Search Component: providing users with the possibility to search for resources using natural language queries;
• Service Composition Broker: allowing users to compose and execute complex Business Processes, based on the technology developed by the FP7 UncertWeb project.
Recently, the EuroGEOSS Brokering framework was presented at the GEO-VIII Plenary and Exhibition in Istanbul and introduced into the GEOSS Common Infrastructure.
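The brokering pattern itself is easy to state in code. Below is a hedged, minimal sketch (not the actual EuroGEOSS/GI-cat implementation) of a discovery broker that mediates one user query across heterogeneous catalogue protocols and merges the answers; the adapter functions are stubs standing in for real protocol clients.

```python
# Illustrative sketch of the brokering approach: one query is distributed
# to per-protocol adapters in parallel and the results are merged into a
# single harmonized list. Adapters are stubbed for illustration.
from concurrent.futures import ThreadPoolExecutor

def search_csw(query):         # adapter for an OGC CSW catalogue (stub)
    return [{"title": f"CSW hit for {query}", "source": "csw"}]

def search_opensearch(query):  # adapter for an OpenSearch endpoint (stub)
    return [{"title": f"OpenSearch hit for {query}", "source": "opensearch"}]

ADAPTERS = [search_csw, search_opensearch]  # one mediator per protocol

def broker_discover(query):
    """Distribute the query to all adapters in parallel and merge results."""
    with ThreadPoolExecutor() as pool:
        result_lists = pool.map(lambda adapter: adapter(query), ADAPTERS)
    return [hit for results in result_lists for hit in results]

print(broker_discover("drought indices Europe"))
```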
A Common Metadata System for Marine Data Portals
NASA Astrophysics Data System (ADS)
Wosniok, C.; Breitbach, G.; Lehfeldt, R.
2012-04-01
Processing and allocation of marine datasets depend on the nature of the data resulting from field campaigns, continuous monitoring and numerical modeling. Two research and development projects in northern Germany manage different types of marine data. Due to different data characteristics and institutional frameworks, separate data portals are required. This paper describes the integration of distributed marine data in Germany. The Marine Data Infrastructure of Germany (MDI-DE) supports public authorities in the German coastal zone with the implementation of European directives like INSPIRE or the Marine Strategy Framework Directive. This is carried out through setting up standardized web services within a network of participating coastal agencies and the installation of a common data portal (http://www.mdi-de.org), which integrates distributed marine data concerning coastal engineering, coastal water protection and nature conservation in an interoperable and harmonized manner for administrative and scientific purposes as well as for information of the general public. The Coastal Observation System for Northern and Arctic Seas (COSYNA) aims at developing and testing analysis systems for the operational synoptic description of the environmental status of the North Sea and of Arctic coastal waters. This is done by establishing a network of monitoring facilities and the provision of its data in near-real-time. In situ measurements with poles, ferry boxes, and buoys, together with remote sensing measurements and the assimilation of these data into simulation results, enable COSYNA to provide pre-operational 'products' that are beyond the present routinely applied techniques in observation and modelling. The data allocation in near-real-time requires thoroughly executed data validation, which is processed on the fly before data is passed on to the COSYNA portal (http://kofserver2.hzg.de/codm/). Both projects apply OGC standards such as Web Map Service (WMS), Web Feature Service (WFS) and Sensor Observation Service (SOS), which ensures interoperability and extensibility. In addition, metadata, a crucial component for searching and finding information in large data infrastructures, is provided via the Catalogue Web Service (CS-W). MDI-DE and COSYNA rely on the metadata information system for marine metadata NOKIS, which reflects a metadata profile tailored for marine data according to the specifications of German coastal authorities. In spite of this common software base, interoperability between the two data collections requires constant alignment of the diverse data processed by the two portals. While monitoring data in the MDI-DE is currently rather campaign-based, COSYNA has to fit constantly evolving time series into metadata sets. With all data following the same metadata profile, we now reach full interoperability between the different data collections. The distributed marine information system provides options to search, find and visualise the harmonised results from continuous monitoring, field campaigns, numerical modeling and other data in one web client.
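For a sense of how a client consumes such a CS-W endpoint, here is a minimal OWSLib sketch; the catalogue URL and search term are assumptions for illustration, not published values.

```python
# Sketch: discover marine datasets through a CS-W endpoint with OWSLib.
# The endpoint URL below is hypothetical; MDI-DE publishes its own.
from owslib.csw import CatalogueServiceWeb
from owslib.fes import PropertyIsLike

csw = CatalogueServiceWeb("https://www.mdi-de.org/csw")  # hypothetical URL

query = PropertyIsLike("csw:AnyText", "%salinity%")  # free-text constraint
csw.getrecords2(constraints=[query], maxrecords=10)

for rec in csw.records.values():
    print(rec.identifier, "-", rec.title)
```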
OpenSearch technology for geospatial resources discovery
NASA Astrophysics Data System (ADS)
Papeschi, Fabrizio; Enrico, Boldrini; Mazzetti, Paolo
2010-05-01
In 2005, the term Web 2.0 was coined by Tim O'Reilly to describe a quickly growing set of Web-based applications that share a common philosophy of "mutually maximizing collective intelligence and added value for each participant by formalized and dynamic information sharing". Around the same period, OpenSearch, a new Web 2.0 technology, was developed. More properly, OpenSearch is a collection of technologies that allow publishing of search results in a format suitable for syndication and aggregation; it is a way for websites and search engines to publish search results in a standard and accessible format. Due to its strong impact on the way the Web is perceived by users, and also due to its relevance for businesses, Web 2.0 has attracted the attention of both the mass media and the scientific community. This explosive growth in popularity of Web 2.0 technologies like OpenSearch, and practical applications of Service Oriented Architecture (SOA), resulted in an increased interest in the similarities, convergence, and potential synergy of these two concepts. SOA can be considered the philosophy of encapsulating application logic in services with a uniformly defined interface and making these publicly available via discovery mechanisms. Service consumers may then retrieve these services, and compose and use them according to their current needs. A great degree of similarity between SOA and Web 2.0 may be leading to a convergence between the two paradigms. They also expose divergent elements, such as Web 2.0's support for human interaction as opposed to the typical SOA machine-to-machine interaction. Following these considerations, the Geospatial Information (GI) domain is also taking its first steps towards a new approach to data publishing and discovery, in particular taking advantage of the OpenSearch technology. A specific GI niche is represented by the OGC Catalog Service for the Web (CSW), part of the OGC Web Services (OWS) specifications suite, which provides a set of services for discovery, access, and processing of geospatial resources in a SOA framework. GI-cat is a distributed CSW framework implementation developed by the ESSI Lab of the Italian National Research Council (CNR-IMAA) and the University of Florence. It provides brokering and mediation functionalities towards heterogeneous resources and inventories, exposing several standard interfaces for query distribution. This work focuses on a new GI-cat interface which allows the catalog to be queried according to the OpenSearch syntax specification, thus filling the gap between the SOA architectural design of the CSW and Web 2.0. At the moment, there is no OGC standard specification on this topic, but an official change request has been proposed in order to enable OGC catalogues to support OpenSearch queries. In this change request, an OpenSearch extension is proposed, providing a standard mechanism to query a resource based on temporal and geographic extents. Two new catalog operations are also proposed in order to publish a suitable OpenSearch interface. This extended interface is implemented by the modular GI-cat architecture by adding a new profiling module called the "OpenSearch profiler". Since GI-cat also acts as a clearinghouse catalog, another component called the "OpenSearch accessor" is added in order to access OpenSearch-compliant services. An important role in the GI-cat extension is played by the adopted mapping strategy. Two different kinds of mapping are required: query mapping and response-element mapping.
Query mapping is provided in order to fit the simple OpenSearch query syntax to the complex CSW query expressed in the OGC Filter syntax. The GI-cat internal data model is based on the ISO 19115 profile, which is more complex than the simple XML syndication formats, such as RSS 2.0 and Atom 1.0, suggested by OpenSearch. Once response elements are available, they must be translated from the GI-cat internal data model to the above-mentioned syndication formats before presentation; the mapping process is bidirectional. When GI-cat is used to access OpenSearch-compliant services, the CSW query must be mapped to the OpenSearch query, and the response elements must be translated according to the GI-cat internal data model. As a result of these extensions, GI-cat provides a user-friendly facade to the complex CSW interface, enabling it to be queried, for example, using a browser toolbar.
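A minimal client-side sketch of the OpenSearch side of this interface follows. The endpoint is hypothetical, and the geo/time parameter names follow the draft extensions discussed above; an actual service advertises its exact parameter names in its OpenSearch description document.

```python
# Sketch of an OpenSearch client querying a GI-cat-style endpoint and
# reading the Atom response. The endpoint URL is hypothetical; parameter
# names follow the OpenSearch Geo/Time extension conventions.
import feedparser  # pip install feedparser
import requests

ENDPOINT = "https://example.org/gi-cat/opensearch"  # hypothetical

params = {
    "searchTerms": "sea surface temperature",
    "bbox": "-10,35,5,45",                  # geo extension: west,south,east,north
    "time:start": "2009-01-01T00:00:00Z",   # time extension
    "time:end": "2009-12-31T23:59:59Z",
    "count": 10,
}
resp = requests.get(ENDPOINT, params=params, timeout=30)
feed = feedparser.parse(resp.text)  # Atom/RSS syndication formats
for entry in feed.entries:
    print(entry.title, entry.link)
```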
NASA Astrophysics Data System (ADS)
Wong, John-Michael; Stojadinovic, Bozidar
2005-05-01
A framework has been defined for storing and retrieving civil infrastructure monitoring data over a network. The framework consists of two primary components: metadata and network communications. The metadata component provides the descriptions and data definitions necessary for cataloging and searching monitoring data. The communications component provides Java classes for remotely accessing the data. Packages of Enterprise JavaBeans and data handling utility classes are written to use the underlying metadata information to build real-time monitoring applications. The utility of the framework was evaluated using wireless accelerometers on a shaking table earthquake simulation test of a reinforced concrete bridge column. The NEESgrid data and metadata repository services were used as a backend storage implementation. A web interface was created to demonstrate the utility of the data model and provides an example health monitoring application.
ERIC Educational Resources Information Center
Zhang, Shenglan; Duke, Nell K.
2011-01-01
Much research has demonstrated that students are largely uncritical users of Web sites as sources of information. Research-tested frameworks are needed to increase elementary-age students' awareness of the need and ability to critically evaluate Web sites as sources of information. This study is a randomized field trial of such a framework called…
Data Access and Web Services at the EarthScope Plate Boundary Observatory
NASA Astrophysics Data System (ADS)
Matykiewicz, J.; Anderson, G.; Henderson, D.; Hodgkinson, K.; Hoyt, B.; Lee, E.; Persson, E.; Torrez, D.; Smith, J.; Wright, J.; Jackson, M.
2007-12-01
The EarthScope Plate Boundary Observatory (PBO) at UNAVCO, Inc., part of the NSF-funded EarthScope project, is designed to study the three-dimensional strain field resulting from deformation across the active boundary zone between the Pacific and North American plates in the western United States. To meet these goals, PBO will install 880 continuous GPS stations, 103 borehole strainmeter stations, and five laser strainmeters, as well as manage data for 209 previously existing continuous GPS stations and one previously existing laser strainmeter. UNAVCO provides access to data products from these stations, as well as general information about the PBO project, via the PBO web site (http://pboweb.unavco.org). GPS and strainmeter data products can be found using a variety of access methods, including map searches, text searches, and station specific data retrieval. In addition, the PBO construction status is available via multiple mapping interfaces, including custom web based map widgets and Google Earth. Additional construction details can be accessed from PBO operational pages and station specific home pages. The current state of health for the PBO network is available with the statistical snap-shot, full map interfaces, tabular web based reports, and automatic data mining and alerts. UNAVCO is currently working to enhance the community access to this information by developing a web service framework for the discovery of data products, interfacing with operational engineers, and exposing data services to third party participants. In addition, UNAVCO, through the PBO project, provides advanced data management and monitoring systems for use by the community in operating geodetic networks in the United States and beyond. We will demonstrate these systems during the AGU meeting, and we welcome inquiries from the community at any time.
The Plate Boundary Observatory: Community Focused Web Services
NASA Astrophysics Data System (ADS)
Matykiewicz, J.; Anderson, G.; Lee, E.; Hoyt, B.; Hodgkinson, K.; Persson, E.; Wright, J.; Torrez, D.; Jackson, M.
2006-12-01
The Plate Boundary Observatory (PBO), part of the NSF-funded EarthScope project, is designed to study the three-dimensional strain field resulting from deformation across the active boundary zone between the Pacific and North American plates in the western United States. To meet these goals, PBO will install 852 continuous GPS stations, 103 borehole strainmeter stations, 28 tiltmeters, and five laser strainmeters, as well as manage data for 209 previously existing continuous GPS stations. UNAVCO provides access to data products from these stations, as well as general information about the PBO project, via the PBO web site (http://pboweb.unavco.org). GPS and strainmeter data products can be found using a variety of channels, including map searches, text searches, and station specific data retrieval. In addition, the PBO construction status is available via multiple mapping interfaces, including custom web based map widgets and Google Earth. Additional construction details can be accessed from PBO operational pages and station specific home pages. The current state of health for the PBO network is available with the statistical snap-shot, full map interfaces, tabular web based reports, and automatic data mining and alerts. UNAVCO is currently working to enhance the community access to this information by developing a web service framework for the discovery of data products, interfacing with operational engineers, and exposing data services to third party participants. In addition, UNAVCO, through the PBO project, provides advanced data management and monitoring systems for use by the community in operating geodetic networks in the United States and beyond. We will demonstrate these systems during the AGU meeting, and we welcome inquiries from the community at any time.
Personalization of Rule-based Web Services.
Choi, Okkyung; Han, Sang Yong
2008-04-04
Nowadays Web users have clearly expressed their wish to receive personalized services. Personalization is the way to tailor services directly to the immediate requirements of the user. However, the current Web Services System does not provide features supporting this, such as personalization of services and intelligent matchmaking. In this research, a flexible, personalized Rule-based Web Services System is proposed to address these problems and to enable efficient search, discovery and construction across general Web documents and Semantic Web documents. This system utilizes matchmaking among service requesters', service providers' and users' preferences using a Rule-based Search Method, and subsequently ranks search results. A prototype of efficient Web Services search and construction for the suggested system is developed based on the current work.
Software framework for the upcoming MMT Observatory primary mirror re-aluminization
NASA Astrophysics Data System (ADS)
Gibson, J. Duane; Clark, Dusty; Porter, Dallan
2014-07-01
Details of the software framework for the upcoming in-situ re-aluminization of the 6.5m MMT Observatory (MMTO) primary mirror are presented. This framework includes: 1) a centralized key-value store and data structure server for data exchange between software modules, 2) a newly developed hardware-software interface for faster data sampling and better hardware control, 3) automated control algorithms that are based upon empirical testing, modeling, and simulation of the aluminization process, 4) re-engineered graphical user interfaces (GUIs) that use state-of-the-art web technologies, and 5) redundant relational databases for data logging. Redesign of the software framework has several objectives: 1) automated process control to provide more consistent and uniform mirror coatings, 2) optional manual control of the aluminization process, 3) modular design to allow flexibility in process control and software implementation, 4) faster data sampling and logging rates to better characterize the approximately 100-second aluminization event, and 5) synchronized "real-time" web application GUIs to provide all users with exactly the same data. The framework has been implemented as four modules interconnected by a data store/server. The four modules are integrated into two Linux system services that start automatically at boot-time and remain running at all times. Performance of the software framework is assessed through extensive testing within 2.0 meter and smaller coating chambers at the Sunnyside Test Facility. The redesigned software framework helps ensure that a better-performing and longer-lasting coating will be achieved during the re-aluminization of the MMTO primary mirror.
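The "centralized key-value store and data structure server" suggests a Redis-style component. The sketch below, with invented key and channel names, shows how one module might publish chamber telemetry to the store while GUIs or loggers subscribe; it illustrates the pattern, not the MMTO code itself.

```python
# Hedged sketch of module-to-module data exchange through a Redis-like
# key-value/data-structure server. Key and channel names are invented.
import json
import time

import redis  # pip install redis

r = redis.Redis(host="localhost", port=6379, decode_responses=True)

def publish_sample(pressure_torr: float, current_a: float) -> None:
    """One module pushes a telemetry sample to the shared store."""
    sample = {"t": time.time(), "pressure": pressure_torr, "current": current_a}
    r.set("coating:latest", json.dumps(sample))         # shared current state
    r.publish("coating:telemetry", json.dumps(sample))  # push to GUIs/loggers

def log_samples() -> None:
    """Another module (e.g. a logger) subscribes to the telemetry channel."""
    sub = r.pubsub()
    sub.subscribe("coating:telemetry")
    for msg in sub.listen():
        if msg["type"] == "message":
            print("log:", msg["data"])
```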
Automated Data Quality Assurance using OGC Sensor Web Enablement Frameworks for Marine Observatories
NASA Astrophysics Data System (ADS)
Toma, Daniel; Bghiel, Ikram; del Rio, Joaquin; Hidalgo, Alberto; Carreras, Normandino; Manuel, Antoni
2014-05-01
Over the past years, environmental sensors have continuously improved by becoming smaller, cheaper, and more intelligent, and many sensor networks are now being deployed to monitor our environment. But due to the large number of sensor manufacturers and their accompanying protocols and data encodings, automated integration and data quality assurance of diverse sensors in observing systems are not straightforward, requiring the development of data management code and tedious manual configuration. However, over the past few years it has been demonstrated that Open Geospatial Consortium (OGC) Sensor Web Enablement (SWE) frameworks can enable web services with fully described sensor systems, including data processing, sensor characteristics, and quality control tests and results. So far, the SWE framework does not describe how to integrate sensors on the fly with minimal human intervention: the data management software which enables access to sensors, data processing and quality control tests has to be implemented, and the results have to be manually mapped to the SWE models. In this contribution, we describe a Sensor Plug & Play infrastructure for the Sensor Web combining (1) the OGC PUCK protocol, a simple standard embedded instrument protocol to store on, and retrieve directly from, the devices the declarative description of sensor characteristics and quality control tests; (2) an automatic mechanism for data processing and quality control tests underlying the Sensor Web, the Sensor Interface Descriptor (SID) concept; and (3) a model for the declarative description of sensors which serves as a generic data management mechanism, designed as a profile and extension of OGC SWE's SensorML standard. We implement and evaluate our approach by applying it to the OBSEA Observatory, demonstrating the ability to assess data quality for temperature, salinity, air pressure, and wind speed and direction observations off the coast of Garraf in north-eastern Spain.
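As an illustration of the consumer side of such an infrastructure, the sketch below issues a standard SOS 1.0 GetObservation request over KVP; the endpoint URL and offering name are placeholders rather than published OBSEA values.

```python
# Sketch: pull observations from an OGC SOS endpoint using plain KVP
# parameters (SOS 1.0 GetObservation). Endpoint and offering names are
# placeholders, not published OBSEA values.
import requests

SOS_URL = "https://obsea.example.org/sos"  # hypothetical endpoint

params = {
    "service": "SOS",
    "version": "1.0.0",
    "request": "GetObservation",
    "offering": "OBSEA_CTD",                      # placeholder offering
    "observedProperty": "sea_water_temperature",
    "responseFormat": 'text/xml;subtype="om/1.0.0"',
}
resp = requests.get(SOS_URL, params=params, timeout=30)
resp.raise_for_status()
print(resp.text[:500])  # O&M XML carrying observations (and any QC flags)
```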
A Framework for Modeling and Simulation of the Artificial
2012-01-01
Framework of sensor-based monitoring for pervasive patient care.
Triantafyllidis, Andreas K; Koutkias, Vassilis G; Chouvarda, Ioanna; Adami, Ilia; Kouroubali, Angelina; Maglaveras, Nicos
2016-09-01
Sensor-based health systems can often become difficult to use, extend and sustain. The authors propose a framework for designing sensor-based health monitoring systems aiming to provide extensible and usable monitoring services in the scope of pervasive patient care. The authors' approach relies on a distributed system for monitoring the patient health status anytime-anywhere and detecting potential health complications, for which healthcare professionals and patients are notified accordingly. Portable or wearable sensing devices measure the patient's physiological parameters, a smart mobile device collects and analyses the sensor data, a Medical Center system receives notifications on the detected health condition, and a Health Professional Platform is used by formal caregivers in order to review the patient condition and configure monitoring schemas. A Service-oriented architecture is utilised to provide extensible functional components and interoperable interactions among the diversified system components. The framework was applied within the REMOTE ambient-assisted living project in which a prototype system was developed, utilising Bluetooth to communicate with the sensors and Web services for data exchange. A scenario of using the REMOTE system and preliminary usability results show the applicability, usefulness and virtue of our approach.
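A minimal sketch of the detection step in such a framework might look as follows, with illustrative parameters and thresholds standing in for a caregiver-configured monitoring schema.

```python
# Hedged sketch: the mobile device checks incoming sensor readings
# against a monitoring schema configured by a caregiver. Parameter
# names and thresholds are illustrative only.
from dataclasses import dataclass

@dataclass
class MonitoringRule:
    parameter: str
    low: float
    high: float

SCHEMA = [
    MonitoringRule("heart_rate", 50, 110),  # illustrative limits
    MonitoringRule("spo2", 92, 100),
]

def detect_complications(reading: dict) -> list:
    """Return alert messages for values outside the configured schema."""
    alerts = []
    for rule in SCHEMA:
        value = reading.get(rule.parameter)
        if value is not None and not (rule.low <= value <= rule.high):
            alerts.append(f"{rule.parameter}={value} outside [{rule.low}, {rule.high}]")
    return alerts  # the real system would notify the Medical Center here

print(detect_complications({"heart_rate": 128, "spo2": 95}))
```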
Framework of sensor-based monitoring for pervasive patient care
Koutkias, Vassilis G.; Chouvarda, Ioanna; Adami, Ilia; Kouroubali, Angelina; Maglaveras, Nicos
2016-01-01
Sensor-based health systems can often become difficult to use, extend and sustain. The authors propose a framework for designing sensor-based health monitoring systems aiming to provide extensible and usable monitoring services in the scope of pervasive patient care. The authors’ approach relies on a distributed system for monitoring the patient health status anytime-anywhere and detecting potential health complications, for which healthcare professionals and patients are notified accordingly. Portable or wearable sensing devices measure the patient's physiological parameters, a smart mobile device collects and analyses the sensor data, a Medical Center system receives notifications on the detected health condition, and a Health Professional Platform is used by formal caregivers in order to review the patient condition and configure monitoring schemas. A Service-oriented architecture is utilised to provide extensible functional components and interoperable interactions among the diversified system components. The framework was applied within the REMOTE ambient-assisted living project in which a prototype system was developed, utilising Bluetooth to communicate with the sensors and Web services for data exchange. A scenario of using the REMOTE system and preliminary usability results show the applicability, usefulness and virtue of our approach. PMID:27733920
Distributed spatial information integration based on web service
NASA Astrophysics Data System (ADS)
Tong, Hengjian; Zhang, Yun; Shao, Zhenfeng
2008-10-01
Spatial information systems and spatial information in different geographic locations usually belong to different organizations. They are distributed and often heterogeneous and independent from each other. As a result, many isolated spatial information islands are formed, reducing the efficiency of information utilization. In order to address this issue, we present a method for effective spatial information integration based on web services. The method applies asynchronous invocation and dynamic invocation of web services to implement distributed, parallel execution of web map services. All isolated information islands are connected by the web service dispatcher and its registration database to form a uniform collaborative system. According to the web service registration database, the dispatcher can dynamically invoke each web map service through an asynchronous delegating mechanism, so all of the web map services can be executed at the same time. When each web map service is done, an image is returned to the dispatcher; after all of the web services are done, all images are transparently overlaid together in the dispatcher. Thus, users can browse and analyze the integrated spatial information. Experiments demonstrate that the utilization rate of spatial information resources is significantly raised through the proposed method of distributed spatial information integration.
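The dispatcher pattern described here can be sketched compactly: fetch several WMS GetMap responses in parallel and alpha-composite the returned images. The endpoint URLs are placeholders; the request parameters follow standard WMS KVP.

```python
# Sketch of the dispatcher idea: request multiple WMS endpoints in
# parallel and overlay the returned transparent images. Endpoints are
# placeholders; parameters follow the standard WMS GetMap KVP encoding.
from concurrent.futures import ThreadPoolExecutor
from io import BytesIO

import requests
from PIL import Image  # pip install pillow

WMS_ENDPOINTS = [
    "https://gis-a.example.org/wms",  # hypothetical services
    "https://gis-b.example.org/wms",
]

PARAMS = {
    "SERVICE": "WMS", "VERSION": "1.1.1", "REQUEST": "GetMap",
    "LAYERS": "0", "SRS": "EPSG:4326", "BBOX": "110,25,120,35",
    "WIDTH": 512, "HEIGHT": 512, "FORMAT": "image/png", "TRANSPARENT": "TRUE",
}

def fetch(url):
    resp = requests.get(url, params=PARAMS, timeout=60)
    return Image.open(BytesIO(resp.content)).convert("RGBA")

with ThreadPoolExecutor() as pool:          # parallel, asynchronous fetches
    images = list(pool.map(fetch, WMS_ENDPOINTS))

composite = images[0]
for img in images[1:]:
    composite = Image.alpha_composite(composite, img)  # transparent overlay
composite.save("integrated_map.png")
```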
Response capabilities of the National Guard: a focus on domestic disaster medical response.
Bochicchio, Daniel
2010-01-01
The National Guard has a 373-year history of responding to the nation's call to duty for service both at home and abroad (The National Guard Bureau Web site: Available at http://www.ngb.army.mil/default.aspx.). The National Guard (NG) is a constitutionally unique organization (United States Constitution, US Government Printing Office Web site: Available at http://www.gpoaccess.gov/constitution/index.html.). Today's Guard conducts domestic disaster response and civilian assistance missions on a daily basis. Yet the NG's role, mission, and capabilities are not well known or understood. The National Response Framework (NRF) places significant responsibility on local and state disaster planners (Department of Homeland Security: National Response Framework. US Department of Homeland Security, Washington, DC, January 2008). Public health professionals are an integral component of the disaster planning community. It is critical that the public health community be knowledgeable of the types and capabilities of all the response assets at their disposal.
A Survey of Complex Object Technologies for Digital Libraries
NASA Technical Reports Server (NTRS)
Nelson, Michael L.; Argue, Brad; Efron, Miles; Denn, Sheila; Pattuelli, Maria Cristina
2001-01-01
Many early web-based digital libraries (DLs) had implicit assumptions reflected in their architecture that the unit of focus in the DL (frequently "reports" or "e-prints") would only be manifested in a single, or at most a few, common file formats such as PDF or PostScript. DLs have now matured to the point where their contents are commonly no longer simple files. Complex objects in DLs have emerged in response to various requirements, including: simple aggregation of formats and supporting files, bundling additional information to aid digital preservation, creating opaque digital objects for e-commerce applications, and the incorporation of dynamic services with the traditional data files. We examine a representative (but not necessarily exhaustive) number of current and recent historical web-based complex object technologies and projects that are applicable to DLs: Aurora, Buckets, ComMentor, Cryptolopes, Digibox, Document Management Alliance, FEDORA, Kahn-Wilensky Framework Digital Objects, Metadata Encoding & Transmission Standard, Multivalent Documents, Open eBooks, VERS Encapsulated Objects, and the Warwick Framework.
AAVSO Target Tool: A Web-Based Service for Tracking Variable Star Observations (Abstract)
NASA Astrophysics Data System (ADS)
Burger, D.; Stassun, K. G.; Barnes, C.; Kafka, S.; Beck, S.; Li, K.
2018-06-01
(Abstract only) The AAVSO Target Tool is a web-based interface for bringing stars in need of observation to the attention of AAVSO's network of amateur and professional astronomers. The site currently tracks over 700 targets of interest, collecting data from them on a regular basis from AAVSO's servers and sorting them based on priority. While the target tool does not require a login, users can obtain visibility times for each target by signing up and entering a telescope location. Other key features of the site include filtering by AAVSO observing section, sorting by different variable types, formatting the data for printing, and exporting the data to a CSV file. The AAVSO Target Tool builds upon seven years of experience developing web applications for astronomical data analysis, most notably on Filtergraph (Burger, D., et al. 2013, Astronomical Data Analysis Software and Systems XXII, Astronomical Society of the Pacific, San Francisco, 399), and is built using the web2py web framework based on the python programming language. The target tool is available at http://filtergraph.com/aavso.
Providing Multi-Page Data Extraction Services with XWRAPComposer
DOE Office of Scientific and Technical Information (OSTI.GOV)
Liu, Ling; Zhang, Jianjun; Han, Wei
2008-04-30
Dynamic Web data sources – sometimes known collectively as the Deep Web – increase the utility of the Web by providing intuitive access to data repositories anywhere that Web access is available. Deep Web services provide access to real-time information, like entertainment event listings, or present a Web interface to large databases or other data repositories. Recent studies suggest that the size and growth rate of the dynamic Web greatly exceed that of the static Web, yet dynamic content is often ignored by existing search engine indexers owing to the technical challenges that arise when attempting to search the Deep Web. To address these challenges, we present DYNABOT, a service-centric crawler for discovering and clustering Deep Web sources offering dynamic content. DYNABOT has three unique characteristics. First, DYNABOT utilizes a service class model of the Web implemented through the construction of service class descriptions (SCDs). Second, DYNABOT employs a modular, self-tuning system architecture for focused crawling of the Deep Web using service class descriptions. Third, DYNABOT incorporates methods and algorithms for efficient probing of the Deep Web and for discovering and clustering Deep Web sources and services through SCD-based service matching analysis. Our experimental results demonstrate the effectiveness of the service class discovery, probing, and matching algorithms and suggest techniques for efficiently managing service discovery in the face of the immense scale of the Deep Web.
Shaban-Nejad, Arash; Mamiya, Hiroshi; Riazanov, Alexandre; Forster, Alan J; Baker, Christopher J O; Tamblyn, Robyn; Buckeridge, David L
2016-01-01
We propose an integrated semantic web framework consisting of formal ontologies, web services, a reasoner and a rule engine that together recommend an appropriate level of patient care based on defined semantic rules and guidelines. The classification of healthcare-associated infections within the HAIKU (Hospital Acquired Infections - Knowledge in Use) framework enables hospitals to consistently follow the standards along with their routine clinical practice and diagnosis coding to improve quality of care and patient safety. The HAI ontology (HAIO) groups thousands of codes into a consistent hierarchy of concepts, along with relationships and axioms, to capture knowledge on hospital-associated infections and complications, with a focus on the four major types: surgical site infection (SSI), catheter-associated urinary tract infection (CAUTI), hospital-acquired pneumonia, and blood stream infection. Employing statistical inference, we use a set of heuristics to define the rule axioms to improve SSI case detection. We also demonstrate how the occurrence of an SSI is identified using semantic e-triggers. The e-triggers will be used to improve our risk assessment of post-operative surgical site infections (SSIs) for patients undergoing certain types of surgery (e.g., coronary artery bypass graft surgery (CABG)).
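To make the e-trigger idea concrete, here is a hedged toy rule that flags a possible SSI when a positive wound culture follows a CABG procedure within a fixed window. The codes, field names, and window are invented for illustration; the actual HAIKU rules live in the ontology and rule engine.

```python
# Toy "e-trigger": fire when a wound-infection event occurs within a
# fixed window after a CABG procedure. All codes/fields are invented.
from datetime import date, timedelta

def ssi_trigger(procedure: dict, events: list, window_days: int = 30) -> bool:
    """True if a positive wound culture falls inside the post-op window."""
    if procedure["code"] != "CABG":  # illustrative procedure code
        return False
    deadline = procedure["date"] + timedelta(days=window_days)
    return any(
        e["type"] == "positive_wound_culture"
        and procedure["date"] <= e["date"] <= deadline
        for e in events
    )

proc = {"code": "CABG", "date": date(2015, 3, 2)}
evts = [{"type": "positive_wound_culture", "date": date(2015, 3, 20)}]
print(ssi_trigger(proc, evts))  # True -> case forwarded for review
```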
The European Location Framework - from National to European
NASA Astrophysics Data System (ADS)
Pauknerova, E.; Sidlichovsky, P.; Urbanas, S.; Med, M.
2016-06-01
The European Location Framework (ELF) refers to a technical infrastructure which will deliver authoritative, interoperable geospatial reference data from all over Europe for analysing and understanding information connected to places and features. The ELF has been developed and set up through the ELF Project, which has been realized by a consortium of partners (public, private and academic organisations) since March 2013. Their number increased from thirty to forty in 2016, together with a project extension from 36 to 44 months. The project is co-funded by the European Commission's Competitiveness and Innovation Framework Programme (CIP) and will end in October 2016. In broad terms, the ELF Project will deliver a unique gateway to authoritative reference geospatial information for Europe (harmonised pan-European maps, geographic and land information) sourced from the National Mapping and Cadastral Authorities (NMCAs) around Europe, including transparent licensing. This will be provided as an online ELF web service that delivers an up-to-date topographic base map, and as view and download services for access to the ELF datasets. To develop and build up the ELF, NMCAs collaborate with several research and academic institutes, a standardisation body, system integrators, software developers and application providers. The harmonisation is in progress, developing and triggering a number of geo-tools such as edge-matching, generalisation and transformation. ELF will also provide centralised tools such as the Geo Locator, for searching locations based on geographical names, addresses and administrative units, and the GeoProduct Finder, for discovering the available web services and licensing them. ELF combines national reference geo-information through the ELF platform. ELF web services will be offered to users and application developers through open source (OSKARI) and proprietary (ArcGIS Online) cloud platforms. Recently, 29 NMCAs, plus EuroGeographics, their pan-European umbrella association, have contributed to the ELF through an enrichment of data coverage. As a result, over 20 European countries will be covered by the ELF topographic Base Map in 2016. Most countries will also contribute other harmonised thematic data for viewing or downloading. To overcome the heterogeneity of data resources and the diversity of languages across tens of European countries, ELF builds on the existing INSPIRE rules and its own coordination and interoperability measures. The realisation of ELF empowers the implementation of INSPIRE in Europe and complements related activities of European NMCAs, e.g. the Czech Office for Surveying, Mapping and Cadastre (CUZK), which provides a large portfolio of spatial data and services and contributes significantly to the NSDI of the Czech Republic. CUZK is also responsible for the Base Register of Territorial Identification, Addresses and Real Estates (RUIAN), an important pillar of Czech e-Government. CUZK became an early adopter of INSPIRE and provides a number of compliant datasets and web services to the ELF. CUZK and the Polish NMCA (GUGiK) collaborate in the Central-European ELF Pilot (cluster) and test various cross-border prototypes. The presentation combines the national and cross-border views and experiences of CUZK with the European perspective of EuroGeographics.
Focused Crawling of the Deep Web Using Service Class Descriptions
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rocco, D; Liu, L; Critchlow, T
2004-06-21
Dynamic Web data sources--sometimes known collectively as the Deep Web--increase the utility of the Web by providing intuitive access to data repositories anywhere that Web access is available. Deep Web services provide access to real-time information, like entertainment event listings, or present a Web interface to large databases or other data repositories. Recent studies suggest that the size and growth rate of the dynamic Web greatly exceed that of the static Web, yet dynamic content is often ignored by existing search engine indexers owing to the technical challenges that arise when attempting to search the Deep Web. To address these challenges, we present DynaBot, a service-centric crawler for discovering and clustering Deep Web sources offering dynamic content. DynaBot has three unique characteristics. First, DynaBot utilizes a service class model of the Web implemented through the construction of service class descriptions (SCDs). Second, DynaBot employs a modular, self-tuning system architecture for focused crawling of the Deep Web using service class descriptions. Third, DynaBot incorporates methods and algorithms for efficient probing of the Deep Web and for discovering and clustering Deep Web sources and services through SCD-based service matching analysis. Our experimental results demonstrate the effectiveness of the service class discovery, probing, and matching algorithms and suggest techniques for efficiently managing service discovery in the face of the immense scale of the Deep Web.
Web-based newborn screening system for metabolic diseases: machine learning versus clinicians.
Chen, Wei-Hsin; Hsieh, Sheau-Ling; Hsu, Kai-Ping; Chen, Han-Ping; Su, Xing-Yu; Tseng, Yi-Ju; Chien, Yin-Hsiu; Hwu, Wuh-Liang; Lai, Feipei
2013-05-23
A hospital information system (HIS) that integrates screening data and interpretation of the data is routinely requested by hospitals and parents. However, the accuracy of disease classification may be low because of the disease characteristics and the analytes used for classification. The objective of this study is to describe a system that enhanced the neonatal screening system of the Newborn Screening Center at the National Taiwan University Hospital. The system was designed and deployed according to a service-oriented architecture (SOA) framework under the Web services .NET environment. The system consists of sample collection, testing, diagnosis, evaluation, treatment, and follow-up services among collaborating hospitals. To improve the accuracy of newborn screening, machine learning and optimal feature selection mechanisms were investigated for screening newborns for inborn errors of metabolism. The framework of the Newborn Screening Hospital Information System (NSHIS) used the embedded Health Level Seven (HL7) standards for data exchanges among heterogeneous platforms integrated by Web services in the C# language. In this study, machine learning classification was used to predict phenylketonuria (PKU), hypermethioninemia, and 3-methylcrotonyl-CoA-carboxylase (3-MCC) deficiency. The classification methods used 347,312 newborn dried blood samples collected at the Center between 2006 and 2011. Of these, 220 newborns had values over the diagnostic cutoffs (positive cases) and 1557 had values that were over the screening cutoffs but did not meet the diagnostic cutoffs (suspected cases). The original 35 analytes and the manifested features were ranked based on F score, then combinations of the top 20 ranked features were selected as input features to support vector machine (SVM) classifiers to obtain optimal feature sets. These feature sets were tested using 5-fold cross-validation and optimal models were generated. The datasets collected in 2011 were used as prediction cases. The feature selection strategies were implemented and the optimal markers for PKU, hypermethioninemia, and 3-MCC deficiency were obtained. The results of the machine learning approach were compared with the cutoff scheme. The number of false positive cases was reduced from 21 to 2 for PKU, from 30 to 10 for hypermethioninemia, and from 209 to 46 for 3-MCC deficiency. This SOA Web service-based newborn screening system can accelerate screening procedures effectively and efficiently. An SVM learning methodology for the classification of PKU, hypermethioninemia, and 3-MCC deficiency, including optimal feature selection strategies, is presented. By adopting the results of this study, the number of suspected cases could be reduced dramatically.
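The modelling pipeline described (F-score ranking, top-k feature subsets, SVM classification, 5-fold cross-validation) maps directly onto scikit-learn. The sketch below uses synthetic data in place of the screening samples, so the numbers it prints are not the paper's results.

```python
# Sketch of the described pipeline: rank analytes by ANOVA F score,
# keep a top-k subset, and evaluate an SVM with 5-fold cross-validation.
# Synthetic data stands in for the newborn screening samples.
import numpy as np
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 35))                   # 35 analytes (synthetic)
y = (X[:, 0] + 0.5 * X[:, 3] > 1.5).astype(int)   # toy "positive" label

model = make_pipeline(
    StandardScaler(),
    SelectKBest(f_classif, k=20),  # keep the 20 best F-score-ranked features
    SVC(kernel="rbf", C=1.0),
)
scores = cross_val_score(model, X, y, cv=5)       # 5-fold cross-validation
print("5-fold accuracy: %.3f +/- %.3f" % (scores.mean(), scores.std()))
```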
A WebGIS-based system for analyzing and visualizing air quality data for Shanghai Municipality
NASA Astrophysics Data System (ADS)
Wang, Manyi; Liu, Chaoshun; Gao, Wei
2014-10-01
An online visual analytical system based on Java Web and WebGIS for air quality data for Shanghai Municipality was designed and implemented to quantitatively analyze and qualitatively visualize air quality data. By analyzing the architecture of WebGIS and Java Web, we first designed the overall system architecture, then specified the software and hardware environment, and determined the main function modules of the system. The visual system was ultimately established with the DIV + CSS layout method combined with JSP, JavaScript, and other programming languages based on the Java programming environment. Moreover, the Struts, Spring, and Hibernate frameworks (SSH) were integrated in the system for easy maintenance and expansion. To provide mapping services and spatial analysis functions, we selected ArcGIS for Server as the GIS server. We also used an Oracle database and an ESRI file geodatabase to store spatial and non-spatial data in order to ensure data security. In addition, the response data from the Web server are resampled to implement rapid visualization through the browser. Experimental results indicate that this system can quickly respond to users' requests and efficiently return accurate processing results.
Enabler for the agile virtual enterprise
NASA Astrophysics Data System (ADS)
Fuerst, Karl; Schmidt, Thomas; Wippel, Gerald
2001-10-01
This presentation introduces a new approach for a flexible, low-cost Internet extended enterprise (project FLoCI-EE). FLoCI-EE is a project in the Fifth Framework Programme of the European Commission with 8 partners from 4 countries, which started in January 2001 and will finish in December 2003. The main objective of FLoCI-EE is the development of a software prototype which enables flexible enterprise cooperation, with the aim of designing, manufacturing and selling products jointly, independent of enterprise borderlines. The needed IT support includes functions of product data management (PDM), enterprise resource planning (ERP), supply chain management (SCM) and customer relationship management (CRM). Especially for small and medium-sized enterprises, existing solutions are too expensive and inflexible to be of use under current turbulent market conditions. The second part of this paper covers Web Services, because in the role-specific support approach of FLoCI-EE there are user-interface components tailored for specific roles in an enterprise. These components automatically integrate the services of the so-called basic components and externally offered Web Services, e.g. those discovered via UDDI.
Provenance-Based Approaches to Semantic Web Service Discovery and Usage
ERIC Educational Resources Information Center
Narock, Thomas William
2012-01-01
The World Wide Web Consortium defines a Web Service as "a software system designed to support interoperable machine-to-machine interaction over a network." Web Services have become increasingly important both within and across organizational boundaries. With the recent advent of the Semantic Web, web services have evolved into semantic…
The Ecological Model Web Concept: A Consultative Infrastructure for Decision Makers and Researchers
NASA Astrophysics Data System (ADS)
Geller, G.; Nativi, S.
2011-12-01
Rapid climate and socioeconomic changes may be outrunning society's ability to understand, predict, and respond to change effectively. Decision makers want better information about what these changes will be and how various resources will be affected, while researchers want better understanding of the components and processes of ecological systems, how they interact, and how they respond to change. Although there are many excellent models in ecology and related disciplines, there is only limited coordination among them, and accessible, openly shared models or model systems that can be consulted to gain insight on important ecological questions or assist with decision-making are rare. A "consultative infrastructure" that increased access to and sharing of models and model outputs would benefit decision makers, researchers, and modelers alike. Of course, envisioning such an ambitious system is much easier than building it, but several complementary approaches exist that could contribute. The one discussed here is called the Model Web. This is a concept for an open-ended system of interoperable computer models and databases based on making models and their outputs available as services ("model as a service"). Initially, it might consist of a core of several models from which it could grow gradually as new models or databases were added. However, a model web would not be a monolithic, rigidly planned and built system--instead, like the World Wide Web, it would grow largely organically, with limited central control, within a framework of broad goals and data exchange standards. One difference from the WWW is that a model web is much harder to create and has more pitfalls, and thus is a long-term vision. However, technology, science, observations, and models have advanced enough that parts of an ecological model web can be built and utilized now, forming a framework for gradual growth as well as a broadly accessible infrastructure. Ultimately, the value of a model web lies in the increase in access to and sharing of both models and model outputs. By lowering access barriers to models and their outputs there is less reinvention, more efficient use of resources, and greater interaction among researchers and across disciplines, among other benefits. The growth of such a system of models fits well with the concept and architecture of the Global Earth Observing System of Systems (GEOSS) as well as the Semantic Web. And, while framed here in the context of ecological forecasting, the same concept can be applied to any discipline utilizing models.
NASA Astrophysics Data System (ADS)
Paulraj, D.; Swamynathan, S.; Madhaiyan, M.
2012-11-01
Web Service composition has become indispensable as a single web service cannot satisfy complex functional requirements. Composition of services has received much interest for supporting business-to-business (B2B) and enterprise application integration. An important component of service composition is the discovery of relevant services. In Semantic Web Services (SWS), service discovery is generally achieved by using the service profile of the Ontology Web Language for Services (OWL-S). The profile of the service is a derived and concise description but not a functional part of the service. The information contained in the service profile is sufficient for atomic service discovery, but it is not sufficient for the discovery of composite semantic web services (CSWS). The purpose of this article is two-fold: first, to show that the process model is a better choice than the service profile for service discovery; and second, to facilitate the composition of inter-organisational CSWS by proposing a new composition method which uses process ontology. The proposed service composition approach uses an algorithm which performs a fine-grained match at the level of the atomic process rather than at the level of the entire service in a composite semantic web service. Many works in this area have proposed solutions only for the composition of atomic services; this article proposes a solution for the composition of composite semantic web services.
Automatic geospatial information Web service composition based on ontology interface matching
NASA Astrophysics Data System (ADS)
Xu, Xianbin; Wu, Qunyong; Wang, Qinmin
2008-10-01
With Web services technology, the functions of WebGIS can be presented as geospatial information services, helping to overcome the isolation of information in the geospatial information sharing field. Geospatial information web service composition, which conglomerates outsourced services working in tandem to offer value-added services, therefore plays a key role in fully exploiting geospatial information services. This paper proposes an automatic geospatial information web service composition algorithm that employs the ontology dictionary WordNet to analyze semantic distances among service interfaces. By matching input/output parameters and the semantic meaning of pairs of service interfaces, a geospatial information web service chain can be created from a number of candidate services. A practical implementation of the algorithm is also presented, and its results show the feasibility of the algorithm and great promise in meeting the emerging demand for geospatial information web service composition.
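A hedged sketch of the core matching step follows: score the semantic distance between one service's output parameter and another's input using WordNet path similarity, and chain the services when the score clears a threshold. The parameter names and the threshold are illustrative, not taken from the paper.

```python
# Sketch of ontology-based interface matching: how well does one
# service's output parameter match another's input? Uses WordNet path
# similarity via NLTK; requires: python -c "import nltk; nltk.download('wordnet')"
from nltk.corpus import wordnet as wn

def semantic_distance(term_a: str, term_b: str) -> float:
    """Best path similarity between any senses of the two terms (0..1)."""
    best = 0.0
    for s1 in wn.synsets(term_a):
        for s2 in wn.synsets(term_b):
            sim = s1.path_similarity(s2)
            if sim is not None and sim > best:
                best = sim
    return best

# Chain service A -> service B if A's output is close enough to B's input.
output_of_a, input_of_b = "coordinate", "location"  # illustrative names
score = semantic_distance(output_of_a, input_of_b)
print(score, "-> chain services" if score > 0.3 else "-> no match")
```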
NASA Astrophysics Data System (ADS)
Coppini, Giovanni; Marra, Palmalisa; Lecci, Rita; Pinardi, Nadia; Cretì, Sergio; Scalas, Mario; Tedesco, Luca; D'Anca, Alessandro; Fazioli, Leopoldo; Olita, Antonio; Turrisi, Giuseppe; Palazzo, Cosimo; Aloisio, Giovanni; Fiore, Sandro; Bonaduce, Antonio; Vittal Kumkar, Yogesh; Ciliberti, Stefania Angela; Federico, Ivan; Mannarini, Gianandrea; Agostini, Paola; Bonarelli, Roberto; Martinelli, Sara; Verri, Giorgia; Lusito, Letizia; Rollo, Davide; Cavallo, Arturo; Tumolo, Antonio; Monacizzo, Tony; Spagnulo, Marco; Sorgente, Rorberto; Cucco, Andrea; Quattrocchi, Giovanni; Tonani, Marina; Drudi, Massimiliano; Nassisi, Paola; Conte, Laura; Panzera, Laura; Navarra, Antonio; Negro, Giancarlo
2017-04-01
Reliable and timely information on the environmental conditions at sea is key to the safety of professional and recreational users as well as to the optimal execution of their activities. The possibility of users obtaining environmental information in due time and with adequate accuracy in the marine and coastal environment is defined as sea situational awareness (SSA). Without adequate information on the environmental meteorological and oceanographic conditions, users have a limited capacity to respond, which has led to loss of lives and to large environmental disasters with enormous consequent damage to the economy, society and ecosystems. Within the framework of the TESSA project, new SSA services for the Mediterranean Sea have been developed. In this paper we present SeaConditions, which is a web and mobile application for the provision of meteorological and oceanographic observation and forecasting products. Model forecasts and satellite products from operational services, such as ECMWF and CMEMS, can be visualized in SeaConditions. In addition, layers of information related to bathymetry, sea level and ocean-colour data (chl a and water transparency) are displayed. Ocean forecasts at high spatial resolutions are included in the version of SeaConditions presented here. SeaConditions provides a user-friendly experience with a fluid zoom capability, facilitating the appropriate display of data with different levels of detail. SeaConditions is a single point of access to interactive maps from different geophysical fields, providing high-quality information based on advanced oceanographic models. The SeaConditions services are available through both web and mobile applications. The web application is available at www.sea-conditions.com and is accessible and compatible with present-day browsers. Interoperability with GIS software is implemented. User feedback has been collected and taken into account in order to improve the service. The SeaConditions iOS and Android apps have been downloaded by more than 105 000 users to date (May 2016), and more than 100 000 users have visited the web version.
NASA Astrophysics Data System (ADS)
Delipetrev, Blagoj
2016-04-01
Presently, most existing software is desktop-based and designed to work on a single computer, which is a major limitation in many ways: limited processing and storage capacity, accessibility, availability, etc. The only feasible solution lies in the web and the cloud. This abstract presents research and development of a cloud computing geospatial application for water resources based on free and open-source software and open standards, using a hybrid public-private cloud deployment model running on two separate virtual machines (VMs). The first (VM1) runs on Amazon Web Services (AWS) and the second (VM2) runs on a Xen cloud platform. The cloud application is developed using free and open-source software, open standards, and prototype code, and demonstrates a framework for developing specialized cloud geospatial applications that require only a web browser. It is a collaborative geospatial platform in the fullest sense, because multiple users across the globe with an internet connection and a browser can jointly model geospatial objects, enter attribute data and information, execute algorithms, and visualize results. The application is available at all times, accessible from everywhere, scalable, works in a distributed computing environment, creates a real-time multiuser collaboration platform, uses interoperable code and components, and is flexible in accommodating additional components. The cloud geospatial application is implemented as a specialized water resources application with three web services: 1) data infrastructure (DI), 2) support for water resources modelling (WRM), and 3) user management. The web services run on two VMs that communicate over the internet to provide services to users. The application was tested on the Zletovica river basin case study with concurrent multiple users. It is a state-of-the-art cloud geospatial collaboration platform; the presented solution is a prototype and can serve as a foundation for developing any specialized cloud geospatial application. Further research will focus on distributing the cloud application across additional VMs and testing the scalability and availability of the services.
2011-08-01
ADO is a Web-based ordering service run by DLA Troop Support. DLA partnered with the Army to develop ADO to replace uniforms for Soldiers. Major Otis Cuffee, DLA's command senior enlisted leader, discussed the ADO program with other senior Army enlisted leaders.
A Framework for Open, Flexible and Distributed Learning.
ERIC Educational Resources Information Center
Khan, Badrul H.
Designing open, flexible distance learning systems on the World Wide Web requires thoughtful analysis and investigation combined with an understanding of both the Web's attributes and resources and the ways instructional design principles can be applied to tap the Web's potential. A framework for open, flexible, and distributed learning has been…
Graph-Based Semantic Web Service Composition for Healthcare Data Integration.
Arch-Int, Ngamnij; Arch-Int, Somjit; Sonsilphong, Suphachoke; Wanchai, Paweena
2017-01-01
Within the numerous and heterogeneous web services offered through different sources, automatic web service composition is the most convenient method for building complex business processes that permit invocation of multiple existing atomic services. Current solutions to functional web service composition lack autonomous querying of semantic matches within the parameters of web services, which is necessary in the composition of large-scale related services. In this paper, we propose a graph-based Semantic Web Service composition system consisting of two subsystems: management time and run time. The management-time subsystem is responsible for dependency graph preparation, in which a dependency graph of related services is generated automatically according to the proposed semantic matchmaking rules. The run-time subsystem is responsible for discovering candidate web services and a nonredundant web service composition for a user's query using a graph-based searching algorithm. The proposed approach was applied to healthcare data integration across different health organizations and was evaluated in two respects: execution time measurement and correctness measurement.
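A minimal illustration of composing services over a dependency structure, in the spirit of (but much simpler than) the graph-based search described above: forward-chain from the concepts a user supplies until the requested concept is producible. The service registry and concept names are hypothetical.

```python
def compose(services, provided, goal):
    """services: {name: (required_inputs, produced_outputs)}. Greedily fires
    any service whose inputs are covered; a real composer would also prune
    services that do not contribute to the goal."""
    known, plan, changed = set(provided), [], True
    while changed and goal not in known:
        changed = False
        for name, (inputs, outputs) in services.items():
            if name not in plan and set(inputs) <= known:
                known |= set(outputs)
                plan.append(name)
                changed = True
    return plan if goal in known else None

# Hypothetical healthcare-integration registry:
registry = {
    "PatientLookup": (["patient_id"], ["ehr_record"]),
    "LabExtractor":  (["ehr_record"], ["lab_results"]),
    "RiskScorer":    (["lab_results"], ["risk_score"]),
}
print(compose(registry, ["patient_id"], "risk_score"))
# -> ['PatientLookup', 'LabExtractor', 'RiskScorer']
```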
A Framework for Web Usage Mining in Electronic Government
NASA Astrophysics Data System (ADS)
Zhou, Ping; Le, Zhongjian
Web usage mining has been a major component of management strategy to enhance organizational analysis and decision-making. The literature on strategies and technologies for effectively employing Web usage mining is quite vast. In recent years, E-government has received much attention from researchers and practitioners. Huge amounts of user access data are produced in E-government Web sites every day. The role of these data in the success of government management cannot be overstated, because they affect government analysis, prediction, strategic, tactical, and operational planning, and control. Web usage mining in E-government has an important role to play in setting government objectives, discovering citizen behavior, and determining future courses of action, yet it has not received adequate attention from researchers or practitioners. Using the current literature, we developed the framework presented herein to promote a better understanding of the importance of Web usage mining in E-government, in hopes that it will stimulate more interest in this important area.
Robopedia: Leveraging Sensorpedia for Web-Enabled Robot Control
DOE Office of Scientific and Technical Information (OSTI.GOV)
Resseguie, David R
There is a growing interest in building Internet-scale sensor networks that integrate sensors from around the world into a single unified system. In contrast, robotics application development has primarily focused on building specialized systems. These specialized systems take scalability and reliability into consideration but generally neglect the key components required to build a large-scale system. Integrating robotic applications with Internet-scale sensor networks will unify specialized robotics applications and provide answers to large-scale implementation concerns. We focus on utilizing Internet-scale sensor network technology to construct a framework for unifying robotic systems. Our framework web-enables a surveillance robot's sensor observations and provides a web interface to the robot's actuators, letting robots seamlessly integrate into web applications. In addition, the framework eliminates most prerequisite robotics knowledge, allowing for the creation of general web-based robotics applications, and provides mechanisms to create applications that can interface with any robot. Frameworks such as this one are key to solving large-scale mobile robotics implementation problems. We provide an overview of previous Internet-scale sensor networks; Sensorpedia (an ad hoc Internet-scale sensor network); our framework for integrating robots with Sensorpedia; two applications that illustrate our framework's ability to support general web-based robotic control; and experimental results that illustrate our framework's scalability, feasibility, and resource requirements.
Riposan, Adina; Taylor, Ian; Owens, David R; Rana, Omer; Conley, Edward C
2007-01-01
In this paper we present mechanisms for imaging and spectral data discovery, as applied to the early detection of pathologic mechanisms underlying diabetic retinopathy in research and clinical trial scenarios. We discuss the Alchemist framework, built using a generic peer-to-peer architecture, supporting distributed database queries and complex search algorithms based on workflow. The Alchemist is a domain-independent search mechanism that can be applied to search and data discovery scenarios in many areas. We illustrate Alchemist's ability to perform complex searches composed as a collection of peer-to-peer overlays, Grid-based services, and workflows, applied here to image and spectral data discovery for the early detection and prevention of retinal disease and for investigational drug discovery. The Alchemist framework is built on top of decentralised technologies and uses industry standards such as Web services and SOAP for messaging.
Reliable Execution Based on CPN and Skyline Optimization for Web Service Composition
Chen, Liping; Ha, Weitao; Zhang, Guojun
2013-01-01
With the development of SOA, complex problems can be solved by combining available individual services and ordering them to best suit the user's requirements. Web service composition is widely used in business environments. Given the inherent autonomy and heterogeneity of component web services, it is difficult to predict the behavior of the overall composite service, so transactional properties and nonfunctional quality-of-service (QoS) properties are crucial when selecting the web services that take part in the composition. Transactional properties ensure the reliability of the composite Web service, and QoS properties identify the best candidate web services from a set of functionally equivalent services. In this paper we define a Colored Petri Net (CPN) model which involves the transactional properties of web services in the composition process. To ensure reliable and correct execution, unfolding processes of the CPN are followed. The execution of a transactional composite Web service (TCWS) is formalized by CPN properties. To identify the best services by QoS properties from the candidate service sets formed in the TCWS-CPN, we use skyline computation to retrieve the dominant Web services; this avoids the significant information loss incurred when individual scores are reduced to an overall similarity. We evaluate our approach experimentally using both real and synthetically generated datasets. PMID:23935431
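The skyline step can be illustrated in a few lines of Python: keep only services whose QoS vectors are not dominated by any other candidate, rather than collapsing the scores into a single weighted sum. The QoS values below are invented, and the dimensions are oriented so that lower is better.

```python
# A minimal skyline computation: keep only services not dominated by any
# other candidate on their QoS vector. Tuples are (response_time, cost,
# -reliability), oriented so that lower is better in every dimension.
def dominates(a, b):
    """True if a is no worse than b everywhere and strictly better somewhere."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def skyline(candidates):
    return {
        name: qos
        for name, qos in candidates.items()
        if not any(dominates(other, qos) for other in candidates.values())
    }

# Invented QoS values for three functionally equivalent services:
services = {
    "S1": (120, 0.05, -0.99),  # fast, mid-priced, very reliable
    "S2": (300, 0.02, -0.95),  # slow but cheapest
    "S3": (150, 0.06, -0.95),  # dominated by S1 in every dimension
}
print(skyline(services))  # S1 and S2 survive; S3 is pruned
```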
NASA Astrophysics Data System (ADS)
Girvetz, E. H.; Zganjar, C.; Raber, G. T.; Hoekstra, J.; Lawler, J. J.; Kareiva, P.
2008-12-01
Now that there is overwhelming evidence of global climate change, scientists, managers, and planners (i.e. practitioners) need to assess the potential impacts of climate change on particular ecological systems, within specific geographic areas, and at spatial scales they care about, in order to make better land management, planning, and policy decisions. Unfortunately, this application of climate science to real-world decisions and planning has proceeded too slowly because we lack tools for translating cutting-edge climate science and climate-model outputs into something managers and planners can work with at local or regional scales (CCSP 2008). To help increase the accessibility of climate information, we have developed a freely available, easy-to-use, web-based climate-change analysis toolbox, called ClimateWizard, for assessing how climate has changed and is projected to change at specific geographic locations throughout the world. ClimateWizard uses geographic information systems (GIS), web services (SOAP/XML), statistical analysis platforms (e.g. R-project), and web-based mapping services (e.g. Google Earth/Maps, KML/GML) to provide a variety of analyses (e.g. trends and departures) and outputs (e.g. maps, graphs, tables, GIS layers). Because ClimateWizard analyzes large climate datasets stored remotely on powerful computers, users of the tool do not need fast computers or expensive software, but simply access to the internet. The analysis results are then provided to users in a Google Maps webpage tailored to the specific climate-change question being asked. ClimateWizard is not a static product, but rather a framework to be built upon and modified to suit the purposes of specific scientific, management, and policy questions. For example, it can be expanded to include bioclimatic variables (e.g. evapotranspiration) and marine data (e.g. sea surface temperature), as well as improved future climate projections and climate-change impact analyses involving hydrology, vegetation, wildfire, disease, and food security. By harnessing the power of computer and web-based technologies, ClimateWizard puts local, regional, and global climate-change analyses in the hands of a wider array of managers, planners, and scientists.
Aerts, Jozef
2017-01-01
RESTful web services are nowadays state-of-the-art in business transactions over the internet. They are, however, not much used in medical informatics and in clinical research, especially not in Europe. Our aim was to make an inventory of RESTful web services that can be used in medical informatics and clinical research, including those that can help in patient empowerment in the DACH region and in Europe, and to develop new RESTful web services for use in clinical research and regulatory review. A literature search on available RESTful web services was performed, and new RESTful web services were developed on an application server using the Java language. Most of the web services found originate from institutes and organizations in the USA, whereas no similar web services made available by European organizations could be found. New RESTful web services were developed for LOINC code lookup, for UCUM conversions, and for use with CDISC standards. A comparison is made between "top down" and "bottom up" web services, the latter meant to answer concrete questions immediately. The lack of RESTful web services made available by European organizations in healthcare and medical informatics is striking. RESTful web services may in the near future play a major role in medical informatics and, when localized for German and other European languages, can considerably facilitate patient empowerment. This, however, requires an EU equivalent of the US National Library of Medicine.
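By way of illustration, consuming such a RESTful lookup service from a client typically takes only a few lines; the endpoint URL and response shape below are hypothetical stand-ins, since the paper's actual service addresses are not given here.

```python
import requests

BASE = "https://example.org/api"  # hypothetical service root

def loinc_lookup(code):
    """Fetch the record for a LOINC code from a (hypothetical) REST service."""
    resp = requests.get(f"{BASE}/loinc/{code}", timeout=10)
    resp.raise_for_status()
    return resp.json()  # assumed shape: {"code": "...", "name": "..."}

print(loinc_lookup("2345-7"))  # 2345-7 is the LOINC code for serum glucose
```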
Semantic Service Design for Collaborative Business Processes in Internetworked Enterprises
NASA Astrophysics Data System (ADS)
Bianchini, Devis; Cappiello, Cinzia; de Antonellis, Valeria; Pernici, Barbara
Modern collaborating enterprises can be seen as borderless organizations whose processes are dynamically transformed and integrated with those of their partners (Internetworked Enterprises, IE), thus enabling the design of collaborative business processes. The adoption of Semantic Web and service-oriented technologies for implementing collaboration in such distributed and heterogeneous environments promises significant benefits. IE can model their own processes independently by using the Software as a Service (SaaS) paradigm. Each enterprise maintains a catalog of available services, and these can be shared across IE and reused to build up complex collaborative processes. Moreover, each enterprise can adopt its own terminology and concepts to describe business processes and component services. This creates a requirement to manage semantic heterogeneity in process descriptions that are distributed across different enterprise systems. To enable effective service-based collaboration, IE have to standardize their process descriptions and model them through component services using the same approach and principles. For enabling collaborative business processes across IE, services should be designed following a homogeneous approach, possibly maintaining a uniform level of granularity. In the paper we propose an ontology-based semantic modeling approach intended to enrich and reconcile the semantics of process descriptions, to facilitate process knowledge management, and to enable semantic service design (by discovery, reuse, and integration of process elements/constructs). The approach brings together Semantic Web technologies, techniques in process modeling, ontology building, and semantic matching in order to provide a comprehensive semantic modeling framework.
Smits, Rochelle; Bryant, Jamie; Sanson-Fisher, Rob; Tzelepis, Flora; Henskens, Frans; Paul, Christine; Stevenson, William
2014-03-14
Effective communication with cancer patients and their families about their disease, treatment options, and possible outcomes may improve psychosocial outcomes. However, traditional approaches to providing information to patients, including verbal information and written booklets, have a number of shortcomings centered on their limited ability to meet patient preferences and literacy levels. New-generation Web-based technologies offer an innovative and pragmatic solution for overcoming these limitations by providing a platform for interactive information seeking, information sharing, and user-centered tailoring. The primary goal of this paper is to discuss the advantages of comprehensive and iterative Web-based technologies for health information provision and propose a four-phase framework for the development of Web-based information tools. The proposed framework draws on our experience of constructing a Web-based information tool for hematological cancer patients and their families. The framework is based on principles for the development and evaluation of complex interventions and draws on the Agile methodology of software programming that emphasizes collaboration and iteration throughout the development process. The DoTTI framework provides a model for a comprehensive and iterative approach to the development of Web-based informational tools for patients. The process involves 4 phases of development: (1) Design and development, (2) Testing early iterations, (3) Testing for effectiveness, and (4) Integration and implementation. At each step, stakeholders (including researchers, clinicians, consumers, and programmers) are engaged in consultations to review progress, provide feedback on versions of the Web-based tool, and based on feedback, determine the appropriate next steps in development. This 4-phase framework is evidence-informed and consumer-centered and could be applied widely to develop Web-based programs for a diverse range of diseases.
Integrated brokering framework for multi-disciplinary research
NASA Astrophysics Data System (ADS)
Craglia, M.; Vaccari, L.; Santoro, M.; Nativi, S.
2012-04-01
EuroGEOSS is a research project supporting the development of the GEOSS Common Infrastructure and the multi-disciplinary research efforts needed to address global sustainability research. The framework it has developed to integrate data, services, and models from different disciplines is based on a brokering approach that advances the traditional Service Oriented Architecture of spatial data infrastructures. In this paper we demonstrate the added value of this approach. The scientific question we address in this example is to identify ecosystems similar to the ones found in the protected area of "Sierra De Queixa Montes De Invernadeiro Nature Park" in Galicia, Spain. To do this, the user would identify an analytical model suitable to address the question, such as the eHabitat model developed by the European Commission Joint Research Centre, and would then use the EuroGEOSS broker to look for the data necessary to run the model, such as the boundaries of the selected park, mean drought index, percentage of forest cover, temperature and rainfall, and elevation. The eHabitat model is used to compute the likelihood of finding ecosystems in the selected window that are similar to those in the selected protected area. The end-user would therefore run the model directly on the web as a web processing service with the data identified in the broker (no need to download the data), use this information to draw a new area to be protected with ecological conditions similar to the initial protected area, and display the list of endangered species (to be defined) expected to be found in the new location. Finally, the user can also mine Web 2.0 social networks via the broker to identify pictures of the selected species in the area of interest. This integrated approach based on open standards and interfaces allows the user to find, access, and use the data and models in a transparent way, focusing on the research questions to be addressed rather than the technology that delivers the inputs. It is therefore a true demonstration of a service underpinning multi-disciplinary research.
Ryberg, Martin; Kristiansson, Erik; Sjökvist, Elisabet; Nilsson, R Henrik
2009-01-01
The environmental and distributional data associated with fungal internal transcribed spacer (ITS) sequences in GenBank are investigated and a new web-based tool with which these sequences can be explored is introduced. All fungal ITS sequences in GenBank were classified as either identified to species level or insufficiently identified and compared using BLAST. The results are made available as a biweekly updated web service that can be queried to retrieve all insufficiently identified sequences (IIS) associated with any fungal genus. The most commonly available annotation items in GenBank are isolation source (55%); country of origin (50%); and specific host (38%). The molecular sampling of fungi shows a bias towards North America, Europe, China, and Japan whereas vast geographical areas remain effectively unexplored. Mycorrhizal and parasitic genera are on average associated with more IIS than are saprophytic taxa. Glomus, Alternaria, and Tomentella are the genera represented by the highest number of insufficiently identified ITS sequences in GenBank. The web service presented (http://andromeda.botany.gu.se/emerencia.html#genus_search) offers new means, particularly for mycorrhizal and plant pathogenic fungi, to examine the IIS in GenBank in a taxon-oriented framework and to explore their metadata in an easily accessible and time-efficient manner.
Web service module for access to g-Lite
NASA Astrophysics Data System (ADS)
Goranova, R.; Goranov, G.
2012-10-01
G-Lite is a lightweight grid middleware for grid computing installed on all clusters of the European Grid Infrastructure (EGI). The middleware is partially service-oriented and does not provide well-defined Web services for job management. The existing Web services in the environment cannot be directly used by grid users for building service compositions in the EGI. In this article we present a module of well-defined Web services for job management in the EGI. We describe the architecture of the module and the design of the developed Web services. The presented Web services are composable and can participate in service compositions (workflows). An example of usage of the module with tools for service compositions in g-Lite is shown.
A PageRank-based reputation model for personalised manufacturing service recommendation
NASA Astrophysics Data System (ADS)
Zhang, W. Y.; Zhang, S.; Guo, S. S.
2017-05-01
The number of manufacturing services for cross-enterprise business collaborations is increasing rapidly because of the explosive growth of Web service technologies. This trend demands intelligent and robust models to address information overload and enable efficient discovery of manufacturing services. In this paper, we present a personalised manufacturing service recommendation approach which combines a PageRank-based reputation model and a collaborative filtering technique in a unified framework for recommending the right manufacturing services to an active service user for supply chain deployment. The novel aspect of this research is adapting the PageRank algorithm to a service-oriented multi-echelon supply chain network in order to determine both user reputation and service reputation. In addition, it explores the use of these methods in alleviating the data sparsity and cold-start problems that hinder traditional collaborative filtering techniques. A case study is conducted to validate the practicality and effectiveness of the proposed approach in recommending the right manufacturing services to active service users.
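A toy version of the reputation idea: run PageRank over a directed graph in which an edge u -> v means that u (a user or service) endorses v, for instance by invoking or rating it, so that reputation accumulates at frequently endorsed services. The graph and damping factor below are illustrative, not the paper's.

```python
# Plain-Python PageRank over an endorsement graph (illustrative only).
def pagerank(graph, damping=0.85, iters=50):
    nodes = list(graph)
    rank = {n: 1.0 / len(nodes) for n in nodes}
    for _ in range(iters):
        new = {n: (1 - damping) / len(nodes) for n in nodes}
        for u, targets in graph.items():
            if targets:
                share = damping * rank[u] / len(targets)
                for v in targets:
                    new[v] += share
            else:  # dangling node: spread its rank uniformly
                for v in nodes:
                    new[v] += damping * rank[u] / len(nodes)
        rank = new
    return rank

# Hypothetical graph: users endorse services they invoked successfully.
g = {"userA": ["svc1"], "userB": ["svc1", "svc2"], "svc1": [], "svc2": ["svc1"]}
ranks = pagerank(g)
print(max(ranks, key=ranks.get))  # svc1 accumulates the most reputation
```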
BOWS (bioinformatics open web services) to centralize bioinformatics tools in web services.
Velloso, Henrique; Vialle, Ricardo A; Ortega, J Miguel
2015-06-02
Bioinformaticians face a range of difficulties in getting locally installed tools running and producing results; they would greatly benefit from a system that could centralize most of the tools, using an easy interface for input and output. Web services, due to their universal nature and widely known interface, constitute a very good option to achieve this goal. Bioinformatics open web services (BOWS) is a system based on generic web services produced to allow programmatic access to applications running on high-performance computing (HPC) clusters. BOWS mediates access to registered tools by providing front-end and back-end web services. Programmers can install applications on HPC clusters in any programming language and use the back-end service to check for new jobs and their parameters, and then to send the results to BOWS. Programs running on ordinary computers consume the BOWS front-end service to submit new processes and read results. BOWS compiles Java clients, which encapsulate the front-end web service requests, and automatically creates a web page that lists the registered applications and clients. Registered applications can be accessed from virtually any programming language through web services, or using the standard Java clients. The back-end can run on HPC clusters, allowing bioinformaticians to remotely run high-processing-demand applications directly from their machines.
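The front-end/back-end division of labour described above can be sketched as a cluster-side worker loop; the endpoint paths and JSON fields are assumptions for illustration and may differ from the real BOWS API.

```python
import subprocess
import time

import requests

BACKEND = "https://example.org/bows"  # hypothetical back-end service root

def worker_loop(tool_cmd):
    """Poll for queued jobs, run the registered tool, post results back.
    Endpoint paths and JSON fields are assumed for illustration."""
    while True:
        job = requests.get(f"{BACKEND}/jobs/next", timeout=30).json()
        if job:
            result = subprocess.run(tool_cmd + job["args"],
                                    capture_output=True, text=True)
            requests.post(f"{BACKEND}/jobs/{job['id']}/result",
                          json={"stdout": result.stdout}, timeout=30)
        else:
            time.sleep(5)  # nothing queued; back off before polling again

# worker_loop(["blastp", "-outfmt", "6"])  # example cluster-side tool
```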
[Tumor Data Interacted System Design Based on Grid Platform].
Liu, Ying; Cao, Jiaji; Zhang, Haowei; Zhang, Ke
2016-06-01
In order to satisfy the demands of processing massive, heterogeneous tumor clinical data and of multi-center collaborative diagnosis and treatment of tumor diseases, a Tumor Data Interacted System (TDIS) was established on a grid platform, realizing a virtualized tumor diagnosis service platform that shares tumor information in real time under standardized management. The system adopts Globus Toolkit 4.0 tools to build an open grid service framework and encapsulates data resources based on the Web Services Resource Framework (WSRF). It uses middleware technology to provide a unified access interface for heterogeneous data interaction, optimizing the interactive process with virtualized services to query and call tumor information resources flexibly. For massive amounts of heterogeneous tumor data, a federated storage and multiple-authorization mode is selected as the security service mechanism, with real-time monitoring and load balancing. The system can cooperatively manage multi-center heterogeneous tumor data to realize tumor patient data query, sharing, and analysis, and can compare and match resources in typical clinical databases or clinical information databases at other service nodes, thus assisting doctors in consulting similar cases and drawing up multidisciplinary treatment plans for tumors. Consequently, the system can improve the efficiency of tumor diagnosis and treatment and promote the development of the collaborative tumor diagnosis model.
Web Services--A Buzz Word with Potentials
János T. Füstös
2006-01-01
The simplest definition of a web service is an application that provides a web API. The web API exposes the functionality of the solution to other applications. The web API relies on other Internet-based technologies to manage communications. The resulting web services are pervasive, vendor-independent, language-neutral, and very low-cost. The main purpose of a web API...
NASA Astrophysics Data System (ADS)
Yu, Weishui; Luo, Changshou; Zheng, Yaming; Wei, Qingfeng; Cao, Chengzhong
2017-09-01
To address the “last kilometer” problem in agricultural science and technology information services, we analyzed the feasibility, necessity, and advantages of WebApps for agricultural information services, and discussed modes of using WebApps in such services based on a requirements analysis and the functions of WebApps. To overcome existing native apps' drawbacks of difficult installation and weak compatibility across mobile operating systems, the Beijing Agricultural Sci-tech Service Hotline WebApp was developed based on HTML and Java technologies. The WebApp has greater compatibility and simpler operation than a native app; moreover, it can be linked to the WeChat public platform, making it easy to disseminate and able to run directly without an installation process. The WebApp was used to provide agricultural expert consulting services and to push agricultural information, and achieved good preliminary results. Finally, we summarize the innovative application of WebApps in agricultural consulting services and discuss prospects for WebApps in agricultural information services.
Collaborative development of predictive toxicology applications.
Hardy, Barry; Douglas, Nicki; Helma, Christoph; Rautenberg, Micha; Jeliazkova, Nina; Jeliazkov, Vedrin; Nikolova, Ivelina; Benigni, Romualdo; Tcheremenskaia, Olga; Kramer, Stefan; Girschick, Tobias; Buchwald, Fabian; Wicker, Joerg; Karwath, Andreas; Gütlein, Martin; Maunz, Andreas; Sarimveis, Haralambos; Melagraki, Georgia; Afantitis, Antreas; Sopasakis, Pantelis; Gallagher, David; Poroikov, Vladimir; Filimonov, Dmitry; Zakharov, Alexey; Lagunin, Alexey; Gloriozova, Tatyana; Novikov, Sergey; Skvortsova, Natalia; Druzhilovsky, Dmitry; Chawla, Sunil; Ghosh, Indira; Ray, Surajit; Patel, Hitesh; Escher, Sylvia
2010-08-31
OpenTox provides an interoperable, standards-based Framework for the support of predictive toxicology data management, algorithms, modelling, validation and reporting. It is relevant to satisfying the chemical safety assessment requirements of the REACH legislation as it supports access to experimental data, (Quantitative) Structure-Activity Relationship models, and toxicological information through an integrating platform that adheres to regulatory requirements and OECD validation principles. Initial research defined the essential components of the Framework including the approach to data access, schema and management, use of controlled vocabularies and ontologies, architecture, web service and communications protocols, and selection and integration of algorithms for predictive modelling. OpenTox provides end-user oriented tools to non-computational specialists, risk assessors, and toxicological experts in addition to Application Programming Interfaces (APIs) for developers of new applications. OpenTox actively supports public standards for data representation, interfaces, vocabularies and ontologies, Open Source approaches to core platform components, and community-based collaboration approaches, so as to progress system interoperability goals. The OpenTox Framework includes APIs and services for compounds, datasets, features, algorithms, models, ontologies, tasks, validation, and reporting which may be combined into multiple applications satisfying a variety of different user needs. OpenTox applications are based on a set of distributed, interoperable OpenTox API-compliant REST web services. The OpenTox approach to ontology allows for efficient mapping of complementary data coming from different datasets into a unifying structure having a shared terminology and representation. Two initial OpenTox applications are presented as an illustration of the potential impact of OpenTox for high-quality and consistent structure-activity relationship modelling of REACH-relevant endpoints: ToxPredict which predicts and reports on toxicities for endpoints for an input chemical structure, and ToxCreate which builds and validates a predictive toxicity model based on an input toxicology dataset. Because of the extensible nature of the standardised Framework design, barriers of interoperability between applications and content are removed, as the user may combine data, models and validation from multiple sources in a dependable and time-effective way.
Potentials of Web 2.0 for Diabetes Education of Adolescent Patients
NASA Astrophysics Data System (ADS)
Shabestari, Omid; Roudsari, Abdul
Diabetes is a very common chronic disease which produces complications in almost all body organs and consumes a huge amount of the health budget. Although education has proved to be useful in diabetes management, there is a great need to improve the availability of these courses for the increasing number of diabetic patients. E-learning can facilitate this service, but the current education system should be tailored towards e-learning standards. Amongst diabetic patients, adolescents, as computer natives, are suggested as the best target for e-learning diabetes education. Given its features, Web 2.0 can be a very good technology on which to build a framework for diabetes education and the consequent evaluation of this education.
Information Retrieval System for Japanese Standard Disease-Code Master Using XML Web Service
Hatano, Kenji; Ohe, Kazuhiko
2003-01-01
An information retrieval system for the Japanese Standard Disease-Code Master using XML Web Services has been developed. XML Web Services are a new distributed processing system based on standard internet technologies. With the seamless remote method invocation of XML Web Services, users are able to get the latest disease-code master information from their rich desktop applications or internet web sites that refer to this service. PMID:14728364
Hinton, Elizabeth G; Oelschlegel, Sandra; Vaughn, Cynthia J; Lindsay, J Michael; Hurst, Sachiko M; Earl, Martha
2013-01-01
This study utilizes an informatics tool to analyze a robust literature search service in an academic medical center library. Structured interviews with librarians were conducted focusing on the benefits of such a tool, expectations for performance, and visual layout preferences. The resulting application utilizes Microsoft SQL Server and .Net Framework 3.5 technologies, allowing for the use of a web interface. Customer tables and MeSH terms are included. The National Library of Medicine MeSH database and entry terms for each heading are incorporated, resulting in functionality similar to searching the MeSH database through PubMed. Data reports will facilitate analysis of the search service.
AliEn—ALICE environment on the GRID
NASA Astrophysics Data System (ADS)
Saiz, P.; Aphecetche, L.; Bunčić, P.; Piskač, R.; Revsbech, J.-E.; Šego, V.; Alice Collaboration
2003-04-01
AliEn (ALICE Environment, http://alien.cern.ch) is a Grid framework built on top of the latest Internet standards for information exchange and authentication (SOAP, PKI) and common Open Source components. AliEn provides a virtual file catalogue that allows transparent access to distributed datasets, and a number of collaborating Web services which implement authentication, job execution, file transport, performance monitoring, and event logging. In the paper we present the architecture and components of the system.
Semantics of data and service registration to advance interdisciplinary information and data access.
NASA Astrophysics Data System (ADS)
Fox, P. P.; McGuinness, D. L.; Raskin, R.; Sinha, A. K.
2008-12-01
In developing an application of semantic web methods and technologies to address the integration of heterogeneous and interdisciplinary earth-science datasets, we have developed methodologies for creating rich semantic descriptions (ontologies) of the application domains. We have leveraged and extended where possible existing ontology frameworks such as SWEET. As a result of this semantic approach, we have also utilized ontologic descriptions of key enabling elements of the application, such as the registration of datasets with ontologies at several levels of granularity. This has enabled the location and usage of the data across disciplines. We are also realizing the need to develop similar semantic registration of web service data holdings as well as those provided with community and/or standard markup languages (e.g. GeoSciML). This level of semantic enablement extending beyond domain terms and relations significantly enhances our ability to provide a coherent semantic data framework for data and information systems. Much of this work is on the frontier of technology development and we will present the current and near-future capabilities we are developing. This work arises from the Semantically-Enabled Science Data Integration (SESDI) project, which is an NASA/ESTO/ACCESS-funded project involving the High Altitude Observatory at the National Center for Atmospheric Research (NCAR), McGuinness Associates Consulting, NASA/JPL and Virginia Polytechnic University.
Zhang, Yi-Fan; Gou, Ling; Tian, Yu; Li, Tian-Chang; Zhang, Mao; Li, Jing-Song
2016-05-01
Clinical decision support (CDS) systems provide clinicians and other health care stakeholders with patient-specific assessments or recommendations to aid in the clinical decision-making process. Despite their demonstrated potential for improving health care quality, the widespread availability of CDS systems has been limited mainly by the difficulty and cost of sharing CDS knowledge among heterogeneous healthcare information systems. The purpose of this study was to design and develop a sharable clinical decision support (S-CDS) system that meets this challenge. The fundamental knowledge base consists of independent and reusable knowledge modules (KMs) to meet core CDS needs, wherein each KM is semantically well defined based on the standard information model, terminologies, and representation formalisms. A semantic web service framework was developed to identify, access, and leverage these KMs across diverse CDS applications and care settings. The S-CDS system has been validated in two distinct client CDS applications. Model-level evaluation results confirmed coherent knowledge representation. Application-level evaluation results reached an overall accuracy of 98.66 % and a completeness of 96.98 %. The evaluation results demonstrated the technical feasibility and application prospect of our approach. Compared with other CDS engineering efforts, our approach facilitates system development and implementation and improves system maintainability, scalability and efficiency, which contribute to the widespread adoption of effective CDS within the healthcare domain.
Park, Byung H; Karpinets, Tatiana V; Syed, Mustafa H; Leuze, Michael R; Uberbacher, Edward C
2010-12-01
The Carbohydrate-Active Enzyme (CAZy) database provides a rich set of manually annotated enzymes that degrade, modify, or create glycosidic bonds. Despite the rich and invaluable information stored in the database, software tools utilizing this information for annotation of newly sequenced genomes by CAZy families are limited. We have employed two annotation approaches to fill the gap between the manually curated high-quality protein sequences collected in the CAZy database and the growing number of other protein sequences produced by genome or metagenome sequencing projects. The first approach is based on a similarity search against the entire nonredundant set of sequences in the CAZy database. The second approach performs annotation using links, or correspondences, between CAZy families and protein family domains; the links were discovered using an association rule learning algorithm applied to sequences from the CAZy database. The approaches complement each other, and in combination achieved high specificity and sensitivity when cross-evaluated against the manually curated genomes of Clostridium thermocellum ATCC 27405 and Saccharophagus degradans 2-40. The capability of the proposed framework to predict the function of unknown protein domains and of hypothetical proteins in the genome of Neurospora crassa is demonstrated. The framework is implemented as a Web service, the CAZymes Analysis Toolkit, and is available at http://cricket.ornl.gov/cgi-bin/cat.cgi.
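The second annotation route (domain-to-family links) reduces, at its simplest, to applying a learned mapping from protein-family domains to CAZy families; the mapping below is a hypothetical stand-in for the association rules the authors mined.

```python
# Sketch of domain-based CAZy annotation: annotate a protein by the CAZy
# families implied by the Pfam domains found in it. The mapping is invented
# for illustration, not taken from the paper's mined rules.
DOMAIN_TO_CAZY = {
    "PF00150": "GH5",  # cellulase domain -> glycoside hydrolase family 5
    "PF01270": "GH8",  # glycosyl hydrolase family 8 domain
}

def annotate(domains):
    """Return the CAZy families implied by a protein's domain content."""
    return sorted({DOMAIN_TO_CAZY[d] for d in domains if d in DOMAIN_TO_CAZY})

print(annotate(["PF00150", "PF09999"]))  # -> ['GH5']
```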
Panacea, a semantic-enabled drug recommendations discovery framework.
Doulaverakis, Charalampos; Nikolaidis, George; Kleontas, Athanasios; Kompatsiaris, Ioannis
2014-03-06
Personalized drug prescription can benefit from the use of intelligent information management and sharing. International standard classifications and terminologies have been developed in order to provide unique and unambiguous information representation, and such standards can serve as the basis of automated decision support systems for drug-drug and drug-disease interaction discovery. Semantic Web technologies have also been proposed in earlier works to support such systems. This paper presents Panacea, a semantic framework capable of offering drug-drug and drug-disease interaction discovery. To enable this kind of service, medical information and terminology had to be translated into ontological terms and appropriately coupled with medical knowledge of the field. International standard classifications and terminologies provide the backbone of the common representation of medical data, while the medical knowledge of drug interactions is represented by a rule base which makes use of the aforementioned standards. Representation is based on a lightweight ontology. A layered reasoning approach is implemented: at the first layer, ontological inference is used to discover underlying knowledge, while at the second layer a two-step rule selection strategy is followed, resulting in a computationally efficient reasoning approach. Details of the system architecture are presented, along with an outline of the difficulties that had to be overcome. Panacea is evaluated both in terms of the quality of recommendations against real clinical data and in terms of performance. The recommendation quality evaluation gave useful insights regarding requirements for real-world deployment and revealed several parameters that affect the recommendation results. Performance-wise, Panacea is compared to GalenOWL, a drug recommendation service previously published by the authors, highlighting the differences in modeling and approach and pinpointing the advantages of Panacea. Overall, the paper presents a framework for providing an efficient drug recommendation service in which Semantic Web technologies are coupled with traditional business rule engines.
Shen, Nelson; Yufe, Shira; Saadatfard, Omid; Sockalingam, Sanjeev; Wiljer, David
2017-01-01
Information system research has stressed the importance of theory in understanding how user perceptions can motivate the use and adoption of technology such as web-based continuing professional development programs for interprofessional education (WCPD-IPE). A systematic review was conducted to provide an information system perspective on the current state of WCPD-IPE program evaluation and how current evaluations capture essential theoretical constructs in promoting technology adoption. Six databases were searched to identify studies evaluating WCPD-IPE. Three investigators determined eligibility of the articles. Evaluation items extracted from the studies were assessed using the Kirkpatrick-Barr framework and mapped to the Benefits Evaluation Framework. Thirty-seven eligible studies yielded 362 evaluation items for analysis. Most items (n = 252) were assessed as Kirkpatrick-Barr level 1 (reaction) and were mainly focused on the quality (information, service, and system) and satisfaction dimensions of the Benefits Evaluation. System quality was the least evaluated quality dimension, accounting for 26 items across 13 studies. WCPD-IPE use was reported in 17 studies, and its antecedent factors were evaluated in varying degrees of comprehensiveness. Although user reactions were commonly evaluated, greater focus on user perceptions of system quality (i.e., functionality and performance), usefulness, and usability of the web-based platform is required. Surprisingly, WCPD-IPE use was reported in less than half of the studies. This is problematic, as use is a prerequisite to realizing any individual, organizational, or societal benefit of WCPD-IPE. This review proposes an integrated framework which accounts for these factors and provides a theoretically grounded guide for future evaluations.
Persistence and availability of Web services in computational biology.
Schultheiss, Sebastian J; Münch, Marc-Christian; Andreeva, Gergana D; Rätsch, Gunnar
2011-01-01
We have conducted a study on the long-term availability of bioinformatics Web services: an observation of 927 Web services published in the annual Nucleic Acids Research Web Server Issues between 2003 and 2009. We found that 72% of Web sites are still available at the published addresses, only 9% of services are completely unavailable. Older addresses often redirect to new pages. We checked the functionality of all available services: for 33%, we could not test functionality because there was no example data or a related problem; 13% were truly no longer working as expected; we could positively confirm functionality only for 45% of all services. Additionally, we conducted a survey among 872 Web Server Issue corresponding authors; 274 replied. 78% of all respondents indicate their services have been developed solely by students and researchers without a permanent position. Consequently, these services are in danger of falling into disrepair after the original developers move to another institution, and indeed, for 24% of services, there is no plan for maintenance, according to the respondents. We introduce a Web service quality scoring system that correlates with the number of citations: services with a high score are cited 1.8 times more often than low-scoring services. We have identified key characteristics that are predictive of a service's survival, providing reviewers, editors, and Web service developers with the means to assess or improve Web services. A Web service conforming to these criteria receives more citations and provides more reliable service for its users. The most effective way of ensuring continued access to a service is a persistent Web address, offered either by the publishing journal, or created on the authors' own initiative, for example at http://bioweb.me. The community would benefit the most from a policy requiring any source code needed to reproduce results to be deposited in a public repository.
Persistence and Availability of Web Services in Computational Biology
Schultheiss, Sebastian J.; Münch, Marc-Christian; Andreeva, Gergana D.; Rätsch, Gunnar
2011-01-01
We have conducted a study on the long-term availability of bioinformatics Web services: an observation of 927 Web services published in the annual Nucleic Acids Research Web Server Issues between 2003 and 2009. We found that 72% of Web sites are still available at the published addresses, while only 9% of services are completely unavailable. Older addresses often redirect to new pages. We checked the functionality of all available services: for 33%, we could not test functionality because there was no example data or a related problem; 13% were truly no longer working as expected; we could positively confirm functionality only for 45% of all services. Additionally, we conducted a survey among 872 Web Server Issue corresponding authors; 274 replied. 78% of all respondents indicate their services have been developed solely by students and researchers without a permanent position. Consequently, these services are in danger of falling into disrepair after the original developers move to another institution, and indeed, for 24% of services, there is no plan for maintenance, according to the respondents. We introduce a Web service quality scoring system that correlates with the number of citations: services with a high score are cited 1.8 times more often than low-scoring services. We have identified key characteristics that are predictive of a service's survival, providing reviewers, editors, and Web service developers with the means to assess or improve Web services. A Web service conforming to these criteria receives more citations and provides more reliable service for its users. The most effective way of ensuring continued access to a service is a persistent Web address, offered either by the publishing journal, or created on the authors' own initiative, for example at http://bioweb.me. The community would benefit the most from a policy requiring any source code needed to reproduce results to be deposited in a public repository. PMID:21966383
NASA Astrophysics Data System (ADS)
Khalilian, Madjid; Boroujeni, Farsad Zamani; Mustapha, Norwati
Nowadays, the growth of the web makes it difficult to search and browse for useful information, especially in specific domains. Some portions of the web remain largely underdeveloped, as shown by a lack of high-quality content. An example is the botany-specific web directory, where the lack of well-structured web directories has limited users' ability to browse for required information. In this research we propose an improved framework for constructing a specific web directory. In this framework we use an anchor directory as a foundation for the primary web directory. This web directory is then completed with information gathered by an automatic component and filtered by experts. We conducted an experiment to evaluate effectiveness, efficiency, and satisfaction.
UUI: Reusable Spatial Data Services in Unified User Interface at NASA GES DISC
NASA Technical Reports Server (NTRS)
Petrenko, Maksym; Hegde, Mahabaleshwa; Bryant, Keith; Pham, Long B.
2016-01-01
Unified User Interface (UUI) is a next-generation operational data access tool that has been developed at the Goddard Earth Sciences Data and Information Services Center (GES DISC) to provide a simple, unified, and intuitive one-stop shop experience for the key data services available at GES DISC, including subsetting (Simple Subset Wizard - SSW), granule file search (Mirador), plotting (Giovanni), and other legacy spatial data services. UUI has been built on a flexible infrastructure of reusable web services: self-contained building blocks that can easily be plugged into spatial applications, including third-party clients or services, to enable new functionality as new datasets and services become available. In this presentation, we will discuss our experience in designing UUI services based on open industry standards. We will also explain how the resulting framework can be used for rapid development, deployment, and integration of spatial data services, facilitating efficient access and dissemination of spatial data sets.
Space Physics Data Facility Web Services
NASA Technical Reports Server (NTRS)
Candey, Robert M.; Harris, Bernard T.; Chimiak, Reine A.
2005-01-01
The Space Physics Data Facility (SPDF) Web services provide a distributed programming interface to a portion of the SPDF software. (A general description of Web services is available at http://www.w3.org/ and in many current software-engineering texts and articles focused on distributed programming.) The SPDF Web services distributed programming interface enables additional collaboration and integration of the SPDF software system with other software systems, in furtherance of the SPDF mission to lead collaborative efforts in the collection and utilization of space physics data and mathematical models. This programming interface conforms to all applicable Web services specifications of the World Wide Web Consortium. The interface is specified by a Web Services Description Language (WSDL) file. The SPDF Web services software consists of the following components: 1) a server program for implementation of the Web services; and 2) a software developer's kit that consists of a WSDL file, a less formal description of the interface, a Java class library (which further eases development of Java-based client software), and Java source code for an example client program that illustrates the use of the interface.
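A WSDL-described interface like this can be consumed from most languages, not only via the Java kit mentioned above. The sketch below uses Python with the zeep SOAP library; the WSDL URL and operation name are hypothetical placeholders, not the actual SPDF interface, which would have to be read from the published WSDL file.

```python
# Minimal sketch of calling a WSDL-described SOAP service from Python.
# The endpoint and operation name are hypothetical placeholders, not the
# real SPDF interface; consult the service's WSDL for the actual names.
from zeep import Client

WSDL_URL = "https://example.nasa.gov/spdf/services?wsdl"  # placeholder

def list_datasets():
    client = Client(WSDL_URL)            # parses the WSDL, builds typed stubs
    # Operations defined in the WSDL appear as methods on client.service.
    return client.service.getDatasets()  # hypothetical operation name

if __name__ == "__main__":
    for dataset in list_datasets():
        print(dataset)
```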
The EMBRACE web service collection
Pettifer, Steve; Ison, Jon; Kalaš, Matúš; Thorne, Dave; McDermott, Philip; Jonassen, Inge; Liaquat, Ali; Fernández, José M.; Rodriguez, Jose M.; Partners, INB-; Pisano, David G.; Blanchet, Christophe; Uludag, Mahmut; Rice, Peter; Bartaseviciute, Edita; Rapacki, Kristoffer; Hekkelman, Maarten; Sand, Olivier; Stockinger, Heinz; Clegg, Andrew B.; Bongcam-Rudloff, Erik; Salzemann, Jean; Breton, Vincent; Attwood, Teresa K.; Cameron, Graham; Vriend, Gert
2010-01-01
The EMBRACE (European Model for Bioinformatics Research and Community Education) web service collection is the culmination of a 5-year project that set out to investigate issues involved in developing and deploying web services for use in the life sciences. The project concluded that in order for web services to achieve widespread adoption, standards must be defined for the choice of web service technology, for semantically annotating both service function and the data exchanged, and a mechanism for discovering services must be provided. Building on this, the project developed: EDAM, an ontology for describing life science web services; BioXSD, a schema for exchanging data between services; and a centralized registry (http://www.embraceregistry.net) that collects together around 1000 services developed by the consortium partners. This article presents the current status of the collection and its associated recommendations and standards definitions. PMID:20462862
Wilber 3: A Python-Django Web Application For Acquiring Large-scale Event-oriented Seismic Data
NASA Astrophysics Data System (ADS)
Newman, R. L.; Clark, A.; Trabant, C. M.; Karstens, R.; Hutko, A. R.; Casey, R. E.; Ahern, T. K.
2013-12-01
Since 2001, the IRIS Data Management Center (DMC) WILBER II system has provided a convenient web-based interface for locating seismic data related to a particular event, and for requesting a subset of that data for download. Since its launch, both the scale of available data and the technology of web-based applications have developed significantly. Wilber 3 is a ground-up redesign that leverages a number of public and open-source projects to provide an event-oriented data request interface with a high level of interactivity and scalability for multiple data types. Wilber 3 uses the IRIS/International Federation of Digital Seismograph Networks (FDSN) web services for event data, metadata, and time-series data. Combining a carefully optimized Google Map with the highly scalable SlickGrid data API, the Wilber 3 client-side interface can load tens of thousands of events or networks/stations in a single request, and provide instantly responsive browsing, sorting, and filtering of event data and metadata in the web browser, without further reliance on the data service. The server side of Wilber 3 is a Python-Django application, one of over a dozen developed in the last year at IRIS, whose common framework, components, and administrative overhead represent a massive savings in developer resources. Requests for assembled datasets, which may include thousands of data channels and gigabytes of data, are queued and executed using the Celery distributed Python task scheduler, giving Wilber 3 the ability to operate in parallel across a large number of nodes.
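The queuing pattern described here, accepting a large request in the web tier and executing it asynchronously on workers, can be sketched with Celery as follows. The task name, broker URL, and stubbed body are illustrative assumptions, not Wilber 3's actual code.

```python
# Illustrative sketch of queuing a large dataset-assembly job with Celery,
# in the spirit of Wilber 3's design; names and broker URL are hypothetical.
from celery import Celery

app = Celery("wilber", broker="redis://localhost:6379/0")

@app.task
def assemble_dataset(event_id, channels, window_seconds):
    """Fetch and package time-series data for one event (stubbed)."""
    # A real worker would call FDSN web services for each channel and
    # bundle the results into a downloadable archive.
    return {"event": event_id, "channels": len(channels), "window": window_seconds}

# The web tier enqueues without blocking; Celery workers run jobs in parallel:
# result = assemble_dataset.delay("evt123", ["IU.ANMO.00.BHZ"], 3600)
```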
A re-conceptualization of access for 21st century healthcare.
Fortney, John C; Burgess, James F; Bosworth, Hayden B; Booth, Brenda M; Kaboli, Peter J
2011-11-01
Many e-health technologies are available to promote virtual patient-provider communication outside the context of face-to-face clinical encounters. Current digital communication modalities include cell phones, smartphones, interactive voice response, text messages, e-mails, clinic-based interactive video, home-based web-cams, mobile smartphone two-way cameras, personal monitoring devices, kiosks, dashboards, personal health records, web-based portals, social networking sites, secure chat rooms, and on-line forums. Improvements in digital access could drastically diminish the geographical, temporal, and cultural access problems faced by many patients. Conversely, a growing digital divide could create greater access disparities for some populations. As the paradigm of healthcare delivery evolves towards greater reliance on non-encounter-based digital communications between patients and their care teams, it is critical that our theoretical conceptualization of access undergoes a concurrent paradigm shift to make it more relevant for the digital age. The traditional conceptualizations and indicators of access are not well adapted to measure access to health services that are delivered digitally outside the context of face-to-face encounters with providers. This paper provides an overview of digital "encounterless" utilization, discusses the weaknesses of traditional conceptual frameworks of access, presents a new access framework, provides recommendations for how to measure access in the new framework, and discusses future directions for research on access.
Web-Based Newborn Screening System for Metabolic Diseases: Machine Learning Versus Clinicians
Chen, Wei-Hsin; Hsu, Kai-Ping; Chen, Han-Ping; Su, Xing-Yu; Tseng, Yi-Ju; Chien, Yin-Hsiu; Hwu, Wuh-Liang; Lai, Feipei
2013-01-01
Background A hospital information system (HIS) that integrates screening data and interpretation of the data is routinely requested by hospitals and parents. However, the accuracy of disease classification may be low because of the disease characteristics and the analytes used for classification. Objective The objective of this study is to describe a system that enhanced the neonatal screening system of the Newborn Screening Center at the National Taiwan University Hospital. The system was designed and deployed according to a service-oriented architecture (SOA) framework under the Web services .NET environment. The system consists of sample collection, testing, diagnosis, evaluation, treatment, and follow-up services among collaborating hospitals. To improve the accuracy of newborn screening, machine learning and optimal feature selection mechanisms were investigated for screening newborns for inborn errors of metabolism. Methods The framework of the Newborn Screening Hospital Information System (NSHIS) used the embedded Health Level Seven (HL7) standards for data exchanges among heterogeneous platforms integrated by Web services in the C# language. In this study, machine learning classification was used to predict phenylketonuria (PKU), hypermethioninemia, and 3-methylcrotonyl-CoA-carboxylase (3-MCC) deficiency. The classification methods used 347,312 newborn dried blood samples collected at the Center between 2006 and 2011. Of these, 220 newborns had values over the diagnostic cutoffs (positive cases) and 1557 had values that were over the screening cutoffs but did not meet the diagnostic cutoffs (suspected cases). The original 35 analytes and the manifested features were ranked based on F score, then combinations of the top 20 ranked features were selected as input features to support vector machine (SVM) classifiers to obtain optimal feature sets. These feature sets were tested using 5-fold cross-validation and optimal models were generated. The datasets collected in 2011 were used as predicting cases. Results The feature selection strategies were implemented and the optimal markers for PKU, hypermethioninemia, and 3-MCC deficiency were obtained. The results of the machine learning approach were compared with the cutoff scheme. The number of false-positive cases was reduced from 21 to 2 for PKU, from 30 to 10 for hypermethioninemia, and from 209 to 46 for 3-MCC deficiency. Conclusions This SOA Web service–based newborn screening system can accelerate screening procedures effectively and efficiently. An SVM learning methodology for PKU, hypermethioninemia, and 3-MCC deficiency metabolic diseases classification, including optimal feature selection strategies, is presented. By adopting the results of this study, the number of suspected cases could be reduced dramatically. PMID:23702487
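The pipeline described above, ranking analytes by F score, keeping the top-ranked features, and cross-validating an SVM, can be approximated with scikit-learn. The sketch below runs on synthetic, imbalanced data as a stand-in for the study's real analytes and cutoffs.

```python
# Generic sketch of the F-score ranking + SVM + 5-fold CV pipeline described
# above, using scikit-learn on synthetic data (not the study's real analytes).
from sklearn.datasets import make_classification
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.model_selection import StratifiedKFold, cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Imbalanced two-class problem, loosely mimicking screening data: 35 features,
# few positives.
X, y = make_classification(n_samples=1000, n_features=35, n_informative=8,
                           weights=[0.95], random_state=0)

# Rank all 35 features by ANOVA F score, keep the top 20, then train an
# RBF-kernel SVM on the selected features.
model = make_pipeline(StandardScaler(),
                      SelectKBest(f_classif, k=20),
                      SVC(kernel="rbf", class_weight="balanced"))

cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)
scores = cross_val_score(model, X, y, cv=cv, scoring="f1")
print("5-fold F1: %.3f +/- %.3f" % (scores.mean(), scores.std()))
```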
NASA Astrophysics Data System (ADS)
Spinuso, A.; Trani, L.; Rives, S.; Thomy, P.; Euchner, F.; Schorlemmer, D.; Saul, J.; Heinloo, A.; Bossu, R.; van Eck, T.
2009-04-01
The Network of Research Infrastructures for European Seismology (NERIES) is a European Commission (EC) project whose focus is networking together seismological observatories and research institutes into one integrated European infrastructure that provides access to data and data products for research. Seismological institutes and organizations in European and Mediterranean countries maintain large, geographically distributed data archives; this scenario therefore suggested a design approach based on the concept of an internet service oriented architecture (SOA) to establish a cyberinfrastructure for distributed and heterogeneous data streams and services. Moreover, one of the goals of NERIES is to design and develop a Web portal that acts as the uppermost layer of the infrastructure and provides rendering capabilities for the underlying sets of data. The Web services that are currently being designed and implemented will deliver data that has been adapted to appropriate formats. The parametric information about a seismic event is delivered using a seismology-specific Extensible Markup Language (XML) format called QuakeML (https://quake.ethz.ch/quakeml), which has been formalized and implemented in coordination with global earthquake-information agencies. Uniform Resource Identifiers (URIs) are used to assign identifiers to (1) seismic-event parameters described by QuakeML, and (2) generic resources, for example, authorities, location providers, location methods, software adopted, and so on, described by use of a data model constructed with the resource description framework (RDF) and accessible as a service. The European-Mediterranean Seismological Center (EMSC) has implemented a unique event identifier (UNID) that will create the seismic event URI used by the QuakeML data model. Access to data such as broadband waveforms, accelerometric data and station inventories will also be provided through a set of Web services that will wrap the middleware used by the seismological observatory or institute that is supplying the data. Each single application of the portal consists of a Java-based JSR-168-standard portlet (often provided with interactive maps for data discovery). In specific cases, it will be possible to distribute the deployment of the portlets among the data providers, such as seismological agencies, because of the adoption, within the distributed architecture of the NERIES portal, of the Web Services for Remote Portlets (WSRP) standard for presentation-oriented web services. The purpose of the portal is to provide the user with his own environment where he can surf and retrieve the data of interest, offering a set of shopping carts with storage and management facilities. This approach involves having the user interact with dedicated tools in order to compose personalized datasets that can be downloaded or combined with other information available either through the NERIES network of Web services or through the user's own carts. Administrative applications also are provided to perform monitoring tasks such as retrieving service statistics or scheduling submitted data requests. An administrative tool is included that allows the RDF model to be extended, within certain constraints, with new classes and properties.
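Because QuakeML is plain XML, a client can extract event parameters with a standard parser. The sketch below assumes QuakeML 1.2 namespaces and a locally saved document; adjust the namespace URIs to the version actually served.

```python
# Sketch: pulling event URIs and magnitudes out of a QuakeML document with
# the standard library. Namespaces follow QuakeML 1.2 (an assumption here);
# adjust them for the version actually served.
import xml.etree.ElementTree as ET

NS = {"q": "http://quakeml.org/xmlns/quakeml/1.2",
      "bed": "http://quakeml.org/xmlns/bed/1.2"}

def list_events(path):
    root = ET.parse(path).getroot()
    for event in root.findall(".//bed:event", NS):
        uri = event.get("publicID")  # the seismic-event URI discussed above
        mag = event.find("bed:magnitude/bed:mag/bed:value", NS)
        yield uri, None if mag is None else float(mag.text)

# for uri, mag in list_events("events.quakeml"):
#     print(uri, mag)
```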
Enhancing UCSF Chimera through web services
Huang, Conrad C.; Meng, Elaine C.; Morris, John H.; Pettersen, Eric F.; Ferrin, Thomas E.
2014-01-01
Integrating access to web services with desktop applications allows for an expanded set of application features, including performing computationally intensive tasks and convenient searches of databases. We describe how we have enhanced UCSF Chimera (http://www.rbvi.ucsf.edu/chimera/), a program for the interactive visualization and analysis of molecular structures and related data, through the addition of several web services (http://www.rbvi.ucsf.edu/chimera/docs/webservices.html). By streamlining access to web services, including the entire job submission, monitoring and retrieval process, Chimera makes it simpler for users to focus on their science projects rather than data manipulation. Chimera uses Opal, a toolkit for wrapping scientific applications as web services, to provide scalable and transparent access to several popular software packages. We illustrate Chimera's use of web services with an example workflow that interleaves use of these services with interactive manipulation of molecular sequences and structures, and we provide an example Python program to demonstrate how easily Opal-based web services can be accessed from within an application. Web server availability: http://webservices.rbvi.ucsf.edu/opal2/dashboard?command=serviceList. PMID:24861624
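Opal wraps a command-line tool behind a generic job interface: a client submits arguments, polls status, and retrieves output URLs. The rough Python sketch below illustrates that pattern with the zeep SOAP library; the operation names and status codes follow Opal's general job model but are assumptions here and should be checked against the target service's WSDL.

```python
# Rough sketch of the Opal job pattern (submit, poll, fetch outputs) using
# the zeep SOAP library. Operation names and status codes are assumptions;
# verify them against the target service's actual WSDL.
import time
from zeep import Client

def run_job(wsdl_url, arg_string, poll_seconds=10):
    client = Client(wsdl_url)
    job = client.service.launchJob(argList=arg_string)       # assumed signature
    while True:
        status = client.service.queryStatus(job.jobID)       # assumed operation
        if status.code in (4, 8):                            # assumed failed/done codes
            break
        time.sleep(poll_seconds)                             # poll until finished
    return client.service.getOutputs(job.jobID)              # URLs of result files
```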
A Tsunami-Focused Tide Station Data Sharing Framework
NASA Astrophysics Data System (ADS)
Kari, U. S.; Marra, J. J.; Weinstein, S. A.
2006-12-01
The Indian Ocean Tsunami of 26 December 2004 made it clear that information about tide stations that could be used to support detection and warning (such as location, collection and transmission capabilities, and operator identification) is insufficiently known or not readily accessible. Parties interested in addressing this problem united under the Pacific Region Integrated Data Enterprise (PRIDE), and in 2005 began a multiyear effort to develop a distributed metadata system describing tide stations, starting with pilot activities in a regional framework and focusing on tsunami detection and warning systems being developed by various agencies. First, a plain semantic description of the tsunami-focused tide station metadata was developed. The semantic metadata description was, in turn, developed into a formal metadata schema championed by the International Tsunami Information Centre (ITIC) as part of a larger effort to develop a prototype web service under the PRIDE program in 2005. Under the 2006 PRIDE program, the formal metadata schema was then expanded to corral input parameters for the TideTool application used by the Pacific Tsunami Warning Center (PTWC) to drill down into wave activity at a tide station that is located using a web service developed on this metadata schema. This effort contributed to the formalization of web service dissemination of PTWC watch and warning tsunami bulletins. During this time, the data content and sharing issues embodied in this schema have been discussed at various forums. The result is that the various stakeholders have different data provider and user perspectives (semantic content) and also different exchange formats (not limited to just XML). The challenge, then, is not only to capture all data requirements, but also to have a formal representation that is easily transformed into any specified format. The latest revision of the tide gauge schema (Version 0.3) begins to address this challenge. It encompasses a broader range of provider and user perspectives, such as station operators, warning system managers, disaster managers, and other marine hazard warning systems (such as storm surge and sea-level-change monitoring and research). In the next revision(s), we hope to take into account various relevant standards, including, specifically, the Open Geospatial Consortium (OGC) Sensor Web Enablement (SWE) Framework, which will serve all prospective stakeholders in the most useful (extensible, scalable) manner. This is because SensorML has already addressed many of the challenges we face, through very useful fundamental modeling considerations and data types that are particular to sensors in general, with perhaps some extension needed for tide gauges. As a result of developing this schema and associated client application architectures, we hope to have a much more distributed network of data providers who are able to contribute to global tide station metadata from the comfort of their own Information Technology (IT) departments.
NASA Astrophysics Data System (ADS)
Cervato, C.; Fils, D.; Bohling, G.; Diver, P.; Greer, D.; Reed, J.; Tang, X.
2006-12-01
The federation of databases is not a new endeavor. Great strides have been made, e.g., in the health and astrophysics communities. Reviews of those successes indicate that they have been able to leverage key cross-community core concepts. In its simplest implementation, a federation of databases with identical base schemas, which can be extended to address individual efforts, is relatively easy to accomplish. Efforts of groups like the Open Geospatial Consortium have shown methods to geospatially relate data between different sources. We present here a summary of CHRONOS's (http://www.chronos.org) experience with highly heterogeneous data. Our experience with the federation of very diverse databases shows that the wide variety of encoding options for items like locality, time scale, taxon ID, and other key parameters makes it difficult to effectively join data across them. However, the response to this is not to develop one large, monolithic database, which will suffer growing pains due to social, national, and operational issues, but rather to systematically develop the architecture that will enable cross-resource (database, repository, tool, interface) interaction. CHRONOS has accomplished the major hurdle of federating small IT database efforts with service-oriented and XML-based approaches. The application of easy-to-use procedures that allow groups of all sizes to implement and experiment with searches across various databases and to use externally created tools is vital. We are sharing with the geoinformatics community the difficulties with application frameworks, user authentication, standards compliance, and data storage encountered in setting up web sites and portals for various science initiatives (e.g., ANDRILL, EARTHTIME). The ability to incorporate CHRONOS data, services, and tools into the existing framework of a group is crucial to the development of a model that supports and extends the vitality of the small- to medium-sized research effort that is essential for a vibrant scientific community. This presentation will directly address issues of portal development related to JSR-168 and other portal APIs, as well as issues related to both federated and local directory-based authentication. The application of service-oriented architecture in connection with REST-based approaches is vital to facilitate service use by experienced and less experienced information technology groups. Application of these services with XML-based schemas allows for connection to third-party tools such as GIS-based tools and software designed to perform a specific scientific analysis. The connection of all these capabilities into a combined framework based on the standard XHTML Document Object Model and CSS 2.0 standards used in traditional web development will be demonstrated. CHRONOS also utilizes newer client techniques such as AJAX and cross-domain scripting, along with traditional server-side database, application, and web servers. The combination of the various components of this architecture creates an environment based on open and free standards that allows for the discovery, retrieval, and integration of tools and data.
A Methodology and a Web Platform for the Collaborative Development of Context-Aware Systems
Martín, David; López-de-Ipiña, Diego; Alzua-Sorzabal, Aurkene; Lamsfus, Carlos; Torres-Manzanera, Emilio
2013-01-01
Information and services personalization is essential for an optimal user experience. Systems have to be able to acquire data about the user's context, process them in order to identify the user's situation and, finally, adapt the functionality of the system to that situation, but the development of context-aware systems is complex. Data coming from distributed and heterogeneous sources have to be acquired, processed and managed. Several programming frameworks have been proposed in order to simplify the development of context-aware systems. These frameworks offer high-level application programming interfaces aimed at programmers, which complicates the involvement of domain experts in the development life-cycle. The participation of users who do not have programming skills but are experts in the application domain can speed up and improve the development process of these kinds of systems. Apart from that, there is a lack of methodologies to guide the development process. This article presents, as its main contributions, the implementation and evaluation of a web platform and a methodology for the collaborative development of context-aware systems by programmers and domain experts. PMID:23666131
Towards a framework for geospatial tangible user interfaces in collaborative urban planning
NASA Astrophysics Data System (ADS)
Maquil, Valérie; Leopold, Ulrich; De Sousa, Luís Moreira; Schwartz, Lou; Tobias, Eric
2018-04-01
The increasing complexity of urban planning projects today requires new approaches to better integrate stakeholders with different professional backgrounds throughout a city. Traditional tools used in urban planning are designed for experts and offer little opportunity for participation and collaborative design. This paper introduces the concept of geospatial tangible user interfaces (GTUI) and reports on the design and implementation as well as the usability of such a GTUI to support stakeholder participation in collaborative urban planning. The proposed system uses physical objects to interact with large digital maps and geospatial data projected onto a tabletop. It is implemented using a PostGIS database, a web map server providing OGC web services, the computer vision framework reacTIVision, a Java-based TUIO client, and GeoTools. We describe how a GTUI has been instantiated and evaluated within the scope of two case studies related to real-world collaborative urban planning scenarios. Our results confirm the feasibility of our proposed GTUI solutions to (a) instantiate different urban planning scenarios, (b) support collaboration, and (c) ensure acceptable usability.
Similarity Based Semantic Web Service Match
NASA Astrophysics Data System (ADS)
Peng, Hui; Niu, Wenjia; Huang, Ronghuai
Semantic web service discovery aims at returning the most closely matching advertised services to the service requester by comparing the semantics of the requested service with those of advertised services. The semantics of a web service are described in terms of inputs, outputs, preconditions and results in the Ontology Web Language for Services (OWL-S), formalized by the W3C. In this paper we propose an algorithm that calculates the semantic similarity of two services as a weighted average of their input and output similarities. A case study and applications show the effectiveness of our algorithm in service matching.
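The proposed match score, a weighted average of input similarity and output similarity, can be sketched as below. The pairwise concept similarity would normally come from an ontology-based measure; here it is a placeholder lookup table, and the parameter names, weights, and greedy best-match alignment are illustrative assumptions rather than the paper's exact algorithm.

```python
# Sketch of the paper's idea: service match = weighted average of input and
# output similarities. Pairwise concept similarity would come from an
# ontology; here it is a toy placeholder dictionary.
CONCEPT_SIM = {("Price", "Cost"): 0.9, ("Book", "Novel"): 0.8}  # toy values

def concept_sim(a, b):
    if a == b:
        return 1.0
    return max(CONCEPT_SIM.get((a, b), 0.0), CONCEPT_SIM.get((b, a), 0.0))

def set_sim(request_params, advert_params):
    """Average, for each requested parameter, its best match in the advert."""
    if not request_params:
        return 1.0
    best = [max(concept_sim(r, a) for a in advert_params) if advert_params else 0.0
            for r in request_params]
    return sum(best) / len(best)

def service_sim(req_in, req_out, adv_in, adv_out, w_in=0.4, w_out=0.6):
    return w_in * set_sim(req_in, adv_in) + w_out * set_sim(req_out, adv_out)

print(service_sim(["Book"], ["Price"], ["Novel"], ["Cost"]))  # 0.86
```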
Busolo, David S; Woodgate, Roberta L
2016-06-01
Cancer incidence and mortality are increasing in Africa, which is leading to greater demands for palliative care. There has been little progress in terms of research, pain management, and policies related to palliative care. Palliative care in Africa is scarce and scattered, with most African nations lacking the basic services. To address these needs, a guiding framework that identifies care needs and directs palliative care services could be utilized. Therefore, using the supportive care framework developed by Fitch (2009), we here review the literature on palliative care for patients diagnosed with cancer in Africa and make recommendations for improvement. The PubMed, Scopus, CINAHL, Web of Science, Embase, PsycINFO, Social Sciences Citation Index, and Medline databases were searched. Some 25 English articles on research from African countries published between 2004 and 2014 were selected and reviewed. The reviewed literature was analyzed and presented using the domains of the supportive care framework. Palliative care patients with cancer in Africa, their families, and caregivers experience increasing psychological, physical, social, spiritual, emotional, informational, and practical needs. Care needs are often inadequately addressed because of a lack of awareness as well as deficient and scattered palliative care services and resources. In addition, research, education, and policies that address the dire situation in palliative care are sparse. Our review findings add to the existing body of knowledge demonstrating that palliative care patients with cancer in Africa experience disturbing care needs in all domains of the supportive care framework. To better assess and address these needs, holistic palliative care that is multidomain and multi-professional could be utilized. This approach needs to be individualized and to offer better access to services and information. In addition, research, education, and policies around palliative care for cancer patients in Africa could be more comprehensive if they were based on the domains of the supportive care framework.
Boverhof's App Earns Honorable Mention in Amazon's Web Services
Boverhof's app earned an honorable mention in the EC2 Spotathon competition run by Amazon Web Services (AWS); Amazon officially announced the winners on Monday.
Biological Web Service Repositories Review
Urdidiales‐Nieto, David; Navas‐Delgado, Ismael
2016-01-01
Web services play a key role in bioinformatics, enabling the integration of database access and analysis algorithms. However, Web service repositories do not usually publish information on the changes made to their registered Web services. Dynamism is directly related to the changes in the repositories (services registered or unregistered) and at the service level (annotation changes). Thus, users, software clients or workflow-based approaches lack enough relevant information to decide when they should review or re-execute a Web service or workflow to get updated or improved results. The dynamism of the repository could be a measure for workflow developers to re-check service availability and annotation changes in the services of interest to them. This paper presents a review of the most well-known Web service repositories in the life sciences, including an analysis of their dynamism. Freshness is introduced in this paper and used as the measure of the dynamism of these repositories. PMID:27783459
Interactive, Secure Web-enabled Aircraft Engine Simulation Using XML Databinding Integration
NASA Technical Reports Server (NTRS)
Lin, Risheng; Afjeh, Abdollah A.
2003-01-01
This paper discusses the detailed design of an XML databinding framework for aircraft engine simulation. The framework provides an object interface to access and use engine data, while at the same time preserving the meaning of the original data. The language-independent representation of engine component data enables users to move XML data across disparate networks using HTTP. The application of this framework is demonstrated via a web-based turbofan propulsion system simulation using the World Wide Web (WWW). A Java Servlet based web component architecture is used for rendering XML engine data into HTML format and for dealing with input events from the user, which allows users to interact with simulation data from a web browser. The simulation data can also be saved to a local disk for archiving or to restart the simulation at a later time.
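XML databinding maps markup to typed objects so application code never touches raw elements. A minimal Python illustration of the idea follows; the compressor schema is invented for illustration and is not the paper's actual engine data model.

```python
# Minimal illustration of XML databinding: engine-component markup is bound
# to a typed object and can be written back out. The schema is invented for
# illustration, not the paper's actual engine data model.
import xml.etree.ElementTree as ET
from dataclasses import dataclass

@dataclass
class Compressor:
    name: str
    pressure_ratio: float
    efficiency: float

    @classmethod
    def from_xml(cls, elem):
        return cls(elem.get("name"),
                   float(elem.findtext("pressureRatio")),
                   float(elem.findtext("efficiency")))

    def to_xml(self):
        elem = ET.Element("compressor", name=self.name)
        ET.SubElement(elem, "pressureRatio").text = str(self.pressure_ratio)
        ET.SubElement(elem, "efficiency").text = str(self.efficiency)
        return elem

doc = ("<compressor name='HPC'><pressureRatio>12.5</pressureRatio>"
       "<efficiency>0.87</efficiency></compressor>")
comp = Compressor.from_xml(ET.fromstring(doc))  # XML -> object
comp.efficiency = 0.88                          # work with plain attributes
print(ET.tostring(comp.to_xml()).decode())      # object -> XML
```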
A new service-oriented grid-based method for AIoT application and implementation
NASA Astrophysics Data System (ADS)
Zou, Yiqin; Quan, Li
2017-07-01
The traditional three-layer Internet of Things (IoT) model, which includes a physical perception layer, an information transfer layer and a service application layer, cannot fully express the complexity and diversity of the agricultural engineering area. It is hard to categorize, organize and manage agricultural things with these three layers. Based on the above requirements, we propose a new service-oriented grid-based method to set up and build the agricultural IoT. Considering the heterogeneity, limitation, transparency and leveling attributes of agricultural things, we propose an abstract model for all agricultural resources. This model is service-oriented and expressed with the Open Grid Services Architecture (OGSA). Information and data about agricultural things are described and encapsulated using XML in this model. Every agricultural engineering application provides its service by enabling one application node in this service-oriented grid. The description of the Web Services Resource Framework (WSRF)-based Agricultural Internet of Things (AIoT) and the encapsulation method for resource management in this model are also discussed in this paper.
A service brokering and recommendation mechanism for better selecting cloud services.
Gui, Zhipeng; Yang, Chaowei; Xia, Jizhe; Huang, Qunying; Liu, Kai; Li, Zhenlong; Yu, Manzhu; Sun, Min; Zhou, Nanyin; Jin, Baoxuan
2014-01-01
Cloud computing is becoming the new-generation computing infrastructure, and many cloud vendors provide different types of cloud services. How to choose the best cloud services for specific applications is very challenging. Addressing this challenge requires balancing multiple factors, such as business demands, technologies, policies and preferences in addition to the computing requirements. This paper recommends a mechanism for selecting the best public cloud service at the levels of Infrastructure as a Service (IaaS) and Platform as a Service (PaaS). A systematic framework and associated workflow include cloud service filtration, solution generation, evaluation, and selection of public cloud services. Specifically, we propose the following: a hierarchical information model for integrating heterogeneous cloud information from different providers and a corresponding cloud information collecting mechanism; a cloud service classification model for categorizing and filtering cloud services and an application requirement schema for providing rules for creating application-specific configuration solutions; and a preference-aware solution evaluation model for evaluating and recommending solutions according to the preferences of application providers. To test the proposed framework and methodologies, a cloud service advisory tool prototype was developed, after which relevant experiments were conducted. The results show that the proposed system collects/updates/records the cloud information from multiple mainstream public cloud services in real time, generates feasible cloud configuration solutions according to user specifications and acceptable cost prediction, assesses solutions from multiple aspects (e.g., computing capability, potential cost and Service Level Agreement (SLA)), and offers rational recommendations based on user preferences and practical cloud provisioning; and visually presents and compares solutions through an interactive web Graphical User Interface (GUI).
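The evaluation step, scoring candidate configurations against user-weighted criteria, can be sketched as a normalized weighted sum. The candidates, criteria, and weights below are illustrative stand-ins, not the paper's dataset or exact scoring model.

```python
# Sketch of preference-aware evaluation: each candidate cloud configuration
# is scored as a weighted sum of normalized criteria. Candidates, criteria,
# and weights are illustrative, not from the paper's dataset.
candidates = [
    {"name": "vendorA.m1", "compute": 8.0, "monthly_cost": 290.0, "sla": 99.95},
    {"name": "vendorB.s2", "compute": 6.5, "monthly_cost": 180.0, "sla": 99.90},
]
# Higher is better for compute and SLA; lower is better for cost.
weights = {"compute": 0.4, "monthly_cost": 0.4, "sla": 0.2}

def normalize(values, lower_is_better=False):
    lo, hi = min(values), max(values)
    span = (hi - lo) or 1.0
    return [(hi - v) / span if lower_is_better else (v - lo) / span
            for v in values]

def rank(cands, w):
    cols = {k: normalize([c[k] for c in cands], k == "monthly_cost") for k in w}
    scored = [(sum(w[k] * cols[k][i] for k in w), c["name"])
              for i, c in enumerate(cands)]
    return sorted(scored, reverse=True)

for score, name in rank(candidates, weights):
    print(f"{name}: {score:.2f}")
```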
Design-Grounded Assessment: A Framework and a Case Study of Web 2.0 Practices in Higher Education
ERIC Educational Resources Information Center
Ching, Yu-Hui; Hsu, Yu-Chang
2011-01-01
This paper synthesizes three theoretical perspectives, including sociocultural theory, distributed cognition, and situated cognition, into a framework to guide the design and assessment of Web 2.0 practices in higher education. In addition, this paper presents a case study of Web 2.0 practices. Thirty-seven online graduate students participated in…
Grist : grid-based data mining for astronomy
NASA Technical Reports Server (NTRS)
Jacob, Joseph C.; Katz, Daniel S.; Miller, Craig D.; Walia, Harshpreet; Williams, Roy; Djorgovski, S. George; Graham, Matthew J.; Mahabal, Ashish; Babu, Jogesh; Berk, Daniel E. Vanden;
2004-01-01
The Grist project is developing a grid-technology based system as a research environment for astronomy with massive and complex datasets. This knowledge extraction system will consist of a library of distributed grid services controlled by a workflow system, compliant with standards emerging from the grid computing, web services, and virtual observatory communities. This new technology is being used to find high redshift quasars, study peculiar variable objects, search for transients in real time, and fit SDSS QSO spectra to measure black hole masses. Grist services are also a component of the 'hyperatlas' project to serve high-resolution multi-wavelength imagery over the Internet. In support of these science and outreach objectives, the Grist framework will provide the enabling fabric to tie together distributed grid services in the areas of data access, federation, mining, subsetting, source extraction, image mosaicking, statistics, and visualization.
Grist: Grid-based Data Mining for Astronomy
NASA Astrophysics Data System (ADS)
Jacob, J. C.; Katz, D. S.; Miller, C. D.; Walia, H.; Williams, R. D.; Djorgovski, S. G.; Graham, M. J.; Mahabal, A. A.; Babu, G. J.; vanden Berk, D. E.; Nichol, R.
2005-12-01
The Grist project is developing a grid-technology based system as a research environment for astronomy with massive and complex datasets. This knowledge extraction system will consist of a library of distributed grid services controlled by a workflow system, compliant with standards emerging from the grid computing, web services, and virtual observatory communities. This new technology is being used to find high redshift quasars, study peculiar variable objects, search for transients in real time, and fit SDSS QSO spectra to measure black hole masses. Grist services are also a component of the "hyperatlas" project to serve high-resolution multi-wavelength imagery over the Internet. In support of these science and outreach objectives, the Grist framework will provide the enabling fabric to tie together distributed grid services in the areas of data access, federation, mining, subsetting, source extraction, image mosaicking, statistics, and visualization.
Engineering Analysis Using a Web-based Protocol
NASA Technical Reports Server (NTRS)
Schoeffler, James D.; Claus, Russell W.
2002-01-01
This paper reviews the development of a web-based framework for engineering analysis. A one-dimensional, high-speed analysis code called LAPIN was used in this study, but the approach can be generalized to any engineering analysis tool. The web-based framework enables users to store, retrieve, and execute an engineering analysis from a standard web-browser. We review the encapsulation of the engineering data into the eXtensible Markup Language (XML) and various design considerations in the storage and retrieval of application data.
The impact of web services at the IRIS DMC
NASA Astrophysics Data System (ADS)
Weekly, R. T.; Trabant, C. M.; Ahern, T. K.; Stults, M.; Suleiman, Y. Y.; Van Fossen, M.; Weertman, B.
2015-12-01
The IRIS Data Management Center (DMC) has served the seismological community for nearly 25 years. In that time we have offered data and information from our archive using a variety of mechanisms, ranging from email-based requests to desktop applications, web applications and web services. Of these, web services have quickly become the primary method for data extraction at the DMC. In 2011, the first full year of operation, web services accounted for over 40% of the data shipped from the DMC. In 2014, approximately 450 TB of data was delivered directly to users through web services, representing nearly 70% of all shipments from the DMC that year. In addition to handling requests directly from users, the DMC switched all data extraction methods to use web services in 2014. On average the DMC now handles between 10 and 20 million requests per day submitted to web service interfaces. The rapid adoption of web services is attributed to the many advantages they bring. For users, they provide on-demand data using an interface technology, HTTP, that is widely supported in nearly every computing environment and language. These characteristics, combined with human-readable documentation and existing tools, make integration of data access into existing workflows relatively easy. For the DMC, the web services provide an abstraction layer to internal repositories, allowing for concentrated optimization of extraction workflows and easier evolution of those repositories. Lending further support to the DMC's push in this direction, the core web services for station metadata, time-series data and event parameters were adopted as standards by the International Federation of Digital Seismograph Networks (FDSN). We expect to continue enhancing existing services and building new capabilities for this platform. For example, the DMC has created a federation system and tools allowing researchers to discover and collect seismic data from data centers running the FDSN-standardized services. A future capability will leverage the DMC's MUSTANG project to select data based on data quality measurements. Within five years, the DMC's web services have proven to be a robust and flexible platform that enables continued growth for the DMC. We expect continued enhancements and adoption of web services.
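Because the FDSN-standardized services mentioned above are plain HTTP endpoints, a request needs nothing beyond a URL. For example, the following fetches recent event parameters from the IRIS fdsnws-event service; the query parameters follow the FDSN web service specification, and format=text returns one pipe-separated event per line.

```python
# Fetching event parameters from the IRIS fdsnws-event web service.
# format=text returns a "#"-prefixed header line, then one pipe-separated
# event per line.
import urllib.parse
import urllib.request

params = urllib.parse.urlencode({
    "starttime": "2015-01-01", "endtime": "2015-01-07",
    "minmagnitude": 6.0, "format": "text",
})
url = "https://service.iris.edu/fdsnws/event/1/query?" + params
with urllib.request.urlopen(url) as resp:
    for line in resp.read().decode().splitlines():
        if not line.startswith("#"):
            print(line.split("|")[:2])  # event id and origin time columns
```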
NASA Technical Reports Server (NTRS)
Patterson, Maria T.; Anderson, Nicholas; Bennett, Collin; Bruggemann, Jacob; Grossman, Robert L.; Handy, Matthew; Ly, Vuong; Mandl, Daniel J.; Pederson, Shane; Pivarski, James;
2016-01-01
Project Matsu is a collaboration between the Open Commons Consortium and NASA focused on developing open source technology for cloud-based processing of Earth satellite imagery, with practical applications to aid in natural disaster detection and relief. Project Matsu has developed an open source cloud-based infrastructure to process, analyze, and reanalyze large collections of hyperspectral satellite image data using OpenStack, Hadoop, MapReduce and related technologies. We describe a framework for efficient analysis of large amounts of data called the Matsu "Wheel." The Matsu Wheel is currently used to process incoming hyperspectral satellite data produced daily by NASA's Earth Observing-1 (EO-1) satellite. The framework scans for new data and applies batches of analytics to the data as it flows in. In the Matsu Wheel, the data only need to be accessed and preprocessed once, regardless of the number or types of analytics, which can easily be slotted into the existing framework. The Matsu Wheel system provides a significantly more efficient use of computational resources than alternative methods when the data are large, have high-volume throughput, may require heavy preprocessing, and are typically used for many types of analysis. We also describe our preliminary Wheel analytics, including an anomaly detector for rare spectral signatures or thermal anomalies in hyperspectral data and a land cover classifier that can be used for water and flood detection. Each of these analytics can generate visual reports accessible via the web for the public and interested decision makers. The resulting products of the analytics are also made accessible through an Open Geospatial Consortium (OGC)-compliant Web Map Service (WMS) for further distribution. The Matsu Wheel allows many shared data services to be performed together, efficiently using resources for processing hyperspectral satellite image data and other large datasets, e.g., environmental datasets that may be analyzed for many purposes.
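The Wheel pattern, reading each newly arrived scene once and running every registered analytic over it, can be sketched as a simple scan-and-dispatch loop. The analytic functions and toy data source below are illustrative stand-ins, not Project Matsu's actual code.

```python
# Sketch of the "Wheel" pattern: each newly arrived scene is read and
# preprocessed once, then every registered analytic runs over it. The
# analytics and data source here are illustrative stand-ins.
analytics = []

def analytic(fn):
    analytics.append(fn)  # register a function to run on every scene
    return fn

@analytic
def spectral_anomaly(scene):
    return {"anomalies": [px for px in scene["pixels"] if px > 0.9]}

@analytic
def flood_classifier(scene):
    frac = sum(px < 0.2 for px in scene["pixels"]) / len(scene["pixels"])
    return {"water_fraction": frac}

def wheel(new_scenes):
    for scene in new_scenes:  # one pass over the data...
        reports = {fn.__name__: fn(scene) for fn in analytics}  # ...all analytics
        yield scene["id"], reports

for scene_id, reports in wheel([{"id": "EO1-001", "pixels": [0.1, 0.95, 0.3]}]):
    print(scene_id, reports)
```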
A transmission security framework for email-based telemedicine.
Caffery, Liam J; Smith, Anthony C
2010-01-01
Encryption is used to convert an email message to an unreadable format, thereby securing patient privacy during the transmission of the message across the Internet. Two available means of encryption are: public key infrastructure (PKI) used in conjunction with ordinary email, and secure hypertext transfer protocol (HTTPS) used by secure web-mail applications. Both of these approaches have advantages and disadvantages in terms of viability, cost, usability and compliance. The aim of this study was to develop an instrument to identify the most appropriate means of encrypting email communication for telemedicine. A multi-method approach was used to construct the instrument. Technical assessment and existing bodies of knowledge regarding the utility of PKI were analyzed, along with survey results from users of Queensland Health's Child and Youth Mental Health Service secure web-mail service. The resultant decision support model identified that the following conditions affect the choice of encryption technology: the correspondent's risk perception, the correspondent's identification with the security afforded by encryption, the email client used by correspondents, the tolerance to human error and the availability of technical resources. A decision support model is presented as a flow chart to identify the most appropriate encryption for a specific email-based telemedicine service.
NASA Astrophysics Data System (ADS)
Lemmens, R.; Maathuis, B.; Mannaerts, C.; Foerster, T.; Schaeffer, B.; Wytzisk, A.
2009-12-01
This paper presents easily accessible, integrated web-based analysis of satellite images with plug-in-based open source software. The paper is targeted at both users and developers of geospatial software. Guided by a use case scenario, we describe the ILWIS software and its toolbox to access satellite images through the GEONETCast broadcasting system. The last two decades have shown a major shift from stand-alone software systems to networked ones, often client/server applications using distributed geo-(web-)services. This allows organisations to combine their own data, without much effort, with remotely available data and processing functionality. Key to this integrated spatial data analysis is low-cost access to data from within user-friendly and flexible software. Web-based open source software solutions are increasingly a powerful option for developing countries. The Integrated Land and Water Information System (ILWIS) is a PC-based GIS & Remote Sensing software package, comprising a complete set of image processing, spatial analysis and digital mapping tools, and was developed as commercial software from the early nineties onwards. Recent project efforts have migrated ILWIS into a modular, plug-in-based open source software, and provide web-service support for OGC-based web mapping and processing. The core objective of the ILWIS Open source project is to provide a maintainable framework for researchers and software developers to implement training components, scientific toolboxes and (web-)services. The latest plug-ins have been developed for multi-criteria decision making, water resources analysis and spatial statistics analysis. The development of this framework has been carried out since 2007 in the context of 52°North, an open initiative that advances the development of cutting-edge open source geospatial software under the GPL license. GEONETCast, as part of the emerging Global Earth Observation System of Systems (GEOSS), puts essential environmental data at the fingertips of users around the globe. This user-friendly and low-cost information dissemination provides global information as a basis for decision-making in a number of critical areas, including public health, energy, agriculture, weather, water, climate, natural disasters and ecosystems. GEONETCast makes satellite images available via Digital Video Broadcast (DVB) technology. An OGC WMS interface and plug-ins which convert GEONETCast data streams allow an ILWIS user to integrate various distributed data sources with data stored locally on their machine. Our paper describes a use case in which ILWIS is used with GEONETCast satellite imagery for decision-making processes in Ghana. We also explain how the ILWIS software can be extended with additional functionality by means of building plug-ins, and unfold our plans to implement other OGC standards, such as WCS and WPS, in the same context. Especially the latter can be seen as a major step forward in terms of moving well-proven desktop-based processing functionality to the web. This enables the embedding of ILWIS functionality in Spatial Data Infrastructures or even execution in scalable and on-demand cloud computing environments.
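An OGC WMS GetMap call, the kind of request such a plug-in issues under the hood, is just a parameterized HTTP request. A generic sketch follows; the server URL and layer name are placeholders, and the bounding box is illustrative.

```python
# Generic OGC WMS GetMap request, the kind of call a WMS plug-in issues
# under the hood. The server URL and layer name are placeholders.
import urllib.parse
import urllib.request

params = urllib.parse.urlencode({
    "service": "WMS", "version": "1.1.1", "request": "GetMap",
    "layers": "geonetcast:rainfall",     # placeholder layer name
    "bbox": "-3.5,4.5,1.5,11.5",         # roughly Ghana, lon/lat
    "srs": "EPSG:4326", "width": 512, "height": 512,
    "format": "image/png",
})
url = "https://example.org/wms?" + params  # placeholder endpoint
with urllib.request.urlopen(url) as resp:
    with open("map.png", "wb") as out:
        out.write(resp.read())             # the rendered map tile
```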
Web service discovery among large service pools utilising semantic similarity and clustering
NASA Astrophysics Data System (ADS)
Chen, Fuzan; Li, Minqiang; Wu, Harris; Xie, Lingli
2017-03-01
With the rapid development of electronic business, Web services have attracted much attention in recent years. Enterprises can combine individual Web services to provide new value-added services. An emerging challenge is the timely discovery of close matches to service requests among large service pools. In this study, we first define a new semantic similarity measure combining functional similarity and process similarity. We then present a service discovery mechanism that utilises the new semantic similarity measure for service matching. All the published Web services are pre-grouped into functional clusters prior to the matching process. For a user's service request, the discovery mechanism first identifies matching service clusters and then identifies the best matching Web services within these matching clusters. Experimental results show that the proposed semantic discovery mechanism performs better than a conventional lexical similarity-based mechanism.
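The two-stage discovery described above, routing a request to the nearest functional cluster and then ranking only that cluster's members, can be sketched with k-means over service feature vectors. The vectors below are toy stand-ins for the semantic features, and the dot-product ranking is a simplification of the paper's similarity measure.

```python
# Sketch of cluster-then-match discovery: services are pre-grouped with
# k-means over (toy) feature vectors; a request is routed to its nearest
# cluster and only that cluster's members are ranked.
import numpy as np
from sklearn.cluster import KMeans

service_vecs = np.array([[0.9, 0.1], [0.8, 0.2], [0.1, 0.9], [0.2, 0.8]])
names = ["bookSearchA", "bookSearchB", "weatherA", "weatherB"]

km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(service_vecs)

def discover(request_vec, top_k=2):
    cluster = km.predict([request_vec])[0]  # stage 1: pick the cluster
    members = [i for i, c in enumerate(km.labels_) if c == cluster]
    sims = {i: float(service_vecs[i] @ request_vec) for i in members}
    return sorted(sims, key=sims.get, reverse=True)[:top_k]  # stage 2: rank

for i in discover(np.array([0.85, 0.15])):
    print(names[i])  # only book-search services are considered and ranked
```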
A verification strategy for web services composition using enhanced stacked automata model.
Nagamouttou, Danapaquiame; Egambaram, Ilavarasan; Krishnan, Muthumanickam; Narasingam, Poonkuzhali
2015-01-01
Currently, Service-Oriented Architecture (SOA) is becoming the most popular software architecture for contemporary enterprise applications, and one crucial technique for its implementation is web services. An individual service offered by a service provider may represent limited business functionality; however, by composing individual services from different service providers, a composite service describing the intact business process of an enterprise can be built. Many standards have been defined to address the web service composition problem, notably the Business Process Execution Language (BPEL). BPEL provides an Extensible Markup Language (XML) specification language for defining and implementing business process workflows for web services. The problem with most realistic approaches to service composition is the verification of the composed web services; formal verification methods are needed to ensure the correctness of composed services. A few research works in the literature address the verification of web services for deterministic systems. Moreover, the existing models do not address verification properties such as dead transitions, deadlock, reachability and safety. In this paper, a new model to verify composed web services using an Enhanced Stacked Automata Model (ESAM) is proposed. The correctness properties of the non-deterministic system are evaluated based on properties such as dead transitions, deadlock, safety, liveness and reachability. Initially, web services are composed using the Business Process Execution Language for Web Services (BPEL4WS), converted into an ESAM (a combination of a Muller Automaton (MA) and a Push Down Automaton (PDA)), and then transformed into Promela, the input language of the Simple ProMeLa Interpreter (SPIN) tool. The model is verified using the SPIN tool, and the results revealed better performance in finding dead transitions and deadlocks in contrast to the existing models.
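For a finite model, properties like dead transitions and deadlock reduce to reachability questions. The sketch below checks them by brute-force exploration of a toy transition system; it is a plain reachability search for illustration, not ESAM or SPIN itself.

```python
# Brute-force check of two of the properties discussed above -- dead
# transitions and deadlock -- on a toy finite transition system. This is a
# plain reachability search, not ESAM or SPIN itself.
from collections import deque

# transitions: (source_state, action, target_state)
transitions = [("start", "invoke", "pending"),
               ("pending", "reply", "done"),
               ("unreached", "cancel", "start")]
initial, accepting = "start", {"done"}

def reachable(init, trans):
    seen, frontier = {init}, deque([init])
    while frontier:
        state = frontier.popleft()
        for src, _, dst in trans:
            if src == state and dst not in seen:
                seen.add(dst)
                frontier.append(dst)
    return seen

seen = reachable(initial, transitions)
dead = [t for t in transitions if t[0] not in seen]        # never firable
deadlocked = [s for s in seen
              if s not in accepting
              and all(src != s for src, _, _ in transitions)]
print("dead transitions:", dead)        # the 'cancel' transition
print("deadlock states:", deadlocked)   # none in this toy model
```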
Pragmatic Computing - A Semiotic Perspective to Web Services
NASA Astrophysics Data System (ADS)
Liu, Kecheng
The web seems to have evolved from a syntactic web through a semantic web to a pragmatic web. This evolution conforms to the study of information and technology from the theory of semiotics. Pragmatics, concerned with the use of information in relation to context and intended purposes, is extremely important in web services and applications. Much research in pragmatics has been carried out, but at the same time, attempts and solutions have led to further questions. After reviewing the current work on the pragmatic web, the paper presents a semiotic approach to web services, focusing on request decomposition and service aggregation.
A Geospatial Information Grid Framework for Geological Survey.
Wu, Liang; Xue, Lei; Li, Chaoling; Lv, Xia; Chen, Zhanlong; Guo, Mingqiang; Xie, Zhong
2015-01-01
The use of digital information in geological fields is becoming very important. Informatization in geological surveys should therefore not stagnate at the level of mere data accumulation. The integration and sharing of distributed, multi-source, heterogeneous geological information is an open problem in geological domains. Applications and services use geological spatial data with many features, including being cross-region and cross-domain and requiring real-time updating. As a result of these features, desktop and web-based geographic information systems (GISs) experience difficulties in meeting the demand for geological spatial information. To facilitate the real-time sharing of data and services in distributed environments, a GIS platform that is open, integrative, reconfigurable, reusable and elastic would represent an indispensable tool. The purpose of this paper is to develop a geological cloud-computing platform for integrating and sharing geological information based on a cloud architecture. The geological cloud-computing platform thus defines geological ontology semantics; designs a standard geological information framework and a standard resource integration model; builds a peer-to-peer node management mechanism; achieves the description, organization, discovery, computing and integration of distributed resources; and provides the distributed spatial meta service, the spatial information catalog service, the multi-mode geological data service and the spatial data interoperation service. The geological survey information cloud-computing platform has been implemented, and based on the platform, several geological data services and geological processing services were developed. Furthermore, an iron mine resource forecast and evaluation service is introduced in this paper.
Providing Geographic Datasets as Linked Data in Sdi
NASA Astrophysics Data System (ADS)
Hietanen, E.; Lehto, L.; Latvala, P.
2016-06-01
In this study, a prototype service providing data from a Web Feature Service (WFS) as linked data is implemented. First, persistent and unique Uniform Resource Identifiers (URIs) are created for all spatial objects in the dataset. The objects are available from those URIs in the Resource Description Framework (RDF) data format. Next, a Web Ontology Language (OWL) ontology is created to describe the dataset's information content using the Open Geospatial Consortium's (OGC) GeoSPARQL vocabulary. The existing data model is modified to take the linked data principles into account. The implemented service produces an HTTP response dynamically. The data for the response is first fetched from the existing WFS. Then the Geography Markup Language (GML) output of the WFS is transformed on-the-fly into the RDF format. Content Negotiation is used to serve the data in different RDF serialization formats. This solution facilitates the use of a dataset in different applications without replicating the whole dataset. In addition, individual spatial objects in the dataset can be referred to by URIs. Furthermore, the needed information content of the objects can be easily extracted from the RDF serializations available from those URIs. A solution for linking data objects to the dataset URI is also introduced, using the Vocabulary of Interlinked Datasets (VoID). The dataset is divided into subsets, and each subset is given its own persistent and unique URI. This enables the whole dataset to be explored with a web browser and all individual objects to be indexed by search engines.
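A minimal sketch of the serving side of such a prototype, assuming Flask and rdflib: a spatial object is exposed under a persistent URI and Content Negotiation selects the RDF serialization. The WFS-to-RDF transformation is elided; the URI scheme and the single WKT triple are placeholders.

```python
# Serve a spatial object as RDF under a persistent URI; Content
# Negotiation picks the serialization. The URI scheme and triple are
# hypothetical; real GML-to-RDF transformation is elided.
from flask import Flask, request, Response
from rdflib import Graph, URIRef, Literal, Namespace

app = Flask(__name__)
GEO = Namespace("http://www.opengis.net/ont/geosparql#")

FORMATS = {
    "text/turtle": "turtle",
    "application/rdf+xml": "xml",
    "application/ld+json": "json-ld",
}

@app.route("/feature/<obj_id>")
def feature(obj_id):
    g = Graph()
    uri = URIRef(f"http://example.org/feature/{obj_id}")
    # In the real service this WKT would come from the WFS GML output.
    g.add((uri, GEO.asWKT, Literal("POINT(24.94 60.17)")))
    mime = request.accept_mimetypes.best_match(FORMATS) or "text/turtle"
    return Response(g.serialize(format=FORMATS[mime]), mimetype=mime)

if __name__ == "__main__":
    app.run()
```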
Land User and Land Cover Maps of Europe: a Webgis Platform
NASA Astrophysics Data System (ADS)
Brovelli, M. A.; Fahl, F. C.; Minghini, M.; Molinari, M. E.
2016-06-01
This paper presents the methods and implementation processes of a WebGIS platform designed to publish the available land use and land cover maps of Europe at continental scale. The system is built entirely on open source infrastructure and open standards. The proposed architecture is based on a server-client model with GeoServer as the map server, Leaflet as the client-side mapping library and the Bootstrap framework at the core of the front-end user interface. The web user interface is designed to have typical features of a desktop GIS (e.g. activate/deactivate layers and order layers by drag and drop actions) and to show specific information on the activated layers (e.g. legend and simplified metadata). Users can change the base map from a given list of map providers (e.g. OpenStreetMap and Microsoft Bing) and control the opacity of each layer to facilitate comparison with both other land cover layers and the underlying base map. In addition, users can add to the platform any custom layer available through a Web Map Service (WMS) and activate the visualization of photos from popular photo sharing services. This last functionality allows a visual assessment of the available land coverages based on other user-generated content available on the Internet. It is intended as a first step towards a calibration/validation service that will be made available in the future.
CrossTalk: The Journal of Defense Software Engineering. Volume 20, Number 9, September 2007
2007-09-01
underlying application framework, e.g., Java Enterprise Edition or .NET. This increases the risk that consumer Web services not based on the same... weaknesses and vulnerabilities that are targeted by attackers and malicious code. For example, Apache Axis 2 enables a Java developer to simply... load his/her Java objects into the Axis SOAP engine. At runtime, it is the SOAP engine that determines which incoming SOAP request messages should be...
NASA Technical Reports Server (NTRS)
Flora-Adams, Dana; Makihara, Jeanne; Benenyan, Zabel; Berner, Jeff; Kwok, Andrew
2007-01-01
Object Oriented Data Technology (OODT) is a software framework for creating a Web-based system for the exchange of scientific data that are stored in diverse formats on computers at different sites under the management of scientific peers. OODT software consists of a set of cooperating, distributed peer components that provide peer-to-peer (P2P) services enabling one peer to search and retrieve data managed by another peer. In effect, computers running OODT software at different locations become parts of an integrated data-management system.
Data privacy preservation in telemedicine: the PAIRSE project.
Nageba, Ebrahim; Defude, Bruno; Morvan, Franck; Ghedira, Chirine; Fayn, Jocelyne
2011-01-01
The preservation of medical data privacy and confidentiality is a major challenge in eHealth systems and applications. A technological solution based on advanced information and communication systems architectures is needed in order to retrieve and exchange patients' data in a secure and reliable manner. In this paper, we introduce the PAIRSE project (Preserving Privacy in Peer-to-Peer (P2P) Environments), which proposes an original web-service-oriented framework for preserving the privacy and confidentiality of shared or exchanged medical data.
Neuhaus, Philipp; Doods, Justin; Dugas, Martin
2015-01-01
Automatic coding of medical terms is an important, but highly complicated and laborious task. To compare and evaluate different strategies, a framework with a standardized web interface was created. Two UMLS mapping strategies are compared to demonstrate the interface. The framework is a Java Spring application running on a Tomcat application server. It accepts different parameters and returns results in JSON format. To demonstrate the framework, a list of medical data items was mapped by two different methods: similarity search in a large table of terminology codes versus search in a manually curated repository. These mappings were reviewed by a specialist. The evaluation shows that the framework is flexible (due to standardized interfaces such as HTTP and JSON), performant and reliable. The accuracy of automatically assigned codes is limited (up to 40%). Combining different semantic mappers into a standardized Web API is feasible. The framework can be easily enhanced due to its modular design.
Service-oriented infrastructure for scientific data mashups
NASA Astrophysics Data System (ADS)
Baru, C.; Krishnan, S.; Lin, K.; Moreland, J. L.; Nadeau, D. R.
2009-12-01
An important challenge in informatics is the development of concepts and a corresponding architecture and tools to assist scientists with their data integration tasks. A typical Earth Science data integration request may be expressed, for example, as “For a given region (i.e. lat/long extent, plus depth), return a 3D structural model with accompanying physical parameters of density, seismic velocities, geochemistry, and geologic ages, using a cell size of 10km.” Such requests create “mashups” of scientific data. Currently, such integration is hand-crafted and depends heavily upon a scientist’s intimate knowledge of how to process, interpret, and integrate data from individual sources. In most cases, the ultimate “integration” is performed by overlaying output images from individual processing steps using image manipulation software such as, say, Adobe Photoshop—leading to “Photoshop science”, where it is neither easy to repeat the integration steps nor to share the data mashup. As a result, scientists share only the final images and not the mashup itself. A more capable information infrastructure is needed to support the authoring and sharing of scientific data mashups. The infrastructure must include services for data discovery, access, and transformation and should be able to create mashups that are interactive, allowing users to probe and manipulate the data and follow its provenance. We present an architectural framework based on a service-oriented architecture for scientific data mashups in a distributed environment. The framework includes services for Data Access, Data Modeling, and Data Interaction. The Data Access services leverage capabilities for discovery and access to distributed data resources provided by efforts such as GEON and the EarthScope Data Portal, and services for federated metadata catalogs under development by projects like the Geosciences Information Network (GIN). The Data Modeling services provide 2D, 3D, and 4D modeling services based on standards such as WFS, WMS, WCS, and GeoSciML that allow integration of disparate data in a distributed, Web-based environment. Along these lines, we introduce the notion of a Web Volume Service (WVS) for modeling and manipulating 3D data. The Data Interaction services provide rich interactions with the integrated 3D data. To provide efficient interactions with large-scale data in a distributed environment, the architecture must include capabilities for caching and reuse of data, use of multi-level indexing, and the ability to orchestrate and coordinate execution of data processing and transformation routines as part of the data access and integration steps. The data mashup infrastructure is based on a service-oriented architecture. A range of alternatives is available for implementing these mashup services in a scalable fashion using the cloud computing paradigm. We will describe the tradeoffs of each approach and provide an evaluation of which options are best suited to which types of services. We will describe security, privacy, performance, and price/performance issues and considerations in implementing services on dedicated servers versus private as well as public clouds, including systems such as Amazon Web Services.
Specification and Verification of Web Applications in Rewriting Logic
NASA Astrophysics Data System (ADS)
Alpuente, María; Ballis, Demis; Romero, Daniel
This paper presents a Rewriting Logic framework that formalizes the interactions between Web servers and Web browsers through a communicating protocol abstracting HTTP. The proposed framework includes a scripting language that is powerful enough to model the dynamics of complex Web applications by encompassing the main features of the most popular Web scripting languages (e.g. PHP, ASP, Java Servlets). We also provide a detailed characterization of browser actions (e.g. forward/backward navigation, page refresh, and new window/tab openings) via rewrite rules, and show how our models can be naturally model-checked by using the Linear Temporal Logic of Rewriting (LTLR), which is a Linear Temporal Logic specifically designed for model-checking rewrite theories. Our formalization is particularly suitable for verification purposes, since it allows one to perform in-depth analyses of many subtle aspects related to Web interaction. Finally, the framework has been completely implemented in Maude, and we report on some successful experiments that we conducted by using the Maude LTLR model-checker.
Climate change-contaminant interactions in marine food webs: Toward a conceptual framework.
Alava, Juan José; Cheung, William W L; Ross, Peter S; Sumaila, U Rashid
2017-10-01
Climate change is reshaping the way in which contaminants move through the global environment, in large part by changing the chemistry of the oceans and affecting the physiology, health, and feeding ecology of marine biota. Climate change-associated impacts on the structure and function of marine food webs, with consequent changes in contaminant transport, fate, and effects, are likely to have significant repercussions for those human populations that rely on fisheries resources for food, recreation, or culture. Published studies on climate change-contaminant interactions with a focus on food web bioaccumulation were systematically reviewed to explore how climate change and ocean acidification may impact contaminant levels in marine food webs. We propose here a conceptual framework to illustrate the impacts of climate change on contaminant accumulation in marine food webs, as well as the downstream consequences for ecosystem goods and services. The potential impacts on social and economic security for coastal communities that depend on fisheries for food are discussed. Climate change-contaminant interactions may alter the bioaccumulation of two priority contaminant classes: the fat-soluble persistent organic pollutants (POPs), such as polychlorinated biphenyls (PCBs), as well as the protein-binding methylmercury (MeHg). These interactions include phenomena deemed to be either climate change dominant (i.e., climate change leads to an increase in contaminant exposure) or contaminant dominant (i.e., contamination leads to an increase in climate change susceptibility). We illustrate the pathways of climate change-contaminant interactions using case studies in the Northeastern Pacific Ocean. The important role of ecological and food web modeling to inform decision-making in managing the ecological and human health risks of chemical pollutant contamination under climate change is also highlighted. Finally, we identify the need to develop integrated policies that manage the ecological and socioeconomic risks of greenhouse gases and marine pollutants. © 2017 John Wiley & Sons Ltd.
QoS measurement of workflow-based web service compositions using Colored Petri net.
Nematzadeh, Hossein; Motameni, Homayun; Mohamad, Radziah; Nematzadeh, Zahra
2014-01-01
Workflow-based web service compositions (WB-WSCs) form one of the main composition categories in service-oriented architecture (SOA). Eflow, the polymorphic process model (PPM), and the business process execution language (BPEL) are the main techniques in the WB-WSC category. With the maturing of web services, measuring the quality of composite web services developed with different techniques has become one of the most important challenges in today's web environments. Businesses should try to provide a composed web service of good quality with respect to the customers' requirements. Thus, it is important to measure quality of service (QoS), which refers to non-functional parameters, so that the quality degree of a given web service composition can be determined. This paper presents a deterministic analytical method for dependability and performance measurement using Colored Petri nets (CPNs) with explicit routing constructs and the application of probability theory. A computer tool called WSET was also developed for modeling and supporting QoS measurement through simulation.
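For intuition, standard QoS aggregation rules for workflow patterns (response times add along a sequence, reliabilities multiply, a parallel split waits for its slowest branch) can be sketched in a few lines of Python; this is a simplification shown in place of the paper's full CPN analysis, and the service figures are invented.

```python
# Standard QoS aggregation rules for composite services, as a
# deterministic simplification of the paper's CPN-based analysis.
def prod(xs):
    p = 1.0
    for x in xs:
        p *= x
    return p

def sequence(*tasks):
    # response times add, reliabilities multiply
    return (sum(t[0] for t in tasks), prod(t[1] for t in tasks))

def parallel(*tasks):
    # an AND-split waits for its slowest branch
    return (max(t[0] for t in tasks), prod(t[1] for t in tasks))

# (response_time_ms, reliability) for three hypothetical services
search, pay, notify = (120, 0.99), (200, 0.995), (80, 0.97)
qos = sequence(search, parallel(pay, notify))
print(f"composite: {qos[0]} ms, reliability {qos[1]:.4f}")
```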
Enhancing UCSF Chimera through web services.
Huang, Conrad C; Meng, Elaine C; Morris, John H; Pettersen, Eric F; Ferrin, Thomas E
2014-07-01
Integrating access to web services with desktop applications allows for an expanded set of application features, including performing computationally intensive tasks and convenient searches of databases. We describe how we have enhanced UCSF Chimera (http://www.rbvi.ucsf.edu/chimera/), a program for the interactive visualization and analysis of molecular structures and related data, through the addition of several web services (http://www.rbvi.ucsf.edu/chimera/docs/webservices.html). By streamlining access to web services, including the entire job submission, monitoring and retrieval process, Chimera makes it simpler for users to focus on their science projects rather than data manipulation. Chimera uses Opal, a toolkit for wrapping scientific applications as web services, to provide scalable and transparent access to several popular software packages. We illustrate Chimera's use of web services with an example workflow that interleaves use of these services with interactive manipulation of molecular sequences and structures, and we provide an example Python program to demonstrate how easily Opal-based web services can be accessed from within an application. Web server availability: http://webservices.rbvi.ucsf.edu/opal2/dashboard?command=serviceList. © The Author(s) 2014. Published by Oxford University Press on behalf of Nucleic Acids Research.
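A sketch of the submit/poll/retrieve cycle that Chimera hides from the user, written against a hypothetical REST job service; the real Opal toolkit is SOAP-based and its operations differ, so the endpoints and field names below are assumptions.

```python
# Submit/monitor/retrieve cycle against a hypothetical REST job service.
# The real Opal toolkit is SOAP-based; endpoints and fields here are
# invented to illustrate the pattern only.
import time
import requests

BASE = "http://example.org/jobservice"   # hypothetical endpoint

job_id = requests.post(f"{BASE}/run", data={"args": "--in query.fa"}).json()["jobId"]

while True:
    state = requests.get(f"{BASE}/status/{job_id}").json()["state"]
    if state in ("FINISHED", "FAILED"):
        break
    time.sleep(5)                        # poll, as Chimera does internally

if state == "FINISHED":
    outputs = requests.get(f"{BASE}/outputs/{job_id}").json()
    print("result files:", outputs)
```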
Tactical Applications (TACAPPS) JavaScript Framework Investigation
2017-02-01
frameworks explored were Angular JavaScript (AngularJS), jQuery UI, Meteor, Ember, React JavaScript (ReactJS) and Web Components. The team evaluated the...
UBioLab: a web-LABoratory for Ubiquitous in-silico experiments.
Bartocci, E; Di Berardini, M R; Merelli, E; Vito, L
2012-03-01
The huge and dynamic amount of bioinformatic resources (e.g., data and tools) available on the Internet nowadays represents a big challenge for biologists, as concerns their management and visualization, and for bioinformaticians, as concerns the possibility of rapidly creating and executing in-silico experiments involving resources and activities spread over the WWW hyperspace. Any framework aiming to integrate such resources as in a physical laboratory must tackle, and possibly handle in a transparent and uniform way, aspects concerning physical distribution, semantic heterogeneity, and the co-existence of different computational paradigms and, as a consequence, of different invocation interfaces (i.e., OGSA for Grid nodes, SOAP for Web Services, Java RMI for Java objects, etc.). The UBioLab framework has been designed and developed as a prototype with precisely this objective. Several architectural features, such as being fully Web-based and combining domain ontologies, Semantic Web and workflow techniques, give evidence of an effort in this direction. The integration of a semantic knowledge management system for distributed (bioinformatic) resources, a semantic-driven graphic environment for defining and monitoring ubiquitous workflows and an intelligent agent-based technology for their distributed execution allows UBioLab to be a semantic guide for bioinformaticians and biologists, providing (i) a flexible environment for visualizing, organizing and inferring any (semantic and computational) "type" of domain knowledge (e.g., resources and activities, expressed in a declarative form), (ii) a powerful engine for defining and storing semantic-driven ubiquitous in-silico experiments on the domain hyperspace, as well as (iii) a transparent, automatic and distributed environment for correct experiment executions.
The MMI Semantic Framework: Rosetta Stones for Earth Sciences
NASA Astrophysics Data System (ADS)
Rueda, C.; Bermudez, L. E.; Graybeal, J.; Alexander, P.
2009-12-01
Semantic interoperability—the exchange of meaning among computer systems—is needed to successfully share data in Ocean Science and across all Earth sciences. The best approach toward semantic interoperability requires a designed framework, and operationally tested tools and infrastructure within that framework. Currently available technologies make a scientific semantic framework feasible, but its development requires sustainable architectural vision and development processes. This presentation outlines the MMI Semantic Framework, including recent progress on it and its client applications. The MMI Semantic Framework consists of tools, infrastructure, and operational and community procedures and best practices, to meet short-term and long-term semantic interoperability goals. The design and prioritization of the semantic framework capabilities are based on real-world scenarios in Earth observation systems. We describe some key use cases, as well as the associated requirements for building the overall infrastructure, which is realized through the MMI Ontology Registry and Repository. This system includes support for community creation and sharing of semantic content, ontology registration, version management, and seamless integration of user-friendly tools and application programming interfaces. The presentation describes the architectural components for semantic mediation, and the registry and repository for vocabularies, ontologies, and term mappings. We show how the technologies and approaches in the framework can address community needs for managing and exchanging semantic information. We will demonstrate how different types of users and client applications exploit the tools and services for data aggregation, visualization, archiving, and integration. Specific examples from OOSTethys (http://www.oostethys.org) and the Ocean Observatories Initiative Cyberinfrastructure (http://www.oceanobservatories.org) will be cited. Finally, we show how semantic augmentation of web services standards could be performed using framework tools.
PiCO QL: A software library for runtime interactive queries on program data
NASA Astrophysics Data System (ADS)
Fragkoulis, Marios; Spinellis, Diomidis; Louridas, Panos
PiCO QL is an open source C/C++ software library whose scientific scope is real-time interactive analysis of in-memory data through SQL queries. It exposes a relational view of a system's or application's data structures, which is queryable through SQL. While the application or system is executing, users can input queries through a web-based interface or issue web service requests. Queries execute on the live data structures through the respective relational views. PiCO QL is a good candidate for ad-hoc data analysis in applications and for diagnostics in systems settings. Applications of PiCO QL include the Linux kernel, the Valgrind instrumentation framework, a GIS application, a virtual real-time observatory of stellar objects, and a source code analyser.
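A client-side sketch of the idea, assuming a locally running PiCO QL web interface: an ordinary SQL string is sent over HTTP and evaluated against a relational view of live data structures. The endpoint path, query parameter, and view name are hypothetical.

```python
# Query a (hypothetical) PiCO QL web interface over HTTP. The endpoint,
# parameter name, and relational view name are invented placeholders.
import requests

query = """
SELECT name, rss_kb
FROM Process_VT            -- hypothetical relational view of live structs
WHERE rss_kb > 100000
ORDER BY rss_kb DESC;
"""
resp = requests.get("http://localhost:8080/picoql",   # assumed local server
                    params={"q": query})
print(resp.text)
```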
Improving healthcare services using web based platform for management of medical case studies.
Ogescu, Cristina; Plaisanu, Claudiu; Udrescu, Florian; Dumitru, Silviu
2008-01-01
The paper presents a web-based platform for the management of medical cases, supporting healthcare specialists in making the best clinical decisions. Research has been oriented mostly toward multimedia data management and classification algorithms for querying, retrieving and processing different medical data types (text and images). The medical case studies can be accessed by healthcare specialists, and by students as anonymous case studies, providing trust and confidentiality in the Internet virtual environment. The MIDAS platform develops an intelligent framework to manage sets of medical data (text, static or dynamic images) in order to optimize the diagnosis and decision process, which will reduce medical errors and increase the quality of the medical act. MIDAS is an integrated project working on medical information retrieval from heterogeneous, distributed medical multimedia databases.
NASA Technical Reports Server (NTRS)
Hochstadt, Jake
2011-01-01
Ruby on Rails is an open source web application framework for the Ruby programming language. The first application I built was a web application to manage and authenticate other applications. One of the main requirements for this application was a single sign-on service. This allowed authentication to be built in one location and implemented in many different applications. For example, users would be able to log in using their existing credentials and access other NASA applications without authenticating again. The second application I worked on was an internal qualification plan app. Previously, the viewing of employee qualifications was managed through Excel spreadsheets. I built a database-driven application to streamline the process of managing qualifications. Employees would be able to log in securely to view, edit and update their personal qualifications.
A component-based, distributed object services architecture for a clinical workstation.
Chueh, H C; Raila, W F; Pappas, J J; Ford, M; Zatsman, P; Tu, J; Barnett, G O
1996-01-01
Attention to an architectural framework in the development of clinical applications can promote reusability of both legacy systems as well as newly designed software. We describe one approach to an architecture for a clinical workstation application which is based on a critical middle tier of distributed object-oriented services. This tier of network-based services provides flexibility in the creation of both the user interface and the database tiers. We developed a clinical workstation for ambulatory care using this architecture, defining a number of core services including those for vocabulary, patient index, documents, charting, security, and encounter management. These services can be implemented through proprietary or more standard distributed object interfaces such as CORBA and OLE. Services are accessed over the network by a collection of user interface components which can be mixed and matched to form a variety of interface styles. These services have also been reused with several applications based on World Wide Web browser interfaces.
Cross-Dataset Analysis and Visualization Driven by Expressive Web Services
NASA Astrophysics Data System (ADS)
Alexandru Dumitru, Mircea; Catalin Merticariu, Vlad
2015-04-01
The deluge of data hitting us every day from satellite and airborne sensors is changing the workflow of environmental data analysts and modelers. Web geo-services now play a fundamental role: data no longer need to be downloaded and stored beforehand; rather, the services interact in real time with GIS applications. Because of the very large amount of data curated and made available by web services, it is crucial to deploy smart solutions for optimizing network bandwidth, reducing duplication of data and moving the processing closer to the data. In this context we have created a visualization application for analysis and cross-comparison of aerosol optical thickness datasets. The application aims to help researchers identify and visualize discrepancies between datasets coming from various sources and having different spatial and time resolutions. It also acts as a proof of concept for the integration of OGC Web Services under a user-friendly interface that provides appealing visualizations of the explored data. The tool was built on top of the World Wind engine, a Java-based virtual globe built by NASA and the open source community. For data retrieval and processing we exploited the potential of the OGC Web Coverage Service, the most exciting aspect being its processing extension, the OGC Web Coverage Processing Service (WCPS) standard. A WCPS-compliant service allows a client to execute a processing query on any coverage offered by the server. By exploiting a full grammar, several different kinds of information can be retrieved from one or more datasets together: scalar condensers, cross-sectional profiles, comparison maps and plots, etc. This combination of technologies made the application versatile and portable. As the processing is done on the server side, we ensured that a minimal amount of data is transferred and that the processing is done on a fully capable server, leaving the client hardware resources free for rendering the visualization. The application offers a set of features to visualize and cross-compare the datasets. Users can select a region of interest in space and time on which an aerosol map layer is plotted. Hovmoeller time-latitude and time-longitude profiles can be displayed by selecting orthogonal cross-sections on the globe. Statistics about the selected dataset are also displayed in different text and plot formats. The datasets can also be cross-compared using either the delta map tool or the merged map tool. For more advanced users, a WCPS query console is also offered, allowing users to process their data with ad-hoc queries and then choose how to display the results. Overall, the user has a rich set of tools to visualize and cross-compare the aerosol datasets. With our application we have shown how the NASA World Wind framework can be used to display results processed efficiently, and entirely, on the server side using the expressiveness of the OGC WCPS web service. The application serves not only as a proof of concept of a new paradigm for working with large geospatial data but also as a useful tool for environmental data analysts.
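For flavor, a Python sketch of the kind of ad-hoc query the WCPS console accepts: a condenser that averages a coverage over a time slice, posted to an assumed rasdaman-style endpoint. The coverage name, endpoint URL, and request parameter are hypothetical, and the WCPS grammar shown should be checked against the server's supported version.

```python
# Post a WCPS condenser query to an assumed rasdaman-style OWS endpoint.
# "AOT_MODIS", the URL, and the form field name are hypothetical.
import requests

wcps = ('for $c in (AOT_MODIS) return encode('
        'avg($c[ansi("2008-06-01T00:00":"2008-06-30T23:59")]), "csv")')
resp = requests.post("http://example.org/rasdaman/ows", data={"query": wcps})
print("mean AOT for June 2008:", resp.text)
```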
Compression-based aggregation model for medical web services.
Al-Shammary, Dhiah; Khalil, Ibrahim
2010-01-01
Many organizations, such as hospitals, have adopted Cloud Web services for their network services to avoid investing heavily in computing infrastructure. SOAP (Simple Object Access Protocol), an XML-based protocol, is the basic communication protocol of Cloud Web services. Web services often suffer congestion and bottlenecks as a result of the high network traffic caused by the large XML overhead. At the same time, the massive load on Cloud Web services, in terms of the large volume of client requests, results in the same problem. In this paper, two XML-aware aggregation techniques based on compression concepts are proposed in order to aggregate medical Web messages and achieve higher message size reduction.
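A simplified numerical illustration of why aggregation helps: SOAP messages share most of their XML scaffolding, so compressing a bundle exploits redundancy that per-message compression cannot. zlib here is a stand-in for the paper's XML-aware techniques, and the messages are synthetic.

```python
# Compare per-message compression with aggregate-then-compress on
# synthetic SOAP envelopes; zlib stands in for the paper's XML-aware
# aggregation techniques.
import zlib

soap_messages = [
    '<soap:Envelope xmlns:soap="http://schemas.xmlsoap.org/soap/envelope/">'
    f"<soap:Body><patientRecord id='{i}'>...</patientRecord></soap:Body>"
    "</soap:Envelope>"
    for i in range(50)
]

individual = sum(len(zlib.compress(m.encode())) for m in soap_messages)
aggregated = len(zlib.compress("\n".join(soap_messages).encode()))
print(f"compressed individually: {individual} bytes")
print(f"aggregated then compressed: {aggregated} bytes")
```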
Climate Model Diagnostic Analyzer Web Service System
NASA Astrophysics Data System (ADS)
Lee, S.; Pan, L.; Zhai, C.; Tang, B.; Jiang, J. H.
2013-12-01
The latest Intergovernmental Panel on Climate Change (IPCC) Fourth Assessment Report stressed the need for the comprehensive and innovative evaluation of climate models with newly available global observations. The traditional approach to climate model evaluation, which compares a single parameter at a time, identifies symptomatic model biases and errors but fails to diagnose the model problems. The model diagnosis process requires physics-based multi-variable comparisons that typically involve large-volume and heterogeneous datasets, making them both computationally- and data-intensive. To address these challenges, we are developing a parallel, distributed web-service system that enables the physics-based multi-variable model performance evaluations and diagnoses through the comprehensive and synergistic use of multiple observational data, reanalysis data, and model outputs. We have developed a methodology to transform an existing science application code into a web service using a Python wrapper interface and Python web service frameworks (i.e., Flask, Gunicorn, and Tornado). The web-service system, called Climate Model Diagnostic Analyzer (CMDA), currently supports (1) all the datasets from Obs4MIPs and a few ocean datasets from NOAA and Argo, which can serve as observation-based reference data for model evaluation and (2) many of CMIP5 model outputs covering a broad range of atmosphere, ocean, and land variables from the CMIP5 specific historical runs and AMIP runs. Analysis capabilities currently supported by CMDA are (1) the calculation of annual and seasonal means of physical variables, (2) the calculation of time evolution of the means in any specified geographical region, (3) the calculation of correlation between two variables, and (4) the calculation of difference between two variables. A web user interface is chosen for CMDA because it not only lowers the learning curve and removes the adoption barrier of the tool but also enables instantaneous use, avoiding the hassle of local software installation and environment incompatibility. CMDA is planned to be used as an educational tool for the summer school organized by JPL's Center for Climate Science in 2014. The requirements of the educational tool are defined with the interaction with the school organizers, and CMDA is customized to meet the requirements accordingly. The tool needs to be production quality for 30+ simultaneous users. The summer school will thus serve as a valuable testbed for the tool development, preparing CMDA to serve the Earth-science modeling and model-analysis community at the end of the project. This work was funded by the NASA Earth Science Program called Computational Modeling Algorithms and Cyberinfrastructure (CMAC).
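The wrapping approach in miniature: a hedged Flask sketch that turns a pre-existing analysis function into an HTTP endpoint, in the spirit of the CMDA methodology. The seasonal-mean computation is a toy stand-in for CMDA's real analyses, and the route name is invented.

```python
# Wrap an existing "science" function with a thin Flask layer so it
# becomes callable over HTTP; the analysis and route are toy stand-ins.
from flask import Flask, jsonify, request
import numpy as np

app = Flask(__name__)

def seasonal_mean(values, season_len=3):
    """Pre-existing science code: mean over consecutive seasons."""
    v = np.asarray(values, dtype=float)
    n = len(v) // season_len * season_len
    return v[:n].reshape(-1, season_len).mean(axis=1).tolist()

@app.route("/seasonal_mean")
def seasonal_mean_service():
    values = [float(x) for x in request.args["values"].split(",")]
    return jsonify(result=seasonal_mean(values))

# In production this app would sit behind Gunicorn, e.g.:
#   gunicorn -w 4 cmda_sketch:app
if __name__ == "__main__":
    app.run()
```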
Osz, Ágnes; Pongor, Lorinc Sándor; Szirmai, Danuta; Gyorffy, Balázs
2017-12-08
The long-term availability of online Web services is of utmost importance to ensure the reproducibility of analytical results. However, because of a lack of maintenance following acceptance, many servers become unavailable after a short period of time. Our aim was to monitor the accessibility and the decay rate of published Web services, and to determine the factors underlying these trends. We searched PubMed to identify publications containing Web server-related terms published between 1994 and 2017. Automatic and manual screening was used to check the status of each Web service. Kruskal-Wallis, Mann-Whitney and Chi-square tests were used to evaluate various parameters, including availability, accessibility, platform, origin of authors, citation, journal impact factor and publication year. We identified 3649 publications in 375 journals, of which 2522 (69%) were currently active. Over 95% of sites were running in the first 2 years, but this rate dropped to 84% in the third year and gradually declined thereafter (P < 1e-16). The mean half-life of Web services is 10.39 years. Working Web services were published in journals with higher impact factors (P = 4.8e-04). Services published before the year 2000 received minimal attention. Offline services were cited less than online ones (P = 0.022). The majority of Web services provide analytical tools, and the proportion of databases is slowly decreasing. In conclusion, almost one-third of the Web services published to date have gone out of service. We recommend continued support of Web-based services to increase the reproducibility of published results. © The Author 2017. Published by Oxford University Press.
Page, William R.; Berry, Margaret E.; VanSistine, D. Paco; Snyders, Scott R.
2009-01-01
The purpose of this map is to provide an integrated, bi-national geologic map dataset for display and analyses on an Arc Internet Map Service (IMS) dedicated to environmental health studies in the United States-Mexico border region. The IMS web site was designed by the US-Mexico Border Environmental Health Initiative project and collaborators, and the IMS and project web site address is http://borderhealth.cr.usgs.gov/. The objective of the project is to acquire, evaluate, analyze, and provide earth, biologic, and human health resources data within a GIS framework (IMS) to further our understanding of possible linkages between the physical environment and public health issues. The geologic map dataset is just one of many datasets included in the web site; other datasets include biologic, hydrologic, geographic, and human health themes.
Should Data Frameworks be Inherently Multiscalar? A Use Case of the Living Atlas of the World
NASA Astrophysics Data System (ADS)
Wright, D. J.
2015-12-01
In addition to an individual research project, many researchers are involved in at least one major partnership, perhaps an ocean observatory or a collaborative. The accompanying data framework may be focused on a subdiscipline of oceanography (i.e., marine geology and geophysics, physical oceanography, marine ecology, etc.) or a particular study region. The data framework obviously exists to support research, but also collaboration in data collection, spatial analysis, visualization, and communication of the science to multiple audiences. These interactions likely take place at multiple scales: the scale of the individual researcher, of small workgroups within a lab, or of inter-organizational collaboration. There are also frameworks that cut horizontally across discipline and region, connecting to broader national or global initiatives such as NSF EarthCube, other NSF-funded Research Coordination Networks, GEOSS, or ODIP. The Living Atlas of the World is presented as a use case of a data framework seeking to cut effectively across multiple spatial and temporal scales. The Living Atlas was first created in 2014 to make authoritative geographic information accessible via hosted cloud services so that users could more quickly address scientific and societal problems and decisions at spatial scales ranging from a small study area to the entire globe, while using a range of interactive map functions to tell engaging narratives along the way (aka "story maps"). What began as a way to build trusted, authoritative, and freely available *basemaps* from data contributed online by the GIS community has grown into a larger program extending far beyond basemap layers to satellite imagery, bathymetry, water column layers, and hydrology, as well as elevation, human population, and 3D web scenes. The Living Atlas is continually under construction, with new efforts that now extend beyond the reading and serving of datasets to the provisioning of spatial analysis on these *data services* in the cloud, as well as the crosswalking and sharing of workflows and use cases, additional apps for mobile, web, and desktop, community-building events where people gather face-to-face, and close interlinkages to other platforms such as ODIP and NSF EarthCube.
Analysis Tool Web Services from the EMBL-EBI.
McWilliam, Hamish; Li, Weizhong; Uludag, Mahmut; Squizzato, Silvano; Park, Young Mi; Buso, Nicola; Cowley, Andrew Peter; Lopez, Rodrigo
2013-07-01
Since 2004 the European Bioinformatics Institute (EMBL-EBI) has provided access to a wide range of databases and analysis tools via Web Services interfaces. This comprises services to search across the databases available from the EMBL-EBI and to explore the network of cross-references present in the data (e.g. EB-eye), services to retrieve entry data in various data formats and to access the data in specific fields (e.g. dbfetch), and analysis tool services, for example, sequence similarity search (e.g. FASTA and NCBI BLAST), multiple sequence alignment (e.g. Clustal Omega and MUSCLE), pairwise sequence alignment and protein functional analysis (e.g. InterProScan and Phobius). The REST/SOAP Web Services (http://www.ebi.ac.uk/Tools/webservices/) interfaces to these databases and tools allow their integration into other tools, applications, web sites, pipeline processes and analytical workflows. To get users started using the Web Services, sample clients are provided covering a range of programming languages and popular Web Service tool kits, and a brief guide to Web Services technologies, including a set of tutorials, is available for those wishing to learn more and develop their own clients. Users of the Web Services are informed of improvements and updates via a range of methods.
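A minimal Python client for one of these tools (NCBI BLAST) following the run/status/result REST pattern that the EBI documentation describes; parameter names and the endpoint layout should be verified against the current EMBL-EBI docs before use.

```python
# Submit an NCBI BLAST job to the EMBL-EBI REST interface and poll for
# the result. The run/status/result layout follows EBI's documented
# pattern; verify parameter names against the current documentation.
import time
import requests

BASE = "https://www.ebi.ac.uk/Tools/services/rest/ncbiblast"

job_id = requests.post(f"{BASE}/run", data={
    "email": "you@example.org",          # EBI requires a contact address
    "program": "blastp",
    "stype": "protein",
    "database": "uniprotkb_swissprot",
    "sequence": "MKTAYIAKQRQISFVKSHFSRQLEERLGLIEVQ",
}).text

while requests.get(f"{BASE}/status/{job_id}").text == "RUNNING":
    time.sleep(5)

print(requests.get(f"{BASE}/result/{job_id}/out").text[:500])
```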
Data List - Specifying and Acquiring Earth Science Data Measurements All at Once
NASA Astrophysics Data System (ADS)
Shie, C. L.; Teng, W. L.; Liu, Z.; Hearty, T. J., III; Shen, S.; Li, A.; Hegde, M.; Bryant, K.; Seiler, E.; Kempler, S. J.
2016-12-01
Natural phenomena, such as tropical storms (e.g., hurricanes/typhoons), winter storms (e.g., blizzards), volcanic eruptions, floods, and drought, have the potential to cause immense property damage, great socioeconomic impact, and tragic losses of human life. In order to investigate and assess these natural hazards in a timely manner, massive amounts of heterogeneous scientific data, particularly satellite and model products, need to be searched and accessed efficiently. This is a daunting task for most application users, decision makers, and science researchers. The NASA Goddard Earth Sciences Data and Information Services Center (GES DISC) has, for many years, archived and served massive amounts of Earth science data, along with value-added information and services. In order to help GES DISC users acquire their data of interest "all at once," with minimum effort, the GES DISC has started developing a value-added, knowledge-based data service framework. This framework allows collections of data and their related resources to be prepared and presented to users for natural disaster events or other scientific themes. These collections of data, initially termed "Data Bundles," then "Virtual Collections," and finally "Data Lists," contain suites of annotated Web addresses (URLs) that point to their respective data and resource addresses, "all at once" and "virtually." Because these collections of data are virtual, there is no need to duplicate the data. Currently available "Data Lists" for several natural disaster phenomena and the architecture of the data service framework will be presented.
Biological Web Service Repositories Review.
Urdidiales-Nieto, David; Navas-Delgado, Ismael; Aldana-Montes, José F
2017-05-01
Web services play a key role in bioinformatics, enabling the integration of database access and analysis algorithms. However, Web service repositories do not usually publish information on the changes made to their registered Web services. Dynamism is directly related to changes in the repositories (services registered or unregistered) and at the service level (annotation changes). Thus, users, software clients and workflow-based approaches lack enough relevant information to decide when they should review or re-execute a Web service or workflow to get updated or improved results. The dynamism of the repository could be a measure for workflow developers to re-check service availability and annotation changes in the services of interest to them. This paper presents a review of the most well-known Web service repositories in the life sciences, including an analysis of their dynamism. Freshness is introduced in this paper and used as the measure of the dynamism of these repositories. © 2017 The Authors. Published by Wiley-VCH Verlag GmbH & Co. KGaA.
The value of Web-based library services at Cedars-Sinai Health System.
Halub, L P
1999-07-01
Cedars-Sinai Medical Library/Information Center has maintained Web-based services since 1995 on the Cedars-Sinai Health System network. In that time, the librarians have found the provision of Web-based services to be a very worthwhile endeavor. Library users value the services that they access from their desktops because the services save time. They also appreciate being able to access services at their convenience, without restriction by the library's hours of operation. The library values its Web site because it brings increased visibility within the health system, and it enables library staff to expand services when budget restrictions have forced reduced hours of operation. In creating and maintaining the information center Web site, the librarians have learned the following lessons: consider the design carefully; offer what services you can, but weigh the advantages of providing the services against the time required to maintain them; make the content as accessible as possible; promote your Web site; and make friends in other departments, especially information services.
MedlinePlus Connect: How it Works
...how it looks depends on how it is implemented. Web Application: the Web application returns a formatted response... for more examples of Web Application response pages. Web Service: the MedlinePlus Connect REST-based Web service...
Design and applications of a multimodality image data warehouse framework.
Wong, Stephen T C; Hoo, Kent Soo; Knowlton, Robert C; Laxer, Kenneth D; Cao, Xinhau; Hawkins, Randall A; Dillon, William P; Arenson, Ronald L
2002-01-01
A comprehensive data warehouse framework is needed, which encompasses imaging and non-imaging information in supporting disease management and research. The authors propose such a framework, describe general design principles and system architecture, and illustrate a multimodality neuroimaging data warehouse system implemented for clinical epilepsy research. The data warehouse system is built on top of a picture archiving and communication system (PACS) environment and applies an iterative object-oriented analysis and design (OOAD) approach and recognized data interface and design standards. The implementation is based on a Java CORBA (Common Object Request Broker Architecture) and Web-based architecture that separates the graphical user interface presentation, data warehouse business services, data staging area, and backend source systems into distinct software layers. To illustrate the practicality of the data warehouse system, the authors describe two distinct biomedical applications--namely, clinical diagnostic workup of multimodality neuroimaging cases and research data analysis and decision threshold on seizure foci lateralization. The image data warehouse framework can be modified and generalized for new application domains.
Unifying Access to National Hydrologic Data Repositories via Web Services
NASA Astrophysics Data System (ADS)
Valentine, D. W.; Jennings, B.; Zaslavsky, I.; Maidment, D. R.
2006-12-01
The CUAHSI hydrologic information system (HIS) is designed to be a live, multiscale web portal system for accessing, querying, visualizing, and publishing distributed hydrologic observation data and models for any location or region in the United States. The HIS design follows the principles of open service-oriented architecture, i.e. system components are represented as web services with well-defined standard service APIs. WaterOneFlow web services are the main component of the design. The currently available services have been completely re-written compared to the previous version, and provide programmatic access to USGS NWIS (stream flow, groundwater and water quality repositories), DAYMET daily observations, NASA MODIS, and Unidata NAM streams, with several additional web service wrappers being added (EPA STORET, NCDC, and others). Different repositories of hydrologic data use different vocabularies and support different types of query access. Resolving semantic and structural heterogeneities across different hydrologic observation archives and distilling a generic set of service signatures is one of the main scalability challenges in this project, and a requirement in our web service design. To accomplish the uniformity of the web services API, data repositories are modeled following the CUAHSI Observation Data Model. The web service responses are document-based and use an XML schema to express the semantics in a standard format. Access to station metadata is provided via the web service methods GetSites, GetSiteInfo and GetVariableInfo. These methods form the foundation of the CUAHSI HIS discovery interface and may execute over locally stored metadata or request the information from remote repositories directly. Observation values are retrieved via a generic GetValues method which is executed against national data repositories. The service is implemented in ASP.Net, and other providers are implementing WaterOneFlow services in Java. A reference implementation of the WaterOneFlow web services is available. More information about the ongoing development of CUAHSI HIS is available from http://www.cuahsi.org/his/.
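A sketch of retrieving a time series through the generic GetValues method with a SOAP client such as suds; the WSDL URL is an assumed placeholder and the NWIS site and variable codes are illustrative only.

```python
# Call a WaterOneFlow GetValues method via the suds SOAP client.
# The WSDL URL is an assumed placeholder; actual service endpoints are
# listed in the CUAHSI HIS registry.
from suds.client import Client

wsdl = "http://example.org/nwisuv/cuahsi_1_1.asmx?WSDL"   # assumed endpoint
client = Client(wsdl)

# GetValues(location, variable, startDate, endDate, authToken)
response = client.service.GetValues(
    "NWIS:10109000",          # stream gauge site code (illustrative)
    "NWIS:00060",             # discharge variable code (illustrative)
    "2006-01-01", "2006-01-31", ""
)
print(response)   # WaterML time series document
```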
Federated querying architecture with clinical & translational health IT application.
Livne, Oren E; Schultz, N Dustin; Narus, Scott P
2011-10-01
We present a software architecture that federates data from multiple heterogeneous health informatics data sources owned by multiple organizations. The architecture builds upon state-of-the-art open-source Java and XML frameworks in innovative ways. It consists of (a) a federated query engine, which manages federated queries and result set aggregation via a patient identification service; and (b) data source facades, which translate the physical data models into a common model on the fly and handle large result set streaming. System modules are connected via reusable Apache Camel integration routes and deployed to an OSGi enterprise service bus. We present an application of our architecture that allows users to construct queries via the i2b2 web front-end, and federates patient data from the University of Utah Enterprise Data Warehouse and the Utah Population Database. Our system can be easily adopted, extended and integrated with existing SOA healthcare and HL7 frameworks such as i2b2 and caGrid.
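The Camel routing described here can be sketched in a few lines of Java. The route below is a hypothetical example using the Camel 3 Java DSL (not the paper's actual routes): it fans a query out to two stand-in data-source facades in parallel and concatenates their results; a real deployment would replace the facade routes with components that translate each source's physical model into the common model.

```java
import org.apache.camel.AggregationStrategy;
import org.apache.camel.CamelContext;
import org.apache.camel.Exchange;
import org.apache.camel.builder.RouteBuilder;
import org.apache.camel.impl.DefaultCamelContext;

public class FederatedQueryRoute {
    public static void main(String[] args) throws Exception {
        CamelContext context = new DefaultCamelContext();
        context.addRoutes(new RouteBuilder() {
            @Override
            public void configure() {
                // Concatenate the result sets returned by each facade.
                AggregationStrategy merge = (Exchange oldEx, Exchange newEx) -> {
                    if (oldEx == null) return newEx;
                    String merged = oldEx.getIn().getBody(String.class)
                            + "\n" + newEx.getIn().getBody(String.class);
                    oldEx.getIn().setBody(merged);
                    return oldEx;
                };

                // Fan a query out to two data-source facades in parallel and
                // aggregate their (already common-model) results.
                from("direct:federatedQuery")
                        .multicast(merge).parallelProcessing()
                        .to("direct:warehouseFacade", "direct:populationDbFacade")
                        .end();

                // Stand-in facades; real ones would translate physical data
                // models into the common model on the fly.
                from("direct:warehouseFacade").setBody(constant("EDW result set"));
                from("direct:populationDbFacade").setBody(constant("UPDB result set"));
            }
        });
        context.start();
        String result = context.createProducerTemplate()
                .requestBody("direct:federatedQuery", "patient-query", String.class);
        System.out.println(result);
        context.stop();
    }
}
```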
A system framework of inter-enterprise machining quality control based on fractal theory
NASA Astrophysics Data System (ADS)
Zhao, Liping; Qin, Yongtao; Yao, Yiyong; Yan, Peng
2014-03-01
In order to meet the quality control requirements of dynamic and complicated product machining processes among enterprises, a system framework of inter-enterprise machining quality control based on fractal theory was proposed. In this system framework, the fractal-specific characteristic of the inter-enterprise machining quality control function was analysed, and the model of inter-enterprise machining quality control was constructed from the nature of fractal structures. Furthermore, the goal-driven strategy of inter-enterprise quality control and the dynamic organisation strategy of inter-enterprise quality improvement were constructed through characteristic analysis of this model. In addition, the architecture of inter-enterprise machining quality control based on fractal theory was established by means of Web services. Finally, a case study of an application was presented. The result showed that the proposed method was feasible and could provide guidance for quality control and support for product reliability in inter-enterprise machining processes.
ContextProvider: Context awareness for medical monitoring applications.
Mitchell, Michael; Meyers, Christopher; Wang, An-I Andy; Tyson, Gary
2011-01-01
Smartphones are sensor-rich and Internet-enabled. With their on-board sensors, web services, social media, and external biosensors, smartphones can provide contextual information about the device, user, and environment, thereby enabling the creation of rich, biologically driven applications. We introduce ContextProvider, a framework that offers a unified, queryable interface to contextual data on the device. Unlike other context-based frameworks, ContextProvider offers interactive user feedback, self-adaptive sensor polling, and minimal reliance on third-party infrastructure. ContextProvider also allows for rapid development of new context- and bio-aware applications. Evaluation of ContextProvider shows that an additional monitoring sensor can be incorporated into the framework with fewer than 100 lines of Java code. With adaptive sensor monitoring, per-sensor power consumption can be reduced to as little as 1% overhead. Finally, through the use of context, the accuracy of data interpretation can be improved by up to 80%.
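The pluggable-sensor idea lends itself to a small interface plus a self-rescheduling poller. The Java sketch below is purely illustrative (ContextProvider's actual Android APIs are not given in the abstract, so the interface and class names here are invented): a sensor reports a suggested polling interval, and the scheduler re-arms itself at that rate, which is one simple way to realize self-adaptive polling.

```java
import java.util.Map;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicLong;

/** Hypothetical contract a ContextProvider-style framework might expose. */
interface ContextSensor {
    String name();
    Map<String, Object> poll();          // one sensor reading
    long suggestedIntervalMs();          // hint for adaptive polling
}

/** Toy heart-rate sensor; a real sensor module would wrap device hardware. */
class HeartRateSensor implements ContextSensor {
    private final AtomicLong interval = new AtomicLong(1_000);
    public String name() { return "heart-rate"; }
    public Map<String, Object> poll() {
        int bpm = 60 + (int) (Math.random() * 40);  // fake reading
        // Adaptive polling: back off while the value is uneventful.
        interval.set(bpm < 90 ? 5_000 : 1_000);
        return Map.of("bpm", bpm);
    }
    public long suggestedIntervalMs() { return interval.get(); }
}

public class AdaptivePollingDemo {
    public static void main(String[] args) {
        ContextSensor sensor = new HeartRateSensor();
        ScheduledExecutorService pool = Executors.newSingleThreadScheduledExecutor();
        Runnable tick = new Runnable() {
            public void run() {
                System.out.println(sensor.name() + " -> " + sensor.poll());
                // Re-schedule at the sensor's current suggested rate.
                pool.schedule(this, sensor.suggestedIntervalMs(), TimeUnit.MILLISECONDS);
            }
        };
        pool.schedule(tick, 0, TimeUnit.SECONDS);
    }
}
```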
Design for Connecting Spatial Data Infrastructures with Sensor Web (sensdi)
NASA Astrophysics Data System (ADS)
Bhattacharya, D.; M., M.
2016-06-01
Integrating Sensor Web with Spatial Data Infrastructures (SENSDI) aims to extend SDIs with sensor web enablement, converging geospatial and built infrastructure, and to implement test cases with sensor data and SDI. It is about research to harness the sensed environment by utilizing domain-specific sensor data to create a generalized sensor web framework. The challenges are semantic enablement for Spatial Data Infrastructures and connecting the interfaces of SDI with the interfaces of the Sensor Web. The proposed research plan is to identify sensor data sources, set up an open source SDI, match the APIs and functions between Sensor Web and SDI, and carry out case studies such as hazard and urban applications. We take up cooperative development of SDI best practices to enable a new realm of a location-enabled and semantically enriched World Wide Web (the "Geospatial Web" or "Geosemantic Web") by setting up a one-to-one correspondence between WMS, WFS, WCS and metadata on the SDI side and, on the Sensor Web side, the Sensor Observation Service (SOS); the Sensor Planning Service (SPS); the Sensor Alert Service (SAS); and the Web Notification Service (WNS), a service that facilitates asynchronous message interchange between users and services, and between two OGC-SWE services. Hence, in conclusion, it is important for geospatial studies to integrate SDI with the Sensor Web. The integration can be done by merging the common OGC interfaces of SDI and Sensor Web. Multi-usability studies to validate the integration have to be undertaken as future research.
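Of the SWE interfaces just listed, the SOS is the one a client typically exercises first. Below is a minimal, hypothetical SOS 2.0 GetObservation request using the KVP binding from Java; the endpoint URL, offering, and observed-property identifiers are invented, and real deployments may require additional parameters or a different temporal-filter syntax.

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class SosGetObservation {
    public static void main(String[] args) throws Exception {
        // Hypothetical SOS endpoint; identifiers are illustrative only.
        String url = "http://example.org/sos/service"
                + "?service=SOS&version=2.0.0&request=GetObservation"
                + "&offering=urn:example:offering:airTemperature"
                + "&observedProperty=urn:example:property:temperature"
                + "&temporalFilter=om:phenomenonTime,"
                + "2016-06-01T00:00:00Z/2016-06-02T00:00:00Z";

        HttpResponse<String> response = HttpClient.newHttpClient().send(
                HttpRequest.newBuilder(URI.create(url)).GET().build(),
                HttpResponse.BodyHandlers.ofString());
        System.out.println(response.body()); // O&M-encoded observations (XML)
    }
}
```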
Dynamic Generation of Reduced Ontologies to Support Resource Constraints of Mobile Devices
ERIC Educational Resources Information Center
Schrimpsher, Dan
2011-01-01
As Web Services and the Semantic Web become more important, enabling technologies such as web service ontologies will grow larger. At the same time, use of mobile devices to access web services has doubled in the last year. The ability of these resource constrained devices to download and reason across these ontologies to support service discovery…
Connecting geoscience systems and data using Linked Open Data in the Web of Data
NASA Astrophysics Data System (ADS)
Ritschel, Bernd; Neher, Günther; Iyemori, Toshihiko; Koyama, Yukinobu; Yatagai, Akiyo; Murayama, Yasuhiro; Galkin, Ivan; King, Todd; Fung, Shing F.; Hughes, Steve; Habermann, Ted; Hapgood, Mike; Belehaki, Anna
2014-05-01
Linked Data or Linked Open Data (LOD), in the realm of freely and publicly accessible data, is one of the most promising and most used semantic Web frameworks for connecting various types of data and vocabularies, including geoscience and related domains. The semantic Web extension to the commonly existing and used World Wide Web is based on the meaning of entities and relationships (in other words, classes and properties) used for data in a global data and information space, the Web of Data. LOD data is referenced and mashed up via URIs and is retrievable using simple parameter-controlled HTTP requests, leading to a result which is human-understandable or machine-readable. Furthermore, the publishing and mash-up of data in the semantic Web realm is realized by specific Web standards defined for the Web of Data, such as RDF, RDFS, OWL and SPARQL. Semantic Web based mash-up is the Web method to aggregate and reuse various contents from different sources, such as using FOAF as a model and vocabulary for the description of persons and organizations related (in our case) to geoscience projects, instruments, observations, data and so on. Using the example of three different geoscience data and information management systems (ESPAS, IUGONET and GFZ ISDC) and the associated science data and related metadata (better called context data), this publication describes the concept of the mash-up of systems and data using the semantic Web approach and the Linked Open Data framework. Because the three systems are based on different data models, data storage structures and technical implementations, an extra semantic Web layer on top of the existing interfaces is used for mash-up solutions. In order to satisfy the semantic Web standards, data transition processes are necessary, such as the transfer of content stored in relational databases or mapped in XML documents into SPARQL-capable databases or endpoints using D2R or XSLT. In addition, the use of mapped and/or merged domain-specific and cross-domain vocabularies, in the sense of terminological ontologies, is the foundation for virtually unified data retrieval and access in the IUGONET, ESPAS and GFZ ISDC data management systems. SPARQL endpoints, realized either by native RDF databases (e.g., Virtuoso) or by virtual SPARQL endpoints (e.g., D2R services), enable a purely Web-standard-based mash-up of domain-specific systems and data, such as in this case the space weather and geomagnetic domains, but also cross-domain connection to data and vocabularies, e.g., related to NASA's VxOs (particularly the VWO) or NASA's PDS data system within LOD. Acronyms: LOD: Linked Open Data; RDF: Resource Description Framework; RDFS: RDF Schema; OWL: Web Ontology Language; SPARQL: SPARQL Protocol and RDF Query Language; FOAF: Friend of a Friend ontology; ESPAS: Near Earth Space Data Infrastructure for e-Science (project); IUGONET: Inter-university Upper Atmosphere Global Observation Network (project); GFZ ISDC: German Research Centre for Geosciences Information System and Data Center; XML: Extensible Markup Language; D2R: (relational) Database to RDF (transformation); XSLT: Extensible Stylesheet Language Transformation; Virtuoso: OpenLink Virtuoso Universal Server (including RDF data management); NASA: National Aeronautics and Space Administration; VxOs: Virtual Observatories; VWO: Virtual Wave Observatory; PDS: Planetary Data System
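Since every system in this mash-up is ultimately reached through a SPARQL endpoint, a client needs nothing beyond the SPARQL protocol over HTTP. The Java sketch below runs a small FOAF query against a placeholder endpoint URL; both the endpoint and the query are illustrative, not taken from ESPAS, IUGONET, or GFZ ISDC.

```java
import java.net.URI;
import java.net.URLEncoder;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.nio.charset.StandardCharsets;

public class SparqlMashupQuery {
    public static void main(String[] args) throws Exception {
        // FOAF-based query for persons and the groups they belong to; the
        // endpoint stands in for a D2R or Virtuoso SPARQL endpoint.
        String query = "PREFIX foaf: <http://xmlns.com/foaf/0.1/> "
                + "SELECT ?name ?org WHERE { "
                + "  ?person a foaf:Person ; foaf:name ?name . "
                + "  OPTIONAL { ?org foaf:member ?person } } LIMIT 20";

        String url = "http://example.org/sparql?query="
                + URLEncoder.encode(query, StandardCharsets.UTF_8);

        HttpRequest request = HttpRequest.newBuilder(URI.create(url))
                .header("Accept", "application/sparql-results+json") // standard result format
                .GET().build();

        HttpResponse<String> response = HttpClient.newHttpClient()
                .send(request, HttpResponse.BodyHandlers.ofString());
        System.out.println(response.body());
    }
}
```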
Towards Web-based representation and processing of health information
Gao, Sheng; Mioc, Darka; Yi, Xiaolun; Anton, Francois; Oldfield, Eddie; Coleman, David J
2009-01-01
Background: There is great concern within health surveillance about how to grapple with environmental degradation, rapid urbanization, and population mobility and growth. The Internet has emerged as an efficient way to share health information, enabling users to access and understand data at their fingertips. Increasingly complex problems in the health field require increasingly sophisticated computer software, distributed computing power, and standardized data sharing. To address this need, Web-based mapping is now emerging as an important tool to enable health practitioners, policy makers, and the public to understand spatial health risks, population health trends and vulnerabilities. Today several web-based health applications generate dynamic maps; however, for people to fully interpret the maps they need a description of the data sources and of the methods used in the data analysis or statistical modeling. For the representation of health information through Web-mapping applications, there is still no standard format to accommodate all fixed (such as location) and variable (such as age, gender, health outcome, etc.) indicators in the representation of health information. Furthermore, net-centric computing has not been adequately applied to support flexible health data processing and mapping online. Results: The authors of this study designed a HEalth Representation XML (HERXML) schema that consists of the semantic (e.g., health activity description, data source description, the statistical methodology used for analysis), geometric, and cartographic representations of health data. A case study was carried out on the development of web applications and services within the Canadian Geospatial Data Infrastructure (CGDI) framework for community health programs of the New Brunswick Lung Association. This study facilitated the online processing, mapping and sharing of health information, with the use of HERXML and Open Geospatial Consortium (OGC) services. It brought a new solution for better health data representation and an initial exploration of the Web-based processing of health information. Conclusion: The designed HERXML has proven to be an appropriate solution for supporting the Web representation of health information. It can be used by health practitioners, policy makers, and the public in disease etiology, health planning, health resource management, health promotion and health education. The utilization of Web-based processing services in this study provides a flexible way for users to select and use certain processing functions for health data processing and mapping via the Web. This research provides easy access to geospatial and health data for understanding the trends of diseases, and promotes the growth and enrichment of the CGDI in the public health sector. PMID:19159445
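The abstract names the three representations a HERXML record combines but not its concrete element layout, so the Java sketch below invents one purely for illustration: it assembles a toy record with semantic, geometric, and cartographic parts using the standard DOM API. None of the element or attribute names are taken from the actual schema.

```java
import javax.xml.parsers.DocumentBuilderFactory;
import javax.xml.transform.TransformerFactory;
import javax.xml.transform.dom.DOMSource;
import javax.xml.transform.stream.StreamResult;
import org.w3c.dom.Document;
import org.w3c.dom.Element;

public class HealthRecordXml {
    public static void main(String[] args) throws Exception {
        Document doc = DocumentBuilderFactory.newInstance()
                .newDocumentBuilder().newDocument();

        // Element names mirror the three representations the schema is said
        // to combine (semantic, geometric, cartographic); they are invented.
        Element rec = doc.createElement("healthRecord");
        doc.appendChild(rec);

        Element semantic = doc.createElement("semantic");
        semantic.setAttribute("activity", "asthma-surveillance");
        semantic.setAttribute("dataSource", "NB Lung Association");
        semantic.setAttribute("statMethod", "age-standardized rate");
        rec.appendChild(semantic);

        Element geometry = doc.createElement("geometry");
        geometry.setTextContent("POINT(-66.64 45.96)"); // community centroid
        rec.appendChild(geometry);

        Element carto = doc.createElement("cartography");
        carto.setAttribute("classification", "quantile");
        carto.setAttribute("colorRamp", "sequential-red");
        rec.appendChild(carto);

        // Serialize the document to stdout.
        TransformerFactory.newInstance().newTransformer()
                .transform(new DOMSource(doc), new StreamResult(System.out));
    }
}
```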
An Integrated Web-Based 3d Modeling and Visualization Platform to Support Sustainable Cities
NASA Astrophysics Data System (ADS)
Amirebrahimi, S.; Rajabifard, A.
2012-07-01
Sustainable development is seen as the key to preserving the sustainability of cities in the face of ongoing population growth and its negative impacts. This is complex and requires holistic and multidisciplinary decision making. A variety of stakeholders with different backgrounds also needs to be considered and involved. Numerous web-based modeling and visualization tools have been designed and developed to support this process. There have been some success stories; however, the majority failed to provide a comprehensive platform supporting the different aspects of sustainable development. In this work, in the context of SDI and land administration, the CSDILA Platform, a 3D visualization and modeling platform, was proposed, which can be used to model and visualize different dimensions to facilitate the achievement of sustainability, in particular in the urban context. The methodology involved the design of a generic framework for the development of an analytical and visualization tool over the web. The CSDILA Platform was then implemented using a number of technologies based on the guidelines provided by the framework. The platform has a modular structure and uses a Service-Oriented Architecture (SOA). It is capable of managing spatial objects in a 4D data store and can flexibly incorporate a variety of developed models using the platform's API. Development scenarios can be modeled and tested using the analysis and modeling component of the platform, and the results are visualized in a seamless 3D environment. The platform was further tested using a number of scenarios and showed promising results and the potential to serve wider needs. In this paper, the design process of the generic framework, the implementation of the CSDILA Platform and the technologies used, as well as findings and future research directions, will be presented and discussed.
ballaxy: web services for structural bioinformatics.
Hildebrandt, Anna Katharina; Stöckel, Daniel; Fischer, Nina M; de la Garza, Luis; Krüger, Jens; Nickels, Stefan; Röttig, Marc; Schärfe, Charlotta; Schumann, Marcel; Thiel, Philipp; Lenhof, Hans-Peter; Kohlbacher, Oliver; Hildebrandt, Andreas
2015-01-01
Web-based workflow systems have gained considerable momentum in sequence-oriented bioinformatics. In structural bioinformatics, however, such systems are still relatively rare; while commercial stand-alone workflow applications are common in the pharmaceutical industry, academic researchers often still rely on command-line scripting to glue individual tools together. In this work, we address the problem of building a web-based system for workflows in structural bioinformatics. For the underlying molecular modelling engine, we opted for the BALL framework because of its extensive and well-tested functionality in the field of structural bioinformatics. The large number of molecular data structures and algorithms implemented in BALL allows for elegant and sophisticated development of new approaches in the field. We hence connected the versatile BALL library and its visualization and editing front end BALLView with the Galaxy workflow framework. The result, which we call ballaxy, enables the user to simply and intuitively create sophisticated pipelines for applications in structure-based computational biology, integrated into a standard tool for molecular modelling. ballaxy consists of three parts: some minor modifications to the Galaxy system, a collection of tools, and an integration into the BALL framework and the BALLView application for molecular modelling. Modifications to Galaxy will be submitted to the Galaxy project, and the BALL and BALLView integrations will be included in the next major BALL release. After acceptance of the modifications into the Galaxy project, we will publish all ballaxy tools via the Galaxy toolshed. In the meantime, all three components are available from http://www.ball-project.org/ballaxy, and Docker images for ballaxy are available at https://registry.hub.docker.com/u/anhi/ballaxy/dockerfile/. ballaxy is licensed under the terms of the GPL.
Wired Widgets: Agile Visualization for Space Situational Awareness
NASA Astrophysics Data System (ADS)
Gerschefske, K.; Witmer, J.
2012-09-01
Continued advancement in sensors and analysis techniques has resulted in a wealth of Space Situational Awareness (SSA) data, made available via tools and Service Oriented Architectures (SOA) such as those in the Joint Space Operations Center Mission Systems (JMS) environment. Current visualization software cannot quickly adapt to rapidly changing missions and data, preventing operators and analysts from performing their jobs effectively. The value of this wealth of SSA data is not fully realized, as the operators' existing software is not built with the flexibility to consume new or changing sources of data or to rapidly customize visualization as the mission evolves. While tools like the JMS user-defined operational picture (UDOP) have begun to fill this gap, this paper presents a further evolution, leveraging Web 2.0 technologies for maximum agility. We demonstrate a flexible Web widget framework with inter-widget data sharing, publish-subscribe eventing, and an API providing the basis for consumption of new data sources and adaptable visualization. Wired Widgets offers cross-portal widgets along with a widget communication framework and a development toolkit for rapid new widget development, giving operators the ability to answer relevant questions as the mission evolves. Wired Widgets has been applied in a number of dynamic mission domains including disaster response, combat operations, and noncombatant evacuation scenarios. The variety of applications demonstrates that Wired Widgets provides a flexible, data-driven solution for visualization in changing environments. In this paper, we show how, deployed in the Ozone Widget Framework portal environment, Wired Widgets can provide agile, web-based visualization to support the SSA mission. Furthermore, we discuss how the tenets of agile visualization can generally be applied to the SSA problem space to provide operators flexibility, potentially informing future acquisition and system development.
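Inter-widget data sharing of the kind described here usually reduces to a topic-based publish-subscribe bus (Ozone widgets do this in JavaScript through an eventing API; the Java sketch below is a language-neutral stand-in for the pattern, not Wired Widgets code). Widgets that never reference each other still stay synchronized through shared topics.

```java
import java.util.List;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.CopyOnWriteArrayList;
import java.util.function.Consumer;

/** Toy topic-based event bus illustrating inter-widget publish/subscribe. */
public class WidgetEventBus {
    private final Map<String, List<Consumer<Object>>> topics = new ConcurrentHashMap<>();

    public void subscribe(String topic, Consumer<Object> handler) {
        topics.computeIfAbsent(topic, t -> new CopyOnWriteArrayList<>()).add(handler);
    }

    public void publish(String topic, Object payload) {
        topics.getOrDefault(topic, List.of()).forEach(h -> h.accept(payload));
    }

    public static void main(String[] args) {
        WidgetEventBus bus = new WidgetEventBus();

        // A plot widget and a table widget both react when a track widget
        // publishes a newly selected object; neither knows about the other.
        bus.subscribe("ssa.selectedObject",
                obj -> System.out.println("plot widget: drawing " + obj));
        bus.subscribe("ssa.selectedObject",
                obj -> System.out.println("table widget: showing " + obj));

        bus.publish("ssa.selectedObject", "NORAD 25544");
    }
}
```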
Managing the Web-Enhanced Geographic Information Service.
ERIC Educational Resources Information Center
Stephens, Denise
1997-01-01
Examines key management issues involved in delivering geographic information services on the World Wide Web, using the Geographic Information Center (GIC) program at the University of Virginia Library as a reference. Highlights include integrating the Web into services; building collections for Web delivery; and evaluating spatial information…
Automated geospatial Web Services composition based on geodata quality requirements
NASA Astrophysics Data System (ADS)
Cruz, Sérgio A. B.; Monteiro, Antonio M. V.; Santos, Rafael
2012-10-01
Service-Oriented Architecture and Web Services technologies improve the performance of activities involved in geospatial analysis on a distributed computing architecture. However, the design of the geospatial analysis process on this platform, by combining component Web Services, presents some open issues. The automated construction of these compositions represents an important research topic. Some approaches to solving this problem are based on AI planning methods coupled with semantic service descriptions. This work presents a new approach using AI planning methods to improve the robustness of the produced geospatial Web Services compositions. For this purpose, we use semantic descriptions of geospatial data quality requirements in a rule-based form. These rules allow the semantic annotation of geospatial data and, coupled with the conditional planning method, this approach more precisely represents the situations of nonconformity with geodata quality that may occur during the execution of the Web Service composition. The service compositions produced by this method are more robust, thus improving process reliability when working with a composition of chained geospatial Web Services.
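To make the rule-based idea concrete, the sketch below encodes two invented quality requirements as Java predicates and branches on conformity, which is the runtime effect a conditional plan's contingency branches have. The thresholds and the fallback action are illustrative, not the paper's actual rules.

```java
import java.util.List;
import java.util.function.Predicate;

/** Illustrative geodata-quality rule check guarding a service invocation. */
public class QualityGuardedComposition {
    record GeoDataset(String name, double resolutionMeters, double cloudCoverPct) {}

    public static void main(String[] args) {
        // Rule-based quality requirements (thresholds are invented examples).
        Predicate<GeoDataset> fineEnough = d -> d.resolutionMeters() <= 30.0;
        Predicate<GeoDataset> clearEnough = d -> d.cloudCoverPct() <= 20.0;
        Predicate<GeoDataset> conforms = fineEnough.and(clearEnough);

        List<GeoDataset> candidates = List.of(
                new GeoDataset("scene-A", 15.0, 8.0),
                new GeoDataset("scene-B", 250.0, 2.0));

        for (GeoDataset d : candidates) {
            if (conforms.test(d)) {
                System.out.println(d.name() + ": invoke next service in the chain");
            } else {
                // Conditional-planning flavor: a contingency branch handles
                // the nonconformity instead of failing the whole composition.
                System.out.println(d.name() + ": reroute to an alternative source");
            }
        }
    }
}
```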
Advancing Collaboration through Hydrologic Data and Model Sharing
NASA Astrophysics Data System (ADS)
Tarboton, D. G.; Idaszak, R.; Horsburgh, J. S.; Ames, D. P.; Goodall, J. L.; Band, L. E.; Merwade, V.; Couch, A.; Hooper, R. P.; Maidment, D. R.; Dash, P. K.; Stealey, M.; Yi, H.; Gan, T.; Castronova, A. M.; Miles, B.; Li, Z.; Morsy, M. M.
2015-12-01
HydroShare is an online, collaborative system for open sharing of hydrologic data, analytical tools, and models. It supports the sharing of, and collaboration around, "resources" which are defined primarily by standardized metadata, content data models for each resource type, and an overarching resource data model based on the Open Archives Initiative's Object Reuse and Exchange (OAI-ORE) standard and a hierarchical file packaging system called "BagIt". HydroShare expands the data sharing capability of the CUAHSI Hydrologic Information System by broadening the classes of data accommodated to include geospatial and multidimensional space-time datasets commonly used in hydrology. HydroShare also includes new capability for sharing models, model components, and analytical tools and will take advantage of emerging social media functionality to enhance information about and collaboration around hydrologic data and models. It also supports web services and server/cloud-based computation operating on resources for the execution of hydrologic models and for the analysis and visualization of hydrologic data. HydroShare uses iRODS as a network file system for underlying storage of datasets and models. Collaboration is enabled by casting datasets and models as "social objects". Social functions include both private and public sharing, formation of collaborative groups of users, and value-added annotation of shared datasets and models. The HydroShare web interface and social media functions were developed using the Django web application framework coupled to iRODS. Data visualization and analysis are supported through the Tethys Platform web GIS software stack. Links to external systems are supported by RESTful web service interfaces to HydroShare's content. This presentation will introduce the HydroShare functionality developed to date and describe ongoing development of functionality to support collaboration and integration of data and models.
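HydroShare's RESTful interface makes the resource catalog scriptable. The sketch below lists public resources from Java; the /hsapi/resource/ path reflects the HydroShare REST API as publicly documented, but treat the exact path and response layout as an assumption to verify against current documentation.

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class HydroShareList {
    public static void main(String[] args) throws Exception {
        // Public resource listing; path assumed from HydroShare's REST docs.
        HttpRequest request = HttpRequest.newBuilder(
                        URI.create("https://www.hydroshare.org/hsapi/resource/"))
                .header("Accept", "application/json")
                .GET().build();

        HttpResponse<String> response = HttpClient.newHttpClient()
                .send(request, HttpResponse.BodyHandlers.ofString());
        System.out.println(response.statusCode());
        System.out.println(response.body()); // JSON page of resource metadata
    }
}
```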
Talkoot Portals: Discover, Tag, Share, and Reuse Collaborative Science Workflows
NASA Astrophysics Data System (ADS)
Wilson, B. D.; Ramachandran, R.; Lynnes, C.
2009-05-01
A small but growing number of scientists are beginning to harness Web 2.0 technologies, such as wikis, blogs, and social tagging, as a transformative way of doing science. These technologies provide researchers easy mechanisms to critique, suggest and share ideas, data and algorithms. At the same time, large suites of algorithms for science analysis are being made available as remotely-invokable Web Services, which can be chained together to create analysis workflows. This provides the research community an unprecedented opportunity to collaborate by sharing their workflows with one another, reproducing and analyzing research results, and leveraging colleagues' expertise to expedite the process of scientific discovery. However, wikis and similar technologies are limited to text, static images and hyperlinks, providing little support for collaborative data analysis. A team of information technology and Earth science researchers from multiple institutions have come together to improve community collaboration in science analysis by developing a customizable "software appliance" to build collaborative portals for Earth Science services and analysis workflows. The critical requirement is that researchers (not just information technologists) be able to build collaborative sites around service workflows within a few hours. We envision online communities coming together, much like Finnish "talkoot" (a barn raising), to build a shared research space. Talkoot extends a freely available, open source content management framework with a series of modules specific to Earth Science for registering, creating, managing, discovering, tagging and sharing Earth Science web services and workflows for science data processing, analysis and visualization. Users will be able to author a "science story" in shareable web notebooks, including plots or animations, backed up by an executable workflow that directly reproduces the science analysis. New services and workflows of interest will be discoverable using tag search, and advertised using "service casts" and "interest casts" (Atom feeds). Multiple science workflow systems will be plugged into the system, with initial support for UAH's Mining Workflow Composer and the open-source Active BPEL engine, and JPL's SciFlo engine and the VizFlow visual programming interface. With the ability to share and execute analysis workflows, Talkoot portals can be used to do collaborative science in addition to communicate ideas and results. It will be useful for different science domains, mission teams, research projects and organizations. Thus, it will help to solve the "sociological" problem of bringing together disparate groups of researchers, and the technical problem of advertising, discovering, developing, documenting, and maintaining inter-agency science workflows. The presentation will discuss the goals of and barriers to Science 2.0, the social web technologies employed in the Talkoot software appliance (e.g. CMS, social tagging, personal presence, advertising by feeds, etc.), illustrate the resulting collaborative capabilities, and show early prototypes of the web interfaces (e.g. embedded workflows).
BioCatalogue: a universal catalogue of web services for the life sciences
Bhagat, Jiten; Tanoh, Franck; Nzuobontane, Eric; Laurent, Thomas; Orlowski, Jerzy; Roos, Marco; Wolstencroft, Katy; Aleksejevs, Sergejs; Stevens, Robert; Pettifer, Steve; Lopez, Rodrigo; Goble, Carole A.
2010-01-01
The use of Web Services to enable programmatic access to on-line bioinformatics is becoming increasingly important in the Life Sciences. However, their number, distribution and the variable quality of their documentation can make their discovery and subsequent use difficult. A Web Services registry with information on available services will help to bring together service providers and their users. The BioCatalogue (http://www.biocatalogue.org/) provides a common interface for registering, browsing and annotating Web Services to the Life Science community. Services in the BioCatalogue can be described and searched in multiple ways based upon their technical types, bioinformatics categories, user tags, service providers or data inputs and outputs. They are also subject to constant monitoring, allowing the identification of service problems and changes and the filtering-out of unavailable or unreliable resources. The system is accessible via a human-readable ‘Web 2.0’-style interface and a programmatic Web Service interface. The BioCatalogue follows a community approach in which all services can be registered, browsed and incrementally documented with annotations by any member of the scientific community. PMID:20484378
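Because BioCatalogue exposes its registry programmatically as well as through the 'Web 2.0' interface, a search can be scripted. The Java sketch below issues a hypothetical REST-style search; the URL pattern and parameter are illustrative guesses, so consult the BioCatalogue API documentation for the real interface.

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class BioCatalogueSearch {
    public static void main(String[] args) throws Exception {
        // Illustrative query URL; the exact programmatic interface and
        // parameters should be checked against BioCatalogue's documentation.
        String url = "https://www.biocatalogue.org/services.xml?q=sequence+alignment";

        HttpResponse<String> response = HttpClient.newHttpClient().send(
                HttpRequest.newBuilder(URI.create(url)).GET().build(),
                HttpResponse.BodyHandlers.ofString());
        System.out.println(response.body()); // XML listing of matching services
    }
}
```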
Customer Decision Making in Web Services with an Integrated P6 Model
NASA Astrophysics Data System (ADS)
Sun, Zhaohao; Sun, Junqing; Meredith, Grant
Customer decision making (CDM) is an indispensable factor for web services. This article examines CDM in web services with a novel P6 model, which consists of the 6 Ps: privacy, perception, propensity, preference, personalization and promised experience. This model integrates the existing 6 P elements of the marketing mix as the system environment of CDM in web services. The new integrated P6 model deals with the inner world of the customer and incorporates what the customer thinks during the decision-making process. The proposed approach will facilitate the research and development of web services and decision support systems.
Monitoring performance of a highly distributed and complex computing infrastructure in LHCb
NASA Astrophysics Data System (ADS)
Mathe, Z.; Haen, C.; Stagni, F.
2017-10-01
In order to ensure optimal performance of LHCb Distributed Computing, based on LHCbDIRAC, it is necessary to be able to inspect the behavior over time of many components: not only the agents and services on which the infrastructure is built, but also all the computing tasks and data transfers that are managed by this infrastructure. This consists of recording and then analyzing time series of a large number of observables, for which the use of SQL relational databases is far from optimal. Therefore, within DIRAC we have been studying novel possibilities based on NoSQL databases (ElasticSearch, OpenTSDB and InfluxDB), and as a result of this study we developed a new monitoring system based on ElasticSearch. It has been deployed on the LHCb Distributed Computing infrastructure, for which it collects data from all the components (agents, services, jobs) and allows creating reports through Kibana and a web user interface based on the DIRAC web framework. In this paper we describe this new implementation of the DIRAC monitoring system. We give details on the ElasticSearch implementation within the general DIRAC framework, as well as an overview of the advantages of the pipeline aggregation used for creating a dynamic bucketing of the time series. We present the advantages of using the ElasticSearch DSL high-level library for creating and running queries. Finally, we present the performance of the system.
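The dynamic bucketing mentioned above maps naturally onto ElasticSearch's date-histogram aggregation, sketched below as a raw REST request from Java so that no client library is assumed. The index name, field names, and the fixed_interval parameter (ElasticSearch 7.x syntax) are illustrative, not LHCbDIRAC's actual schema.

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class TimeSeriesBuckets {
    public static void main(String[] args) throws Exception {
        // Date-histogram aggregation: ElasticSearch buckets the raw time
        // series server-side, which is what enables dynamic bucketing.
        String query = """
            { "size": 0,
              "aggs": { "jobs_per_hour": {
                  "date_histogram": { "field": "timestamp", "fixed_interval": "1h" },
                  "aggs": { "running": { "avg": { "field": "runningJobs" } } } } } }
            """;

        // Index and field names are invented, not LHCbDIRAC's actual schema.
        HttpRequest request = HttpRequest.newBuilder(
                        URI.create("http://localhost:9200/monitoring/_search"))
                .header("Content-Type", "application/json")
                .POST(HttpRequest.BodyPublishers.ofString(query))
                .build();

        HttpResponse<String> response = HttpClient.newHttpClient()
                .send(request, HttpResponse.BodyHandlers.ofString());
        System.out.println(response.body());
    }
}
```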
Exposing the cancer genome atlas as a SPARQL endpoint
Deus, Helena F.; Veiga, Diogo F.; Freire, Pablo R.; Weinstein, John N.; Mills, Gordon B.; Almeida, Jonas S.
2011-01-01
The Cancer Genome Atlas (TCGA) is a multidisciplinary, multi-institutional effort to characterize several types of cancer. Datasets from biomedical domains such as TCGA present a particularly challenging task for those interested in dynamically aggregating their results because the data sources are typically both heterogeneous and distributed. The Linked Data best practices offer a solution to integrate and discover data with those characteristics, namely through exposure of data as Web services supporting SPARQL, the Resource Description Framework query language. Most SPARQL endpoints, however, cannot easily be queried by data experts. Furthermore, exposing experimental data as SPARQL endpoints remains a challenging task because, in most cases, data must first be converted to Resource Description Framework triples. In line with those requirements, we have developed an infrastructure to expose clinical, demographic and molecular data elements generated by TCGA as a SPARQL endpoint by assigning elements to entities of the Simple Sloppy Semantic Database (S3DB) management model. All components of the infrastructure are available as independent Representational State Transfer (REST) Web services to encourage reusability, and a simple interface was developed to automatically assemble SPARQL queries by navigating a representation of the TCGA domain. A key feature of the proposed solution that greatly facilitates assembly of SPARQL queries is the distinction between the TCGA domain descriptors and data elements. Furthermore, the use of the S3DB management model as a mediator enables queries to both public and protected data without the need for prior submission to a single data source. PMID:20851208
Choi, Okkyung; Han, SangYong
2007-01-01
Ubiquitous computing makes it possible to determine in real time the location and situation of service requesters in a web service environment, as it enables access to computers at any time and in any place. Though research on various aspects of ubiquitous commerce is progressing at enterprises and research centers, both domestically and overseas, analysis of a customer's personal preferences based on the semantic web and rule-based services using semantics is not currently being conducted. This paper proposes a Ubiquitous Computing Services System that enables rule-based search as well as semantics-based search, so that the electronic space and the physical space can be combined into one, making possible real-time search for web services and the construction of efficient web services.
Usability evaluation of cloud-based mapping tools for the display of very large datasets
NASA Astrophysics Data System (ADS)
Stotz, Nicole Marie
The elasticity and on-demand nature of cloud services have made it easier to create web maps. Users only need access to a web browser and the Internet to utilize cloud-based web maps, eliminating the need for specialized software. To encourage a wide variety of users, a map must be well designed; usability is a very important concept in designing a web map. Fusion Tables, a new product from Google, is one example of the newer cloud-based distributed GIS services. It allows for easy spatial data manipulation and visualization within the Google Maps framework. ESRI has also introduced a cloud-based version of its software, called ArcGIS Online, built on Amazon's EC2 cloud. Utilizing a user-centered design framework, two prototype maps were created with data from the San Diego East County Economic Development Council: one built on Fusion Tables, and another on ESRI's ArcGIS Online. A usability analysis was conducted and used to compare both map prototypes in terms of design and functionality. Load tests were also run, and performance metrics were gathered on both map prototypes. The usability analysis was taken by 25 geography students and consisted of time-based tasks and questions on map design and functionality. Survey participants completed the time-based tasks for the Fusion Tables map prototype more quickly than those of the ArcGIS Online map prototype. While responses were generally positive towards the design and functionality of both prototypes, overall the Fusion Tables map prototype was preferred. For the load tests, the data set was broken into 22 groups for a total of 44 tests. While the Fusion Tables map prototype performed more efficiently than the ArcGIS Online prototype, the differences are almost unnoticeable. A SWOT analysis was conducted for each prototype. The results from this research point to the Fusion Tables map prototype as the preferred option. A redesign of this prototype would incorporate design suggestions from the usability survey, though some functionality would need to be dropped. It is a free product and would therefore be the best option if cost is an issue, but this map may not be supported in the future.
NASA Astrophysics Data System (ADS)
Gupta, V.; Gupta, N.; Gupta, S.; Field, E.; Maechling, P.
2003-12-01
Modern laptop computers, and personal computers, can provide capabilities that are, in many ways, comparable to workstations or departmental servers. However, this doesn't mean we should run all computations on our local computers. We have identified several situations in which it is preferable to implement our seismological application programs in a distributed, server-based computing model. In this model, application programs on the user's laptop, or local computer, invoke programs that run on an organizational server, and the results are returned to the invoking system. Situations in which a server-based architecture may be preferred include: (a) a program is written in a language, or written for an operating environment, that is unsupported on the local computer; (b) software libraries or utilities required to execute a program are not available on the user's computer; (c) a computational program is physically too large, or computationally too expensive, to run on a user's computer; (d) a user community wants to enforce a consistent method of performing a computation by standardizing on a single implementation of a program; and (e) the computational program may require current information that is not available to all client computers. Until recently, distributed, server-based computational capabilities were implemented using client/server architectures. In these architectures, client programs were often written in the same language, and executed in the same computing environment, as the servers. Recently, a new distributed computational model, called Web Services, has been developed. Web Services are based on Internet standards such as XML, SOAP, WSDL, and UDDI. Web Services offer the promise of platform- and language-independent distributed computing. To investigate this new computational model, and to provide useful services to the SCEC community, we have implemented several computational and utility programs using a Web Service architecture. We have hosted these Web Services as a part of the SCEC Community Modeling Environment (SCEC/CME) ITR Project (http://www.scec.org/cme). We have implemented Web Services for several of the reasons cited previously. For example, we implemented a FORTRAN-based Earthquake Rupture Forecast (ERF) as a Web Service for use by client computers that don't support a FORTRAN runtime environment. We implemented a Generic Mapping Tool (GMT) Web Service for use by systems that don't have local access to GMT. We implemented a Hazard Map Calculator Web Service to execute hazard calculations that are too computationally intensive to run on a local system. We implemented a Coordinate Conversion Web Service to enforce a standard and consistent method for converting between UTM and Lat/Lon. Our experience developing these services indicates both strengths and weaknesses in current Web Service technology. Client programs that utilize Web Services typically need network access, a significant disadvantage at times. Programs with simple input and output parameters were the easiest to implement as Web Services, while programs with complex parameter types required a significant amount of additional development. We also noted that Web Services are very data-oriented, and adapting object-oriented software into the Web Service model proved problematic. Also, the Web Service approach of converting data types into XML format for network transmission has significant inefficiencies for some data sets.
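For a service like the coordinate converter described above, a SOAP client can be as simple as POSTing an envelope over HTTP. In the Java sketch below the service URL, XML namespace, operation name, and parameter names are all invented stand-ins (the real SCEC service's WSDL would define them); it only illustrates the wire-level shape of a SOAP 1.1 call.

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class CoordinateConversionClient {
    public static void main(String[] args) throws Exception {
        // Hand-built SOAP 1.1 envelope; namespace and operation are invented.
        String envelope = """
            <soapenv:Envelope xmlns:soapenv="http://schemas.xmlsoap.org/soap/envelope/"
                              xmlns:cc="http://example.org/coordconv">
              <soapenv:Body>
                <cc:LatLonToUTM>
                  <cc:lat>34.0522</cc:lat>
                  <cc:lon>-118.2437</cc:lon>
                </cc:LatLonToUTM>
              </soapenv:Body>
            </soapenv:Envelope>
            """;

        HttpRequest request = HttpRequest.newBuilder(
                        URI.create("http://example.org/services/coordconv"))
                .header("Content-Type", "text/xml; charset=utf-8")
                .header("SOAPAction", "\"LatLonToUTM\"")
                .POST(HttpRequest.BodyPublishers.ofString(envelope))
                .build();

        HttpResponse<String> response = HttpClient.newHttpClient()
                .send(request, HttpResponse.BodyHandlers.ofString());
        System.out.println(response.body()); // SOAP response with UTM easting/northing
    }
}
```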
The Organizational Role of Web Services
ERIC Educational Resources Information Center
Mitchell, Erik
2011-01-01
The workload of Web librarians is already split between Web-related and other library tasks. But today's technological environment has created new implications for existing services and new demands for staff time. It is time to reconsider how libraries can best allocate resources to provide effective Web services. Delivering high-quality services…
Federal Register 2010, 2011, 2012, 2013, 2014
2013-10-01
...-NEW] Agency Information Collection Activities: Online Survey of Web Services Employers; New... Web site at http://www.Regulations.gov under e-Docket ID number USCIS-2013-0003. When submitting... information collection. (2) Title of the Form/Collection: Online Survey of Web Services Employers. (3) Agency...
Federal Register 2010, 2011, 2012, 2013, 2014
2013-07-16
...-NEW] Agency Information Collection Activities: Online Survey of Web Services Employers; New... Information Collection: New information collection. (2) Title of the Form/Collection: Online Survey of Web... sector. It is necessary that USCIS obtains data on the E-Verify Program Web Services. Gaining an...
IDIS Small Bodies and Dust Node
NASA Astrophysics Data System (ADS)
de Sanctis, M. C.; Capria, M. T.; Carraro, F.; Fonte, S.; Giacomini, L.; Turrini, D.
2009-04-01
The EuroPlaNet information service provides access to lists of researchers, laboratories and data archives relevant to many aspects of planetary and space physics. Information can be accessed through the EuroPlaNet website or, for advanced searches, via web services available at the different thematic nodes. The goal of IDIS is to provide easy-to-use access to resources like people, laboratories, modeling activities and data archives related to planetary sciences. The development of IDIS is an international effort started under the European Commission's 6th Framework Programme, which will expand its capabilities during the 7th Framework Programme as part of the Capacities Specific Programme/Research Infrastructures. IDIS is complemented by a set of other EuroPlaNet web services maintained under the responsibility of separate institutions. Each activity maintains its own web portal with cross-links pointing to the other elements of EuroPlaNet. General access is provided via the EuroPlaNet homepage. IDIS is not a repository of original data but rather supports access to various data sources. The final goal of IDIS is to provide Virtual Observatory tools for access to data ranging from laboratory measurements and ground- and space-based observations to modeling results, allowing the combination of data sources as divergent as feasible. IDIS is built around four scientific nodes located in different European countries. Each node deals with a subset of the disciplines related to planetary sciences and, working in cooperation with international experts in these fields, provides a wealth of information to the international planetary science community. The EuroPlaNet IDIS thematic node "Small Bodies and Dust Node" is hosted by the Istituto di Fisica dello Spazio Interplanetario and is established in close cooperation with the Istituto di Astrofisica Spaziale. Both these institutes are part of the Istituto Nazionale di Astrofisica (INAF). The IDIS Small Bodies and Dust Node aims at becoming a focal point in the fields of the Solar System's minor bodies and interplanetary dust by providing the community with a central, user-friendly resource and service inventory and contact point. The main aims of the Small Bodies and Dust Node are to: • support collaborative work in the field of small bodies and dust; • provide information about databases and scientific tools in this field; • establish a scientific information management system; • define and develop science cases regarding IDIS.
Protecting Database Centric Web Services against SQL/XPath Injection Attacks
NASA Astrophysics Data System (ADS)
Laranjeiro, Nuno; Vieira, Marco; Madeira, Henrique
Web services represent a powerful interface for back-end database systems and are increasingly being used in business-critical applications. However, field studies show that a large number of web services are deployed with security flaws (e.g., SQL Injection vulnerabilities). Although several techniques for the identification of security vulnerabilities have been proposed, developing non-vulnerable web services is still a difficult task. In fact, security-related concerns are hard to apply as they involve adding complexity to already complex code. This paper proposes an approach to secure web services against SQL and XPath Injection attacks by transparently detecting and aborting service invocations that try to take advantage of potential vulnerabilities. Our mechanism was applied to secure several web services specified by the TPC-App benchmark, proving to be 100% effective in stopping attacks, non-intrusive, and very easy to use.
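The paper's mechanism works by detecting and aborting malicious invocations at runtime; for contrast, the classic developer-side defense is to bind inputs as parameters so they can never be parsed as SQL. The Java sketch below (assuming an in-memory H2 database on the classpath) shows why string concatenation is exploitable and how a PreparedStatement neutralizes the same payload. This is background illustration, not the paper's approach.

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.Statement;

public class SafeLookup {
    public static void main(String[] args) throws Exception {
        String userInput = "' OR '1'='1";   // classic injection payload

        try (Connection conn = DriverManager.getConnection("jdbc:h2:mem:demo")) {
            Statement st = conn.createStatement();
            st.execute("CREATE TABLE customers(id INT, name VARCHAR(64))");
            st.execute("INSERT INTO customers VALUES (1, 'alice')");

            // VULNERABLE pattern: concatenation lets the payload rewrite the
            // query ("... WHERE name = '' OR '1'='1'" matches every row).
            // String bad = "SELECT * FROM customers WHERE name = '" + userInput + "'";

            // Safe pattern: the parameter is bound as data, never parsed as SQL.
            PreparedStatement ps = conn.prepareStatement(
                    "SELECT id, name FROM customers WHERE name = ?");
            ps.setString(1, userInput);
            try (ResultSet rs = ps.executeQuery()) {
                System.out.println("rows matched: " + (rs.next() ? 1 : 0)); // 0
            }
        }
    }
}
```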
A Web service substitution method based on service cluster nets
NASA Astrophysics Data System (ADS)
Du, YuYue; Gai, JunJing; Zhou, MengChu
2017-11-01
Service substitution is an important research topic in the fields of Web services and service-oriented computing. This work presents a novel method to analyse and substitute Web services. A new concept, called a Service Cluster Net Unit, is proposed based on Web service clusters. A service cluster is converted into a Service Cluster Net Unit. Then it is used to analyse whether the services in the cluster can satisfy some service requests. Meanwhile, the substitution methods of an atomic service and a composite service are proposed. The correctness of the proposed method is proved, and the effectiveness is shown and compared with the state-of-the-art method via an experiment. It can be readily applied to e-commerce service substitution to meet the business automation needs.
River Basin Standards Interoperability Pilot
NASA Astrophysics Data System (ADS)
Pesquer, Lluís; Masó, Joan; Stasch, Christoph
2016-04-01
There are many water information resources and tools in Europe applicable to river basin management, but fragmentation and a lack of coordination between countries still exist. The European Commission and the member states have financed several research and innovation projects in support of the Water Framework Directive. Only a few of them use the recently emerging hydrological standards, such as OGC WaterML 2.0. WaterInnEU is a Horizon 2020 project focused on creating a marketplace to enhance the exploitation of EU-funded ICT models, tools, protocols and policy briefs related to water and to establish suitable conditions for new market opportunities based on these offerings. One of WaterInnEU's main goals is to assess the level of standardization and interoperability of these outcomes as a mechanism to integrate ICT-based tools, incorporate open data platforms and generate a palette of interchangeable components that are able to use the water data emerging from the recently proposed open data sharing processes and data models stimulated by initiatives such as the INSPIRE directive. As part of the standardization and interoperability activities in the project, the authors are designing an experiment (RIBASE, the present work) to demonstrate how current ICT-based tools and water data can work in combination with geospatial web services in the Scheldt river basin. The main structure of this experiment, which is the core of the present work, comprises the following steps:
- extraction of information from river gauge data in OGC WaterML 2.0 format using SOS services (preferably compliant with the OGC SOS 2.0 Hydrology Profile Best Practice);
- flood modelling using a WPS 2.0, with WaterML 2.0 data and weather forecast models as input;
- evaluation of the applicability of sensor notification services in water emergencies;
- open distribution of the input and output data as OGC web services (WaterML, WCS, WFS) with visualization utilities (WMS).
The architecture tests the combination of gauge data in a WPS that is triggered by a meteorological alert. The data is translated into the OGC WaterML 2.0 time series data format and ingested into a SOS 2.0 service. SOS data is visualized in a SOS client that is able to handle time series. The meteorological forecast data, together with the WaterML 2.0 time series and terrain data, is input (under the supervision of an operator manipulating the WPS user interface) to a flood modelling algorithm. The WPS is able to produce flooding datasets in the form of coverages that are offered to clients via a WCS 2.0 service or a WMS 1.3 service, and downloaded and visualized by the respective clients. The WPS triggers a notification or an alert that will be monitored by an emergency control response service. Acronyms: AS: Alert Service; ES: Event Service; ICT: Information and Communication Technology; NS: Notification Service; OGC: Open Geospatial Consortium; RIBASE: River Basin Standards Interoperability Pilot; SOS: Sensor Observation Service; WaterML: Water Markup Language; WCS: Web Coverage Service; WMS: Web Map Service; WPS: Web Processing Service
NASA Astrophysics Data System (ADS)
Wilson, B. D.; Manipon, G.; Xing, Z.
2007-12-01
The General Earth Science Investigation Suite (GENESIS) project is a NASA-sponsored partnership between the Jet Propulsion Laboratory, academia, and NASA data centers to develop a new suite of Web Services tools to facilitate multi-sensor investigations in Earth System Science. The goal of GENESIS is to enable large-scale, multi-instrument atmospheric science using combined datasets from the AIRS, MODIS, MISR, and GPS sensors. Investigations include cross-comparison of spaceborne climate sensors, cloud spectral analysis, study of upper troposphere-stratosphere water transport, study of the aerosol indirect cloud effect, and global climate model validation. The challenges are to bring together very large datasets, reformat and understand the individual instrument retrievals, co-register or re-grid the retrieved physical parameters, perform computationally-intensive data fusion and data mining operations, and accumulate complex statistics over months to years of data. To meet these challenges, we have developed a Grid computing and dataflow framework, named SciFlo, in which we are deploying a set of versatile and reusable operators for data access, subsetting, registration, mining, fusion, compression, and advanced statistical analysis. SciFlo leverages remote Web Services, called via Simple Object Access Protocol (SOAP) or REST (one-line) URLs, and the Grid Computing standards (WS-* & Globus Alliance toolkits), and enables scientists to do multi-instrument Earth Science by assembling reusable Web Services and native executables into a distributed computing flow (tree of operators). The SciFlo client & server engines optimize the execution of such distributed data flows and allow the user to transparently find and use datasets and operators without worrying about the actual location of the Grid resources. In particular, SciFlo exploits the wealth of datasets accessible by OpenGIS Consortium (OGC) Web Mapping Servers & Web Coverage Servers (WMS/WCS), and by Open Data Access Protocol (OpenDAP) servers. SciFlo also publishes its own SOAP services for space/time query and subsetting of Earth Science datasets, and automated access to large datasets via lists of (FTP, HTTP, or DAP) URLs which point to on-line HDF or netCDF files. Typical distributed workflows obtain datasets by calling standard WMS/WCS servers or discovering and fetching data granules from ftp sites; invoke remote analysis operators available as SOAP services (interface described by a WSDL document); and merge results into binary containers (netCDF or HDF files) for further analysis using local executable operators. Naming conventions (HDFEOS and CF-1.0 for netCDF) are exploited to automatically understand and read on-line datasets. More interoperable conventions, and broader adoption of existing conventions, are vital if we are to "scale up" automated choreography of Web Services beyond toy applications. Recently, the ESIP Federation sponsored a collaborative activity in which several ESIP members developed some collaborative science scenarios for atmospheric and aerosol science, and then choreographed services from multiple groups into demonstration workflows using the SciFlo engine and a Business Process Execution Language (BPEL) workflow engine.
We will discuss the lessons learned from this activity, the need for standardized interfaces (like WMS/WCS), the difficulty in agreeing on even simple XML formats and interfaces, the benefits of doing collaborative science analysis at the "touch of a button" once services are connected, and further collaborations that are being pursued.
A Role for Semantic Web Technologies in Patient Record Data Collection
NASA Astrophysics Data System (ADS)
Ogbuji, Chimezie
Business Process Management Systems (BPMS) are a component of the stack of Web standards that comprise Service Oriented Architecture (SOA). Such systems are representative of the architectural framework of modern information systems built in an enterprise intranet and are in contrast to systems built for deployment on the larger World Wide Web. The REST architectural style is an emerging style for building loosely coupled systems based purely on the native HTTP protocol. It is a coordinated set of architectural constraints with a goal to minimize latency, maximize the independence and scalability of distributed components, and facilitate the use of intermediary processors. Within the development community for distributed, Web-based systems, there has been a debate regarding the merits of both approaches. In some cases, there are legitimate concerns about the differences in both architectural styles. In other cases, the contention seems to be based on concerns that are marginal at best. In this chapter, we will attempt to contribute to this debate by focusing on a specific, deployed use case that emphasizes the role of the Semantic Web, a simple Web application architecture that leverages the use of declarative XML processing, and the needs of a workflow system. The use case involves orchestrating a work process associated with the data entry of structured patient record content into a research registry at the Cleveland Clinic's Clinical Investigation department in the Heart and Vascular Institute.
DOPA, a Digital Observatory for Protected Areas including Monitoring and Forecasting Services
NASA Astrophysics Data System (ADS)
Dubois, Gregoire; Hartley, Andrew; Peedell, Stephen; de Jesus, Jorge; Ó Tuama, Éamonn; Cottam, Andrew; May, Ian; Fisher, Ian; Nativi, Stefano; Bertrand, Francis
2010-05-01
The Digital Observatory for Protected Areas (DOPA) is a biodiversity information system currently developed as an interoperable web service at the Joint Research Centre of the European Commission in collaboration with other international organizations, including GBIF, UNEP-WCMC, BirdLife International and the RSPB. DOPA is designed to assess the state of, and pressures on, Protected Areas (PAs) and to prioritize them accordingly, in order to support decision making and fund allocation processes. To become an operational web service allowing the automatic monitoring of protected areas, DOPA needs to be able to capture the dynamics of spatio-temporal changes in habitats and anthropogenic pressure on PAs as well as the changes in species distributions. Because some of the most valuable natural ecosystems and species on the planet cover large areas, which makes field monitoring methods very difficult to use for large-scale assessment, the automatic collection and processing of remote sensing data are at the heart of the problem. To further be able to forecast changes due to climate change, DOPA has to rely on an architecture that enables it to communicate with the appropriate modeling web services. The purpose of this presentation is to present the architecture of DOPA, with special attention to e-Habitat, its web processing service designed for assessing the irreplaceability of habitats as well as for the modeling of habitats under different climate change scenarios. The use of open standards for spatial data and of open source programming languages for the development of the core functionalities of the system is expected to encourage the participation of the scientific community beyond the current partnerships and to favour the sharing of such an observatory, which could be installed at any other location. Acknowledgement: Part of this work is funded under the 7th Framework Programme by the EuroGEOSS (www.eurogeoss.eu) project of the European Commission. The views expressed herein are those of the authors and are not necessarily those of the European Commission. References: Dubois, G., Hartley, A., Nelson, A., Mayaux, P. and Grégoire, J.M. (2009). Towards an interoperable web service for the monitoring of African protected areas. In: Proceedings of the 33rd International Symposium on Remote Sensing of Environment (ISRSE), May 4-8, 2009, Stresa, Italy. Hartley, A., Nelson, A., Mayaux, P. and Grégoire, J.M. (2007). The Assessment of African Protected Areas. Scientific and Technical Reports, Office for Official Publications of the European Communities, Luxembourg, EUR 22780 EN, 70 pp.
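e-Habitat is exposed as an OGC Web Processing Service, so a model run is ultimately just an Execute request. The Java sketch below shows a hypothetical WPS 1.0.0 Execute call using the KVP binding; the endpoint, process identifier, and inputs are invented for illustration and do not reflect e-Habitat's actual signature.

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class HabitatModelRun {
    public static void main(String[] args) throws Exception {
        // Hypothetical e-Habitat-style WPS Execute request (WPS 1.0.0 KVP
        // binding); process identifier and inputs are invented examples.
        String url = "http://example.org/wps"
                + "?service=WPS&version=1.0.0&request=Execute"
                + "&identifier=habitatSimilarity"
                + "&datainputs=protectedArea=PA12345;scenario=A1B;horizon=2050";

        HttpResponse<String> response = HttpClient.newHttpClient().send(
                HttpRequest.newBuilder(URI.create(url)).GET().build(),
                HttpResponse.BodyHandlers.ofString());
        System.out.println(response.body()); // ExecuteResponse XML with status/result
    }
}
```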
A Service Brokering and Recommendation Mechanism for Better Selecting Cloud Services
Gui, Zhipeng; Yang, Chaowei; Xia, Jizhe; Huang, Qunying; Liu, Kai; Li, Zhenlong; Yu, Manzhu; Sun, Min; Zhou, Nanyin; Jin, Baoxuan
2014-01-01
Cloud computing is becoming the new generation computing infrastructure, and many cloud vendors provide different types of cloud services. How to choose the best cloud services for specific applications is very challenging. Addressing this challenge requires balancing multiple factors, such as business demands, technologies, policies and preferences, in addition to the computing requirements. This paper proposes a mechanism for selecting the best public cloud service at the levels of Infrastructure as a Service (IaaS) and Platform as a Service (PaaS). A systematic framework and associated workflow include cloud service filtration, solution generation, evaluation, and selection of public cloud services. Specifically, we propose the following: a hierarchical information model for integrating heterogeneous cloud information from different providers, and a corresponding cloud information collecting mechanism; a cloud service classification model for categorizing and filtering cloud services, and an application requirement schema providing rules for creating application-specific configuration solutions; and a preference-aware solution evaluation model for evaluating and recommending solutions according to the preferences of application providers. To test the proposed framework and methodologies, a cloud service advisory tool prototype was developed, after which relevant experiments were conducted. The results show that the proposed system collects/updates/records cloud information from multiple mainstream public cloud services in real time; generates feasible cloud configuration solutions according to user specifications and acceptable cost prediction; assesses solutions from multiple aspects (e.g., computing capability, potential cost and Service Level Agreement, SLA) and offers rational recommendations based on user preferences and practical cloud provisioning; and visually presents and compares solutions through an interactive web Graphical User Interface (GUI). PMID:25170937
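The preference-aware evaluation step can be pictured as a weighted multi-criteria score. A minimal sketch, assuming criterion values normalized to [0, 1] and user-supplied weights; the criteria names and figures are illustrative, not taken from the paper:

    # Score candidate cloud solutions by user-weighted criteria
    # (higher normalized value = better on that criterion).
    def score(solution, weights):
        return sum(weights[c] * solution[c] for c in weights)

    solutions = {
        "provider_a": {"compute": 0.9, "cost": 0.4, "sla": 0.8},
        "provider_b": {"compute": 0.6, "cost": 0.9, "sla": 0.7},
    }
    # A cost-sensitive application provider weights cost most heavily.
    weights = {"compute": 0.2, "cost": 0.5, "sla": 0.3}

    best = max(solutions, key=lambda s: score(solutions[s], weights))
    print(best, {s: round(score(v, weights), 2) for s, v in solutions.items()})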
User-driven generation of standard data services
NASA Astrophysics Data System (ADS)
Díaz, Laura; Granell, Carlos; Gould, Michael; Huerta, Joaquín.
2010-05-01
Geospatial information systems are experiencing a shift from monolithic to distributed environments (Bernard, 2003). Current research on the discovery and access of geospatial resources in these distributed environments centres on deploying interconnected Spatial Data Infrastructure (SDI) nodes at different scales to build a global spatial information infrastructure (Masser et al., 2008; Rajabifard et al., 2002). One of the challenges in implementing these global and multiscale SDIs is to agree on common standards given the heterogeneity of the various stakeholders (Masser, 2005). In Europe, the European Commission launched the INSPIRE initiative to monitor the development of European SDIs. The INSPIRE Directive addresses the need for web services to discover, view, transform, invoke, and download geospatial resources, enabling various stakeholders to share resources in an interoperable manner (INSPIRE, 2007). Such web services require technical specifications for the interoperability and harmonization of their SDIs (INSPIRE, 2007). Interoperability is further ensured by a number of specification efforts, in the geospatial domain most prominently by ISO/TC 211 and the OpenGIS Consortium (OGC) (Bernard, 2003). Other research challenges for SDIs are, on the one hand, how users in charge of maintaining SDIs can handle their growing complexity and, on the other, the fact that SDI maintenance and evolution should be guided (Béjar et al., 2009). There is therefore a motivation to improve the complex deployment mechanisms in SDIs, since deploying resources and integrating them by means of standard services demands both expertise and time. In this context we present an architecture that follows the INSPIRE technical guidelines and is therefore based on SDI principles. This architecture supports distributed applications and provides components that assist users in deploying and updating SDI resources. Mechanisms and components for the automatic generation and publication of standard geospatial services are therefore proposed. These mechanisms hide the underlying technology and let stakeholders wrap resources as standard services so that they can be shared in a transparent manner. These components are integrated in our architecture within the Service Framework node (module). [Figure 1. Architecture components diagram] Figure 1 shows the components of the architecture: The Application Node provides the entry point for users to run distributed applications; this software component holds the user interface and the application logic. The Service Connector component provides the ability to connect to the services available in the middleware layer of the SDI and acts as a socket to OGC Web Services; for instance, the WMS component implements the OGC WMS specification, the standard recommended by the INSPIRE implementing rules for the View Service type. The Service Framework node contains several components; its main functionality is to assist users in wrapping and sharing geospatial resources, and it implements the proposed mechanisms to improve the availability and visibility of geospatial resources. The main components of this framework are the Data Wrapper, the Process Wrapper and the Service Publisher. The Data Wrapper and Process Wrapper components guide users in wrapping data and tools as standard services in accordance with the INSPIRE implementing rules (availability).
The Service Publisher component aims at creating service metadata and publishing them in catalogues (visibility). Roughly speaking, all of these components act as a service generator and publisher, i.e., they take a resource (data or process) and return an INSPIRE service that is then published in catalogue services. References: Béjar, R., Latre, M.Á., Nogueras-Iso, J., Muro-Medrano, P.R. and Zarazaga-Soria, F.J. (2009). International Journal of Geographical Information Science, 23(3), 271-294. Bernard, L., Einspanier, U., Lutz, M. and Portele, C. (2003). Interoperability in GI Service Chains: The Way Forward. In: M. Gould, R. Laurini and S. Coulondre (Eds.), 6th AGILE Conference on Geographic Information Science, Lyon, 179-188. INSPIRE (2007). Directive 2007/2/EC of the European Parliament and of the Council of 14 March 2007 establishing an Infrastructure for Spatial Information in the European Community. Masser, I. (2005). GIS Worlds: Creating Spatial Data Infrastructures. ESRI Press, Redlands, California. Masser, I., Rajabifard, A. and Williamson, I. (2008). Spatially enabling governments through SDI implementation. International Journal of Geographical Information Science, 22(1), 5-20. Rajabifard, A., Feeney, M.-E.F. and Williamson, I.P. (2002). Future directions for SDI development. International Journal of Applied Earth Observation and Geoinformation, 4, 11-22.
A Walk through TRIDEC's intermediate Tsunami Early Warning System
NASA Astrophysics Data System (ADS)
Hammitzsch, M.; Reißland, S.; Lendholt, M.
2012-04-01
The management of natural crises is an important application field of the technology developed in the project Collaborative, Complex, and Critical Decision-Support in Evolving Crises (TRIDEC), co-funded by the European Commission in its Seventh Framework Programme. TRIDEC builds on the development of the German Indonesian Tsunami Early Warning System (GITEWS) and the Distant Early Warning System (DEWS), which provide a service platform for both sensor integration and warning dissemination. In TRIDEC, new developments in Information and Communication Technology (ICT) are used to extend the existing platform, realising a component-based technology framework for building distributed tsunami warning systems for deployment, e.g. in the North-eastern Atlantic, the Mediterranean and Connected Seas (NEAM) region. The TRIDEC system will be implemented in three phases, each with a demonstrator. The successive demonstrators address challenges such as the design and implementation of a robust and scalable service infrastructure supporting the integration and utilisation of existing resources with accelerated generation of large volumes of data. These include sensor systems, geo-information repositories, simulation tools and data fusion tools. In addition to conventional sensors, unconventional sensors and sensor networks also play an important role in TRIDEC. The system version presented is based on service-oriented architecture (SOA) concepts and on relevant standards of the Open Geospatial Consortium (OGC), the World Wide Web Consortium (W3C) and the Organization for the Advancement of Structured Information Standards (OASIS). In this way the system continuously gathers, processes and displays events and data coming from open sensor platforms, enabling operators to decide quickly whether an early warning is necessary and to send personalized warning messages to the authorities and the population at large through a wide range of communication channels. The system integrates OGC Sensor Web Enablement (SWE) compliant sensor systems for the rapid detection of hazardous events such as earthquakes, sea level anomalies, ocean floor occurrences, and ground displacements. Spatial data are utilized via the OGC Web Map Service (WMS) and Web Feature Service (WFS) to depict the situation picture. The integration of a simulation system to identify affected areas is considered using the OGC Web Processing Service (WPS). Warning messages are compiled and transmitted in the OASIS Common Alerting Protocol (CAP), together with addressing information defined via the OASIS Emergency Data Exchange Language - Distribution Element (EDXL-DE). The first system demonstrator has been designed and implemented to support plausible scenarios demonstrating the treatment of simulated tsunami threats with an essential subset of a National Tsunami Warning Centre (NTWC). The feasibility and the potential of the implemented approach are demonstrated, covering standard operations as well as tsunami detection and alerting functions. The demonstrator presented addresses information management and decision-support processes in a hypothetical natural crisis situation caused by a tsunami in the Eastern Mediterranean. Development of the system is based to the largest extent on free and open source software (FOSS) components and industry standards. Emphasis has been, and will continue to be, placed on leveraging open source technologies that support mature system architecture models wherever appropriate.
All open source software produced is foreseen to be published on a publicly available software repository thus allowing others to reuse results achieved and enabling further development and collaboration with a wide community including scientists, developers, users and stakeholders. This live demonstration is linked with the talk "TRIDEC Natural Crisis Management Demonstrator for Tsunamis" (EGU2012-7275) given in the session "Architecture of Future Tsunami Warning Systems" (NH5.7/ESSI1.7).
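The CAP-based dissemination mentioned in the TRIDEC entry can be sketched with Python's standard library. This builds only a minimal CAP 1.2 skeleton; the identifier, sender and values are invented, and a real operational alert carries many more elements:

    import xml.etree.ElementTree as ET

    # Minimal CAP 1.2 alert skeleton (illustrative values only).
    NS = "{urn:oasis:names:tc:emergency:cap:1.2}"
    alert = ET.Element(NS + "alert")
    for tag, text in [("identifier", "TRIDEC-DEMO-0001"),   # invented
                      ("sender", "ntwc@example.org"),        # invented
                      ("sent", "2012-04-01T12:00:00+00:00"),
                      ("status", "Exercise"),  # simulated threat, not actual
                      ("msgType", "Alert"),
                      ("scope", "Public")]:
        ET.SubElement(alert, NS + tag).text = text

    info = ET.SubElement(alert, NS + "info")
    for tag, text in [("category", "Geo"), ("event", "Tsunami"),
                      ("urgency", "Immediate"), ("severity", "Extreme"),
                      ("certainty", "Observed")]:
        ET.SubElement(info, NS + tag).text = text

    print(ET.tostring(alert, encoding="unicode"))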
Kawamoto, Kensaku; Lobach, David F
2005-01-01
Despite their demonstrated ability to improve care quality, clinical decision support systems are not widely used. In part, this limited use is due to the difficulty of sharing medical knowledge in a machine-executable format. To address this problem, we developed a decision support Web service known as SEBASTIAN. In SEBASTIAN, individual knowledge modules define the data requirements for assessing a patient, the conclusions that can be drawn using that data, and instructions on how to generate those conclusions. Using standards-based XML messages transmitted over HTTP, client decision support applications provide patient data to SEBASTIAN and receive patient-specific assessments and recommendations. SEBASTIAN has been used to implement four distinct decision support systems; an architectural overview is provided for one of these systems. Preliminary assessments indicate that SEBASTIAN fulfills all original design objectives, including the re-use of executable medical knowledge across diverse applications and care settings, the straightforward authoring of knowledge modules, and use of the framework to implement decision support applications with significant clinical utility.
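The interaction pattern described, standards-based XML over HTTP, can be sketched as a simple client call. The URL, element names, and payload below are hypothetical placeholders, not SEBASTIAN's actual message schema:

    import requests

    # Hypothetical patient-data payload for one knowledge module.
    payload = """<?xml version="1.0"?>
    <dssRequest module="diabetes-screening">
      <patient id="12345">
        <observation code="LOINC:4548-4" value="8.1"/>
      </patient>
    </dssRequest>"""

    resp = requests.post("https://dss.example.org/evaluate",  # placeholder
                         data=payload,
                         headers={"Content-Type": "application/xml"})
    # The service would reply with patient-specific assessments and
    # recommendations derived from the matching knowledge modules.
    print(resp.status_code)
    print(resp.text)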
Web Services as Public Services: Are We Supporting Our Busiest Service Point?
ERIC Educational Resources Information Center
Riley-Huff, Debra A.
2009-01-01
This article is an analysis of academic library organizational culture, patterns, and processes as they relate to Web services. Data gathered in a research survey is examined in an attempt to reveal current departmental and administrative attitudes, practices, and support for Web services in the library research environment. (Contains 10 tables.)
Sward, Katherine A; Newth, Christopher JL; Khemani, Robinder G; Cryer, Martin E; Thelen, Julie L; Enriquez, Rene; Shaoyu, Su; Pollack, Murray M; Harrison, Rick E; Meert, Kathleen L; Berg, Robert A; Wessel, David L; Shanley, Thomas P; Dalton, Heidi; Carcillo, Joseph; Jenkins, Tammara L; Dean, J Michael
2015-01-01
Objectives: To examine the feasibility of deploying a virtual web service for sharing data within a research network, and to evaluate the impact on data consistency and quality. Material and Methods: Virtual machines (VMs) encapsulated an open-source, semantically and syntactically interoperable secure web service infrastructure along with a shadow database. The VMs were deployed to 8 Collaborative Pediatric Critical Care Research Network Clinical Centers. Results: Virtual web services could be deployed in hours. The interoperability of the web services reduced format misalignment from 56% to 1% and demonstrated that 99% of the data transferred consistently using the data dictionary, while 1% needed human curation. Conclusions: Use of virtualized open-source secure web service technology could enable direct electronic abstraction of data from hospital databases for research purposes. PMID:25796596
An Architecture for Autonomic Web Service Process Planning
NASA Astrophysics Data System (ADS)
Moore, Colm; Xue Wang, Ming; Pahl, Claus
Web service composition is a technology that has received considerable attention in recent years. Languages and tools that aid in the process of creating composite Web services have received particular attention. Web service composition is the process of linking single Web services together in order to accomplish more complex tasks. One area of Web service composition that has not received as much attention is dynamic error handling and re-planning, which enables autonomic composition. Given a repository of service descriptions and a task to complete, it is possible for AI planners to automatically create a plan that will achieve this goal. If, however, a service in the plan is unavailable or erroneous, the plan will fail. Motivated by this problem, this paper suggests autonomous re-planning as a means to overcome dynamic problems. Our solution involves automatically recovering from faults and creating a context-dependent alternate plan. We present an architecture that serves as a basis for the central activities of autonomous composition, monitoring and fault handling.
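The re-planning idea reduces to "try the planned service, and on failure re-plan from the remaining candidates". A toy illustration; the registry, task names, and failure model are invented for the example:

    # Toy autonomic execution loop: invoke each task's planned service and
    # re-plan with an alternative candidate when an invocation fails.
    registry = {  # task -> candidate services, best-ranked first (invented)
        "geocode": ["geo_primary", "geo_backup"],
        "route":   ["route_a", "route_b", "route_c"],
    }

    def invoke(service):
        # Placeholder for a real Web service call; raises on failure.
        if service.endswith("_primary"):
            raise RuntimeError(f"{service} unavailable")
        return f"result from {service}"

    def execute_with_replanning(plan):
        results = {}
        for task in plan:
            for candidate in registry[task]:
                try:
                    results[task] = invoke(candidate)
                    break              # task done, move to the next one
                except RuntimeError:
                    continue           # fault: re-plan with next candidate
            else:
                raise RuntimeError(f"no viable service for task {task!r}")
        return results

    print(execute_with_replanning(["geocode", "route"]))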
Research of marine sensor web based on SOA and EDA
NASA Astrophysics Data System (ADS)
Jiang, Yongguo; Dou, Jinfeng; Guo, Zhongwen; Hu, Keyong
2015-04-01
A great deal of ocean sensor observation data exists, across a wide range of marine disciplines, derived from in situ and remote observing platforms in real-time, near-real-time and delayed mode. Ocean monitoring is routinely carried out using sensors and instruments. Standardization is the key requirement for exchanging information about ocean sensors and sensor data and for comparing and combining information from different sensor networks. One or more sensors are often physically integrated into a single ocean 'instrument' device, which brings many challenges related to diverse sensor data formats, parameter units, different spatiotemporal resolutions, application domains, data quality and sensor protocols. Facing these challenges requires standardization efforts aimed at facilitating the so-called Sensor Web, which makes it easy to provide public access to sensor data and metadata. In this paper, a Marine Sensor Web based on SOA and EDA, integrating the MBARI PUCK protocol, IEEE 1451 and OGC SWE 2.0, is illustrated with a five-layer architecture. The Web Service layer and Event Process layer are described in detail with a concrete example. The demo study demonstrates that a standards-based system can be built to access sensors and marine instruments distributed globally from common Web browsers, for monitoring the environment and ocean conditions. Besides serving marine sensor data on the Web, this Marine Sensor Web framework can also play an important role in information integration in many other domains.
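Within OGC SWE, observation access is typically exposed through the Sensor Observation Service (SOS). A minimal sketch of a SOS 2.0 KVP GetObservation request in Python; the endpoint, offering, and property names are hypothetical:

    import requests

    params = {
        "service": "SOS",
        "version": "2.0.0",
        "request": "GetObservation",
        "offering": "sea_temperature",            # hypothetical offering
        "observedProperty": "WaterTemperature",   # hypothetical property
        "responseFormat": "http://www.opengis.net/om/2.0",
    }
    resp = requests.get("https://sensors.example.org/sos", params=params)
    print(resp.status_code)
    print(resp.text[:500])  # O&M-encoded observations (XML)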
CoP Sensing Framework on Web-Based Environment
NASA Astrophysics Data System (ADS)
Mustapha, S. M. F. D. Syed
Web technologies and Web applications have shown similarly high growth rates in daily usage and user acceptance. Web applications have not only penetrated traditional domains such as education and business but have also reached into areas such as politics, social life, lifestyle, and culture. The emergence of Web technologies has enabled Web access even for people on the move, through PDAs or mobile phones connected using Wi-Fi, HSDPA, or other communication protocols. These two phenomena drive the need to build Web-based systems as supporting tools for many everyday activities. One focus of research in doing so has been the implementation challenges of building Web-based support systems in different types of environment. This chapter describes the implementation issues in building a community learning framework that can be supported on a Web-based platform. The Community of Practice (CoP) has been chosen as the community learning theory for the case study and analysis, as it challenges the creativity of the architectural design of the Web system in capturing the presence of learning activities. The chapter details the characteristics of the CoP, to understand the inherent intricacies of modeling it in a Web-based environment; the evidence of CoP activity that needs to be traced automatically, in a manner that keeps the evidence-capturing process unobtrusive; and the technologies needed to embrace full adoption of a Web-based support system for the community learning framework.
Automatic information timeliness assessment of diabetes web sites by evidence based medicine.
Sağlam, Rahime Belen; Taşkaya Temizel, Tuğba
2014-11-01
Studies in the health domain have shown that health websites provide imperfect information and give recommendations that are not up to date with the recent literature, even when their last-modified dates are quite recent. In this paper, we propose a framework that assesses the timeliness of the content of health websites automatically against evidence based medicine. Our aim is to assess the accordance of website content with the current literature and its timeliness, disregarding the update time stated on the websites. The proposed method is based on automatic term recognition, relevance feedback and information retrieval techniques to generate time-aware structured queries. We tested the framework on diabetes health websites archived between 2006 and 2013 by Archive-It, using the American Diabetes Association's (ADA) guidelines. The results showed that the proposed framework achieves 65% and 77% accuracy in detecting the timeliness of web content according to years and pre-determined time intervals, respectively. Information seekers and website owners may benefit from the proposed framework in finding relevant and up-to-date diabetes websites.
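The core idea, scoring a page's content against time-stamped guideline terms rather than trusting its last-modified date, can be caricatured in a few lines. The guideline terms here are invented; the real framework derives them with automatic term recognition and relevance feedback:

    # Toy timeliness check: compare page text against terms associated with
    # successive guideline years; the best-matching year estimates content age.
    guideline_terms = {  # invented examples, not actual ADA guideline content
        2006: {"sliding scale insulin", "ldl target 100"},
        2013: {"individualized a1c target", "dpp-4 inhibitor"},
    }

    def estimate_year(page_text):
        text = page_text.lower()
        scores = {year: sum(term in text for term in terms)
                  for year, terms in guideline_terms.items()}
        return max(scores, key=scores.get)

    page = "Treatment should use an individualized A1C target ..."
    print(estimate_year(page))  # -> 2013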
ABrowse--a customizable next-generation genome browser framework.
Kong, Lei; Wang, Jun; Zhao, Shuqi; Gu, Xiaocheng; Luo, Jingchu; Gao, Ge
2012-01-05
With the rapid growth of genome sequencing projects, the genome browser is becoming indispensable, not only as a visualization system but also as an interactive platform to support open data access and collaborative work. Thus a customizable genome browser framework with rich functions and flexible configuration is needed to facilitate various genome research projects. Based on next-generation web technologies, we have developed a general-purpose genome browser framework, ABrowse, which provides an interactive browsing experience, open data access and collaborative work support. By supporting Google-Maps-like smooth navigation, ABrowse offers end users a highly interactive browsing experience. To facilitate further data analysis, multiple data access approaches are supported for external platforms to retrieve data from ABrowse. To promote collaborative work, an online user space is provided for end users to create, store and share comments, annotations and landmarks. For data providers, ABrowse is highly customizable and configurable: the framework provides a set of utilities for importing annotation data conveniently; to build ABrowse on existing annotation databases, data providers can specify SQL statements according to the database schema; and customized pages for the detailed display of annotation entries can easily be plugged in. For developers, new drawing strategies can be integrated into ABrowse for new types of annotation data. In addition, a standard web service is provided for remote data retrieval, offering an underlying machine-oriented programming interface for open data access. ABrowse is valuable for end users, data providers and developers by providing rich user functions and flexible customization approaches. The source code is published under the GNU Lesser General Public License v3.0 and is accessible at http://www.abrowse.org/. To demonstrate all the features of ABrowse, a live demo for the Arabidopsis thaliana genome has been built at http://arabidopsis.cbi.edu.cn/.
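The data-provider configuration idea, mapping an existing annotation database to a browser track via a provider-supplied SQL statement, might look roughly like this. Table, column, and renderer names are hypothetical; ABrowse's actual configuration syntax is not shown in the abstract:

    # Hypothetical track configuration: a data provider maps an existing
    # annotation database to a browser track via an SQL statement.
    track_config = {
        "track": "genes",
        "renderer": "gene-model",   # drawing strategy for this data type
        "sql": ("SELECT name, chrom, start, end, strand "
                "FROM gene_annotation "
                "WHERE chrom = %(chrom)s AND end >= %(start)s "
                "AND start <= %(end)s"),
    }

    def fetch_features(run_query, region):
        # The framework would run the provider's SQL for the visible region
        # and hand the resulting rows to the configured renderer.
        return run_query(track_config["sql"], region)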
Domain-specific Web Service Discovery with Service Class Descriptions
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rocco, D; Caverlee, J; Liu, L
2005-02-14
This paper presents DynaBot, a domain-specific web service discovery system. The core idea of the DynaBot service discovery system is to use domain-specific service class descriptions powered by an intelligent Deep Web crawler. In contrast to current registry-based service discovery systems--like the several available UDDI registries--DynaBot promotes focused crawling of the Deep Web of services and discovers candidate services that are relevant to the domain of interest. It uses intelligent filtering algorithms to match services found by focused crawling with the domain-specific service class descriptions. We demonstrate the capability of DynaBot through the BLAST service discovery scenario and describe our initial experience with DynaBot.
NASA Astrophysics Data System (ADS)
Argenti, M.; Giannini, V.; Averty, R.; Bigagli, L.; Dumoulin, J.
2012-04-01
The EC FP7 ISTIMES project has the goal of realizing an ICT-based system exploiting distributed and local sensors for non-destructive electromagnetic monitoring, in order to make critical transport infrastructures more reliable and safe. Higher situation awareness, thanks to real-time, detailed information and images of the monitored infrastructure status, improves the decision capabilities of emergency management stakeholders. Web-enabled sensors and a service-oriented approach form the core of the architecture, providing a system that adopts open standards (e.g. OGC SWE, OGC CSW) and strives for full interoperability with other GMES and European Spatial Data Infrastructure initiatives as well as compliance with INSPIRE. The system exploits an open, easily scalable network architecture to accommodate a wide range of sensors, integrated with a set of tools for handling, analyzing and processing large data volumes from different organizations with different data models. Situation awareness tools are also integrated in the system. The definition of sensor observations and services follows a metadata model based on the ISO 19115 core set of metadata elements and the O&M model of OGC SWE. The ISTIMES infrastructure is based on an e-Infrastructure for geospatial data sharing, with a Data Catalog that implements the discovery services for sensor data retrieval, acting as a broker through static connections based on standard SOS and WNS interfaces; a Decision Support component which helps decision makers by providing support for data fusion and inference and the generation of situation indexes; a Presentation component which implements system-user interaction services for information publication and rendering, by means of a Web portal using SOA design principles; and a security framework using Shibboleth open source middleware based on the Security Assertion Markup Language, supporting Single Sign-On (SSO). ACKNOWLEDGEMENT - The research leading to these results has received funding from the European Community's Seventh Framework Programme (FP7/2007-2013) under Grant Agreement n° 225663
NASA Astrophysics Data System (ADS)
Poussot-Vassal, Charles; Tanelli, Mara; Lovera, Marco
The complexity of Information Technology (IT) systems is steadily increasing, and system complexity has been recognised as the main obstacle to further advancements of IT. This fact has recently raised energy management issues. Control techniques have been proposed and successfully applied to design Autonomic Computing systems, trading off system performance against energy saving goals. As user behaviour is highly time-varying and workload conditions can change substantially within the same business day, the Linear Parametrically Varying (LPV) framework is particularly promising for modeling such systems. In this chapter, a control-theoretic method is proposed to investigate the trade-off between Quality of Service (QoS) requirements and energy saving objectives in the case of admission control in Web service systems, considering as control variables the server CPU frequency and the admission probability. To quantitatively evaluate the trade-off, a dynamic model of the admission control dynamics is estimated via LPV identification techniques. Based on this model, an optimisation problem is set up within the Model Predictive Control (MPC) framework, by means of which it is possible to investigate the optimal trade-off policy for managing QoS and energy saving objectives at design time, while taking explicit account of the system dynamics.
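The trade-off can be written schematically as a finite-horizon MPC cost. This is a hedged sketch in standard notation, not the chapter's exact formulation: with $q_k$ the measured QoS, $q^{\mathrm{ref}}$ its target, $f_k$ the CPU frequency, $p_k$ the admission probability and $E(\cdot)$ an assumed power model,

$$\min_{\{f_k,\,p_k\}}\ \sum_{k=0}^{N-1}\Big(\|q_k-q^{\mathrm{ref}}\|_Q^2+\lambda\,E(f_k)\Big)\quad\text{s.t.}\quad x_{k+1}=A(\theta_k)\,x_k+B(\theta_k)\,u_k,\qquad u_k=[f_k\ \ p_k]^\top\in\mathcal{U},$$

where $\theta_k$ is the measured scheduling parameter of the identified LPV model and $\lambda\ge 0$ sets the relative weight of energy saving against QoS tracking.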
Available, intuitive and free! Building e-learning modules using web 2.0 services.
Tam, Chun Wah Michael; Eastwood, Anne
2012-01-01
E-learning is part of the mainstream in medical education and often provides the most efficient and effective means of engaging learners in a particular topic. However, translating design and content ideas into a useable product can be technically challenging, especially in the absence of information technology (IT) support. There is little published literature on the use of web 2.0 services to build e-learning activities. This article describes the web 2.0 tools and solutions employed to build the GP Synergy evidence-based medicine and critical appraisal online course. We used and integrated a number of free web 2.0 services, including: Prezi, a web-based presentation platform; YouTube, a video sharing service; Google Docs, an online document platform; Tiny.cc, a URL shortening service; and Wordpress, a blogging platform. The course, consisting of five multimedia-rich, tutorial-like modules, was built without IT specialist assistance or specialised software, and the web 2.0 services used were free. The course can be accessed with a modern web browser. Modern web 2.0 services remove many of the technical barriers to creating and sharing content on the internet. When used synergistically, these services can be a flexible and low-cost platform for building e-learning activities, and they were a pragmatic solution in our context.
Federal Register 2010, 2011, 2012, 2013, 2014
2011-05-17
...; Comment Request; NCI Cancer Genetics Services Directory Web-Based Application Form and Update Mailer... currently valid OMB control number. Proposed Collection: Title: NCI Cancer Genetics Services Directory Web... application form and the Web-based update mailer is to collect information about genetics professionals to be...
EUMIS - an open portal framework for interoperable marine environmental services
NASA Astrophysics Data System (ADS)
Hamre, T.; Sandven, S.; Leadbetter, A.; Gouriou, V.; Dunne, D.; Grant, M.; Treguer, M.; Torget, Ø.
2012-04-01
NETMAR (Open service network for marine environmental data) is an FP7 project that aims to develop a pilot European Marine Information System (EUMIS) for searching, downloading and integrating satellite, in situ and model data from ocean and coastal areas. EUMIS will use a semantic framework coupled with ontologies for identifying and accessing distributed data, such as near-real-time, forecast and historical data. Four pilots have been defined to clarify the needs for satellite, in situ and model based products and services in selected user communities:
Pilot 1: Arctic Sea Ice Monitoring and Forecasting
Pilot 2: Oil spill drift forecast and shoreline cleanup assessment services in France
Pilot 3: Ocean colour - Marine Ecosystem, Research and Monitoring
Pilot 4: International Coastal Atlas Network (ICAN) for coastal zone management
NETMAR is developing a set of data delivery services for the targeted user communities by means of standard web-GIS and OPeNDAP protocols. Processing services and adaptive service chaining services will also be developed, to enable users to generate new products suited to their needs. Both data retrieved from online repositories and products generated dynamically can be accessed and visualised in the EUMIS portal. For this purpose, a GIS Viewer, a Service Chaining Editor and an Ontology Browser/Discovery Client have been developed and integrated in EUMIS. The EUMIS portal is built on a portal framework compliant with the JSR-168 (Java Portlet Specification 1.0) and JSR-286 (Java Portlet Specification 2.0) standards. These standards define the interface (contract) and lifecycle management for a portal system component, a portlet, which can be implemented in a number of programming languages, not only Java. The GIS Viewer is developed using a combination of Java, JavaScript and JSF (e.g. MapFaces). The Service Chaining Editor is implemented in JavaScript (using libraries such as jQuery and WireIt), and the Ontology Browser/Discovery Client by means of Adobe Flex. In addition to the portlets developed in the project, we have also used several of the pre-built portlets that come with the Liferay Community Edition portal framework, notably the wiki, forum and RSS feed portlets. The presentation will focus on the developed system components and show some examples of products and services from the defined pilots.
Subotic-Kerry, Mirjana; King, Catherine; O'Moore, Kathleen; Achilles, Melinda; O'Dea, Bridianne
2018-03-23
Anxiety disorders and depression are prevalent among youth. General practitioners (GPs) are often the first point of professional contact for treating health problems in young people. A Web-based mental health service delivered in partnership with schools may facilitate increased access to psychological care among adolescents. However, for such a model to be implemented successfully, GPs' views need to be measured. This study aimed to examine the needs and attitudes of GPs toward a Web-based mental health service for adolescents, and to identify the factors that may affect the provision of this type of service and the likelihood of integration. Findings will inform the content and overall service design. GPs were interviewed individually about the proposed Web-based service. Qualitative analysis of transcripts was performed using thematic coding. A short follow-up questionnaire was delivered to assess background characteristics, level of acceptability, and likelihood of integration of the Web-based mental health service. A total of 13 GPs participated in the interviews and 11 completed the follow-up online questionnaire. Findings suggest strong support for the proposed Web-based mental health service. A wide range of factors were found to influence the likelihood of GPs integrating a Web-based service into their clinical practice. Coordinated collaboration with parents, students, school counselors, and other mental health care professionals was considered important by nearly all GPs. Confidence in Web-based care, noncompliance by adolescents and GPs, accessibility, privacy, and confidentiality were identified as potential barriers to adopting the proposed Web-based service. GPs were open to the proposed Web-based service for the monitoring and management of anxiety and depression in adolescents, provided that a collaborative approach to care is used, the feedback regarding the client is clear, and privacy and security provisions are assured.
Lobban, Fiona; Appleton, Victoria; Appelbe, Duncan; Barraclough, Johanna; Bowland, Julie; Fisher, Naomi R; Foster, Sheena; Johnson, Sonia; Lewis, Elizabeth; Mateus, Céu; Mezes, Barbara; Murray, Elizabeth; O'Hanlon, Puffin; Pinfold, Vanessa; Rycroft-Malone, Jo; Siddle, Ron; Smith, Jo; Sutton, Chris J; Walker, Andrew; Jones, Steven H
2017-12-28
Web-based interventions to support people in managing long-term health conditions are available and effective, but rarely used in clinical services. The aim of this study is to identify critical factors affecting the implementation of an online supported self-management intervention, for relatives of people with recent-onset psychosis or bipolar disorder, into routine clinical care, and to use this information to develop an implementation plan to facilitate widespread use and inform the wider implementation of digital health interventions. A multiple case study design within six early intervention in psychosis (EIP) services in England will be used to test and refine theory-driven hypotheses about factors affecting the implementation of the Relatives' Education And Coping Toolkit (REACT). Qualitative data (behavioural observation, document analysis, and in-depth interviews) collected in the first two EIP services (wave 1) and analysed using framework analysis, combined with quantitative data describing levels of use by staff and relatives and the impact on relatives' distress and wellbeing, will be used to identify factors affecting implementation. Consultation via stakeholder workshops with staff and relatives, co-facilitated by relatives in the research team, will inform the development of an implementation plan to address these factors, which will be evaluated and refined in the four subsequent EIP services in waves 2 and 3. Transferability of the implementation plan to non-participating services will be explored. Observation of implementation in a real-world clinical setting, across carefully sampled services, in real time provides a unique opportunity to understand factors affecting implementation that are likely to be generalizable to other web-based interventions, as well as informing further development of implementation theories. However, there are inherent challenges in investigating implementation without influencing the process under observation. We outline our strategies to ensure our design is transparent, flexible, and responsive to the timescales and activities happening within each service while also meeting the aims of the project. ISRCTN 16267685 (09/03/2016).
Virtual reality for spherical images
NASA Astrophysics Data System (ADS)
Pilarczyk, Rafal; Skarbek, Władysław
2017-08-01
This paper presents a virtual reality application framework and an application concept for mobile devices. The framework uses the Google Cardboard library for the Android operating system and allows the creation of a 360-degree virtual reality video player using standard OpenGL ES rendering methods. It provides network methods to connect to a web server acting as the application's resource provider; resources are delivered as JSON responses to HTTP requests. The web server also uses the Socket.IO library for synchronous communication between the application and the server. The framework implements methods to create an event-driven process that renders additional content based on the video timestamp and the virtual reality head point of view.
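The resource protocol described, JSON over HTTP plus a Socket.IO channel, can be sketched client-side. The app itself is Android/Java; this Python sketch only illustrates the wire protocol, with an invented endpoint and event name (requires the requests and python-socketio packages):

    import requests
    import socketio  # pip install "python-socketio[client]"

    BASE = "https://vr.example.org"   # invented resource server

    # Fetch the playlist of 360-degree video resources as JSON over HTTP.
    playlist = requests.get(f"{BASE}/api/videos").json()
    print(playlist)

    # Socket.IO channel for events synchronized with the video timestamp.
    sio = socketio.Client()

    @sio.on("overlay")                # invented event name
    def on_overlay(data):
        print("render overlay at", data.get("timestamp"), data)

    sio.connect(BASE)
    sio.wait()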
Multidisciplinary eHealth Survey Evaluation Methods
ERIC Educational Resources Information Center
Karras, Bryant T.; Tufano, James T.
2006-01-01
This paper describes the development process of an evaluation framework for describing and comparing web survey tools. We believe that this approach will help shape the design, development, deployment, and evaluation of population-based health interventions. A conceptual framework for describing and evaluating web survey systems will enable the…
Marenco, Luis; Ascoli, Giorgio A; Martone, Maryann E; Shepherd, Gordon M; Miller, Perry L
2008-09-01
This paper describes the NIF LinkOut Broker (NLB) that has been built as part of the Neuroscience Information Framework (NIF) project. The NLB is designed to coordinate the assembly of links to neuroscience information items (e.g., experimental data, knowledge bases, and software tools) that are (1) accessible via the Web, and (2) related to entries in the National Center for Biotechnology Information's (NCBI's) Entrez system. The NLB collects these links from each resource and passes them to the NCBI which incorporates them into its Entrez LinkOut service. In this way, an Entrez user looking at a specific Entrez entry can LinkOut directly to related neuroscience information. The information stored in the NLB can also be utilized in other ways. A second approach, which is operational on a pilot basis, is for the NLB Web server to create dynamically its own Web page of LinkOut links for each NCBI identifier in the NLB database. This approach can allow other resources (in addition to the NCBI Entrez) to LinkOut to related neuroscience information. The paper describes the current NLB system and discusses certain design issues that arose during its implementation.
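The second delivery mode, in which the NLB server dynamically generates a page of LinkOut links per NCBI identifier, might be sketched as a tiny web handler. The route, the link table, and its contents are hypothetical stand-ins (requires Flask):

    from flask import Flask

    app = Flask(__name__)

    # Hypothetical in-memory stand-in for the NLB link database.
    NLB_LINKS = {
        "12345": [("NeuroMorpho", "https://neuromorpho.example.org/item/7"),
                  ("ModelDB", "https://modeldb.example.org/entry/42")],
    }

    @app.route("/linkout/<identifier>")
    def linkout_page(identifier):
        # Render a simple HTML list of LinkOut targets for this identifier.
        links = NLB_LINKS.get(identifier, [])
        items = "".join(f'<li><a href="{url}">{name}</a></li>'
                        for name, url in links)
        return f"<html><body><ul>{items}</ul></body></html>"

    # app.run() would serve the dynamically generated LinkOut pages.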
Federal Register 2010, 2011, 2012, 2013, 2014
2011-03-15
... Request; NCI Cancer Genetics Services Directory Web-Based Application Form and Update Mailer Summary: In... Cancer Genetics Services Directory Web-based Application Form and Update Mailer.
Building asynchronous geospatial processing workflows with web services
NASA Astrophysics Data System (ADS)
Zhao, Peisheng; Di, Liping; Yu, Genong
2012-02-01
Geoscience research and applications often involve a geospatial processing workflow. This workflow includes a sequence of operations that use a variety of tools to collect, translate, and analyze distributed heterogeneous geospatial data. Asynchronous mechanisms, by which clients initiate a request and then resume their processing without waiting for a response, are very useful for complicated workflows that take a long time to run. Geospatial contents and capabilities are increasingly becoming available online as interoperable Web services. This online availability significantly enhances the ability to use Web service chains to build distributed geospatial processing workflows. This paper focuses on how to orchestrate Web services for implementing asynchronous geospatial processing workflows. The theoretical bases for asynchronous Web services and workflows, including asynchrony patterns and message transmission, are examined to explore different asynchronous approaches to and architecture of workflow code for the support of asynchronous behavior. A sample geospatial processing workflow, issued by the Open Geospatial Consortium (OGC) Web Service, Phase 6 (OWS-6), is provided to illustrate the implementation of asynchronous geospatial processing workflows and the challenges in using Web Services Business Process Execution Language (WS-BPEL) to develop them.
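A common asynchrony pattern in such workflows is "submit, receive a job reference, poll for status, then fetch the result". A generic sketch in Python with invented endpoints and response fields; WS-BPEL engines realise the same behaviour with correlation sets and callbacks:

    import time
    import requests

    BASE = "https://wps.example.org"   # invented processing service

    # 1. Submit the request; the service replies immediately with a job id.
    job = requests.post(f"{BASE}/jobs", json={"process": "reproject",
                                              "input": "scene-123"}).json()

    # 2. The client resumes its own work and polls for status periodically.
    while True:
        status = requests.get(f"{BASE}/jobs/{job['id']}").json()
        if status["state"] in ("succeeded", "failed"):
            break
        time.sleep(5)                  # do other work between polls

    # 3. Fetch the result only once the job has completed.
    if status["state"] == "succeeded":
        result = requests.get(f"{BASE}/jobs/{job['id']}/result")
        print(len(result.content), "bytes received")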
NASA Astrophysics Data System (ADS)
Khazaeli, S.; Ravandi, A. G.; Banerji, S.; Bagchi, A.
2016-04-01
Recently, data-driven models for Structural Health Monitoring (SHM) have been of great interest among many researchers. In data-driven models, the sensed data are processed to determine structural performance and to evaluate damage in an instrumented structure without requiring a mathematical model of the structure. A framework of data-driven models for online assessment of the condition of a structure has been developed here. The developed framework is intended for automated evaluation of the monitoring data and structural performance using Internet technology and resources. The main challenges in developing such a framework include: (a) utilizing the sensor measurements to estimate and localize the induced damage in a structure by means of signal processing and data mining techniques, and (b) optimizing the computing and storage resources with the aid of cloud services. The main focus of this paper is to demonstrate the efficiency of the proposed framework for real-time damage detection of a multi-story shear-building structure under two damage scenarios (change in mass and stiffness) at various locations. Several features are extracted from the sensed data by signal processing techniques and statistical methods. Machine learning algorithms are deployed to select damage-sensitive features as well as to classify the data and trace anomalies in the response of the structure. Here, cloud computing resources from Amazon Web Services (AWS) have been used to implement the proposed framework.
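The feature-then-classify pipeline can be caricatured with scikit-learn. Everything here is illustrative (synthetic signals, simple statistical features, a generic classifier); the paper's actual features and algorithms are not specified in the abstract (requires numpy and scikit-learn):

    import numpy as np
    from sklearn.ensemble import RandomForestClassifier

    rng = np.random.default_rng(0)

    def features(signal):
        # Simple damage-sensitive statistics of one acceleration record.
        return [signal.std(), np.abs(signal).max(), np.mean(signal ** 2)]

    # Synthetic "healthy" vs "damaged" records: damage shifts the variance.
    healthy = [features(rng.normal(0, 1.0, 1024)) for _ in range(100)]
    damaged = [features(rng.normal(0, 1.3, 1024)) for _ in range(100)]
    X = np.array(healthy + damaged)
    y = np.array([0] * 100 + [1] * 100)

    clf = RandomForestClassifier(n_estimators=50, random_state=0).fit(X, y)
    new_record = features(rng.normal(0, 1.3, 1024))
    print("damage probability:", clf.predict_proba([new_record])[0][1])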
OneGeology Web Services and Portal as a global geological SDI - latest standards and technology
NASA Astrophysics Data System (ADS)
Duffy, Tim; Tellez-Arenas, Agnes
2014-05-01
The global coverage of OneGeology Web Services (www.onegeology.org and portal.onegeology.org) achieved since 2007 from the 120 participating geological surveys will be reviewed and the issues arising discussed. Recent enhancements to the OneGeology Web Services capabilities will be covered, including a new up-to-5-star service accreditation scheme utilising the ISO/OGC Web Map Service standard version 1.3, core ISO 19115 metadata additions, and version 2.0 Web Feature Services (WFS) serving the new IUGS-CGI GeoSciML V3.2 geological web data exchange language standard (http://www.geosciml.org/) with its associated 30+ IUGS-CGI vocabularies (http://resource.geosciml.org/ and http://srvgeosciml.brgm.fr/eXist2010/brgm/client.html). Use of the CGI SimpleLithology and timescale dictionaries now allows those who wish to do so to offer data harmonisation when querying their GeoSciML 3.2 based Web Feature Services and their GeoSciML_Portrayal V2.0.1 (http://www.geosciml.org/) Web Map Services in the OneGeology portal (http://portal.onegeology.org). Contributing to OneGeology involves offering to serve, ideally, 1:1,000,000 scale geological data (in practice any scale is now warmly welcomed) as an OGC (Open Geospatial Consortium) standards-based WMS (Web Map Service) from an available WWW server. This may be hosted either within the geological survey or at a neighbouring, regional or other institution that offers to serve the data on its behalf, i.e. offers to help technically by providing the web-serving IT infrastructure as a 'buddy'. OneGeology is a standards-focused Spatial Data Infrastructure (SDI) and works to ensure that these standards work together; it is now possible for European geological surveys to register their INSPIRE web services within the OneGeology SDI (e.g. see http://www.geosciml.org/geosciml/3.2/documentation/cookbook/INSPIRE_GeoSciML_Cookbook%20_1.0.pdf). The OneGeology portal (http://portal.onegeology.org) is the first port of call for anyone wishing to discover the availability of global geological web services and has new functionality to view and use such services, including multiple projection support. KEYWORDS: OneGeology; GeoSciML V3.2; Data exchange; Portal; INSPIRE; Standards; OGC; Interoperability; GeoScience information; WMS; WFS; Cookbook.
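A OneGeology-style layer is consumed with an ordinary WMS GetMap request. A sketch with an invented endpoint and layer name; the parameter names follow WMS 1.3.0:

    import requests

    params = {
        "service": "WMS",
        "version": "1.3.0",
        "request": "GetMap",
        "layers": "GEO_1M_Bedrock",    # invented layer name
        "styles": "",
        "crs": "EPSG:4326",
        "bbox": "45.0,5.0,50.0,10.0",  # WMS 1.3.0 axis order for EPSG:4326:
                                       # minlat,minlon,maxlat,maxlon
        "width": "800",
        "height": "600",
        "format": "image/png",
    }
    resp = requests.get("https://maps.example.org/wms", params=params)
    with open("bedrock.png", "wb") as f:
        f.write(resp.content)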
Web Services and Other Enhancements at the Northern California Earthquake Data Center
NASA Astrophysics Data System (ADS)
Neuhauser, D. S.; Zuzlewski, S.; Allen, R. M.
2012-12-01
The Northern California Earthquake Data Center (NCEDC) provides data archive and distribution services for seismological and geophysical data sets that encompass northern California. The NCEDC is enhancing its ability to deliver rapid information through Web Services. NCEDC Web Services use well-established web server and client protocols and REST software architecture to allow users to easily make queries using web browsers or simple program interfaces and to receive the requested data in real-time rather than through batch or email-based requests. Data are returned to the user in the appropriate format such as XML, RESP, or MiniSEED depending on the service, and are compatible with the equivalent IRIS DMC web services. The NCEDC is currently providing the following Web Services: (1) Station inventory and channel response information delivered in StationXML format, (2) Channel response information delivered in RESP format, (3) Time series availability delivered in text and XML formats, (4) Single channel and bulk data request delivered in MiniSEED format. The NCEDC is also developing a rich Earthquake Catalog Web Service to allow users to query earthquake catalogs based on selection parameters such as time, location or geographic region, magnitude, depth, azimuthal gap, and rms. It will return (in QuakeML format) user-specified results that can include simple earthquake parameters, as well as observations such as phase arrivals, codas, amplitudes, and computed parameters such as first motion mechanisms, moment tensors, and rupture length. The NCEDC will work with both IRIS and the International Federation of Digital Seismograph Networks (FDSN) to define a uniform set of web service specifications that can be implemented by multiple data centers to provide users with a common data interface across data centers. The NCEDC now hosts earthquake catalogs and waveforms from the US Department of Energy (DOE) Enhanced Geothermal Systems (EGS) monitoring networks. These data can be accessed through the above web services and through special NCEDC web pages.
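Such services follow the query-parameters-over-HTTP pattern that the FDSN later standardized across data centers. A generic sketch against a hypothetical dataselect-style endpoint; the parameter names mirror common FDSN conventions and the station values are illustrative:

    import requests

    # Request a short MiniSEED snippet for one channel (illustrative values).
    params = {
        "net": "BK", "sta": "BKS", "loc": "00", "cha": "BHZ",
        "starttime": "2012-10-01T00:00:00",
        "endtime": "2012-10-01T00:10:00",
    }
    resp = requests.get("https://service.example.org/dataselect/1/query",
                        params=params)
    if resp.ok:
        with open("bks_bhz.mseed", "wb") as f:
            f.write(resp.content)   # MiniSEED bytes, readable with ObsPy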
Replacing missing values using trustworthy data values from web data sources
NASA Astrophysics Data System (ADS)
Izham Jaya, M.; Sidi, Fatimah; Mat Yusof, Sharmila; Suriani Affendey, Lilly; Ishak, Iskandar; Jabar, Marzanah A.
2017-09-01
In practice, collected data are usually incomplete and contain missing values. Existing approaches to managing missing values overlook the importance of trustworthy data values as replacements. Given that trusted, complete data are very important in data analysis, we propose a framework for missing value replacement using trustworthy data values from web data sources. The proposed framework adopts an ontology to map data values from web data sources to the incomplete dataset. Because data from the web conflict with each other, we propose a trust score measurement based on data accuracy and data reliability. The trust score is then used to select trustworthy data values from web data sources for missing value replacement. We successfully implemented the proposed framework on a financial dataset and present the findings in this paper. Our experiment shows that replacing missing values with trustworthy data values is important, especially when sources conflict.
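The selection step can be sketched as "score each candidate source value, take the most trusted". The weights and the linear score form are invented for illustration; the paper defines its own accuracy and reliability measures:

    # Candidate values for one missing field, gathered from web sources,
    # each annotated with accuracy and reliability estimates in [0, 1].
    candidates = [
        {"value": 3.95, "accuracy": 0.9, "reliability": 0.7},  # source A
        {"value": 4.20, "accuracy": 0.6, "reliability": 0.9},  # source B
    ]

    def trust_score(c, w_acc=0.5, w_rel=0.5):
        # Invented linear combination of accuracy and reliability.
        return w_acc * c["accuracy"] + w_rel * c["reliability"]

    replacement = max(candidates, key=trust_score)
    print("fill missing value with:", replacement["value"])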
Research of three level match method about semantic web service based on ontology
NASA Astrophysics Data System (ADS)
Xiao, Jie; Cai, Fang
2011-10-01
An important step in applying Web services is the discovery of useful services. Traditional technologies such as UDDI and WSDL use keywords for service discovery, with the disadvantages of requiring user intervention, lacking semantic description, and achieving low accuracy. To cope with these problems, OWL-S is introduced and extended with QoS attributes to describe the attributes and functions of Web services. A three-level service matching algorithm based on ontology and QoS is proposed in this paper. Our algorithm matches web services by utilizing the service profile and QoS parameters together with the inputs and outputs of the service. Simulation results show that it greatly improves the speed of service matching while also guaranteeing high accuracy.
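The three levels can be pictured as successive filters: profile similarity, then QoS thresholds, then input/output compatibility. A toy sketch with invented service records and a trivial keyword-overlap measure standing in for ontology-based similarity:

    # Toy three-level matcher: profile keywords -> QoS bounds -> I/O types.
    services = [
        {"name": "FastQuoteSvc", "profile": {"price", "stock"},
         "qos": {"latency_ms": 120},
         "inputs": {"Ticker"}, "outputs": {"Quote"}},
        {"name": "SlowQuoteSvc", "profile": {"price", "stock"},
         "qos": {"latency_ms": 900},
         "inputs": {"Ticker"}, "outputs": {"Quote"}},
    ]

    def match(request, max_latency):
        hits = []
        for s in services:
            # Level 1: profile keyword overlap.
            if not (request["profile"] & s["profile"]):
                continue
            # Level 2: QoS constraint check.
            if s["qos"]["latency_ms"] > max_latency:
                continue
            # Level 3: required inputs/outputs must be covered.
            if (request["inputs"] <= s["inputs"]
                    and request["outputs"] <= s["outputs"]):
                hits.append(s["name"])
        return hits

    req = {"profile": {"stock"}, "inputs": {"Ticker"}, "outputs": {"Quote"}}
    print(match(req, max_latency=500))  # -> ['FastQuoteSvc']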
Interoperability And Value Added To Earth Observation Data
NASA Astrophysics Data System (ADS)
Gasperi, J.
2012-04-01
Geospatial web services technology has provided a new means for geospatial data interoperability. Open Geospatial Consortium (OGC) services such as the Web Map Service (WMS) for requesting maps on the Internet, the Web Feature Service (WFS) for exchanging vectors, and the Catalog Service for the Web (CSW) for searching geospatialized data have been widely adopted in the Geosciences community in general and in the remote sensing community in particular. These services make Earth Observation data available to a wider range of public users than ever before. The mapshup web client offers an innovative and efficient user interface that takes advantage of the power of interoperability. This presentation will demonstrate how mapshup can be effectively used in the context of natural disaster management.