Sample records for web services API

  1. Web Services--A Buzz Word with Potentials

    Treesearch

    János T. Füstös

    2006-01-01

    The simplest definition of a web service is an application that provides a web API. The web API exposes the functionality of the solution to other applications. The web API relies on other Internet-based technologies to manage communications. The resulting web services are pervasive, vendor-independent, language-neutral, and very low-cost. The main purpose of a web API...

  2. Using USNO's API to Obtain Data

    NASA Astrophysics Data System (ADS)

    Lesniak, Michael V.; Pozniak, Daniel; Punnoose, Tarun

    2015-01-01

    The U.S. Naval Observatory (USNO) is in the process of modernizing its publicly available web services into APIs (Application Programming Interfaces). Services configured as APIs offer greater flexibility to the user and allow greater usage. Depending on the particular service, users who implement our APIs will receive either a PNG (Portable Network Graphics) image or data in JSON (JavaScript Object Notation) format. This raw data can then be embedded in third-party web sites or in apps. Part of the USNO's mission is to provide astronomical and timing data to government agencies and the general public. To this end, the USNO provides accurate computations of astronomical phenomena such as dates of lunar phases, rise and set times of the Moon and Sun, and lunar and solar eclipse times. Users who navigate to our web site and select one of our 18 services are prompted to complete a web form, specifying parameters such as date, time, location, and object. Many of our services work for years between 1700 and 2100, meaning that past, present, and future events can be computed. Upon form submission, our web server processes the request, computes the data, and outputs it to the user. Over recent years, the use of the web by the general public has vastly changed. In response to this, the USNO is modernizing its web-based data services. This includes making our computed data easier to embed within third-party web sites and easier to query from apps running on tablets and smartphones. To facilitate this, the USNO has begun converting its services into APIs. In addition to the existing web forms for the various services, users are able to make direct URL requests that return either an image or numerical data. To date, four of our web services have been configured to run with APIs. Two are image-producing services: "Apparent Disk of a Solar System Object" and "Day and Night Across the Earth." Two API data services are "Complete Sun and Moon Data for One Day" and "Dates of Primary Phases of the Moon." Instructions for how to use our API services as well as examples of their use can be found on one of our explanatory web pages and will be discussed here.
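
    To illustrate the direct-URL style described above, here is a minimal Python sketch against a hypothetical USNO-style endpoint (the host, path, and parameter names are assumptions for illustration, not the service's documented interface):

        import requests

        # Hypothetical USNO-style API call for "Complete Sun and Moon Data for
        # One Day"; the URL and parameter names are illustrative assumptions.
        url = "https://api.usno.example.gov/rstt/oneday"
        params = {"date": "2015-01-01", "coords": "38.89,-77.03", "tz": -5}

        resp = requests.get(url, params=params, timeout=30)
        resp.raise_for_status()
        print(resp.json())  # rise/set times and phase data returned as JSON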

  3. API REST Web service and backend system of Lecturer’s Assessment Information System on Politeknik Negeri Bali

    NASA Astrophysics Data System (ADS)

    Manuaba, I. B. P.; Rudiastini, E.

    2018-01-01

    Assessment of lecturers is a tool used to measure lecturer performance. A lecturer's performance can be measured from three aspects: teaching activities, research, and community service. Measuring such a broad set of aspects requires a dedicated framework, so that the system can be developed in a sustainable manner. The aim of this research is to create an API web service for assessment data, so that the lecturer assessment system can be developed in various frameworks. The system was developed as a web service in the PHP programming language, with output as JSON data. The conclusion of this research is that the API web service can support applications developed on several platforms, such as web and mobile applications.

  4. Restful API Architecture Based on Laravel Framework

    NASA Astrophysics Data System (ADS)

    Chen, Xianjun; Ji, Zhoupeng; Fan, Yu; Zhan, Yongsong

    2017-10-01

    Web services have become an industry-standard technology for message communication and integration between heterogeneous systems. The RESTful API has become the mainstream web service development paradigm after SOAP, yet how to construct RESTful APIs effectively remains a research hotspot. This paper presents a development model for RESTful API construction based on the PHP language and the Laravel framework. The key technical problems that need to be solved during the construction of a RESTful API are discussed, and implementation details based on Laravel are given.
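
    The paper's implementation is PHP with Laravel; as a framework-neutral sketch of the same RESTful routing pattern (resource URLs mapped to HTTP verbs), here is a minimal equivalent in Python with Flask (assuming Flask 2.x; the book resource is invented for illustration):

        from flask import Flask, abort, jsonify, request

        app = Flask(__name__)
        books = {1: {"id": 1, "title": "Example"}}  # in-memory stand-in for a database

        @app.get("/api/books/<int:book_id>")
        def show(book_id):
            if book_id not in books:
                abort(404)  # missing resource maps to HTTP 404
            return jsonify(books[book_id])

        @app.post("/api/books")
        def create():
            book = request.get_json()
            books[book["id"]] = book
            return jsonify(book), 201  # created resource maps to HTTP 201

        if __name__ == "__main__":
            app.run()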

  5. Developer Network

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    2012-08-21

    NREL's Developer Network, developer.nrel.gov, provides data that users can feed into their own analyses and mobile and web applications. Developers retrieve the data through a web services API (application programming interface). The Developer Network handles the overhead of serving web services, such as key management, authentication, analytics, reporting, documentation standards, and throttling, in a common architecture, while allowing the web services and APIs themselves to be maintained and managed independently.
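
    For example, a keyed request in the style of developer.nrel.gov might look like the following sketch (the dataset path and the DEMO_KEY value are assumptions for illustration, not guaranteed endpoints):

        import requests

        # Keyed request in the developer.nrel.gov style; the dataset path and
        # demo key are illustrative assumptions.
        url = "https://developer.nrel.gov/api/alt-fuel-stations/v1.json"
        resp = requests.get(url, params={"api_key": "DEMO_KEY", "limit": 1},
                            timeout=30)
        resp.raise_for_status()
        print(resp.json())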

  6. Multi-National Information Sharing -- Cross Domain Collaborative Information Environment (CDCIE) Solution. Revision 4

    DTIC Science & Technology

    2005-04-12

    Hardware, Database, and Operating System independence using Java • Enterprise-class Architecture using Java2 Enterprise Edition 1.4 • Standards based...portal applications. Compliance with the Java Specification Request for Portlet APIs (JSR-168) (Portlet API) and Web Services for Remote Portals...authentication and authorization • Portal Standards using Java Specification Request for Portlet APIs (JSR-168) (Portlet API) and Web Services for Remote

  7. Migrating Department of Defense (DoD) Web Service Based Applications to Mobile Computing Platforms

    DTIC Science & Technology

    2012-03-01

    World Wide Web Consortium (W3C) Geolocation API to identify the device’s location and then center the map on the device. Finally, we modify the entry...THIS PAGE INTENTIONALLY LEFT BLANK xii List of Acronyms and Abbreviations API Application Programming Interface CSS Cascading Style Sheets CLIMO...Java API for XML Web Services Reference Implementation JS JavaScript JSNI JavaScript Native Interface METOC Meteorological and Oceanographic MAA Mobile

  8. Environmental Models as a Service: Enabling Interoperability ...

    EPA Pesticide Factsheets

    Achieving interoperability in environmental modeling has evolved as software technology has progressed. The recent rise of cloud computing and proliferation of web services initiated a new stage for creating interoperable systems. Scientific programmers increasingly take advantage of streamlined deployment processes and affordable cloud access to move algorithms and data to the web for discoverability and consumption. In these deployments, environmental models can become available to end users through RESTful web services and consistent application program interfaces (APIs) that consume, manipulate, and store modeling data. RESTful modeling APIs also promote discoverability and guide usability through self-documentation. Embracing the RESTful paradigm allows models to be accessible via a web standard, and the resulting endpoints are platform- and implementation-agnostic while simultaneously presenting significant computational capabilities for spatial and temporal scaling. RESTful APIs present data in a simple verb-noun web request interface: the verb dictates how a resource is consumed using HTTP methods (e.g., GET, POST, and PUT) and the noun represents the URL reference of the resource on which the verb will act. The RESTful API can self-document in both the HTTP response and an interactive web page using the Open API standard. This lets models function as an interoperable service that promotes sharing, documentation, and discoverability. Here, we discuss the
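
    The verb-noun pattern can be made concrete with a short sketch (the model-service host, paths, and payloads below are hypothetical):

        import requests

        BASE = "https://models.example.gov/api"  # hypothetical model service

        # GET reads the resource (noun) named by the URL
        run = requests.get(f"{BASE}/runs/42", timeout=30).json()

        # POST creates a new resource under the collection URL
        new_run = requests.post(f"{BASE}/runs",
                                json={"model": "swmm", "years": 10},
                                timeout=30).json()

        # PUT replaces the state of an existing resource
        requests.put(f"{BASE}/runs/42", json={"status": "archived"}, timeout=30)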

  9. Accessing the SEED genome databases via Web services API: tools for programmers.

    PubMed

    Disz, Terry; Akhter, Sajia; Cuevas, Daniel; Olson, Robert; Overbeek, Ross; Vonstein, Veronika; Stevens, Rick; Edwards, Robert A

    2010-06-14

    The SEED integrates many publicly available genome sequences into a single resource. The database contains accurate and up-to-date annotations based on the subsystems concept that leverages clustering between genomes and other clues to accurately and efficiently annotate microbial genomes. The backend is used as the foundation for many genome annotation tools, such as the Rapid Annotation using Subsystems Technology (RAST) server for whole genome annotation, the metagenomics RAST server for random community genome annotations, and the annotation clearinghouse for exchanging annotations from different resources. In addition to a web user interface, the SEED also provides a Web services-based API for programmatic access to the data in the SEED, allowing the development of third-party tools and mash-ups. The currently exposed Web services encompass over forty different methods for accessing data related to microbial genome annotations. The Web services provide comprehensive access to the database back end, allowing any programmer access to the most consistent and accurate genome annotations available. The Web services are deployed using a platform-independent service-oriented approach that allows the user to choose the most suitable programming platform for their application. Example code demonstrates that the Web services can be used to access the SEED using common bioinformatics programming languages such as Perl, Python, and Java. We present a novel approach to access the SEED database. Using Web services, a robust API for access to genomics data is provided, without requiring large volume downloads all at once. The API ensures timely access to the most current datasets available, including new genomes as soon as they come online.

  10. The JANA calibrations and conditions database API

    NASA Astrophysics Data System (ADS)

    Lawrence, David

    2010-04-01

    Calibrations and conditions databases can be accessed from within the JANA Event Processing framework through the API defined in its JCalibration base class. The API is designed to support everything from databases to web services to flat files as the backend. A Web Service backend using the gSOAP toolkit has been implemented, which is particularly interesting since it addresses many modern cybersecurity issues, including support for SSL. The API allows constants to be retrieved through a single line of C++ code, with most of the context, including the transport mechanism, implied by the run currently being analyzed and the environment, relieving developers from implementing such details.

  11. A RESTful API for accessing microbial community data for MG-RAST.

    PubMed

    Wilke, Andreas; Bischof, Jared; Harrison, Travis; Brettin, Tom; D'Souza, Mark; Gerlach, Wolfgang; Matthews, Hunter; Paczian, Tobias; Wilkening, Jared; Glass, Elizabeth M; Desai, Narayan; Meyer, Folker

    2015-01-01

    Metagenomic sequencing has produced significant amounts of data in recent years. For example, as of summer 2013, MG-RAST has been used to annotate over 110,000 data sets totaling over 43 Terabases. With metagenomic sequencing finding even wider adoption in the scientific community, the existing web-based analysis tools and infrastructure in MG-RAST provide limited capability for data retrieval and analysis, such as comparative analysis between multiple data sets. Moreover, although the system provides many analysis tools, it is not comprehensive. By opening MG-RAST up via a web services API (application programmers interface) we have greatly expanded access to MG-RAST data, as well as provided a mechanism for the use of third-party analysis tools with MG-RAST data. This RESTful API makes all data and data objects created by the MG-RAST pipeline accessible as JSON objects. As part of the DOE Systems Biology Knowledgebase project (KBase, http://kbase.us) we have implemented a web services API for MG-RAST. This API complements the existing MG-RAST web interface and constitutes the basis of KBase's microbial community capabilities. In addition, the API exposes a comprehensive collection of data to programmers. This API, which uses a RESTful (Representational State Transfer) implementation, is compatible with most programming environments and should be easy to use for end users and third parties. It provides comprehensive access to sequence data, quality control results, annotations, and many other data types. Where feasible, we have used standards to expose data and metadata. Code examples are provided in a number of languages both to show the versatility of the API and to provide a starting point for users. We present an API that exposes the data in MG-RAST for consumption by our users, greatly enhancing the utility of the MG-RAST service.
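
    As a sketch of the retrieval pattern this describes (the base URL follows MG-RAST's published API conventions and the metagenome identifier is illustrative; both should be treated as assumptions):

        import requests

        # Fetch one metagenome record as JSON; base URL and identifier are
        # assumptions based on MG-RAST API conventions.
        BASE = "https://api.mg-rast.org"
        resp = requests.get(f"{BASE}/metagenome/mgm4447943.3",
                            params={"verbosity": "minimal"}, timeout=60)
        resp.raise_for_status()
        record = resp.json()
        print(record.get("id"), record.get("name"))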

  12. SAMuS: Service-Oriented Architecture for Multisensor Surveillance in Smart Homes

    PubMed Central

    Van de Walle, Rik

    2014-01-01

    The design of a service-oriented architecture for multisensor surveillance in smart homes is presented as an integrated solution enabling automatic deployment, dynamic selection, and composition of sensors. Sensors are implemented as Web-connected devices, with a uniform Web API. RESTdesc is used to describe the sensors, and a novel solution is presented to automatically compose Web APIs that can be applied with existing Semantic Web reasoners. We evaluated the solution by building a smart Kinect sensor that is able to dynamically switch between IR and RGB and to optimize person detection by incorporating feedback from pressure sensors, demonstrating the collaboration among sensors to enhance detection of complex events. The performance results show that the platform scales for many Web APIs, as composition time remains limited to a few hundred milliseconds in almost all cases. PMID:24778579

  13. CMR Catalog Service for the Web

    NASA Technical Reports Server (NTRS)

    Newman, Doug; Mitchell, Andrew

    2016-01-01

    With the impending retirement of Global Change Master Directory (GCMD) Application Programming Interfaces (APIs) the Common Metadata Repository (CMR) was charged with providing a collection-level Catalog Service for the Web (CSW) that provided the same level of functionality as GCMD. This talk describes the capabilities of the CMR CSW API with particular reference to the support of the Committee on Earth Observation Satellites (CEOS) Working Group on Information Systems and Services (WGISS) Integrated Catalog (CWIC).

  14. Using the RxNorm web services API for quality assurance purposes.

    PubMed

    Peters, Lee; Bodenreider, Olivier

    2008-11-06

    Auditing large, rapidly evolving terminological systems is still a challenge. In the case of RxNorm, a standardized nomenclature for clinical drugs, we argue that quality assurance processes can benefit from the recently released application programming interface (API) provided by RxNav. We demonstrate the usefulness of the API by performing a systematic comparison of alternative paths in the RxNorm graph, over several thousands of drug entities. This study revealed potential errors in RxNorm, currently under review. The results also prompted us to modify the implementation of RxNav to navigate the RxNorm graph more accurately. The RxNav web services API used in this experiment is robust and fast.
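
    As an illustration of this kind of programmatic traversal, the sketch below uses current RxNav REST conventions, which postdate the API version the paper describes; the paths and response keys should be treated as assumptions to that extent:

        import requests

        BASE = "https://rxnav.nlm.nih.gov/REST"

        # Resolve a drug name to its RxNorm concept identifier (RxCUI)
        rxcui = requests.get(f"{BASE}/rxcui.json",
                             params={"name": "aspirin"},
                             timeout=30).json()["idGroup"]["rxnormId"][0]

        # Retrieve all related concepts; comparing alternative relationship
        # paths like this underpins the audit described above.
        related = requests.get(f"{BASE}/rxcui/{rxcui}/allrelated.json",
                               timeout=30).json()
        print(rxcui, len(related["allRelatedGroup"]["conceptGroup"]))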

  15. Using the RxNorm Web Services API for Quality Assurance Purposes

    PubMed Central

    Peters, Lee; Bodenreider, Olivier

    2008-01-01

    Auditing large, rapidly evolving terminological systems is still a challenge. In the case of RxNorm, a standardized nomenclature for clinical drugs, we argue that quality assurance processes can benefit from the recently released application programming interface (API) provided by RxNav. We demonstrate the usefulness of the API by performing a systematic comparison of alternative paths in the RxNorm graph, over several thousands of drug entities. This study revealed potential errors in RxNorm, currently under review. The results also prompted us to modify the implementation of RxNav to navigate the RxNorm graph more accurately. The RxNorm web services API used in this experiment is robust and fast. PMID:18999038

  16. A RESTful API for accessing microbial community data for MG-RAST

    DOE PAGES

    Wilke, Andreas; Bischof, Jared; Harrison, Travis; ...

    2015-01-08

    Metagenomic sequencing has produced significant amounts of data in recent years. For example, as of summer 2013, MG-RAST has been used to annotate over 110,000 data sets totaling over 43 Terabases. With metagenomic sequencing finding even wider adoption in the scientific community, the existing web-based analysis tools and infrastructure in MG-RAST provide limited capability for data retrieval and analysis, such as comparative analysis between multiple data sets. Moreover, although the system provides many analysis tools, it is not comprehensive. By opening MG-RAST up via a web services API (application programmers interface) we have greatly expanded access to MG-RAST data, as well as provided a mechanism for the use of third-party analysis tools with MG-RAST data. This RESTful API makes all data and data objects created by the MG-RAST pipeline accessible as JSON objects. As part of the DOE Systems Biology Knowledgebase project (KBase, http://kbase.us) we have implemented a web services API for MG-RAST. This API complements the existing MG-RAST web interface and constitutes the basis of KBase’s microbial community capabilities. In addition, the API exposes a comprehensive collection of data to programmers. This API, which uses a RESTful (Representational State Transfer) implementation, is compatible with most programming environments and should be easy to use for end users and third parties. It provides comprehensive access to sequence data, quality control results, annotations, and many other data types. Where feasible, we have used standards to expose data and metadata. Code examples are provided in a number of languages both to show the versatility of the API and to provide a starting point for users. We present an API that exposes the data in MG-RAST for consumption by our users, greatly enhancing the utility of the MG-RAST service.

  17. A RESTful API for Accessing Microbial Community Data for MG-RAST

    PubMed Central

    Wilke, Andreas; Bischof, Jared; Harrison, Travis; Brettin, Tom; D'Souza, Mark; Gerlach, Wolfgang; Matthews, Hunter; Paczian, Tobias; Wilkening, Jared; Glass, Elizabeth M.; Desai, Narayan; Meyer, Folker

    2015-01-01

    Metagenomic sequencing has produced significant amounts of data in recent years. For example, as of summer 2013, MG-RAST has been used to annotate over 110,000 data sets totaling over 43 Terabases. With metagenomic sequencing finding even wider adoption in the scientific community, the existing web-based analysis tools and infrastructure in MG-RAST provide limited capability for data retrieval and analysis, such as comparative analysis between multiple data sets. Moreover, although the system provides many analysis tools, it is not comprehensive. By opening MG-RAST up via a web services API (application programmers interface) we have greatly expanded access to MG-RAST data, as well as provided a mechanism for the use of third-party analysis tools with MG-RAST data. This RESTful API makes all data and data objects created by the MG-RAST pipeline accessible as JSON objects. As part of the DOE Systems Biology Knowledgebase project (KBase, http://kbase.us) we have implemented a web services API for MG-RAST. This API complements the existing MG-RAST web interface and constitutes the basis of KBase's microbial community capabilities. In addition, the API exposes a comprehensive collection of data to programmers. This API, which uses a RESTful (Representational State Transfer) implementation, is compatible with most programming environments and should be easy to use for end users and third parties. It provides comprehensive access to sequence data, quality control results, annotations, and many other data types. Where feasible, we have used standards to expose data and metadata. Code examples are provided in a number of languages both to show the versatility of the API and to provide a starting point for users. We present an API that exposes the data in MG-RAST for consumption by our users, greatly enhancing the utility of the MG-RAST service. PMID:25569221

  18. The JANA calibrations and conditions database API

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    David Lawrence

    Calibrations and conditions databases can be accessed from within the JANA Event Processing framework through the API defined in its JCalibration base class. The API is designed to support everything from databases to web services to flat files as the backend. A Web Service backend using the gSOAP toolkit has been implemented, which is particularly interesting since it addresses many modern cybersecurity issues, including support for SSL. The API allows constants to be retrieved through a single line of C++ code, with most of the context, including the transport mechanism, implied by the run currently being analyzed and the environment, relieving developers from implementing such details.

  19. Enabling Mobile Air Quality App Development with an AirNow API

    NASA Astrophysics Data System (ADS)

    Dye, T.; White, J. E.; Ludewig, S. A.; Dickerson, P.; Healy, A. N.; West, J. W.; Prince, L. A.

    2013-12-01

    The U.S. Environmental Protection Agency's (EPA) AirNow program works with over 130 participating state, local, and federal air quality agencies to obtain, quality control, and store real-time air quality observations and forecasts. From these data, the AirNow system generates thousands of maps and products each hour. Each day, information from AirNow is published online and in other media to assist the public in making health-based decisions related to air quality. However, an increasing number of people use mobile devices as their primary tool for obtaining information, and AirNow has responded to this trend by publishing an easy-to-use Web API that is useful for mobile app developers. This presentation will describe the various features of the AirNow application programming interface (API), including Representational State Transfer (REST)-type web services, file outputs, and RSS feeds. In addition, a web portal for the AirNow API will be shown, including documentation on use of the system, a query tool for configuring and running web services, and general information about the air quality data and forecasts available. Data published via the AirNow API includes corresponding Air Quality Index (AQI) levels for each pollutant. We will highlight examples of mobile apps that are using the AirNow API to provide location-based, real-time air quality information. Examples will include mobile apps developed for Minnesota ('Minnesota Air') and Washington, D.C. ('Clean Air Partners Air Quality'), and an app developed by EPA ('EPA AirNow').
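
    A sketch of the kind of query a mobile app might issue (the host, path, and parameter names are assumptions modeled on the AirNow API's documented style; a registered API key would be required):

        import requests

        # Current observations by ZIP code, AirNow-API style; the URL and
        # parameter names are illustrative assumptions.
        url = "https://airnowapi.example.org/aq/observation/zipCode/current/"
        params = {"format": "application/json", "zipCode": "20002",
                  "API_KEY": "YOUR_KEY"}
        resp = requests.get(url, params=params, timeout=30)
        resp.raise_for_status()
        for obs in resp.json():
            print(obs["ParameterName"], obs["AQI"])  # pollutant and AQI level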

  20. Reactome Pengine: A web-logic API to the homo sapiens reactome.

    PubMed

    Neaves, Samuel R; Tsoka, Sophia; Millard, Louise A C

    2018-03-30

    Existing ways of accessing data from the Reactome database are limited. Either a researcher is restricted to particular queries defined by a web application programming interface (API), or they have to download the whole database. Reactome Pengine is a web service providing a logic programming based API to the human reactome. This gives researchers greater flexibility in data access than existing APIs, as users can send their own small programs (alongside queries) to Reactome Pengine. The server and an example notebook can be found at https://apps.nms.kcl.ac.uk/reactome-pengine. Source code is available at https://github.com/samwalrus/reactome-pengine and a Docker image is available at https://hub.docker.com/r/samneaves/rp4/. samuel.neaves@kcl.ac.uk. Supplementary data are available at Bioinformatics online.

  1. ChEMBL web services: streamlining access to drug discovery data and utilities

    PubMed Central

    Davies, Mark; Nowotka, Michał; Papadatos, George; Dedman, Nathan; Gaulton, Anna; Atkinson, Francis; Bellis, Louisa; Overington, John P.

    2015-01-01

    ChEMBL is now a well-established resource in the fields of drug discovery and medicinal chemistry research. The ChEMBL database curates and stores standardized bioactivity, molecule, target and drug data extracted from multiple sources, including the primary medicinal chemistry literature. Programmatic access to ChEMBL data has been improved by a recent update to the ChEMBL web services (version 2.0.x, https://www.ebi.ac.uk/chembl/api/data/docs), which exposes significantly more data from the underlying database and introduces new functionality. To complement the data-focused services, a utility service (version 1.0.x, https://www.ebi.ac.uk/chembl/api/utils/docs), which provides RESTful access to commonly used cheminformatics methods, has also been concurrently developed. The ChEMBL web services can be used together or independently to build applications and data processing workflows relevant to drug discovery and chemical biology. PMID:25883136
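
    Using the data API base URL given above, a single-molecule lookup might look like this sketch (the resource path, format parameter, and response field names are assumptions based on ChEMBL web service conventions):

        import requests

        # Fetch one molecule record as JSON from the documented data API root;
        # the path and field names are assumed from ChEMBL conventions.
        BASE = "https://www.ebi.ac.uk/chembl/api/data"
        resp = requests.get(f"{BASE}/molecule/CHEMBL25",
                            params={"format": "json"}, timeout=30)
        resp.raise_for_status()
        mol = resp.json()
        print(mol.get("pref_name"), mol.get("max_phase"))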

  2. Using ChEMBL web services for building applications and data processing workflows relevant to drug discovery.

    PubMed

    Nowotka, Michał M; Gaulton, Anna; Mendez, David; Bento, A Patricia; Hersey, Anne; Leach, Andrew

    2017-08-01

    ChEMBL is a manually curated database of bioactivity data on small drug-like molecules, used by drug discovery scientists. Among many access methods, a REST API provides programmatic access, allowing the remote retrieval of ChEMBL data and its integration into other applications. This approach allows scientists to move from a world where they go to the ChEMBL web site to search for relevant data, to one where ChEMBL data can be simply integrated into their everyday tools and work environment. Areas covered: This review highlights some of the audiences who may benefit from using the ChEMBL API, and the goals they can address, through the description of several use cases. The examples cover a team communication tool (Slack), a data analytics platform (KNIME), batch job management software (Luigi) and Rich Internet Applications. Expert opinion: The advent of web technologies, cloud computing and microservices-oriented architectures has made REST APIs an essential ingredient of modern software development models. The widespread availability of tools consuming RESTful resources has made them useful for many groups of users. The ChEMBL API is a valuable resource of drug discovery bioactivity data for professional chemists, chemistry students, data scientists, and scientific and web developers.

  3. GIANT API: an application programming interface for functional genomics

    PubMed Central

    Roberts, Andrew M.; Wong, Aaron K.; Fisk, Ian; Troyanskaya, Olga G.

    2016-01-01

    GIANT API provides biomedical researchers programmatic access to tissue-specific and global networks in humans and model organisms, and associated tools, which includes functional re-prioritization of existing genome-wide association study (GWAS) data. Using tissue-specific interaction networks, researchers are able to predict relationships between genes specific to a tissue or cell lineage, identify the changing roles of genes across tissues and uncover disease-gene associations. Additionally, GIANT API enables computational tools like NetWAS, which leverages tissue-specific networks for re-prioritization of GWAS results. The web services covered by the API include 144 tissue-specific functional gene networks in human, global functional networks for human and six common model organisms and the NetWAS method. GIANT API conforms to the REST architecture, which makes it stateless, cacheable and highly scalable. It can be used by a diverse range of clients including web browsers, command terminals, programming languages and standalone apps for data analysis and visualization. The API is freely available for use at http://giant-api.princeton.edu. PMID:27098035

  4. Programmatic access to logical models in the Cell Collective modeling environment via a REST API.

    PubMed

    Kowal, Bryan M; Schreier, Travis R; Dauer, Joseph T; Helikar, Tomáš

    2016-01-01

    Cell Collective (www.cellcollective.org) is a web-based interactive environment for constructing, simulating and analyzing logical models of biological systems. Herein, we present a Web service to access models, annotations, and simulation data in the Cell Collective platform through the Representational State Transfer (REST) Application Programming Interface (API). The REST API provides a convenient method for obtaining Cell Collective data through almost any programming language. To ensure easy processing of the retrieved data, the request output from the API is available in a standard JSON format. The Cell Collective REST API is freely available at http://thecellcollective.org/tccapi. All public models in Cell Collective are available through the REST API. Users interested in creating and accessing their own models through the REST API first need to create an account in Cell Collective (http://thecellcollective.org). thelikar2@unl.edu. Technical user documentation: https://goo.gl/U52GWo. Copyright © 2015 Elsevier Ireland Ltd. All rights reserved.
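
    A minimal sketch of the described access pattern, assuming a hypothetical model-listing resource under the documented API root (the path and response shape are invented for illustration):

        import requests

        # The API root is from the abstract; the "models" path and response
        # shape are illustrative assumptions.
        API_ROOT = "https://thecellcollective.org/tccapi"
        resp = requests.get(f"{API_ROOT}/models", timeout=30)
        resp.raise_for_status()
        for model in resp.json():  # standard JSON output, per the abstract
            print(model)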

  5. Globus: Service and Platform for Research Data Lifecycle Management

    NASA Astrophysics Data System (ADS)

    Ananthakrishnan, R.; Foster, I.

    2017-12-01

    Globus offers a range of data management capabilities to the community as hosted services, encompassing data transfer and sharing, user identity and authorization, and data publication. Globus capabilities are accessible via both a web browser and REST APIs. Web access allows researchers to use Globus capabilities through a software-as-a-service model; and the REST APIs address the needs of developers of research services, who can now use Globus as a platform, outsourcing complex user and data management tasks to Globus services. In this presentation, we review Globus capabilities and outline how it is being applied as a platform for scientific services, and highlight work done to link computational analysis flows to the underlying data through an interactive Jupyter notebook environment to promote immediate data usability, reusability of these flows by other researchers, and future analysis extensibility.

  6. ChEMBL web services: streamlining access to drug discovery data and utilities.

    PubMed

    Davies, Mark; Nowotka, Michał; Papadatos, George; Dedman, Nathan; Gaulton, Anna; Atkinson, Francis; Bellis, Louisa; Overington, John P

    2015-07-01

    ChEMBL is now a well-established resource in the fields of drug discovery and medicinal chemistry research. The ChEMBL database curates and stores standardized bioactivity, molecule, target and drug data extracted from multiple sources, including the primary medicinal chemistry literature. Programmatic access to ChEMBL data has been improved by a recent update to the ChEMBL web services (version 2.0.x, https://www.ebi.ac.uk/chembl/api/data/docs), which exposes significantly more data from the underlying database and introduces new functionality. To complement the data-focused services, a utility service (version 1.0.x, https://www.ebi.ac.uk/chembl/api/utils/docs), which provides RESTful access to commonly used cheminformatics methods, has also been concurrently developed. The ChEMBL web services can be used together or independently to build applications and data processing workflows relevant to drug discovery and chemical biology. © The Author(s) 2015. Published by Oxford University Press on behalf of Nucleic Acids Research.

  7. Unifying Access to National Hydrologic Data Repositories via Web Services

    NASA Astrophysics Data System (ADS)

    Valentine, D. W.; Jennings, B.; Zaslavsky, I.; Maidment, D. R.

    2006-12-01

    The CUAHSI hydrologic information system (HIS) is designed to be a live, multiscale web portal system for accessing, querying, visualizing, and publishing distributed hydrologic observation data and models for any location or region in the United States. The HIS design follows the principles of open service-oriented architecture, i.e. system components are represented as web services with well-defined standard service APIs. WaterOneFlow web services are the main component of the design. The currently available services have been completely re-written compared to the previous version, and provide programmatic access to USGS NWIS (stream flow, groundwater, and water quality repositories), DAYMET daily observations, NASA MODIS, and Unidata NAM streams, with several additional web service wrappers being added (EPA STORET, NCDC, and others). Different repositories of hydrologic data use different vocabularies, and support different types of query access. Resolving semantic and structural heterogeneities across different hydrologic observation archives and distilling a generic set of service signatures is one of the main scalability challenges in this project, and a requirement in our web service design. To accomplish the uniformity of the web services API, data repositories are modeled following the CUAHSI Observation Data Model. The web service responses are document-based, and use an XML schema to express the semantics in a standard format. Access to station metadata is provided via the web service methods GetSites, GetSiteInfo and GetVariableInfo. These methods form the foundation of the CUAHSI HIS discovery interface and may execute over locally-stored metadata or request the information from remote repositories directly. Observation values are retrieved via a generic GetValues method which is executed against national data repositories. The service is implemented in ASP.Net, and other providers are implementing WaterOneFlow services in Java. A reference implementation of WaterOneFlow web services is available. More information about the ongoing development of CUAHSI HIS is available from http://www.cuahsi.org/his/.
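
    Because WaterOneFlow is a WSDL-described SOAP service, clients can be generated dynamically; here is a sketch in Python with the zeep library (the WSDL URL and the site and variable codes are assumptions; the method names come from the abstract):

        from zeep import Client

        # WSDL location is an assumption for illustration; GetSiteInfo and
        # GetValues are the WaterOneFlow methods named above.
        WSDL = "http://hydroportal.example.org/nwis/cuahsi_1_1.asmx?WSDL"
        client = Client(WSDL)

        # Site metadata, then observation values for one variable and period
        site_info = client.service.GetSiteInfo(site="NWIS:01646500")
        values = client.service.GetValues(location="NWIS:01646500",
                                          variable="NWIS:00060",
                                          startDate="2006-01-01",
                                          endDate="2006-01-31")
        print(values)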

  8. GIANT API: an application programming interface for functional genomics.

    PubMed

    Roberts, Andrew M; Wong, Aaron K; Fisk, Ian; Troyanskaya, Olga G

    2016-07-08

    GIANT API provides biomedical researchers programmatic access to tissue-specific and global networks in humans and model organisms, and associated tools, which includes functional re-prioritization of existing genome-wide association study (GWAS) data. Using tissue-specific interaction networks, researchers are able to predict relationships between genes specific to a tissue or cell lineage, identify the changing roles of genes across tissues and uncover disease-gene associations. Additionally, GIANT API enables computational tools like NetWAS, which leverages tissue-specific networks for re-prioritization of GWAS results. The web services covered by the API include 144 tissue-specific functional gene networks in human, global functional networks for human and six common model organisms and the NetWAS method. GIANT API conforms to the REST architecture, which makes it stateless, cacheable and highly scalable. It can be used by a diverse range of clients including web browsers, command terminals, programming languages and standalone apps for data analysis and visualization. The API is freely available for use at http://giant-api.princeton.edu. © The Author(s) 2016. Published by Oxford University Press on behalf of Nucleic Acids Research.

  9. Home - Energy Innovation Portal

    Science.gov Websites

    Browse 1,225 technology marketing summaries and 17 success stories. API: get technology information via a web service.

  10. elevatr: Access Elevation Data from Various APIs

    EPA Science Inventory

    Several web services are available that provide access to elevation data. This package provides access to several of those services and returns elevation data either as a SpatialPointsDataFrame from point elevation services or as a raster object from raster elevation services. ...

  11. G2S: a web-service for annotating genomic variants on 3D protein structures.

    PubMed

    Wang, Juexin; Sheridan, Robert; Sumer, S Onur; Schultz, Nikolaus; Xu, Dong; Gao, Jianjiong

    2018-06-01

    Accurately mapping and annotating genomic locations on 3D protein structures is a key step in structure-based analysis of genomic variants detected by recent large-scale sequencing efforts. There are several mapping resources currently available, but none of them provides a web API (Application Programming Interface) that supports programmatic access. We present G2S, a real-time web API that provides automated mapping of genomic variants on 3D protein structures. G2S can align genomic locations of variants, protein locations, or protein sequences to protein structures and retrieve the mapped residues from structures. The G2S API uses a REST-inspired design and can be used by various clients such as web browsers, command terminals, programming languages and other bioinformatics tools for bringing 3D structures into genomic variant analysis. The web server and source code are freely available at https://g2s.genomenexus.org. g2s@genomenexus.org. Supplementary data are available at Bioinformatics online.
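
    A sketch of a mapping call against the documented server (the resource path and parameter names below are hypothetical; the abstract does not give the URL template):

        import requests

        # Map a genomic position to 3D structure residues; the path and
        # parameters are hypothetical illustrations of the described API.
        resp = requests.get(
            "https://g2s.genomenexus.org/api/alignments",
            params={"chromosome": "7", "position": 140453136},
            timeout=30,
        )
        resp.raise_for_status()
        print(resp.json())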

  12. The research and implementation of coalfield spontaneous combustion of carbon emission WebGIS based on Silverlight and ArcGIS server

    NASA Astrophysics Data System (ADS)

    Zhu, Z.; Bi, J.; Wang, X.; Zhu, W.

    2014-02-01

    As an important sub-topic of the construction of a public information platform for carbon emission data from natural processes, a WebGIS system for carbon emissions from coalfield spontaneous combustion has become an important object of study. Reflecting the characteristics of coalfield spontaneous combustion carbon emission data (a wide range of rich, complex data) and its geospatial nature, the data are divided into attribute data and spatial data. Based on a full analysis of the data, we completed a detailed design of the Oracle database and stored the data in it. Through Silverlight rich-client technology and the extension of WCF services, the system achieves dynamic web query, retrieval, statistics, and analysis of the attribute data. For spatial data, we take advantage of ArcGIS Server and the Silverlight-based API to invoke map services, GP services, image services, and other services published by the GIS server in the background, implementing display, analysis, and thematic mapping of remote sensing image data and web map data for coalfield spontaneous combustion. The study found that Silverlight rich-client technology, combined with an object-oriented WCF service framework, can be used to construct a WebGIS system efficiently. Combining this with the ArcGIS Silverlight API to achieve interactive queries of the attribute and spatial data of coalfield spontaneous combustion emissions can greatly improve the performance of the WebGIS system. At the same time, it provides a strong guarantee for the construction of a public information platform for China's carbon emission data.

  13. Introducing the PRIDE Archive RESTful web services.

    PubMed

    Reisinger, Florian; del-Toro, Noemi; Ternent, Tobias; Hermjakob, Henning; Vizcaíno, Juan Antonio

    2015-07-01

    The PRIDE (PRoteomics IDEntifications) database is one of the world-leading public repositories of mass spectrometry (MS)-based proteomics data and it is a founding member of the ProteomeXchange Consortium of proteomics resources. In the original PRIDE database system, users could access data programmatically by accessing the web services provided by the PRIDE BioMart interface. New REST (REpresentational State Transfer) web services have been developed to serve the most popular functionality provided by BioMart (now discontinued due to data scalability issues) and address the data access requirements of the newly developed PRIDE Archive. Using the API (Application Programming Interface) it is now possible to programmatically query for and retrieve peptide and protein identifications, project and assay metadata and the originally submitted files. Searching and filtering is also possible by metadata information, such as sample details (e.g. species and tissues), instrumentation (mass spectrometer), keywords and other provided annotations. The PRIDE Archive web services were first made available in April 2014. The API has already been adopted by a few applications and standalone tools such as PeptideShaker, PRIDE Inspector, the Unipept web application and the Python-based BioServices package. This application is free and open to all users with no login requirement and can be accessed at http://www.ebi.ac.uk/pride/ws/archive/. © The Author(s) 2015. Published by Oxford University Press on behalf of Nucleic Acids Research.
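
    Using the base URL given above, a project search might look like the following sketch (the /project/list path and parameter names follow the PRIDE Archive API conventions of the time, but are assumptions here):

        import requests

        BASE = "http://www.ebi.ac.uk/pride/ws/archive"  # from the abstract

        # Keyword search over projects; path and parameters are assumed.
        resp = requests.get(f"{BASE}/project/list",
                            params={"query": "human", "show": 5}, timeout=30)
        resp.raise_for_status()
        for project in resp.json().get("list", []):
            print(project.get("accession"), project.get("title"))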

  14. BioSWR – Semantic Web Services Registry for Bioinformatics

    PubMed Central

    Repchevsky, Dmitry; Gelpi, Josep Ll.

    2014-01-01

    Despite the variety of available Web services registries specifically aimed at Life Sciences, their scope is usually restricted to a limited set of well-defined types of services. While dedicated registries are generally tied to a particular format, general-purpose ones are more adherent to standards and usually rely on Web Service Definition Language (WSDL). Although WSDL is flexible enough to support common Web services types, its lack of semantic expressiveness led to various initiatives to describe Web services via ontology languages. Nevertheless, WSDL 2.0 descriptions gained a standard representation based on Web Ontology Language (OWL). BioSWR is a novel Web services registry that provides standard Resource Description Framework (RDF) based Web services descriptions along with the traditional WSDL based ones. The registry provides a Web-based interface for Web services registration, querying and annotation, and is also accessible programmatically via a Representational State Transfer (REST) API or using the SPARQL Protocol and RDF Query Language. The BioSWR server is located at http://inb.bsc.es/BioSWR/ and its code is available at https://sourceforge.net/projects/bioswr/ under the LGPL license. PMID:25233118

  15. BioSWR--semantic web services registry for bioinformatics.

    PubMed

    Repchevsky, Dmitry; Gelpi, Josep Ll

    2014-01-01

    Despite the variety of available Web services registries specifically aimed at Life Sciences, their scope is usually restricted to a limited set of well-defined types of services. While dedicated registries are generally tied to a particular format, general-purpose ones are more adherent to standards and usually rely on Web Service Definition Language (WSDL). Although WSDL is flexible enough to support common Web services types, its lack of semantic expressiveness led to various initiatives to describe Web services via ontology languages. Nevertheless, WSDL 2.0 descriptions gained a standard representation based on Web Ontology Language (OWL). BioSWR is a novel Web services registry that provides standard Resource Description Framework (RDF) based Web services descriptions along with the traditional WSDL based ones. The registry provides a Web-based interface for Web services registration, querying and annotation, and is also accessible programmatically via a Representational State Transfer (REST) API or using the SPARQL Protocol and RDF Query Language. The BioSWR server is located at http://inb.bsc.es/BioSWR/ and its code is available at https://sourceforge.net/projects/bioswr/ under the LGPL license.

  16. Extending Climate Analytics-as-a-Service to the Earth System Grid Federation

    NASA Astrophysics Data System (ADS)

    Tamkin, G.; Schnase, J. L.; Duffy, D.; McInerney, M.; Nadeau, D.; Li, J.; Strong, S.; Thompson, J. H.

    2015-12-01

    We are building three extensions to prior-funded work on climate analytics-as-a-service that will benefit the Earth System Grid Federation (ESGF) as it addresses the Big Data challenges of future climate research: (1) We are creating a cloud-based, high-performance Virtual Real-Time Analytics Testbed supporting a select set of climate variables from six major reanalysis data sets. This near real-time capability will enable advanced technologies like the Cloudera Impala-based Structured Query Language (SQL) query capabilities and Hadoop-based MapReduce analytics over native NetCDF files while providing a platform for community experimentation with emerging analytic technologies. (2) We are building a full-featured Reanalysis Ensemble Service comprising monthly means data from six reanalysis data sets. The service will provide a basic set of commonly used operations over the reanalysis collections. The operations will be made accessible through NASA's climate data analytics Web services and our client-side Climate Data Services (CDS) API. (3) We are establishing an Open Geospatial Consortium (OGC) WPS-compliant Web service interface to our climate data analytics service that will enable greater interoperability with next-generation ESGF capabilities. The CDS API will be extended to accommodate the new WPS Web service endpoints as well as ESGF's Web service endpoints. These activities address some of the most important technical challenges for server-side analytics and support the research community's requirements for improved interoperability and improved access to reanalysis data.

  17. Environmental Models as a Service: Enabling Interoperability through RESTful Endpoints and API Documentation

    EPA Science Inventory

    Achieving interoperability in environmental modeling has evolved as software technology has progressed. The recent rise of cloud computing and proliferation of web services initiated a new stage for creating interoperable systems. Scientific programmers increasingly take advantag...

  18. Environmental Models as a Service: Enabling Interoperability through RESTful Endpoints and API Documentation.

    EPA Science Inventory

    Achieving interoperability in environmental modeling has evolved as software technology has progressed. The recent rise of cloud computing and proliferation of web services initiated a new stage for creating interoperable systems. Scientific programmers increasingly take advantag...

  19. A Web Service Protocol Realizing Interoperable Internet of Things Tasking Capability.

    PubMed

    Huang, Chih-Yuan; Wu, Cheng-Hung

    2016-08-31

    The Internet of Things (IoT) is an infrastructure that interconnects uniquely-identifiable devices using the Internet. By interconnecting everyday appliances, various monitoring and physical mashup applications can be constructed to improve humans' daily life. In general, IoT devices provide two main capabilities: sensing and tasking capabilities. While the sensing capability is similar to the World-Wide Sensor Web, this research focuses on the tasking capability. However, currently, IoT devices created by different manufacturers follow different proprietary protocols and are locked in many closed ecosystems. This heterogeneity issue impedes the interconnection between IoT devices and damages the potential of the IoT. To address this issue, this research aims at proposing an interoperable solution called tasking capability description that allows users to control different IoT devices using a uniform web service interface. This paper demonstrates the contribution of the proposed solution by interconnecting different IoT devices for different applications. In addition, the proposed solution is integrated with the OGC SensorThings API standard, which is a Web service standard defined for the IoT sensing capability. Consequently, the Extended SensorThings API can realize both IoT sensing and tasking capabilities in an integrated and interoperable manner.
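
    For context, SensorThings exposes OData-style resource collections; the sketch below shows a sensing read plus a tasking-style POST in the spirit of the Extended SensorThings API (the server URL, the Tasks collection, and the payload shape are assumptions):

        import requests

        BASE = "https://iot.example.org/v1.0"  # hypothetical SensorThings server

        # Sensing side: list Things, the standard SensorThings collection
        things = requests.get(f"{BASE}/Things", params={"$top": 3},
                              timeout=30).json()["value"]
        print([t.get("name") for t in things])

        # Tasking side: issue a task; the collection name and payload shape
        # are assumptions modeled on the paper's tasking capability description.
        task = {"taskingParameters": {"power": "on"}}
        requests.post(f"{BASE}/Tasks", json=task, timeout=30)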

  20. Environmental Models as a Service: Enabling Interoperability through RESTful Endpoints and API Documentation (presentation)

    EPA Science Inventory

    Achieving interoperability in environmental modeling has evolved as software technology has progressed. The recent rise of cloud computing and proliferation of web services initiated a new stage for creating interoperable systems. Scientific programmers increasingly take advantag...

  1. A Java API for working with PubChem datasets.

    PubMed

    Southern, Mark R; Griffin, Patrick R

    2011-03-01

    PubChem is a public repository of chemical structures and associated biological activities. The PubChem BioAssay database contains assay descriptions, conditions and readouts and biological screening results that have been submitted by the biomedical research community. The PubChem web site and Power User Gateway (PUG) web service allow users to interact with the data, and raw files are available via FTP. These resources are helpful to many, but there can also be great benefit in using a software API to manipulate the data. Here, we describe a Java API with entity objects mapped to the PubChem Schema and with wrapper functions for calling the NCBI eUtilities and PubChem PUG web services. PubChem BioAssays and associated chemical compounds can then be queried and manipulated in a local relational database. Features include chemical structure searching and generation and display of curve fits from stored dose-response experiments, something that is not yet available within PubChem itself. The aim is to provide researchers with a fast, consistent, queryable local resource from which to manipulate PubChem BioAssays in a database-agnostic manner. It is not intended as an end user tool but to provide a platform for further automation and tools development. http://code.google.com/p/pubchemdb.
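
    The Java API wraps the PUG and eUtilities services; for comparison, PubChem's later PUG-REST facade exposes similar lookups over plain URLs, as in this sketch (note that PUG-REST is a newer interface than the PUG XML service the paper wraps):

        import requests

        # PUG-REST lookups: compound identifiers for a name, then a property.
        # This is the newer REST facade, not the PUG XML service used above.
        base = "https://pubchem.ncbi.nlm.nih.gov/rest/pug"
        cids = requests.get(f"{base}/compound/name/aspirin/cids/JSON",
                            timeout=30).json()["IdentifierList"]["CID"]
        props = requests.get(
            f"{base}/compound/cid/{cids[0]}/property/MolecularFormula/JSON",
            timeout=30).json()
        print(cids[0], props["PropertyTable"]["Properties"][0]["MolecularFormula"])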

  2. The geospatial data quality REST API for primary biodiversity data

    PubMed Central

    Otegui, Javier; Guralnick, Robert P.

    2016-01-01

    Summary: We present a REST web service to assess the geospatial quality of primary biodiversity data. It enables access to basic and advanced functions to detect completeness and consistency issues as well as general errors in the provided record or set of records. The API uses JSON for data interchange and efficient parallelization techniques for fast assessments of large datasets. Availability and implementation: The Geospatial Data Quality API is part of the VertNet set of APIs. It can be accessed at http://api-geospatial.vertnet-portal.appspot.com/geospatial and is already implemented in the VertNet data portal for quality reporting. Source code is freely available under GPL license from http://www.github.com/vertnet/api-geospatial. Contact: javier.otegui@gmail.com or rguralnick@flmnh.ufl.edu Supplementary information: Supplementary data are available at Bioinformatics online. PMID:26833340

  3. The geospatial data quality REST API for primary biodiversity data.

    PubMed

    Otegui, Javier; Guralnick, Robert P

    2016-06-01

    We present a REST web service to assess the geospatial quality of primary biodiversity data. It enables access to basic and advanced functions to detect completeness and consistency issues as well as general errors in the provided record or set of records. The API uses JSON for data interchange and efficient parallelization techniques for fast assessments of large datasets. The Geospatial Data Quality API is part of the VertNet set of APIs. It can be accessed at http://api-geospatial.vertnet-portal.appspot.com/geospatial and is already implemented in the VertNet data portal for quality reporting. Source code is freely available under GPL license from http://www.github.com/vertnet/api-geospatial javier.otegui@gmail.com or rguralnick@flmnh.ufl.edu Supplementary data are available at Bioinformatics online. © The Author 2016. Published by Oxford University Press.
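
    A sketch of a single-record assessment against the documented endpoint (the parameter names and response handling are hypothetical; the real API may expect a different field set or a POSTed batch of records):

        import requests

        URL = "http://api-geospatial.vertnet-portal.appspot.com/geospatial"

        # Hypothetical Darwin Core-style fields for one occurrence record
        params = {"decimalLatitude": 42.33, "decimalLongitude": -71.09,
                  "countryCode": "US"}
        resp = requests.get(URL, params=params, timeout=30)
        resp.raise_for_status()
        print(resp.json())  # JSON flags for completeness/consistency issues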

  4. Vocabulary services to support scientific data interoperability

    NASA Astrophysics Data System (ADS)

    Cox, Simon; Mills, Katie; Tan, Florence

    2013-04-01

    Shared vocabularies are a core element in interoperable systems. Vocabularies need to be available at run-time, and where the vocabularies are shared by a distributed community this implies the use of web technology to provide vocabulary services. Given the ubiquity of vocabularies or classifiers in systems, vocabulary services are effectively the base of the interoperability stack. In contemporary knowledge organization systems, a vocabulary item is considered a concept, with the "terms" denoting it appearing as labels. The Simple Knowledge Organization System (SKOS) formalizes this as an RDF Schema (RDFS) application, with a bridge to formal logic in Web Ontology Language (OWL). For maximum utility, a vocabulary should be made available through the following interfaces: * the vocabulary as a whole - at an ontology URI corresponding to a vocabulary document * each item in the vocabulary - at the item URI * summaries, subsets, and resources derived by transformation * through the standard RDF web API - i.e. a SPARQL endpoint * through a query form for human users. However, the vocabulary data model may be leveraged directly in a standard vocabulary API that uses the semantics provided by SKOS. SISSvoc3 [1] accomplishes this as a standard set of URI templates for a vocabulary. Any URI conforming to the template selects a vocabulary subset based on the SKOS properties, including labels (skos:prefLabel, skos:altLabel, rdfs:label) and a subset of the semantic relations (skos:broader, skos:narrower, etc). SISSvoc3 thus provides a RESTful SKOS API to query a vocabulary, but hiding the complexity of SPARQL. It has been implemented using the Linked Data API (LDA) [2], which connects to a SPARQL endpoint. By using LDA, we also get content-negotiation, alternative views, paging, metadata and other functionality provided in a standard way. A number of vocabularies have been formalized in SKOS and deployed by CSIRO, the Australian Bureau of Meteorology (BOM) and their collaborators using SISSvoc3, including: * geologic timescale (multiple versions) * soils classification * definitions from OGC standards * geosciml vocabularies * mining commodities * hyperspectral scalars. Several other agencies in Australia have adopted SISSvoc3 for their vocabularies. SISSvoc3 differs from other SKOS-based vocabulary-access APIs such as GEMET [3] and NVS [4] in that (a) the service is decoupled from the content store, and (b) the service URI is independent of the content URIs. This means that a SISSvoc3 interface can be deployed over any SKOS vocabulary which is available at a SPARQL endpoint. As an example, a SISSvoc3 query and presentation interface has been deployed over the NERC vocabulary service hosted by the BODC, providing a search interface which is not available natively. We use vocabulary services to populate menus in user interfaces, to support data validation, and to configure data conversion routines. Related services built on LDA have also been used as a generic registry interface, and extended for serving gazetteer information. ACKNOWLEDGEMENTS The CSIRO SISSvoc3 implementation is built using the Epimorphics ELDA platform http://code.google.com/p/elda/. We thank Jacqui Githaiga and Terry Rankine for their contributions to SISSvoc design and implementation. REFERENCES 1. SISSvoc3 Specification https://www.seegrid.csiro.au/wiki/Siss/SISSvoc30Specification 2. Linked Data API http://code.google.com/p/linked-data-api/wiki/Specification 3. GEMET https://svn.eionet.europa.eu/projects/Zope/wiki/GEMETWebServiceAPI 4. NVS 2.0 http://vocab.nerc.ac.uk/
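
    To make the access pattern concrete: a SISSvoc-style deployment serves each concept URI with content negotiation over the LDA, so a client can request a machine-readable form directly (the vocabulary URI below is hypothetical):

        import requests

        # Dereference a hypothetical SISSvoc-style concept URI, asking for
        # JSON via the content negotiation provided by the Linked Data API.
        uri = "http://vocab.example.org/geologicage/Jurassic"
        resp = requests.get(uri, headers={"Accept": "application/json"},
                            timeout=30)
        resp.raise_for_status()
        print(resp.json())  # SKOS labels and broader/narrower links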

  5. FirebrowseR: an R client to the Broad Institute’s Firehose Pipeline

    PubMed Central

    Deng, Mario; Brägelmann, Johannes; Kryukov, Ivan; Saraiva-Agostinho, Nuno; Perner, Sven

    2017-01-01

    With its Firebrowse service (http://firebrowse.org/), the Broad Institute is making large-scale multi-platform omics data analysis results publicly available through a Representational State Transfer (REST) Application Programming Interface (API). Querying this database through an API client from an arbitrary programming environment is an essential task, allowing other developers and researchers to focus on their analysis and avoid data wrangling. Hence, as a first result, we developed a workflow to automatically generate, test and deploy such clients for rapid response to API changes. Its underlying infrastructure, a combination of free and publicly available web services, facilitates the development of API clients. It decouples changes in server software from the client software by reacting to changes in the RESTful service and removing direct dependencies on a specific implementation of an API. As a second result, FirebrowseR, an R client to the Broad Institute’s RESTful Firehose Pipeline, is provided as a working example, built by means of the presented workflow. The package’s features are demonstrated by an example analysis of cancer gene expression data. Database URL: https://github.com/mariodeng/ PMID:28062517

  6. FirebrowseR: an R client to the Broad Institute's Firehose Pipeline.

    PubMed

    Deng, Mario; Brägelmann, Johannes; Kryukov, Ivan; Saraiva-Agostinho, Nuno; Perner, Sven

    2017-01-01

    With its Firebrowse service (http://firebrowse.org/) the Broad Institute is making large-scale multi-platform omics data analysis results publicly available through a Representational State Transfer (REST) Application Programming Interface (API). Querying this database through an API client from an arbitrary programming environment is an essential task, allowing other developers and researchers to focus on their analysis and avoid data wrangling. Hence, as a first result, we developed a workflow to automatically generate, test and deploy such clients for rapid response to API changes. Its underlying infrastructure, a combination of free and publicly available web services, facilitates the development of API clients. It decouples changes in server software from the client software by reacting to changes in the RESTful service and removing direct dependencies on a specific implementation of an API. As a second result, FirebrowseR, an R client to the Broad Institute's RESTful Firehose Pipeline, is provided as a working example, which is built by means of the presented workflow. The package's features are demonstrated by an example analysis of cancer gene expression data. Database URL: https://github.com/mariodeng/.
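
    For readers outside R, the same REST service can be queried directly. The sketch below is only a guess at a representative call: the endpoint path, parameter names and response key are assumptions modeled on the Firebrowse conventions described above, and should be checked against the live API documentation.

        import requests

        # Assumed base path and route; consult the Firebrowse docs for exact names.
        BASE = "http://firebrowse.org/api/v1"

        resp = requests.get(f"{BASE}/Samples/mRNASeq",
                            params={"format": "json",
                                    "gene": "TP53",      # illustrative gene
                                    "cohort": "BRCA"})   # illustrative cohort
        resp.raise_for_status()
        # "mRNASeq" as the top-level key is likewise an assumption.
        for record in resp.json().get("mRNASeq", []):
            print(record)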

  7. A Web Service Protocol Realizing Interoperable Internet of Things Tasking Capability

    PubMed Central

    Huang, Chih-Yuan; Wu, Cheng-Hung

    2016-01-01

    The Internet of Things (IoT) is an infrastructure that interconnects uniquely-identifiable devices using the Internet. By interconnecting everyday appliances, various monitoring and physical mashup applications can be constructed to improve people's daily lives. In general, IoT devices provide two main capabilities: a sensing capability and a tasking capability. While the sensing capability is similar to the World-Wide Sensor Web, this research focuses on the tasking capability. Currently, however, IoT devices created by different manufacturers follow different proprietary protocols and are locked into many closed ecosystems. This heterogeneity issue impedes the interconnection between IoT devices and limits the potential of the IoT. To address this issue, this research proposes an interoperable solution called the tasking capability description, which allows users to control different IoT devices using a uniform web service interface. This paper demonstrates the contribution of the proposed solution by interconnecting different IoT devices for different applications. In addition, the proposed solution is integrated with the OGC SensorThings API standard, a Web service standard defined for the IoT sensing capability. Consequently, the Extended SensorThings API can realize both IoT sensing and tasking capabilities in an integrated and interoperable manner. PMID:27589759
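
    A rough sketch of what such a sensing-plus-tasking interface looks like from a client's perspective. The service root, entity ids and tasking parameters are hypothetical, and the payload shape follows general SensorThings entity conventions (OGC SensorThings API Parts 1 and 2) rather than a verified implementation.

        import requests

        # Hypothetical SensorThings service root.
        ROOT = "http://example.org/SensorThings/v1.0"

        # Sensing side (Part 1): list the registered Things.
        things = requests.get(f"{ROOT}/Things").json()["value"]
        for thing in things:
            print(thing["@iot.id"], thing["name"])

        # Tasking side (Part 2 sketch): submit a Task against a
        # TaskingCapability; parameter names here are invented.
        task = {
            "taskingParameters": {"switch": "on"},
            "TaskingCapability": {"@iot.id": 1},
        }
        resp = requests.post(f"{ROOT}/Tasks", json=task)
        print(resp.status_code)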

  8. Achieving Better Buying Power for Mobile Open Architecture Software Systems Through Diverse Acquisition Scenarios

    DTIC Science & Technology

    2016-04-30

    software (OSS) and proprietary (CSS) software elements or remote services (Scacchi, 2002, 2010), eventually including recent efforts to support Web ...specific platforms, including those operating on secured Web/mobile devices. Common Development Technology provides AC development tools and common...transition to OA systems and OSS software elements, specifically for Web and Mobile devices within the realm of C3CB. OA, Open APIs, OSS, and CSS OA

  9. Oceanographic data at your fingertips: the SOCIB App for smartphones

    NASA Astrophysics Data System (ADS)

    Lora, Sebastian; Sebastian, Kristian; Troupin, Charles; Pau Beltran, Joan; Frontera, Biel; Gómara, Sonia; Tintoré, Joaquín

    2015-04-01

    The Balearic Islands Coastal Ocean Observing and Forecasting System (SOCIB, http://www.socib.es) is a multi-platform Marine Research Infrastructure that generates data from nearshore to the open sea in the Western Mediterranean Sea. In line with SOCIB principles of discoverable, freely available and standardized data, an application (App) for smartphones has been designed, with the objective of providing easy access to all the data managed by SOCIB in real-time: underwater gliders, drifters, profiling buoys, research vessel, HF Radar and numerical model outputs (hydrodynamics and waves). The Data Centre, responsible for the acquisition, processing and visualisation of all SOCIB data, developed a REpresentational State Transfer (REST) application programming interface (API) called "DataDiscovery" (http://apps.socib.es/DataDiscovery/). This API is made up of RESTful web services that provide information on platforms, instruments and deployments of instruments, as well as the data themselves. In this way, it is possible to integrate SOCIB data in third-party applications, developed either by the Data Centre or externally. A single point of data distribution not only allows for efficient management but also simplifies the concepts and data access for external developers, who are not necessarily familiar with the concepts and tools related to oceanographic or atmospheric data. The SOCIB App for Android (https://play.google.com/store/apps/details?id=com.socib) uses this API as a "data backend", in such a way that it is straightforward to manage which information is shown by the application without having to modify and re-upload it. The only pieces of information that do not depend on the services are the App "Sections" and "Screens", but the content displayed in each of them is obtained through requests to the web services. The API is not used only for the smartphone app: presently, most SOCIB applications for data visualisation and access rely on it, for instance the corporate web site, the deployment application (Dapp, http://apps.socib.es/dapp/) and the Sea Boards (http://seaboard.socib.es/).
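
    A minimal sketch of consuming such a REST "data backend" from Python. The resource path and field names below are assumptions for illustration, not the documented DataDiscovery routes.

        import requests

        # Assumed resource path under the DataDiscovery API root.
        API = "http://apps.socib.es/DataDiscovery"

        resp = requests.get(f"{API}/platforms",
                            headers={"Accept": "application/json"})
        resp.raise_for_status()
        # Field names ("name", "type") are illustrative assumptions.
        for platform in resp.json():
            print(platform.get("name"), platform.get("type"))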

  10. A Java API for working with PubChem datasets

    PubMed Central

    Southern, Mark R.; Griffin, Patrick R.

    2011-01-01

    Summary: PubChem is a public repository of chemical structures and associated biological activities. The PubChem BioAssay database contains assay descriptions, conditions and readouts, and biological screening results that have been submitted by the biomedical research community. The PubChem web site and Power User Gateway (PUG) web service allow users to interact with the data, and raw files are available via FTP. These resources are helpful to many, but there can also be great benefit in using a software API to manipulate the data. Here, we describe a Java API with entity objects mapped to the PubChem Schema and with wrapper functions for calling the NCBI eUtilities and PubChem PUG web services. PubChem BioAssays and associated chemical compounds can then be queried and manipulated in a local relational database. Features include chemical structure searching and the generation and display of curve fits from stored dose–response experiments, something that is not yet available within PubChem itself. The aim is to provide researchers with a fast, consistent, queryable local resource from which to manipulate PubChem BioAssays in a database-agnostic manner. It is not intended as an end-user tool but to provide a platform for further automation and tools development. Availability: http://code.google.com/p/pubchemdb Contact: southern@scripps.edu PMID:21216779
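
    The Java API described here wraps the XML-based PUG service; for comparison, the same kind of compound record can be fetched over PubChem's URL-based PUG REST layer, as in this small Python sketch (CID 2244 is aspirin).

        import requests

        # PUG REST: compound properties by CID, JSON output.
        url = ("https://pubchem.ncbi.nlm.nih.gov/rest/pug"
               "/compound/cid/2244/property/MolecularFormula,MolecularWeight/JSON")
        props = requests.get(url).json()["PropertyTable"]["Properties"][0]
        print(props["MolecularFormula"], props["MolecularWeight"])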

  11. Tactical Web Services: Using XML and Java Web Services to Conduct Real-Time Net-Centric Sonar Visualization

    DTIC Science & Technology

    2004-09-01

    Rosetti USN U.S. Navy Chesterton, IN 6. Erik Chaum NUWC Newport, RI 7. David Bellino NPRI Newport, RI 8. Dick Nadolink NUWC Newport, RI...found at (http://www.parallelgraphics.com/products/cortona). G. JFREECHART JFreeChart is an open source Java API created by David Gilbert and...www.xj3d.org/. Accessed 3 September 2004. Hunter, David, Kurt Cagle, and Chris Dix, eds. Beginning XML, Second Edition. Indianapolis, IN

  12. eSciMart: Web Platform for Scientific Software Marketplace

    NASA Astrophysics Data System (ADS)

    Kryukov, A. P.; Demichev, A. P.

    2016-10-01

    In this paper we suggest a design for a web marketplace in which both users and providers of scientific application software and databases, presented in the form of web services, have a simultaneous presence. The model underlying the web marketplace is close to the customer-to-customer (C2C) model that has been used successfully on auction sites such as eBay (ebay.com). Unlike the classical C2C model, the suggested marketplace focuses on application software in the form of web services and on standardization of the API through which application software is integrated into the web marketplace. A prototype of such a platform, entitled eSciMart, is currently being developed at SINP MSU.

  13. Unipept web services for metaproteomics analysis.

    PubMed

    Mesuere, Bart; Willems, Toon; Van der Jeugt, Felix; Devreese, Bart; Vandamme, Peter; Dawyndt, Peter

    2016-06-01

    Unipept is an open source web application that is designed for metaproteomics analysis with a focus on interactive data visualization. It is underpinned by a fast index built from UniProtKB and the NCBI taxonomy that enables quick retrieval of all UniProt entries in which a given tryptic peptide occurs. Unipept version 2.4 introduced web services that provide programmatic access to the metaproteomics analysis features. This enables integration of Unipept functionality in custom applications and data processing pipelines. The web services are freely available at http://api.unipept.ugent.be and are open sourced under the MIT license. Contact: Unipept@ugent.be. Supplementary data are available at Bioinformatics online.
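
    A minimal call to the Unipept API from Python. The pept2lca route maps a tryptic peptide to its lowest common ancestor in the NCBI taxonomy; the route and parameter names follow the public documentation of the service and should be checked against the current version.

        import requests

        resp = requests.get("http://api.unipept.ugent.be/api/v1/pept2lca.json",
                            params={"input[]": "AALTER",  # illustrative peptide
                                    "extra": "true"})
        for hit in resp.json():
            print(hit["peptide"], hit["taxon_name"])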

  14. The Vital Planning and Analysis (ViPA) ORBAT Data Service Architecture and Design Overview

    DTIC Science & Technology

    2016-07-01

    ORBAT Data Service is a Web Service for storing force structures to be used by the ADF. The service is composed into what is known as a Service...DRN) and the Defence Secret Network (DSN) by Defence project JP2030. Web Services are software used to share business logic and data across a network through application programming interfaces (APIs

  15. AMBIT RESTful web services: an implementation of the OpenTox application programming interface.

    PubMed

    Jeliazkova, Nina; Jeliazkov, Vedrin

    2011-05-16

    The AMBIT web services package is one of several existing independent implementations of the OpenTox Application Programming Interface and is built according to the principles of the Representational State Transfer (REST) architecture. The Open Source Predictive Toxicology Framework, developed by the partners in the EC FP7 OpenTox project, aims at providing unified access to toxicity data and predictive models, as well as validation procedures. This is achieved by i) an information model, based on a common OWL-DL ontology; ii) links to related ontologies; iii) data and algorithms, available through a standardized REST web services interface, where every compound, data set or predictive method has a unique web address, used to retrieve its Resource Description Framework (RDF) representation, or initiate the associated calculations. The AMBIT web services package has been developed as an extension of AMBIT modules, adding the ability to create (Quantitative) Structure-Activity Relationship (QSAR) models and providing an OpenTox API compliant interface. The representation of data and processing resources in the W3C Resource Description Framework facilitates integrating the resources as Linked Data. By uploading datasets with chemical structures and an arbitrary set of properties, they become automatically available online in several formats. The services provide unified interfaces to several descriptor calculation, machine learning and similarity searching algorithms, as well as to applicability domain and toxicity prediction models. All Toxtree modules for predicting the toxicological hazard of chemical compounds are also integrated within this package. The complexity and diversity of the processing is reduced to the simple paradigm "read data from a web address, perform processing, write to a web address". The online service allows users to easily run predictions without installing any software, and to share online datasets and models. The downloadable web application allows researchers to set up an arbitrary number of service instances for specific purposes and at suitable locations. These services could be used as a distributed framework for processing resource-intensive tasks and data sharing, or in a fully independent way, according to specific needs. The advantage of exposing the functionality via the OpenTox API is seamless interoperability, not only within a single web application but also in a network of distributed services. Last but not least, the services provide a basis for building web mashups and end-user applications with friendly GUIs, as well as for embedding the functionalities in existing workflow systems.

  16. AMBIT RESTful web services: an implementation of the OpenTox application programming interface

    PubMed Central

    2011-01-01

    The AMBIT web services package is one of several existing independent implementations of the OpenTox Application Programming Interface and is built according to the principles of the Representational State Transfer (REST) architecture. The Open Source Predictive Toxicology Framework, developed by the partners in the EC FP7 OpenTox project, aims at providing unified access to toxicity data and predictive models, as well as validation procedures. This is achieved by i) an information model, based on a common OWL-DL ontology; ii) links to related ontologies; iii) data and algorithms, available through a standardized REST web services interface, where every compound, data set or predictive method has a unique web address, used to retrieve its Resource Description Framework (RDF) representation, or initiate the associated calculations. The AMBIT web services package has been developed as an extension of AMBIT modules, adding the ability to create (Quantitative) Structure-Activity Relationship (QSAR) models and providing an OpenTox API compliant interface. The representation of data and processing resources in the W3C Resource Description Framework facilitates integrating the resources as Linked Data. By uploading datasets with chemical structures and an arbitrary set of properties, they become automatically available online in several formats. The services provide unified interfaces to several descriptor calculation, machine learning and similarity searching algorithms, as well as to applicability domain and toxicity prediction models. All Toxtree modules for predicting the toxicological hazard of chemical compounds are also integrated within this package. The complexity and diversity of the processing is reduced to the simple paradigm "read data from a web address, perform processing, write to a web address". The online service allows users to easily run predictions without installing any software, and to share online datasets and models. The downloadable web application allows researchers to set up an arbitrary number of service instances for specific purposes and at suitable locations. These services could be used as a distributed framework for processing resource-intensive tasks and data sharing, or in a fully independent way, according to specific needs. The advantage of exposing the functionality via the OpenTox API is seamless interoperability, not only within a single web application but also in a network of distributed services. Last but not least, the services provide a basis for building web mashups and end-user applications with friendly GUIs, as well as for embedding the functionalities in existing workflow systems. PMID:21575202
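
    The addressing paradigm described above reduces, on the client side, to plain HTTP with content negotiation: every resource has a URI whose RDF representation is fetched by asking for it. The sketch below is illustrative; the host and dataset URI are hypothetical.

        import requests

        # Hypothetical OpenTox-style resource URI.
        uri = "http://example.org/ambit/dataset/1"

        # Ask for the RDF/XML representation of the resource.
        rdf = requests.get(uri, headers={"Accept": "application/rdf+xml"})
        print(rdf.headers.get("Content-Type"))
        print(rdf.text[:400])  # first few hundred characters of the RDF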

  17. Design for Connecting Spatial Data Infrastructures with Sensor Web (sensdi)

    NASA Astrophysics Data System (ADS)

    Bhattacharya, D.; M., M.

    2016-06-01

    Integrating Sensor Web With Spatial Data Infrastructures (SENSDI) aims to extend SDIs with sensor web enablement, converging geospatial and built infrastructure, and to implement test cases with sensor data and SDI. The research seeks to harness the sensed environment by utilizing domain-specific sensor data to create a generalized sensor web framework. The challenges are semantic enablement for Spatial Data Infrastructures and connecting the interfaces of SDI with the interfaces of the Sensor Web. The proposed research plan is to identify sensor data sources, set up an open-source SDI, match the APIs and functions between Sensor Web and SDI, and carry out case studies such as hazard and urban applications. We take up co-operative development of SDI best practices to enable a new realm of a location-enabled and semantically enriched World Wide Web - the "Geospatial Web" or "Geosemantic Web" - by setting up a one-to-one correspondence between WMS, WFS, WCS, Metadata and the 'Sensor Observation Service' (SOS); the 'Sensor Planning Service' (SPS); the 'Sensor Alert Service' (SAS); and a service that facilitates asynchronous message interchange between users and services, and between two OGC-SWE services, called the 'Web Notification Service' (WNS). In conclusion, it is of importance to geospatial studies to integrate SDI with the Sensor Web. The integration can be done by merging the common OGC interfaces of SDI and Sensor Web. Multi-usability studies to validate the integration have to be undertaken as future research.
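
    As an illustration of the Sensor Web side of the proposed mapping, here is a key-value-pair GetObservation request against a hypothetical SOS 2.0 endpoint; the service/version/request parameters are standard OGC SOS, while the endpoint and parameter values are placeholders.

        import requests

        params = {
            "service": "SOS",
            "version": "2.0.0",
            "request": "GetObservation",
            "offering": "temperature_offering",       # placeholder
            "observedProperty": "air_temperature",    # placeholder
        }
        resp = requests.get("http://example.org/sos", params=params)
        print(resp.status_code, resp.headers.get("Content-Type"))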

  18. A Web-Based Interactive Mapping System of State Wide School Performance: Integrating Google Maps API Technology into Educational Achievement Data

    ERIC Educational Resources Information Center

    Wang, Kening; Mulvenon, Sean W.; Stegman, Charles; Anderson, Travis

    2008-01-01

    Google Maps API (Application Programming Interface), released in late June 2005 by Google, is an amazing technology that allows users to embed Google Maps in their own Web pages with JavaScript. Google Maps API has accelerated the development of new Google Maps based applications. This article reports a Web-based interactive mapping system…

  19. The Earth Observatory Natural Event Tracker (EONET): An API for Matching Natural Events to GIBS Imagery

    NASA Astrophysics Data System (ADS)

    Ward, K.

    2015-12-01

    Hidden within the terabytes of imagery in NASA's Global Imagery Browse Services (GIBS) collection are hundreds of daily natural events. Some events are newsworthy, devastating, and visibly obvious at a global scale, others are merely regional curiosities. Regardless of the scope and significance of any one event, it is likely that multiple GIBS layers can be viewed to provide a multispectral, dataset-based view of the event. To facilitate linking between the discrete event and the representative dataset imagery, NASA's Earth Observatory Group has developed a prototype application programming interface (API): the Earth Observatory Natural Event Tracker (EONET). EONET supports an API model that allows users to retrieve event-specific metadata--date/time, location, and type (wildfire, storm, etc.)--and web service layer-specific metadata which can be used to link to event-relevant dataset imagery in GIBS. GIBS' ability to ingest many near real time datasets, combined with its growing archive of past imagery, means that API users will be able to develop client applications that not only show ongoing events but can also look at imagery from before and after. In our poster, we will present the API and show examples of its use.
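
    A small sketch of querying the prototype API for open events; the host and version path reflect the prototype stage described here and may since have changed.

        import requests

        resp = requests.get("https://eonet.sci.gsfc.nasa.gov/api/v2.1/events",
                            params={"status": "open", "limit": 5})
        for event in resp.json()["events"]:
            # Each event carries an id, title and one or more categories.
            print(event["id"], event["title"], event["categories"][0]["title"])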

  20. Application and API for Real-time Visualization of Ground-motions and Tsunami

    NASA Astrophysics Data System (ADS)

    Aoi, S.; Kunugi, T.; Suzuki, W.; Kubo, T.; Nakamura, H.; Azuma, H.; Fujiwara, H.

    2015-12-01

    Due to recent progress in seismographs and communication environments, real-time and continuous ground-motion observation has become technically and economically feasible. K-NET and KiK-net, the nationwide strong-motion networks operated by NIED, cover all of Japan with about 1750 stations in total. More than half of the stations transmit ground-motion indexes and/or waveform data every second. Traditionally, strong-motion data were recorded by event-triggered instruments on non-continuous telephone lines that were connected only after an earthquake. While the data from such networks mainly contribute to preparations for future earthquakes, the huge amount of real-time data from a dense network is expected to contribute directly to the mitigation of ongoing earthquake disasters, e.g., by automatically shutting down plants and helping decision-making for initial response. By generating distribution maps of these indexes and uploading them to the website, we implemented the real-time ground-motion monitoring system, Kyoshin (strong-motion in Japanese) monitor. This web service (www.kyoshin.bosai.go.jp) started in 2008 and anyone can grasp the current ground motions of Japan. However, this service provides only ground-motion maps in GIF format; to take full advantage of real-time strong-motion data for mitigating ongoing disasters, digital data are important. We have developed a WebAPI to provide real-time data and related information such as ground motions (5 km mesh) and arrival times estimated from EEW (earthquake early warning). All response data from this WebAPI are in JSON format and are easy to parse. We also developed a Kyoshin monitor application for smartphones, 'Kmoni view', using the API. In this application, ground motions estimated from EEW are overlaid on the map together with the observed one-second-interval indexes. The application can play back previous earthquakes for demonstrations or disaster drills. In a mobile environment, data traffic and battery are limited, and it is not practical to visualize all the data continuously; the application therefore has an automatic start-up (pop-up) function triggered by EEW. A similar WebAPI and application for tsunami are being prepared using the pressure data recorded by the dense offshore observation network (S-net), which is under construction along the Japan Trench.
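
    Since the WebAPI itself is not specified in the abstract, the following polling loop is purely hypothetical: the URL and JSON field names are invented to illustrate the one-second JSON pattern described.

        import time
        import requests

        # Invented URL and fields, for illustration only.
        URL = "http://example.org/kyoshin/api/realtime"

        for _ in range(3):
            data = requests.get(URL, params={"format": "json"}).json()
            print(data.get("time"), data.get("max_intensity"))
            time.sleep(1.0)  # indexes are published at one-second intervals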

  1. Phylesystem: a git-based data store for community-curated phylogenetic estimates.

    PubMed

    McTavish, Emily Jane; Hinchliff, Cody E; Allman, James F; Brown, Joseph W; Cranston, Karen A; Holder, Mark T; Rees, Jonathan A; Smith, Stephen A

    2015-09-01

    Phylogenetic estimates from published studies can be archived using general platforms like Dryad (Vision, 2010) or TreeBASE (Sanderson et al., 1994). Such services fulfill a crucial role in ensuring transparency and reproducibility in phylogenetic research. However, digital tree data files often require some editing (e.g. rerooting) to improve the accuracy and reusability of the phylogenetic statements. Furthermore, establishing the mapping between tip labels used in a tree and taxa in a single common taxonomy dramatically improves the ability of other researchers to reuse phylogenetic estimates. As the process of curating a published phylogenetic estimate is not error-free, retaining a full record of the provenance of edits to a tree is crucial for openness, allowing editors to receive credit for their work and making errors introduced during curation easier to correct. Here, we report the development of software infrastructure to support the open curation of phylogenetic data by the community of biologists. The backend of the system provides an interface for the standard database operations of creating, reading, updating and deleting records by making commits to a git repository. The record of the history of edits to a tree is preserved by git's version control features. Hosting this data store on GitHub (http://github.com/) provides open access to the data store using tools familiar to many developers. We have deployed a server running the 'phylesystem-api', which wraps the interactions with git and GitHub. The Open Tree of Life project has also developed and deployed a JavaScript application that uses the phylesystem-api and other web services to enable input and curation of published phylogenetic statements. Source code for the web service layer is available at https://github.com/OpenTreeOfLife/phylesystem-api. The data store can be cloned from: https://github.com/OpenTreeOfLife/phylesystem. A web application that uses the phylesystem web services is deployed at http://tree.opentreeoflife.org/curator. Code for that tool is available from https://github.com/OpenTreeOfLife/opentree. mtholder@gmail.com.

  2. Task 28: Web Accessible APIs in the Cloud Trade Study

    NASA Technical Reports Server (NTRS)

    Gallagher, James; Habermann, Ted; Jelenak, Aleksandar; Lee, Joe; Potter, Nathan; Yang, Muqun

    2017-01-01

    This study explored three candidate architectures for serving NASA Earth Science Hierarchical Data Format Version 5 (HDF5) data via Hyrax running on Amazon Web Services (AWS). We studied the cost and performance of each architecture using several representative use cases. The objectives of the project were to: (1) conduct a trade study to identify one or more high-performance integrated solutions for storing and retrieving NASA HDF5 and Network Common Data Format Version 4 (netCDF4) data in a cloud (web object store) environment, the target environment being the Amazon Web Services (AWS) Simple Storage Service (S3); (2) conduct the level of software development needed to properly evaluate solutions in the trade study and to obtain the benchmarking metrics required as input into a government decision on potential follow-on prototyping; and (3) develop a cloud cost model for the preferred data storage solution (or solutions) that accounts for different granulation and aggregation schemes as well as cost and performance trades.
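
    One low-level building block common to architectures of this kind is the ranged GET, which lets a server read a subset of an HDF5 granule from S3 without fetching the whole object. A sketch with boto3 follows; the bucket and key are placeholders.

        import boto3

        s3 = boto3.client("s3")

        # Fetch only the first kilobyte of the object, e.g. an HDF5 superblock,
        # rather than downloading the entire granule.
        resp = s3.get_object(Bucket="example-bucket",
                             Key="granule.h5",
                             Range="bytes=0-1023")
        header_bytes = resp["Body"].read()
        print(len(header_bytes), "bytes read")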

  3. Web scraping technologies in an API world.

    PubMed

    Glez-Peña, Daniel; Lourenço, Anália; López-Fernández, Hugo; Reboiro-Jato, Miguel; Fdez-Riverola, Florentino

    2014-09-01

    Web services are the de facto standard in biomedical data integration. However, there are data integration scenarios that cannot be fully covered by Web services. A number of Web databases and tools do not support Web services, and existing Web services do not cover all possible user data demands. As a consequence, Web data scraping, one of the oldest techniques for extracting Web contents, is still in a position to offer a valid and valuable service to a wide range of bioinformatics applications, ranging from simple extraction robots to online meta-servers. This article reviews existing scraping frameworks and tools, identifying their strengths and limitations in terms of extraction capabilities. The main focus is on showing how straightforward it is today to set up a data scraping pipeline with minimal programming effort and answer a number of practical needs. For exemplification purposes, we introduce a biomedical data extraction scenario where the desired data sources, well known in clinical microbiology and similar domains, do not yet offer programmatic interfaces. Moreover, we describe the operation of WhichGenes and PathJam, two bioinformatics meta-servers that use scraping as a means to cope with gene set enrichment analysis.
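
    A minimal scraping pipeline of the kind the review surveys, using the requests and BeautifulSoup libraries; the URL and table layout are placeholders.

        import requests
        from bs4 import BeautifulSoup

        # Fetch a page that has no programmatic interface.
        html = requests.get("http://example.org/strains.html").text
        soup = BeautifulSoup(html, "html.parser")

        # Extract the cell text of every table row.
        for row in soup.select("table tr"):
            cells = [td.get_text(strip=True) for td in row.find_all("td")]
            if cells:
                print(cells)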

  4. elevatr: Access Elevation Data from Various APIs | Science ...

    EPA Pesticide Factsheets

    Several web services are available that provide access to elevation data. This package provides access to several of those services and returns elevation data either as a SpatialPointsDataFrame from point elevation services or as a raster object from raster elevation services. Currently, the package supports access to the Mapzen Elevation Service, the Mapzen Terrain Service, and the USGS Elevation Point Query Service. The R language for statistical computing is increasingly used for spatial data analysis. This R package, elevatr, responds to this by providing access to elevation data from various sources directly from R. The impact of `elevatr` is that it will 1) facilitate spatial analysis in R by providing access to a foundational dataset for many types of analyses (e.g. hydrology, limnology), 2) open up a new set of users and uses for APIs widely used outside of R, and 3) provide an excellent example of federal open-source development as promoted by the Federal Source Code Policy (https://sourcecode.cio.gov/).
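
    elevatr itself is an R package, but the underlying USGS Elevation Point Query Service is plain HTTP. This Python sketch uses the service URL and response keys as they were documented at the time; the service has since been reorganized, so treat both as assumptions.

        import requests

        resp = requests.get("https://nationalmap.gov/epqs/pqs.php",
                            params={"x": -105.03, "y": 40.05,
                                    "units": "Meters", "output": "json"})
        # Response key names as documented at the time; may have changed.
        query = resp.json()["USGS_Elevation_Point_Query_Service"]["Elevation_Query"]
        print(query["Elevation"], "m")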

  5. Improving data discoverability, accessibility, and interoperability with the Esri ArcGIS Platform at the NASA Atmospheric Science Data Center (ASDC).

    NASA Astrophysics Data System (ADS)

    Tisdale, M.

    2017-12-01

    NASA's Atmospheric Science Data Center (ASDC) is operationally using the Esri ArcGIS Platform to improve data discoverability, accessibility and interoperability to meet the diversifying user requirements of government, private, public and academic communities. The ASDC is actively working to provide its mission-essential datasets as ArcGIS Image Services, Open Geospatial Consortium (OGC) Web Mapping Services (WMS), and OGC Web Coverage Services (WCS), while leveraging the ArcGIS multidimensional mosaic dataset structure. Science teams at ASDC are utilizing these services through the development of applications using the Web AppBuilder for ArcGIS and the ArcGIS API for JavaScript. These services provide greater exposure of ASDC data holdings to the GIS community and allow for broader sharing and distribution to various end users. These capabilities provide interactive visualization tools and improved geospatial analytical tools for mission-critical understanding in the areas of the Earth's radiation budget, clouds, aerosols, and tropospheric chemistry. The presentation will cover how the ASDC is developing geospatial web services and applications to improve data discoverability, accessibility, and interoperability.

  6. Complaint go: an online complaint registration system using web services and android

    NASA Astrophysics Data System (ADS)

    Mareeswari, V.; Gopalakrishnan, V.

    2017-11-01

    In many countries, municipal corporations (MCs) are the local governing bodies that help maintain and run cities. An MC may need to install cameras and other surveillance devices to ensure the city is running smoothly and efficiently, and it is important for an MC to know about faults occurring within the city. In practice this is possible either by installing sensors and cameras or by enabling citizens to address the authorities directly. The day-to-day operations and working of the city are handled by governing bodies known as Government Authorities (GAs). To maintain a large city, the GA needs to learn of any issue or fault, either through sensors/CCTV cameras or by enabling citizens to complain about these issues. The second option is generally preferred because it provides proper, tangible information, and a GA typically allows its residents to register grievances through several mediums. In this application, citizens can send complaints directly from their smartphones to the responsible officials. Several APIs functioning as web services make it easier to register a complaint, such as the Google Places API to detect the user's current location and show it on a map. A web portal, well supported by different web services, is used to process the various complaints.
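
    The location lookup the app relies on corresponds, server-side, to a call like the following against the Google Places Nearby Search endpoint; YOUR_KEY is a placeholder for a real Google API key, and the coordinates are illustrative.

        import requests

        resp = requests.get(
            "https://maps.googleapis.com/maps/api/place/nearbysearch/json",
            params={"location": "12.9716,77.5946",  # illustrative lat,lng
                    "radius": 500,
                    "key": "YOUR_KEY"})
        for place in resp.json().get("results", [])[:5]:
            print(place["name"], place.get("vicinity"))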

  7. DAVID-WS: a stateful web service to facilitate gene/protein list analysis

    PubMed Central

    Jiao, Xiaoli; Sherman, Brad T.; Huang, Da Wei; Stephens, Robert; Baseler, Michael W.; Lane, H. Clifford; Lempicki, Richard A.

    2012-01-01

    Summary: The database for annotation, visualization and integrated discovery (DAVID), which can be freely accessed at http://david.abcc.ncifcrf.gov/, is a web-based online bioinformatics resource that aims to provide tools for the functional interpretation of large lists of genes/proteins. It has been used by researchers from more than 5000 institutes worldwide, with a daily submission rate of ∼1200 gene lists from ∼400 unique researchers, and has been cited by more than 6000 scientific publications. However, the current web interface does not support programmatic access to DAVID, and the uniform resource locator (URL)-based application programming interface (API) has a limit on URL size and is stateless in nature as it uses URL request and response messages to communicate with the server, without keeping any state-related details. DAVID-WS (web service) has been developed to automate user tasks by providing stateful web services to access DAVID programmatically without the need for human interactions. Availability: The web service and sample clients (written in Java, Perl, Python and Matlab) are made freely available under the DAVID License at http://david.abcc.ncifcrf.gov/content.jsp?file=WS.html. Contact: xiaoli.jiao@nih.gov; rlempicki@nih.gov PMID:22543366

  8. DAVID-WS: a stateful web service to facilitate gene/protein list analysis.

    PubMed

    Jiao, Xiaoli; Sherman, Brad T; Huang, Da Wei; Stephens, Robert; Baseler, Michael W; Lane, H Clifford; Lempicki, Richard A

    2012-07-01

    The database for annotation, visualization and integrated discovery (DAVID), which can be freely accessed at http://david.abcc.ncifcrf.gov/, is a web-based online bioinformatics resource that aims to provide tools for the functional interpretation of large lists of genes/proteins. It has been used by researchers from more than 5000 institutes worldwide, with a daily submission rate of ∼1200 gene lists from ∼400 unique researchers, and has been cited by more than 6000 scientific publications. However, the current web interface does not support programmatic access to DAVID, and the uniform resource locator (URL)-based application programming interface (API) has a limit on URL size and is stateless in nature as it uses URL request and response messages to communicate with the server, without keeping any state-related details. DAVID-WS (web service) has been developed to automate user tasks by providing stateful web services to access DAVID programmatically without the need for human interactions. The web service and sample clients (written in Java, Perl, Python and Matlab) are made freely available under the DAVID License at http://david.abcc.ncifcrf.gov/content.jsp?file=WS.html.

  9. Using RxNorm and NDF-RT to classify medication data extracted from electronic health records: experiences from the Rochester Epidemiology Project.

    PubMed

    Pathak, Jyotishman; Murphy, Sean P; Willaert, Brian N; Kremers, Hilal M; Yawn, Barbara P; Rocca, Walter A; Chute, Christopher G

    2011-01-01

    RxNorm and NDF-RT, published by the National Library of Medicine (NLM) and the Veterans Affairs (VA), respectively, are two publicly available federal medication terminologies. In this study, we evaluate the applicability of RxNorm and the National Drug File-Reference Terminology (NDF-RT) for the extraction and classification of medication data retrieved, using structured querying and natural language processing techniques, from electronic health records at two different medical centers within the Rochester Epidemiology Project (REP). Specifically, we explore how mappings between RxNorm concept codes and NDF-RT drug classes can be leveraged for hierarchical organization and grouping of REP medication data, identify gaps and coverage issues, and analyze NLM's recently released NDF-RT Web service API. Our study concludes that RxNorm and NDF-RT can be applied together for the classification of medication data extracted from multiple EHR systems, although several issues and challenges remain to be addressed. We further conclude that the Web service APIs developed by the NLM provide useful functionality for such activities.
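
    NLM exposes RxNorm through the RxNav REST services; resolving a drug name to its RxNorm concept identifier, for example, is a single call, as in this small sketch.

        import requests

        # Look up the RxNorm concept id (RxCUI) for a drug name.
        resp = requests.get("https://rxnav.nlm.nih.gov/REST/rxcui.json",
                            params={"name": "aspirin"})
        print(resp.json()["idGroup"].get("rxnormId"))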

  10. The Geodetic Seamless Archive Centers Service Layer: A System Architecture for Federating Geodesy Data Repositories

    NASA Astrophysics Data System (ADS)

    McWhirter, J.; Boler, F. M.; Bock, Y.; Jamason, P.; Squibb, M. B.; Noll, C. E.; Blewitt, G.; Kreemer, C. W.

    2010-12-01

    Three geodesy Archive Centers, the Scripps Orbit and Permanent Array Center (SOPAC), NASA's Crustal Dynamics Data Information System (CDDIS) and UNAVCO, are engaged in a joint effort to define and develop a common Web Service Application Programming Interface (API) for accessing geodetic data holdings. This effort is funded by the NASA ROSES ACCESS Program to modernize the original GPS Seamless Archive Centers (GSAC) technology, which was developed in the 1990s. A new web service interface, the GSAC-WS, is being developed to provide uniform and expanded mechanisms through which users can access our data repositories. In total, our respective archives hold tens of millions of files and contain a rich collection of site/station metadata. Though we serve similar user communities, we currently provide a range of different access methods, query services and metadata formats. This leads to a lack of consistency in the user's experience and a duplication of engineering effort. The GSAC-WS API and its reference implementation in an underlying Java-based GSAC Service Layer (GSL) support metadata and data queries into site/station-oriented data archives. The general nature of this API makes it applicable to a broad range of data systems. The overall goals of this project include providing consistent and rich query interfaces for end users and client programs, developing enabling technology to facilitate third-party repositories in building these web service capabilities, and enabling data queries across a collection of federated GSAC-WS enabled repositories. A fundamental challenge faced in this project is to provide a common suite of query services across a heterogeneous collection of data while enabling each repository to expose its specific metadata holdings. To address this challenge we are developing a "capabilities"-based service where a repository can describe its specific query and metadata capabilities. Furthermore, the architecture of the GSL is based on a model-view paradigm that decouples the underlying data model semantics from particular representations of the data model. This will allow GSAC-WS enabled repositories to evolve their service offerings to incorporate new metadata definition formats (e.g., ISO-19115, FGDC, JSON, etc.) and new techniques for accessing their holdings. Building on the core GSAC-WS implementations, the project is also developing a federated/distributed query service. This service will seamlessly integrate with the GSAC Service Layer and will support data and metadata queries across a collection of federated GSAC repositories.

  11. Using secure web services to visualize poison center data for nationwide biosurveillance: a case study.

    PubMed

    Savel, Thomas G; Bronstein, Alvin; Duck, William; Rhodes, M Barry; Lee, Brian; Stinn, John; Worthen, Katherine

    2010-01-01

    Real-time surveillance systems are valuable for timely response to public health emergencies. It has been challenging to leverage existing surveillance systems in state and local communities, and, using a centralized architecture, add new data sources and analytical capacity. Because this centralized model has proven to be difficult to maintain and enhance, the US Centers for Disease Control and Prevention (CDC) has been examining the ability to use a federated model based on secure web services architecture, with data stewardship remaining with the data provider. As a case study for this approach, the American Association of Poison Control Centers and the CDC extended an existing data warehouse via a secure web service, and shared aggregate clinical effects and case counts data by geographic region and time period. To visualize these data, CDC developed a web browser-based interface, Quicksilver, which leveraged the Google Maps API and Flot, a javascript plotting library. Two iterations of the NPDS web service were completed in 12 weeks. The visualization client, Quicksilver, was developed in four months. This implementation of web services combined with a visualization client represents incremental positive progress in transitioning national data sources like BioSense and NPDS to a federated data exchange model. Quicksilver effectively demonstrates how the use of secure web services in conjunction with a lightweight, rapidly deployed visualization client can easily integrate isolated data sources for biosurveillance.

  12. A Data Services Upgrade for Advanced Composition Explorer (ACE) Data

    NASA Astrophysics Data System (ADS)

    Davis, A. J.; Hamell, G.

    2008-12-01

    Since early in 1998, NASA's Advanced Composition Explorer (ACE) spacecraft has provided continuous measurements of solar wind, interplanetary magnetic field, and energetic particle activity from L1, located approximately 0.01 AU sunward of Earth. The spacecraft has enough fuel to stay in orbit about L1 until ~2024. The ACE Science Center (ASC) provides access to ACE data, and performs level 1 and browse data processing for the science instruments. Thanks to a NASA Data Services Upgrade grant, we have recently retooled our legacy web interface to ACE data, enhancing data subsetting capabilities and improving online plotting options. We have also integrated a new application programming interface (API) and we are working to ensure that it will be compatible with emerging Virtual Observatory (VO) data services standards. The new API makes extensive use of metadata created using the Space Physics Archive Search and Extract (SPASE) data model. We describe these recent improvements to the ACE Science Center data services, and our plans for integrating these services into the VO system.

  13. WMT: The CSDMS Web Modeling Tool

    NASA Astrophysics Data System (ADS)

    Piper, M.; Hutton, E. W. H.; Overeem, I.; Syvitski, J. P.

    2015-12-01

    The Community Surface Dynamics Modeling System (CSDMS) has a mission to enable model use and development for research in earth surface processes. CSDMS strives to expand the use of quantitative modeling techniques, promotes best practices in coding, and advocates for the use of open-source software. To streamline and standardize access to models, CSDMS has developed the Web Modeling Tool (WMT), a RESTful web application with a client-side graphical interface and a server-side database and API that allows users to build coupled surface dynamics models in a web browser on a personal computer or a mobile device, and run them in a high-performance computing (HPC) environment. With WMT, users can: design a model from a set of components; edit component parameters; save models to a web-accessible server; share saved models with the community; submit runs to an HPC system; and download simulation results. The WMT client is an Ajax application written in Java with GWT, which allows developers to employ object-oriented design principles and development tools such as Ant, Eclipse and JUnit. For deployment on the web, the GWT compiler translates Java code to optimized and obfuscated JavaScript. The WMT client is supported on Firefox, Chrome, Safari, and Internet Explorer. The WMT server, written in Python and SQLite, is a layered system, with each layer exposing a web service API: wmt-db, a database of component, model, and simulation metadata and output; wmt-api, which configures and connects components; and wmt-exe, which launches simulations on remote execution servers. The database server provides, as JSON-encoded messages, the metadata for users to couple model components, including descriptions of component exchange items, uses and provides ports, and input parameters. Execution servers are network-accessible computational resources, ranging from HPC systems to desktop computers, containing the CSDMS software stack for running a simulation. Once a simulation completes, its output, in NetCDF, is packaged and uploaded to a data server where it is stored and from which a user can download it as a single compressed archive file.

  14. Web-based network analysis and visualization using CellMaps

    PubMed Central

    Salavert, Francisco; García-Alonso, Luz; Sánchez, Rubén; Alonso, Roberto; Bleda, Marta; Medina, Ignacio; Dopazo, Joaquín

    2016-01-01

    Summary: CellMaps is an HTML5 open-source web tool that allows displaying, editing, exploring and analyzing biological networks as well as integrating metadata into them. Computations and analyses are remotely executed in high-end servers, and all the functionalities are available through RESTful web services. CellMaps can easily be integrated in any web page by using an available JavaScript API. Availability and Implementation: The application is available at: http://cellmaps.babelomics.org/ and the code can be found in: https://github.com/opencb/cell-maps. The client is implemented in JavaScript and the server in C and Java. Contact: jdopazo@cipf.es Supplementary information: Supplementary data are available at Bioinformatics online. PMID:27296979

  15. Web-based network analysis and visualization using CellMaps.

    PubMed

    Salavert, Francisco; García-Alonso, Luz; Sánchez, Rubén; Alonso, Roberto; Bleda, Marta; Medina, Ignacio; Dopazo, Joaquín

    2016-10-01

    CellMaps is an HTML5 open-source web tool that allows displaying, editing, exploring and analyzing biological networks as well as integrating metadata into them. Computations and analyses are remotely executed in high-end servers, and all the functionalities are available through RESTful web services. CellMaps can easily be integrated in any web page by using an available JavaScript API. The application is available at http://cellmaps.babelomics.org/ and the code can be found at https://github.com/opencb/cell-maps. The client is implemented in JavaScript and the server in C and Java. Contact: jdopazo@cipf.es. Supplementary data are available at Bioinformatics online.

  16. Agricultural Census 2012: Publishing Mashable GIS Big Data Services

    NASA Astrophysics Data System (ADS)

    Mueller, R.

    2014-12-01

    The 2012 Agricultural Census was released by the US Department of Agriculture (USDA) on May 2nd, 2014; it is published on a quinquennial basis and covers all facets of American production agriculture. The Agricultural Census is a comprehensive source of uniform published agricultural data for every state and county in the US. This is the first Agricultural Census to be disseminated with web mapping services using REST APIs. USDA developed an open GIS mashable web portal that depicts over 250 maps on Crops and Plants, Economics, Farms, Livestock and Animals, and Operators. These mapping services, written in JavaScript, replace the traditional static maps published as the Ag Atlas. Web users can now visualize, interact with, query, and download the Agricultural Census data in ways not previously possible. Stakeholders will now be able to leverage this data for activities such as community planning, agribusiness location suitability analytics, availability of loans/funds, service center locations and staffing, and farm programs and policies. Additional sites serving compatible mashable USDA Big Data web services are as follows: The Food Environment Atlas, The Atlas of Rural and Small-Town America, The Farm Program Atlas, SNAP Data System, CropScape, and VegScape. All portals use a similar data organization scheme of "Categories" and "Maps", providing interactive mashable web services for agricultural stakeholders to exploit.

  17. The Ensembl REST API: Ensembl Data for Any Language.

    PubMed

    Yates, Andrew; Beal, Kathryn; Keenan, Stephen; McLaren, William; Pignatelli, Miguel; Ritchie, Graham R S; Ruffier, Magali; Taylor, Kieron; Vullo, Alessandro; Flicek, Paul

    2015-01-01

    We present a Web service to access Ensembl data using Representational State Transfer (REST). The Ensembl REST server enables the easy retrieval of a wide range of Ensembl data by most programming languages, using standard formats such as JSON and FASTA while minimizing client work. We also introduce bindings to the popular Ensembl Variant Effect Predictor tool permitting large-scale programmatic variant analysis independent of any specific programming language. The Ensembl REST API can be accessed at http://rest.ensembl.org and source code is freely available under an Apache 2.0 license from http://github.com/Ensembl/ensembl-rest.
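
    A minimal example of the kind of retrieval the server enables, looking up a gene symbol and printing its genomic coordinates.

        import requests

        # Look up a human gene by symbol; ask for JSON.
        resp = requests.get(
            "https://rest.ensembl.org/lookup/symbol/homo_sapiens/BRCA2",
            headers={"Content-Type": "application/json"})
        gene = resp.json()
        print(gene["id"], gene["seq_region_name"], gene["start"], gene["end"])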

  18. Using Secure Web Services to Visualize Poison Center Data for Nationwide Biosurveillance: A Case Study

    PubMed Central

    Savel, Thomas G; Bronstein, Alvin; Duck, William; Rhodes, M. Barry; Lee, Brian; Stinn, John; Worthen, Katherine

    2010-01-01

    Objectives Real-time surveillance systems are valuable for timely response to public health emergencies. It has been challenging to leverage existing surveillance systems in state and local communities, and, using a centralized architecture, add new data sources and analytical capacity. Because this centralized model has proven to be difficult to maintain and enhance, the US Centers for Disease Control and Prevention (CDC) has been examining the ability to use a federated model based on secure web services architecture, with data stewardship remaining with the data provider. Methods As a case study for this approach, the American Association of Poison Control Centers and the CDC extended an existing data warehouse via a secure web service, and shared aggregate clinical effects and case counts data by geographic region and time period. To visualize these data, CDC developed a web browser-based interface, Quicksilver, which leveraged the Google Maps API and Flot, a javascript plotting library. Results Two iterations of the NPDS web service were completed in 12 weeks. The visualization client, Quicksilver, was developed in four months. Discussion This implementation of web services combined with a visualization client represents incremental positive progress in transitioning national data sources like BioSense and NPDS to a federated data exchange model. Conclusion Quicksilver effectively demonstrates how the use of secure web services in conjunction with a lightweight, rapidly deployed visualization client can easily integrate isolated data sources for biosurveillance. PMID:23569581

  19. Detecting Runtime Anomalies in AJAX Applications through Trace Analysis

    DTIC Science & Technology

    2011-08-10

    statements by adding the instrumentation to the GWT UI classes, leaving the user code untouched. Some content management frameworks such as Drupal [12..."Google web toolkit." http://code.google.com/webtoolkit/. [12] "Form generation – drupal api." http://api.drupal.org/api/group/form_api/6.

  20. The Climate Data Analytic Services (CDAS) Framework.

    NASA Astrophysics Data System (ADS)

    Maxwell, T. P.; Duffy, D.

    2016-12-01

    Faced with unprecedented growth in climate data volume and demand, NASA has developed the Climate Data Analytic Services (CDAS) framework. This framework enables scientists to execute data processing workflows combining common analysis operations in a high-performance environment close to the massive data stores at NASA. The data is accessed in standard (NetCDF, HDF, etc.) formats in a POSIX file system and processed using vetted climate data analysis tools (ESMF, CDAT, NCO, etc.). A dynamic caching architecture enables interactive response times. CDAS utilizes Apache Spark for parallelization and a custom array framework for processing huge datasets within limited memory spaces. CDAS services are accessed via a WPS API being developed in collaboration with the ESGF Compute Working Team to support server-side analytics for ESGF. The API can be accessed using direct web service calls, a Python script, a Unix-like shell client, or a JavaScript-based web application. Client packages in Python, Scala, or JavaScript contain everything needed to make CDAS requests. The CDAS architecture brings together the tools, data storage, and high-performance computing required for timely analysis of large-scale data sets, where the data resides, to ultimately produce societal benefits. It is currently deployed at NASA in support of the Collaborative REAnalysis Technical Environment (CREATE) project, which centralizes numerous global reanalysis datasets onto a single advanced data analytics platform. This service permits decision makers to investigate climate changes around the globe, inspect model trends and variability, and compare multiple reanalysis datasets.

  1. The Semantic Automated Discovery and Integration (SADI) Web service Design-Pattern, API and Reference Implementation

    PubMed Central

    2011-01-01

    Background The complexity and inter-related nature of biological data poses a difficult challenge for data and tool integration. There has been a proliferation of interoperability standards and projects over the past decade, none of which has been widely adopted by the bioinformatics community. Recent attempts have focused on the use of semantics to assist integration, and Semantic Web technologies are being welcomed by this community. Description SADI - Semantic Automated Discovery and Integration - is a lightweight set of fully standards-compliant Semantic Web service design patterns that simplify the publication of services of the type commonly found in bioinformatics and other scientific domains. Using Semantic Web technologies at every level of the Web services "stack", SADI services consume and produce instances of OWL Classes following a small number of very straightforward best-practices. In addition, we provide codebases that support these best-practices, and plug-in tools to popular developer and client software that dramatically simplify deployment of services by providers, and the discovery and utilization of those services by their consumers. Conclusions SADI Services are fully compliant with, and utilize only foundational Web standards; are simple to create and maintain for service providers; and can be discovered and utilized in a very intuitive way by biologist end-users. In addition, the SADI design patterns significantly improve the ability of software to automatically discover appropriate services based on user-needs, and automatically chain these into complex analytical workflows. We show that, when resources are exposed through SADI, data compliant with a given ontological model can be automatically gathered, or generated, from these distributed, non-coordinating resources - a behaviour we have not observed in any other Semantic system. Finally, we show that, using SADI, data dynamically generated from Web services can be explored in a manner very similar to data housed in static triple-stores, thus facilitating the intersection of Web services and Semantic Web technologies. PMID:22024447

  2. The Semantic Automated Discovery and Integration (SADI) Web service Design-Pattern, API and Reference Implementation.

    PubMed

    Wilkinson, Mark D; Vandervalk, Benjamin; McCarthy, Luke

    2011-10-24

    The complexity and inter-related nature of biological data poses a difficult challenge for data and tool integration. There has been a proliferation of interoperability standards and projects over the past decade, none of which has been widely adopted by the bioinformatics community. Recent attempts have focused on the use of semantics to assist integration, and Semantic Web technologies are being welcomed by this community. SADI - Semantic Automated Discovery and Integration - is a lightweight set of fully standards-compliant Semantic Web service design patterns that simplify the publication of services of the type commonly found in bioinformatics and other scientific domains. Using Semantic Web technologies at every level of the Web services "stack", SADI services consume and produce instances of OWL Classes following a small number of very straightforward best-practices. In addition, we provide codebases that support these best-practices, and plug-in tools to popular developer and client software that dramatically simplify deployment of services by providers, and the discovery and utilization of those services by their consumers. SADI Services are fully compliant with, and utilize only foundational Web standards; are simple to create and maintain for service providers; and can be discovered and utilized in a very intuitive way by biologist end-users. In addition, the SADI design patterns significantly improve the ability of software to automatically discover appropriate services based on user-needs, and automatically chain these into complex analytical workflows. We show that, when resources are exposed through SADI, data compliant with a given ontological model can be automatically gathered, or generated, from these distributed, non-coordinating resources - a behaviour we have not observed in any other Semantic system. Finally, we show that, using SADI, data dynamically generated from Web services can be explored in a manner very similar to data housed in static triple-stores, thus facilitating the intersection of Web services and Semantic Web technologies.
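
    At the HTTP level, invoking a SADI service amounts to POSTing an RDF instance of the service's declared input OWL class and parsing the RDF that comes back. The sketch below follows that pattern; the service URL, input class and record URI are hypothetical.

        import requests
        from rdflib import Graph, Namespace, URIRef
        from rdflib.namespace import RDF

        # Build an RDF instance of the (hypothetical) input OWL class.
        EX = Namespace("http://example.org/sadi#")
        g = Graph()
        record = URIRef("http://example.org/data/record1")
        g.add((record, RDF.type, EX.InputClass))

        # POST the input graph to the (hypothetical) service and parse the reply.
        resp = requests.post("http://example.org/sadi/service",
                             data=g.serialize(format="xml"),
                             headers={"Content-Type": "application/rdf+xml",
                                      "Accept": "application/rdf+xml"})
        out = Graph().parse(data=resp.text, format="xml")
        print(len(out), "triples returned")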

  3. How NASA's Atmospheric Science Data Center (ASDC) is operationally using the Esri ArcGIS Platform to improve data discoverability, accessibility and interoperability to meet the diversifying government, private, public and academic communities' driven requirements.

    NASA Astrophysics Data System (ADS)

    Tisdale, M.

    2016-12-01

    NASA's Atmospheric Science Data Center (ASDC) is operationally using the Esri ArcGIS Platform to improve data discoverability, accessibility and interoperability to meet the diversifying requirements of the government, private, public and academic communities. The ASDC is actively working to provide its mission-essential datasets as ArcGIS Image Services, Open Geospatial Consortium (OGC) Web Mapping Services (WMS) and OGC Web Coverage Services (WCS), leveraging the ArcGIS multidimensional mosaic dataset structure. Science teams and the ASDC are utilizing these services, developing applications using the Web AppBuilder for ArcGIS and the ArcGIS API for JavaScript, and evaluating restructuring their data production and access scripts within the ArcGIS Python Toolbox framework and Geoprocessing service environment. These capabilities yield greater usage and exposure of ASDC data holdings and provide improved geospatial analytical tools for mission-critical understanding in the areas of the Earth's radiation budget, clouds, aerosols, and tropospheric chemistry.

  4. Development of NETCONF-Based Network Management Systems in Web Services Framework

    NASA Astrophysics Data System (ADS)

    Iijima, Tomoyuki; Kimura, Hiroyasu; Kitani, Makoto; Atarashi, Yoshifumi

    To develop a network management system (NMS) more easily, the authors developed an application programming interface (API) for configuring network devices. Because this API is used in a Java development environment, an NMS can be developed by utilizing the API and other commonly available Java libraries. It is thus possible to easily develop an NMS that is highly compatible with other IT systems. Operations generated from the API and exchanged between the NMS and network devices are based on NETCONF, which is standardized by the Internet Engineering Task Force (IETF) as a next-generation network-configuration protocol. Adopting a standardized technology ensures that an NMS developed by using the API can manage network devices from multiple vendors in a unified manner. Furthermore, the configuration items exchanged over NETCONF are specified in an object-oriented design. They are therefore easier to manage than such items in the Management Information Base (MIB), which is defined as data to be managed by the Simple Network Management Protocol (SNMP). We actually developed several NMSs by using the API. Evaluation of these NMSs showed that, in terms of configuration time and development time, the NMS developed by using the API performed as well as NMSs developed by using a command line interface (CLI) and SNMP. The NMS developed by using the API showed the feasibility of achieving “autonomic network management” and “high interoperability with IT systems.”
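
    The record describes a Java API, but the NETCONF exchange it builds on is easy to illustrate; the sketch below uses the third-party Python ncclient library (not the authors' API) to fetch a running configuration from a NETCONF-enabled device. Host and credentials are placeholders.

        from ncclient import manager

        # Placeholder connection details for a NETCONF-enabled device
        # (NETCONF's standard port is 830).
        with manager.connect(
            host="192.0.2.1",
            port=830,
            username="admin",
            password="admin",
            hostkey_verify=False,
        ) as m:
            # Issue a <get-config> RPC against the running datastore;
            # the reply carries the configuration as XML.
            reply = m.get_config(source="running")
            print(reply.xml)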

  5. CernVM WebAPI - Controlling Virtual Machines from the Web

    NASA Astrophysics Data System (ADS)

    Charalampidis, I.; Berzano, D.; Blomer, J.; Buncic, P.; Ganis, G.; Meusel, R.; Segal, B.

    2015-12-01

    Lately, there is a trend in scientific projects to look for computing resources in the volunteer community. In addition, to reduce the development effort required to port the scientific software stack to all the known platforms, the use of Virtual Machines (VMs) is becoming increasingly popular. Unfortunately, their use further complicates software installation and operation, restricting the volunteer audience to sufficiently expert people. CernVM WebAPI is a software solution addressing this specific case in a way that opens up wide new application opportunities. It offers a very simple API for setting up, controlling and interfacing with a VM instance on the user's computer, while at the same time relieving the user of the burden of downloading, installing and configuring the hypervisor. WebAPI comes with a lightweight JavaScript library that guides the user through the application installation process. Malicious usage is prevented by a per-domain PKI validation mechanism. In this contribution we give an overview of this new technology, discuss its security features and examine some test cases where it is already in use.

  6. A JEE RESTful service to access Conditions Data in ATLAS

    NASA Astrophysics Data System (ADS)

    Formica, Andrea; Gallas, E. J.

    2015-12-01

    Usage of conditions data in ATLAS is extensive for offline reconstruction and analysis (e.g. alignment, calibration, data quality). The system is based on the LCG Conditions Database infrastructure, with read and write access via an ad hoc C++ API (COOL), a system which was developed before Run 1 data taking began. The infrastructure dictates that the data be organized into separate schemas (assigned to subsystems/groups storing distinct and independent sets of conditions), making it difficult to access information from several schemas at the same time. We have thus created PL/SQL functions containing queries to provide content extraction at the multi-schema level. The PL/SQL API has been exposed to external clients by means of a Java application providing DB access via REST services, deployed inside an application server (JBoss WildFly). The services allow navigation over multiple schemas via simple URLs. The data can be retrieved in either XML or JSON format, via simple clients (like curl or Web browsers).
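
    As an illustration of the URL-based navigation the abstract describes, the hedged sketch below issues a JSON request against a hypothetical deployment of such a REST service; the base URL and the schema/folder path are invented for the example, not taken from the paper.

        import requests

        # Hypothetical base URL of the conditions REST service.
        BASE = "https://conditions.example.org/api"

        # Navigate one (invented) schema and list its folders as JSON.
        resp = requests.get(
            f"{BASE}/schemas/MUONALIGN/folders",
            headers={"Accept": "application/json"},  # XML is also described
        )
        resp.raise_for_status()
        for folder in resp.json():
            print(folder)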

  7. The Modern Research Data Portal: A Design Pattern for Networked, Data-Intensive Science

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chard, Kyle; Dart, Eli; Foster, Ian

    Here we describe best practices for providing convenient, high-speed, secure access to large data via research data portals. We capture these best practices in a new design pattern, the Modern Research Data Portal, that disaggregates the traditional monolithic web-based data portal to achieve orders-of-magnitude increases in data transfer performance, support new deployment architectures that decouple control logic from data storage, and reduce development and operations costs. We introduce the design pattern; explain how it leverages high-performance Science DMZs and cloud-based data management services; review representative examples at research laboratories and universities, including both experimental facilities and supercomputer sites; describe how to leverage Python APIs for authentication, authorization, data transfer, and data sharing; and use coding examples to demonstrate how these APIs can be used to implement a range of research data portal capabilities. Sample code at a companion web site, https://docs.globus.org/mrdp, provides application skeletons that readers can adapt to realize their own research data portals.
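
    The companion site provides full application skeletons; as a hedged taste of the pattern, the sketch below uses the Globus Python SDK (globus-sdk) to submit an asynchronous directory transfer, assuming an access token has already been obtained through the Globus Auth flow. The endpoint UUIDs, token and paths are placeholders.

        import globus_sdk

        # Placeholder endpoint UUIDs and a pre-obtained transfer access token.
        SRC = "ddb59aef-6d04-11e5-ba46-22000b92c6ec"
        DST = "ddb59af0-6d04-11e5-ba46-22000b92c6ec"
        tc = globus_sdk.TransferClient(
            authorizer=globus_sdk.AccessTokenAuthorizer("TRANSFER_ACCESS_TOKEN")
        )

        # Stage a recursive directory copy and submit it as a managed,
        # asynchronous transfer task.
        tdata = globus_sdk.TransferData(tc, SRC, DST, label="portal dataset")
        tdata.add_item("/datasets/run42/", "/~/run42/", recursive=True)
        task = tc.submit_transfer(tdata)
        print("task id:", task["task_id"])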

  8. The Modern Research Data Portal: a design pattern for networked, data-intensive science

    DOE PAGES

    Chard, Kyle; Dart, Eli; Foster, Ian; ...

    2018-01-15

    We describe best practices for providing convenient, high-speed, secure access to large data via research data portals. Here, we capture these best practices in a new design pattern, the Modern Research Data Portal, that disaggregates the traditional monolithic web-based data portal to achieve orders-of-magnitude increases in data transfer performance, support new deployment architectures that decouple control logic from data storage, and reduce development and operations costs. We introduce the design pattern; explain how it leverages high-performance data enclaves and cloud-based data management services; review representative examples at research laboratories and universities, including both experimental facilities and supercomputer sites; describe how to leverage Python APIs for authentication, authorization, data transfer, and data sharing; and use coding examples to demonstrate how these APIs can be used to implement a range of research data portal capabilities. Sample code at a companion web site, https://docs.globus.org/mrdp, provides application skeletons that readers can adapt to realize their own research data portals.

  9. The Modern Research Data Portal: a design pattern for networked, data-intensive science

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chard, Kyle; Dart, Eli; Foster, Ian

    We describe best practices for providing convenient, high-speed, secure access to large data via research data portals. Here, we capture these best practices in a new design pattern, the Modern Research Data Portal, that disaggregates the traditional monolithic web-based data portal to achieve orders-of-magnitude increases in data transfer performance, support new deployment architectures that decouple control logic from data storage, and reduce development and operations costs. We introduce the design pattern; explain how it leverages high-performance data enclaves and cloud-based data management services; review representative examples at research laboratories and universities, including both experimental facilities and supercomputer sites; describe how to leverage Python APIs for authentication, authorization, data transfer, and data sharing; and use coding examples to demonstrate how these APIs can be used to implement a range of research data portal capabilities. Sample code at a companion web site, https://docs.globus.org/mrdp, provides application skeletons that readers can adapt to realize their own research data portals.

  10. OpenSearch (ECHO-ESIP) & REST API for Earth Science Data Access

    NASA Astrophysics Data System (ADS)

    Mitchell, A.; Cechini, M.; Pilone, D.

    2010-12-01

    This presentation will provide a brief technical overview of OpenSearch, the Earth Science Information Partners (ESIP) Federated Search framework, and the REST architecture; discuss NASA’s Earth Observing System (EOS) ClearingHOuse’s (ECHO) implementation lessons learned; and demonstrate the simplified usage of these technologies. SOAP, as a framework for web service communication, has numerous advantages for Enterprise applications and Java/C# type programming languages. As a technical solution, SOAP has been a reliable framework on top of which many applications have been successfully developed and deployed. However, as interest grows for quick development cycles and more intriguing “mashups,” the SOAP API loses its appeal. Lightweight and simple are the vogue characteristics that are sought after. Enter the REST architecture and the OpenSearch format. Both of these provide a new path for application development, addressing some of the issues unresolved by SOAP. ECHO has made available all of its discovery, order submission, and data management services through a publicly accessible SOAP API. This interface is utilized by a variety of ECHO client and data partners to provide valuable capabilities to end users. As ECHO interacted with current and potential partners looking to develop Earth Science tools utilizing ECHO, it became apparent that the development overhead required to interact with the SOAP API was a growing barrier to entry. ECHO acknowledged the technical issues that were being uncovered by its partner community and chose to provide two new interfaces for interacting with the ECHO metadata catalog. The first interface is built upon the OpenSearch format and the ESIP Federated Search framework. Leveraging these two items, a client (ECHO-ESIP) was developed with a focus on simplified searching and results presentation. The second interface is built upon the Representational State Transfer (REST) architecture. Leveraging the REST architecture, a new API has been made available that provides access to the entire SOAP API suite of services. The results of these development activities have not only positioned ECHO to engage in the thriving world of mashup applications, but have also provided an excellent real-world case study of how to successfully leverage these emerging technologies.
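
    A keyword search against an OpenSearch endpoint reduces to a single templated GET request. The sketch below is illustrative only: the endpoint URL and parameter names are placeholders in the ECHO-ESIP style, not ECHO's actual interface (ECHO has since been superseded by NASA's Common Metadata Repository).

        import requests

        # Placeholder OpenSearch endpoint and parameters.
        ENDPOINT = "https://opensearch.example.org/datasets"

        params = {
            "keyword": "sea surface temperature",  # free-text search term
            "numberOfResults": 10,
            "clientId": "demo",
        }
        resp = requests.get(ENDPOINT, params=params)
        resp.raise_for_status()
        print(resp.text[:500])  # typically an Atom feed of matching datasets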

  11. A Development of Lightweight Grid Interface

    NASA Astrophysics Data System (ADS)

    Iwai, G.; Kawai, Y.; Sasaki, T.; Watase, Y.

    2011-12-01

    In order to support rapid development of Grid/Cloud aware applications, we have developed an API to abstract distributed computing infrastructures, based on SAGA (A Simple API for Grid Applications). SAGA, which is standardized by the OGF (Open Grid Forum), defines API specifications to access distributed computing infrastructures, such as Grid, Cloud and local computing resources. The Universal Grid API (UGAPI), which is a set of command line interfaces (CLIs) and APIs, aims to offer a simpler API combining several SAGA interfaces with richer functionalities. The CLIs of UGAPI offer typical functionalities required by end users for job management and file access on the different distributed computing infrastructures as well as local computing resources. We have also built a web interface for a particle therapy simulation and demonstrated large-scale calculation using the different infrastructures at the same time. In this paper, we present how the web interface based on UGAPI and SAGA achieves more efficient utilization of computing resources over the different infrastructures, with technical details and practical experiences.

  12. Globus platform-as-a-service for collaborative science applications

    DOE PAGES

    Ananthakrishnan, Rachana; Chard, Kyle; Foster, Ian; ...

    2014-03-13

    Globus, developed as Software-as-a-Service (SaaS) for research data management, also provides APIs that constitute a flexible and powerful Platform-as-a-Service (PaaS) to which developers can outsource data management activities such as transfer and sharing, as well as identity, profile and group management. By providing these frequently important but always challenging capabilities as a service, accessible over the network, Globus PaaS streamlines web application development and makes it easy for individuals, teams, and institutions to create collaborative applications such as science gateways for science communities. We introduce the capabilities of this platform and review representative applications.

  13. Globus Platform-as-a-Service for Collaborative Science Applications.

    PubMed

    Ananthakrishnan, Rachana; Chard, Kyle; Foster, Ian; Tuecke, Steven

    2015-02-01

    Globus, developed as Software-as-a-Service (SaaS) for research data management, also provides APIs that constitute a flexible and powerful Platform-as-a-Service (PaaS) to which developers can outsource data management activities such as transfer and sharing, as well as identity, profile and group management. By providing these frequently important but always challenging capabilities as a service, accessible over the network, Globus PaaS streamlines web application development and makes it easy for individuals, teams, and institutions to create collaborative applications such as science gateways for science communities. We introduce the capabilities of this platform and review representative applications.

  14. Globus Platform-as-a-Service for Collaborative Science Applications

    PubMed Central

    Ananthakrishnan, Rachana; Chard, Kyle; Foster, Ian; Tuecke, Steven

    2014-01-01

    Globus, developed as Software-as-a-Service (SaaS) for research data management, also provides APIs that constitute a flexible and powerful Platform-as-a-Service (PaaS) to which developers can outsource data management activities such as transfer and sharing, as well as identity, profile and group management. By providing these frequently important but always challenging capabilities as a service, accessible over the network, Globus PaaS streamlines web application development and makes it easy for individuals, teams, and institutions to create collaborative applications such as science gateways for science communities. We introduce the capabilities of this platform and review representative applications. PMID:25642152

  15. The Ensembl REST API: Ensembl Data for Any Language

    PubMed Central

    Yates, Andrew; Beal, Kathryn; Keenan, Stephen; McLaren, William; Pignatelli, Miguel; Ritchie, Graham R. S.; Ruffier, Magali; Taylor, Kieron; Vullo, Alessandro; Flicek, Paul

    2015-01-01

    Motivation: We present a Web service to access Ensembl data using Representational State Transfer (REST). The Ensembl REST server enables the easy retrieval of a wide range of Ensembl data by most programming languages, using standard formats such as JSON and FASTA while minimizing client work. We also introduce bindings to the popular Ensembl Variant Effect Predictor tool permitting large-scale programmatic variant analysis independent of any specific programming language. Availability and implementation: The Ensembl REST API can be accessed at http://rest.ensembl.org and source code is freely available under an Apache 2.0 license from http://github.com/Ensembl/ensembl-rest. Contact: ayates@ebi.ac.uk or flicek@ebi.ac.uk Supplementary information: Supplementary data are available at Bioinformatics online. PMID:25236461
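
    A minimal example of the language-independent access described above: looking up a gene by symbol with Python's requests against the public rest.ensembl.org server. The /lookup/symbol endpoint is part of Ensembl's documented API, though field names in the JSON can vary between releases.

        import requests

        SERVER = "https://rest.ensembl.org"

        # Look up the human BRCA2 gene by symbol and request JSON back.
        resp = requests.get(
            f"{SERVER}/lookup/symbol/homo_sapiens/BRCA2",
            headers={"Content-Type": "application/json"},
        )
        resp.raise_for_status()
        gene = resp.json()
        print(gene["id"], gene["seq_region_name"], gene["start"], gene["end"])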

  16. The CMS dataset bookkeeping service

    NASA Astrophysics Data System (ADS)

    Afaq, A.; Dolgert, A.; Guo, Y.; Jones, C.; Kosyakov, S.; Kuznetsov, V.; Lueking, L.; Riley, D.; Sekhri, V.

    2008-07-01

    The CMS Dataset Bookkeeping Service (DBS) has been developed to catalog all CMS event data from Monte Carlo and detector sources. It provides the ability to identify MC or trigger source, track data provenance, construct datasets for analysis, and discover interesting data. CMS requires processing and analysis activities at various service levels and the DBS system provides support for localized processing or private analysis, as well as global access for CMS users at large. Catalog entries can be moved among the various service levels with a simple set of migration tools, thus forming a loose federation of databases. DBS is available to CMS users via Python API, command line, and Discovery web page interfaces. The system is built as a multi-tier web application with Java servlets running under Tomcat, with connections via JDBC to Oracle or MySQL database backends. Clients connect to the service through HTTP or HTTPS with authentication provided by GRID certificates and authorization through VOMS. DBS is an integral part of the overall CMS Data Management and Workflow Management systems.
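
    A hedged sketch of the Python API access mentioned in the abstract, using call names from the later DBS3 client (the dbs3-client package); the 2008-era API differed in detail, and real queries require CMS grid credentials.

        # Requires the dbs3-client package and a valid CMS grid proxy.
        from dbs.apis.dbsClient import DbsApi

        api = DbsApi(url="https://cmsweb.cern.ch/dbs/prod/global/DBSReader")

        # List datasets matching a wildcard pattern (illustrative pattern).
        for ds in api.listDatasets(dataset="/ZMM*/*/*"):
            print(ds["dataset"])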

  17. EarthServer2 : The Marine Data Service - Web based and Programmatic Access to Ocean Colour Open Data

    NASA Astrophysics Data System (ADS)

    Clements, Oliver; Walker, Peter

    2017-04-01

    The ESA Ocean Colour - Climate Change Initiative (ESA OC-CCI) has produced a long-term high quality global dataset with associated per-pixel uncertainty data. This dataset has now grown to several hundred terabytes (uncompressed) and is freely available to download. However, the sheer size of the dataset can act as a barrier to many users; large network bandwidth, local storage and processing requirements can prevent researchers without the backing of a large organisation from taking advantage of this raw data. The EC H2020 project, EarthServer2, aims to create a federated data service providing access to more than 1 petabyte of earth science data. Within this federation the Marine Data Service already provides an innovative on-line tool-kit for filtering, analysing and visualising OC-CCI data. Data are made available, filtered and processed at source through standards-based interfaces, the Open Geospatial Consortium Web Coverage Service (WCS) and Web Coverage Processing Service (WCPS). This work was initiated in the EC FP7 EarthServer project, where it was found that the unfamiliarity and complexity of these interfaces themselves created a barrier to wider uptake. The continuation project, EarthServer2, addresses these issues by providing higher-level tools for working with these data. We will present some examples of these tools. Many researchers wish to extract time series data from discrete points of interest. We will present a web-based interface, based on NASA/ESA WebWorldWind, for selecting points of interest and plotting time series from a chosen dataset. In addition, a CSV file of locations and times, such as a ship's track, can be uploaded; these points are extracted and returned in a CSV file, allowing researchers to work with the extracted data locally, for example in a spreadsheet. We will also present a set of Python and JavaScript APIs that have been created to complement and extend the web-based GUI. These APIs allow the selection of single points and areas for extraction. The extracted data is returned as structured data (for instance a Python array) which can then be passed directly to local processing code. We will highlight how the libraries can be used by the community and integrated into existing systems, for instance by the use of Jupyter notebooks to share Python code examples which can then be used by other researchers as a basis for their own work.
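
    Server-side subsetting through WCS can be illustrated with a standard OGC WCS 2.0.1 key-value-pair request; the server URL and coverage name below are placeholders, not the Marine Data Service's actual identifiers.

        import requests

        # Placeholder WCS endpoint and coverage name.
        WCS = "https://marine.example.org/rasdaman/ows"

        params = {
            "service": "WCS",
            "version": "2.0.1",
            "request": "GetCoverage",
            "coverageId": "CCI_chlor_a",              # hypothetical coverage
            "subset": ["Lat(50,51)", "Long(-5,-4)"],  # subsetting done at source
            "format": "application/netcdf",
        }
        resp = requests.get(WCS, params=params)
        resp.raise_for_status()
        with open("subset.nc", "wb") as f:
            f.write(resp.content)  # only the requested subset is transferred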

  18. A Web API and Web Application Development for Dissemination of Air Quality Information

    NASA Astrophysics Data System (ADS)

    Şahin, K.; Işıkdağ, U.

    2017-11-01

    Various studies have been carried out since 2005 under the leadership of the Ministry of Environment and Urbanism of Turkey in order to observe air quality in Turkey, to develop new policies and to develop a sustainable air quality management strategy. For this reason, a national air quality monitoring network has been developed that provides air quality indices. Through this network, the quality of the air is continuously monitored and an important information system has been constructed in order to take precautions against dangerous situations. The biggest handicap of the network is data access: because of its proprietary structure, instant and time-series data cannot easily be acquired and processed, and there is currently no service offered by the air quality monitoring system for exchanging information with third-party applications. Within the context of this work, a web service has been developed to enable location-based querying of current and past air quality data in Turkey. This web service is built with up-to-date and widely preferred technologies; in other words, an architecture was chosen with which applications can easily integrate. In the second phase of the study, a web-based application was developed to test the web service; this test application performs location-based acquisition of air quality data, making it possible to easily carry out operations such as screening and examination of an area over a given time frame, which cannot be done with the national monitoring network.

  19. The new Gemini Observatory archive: a fast and low cost observatory data archive running in the cloud

    NASA Astrophysics Data System (ADS)

    Hirst, Paul; Cardenes, Ricardo

    2016-08-01

    We have developed and deployed a new data archive for the Gemini Observatory. Focused on simplicity and ease of use, the archive provides a number of powerful and novel features, including automatic association of calibration data with science data and the ability to bookmark searches. A simple but powerful API allows programmatic search and download of data. The archive is hosted on Amazon Web Services, which provides us with excellent internet connectivity and significant cost savings, in both operations and development, over more traditional deployment options. The code is written in Python, utilizing a PostgreSQL database and Apache web server.
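
    The abstract does not document the API itself; as a hedged illustration of programmatic search, the sketch below assumes a JSON summary endpoint of the kind the archive exposes, with the URL pattern and response field names treated as assumptions to be checked against the archive's help pages.

        import requests

        # Assumed URL pattern and response fields; verify against the archive docs.
        URL = "https://archive.gemini.edu/jsonsummary/GMOS-N/20160101-20160131"

        resp = requests.get(URL)
        resp.raise_for_status()
        for obs in resp.json():
            print(obs.get("name"), obs.get("instrument"))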

  20. Software Applications to Access Earth Science Data: Building an ECHO Client

    NASA Astrophysics Data System (ADS)

    Cohen, A.; Cechini, M.; Pilone, D.

    2010-12-01

    Historically, developing an ECHO (NASA’s Earth Observing System (EOS) ClearingHOuse) client required interaction with its SOAP API. SOAP, as a framework for web service communication, has numerous advantages for Enterprise applications and Java/C# type programming languages. However, as interest has grown for quick development cycles and more intriguing “mashups,” ECHO has seen the SOAP API lose its appeal. In order to address these changing needs, ECHO has introduced two new interfaces facilitating simple access to its metadata holdings. The first interface is built upon the OpenSearch format and ESIP Federated Search framework. The second interface is built upon the Representational State Transfer (REST) architecture. Using the REST and OpenSearch APIs to access ECHO makes development with modern languages much more feasible and simpler. Client developers can leverage the simple interaction with ECHO to focus more of their time on the advanced functionality they are presenting to users. To demonstrate the simplicity of developing with the REST API, participants will be led through a hands-on experience where they will develop an ECHO client that performs the following actions:
    + Login
    + Provider discovery
    + Provider-based dataset discovery
    + Dataset, temporal, and spatial constraint based granule discovery
    + Online data access

  1. How Accurately Can the Google Web Speech API Recognize and Transcribe Japanese L2 English Learners' Oral Production?

    ERIC Educational Resources Information Center

    Ashwell, Tim; Elam, Jesse R.

    2017-01-01

    The ultimate aim of our research project was to use the Google Web Speech API to automate scoring of elicited imitation (EI) tests. However, in order to achieve this goal, we had to take a number of preparatory steps. We needed to assess how accurate this speech recognition tool is in recognizing native speakers' production of the test items; we…

  2. Multilingual Speech and Language Processing

    DTIC Science & Technology

    2003-04-01

    client software handles the user end of the transaction. Historically, four clients were provided: e-mail, web, FrameMaker, and command line. By...command-line client and an API. The API allows integration of CyberTrans into a number of processes including word processing packages (FrameMaker...preservation and logging, and others. The available clients remain e-mail, Web and FrameMaker. Platforms include both Unix and PC for clients, with

  3. The Earth Data Analytic Services (EDAS) Framework

    NASA Astrophysics Data System (ADS)

    Maxwell, T. P.; Duffy, D.

    2017-12-01

    Faced with unprecedented growth in earth data volume and demand, NASA has developed the Earth Data Analytic Services (EDAS) framework, a high performance big data analytics framework built on Apache Spark. This framework enables scientists to execute data processing workflows combining common analysis operations close to the massive data stores at NASA. The data is accessed in standard (NetCDF, HDF, etc.) formats in a POSIX file system and processed using vetted earth data analysis tools (ESMF, CDAT, NCO, etc.). EDAS utilizes a dynamic caching architecture, a custom distributed array framework, and a streaming parallel in-memory workflow for efficiently processing huge datasets within limited memory spaces with interactive response times. EDAS services are accessed via a WPS API being developed in collaboration with the ESGF Compute Working Team to support server-side analytics for ESGF. The API can be accessed using direct web service calls, a Python script, a Unix-like shell client, or a JavaScript-based web application. New analytic operations can be developed in Python, Java, or Scala (with support for other languages planned). Client packages in Python, Java/Scala, or JavaScript contain everything needed to build and submit EDAS requests. The EDAS architecture brings together the tools, data storage, and high-performance computing required for timely analysis of large-scale data sets, where the data resides, to ultimately produce societal benefits. It is currently deployed at NASA in support of the Collaborative REAnalysis Technical Environment (CREATE) project, which centralizes numerous global reanalysis datasets onto a single advanced data analytics platform. This service enables decision makers to compare multiple reanalysis datasets and investigate trends, variability, and anomalies in earth system dynamics around the globe.

  4. Introducing a Web API for Dataset Submission into a NASA Earth Science Data Center

    NASA Astrophysics Data System (ADS)

    Moroni, D. F.; Quach, N.; Francis-Curley, W.

    2016-12-01

    As the landscape of data becomes increasingly more diverse in the domain of Earth Science, the challenges of managing and preserving data become more onerous and complex, particularly for data centers on fixed budgets and limited staff. Many solutions already exist to ease the cost burden for the downstream component of the data lifecycle, yet most archive centers are still racing to keep up with the influx of new data that still needs to find a quasi-permanent resting place. For instance, having well-defined metadata that is consistent across the entire data landscape provides for well-managed and preserved datasets throughout the latter end of the data lifecycle. Translators between different metadata dialects are already in operational use, and facilitate keeping older datasets relevant in today's world of rapidly evolving metadata standards. However, very little is done to address the first phase of the lifecycle, which deals with the entry of both data and the corresponding metadata into a system that is traditionally opaque and closed off to external data producers, thus resulting in a significant bottleneck to the dataset submission process. The ATRAC system was the NOAA NCEI's answer to this previously obfuscated barrier to scientists wishing to find a home for their climate data records, providing a web-based entry point to submit timely and accurate metadata and information about a very specific dataset. A couple of NASA's Distributed Active Archive Centers (DAACs) have implemented their own versions of a web-based dataset and metadata submission form including the ASDC and the ORNL DAAC. The Physical Oceanography DAAC is the most recent in the list of NASA-operated DAACs who have begun to offer their own web-based dataset and metadata submission services to data producers. What makes the PO.DAAC dataset and metadata submission service stand out from these pre-existing services is the option of utilizing both a web browser GUI and a RESTful API to facilitate rapid and efficient updating of dataset metadata records by external data producers. Here we present this new service and demonstrate the variety of ways in which a multitude of Earth Science datasets may be submitted in a manner that significantly reduces the time in ensuring that new, vital data reaches the public domain.

  5. Web Services Provide Access to SCEC Scientific Research Application Software

    NASA Astrophysics Data System (ADS)

    Gupta, N.; Gupta, V.; Okaya, D.; Kamb, L.; Maechling, P.

    2003-12-01

    Web services offer scientific communities a new paradigm for sharing research codes and communicating results. While there are formal technical definitions of what constitutes a web service, for a user community such as the Southern California Earthquake Center (SCEC), we may conceptually consider a web service to be functionality provided on-demand by an application which is run on a remote computer located elsewhere on the Internet. The value of a web service is that it can (1) run a scientific code without the user needing to install and learn the intricacies of running the code; (2) provide the technical framework which allows a user's computer to talk to the remote computer which performs the service; (3) provide the computational resources to run the code; and (4) bundle several analysis steps and provide the end results in digital or (post-processed) graphical form. Within an NSF-sponsored ITR project coordinated by SCEC, we are constructing web services using architectural protocols and programming languages (e.g., Java). However, because the SCEC community has a rich pool of scientific research software (written in traditional languages such as C and FORTRAN), we also emphasize making existing scientific codes available by constructing web service frameworks which wrap around and directly run these codes. In doing so we attempt to broaden community usage of these codes. Web service wrapping of a scientific code can be done using a "web servlet" construction or by using a SOAP/WSDL-based framework. This latter approach is widely adopted in IT circles although it is subject to rapid evolution. Our wrapping framework attempts to "honor" the original codes with as little modification as is possible. For versatility we identify three methods of user access: (A) a web-based GUI (written in HTML and/or Java applets); (B) a Linux/OSX/UNIX command line "initiator" utility (shell-scriptable); and (C) direct access from within any Java application (and with the correct API interface from within C++ and/or C/Fortran). This poster presentation will provide descriptions of the following selected web services and their origin as scientific application codes: 3D community velocity models for Southern California, geocoordinate conversions (latitude/longitude to UTM), execution of GMT graphical scripts, data format conversions (Gocad to Matlab format), and implementation of Seismic Hazard Analysis application programs that calculate hazard curve and hazard map data sets.

  6. A semi-automated workflow for biodiversity data retrieval, cleaning, and quality control

    PubMed Central

    Mathew, Cherian; Obst, Matthias; Vicario, Saverio; Haines, Robert; Williams, Alan R.; de Jong, Yde; Goble, Carole

    2014-01-01

    The compilation and cleaning of data needed for analyses and prediction of species distributions is a time consuming process requiring a solid understanding of data formats and service APIs provided by biodiversity informatics infrastructures. We designed and implemented a Taverna-based Data Refinement Workflow which integrates taxonomic data retrieval, data cleaning, and data selection into a consistent, standards-based, and effective system hiding the complexity of underlying service infrastructures. The workflow can be freely used both locally and through a web-portal which does not require additional software installations by users. PMID:25535486

  7. An Application Programming Interface for Synthetic Snowflake Particle Structure and Scattering Data

    NASA Technical Reports Server (NTRS)

    Lammers, Matthew; Kuo, Kwo-Sen

    2017-01-01

    The work by Kuo and colleagues on growing synthetic snowflakes and calculating their single-scattering properties has demonstrated great potential to improve the retrievals of snowfall. To grant colleagues flexible and targeted access to their large collection of sizes and shapes at fifteen (15) microwave frequencies, we have developed a web-based Application Programming Interface (API) integrated with NASA Goddard's Precipitation Processing System (PPS) Group. It is our hope that the API will enable convenient programmatic utilization of the database. To help users better understand the API's capabilities, we have developed an interactive web interface called the OpenSSP API Query Builder, which implements an intuitive system of mechanisms for selecting shapes, sizes, and frequencies to generate queries, with which the API can then extract and return data from the database. The Query Builder also allows for the specification of normalized particle size distributions by setting pertinent parameters, with which the API can also return mean geometric and scattering properties for each size bin. Additionally, the Query Builder interface enables downloading of raw scattering and particle structure data packages. This presentation will describe some of the challenges and successes associated with developing such an API. Examples of its usage will be shown both through downloading output and pulling it into a spreadsheet, as well as querying the API programmatically and working with the output in code.

  8. Reuse of the Cloud Analytics and Collaboration Environment within Tactical Applications (TacApps): A Feasibility Analysis

    DTIC Science & Technology

    2016-03-01

    Representational state transfer; Java messaging service; Java application programming interface (API); Internet relay chat (IRC)/extensible messaging and...JBoss application server or an Apache Tomcat servlet container instance. The relational database management system can be either PostgreSQL or MySQL... Java library called direct web remoting. This library has been part of the core CACE architecture for quite some time; however, there have not been

  9. CyanoBase: the cyanobacteria genome database update 2010.

    PubMed

    Nakao, Mitsuteru; Okamoto, Shinobu; Kohara, Mitsuyo; Fujishiro, Tsunakazu; Fujisawa, Takatomo; Sato, Shusei; Tabata, Satoshi; Kaneko, Takakazu; Nakamura, Yasukazu

    2010-01-01

    CyanoBase (http://genome.kazusa.or.jp/cyanobase) is the genome database for cyanobacteria, which are model organisms for photosynthesis. The database houses cyanobacteria species information, complete genome sequences, genome-scale experiment data, gene information, gene annotations and mutant information. In this version, we updated these datasets and improved the navigation and the visual display of the data views. In addition, a web service API now enables users to retrieve the data in various formats with other tools, seamlessly.

  10. New Web Services for Broader Access to National Deep Submergence Facility Data Resources Through the Interdisciplinary Earth Data Alliance

    NASA Astrophysics Data System (ADS)

    Ferrini, V. L.; Grange, B.; Morton, J. J.; Soule, S. A.; Carbotte, S. M.; Lehnert, K.

    2016-12-01

    The National Deep Submergence Facility (NDSF) operates the Human Occupied Vehicle (HOV) Alvin, the Remotely Operated Vehicle (ROV) Jason, and the Autonomous Underwater Vehicle (AUV) Sentry. These vehicles are deployed throughout the global oceans to acquire sensor data and physical samples for a variety of interdisciplinary science programs. As part of the EarthCube Integrative Activity Alliance Testbed Project (ATP), new web services were developed to improve access to existing online NDSF data and metadata resources. These services make use of tools and infrastructure developed by the Interdisciplinary Earth Data Alliance (IEDA) and enable programmatic access to metadata and data resources as well as the development of new service-driven user interfaces. The Alvin Frame Grabber and Jason Virtual Van enable the exploration of frame-grabbed images derived from video cameras on NDSF dives. Metadata available for each image includes time and vehicle position, data from environmental sensors, and scientist-generated annotations, and data are organized and accessible by cruise and/or dive. A new FrameGrabber web service and service-driven user interface were deployed to offer integrated access to these data resources through a single API and allows users to search across content curated in both systems. In addition, a new NDSF Dive Metadata web service and service-driven user interface was deployed to provide consolidated access to basic information about each NDSF dive (e.g. vehicle name, dive ID, location, etc), which is important for linking distributed data resources curated in different data systems.

  11. TCIApathfinder: an R client for The Cancer Imaging Archive REST API.

    PubMed

    Russell, Pamela; Fountain, Kelly; Wolverton, Dulcy; Ghosh, Debashis

    2018-06-05

    The Cancer Imaging Archive (TCIA) hosts publicly available de-identified medical images of cancer from over 25 body sites and over 30,000 patients. Over 400 published studies have utilized freely available TCIA images. Images and metadata are available for download through a web interface or a REST API. Here we present TCIApathfinder, an R client for the TCIA REST API. TCIApathfinder wraps API access in user-friendly R functions that can be called interactively within an R session or easily incorporated into scripts. Functions are provided to explore the contents of the large database and to download image files. TCIApathfinder provides easy access to TCIA resources in the highly popular R programming environment. TCIApathfinder is freely available under the MIT license as a package on CRAN (https://cran.r-project.org/web/packages/TCIApathfinder/index.html) and at https://github.com/pamelarussell/TCIApathfinder.
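
    TCIApathfinder itself is an R package; to keep the examples in this document in one language, the sketch below makes the underlying TCIA REST call that such a client wraps, using the v4 query path and parameter names from TCIA's public API documentation (worth re-checking, as the API has been revised over time).

        import requests

        BASE = "https://services.cancerimagingarchive.net/services/v4/TCIA/query"

        # List the image collections hosted by TCIA as JSON.
        resp = requests.get(f"{BASE}/getCollectionValues", params={"format": "json"})
        resp.raise_for_status()
        for c in resp.json():
            print(c["Collection"])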

  12. The CMS dataset bookkeeping service

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Afaq, Anzar; Dolgert, Andrew

    2007-10-01

    The CMS Dataset Bookkeeping Service (DBS) has been developed to catalog all CMS event data from Monte Carlo and detector sources. It provides the ability to identify MC or trigger source, track data provenance, construct datasets for analysis, and discover interesting data. CMS requires processing and analysis activities at various service levels and the DBS system provides support for localized processing or private analysis, as well as global access for CMS users at large. Catalog entries can be moved among the various service levels with a simple set of migration tools, thus forming a loose federation of databases. DBS is available to CMS users via Python API, command line, and Discovery web page interfaces. The system is built as a multi-tier web application with Java servlets running under Tomcat, with connections via JDBC to Oracle or MySQL database backends. Clients connect to the service through HTTP or HTTPS with authentication provided by GRID certificates and authorization through VOMS. DBS is an integral part of the overall CMS Data Management and Workflow Management systems.

  13. Using Open Web APIs in Teaching Web Mining

    ERIC Educational Resources Information Center

    Chen, Hsinchun; Li, Xin; Chau, M.; Ho, Yi-Jen; Tseng, Chunju

    2009-01-01

    With the advent of the World Wide Web, many business applications that utilize data mining and text mining techniques to extract useful business information on the Web have evolved from Web searching to Web mining. It is important for students to acquire knowledge and hands-on experience in Web mining during their education in information systems…

  14. A Conditions Data Management System for HEP Experiments

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Laycock, P. J.; Dykstra, D.; Formica, A.

    Conditions data infrastructure for both ATLAS and CMS has to deal with the management of several terabytes of data. Distributed computing access to this data requires particular care and attention to manage request rates of up to several tens of kHz. Thanks to the large overlap in use cases and requirements, ATLAS and CMS have worked towards a common solution for conditions data management, with the aim of using this design for data-taking in Run 3. In the meantime other experiments, including NA62, have expressed an interest in this cross-experiment initiative. For experiments with a smaller payload volume and complexity, there is particular interest in simplifying the payload storage. The conditions data management model is implemented in a small set of relational database tables. A prototype access toolkit consisting of an intermediate web server has been implemented, using standard technologies available in the Java community. Access is provided through a set of REST services for which the API has been described in a generic way using standard Open API specifications, implemented in Swagger. Such a solution allows the automatic generation of client code and server stubs, and further allows the backend technology to be changed transparently. An important advantage of using a REST API for conditions access is the possibility of caching identical URLs, addressing one of the biggest challenges that large distributed computing solutions impose on conditions data access, avoiding direct DB access by means of standard web proxy solutions.

  15. Rapid Deployment of a RESTful Service for Oceanographic Research Cruises

    NASA Astrophysics Data System (ADS)

    Fu, Linyun; Arko, Robert; Leadbetter, Adam

    2014-05-01

    The Ocean Data Interoperability Platform (ODIP) seeks to increase data sharing across scientific domains and international boundaries by providing a forum to harmonize diverse regional data systems. ODIP participants from the US include the Rolling Deck to Repository (R2R) program, whose mission is to capture, catalog, and describe the underway/environmental sensor data from US oceanographic research vessels and submit the data to public long-term archives. R2R publishes information online as Linked Open Data, making it widely available using Semantic Web standards. Each vessel, sensor, cruise, dataset, person, organization, funding award, log, report, etc., has a Uniform Resource Identifier (URI). Complex queries that federate results from other data providers are supported, using the SPARQL query language. To facilitate interoperability, R2R uses controlled vocabularies developed collaboratively by the science community (e.g. SeaDataNet device categories) and published online by the NERC Vocabulary Server (NVS). In response to user feedback, we are developing a standard programming interface (API) and Web portal for R2R's Linked Open Data. The API provides a set of simple REST-type URLs that are translated on-the-fly into SPARQL queries, and supports common output formats (e.g. JSON). We will demonstrate an implementation based on the Epimorphics Linked Data API (ELDA) open-source Java package. Our experience shows that constructing a simple portal with limited schema elements in this way can significantly reduce development time and maintenance complexity.
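
    Since the REST layer is translated into SPARQL, the same data can also be reached by querying a SPARQL endpoint directly. The sketch below uses the third-party SPARQLWrapper package; the endpoint URL and the class/property URIs are placeholders, as the abstract does not spell them out.

        from SPARQLWrapper import SPARQLWrapper, JSON

        # Placeholder endpoint and vocabulary terms.
        sparql = SPARQLWrapper("https://data.example.org/r2r/sparql")
        sparql.setQuery("""
            SELECT ?cruise ?label WHERE {
                ?cruise a <http://example.org/voc/Cruise> ;
                        <http://www.w3.org/2000/01/rdf-schema#label> ?label .
            } LIMIT 5
        """)
        sparql.setReturnFormat(JSON)

        # Print each cruise URI with its human-readable label.
        for row in sparql.query().convert()["results"]["bindings"]:
            print(row["cruise"]["value"], row["label"]["value"])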

  16. HMMER web server: 2018 update.

    PubMed

    Potter, Simon C; Luciani, Aurélien; Eddy, Sean R; Park, Youngmi; Lopez, Rodrigo; Finn, Robert D

    2018-06-14

    The HMMER webserver [http://www.ebi.ac.uk/Tools/hmmer] is a free-to-use service which provides fast searches against widely used sequence databases and profile hidden Markov model (HMM) libraries using the HMMER software suite (http://hmmer.org). The results of a sequence search may be summarized in a number of ways, allowing users to view and filter the significant hits by domain architecture or taxonomy. For large scale usage, we provide an application programmatic interface (API) which has been expanded in scope, such that all result presentations are available via both HTML and API. Furthermore, we have refactored our JavaScript visualization library to provide standalone components for different result representations. These consume the aforementioned API and can be integrated into third-party websites. The range of databases that can be searched against has been expanded, adding four sequence datasets (12 in total) and one profile HMM library (6 in total). To help users explore the biological context of their results, and to discover new data resources, search results are now supplemented with cross references to other EMBL-EBI databases.
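
    An illustrative phmmer search through the API described above; the endpoint and form fields follow the service's documented pattern, but should be verified against the current documentation before use, and the query sequence is a toy example.

        import requests

        # Submit a protein sequence to phmmer against the PDB sequence database,
        # requesting JSON via content negotiation.
        resp = requests.post(
            "https://www.ebi.ac.uk/Tools/hmmer/search/phmmer",
            data={"seq": ">query\nMKTAYIAKQRQISFVKSHFSRQLEERLGLIEVQ", "seqdb": "pdb"},
            headers={"Accept": "application/json"},
        )
        resp.raise_for_status()
        print(list(resp.json().keys()))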

  17. Developing Cancer Informatics Applications and Tools Using the NCI Genomic Data Commons API.

    PubMed

    Wilson, Shane; Fitzsimons, Michael; Ferguson, Martin; Heath, Allison; Jensen, Mark; Miller, Josh; Murphy, Mark W; Porter, James; Sahni, Himanso; Staudt, Louis; Tang, Yajing; Wang, Zhining; Yu, Christine; Zhang, Junjun; Ferretti, Vincent; Grossman, Robert L

    2017-11-01

    The NCI Genomic Data Commons (GDC) was launched in 2016 and makes available over 4 petabytes (PB) of cancer genomic and associated clinical data to the research community. This dataset continues to grow and currently includes over 14,500 patients. The GDC is an example of a biomedical data commons, which collocates biomedical data with storage and computing infrastructure and commonly used web services, software applications, and tools to create a secure, interoperable, and extensible resource for researchers. The GDC is (i) a data repository for downloading data that have been submitted to it, and also a system that (ii) applies a common set of bioinformatics pipelines to submitted data; (iii) reanalyzes existing data when new pipelines are developed; and (iv) allows users to build their own applications and systems that interoperate with the GDC using the GDC Application Programming Interface (API). We describe the GDC API and how it has been used both by the GDC itself and by third parties. Cancer Res; 77(21); e15-e18. ©2017 AACR.
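
    A minimal example of third-party use of the GDC API: listing a few projects via the documented /projects endpoint at api.gdc.cancer.gov (response structure taken from the public documentation).

        import requests

        # Ask the public GDC API for the first five projects.
        resp = requests.get(
            "https://api.gdc.cancer.gov/projects",
            params={"size": 5},
        )
        resp.raise_for_status()

        # The GDC wraps results in a data/hits envelope.
        for hit in resp.json()["data"]["hits"]:
            print(hit["project_id"], hit["name"])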

  18. Earth Observation oriented teaching materials development based on OGC Web services and Bashyt generated reports

    NASA Astrophysics Data System (ADS)

    Stefanut, T.; Gorgan, D.; Giuliani, G.; Cau, P.

    2012-04-01

    Creating e-Learning materials in the Earth Observation domain is a difficult task, especially for non-technical specialists who have to deal with distributed repositories, large amounts of information and intensive processing requirements. Furthermore, due to the lack of specialized applications for developing teaching resources, technical knowledge is also required for defining data presentation structures or for developing and customizing user interaction techniques for better teaching results. As a response to these issues, during the GiSHEO FP7 project [1] and later in the EnviroGRIDS FP7 project [2], we have developed the eGLE e-Learning Platform [3], a tool-based application that provides dedicated functionalities to Earth Observation specialists for developing teaching materials. The proposed architecture is built around a client-server design that provides the core functionalities (e.g. user management, tools integration, teaching materials settings, etc.) and has been extended with a distributed component implemented through the tools that are integrated into the platform, as described further. Our approach to dealing with multiple transfer protocol types, heterogeneous data formats or various user interaction techniques involves the development and integration of very specialized elements (tools) that can be customized by the trainers in a visual manner through simple user interfaces. In our concept each tool is dedicated to a specific data type, implementing optimized mechanisms for searching, retrieving, visualizing and interacting with it. At the same time, any number of tools can be integrated into each learning resource through drag-and-drop interaction, allowing the teacher to retrieve pieces of data of various types (e.g. images, charts, tables, text, videos, etc.) from different sources (e.g. OGC web services, charts created through the Bashyt application, etc.) through different protocols (e.g. WMS, BASHYT API, FTP, HTTP, etc.) and to display them all together in a unitary manner using the same visual structure [4]. Addressing the high-performance computing requirements that arise while processing environmental data, our platform can be easily extended through tools that connect to GRID infrastructures, WCS web services, the Bashyt API (for creating specialized hydrological reports) or any other specialized services (e.g. graphics cluster visualization) that can be reached over the Internet. At run time, on the trainee's computer each tool is launched in an asynchronous running mode and connects to the data source that has been established by the teacher, retrieving and displaying the information to the user. The data transfer is accomplished directly between the trainee's computer and the corresponding services (e.g. OGC, Bashyt API, etc.) without passing through the core server platform. In this manner, the eGLE application can provide better and more responsive connections to a large number of users.

  19. PDBe: towards reusable data delivery infrastructure at protein data bank in Europe

    PubMed Central

    Alhroub, Younes; Anyango, Stephen; Armstrong, David R; Berrisford, John M; Clark, Alice R; Conroy, Matthew J; Dana, Jose M; Gupta, Deepti; Gutmanas, Aleksandras; Haslam, Pauline; Mak, Lora; Mukhopadhyay, Abhik; Nadzirin, Nurul; Paysan-Lafosse, Typhaine; Sehnal, David; Sen, Sanchayita; Smart, Oliver S; Varadi, Mihaly; Kleywegt, Gerard J

    2018-01-01

    The Protein Data Bank in Europe (PDBe, pdbe.org) is actively engaged in the deposition, annotation, remediation, enrichment and dissemination of macromolecular structure data. This paper describes new developments and improvements at PDBe addressing three challenging areas: data enrichment, data dissemination and functional reusability. New features of the PDBe Web site are discussed, including a context dependent menu providing links to raw experimental data and improved presentation of structures solved by hybrid methods. The paper also summarizes the features of the LiteMol suite, which is a set of services enabling fast and interactive 3D visualization of structures, with associated experimental maps, annotations and quality assessment information. We introduce a library of Web components which can be easily reused to port data and functionality available at PDBe to other services. We also introduce updates to the SIFTS resource which maps PDB data to other bioinformatics resources, and the PDBe REST API. PMID:29126160
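
    As a small example of reusing the PDBe REST API mentioned above, the sketch below fetches the summary record for PDB entry 1cbs; the summary endpoint is part of PDBe's documented public API, though the exact response fields may evolve.

        import requests

        # Entry summaries are keyed by the (lower-case) PDB identifier.
        resp = requests.get("https://www.ebi.ac.uk/pdbe/api/pdb/entry/summary/1cbs")
        resp.raise_for_status()
        entry = resp.json()["1cbs"][0]
        print(entry["title"])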

  20. Monitoring activities of satellite data processing services in real-time with SDDS Live Monitor

    NASA Astrophysics Data System (ADS)

    Duc Nguyen, Minh

    2017-10-01

    This work describes Live Monitor, the monitoring subsystem of SDDS - an automated system for space experiment data processing, storage, and distribution created at SINP MSU. Live Monitor allows operators and developers of satellite data centers to quickly identify errors that occur in data processing and to prevent further consequences of those errors. All activities of the whole data processing cycle are illustrated via a web interface in real time. Notification messages are delivered to responsible people via email and the Telegram messenger service. The flexible monitoring mechanism implemented in Live Monitor allows us to dynamically change and control the events shown on the web interface on demand. Physicists, whose space weather analysis models run on satellite data provided by SDDS, can use the developed RESTful API to monitor their own events and deliver customized notification messages according to their needs.

  1. CyanoBase: the cyanobacteria genome database update 2010

    PubMed Central

    Nakao, Mitsuteru; Okamoto, Shinobu; Kohara, Mitsuyo; Fujishiro, Tsunakazu; Fujisawa, Takatomo; Sato, Shusei; Tabata, Satoshi; Kaneko, Takakazu; Nakamura, Yasukazu

    2010-01-01

    CyanoBase (http://genome.kazusa.or.jp/cyanobase) is the genome database for cyanobacteria, which are model organisms for photosynthesis. The database houses cyanobacteria species information, complete genome sequences, genome-scale experiment data, gene information, gene annotations and mutant information. In this version, we updated these datasets and improved the navigation and the visual display of the data views. In addition, a web service API now enables users to retrieve the data in various formats with other tools, seamlessly. PMID:19880388

  2. Developing cloud applications using the e-Science Central platform.

    PubMed

    Hiden, Hugo; Woodman, Simon; Watson, Paul; Cala, Jacek

    2013-01-28

    This paper describes the e-Science Central (e-SC) cloud data processing system and its application to a number of e-Science projects. e-SC provides both software as a service (SaaS) and platform as a service for scientific data management, analysis and collaboration. It is a portable system and can be deployed on both private (e.g. Eucalyptus) and public clouds (Amazon AWS and Microsoft Windows Azure). The SaaS application allows scientists to upload data, edit and run workflows and share results in the cloud, using only a Web browser. It is underpinned by a scalable cloud platform consisting of a set of components designed to support the needs of scientists. The platform is exposed to developers so that they can easily upload their own analysis services into the system and make these available to other users. A representational state transfer-based application programming interface (API) is also provided so that external applications can leverage the platform's functionality, making it easier to build scalable, secure cloud-based applications. This paper describes the design of e-SC, its API and its use in three different case studies: spectral data visualization, medical data capture and analysis, and chemical property prediction.

  3. Developing cloud applications using the e-Science Central platform

    PubMed Central

    Hiden, Hugo; Woodman, Simon; Watson, Paul; Cala, Jacek

    2013-01-01

    This paper describes the e-Science Central (e-SC) cloud data processing system and its application to a number of e-Science projects. e-SC provides both software as a service (SaaS) and platform as a service for scientific data management, analysis and collaboration. It is a portable system and can be deployed on both private (e.g. Eucalyptus) and public clouds (Amazon AWS and Microsoft Windows Azure). The SaaS application allows scientists to upload data, edit and run workflows and share results in the cloud, using only a Web browser. It is underpinned by a scalable cloud platform consisting of a set of components designed to support the needs of scientists. The platform is exposed to developers so that they can easily upload their own analysis services into the system and make these available to other users. A representational state transfer-based application programming interface (API) is also provided so that external applications can leverage the platform's functionality, making it easier to build scalable, secure cloud-based applications. This paper describes the design of e-SC, its API and its use in three different case studies: spectral data visualization, medical data capture and analysis, and chemical property prediction. PMID:23230161
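
    The two records above describe a REST API that lets external applications upload data and run workflows on the e-SC platform. The sketch below illustrates that upload-then-invoke pattern; all paths, parameter names and the API-key header are assumptions, not the documented e-SC interface.

    ```python
    # Illustrative sketch of driving an e-Science Central-style REST API:
    # upload a file, then start a workflow on it. URLs and fields are assumed.
    import requests

    API = "https://esc.example.org/api/public"      # assumed service root
    HEADERS = {"Authorization": "ApiKey <key>"}     # assumed auth scheme

    # 1. Upload a data file into the user's workspace
    with open("spectra.csv", "rb") as fh:
        doc = requests.post(f"{API}/documents", headers=HEADERS,
                            files={"file": fh}, timeout=60).json()

    # 2. Invoke a stored workflow against the uploaded document
    run = requests.post(f"{API}/workflows/spectral-viz/invocations",
                        headers=HEADERS, json={"documentId": doc["id"]},
                        timeout=60).json()
    print("invocation id:", run["id"])
    ```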

  4. An Android-based location service using GSM CellID and GPS to obtain a graphical guide to the nearest cash machine

    NASA Astrophysics Data System (ADS)

    Jacobsen, Jurma; Edlich, Stefan

    2009-02-01

    There is a broad range of potentially useful mobile location-based applications. One crucial point is making them available to the public at large. This case illuminates the ability of Android - the operating system for mobile devices - to fulfill this demand in mashup fashion, using several special geocoding web services and one integrated web service for retrieving data on the nearest cash machines. It shows an exemplary approach for building mobile location-based mashups for everyone: 1. As a basis for reaching as many people as possible, the open source Android OS is assumed to spread widely. 2. "Everyone" also means that the handset need not be an expensive GPS device. This is realized by re-utilizing the existing GSM infrastructure with the Cell of Origin (COO) method, which looks up the CellID in one of the growing number of web-accessible CellID databases. Some of these databases are still undocumented and not yet published. Furthermore, the Google Maps API for Mobile (GMM) and the open source counterpart OpenCellID are used. Localizing the user's current position via a lookup of the closest cell to which the handset is currently connected (COO) is not as precise as GPS, but appears sufficient for many applications. GPS users benefit most: for them the system is fully automated. By contrast, users without a GPS-capable handset can refine their location with one click on the map inside the determined circular region. The users are then shown, and guided along, a path to the nearest cash machine by integrating the Google Maps API with an overlay. Additionally, GPS users can keep track of themselves via a frequently updated view based on constantly requested precise GPS position data.
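
    The Cell-of-Origin lookup described above amounts to one HTTP query: resolve the serving cell's identifiers to an approximate position. The sketch below targets OpenCellID, which the abstract names; the exact endpoint and parameter names are written from memory and should be treated as assumptions to verify against the current OpenCellID documentation.

    ```python
    # Hedged sketch of a Cell-of-Origin (COO) lookup against OpenCellID.
    import requests

    params = {
        "key": "<api-key>",   # OpenCellID requires a registered key
        "mcc": 262,           # mobile country code (example: Germany)
        "mnc": 2,             # mobile network code
        "lac": 5126,          # location area code of the serving cell
        "cellid": 16307,      # id of the cell the handset is connected to
        "format": "json",
    }
    resp = requests.get("https://opencellid.org/cell/get",
                        params=params, timeout=30)
    fix = resp.json()
    # Approximate position plus a radius, i.e. the "circular region" the
    # user can refine by clicking on the map.
    print(fix["lat"], fix["lon"], "accuracy ~", fix.get("range"), "m")
    ```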

  5. A Map Mash-Up Application: Investigating the Temporal Effects of Climate Change on Salt Lake Basin

    NASA Astrophysics Data System (ADS)

    Kirtiloglu, O. S.; Orhan, O.; Ekercin, S.

    2016-06-01

    The main purpose of this paper is to investigate climate change effects that have occurred at the beginning of the twenty-first century in the Konya Closed Basin (KCB), located in the semi-arid central Anatolian region of Turkey, particularly in the Salt Lake region where many major wetlands of the KCB are situated, and to share the analysis results online in a Web Geographical Information System (GIS) environment. 71 Landsat 5-TM, 7-ETM+ and 8-OLI images and meteorological data obtained from 10 meteorological stations have been used in the scope of this work. 56 of the Landsat images, collected from 2000 to 2014, have been used to extract the Salt Lake surface area through multi-temporal analysis. 15 of the Landsat images have been used to make thematic maps of the Normalised Difference Vegetation Index (NDVI) in the KCB, and the data from the 10 meteorological stations have been used to generate the Standardized Precipitation Index (SPI), which is used in drought studies. For the purpose of visualizing and sharing the results, a Web GIS-like environment has been established using Google Maps and its data storage and manipulation product Fusion Tables, all free-of-charge Google Web services. The infrastructure of the web application includes HTML5, CSS3, JavaScript, Google Maps API V3 and Google Fusion Tables API technologies. These technologies make it possible to build effective "Map Mash-Ups": embedding a Google Map in a Web page, storing the spatial or tabular data in Fusion Tables and adding these data as a map layer on the embedded map. The analysis process and the map mash-up application are discussed in detail in the main sections of this paper.

  6. Commercial Building Energy Saver, API

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hong, Tianzhen; Piette, Mary; Lee, Sang Hoon

    2015-08-27

    The CBES API provides an Application Programming Interface to a suite of functions to improve the energy efficiency of buildings, including building energy benchmarking, preliminary retrofit analysis using the pre-simulation database DEEP, and detailed retrofit analysis using energy modeling with the EnergyPlus simulation engine. The CBES API is used to power the LBNL CBES Web App. It can be adopted by third-party developers and vendors into their software tools and platforms.
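
    A third-party integration of the kind the record envisages would reduce to HTTP calls against the API. The sketch below is purely hypothetical: the URL, endpoint name and payload fields are invented for illustration and are not the documented LBNL interface.

    ```python
    # Hypothetical call pattern for a CBES-style benchmarking/retrofit API.
    import requests

    payload = {
        "building_type": "small_office",   # assumed field names throughout
        "floor_area_m2": 1500,
        "climate_zone": "3C",
        "annual_kwh": 210000,
    }
    resp = requests.post("https://cbes.example.gov/api/benchmark",
                         json=payload, timeout=60)
    resp.raise_for_status()
    # e.g. a benchmark score plus candidate retrofit measures from DEEP
    print(resp.json())
    ```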

  7. New SPDF Directions and Evolving Services Supporting Heliophysics Research

    NASA Technical Reports Server (NTRS)

    McGuire, Robert E.; Candey, Robert M.; Bilitza, D.; Chimiak, Reine A.; Cooper, John F.; Fung, Shing F.; Han, David B.; Harris, Bernie; Johnson, R.; Klipsch, C.

    2006-01-01

    The next advances in Heliophysics science and its paradigm of a Great Observatory require an increasingly integrated and transparent data environment, where data can be easily accessed and used across the boundaries of both missions and traditional disciplines. The Space Physics Data Facility (SPDF) project includes uniquely important multi-mission data services with current data from most operating space physics missions. This paper reviews the capabilities of key services now available and the directions in which they are expected to evolve to enable future multi-mission correlative research. The Coordinated Data Analysis Web (CDAWeb) and Satellite Situation Center Web (SSCWeb), critically supported by the Common Data Format (CDF) effort and supplemented by more focused science services such as OMNIWeb and technical services such as data format translations are important operational capabilities serving the international community today (and cited last year by 20% of the papers published in JGR Space Physics). These services continue to add data from most current missions as SPDF works with new missions such as THEMIS to help enable their unique science goals and the meaningful sharing of their data in a multi-mission correlative context. Recent enhancements to CDF, our 3D Java interactive orbit viewer (TIPSOD), the CDAWeb Plus system, increasing automation of data service population, the new folding of the VSPO effort into SPDF and our continuing thrust towards fully-functional web services APIs to allow ready invocation from distributed external middleware and clients will be shown.

  8. Using Cloud-based Storage Technologies for Earth Science Data

    NASA Astrophysics Data System (ADS)

    Michaelis, A.; Readey, J.; Votava, P.

    2016-12-01

    Cloud-based infrastructure may offer several key benefits, including scalability, built-in redundancy and reduced total cost of ownership, as compared with a traditional data center approach. However, most of the tools and software systems developed for NASA data repositories were not developed with cloud-based infrastructure in mind and do not fully take advantage of commonly available cloud-based technologies. Object storage services are provided by all the leading public (Amazon Web Services, Microsoft Azure, Google Cloud, etc.) and private (OpenStack) clouds, and may provide a more cost-effective means of storing large data collections online. We describe a system that utilizes object storage rather than traditional file-system-based storage to vend Earth science data. The system described is not only cost effective, but shows superior performance for running many different analytics tasks in the cloud. To enable compatibility with existing tools and applications, we outline client libraries that are API compatible with existing libraries for HDF5 and NetCDF4. Performance of the system is demonstrated using cloud services running on Amazon Web Services.
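
    The "API compatible" client-library idea is the key design point: code written against the familiar h5py interface keeps working when the data actually live in object storage behind a data service. The sketch below uses h5pyd, one such drop-in library; the domain path and service endpoint are assumptions for illustration.

    ```python
    # Sketch of object-storage-backed data access through an h5py-compatible
    # client library (h5pyd). Domain path and endpoint URL are assumed.
    import h5pyd  # pip install h5pyd; mirrors the h5py API

    f = h5pyd.File("/shared/nasa/merra2/sample.h5", "r",
                   endpoint="https://hsds.example.org")  # assumed service URL
    dset = f["/temperature"]
    print(dset.shape, dset.dtype)
    # Slicing works as in h5py; the service reads only the needed chunks
    # from object storage rather than the whole file.
    subset = dset[0, 100:200, 100:200]
    ```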

  9. Collaborative development of predictive toxicology applications

    PubMed Central

    2010-01-01

    OpenTox provides an interoperable, standards-based Framework for the support of predictive toxicology data management, algorithms, modelling, validation and reporting. It is relevant to satisfying the chemical safety assessment requirements of the REACH legislation as it supports access to experimental data, (Quantitative) Structure-Activity Relationship models, and toxicological information through an integrating platform that adheres to regulatory requirements and OECD validation principles. Initial research defined the essential components of the Framework including the approach to data access, schema and management, use of controlled vocabularies and ontologies, architecture, web service and communications protocols, and selection and integration of algorithms for predictive modelling. OpenTox provides end-user oriented tools to non-computational specialists, risk assessors, and toxicological experts in addition to Application Programming Interfaces (APIs) for developers of new applications. OpenTox actively supports public standards for data representation, interfaces, vocabularies and ontologies, Open Source approaches to core platform components, and community-based collaboration approaches, so as to progress system interoperability goals. The OpenTox Framework includes APIs and services for compounds, datasets, features, algorithms, models, ontologies, tasks, validation, and reporting which may be combined into multiple applications satisfying a variety of different user needs. OpenTox applications are based on a set of distributed, interoperable OpenTox API-compliant REST web services. The OpenTox approach to ontology allows for efficient mapping of complementary data coming from different datasets into a unifying structure having a shared terminology and representation. Two initial OpenTox applications are presented as an illustration of the potential impact of OpenTox for high-quality and consistent structure-activity relationship modelling of REACH-relevant endpoints: ToxPredict which predicts and reports on toxicities for endpoints for an input chemical structure, and ToxCreate which builds and validates a predictive toxicity model based on an input toxicology dataset. Because of the extensible nature of the standardised Framework design, barriers of interoperability between applications and content are removed, as the user may combine data, models and validation from multiple sources in a dependable and time-effective way. PMID:20807436

  10. Collaborative development of predictive toxicology applications.

    PubMed

    Hardy, Barry; Douglas, Nicki; Helma, Christoph; Rautenberg, Micha; Jeliazkova, Nina; Jeliazkov, Vedrin; Nikolova, Ivelina; Benigni, Romualdo; Tcheremenskaia, Olga; Kramer, Stefan; Girschick, Tobias; Buchwald, Fabian; Wicker, Joerg; Karwath, Andreas; Gütlein, Martin; Maunz, Andreas; Sarimveis, Haralambos; Melagraki, Georgia; Afantitis, Antreas; Sopasakis, Pantelis; Gallagher, David; Poroikov, Vladimir; Filimonov, Dmitry; Zakharov, Alexey; Lagunin, Alexey; Gloriozova, Tatyana; Novikov, Sergey; Skvortsova, Natalia; Druzhilovsky, Dmitry; Chawla, Sunil; Ghosh, Indira; Ray, Surajit; Patel, Hitesh; Escher, Sylvia

    2010-08-31

    OpenTox provides an interoperable, standards-based Framework for the support of predictive toxicology data management, algorithms, modelling, validation and reporting. It is relevant to satisfying the chemical safety assessment requirements of the REACH legislation as it supports access to experimental data, (Quantitative) Structure-Activity Relationship models, and toxicological information through an integrating platform that adheres to regulatory requirements and OECD validation principles. Initial research defined the essential components of the Framework including the approach to data access, schema and management, use of controlled vocabularies and ontologies, architecture, web service and communications protocols, and selection and integration of algorithms for predictive modelling. OpenTox provides end-user oriented tools to non-computational specialists, risk assessors, and toxicological experts in addition to Application Programming Interfaces (APIs) for developers of new applications. OpenTox actively supports public standards for data representation, interfaces, vocabularies and ontologies, Open Source approaches to core platform components, and community-based collaboration approaches, so as to progress system interoperability goals. The OpenTox Framework includes APIs and services for compounds, datasets, features, algorithms, models, ontologies, tasks, validation, and reporting which may be combined into multiple applications satisfying a variety of different user needs. OpenTox applications are based on a set of distributed, interoperable OpenTox API-compliant REST web services. The OpenTox approach to ontology allows for efficient mapping of complementary data coming from different datasets into a unifying structure having a shared terminology and representation. Two initial OpenTox applications are presented as an illustration of the potential impact of OpenTox for high-quality and consistent structure-activity relationship modelling of REACH-relevant endpoints: ToxPredict which predicts and reports on toxicities for endpoints for an input chemical structure, and ToxCreate which builds and validates a predictive toxicity model based on an input toxicology dataset. Because of the extensible nature of the standardised Framework design, barriers of interoperability between applications and content are removed, as the user may combine data, models and validation from multiple sources in a dependable and time-effective way.
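
    In the REST style described above, compounds and models are plain URIs and a prediction is a POST against a model resource. The walk-through below is a hedged sketch: host names and exact resource paths vary by OpenTox deployment and are assumptions here.

    ```python
    # Illustrative OpenTox-style REST interaction: fetch a compound, then
    # apply a published model to it. Host and paths are assumptions.
    import requests

    HOST = "https://opentox.example.org"          # assumed service host

    # Resources are URIs; representations are negotiated via Accept headers
    compound_uri = f"{HOST}/compound/12345"
    smiles = requests.get(compound_uri,
                          headers={"Accept": "chemical/x-daylight-smiles"},
                          timeout=30).text

    # Applying a model: POST the compound URI to the model resource; the
    # response typically references a result dataset (assumed pattern).
    pred = requests.post(f"{HOST}/model/toxpredict-ld50",
                         data={"compound_uri": compound_uri}, timeout=60)
    print(pred.text)
    ```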

  11. Spatial data standards meet meteorological data - pushing the boundaries

    NASA Astrophysics Data System (ADS)

    Wagemann, Julia; Siemen, Stephan; Lamy-Thepaut, Sylvie

    2017-04-01

    The data archive of the European Centre for Medium-Range Weather Forecasts (ECMWF) holds around 120 PB of data and is the world's largest archive of meteorological data. This information is of great value for many Earth Science disciplines, but the complexity of the data (up to five dimensions and different time axis domains) and its native data format GRIB, while an efficient archive format, limit overall data uptake, especially by users outside the MetOcean domain. ECMWF's MARS WebAPI is a very efficient and flexible system for expert users to access and retrieve meteorological data, though challenging for users outside the MetOcean domain. With the help of web-based standards for data access and processing, ECMWF wants to make more than 1 PB of meteorological and climate data more easily accessible to users across different Earth Science disciplines. As the climate data provider for the H2020 project EarthServer-2, ECMWF explores the feasibility of giving on-demand access to its MARS archive via the OGC standard interface Web Coverage Service (WCS). Despite the potential a WCS offers for climate and meteorological data, the standards-based modelling of meteorological and climate data entails many challenges and reveals the boundaries of the current Web Coverage Service 2.0 standard. Challenges range from valid semantic data models for meteorological data to optimal and efficient data structures for a scalable web service. The presentation reviews the applicability of the current Web Coverage Service 2.0 standard to meteorological and climate data and discusses the challenges that must be overcome to achieve real interoperability and to ensure the conformant sharing and exchange of meteorological data.
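
    For a flavor of what standards-based access buys the non-expert user, here is a minimal sketch of a WCS 2.0 subset request through OWSLib. The endpoint URL, coverage identifier and axis names are assumptions; only the WCS call pattern itself is standard.

    ```python
    # Hedged sketch: request a spatial subset of a coverage from an OGC
    # WCS 2.0 endpoint using OWSLib. URL and coverage/axis names are assumed.
    from owslib.wcs import WebCoverageService

    wcs = WebCoverageService("https://wcs.example.ecmwf.int/wcs",  # assumed
                             version="2.0.1")
    cov = wcs.getCoverage(identifier=["temperature_2m"],   # assumed coverage id
                          subsets=[("Lat", 30, 60), ("Long", -10, 40)],
                          format="application/netcdf")
    with open("t2m_subset.nc", "wb") as out:
        out.write(cov.read())   # the subset arrives as a NetCDF file
    ```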

  12. The Virtual Solar Observatory and the Heliophysics Meta-Virtual Observatory

    NASA Technical Reports Server (NTRS)

    Gurman, Joseph B.

    2007-01-01

    The Virtual Solar Observatory (VSO) is now able to search for solar data ranging from the radio to gamma rays, obtained from space and groundbased observatories, from 26 sources at 12 data providers, and from 1915 to the present. The solar physics community can use a Web interface or an Application Programming Interface (API) that allows integrating VSO searches into other software, including other Web services. Over the next few years, this integration will be especially obvious as the NASA Heliophysics division sponsors the development of a heliophysics-wide virtual observatory (VO), based on existing VO's in heliospheric, magnetospheric, and ionospheric physics as well as the VSO. We examine some of the challenges and potential of such a "meta-VO."

  13. Solar Eclipse Computer API: Planning Ahead for August 2017

    NASA Astrophysics Data System (ADS)

    Bartlett, Jennifer L.; Chizek Frouard, Malynda; Lesniak, Michael V.; Bell, Steve

    2016-01-01

    With the total solar eclipse of 2017 August 21 over the continental United States approaching, the U.S. Naval Observatory (USNO) on-line Solar Eclipse Computer can now be accessed via an application programming interface (API). This flexible interface returns local circumstances for any solar eclipse in JavaScript Object Notation (JSON) that can be incorporated into third-party Web sites or applications. For a given year, it can also return a list of solar eclipses that can be used to build a more specific request for local circumstances. Over the course of a particular eclipse as viewed from a specific site, several events may be visible: the beginning and ending of the eclipse (first and fourth contacts), the beginning and ending of totality (second and third contacts), the moment of maximum eclipse, sunrise, or sunset. For each of these events, the USNO Solar Eclipse Computer reports the time, Sun's altitude and azimuth, and the event's position and vertex angles. The computer also reports the duration of the total phase, the duration of the eclipse, the magnitude of the eclipse, and the percent of the Sun obscured for a particular eclipse site. On-line documentation for using the API-enabled Solar Eclipse Computer, including sample calls, is available (http://aa.usno.navy.mil/data/docs/api.php). The same Web page also describes how to reach the Complete Sun and Moon Data for One Day, Phases of the Moon, Day and Night Across the Earth, and Apparent Disk of a Solar System Object services using API calls. For those who prefer using a traditional data input form, local circumstances can still be requested that way at http://aa.usno.navy.mil/data/docs/SolarEclipses.php. In addition, the 2017 August 21 Solar Eclipse Resource page (http://aa.usno.navy.mil/data/docs/Eclipse2017.php) consolidates all of the USNO resources for this event, including a Google Map view of the eclipse track designed by Her Majesty's Nautical Almanac Office (HMNAO). Looking further ahead, a 2024 April 8 Solar Eclipse Resource page (http://aa.usno.navy.mil/data/docs/Eclipse2024.php) is also available.
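
    A request for local circumstances is a single JSON-returning URL call. The sketch below is an illustrative assumption about the call shape (base URL, parameter names, response keys); the authoritative format is the documentation page cited in the record, http://aa.usno.navy.mil/data/docs/api.php.

    ```python
    # Hedged sketch of querying the USNO Solar Eclipse Computer API for the
    # 2017 August 21 eclipse. URL pattern and parameter names are assumed.
    import requests

    params = {
        "date": "8/21/2017",       # assumed parameter names
        "loc": "Columbia, MO",     # a site near the path of totality
    }
    resp = requests.get("http://api.usno.navy.mil/eclipses/solar",
                        params=params, timeout=30)
    data = resp.json()
    # Contacts, maximum eclipse, sunrise/sunset, with altitude/azimuth etc.
    for event in data.get("local_data", []):   # assumed response key
        print(event)
    ```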

  14. Development of RESTful services and map-based user interface tools for access and delivery of data and metadata from the Marine-Geo Digital Library

    NASA Astrophysics Data System (ADS)

    Morton, J. J.; Ferrini, V. L.

    2015-12-01

    The Marine Geoscience Data System (MGDS, www.marine-geo.org) operates an interactive digital data repository and metadata catalog that provides access to a variety of marine geology and geophysical data from throughout the global oceans. Its Marine-Geo Digital Library includes common marine geophysical data types and supporting data and metadata, as well as complementary long-tail data. The Digital Library also includes community data collections and custom data portals for the GeoPRISMS, MARGINS and Ridge2000 programs, for active source reflection data (Academic Seismic Portal), and for marine data acquired by the US Antarctic Program (Antarctic and Southern Ocean Data Portal). Ensuring that these data are discoverable not only through our own interfaces but also through standards-compliant web services is critical for enabling investigators to find data of interest. Over the past two years, MGDS has developed several new RESTful web services that enable programmatic access to metadata and data holdings. These web services are compliant with the EarthCube GeoWS Building Blocks specifications and are currently used to drive our own user interfaces. New web applications have also been deployed to provide a more intuitive user experience for searching, accessing and browsing metadata and data. Our new map-based search interface combines components of the Google Maps API with our web services for dynamic searching and exploration of geospatially constrained data sets. Direct introspection of nearly all data formats for hundreds of thousands of data files curated in the Marine-Geo Digital Library has allowed for precise geographic bounds, which enable geographic searches to an extent not previously possible. All MGDS map interfaces utilize the web services of the Global Multi-Resolution Topography (GMRT) synthesis for displaying global basemap imagery and for dynamically providing depth values at the cursor location.

  15. Online Maps and Cloud-Supported Location-Based Services across a Manifold of Devices

    NASA Astrophysics Data System (ADS)

    Kröpfl, M.; Buchmüller, D.; Leberl, F.

    2012-07-01

    Online mapping, miniaturization of computing devices, the "cloud", Global Navigation Satellite System (GNSS) and cell tower triangulation all coalesce into an entirely novel infrastructure for numerous innovative map applications. This impacts the planning of human activities, navigating and tracking these activities as they occur, and finally documenting their outcome for either a single user or a network of connected users in a larger context. In this paper, we provide an example of a simple geospatial application making use of this model, which we will use to explain the basic steps necessary to deploy an application involving a web service hosting geospatial information and client software consuming the web service through an API. The application allows an insurance claim specialist to add claims to a cloud-based database, including a claim location. A field agent then uses a smartphone application to query the database by proximity, and heads out to capture photographs as supporting documentation for the claim. Once the photos have been uploaded to the web service, a second web service for image matching is called in order to try to match the current photograph to previously submitted assets. Image matching is used as a pre-verification step to determine whether the coverage of the respective object is sufficient for the claim specialist to process the claim. The development of the application was based on Microsoft's® Bing Maps™, Windows Phone™, Silverlight™, Windows Azure™ and Visual Studio™, and was completed in approximately 30 labour hours split between two developers.

  16. ANTP Protocol Suite Software Implementation Architecture in Python

    DTIC Science & Technology

    2011-06-03

    …a popular platform for network programming, an area in which C has traditionally dominated. … NetController, AeroRP, AeroNP, AeroNP API, AeroTP … visualisation of the running system. For example, using the Google Maps API, the main logging web page can show all the running nodes in the system. … communication between AeroNP and AeroRP, and runs on the operating system as a daemon. Furthermore, it creates an API interface to manage the communication between…

  17. Sharing environmental models: An Approach using GitHub repositories and Web Processing Services

    NASA Astrophysics Data System (ADS)

    Stasch, Christoph; Nuest, Daniel; Pross, Benjamin

    2016-04-01

    The GLUES (Global Assessment of Land Use Dynamics, Greenhouse Gas Emissions and Ecosystem Services) project established a spatial data infrastructure for scientific geospatial data and metadata (http://geoportal-glues.ufz.de), where different regional collaborative projects researching the impacts of climate and socio-economic changes on sustainable land management can share their underlying base scenarios and datasets. One goal of the project is to ease the sharing of computational models between institutions and to make them easily executable in Web-based infrastructures. In this work, we present such an approach for sharing computational models, relying on GitHub repositories (http://github.com) and Web Processing Services. First, model providers upload their model implementations to GitHub repositories in order to share them with others. The GitHub platform allows users to submit changes to the model code; the changes can be discussed and reviewed before merging them. However, while GitHub allows sharing of and collaboration on model source code, it does not actually allow running these models, which requires effort to transfer the implementation to a model execution framework. We have therefore extended an existing implementation of the OGC Web Processing Service standard (http://www.opengeospatial.org/standards/wps), the 52°North Web Processing Service (http://52north.org/wps) platform, to retrieve model implementations from a git (http://git-scm.com) repository and add them to the collection of published geoprocesses. The current implementation is restricted to models implemented as R scripts using WPS4R annotations (Hinz et al.) and to Java algorithms using the 52°North WPS Java API. The models hence become executable through a standardized Web API by multiple clients such as desktop or browser GIS and modelling frameworks. If the model code is changed on the GitHub platform, the changes are retrieved by the service and the processes are updated accordingly. The admin tool of the 52°North WPS was extended to support automated retrieval and deployment of computational models from GitHub repositories. Once the R code is available in the GitHub repository, the contained process can be deployed and executed by simply entering the GitHub repository URL in the WPS admin tool. We illustrate the usage of the approach by sharing and running a model for land use system archetypes developed by the Helmholtz Centre for Environmental Research (UFZ, see Vaclavik et al.). The original R code was extended and published in the 52°North WPS using both public and non-public datasets (Nüst et al., see also https://github.com/52North/glues-wps). Hosting the analysis in a Git repository now allows WPS administrators, client developers, and modelers to easily work together on new versions or completely new web processes using the powerful GitHub collaboration platform. References: Hinz, M. et al. (2013): Spatial Statistics on the Geospatial Web. In: The 16th AGILE International Conference on Geographic Information Science, Short Papers. http://www.agile-online.org/Conference_Paper/CDs/agile_2013/Short_Papers/SP_S3.1_Hinz.pdf Nüst, D. et al. (2015): Open and reproducible global land use classification. In: EGU General Assembly Conference Abstracts. Vol. 17. European Geophysical Union, 2015, p. 9125, http://meetingorganizer.copernicus.org/EGU2015/EGU2015-9125.pdf Vaclavik, T. et al. (2013): Mapping global land system archetypes. Global Environmental Change 23(6): 1637-1647. Online available: October 9, 2013, DOI: 10.1016/j.gloenvcha.2013.09.004
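
    Once a model is deployed this way, any WPS client can list and execute it. Below is a hedged Python sketch using OWSLib; the service URL, process identifier and input name are assumptions for illustration, while the WPS call pattern itself is standard.

    ```python
    # Hedged sketch: execute a GitHub-deployed model through a 52North-style
    # WPS endpoint with OWSLib. URL, process id and input name are assumed.
    from owslib.wps import WebProcessingService, monitorExecution

    wps = WebProcessingService("https://wps.example.org/wps")   # assumed URL

    # Processes deployed from the GitHub repository appear in GetCapabilities
    for p in wps.processes:
        print(p.identifier)

    execution = wps.execute("org.n52.glues.LandUseArchetypes",   # assumed id
                            inputs=[("study_region", "global")]) # assumed input
    monitorExecution(execution)           # poll until the async job finishes
    print(execution.processOutputs[0].reference)  # URL of the result
    ```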

  18. Commercial Building Energy Saver, Web App

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hong, Tianzhen; Piette, Mary; Lee, Sang Hoon

    The CBES App is a web-based toolkit that enables small businesses and the owners and operators of small and medium size commercial buildings to perform energy benchmarking and retrofit analysis. The CBES App analyzes the energy performance of the user's building pre- and post-retrofit, in conjunction with the user's input data, to identify recommended retrofit measures, energy savings and an economic analysis for the selected measures. The CBES App provides energy benchmarking, including an EnergyStar score obtained through the EnergyStar API and benchmarking against California peer buildings using the EnergyIQ API. The retrofit analysis includes a preliminary analysis that looks up retrofit measures in the pre-simulated database DEEP, and a detailed analysis that creates and runs EnergyPlus models to calculate the energy savings of retrofit measures. The CBES App builds upon the LBNL CBES API.

  19. The RNASeq-er API-a gateway to systematically updated analysis of public RNA-seq data.

    PubMed

    Petryszak, Robert; Fonseca, Nuno A; Füllgrabe, Anja; Huerta, Laura; Keays, Maria; Tang, Y Amy; Brazma, Alvis

    2017-07-15

    The exponential growth of publicly available RNA-sequencing (RNA-Seq) data poses an increasing challenge to researchers wishing to discover, analyse and store such data, particularly those based in institutions with limited computational resources. EMBL-EBI is in an ideal position to address these challenges and to allow the scientific community easy access to not just raw, but also processed RNA-Seq data. We present a Web service to access the results of a systematically and continually updated standardized alignment as well as gene and exon expression quantification of all public bulk (and in the near future also single-cell) RNA-Seq runs in 264 species in the European Nucleotide Archive (ENA), using Representational State Transfer. The RNASeq-er API (Application Programming Interface) enables ontology-powered search for and retrieval of CRAM, bigwig and bedGraph files, gene and exon expression quantification matrices (Fragments Per Kilobase of Exon Per Million Fragments Mapped, Transcripts Per Million, raw counts) as well as sample attributes annotated with ontology terms. To date, over 270 000 RNA-Seq runs in nearly 10 000 studies (1 PB of raw FASTQ data) in 264 species in ENA have been processed and made available via the API. The RNASeq-er API can be accessed at http://www.ebi.ac.uk/fg/rnaseq/api. The commands used to analyse the data are available in the supplementary materials and at https://github.com/nunofonseca/irap/wiki/iRAP-single-library. rnaseq@ebi.ac.uk; rpetry@ebi.ac.uk. Supplementary data are available at Bioinformatics online. © The Author(s) 2017. Published by Oxford University Press.
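
    The API base URL is given in the record; the endpoint shape below (output-format segment, minimum-percent-mapped segment, then a getRunsByStudy call) follows my recollection of the RNASeq-er documentation, and the response field names are assumptions to verify against it.

    ```python
    # Hedged sketch of querying the RNASeq-er API for processed runs of one
    # ENA study. Endpoint pattern and response fields are assumptions.
    import requests

    BASE = "http://www.ebi.ac.uk/fg/rnaseq/api"
    # JSON output, keep runs with >= 70% of reads mapped, for one ENA study
    resp = requests.get(f"{BASE}/json/70/getRunsByStudy/SRP033494", timeout=60)
    resp.raise_for_status()
    for run in resp.json():
        # Assumed field names for the run accession and its bigwig signal file
        print(run["RUN_IDS"], run["BIGWIG_LOCATION"])
    ```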

  20. OpenFIRE - A Web GIS Service for Distributing the Finnish Reflection Experiment Datasets

    NASA Astrophysics Data System (ADS)

    Väkevä, Sakari; Aalto, Aleksi; Heinonen, Aku; Heikkinen, Pekka; Korja, Annakaisa

    2017-04-01

    The Finnish Reflection Experiment (FIRE) is a land-based deep seismic reflection survey conducted between 2001 and 2003 by a research consortium of the Universities of Helsinki and Oulu, the Geological Survey of Finland, and the Russian state-owned enterprise SpetsGeofysika. The dataset consists of 2100 kilometers of high-resolution profiles across the Archaean and Proterozoic nuclei of the Fennoscandian Shield. Although FIRE data have been available on request since 2009, the data have remained underused outside the original research consortium. The original FIRE data have now been quality-controlled: the shot gathers have been cross-checked and comprehensive errata have been created. The brute stacks provided by the Russian seismic contractor have been reprocessed into seismic sections and replotted. Complete documentation of the intermediate processing steps is provided, together with guidelines for setting up a computing environment and plotting the data. An open-access web service, "OpenFIRE", for the visualization and downloading of FIRE data has been created. The service includes a mobile-responsive map application capable of enriching seismic sections with data from other sources, such as open data from the National Land Survey and the Geological Survey of Finland. The AVAA team of the Finnish Open Science and Research Initiative has provided a tailored Liferay portal with the necessary web components, such as an API (Application Programming Interface) for download requests. INSPIRE (Infrastructure for Spatial Information in Europe) compliant discovery metadata have been produced, and geospatial data will be exposed as Open Geospatial Consortium standard services. The technical guidelines of the European Plate Observing System have been followed, and the service can be considered a reference application for sharing reflection seismic data. The OpenFIRE web service is available at www.seismo.helsinki.fi/openfire.

  1. Design and Implement an Interoperable Internet of Things Application Based on an Extended OGC SensorThings API Standard

    NASA Astrophysics Data System (ADS)

    Huang, C. Y.; Wu, C. H.

    2016-06-01

    The Internet of Things (IoT) is an infrastructure that interconnects uniquely-identifiable devices using the Internet. By interconnecting everyday appliances, various monitoring and physical mashup applications can be constructed to improve people's daily lives. However, IoT devices created by different manufacturers follow different proprietary protocols and cannot communicate with each other. This heterogeneity issue causes different products to be locked into multiple closed ecosystems that we call IoT silos. To address this issue, a common industrial solution is the hub approach, which implements connectors to communicate with IoT devices following different protocols. However, with the growing number of proprietary protocols proposed by device manufacturers, IoT hubs need to support and maintain many customized connectors. Hence, we believe the ultimate solution to the heterogeneity issue is to follow open and interoperable standards. Among the existing IoT standards, the Open Geospatial Consortium (OGC) SensorThings API standard supports a comprehensive conceptual model and query functionalities. The first version of the SensorThings API mainly focuses on connecting to IoT devices and sharing sensor observations online, which is the sensing capability. Besides the sensing capability, IoT devices can also be controlled via the Internet, which is the tasking capability. As the tasking capability was not included in the first version of the SensorThings API standard, this research aims at defining a tasking capability profile and integrating it with the SensorThings API standard, which we call the extended SensorThings API in this paper. In general, this research proposes a lightweight JSON-based web service description, the "Tasking Capability Description", allowing device owners and manufacturers to describe different IoT device protocols. Through the extended SensorThings API, users and applications can follow a coherent protocol to control IoT devices that use different communication protocols, consequently achieving an interoperable Internet of Things infrastructure.
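
    The sensing and tasking interactions described above reduce to plain HTTP/JSON calls. In the sketch below, the sensing paths follow OGC SensorThings API v1.0; the tasking request mirrors the paper's extension, so its exact resource names and parameter schema are assumptions here, as is the service root.

    ```python
    # Sketch of sensing (standard SensorThings v1.0) plus tasking (per the
    # paper's extension) against an assumed SensorThings-style endpoint.
    import requests

    ROOT = "https://iot.example.org/v1.0"          # assumed service root

    # Sensing: list Things, then read the newest observation of a datastream
    things = requests.get(f"{ROOT}/Things", timeout=30).json()["value"]
    obs = requests.get(f"{ROOT}/Datastreams(1)/Observations",
                       params={"$top": 1,
                               "$orderby": "phenomenonTime desc"},
                       timeout=30).json()["value"]

    # Tasking (extended API): submit a task against a tasking capability;
    # resource name and parameter schema are assumptions from the abstract.
    task = {"taskingParameters": {"power": "on"}}
    requests.post(f"{ROOT}/TaskingCapabilities(1)/Tasks", json=task,
                  timeout=30)
    ```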

  2. Development of a GIS-based integrated framework for coastal seiches monitoring and forecasting: A North Jiangsu shoal case study

    NASA Astrophysics Data System (ADS)

    Qin, Rufu; Lin, Liangzhao

    2017-06-01

    Coastal seiches have become an increasingly important issue in coastal science and present many challenges, particularly when attempting to provide warning services. This paper presents the methodologies, techniques and integrated services adopted for the design and implementation of a Seiches Monitoring and Forecasting Integration Framework (SMAF-IF). The SMAF-IF is an integrated system combining different types of sensors and numerical models with Geographic Information System (GIS) and web techniques, focusing on coastal seiche event detection and early warning in the North Jiangsu shoal, China. The in situ sensors perform automatic and continuous monitoring of the marine environment status, and the numerical models provide the meteorological and physical oceanographic parameter estimates. Model output processing software was developed in C# using ArcGIS Engine functions, which provides the capability of automatically generating visualization maps and warning information. Leveraging the ArcGIS Flex API and ASP.NET web services, a web-based GIS framework was designed to facilitate quasi real-time data access, interactive visualization and analysis, and the provision of early warning services for end users. The integrated framework proposed in this study enables decision-makers and the public to respond quickly to coastal seiche emergencies and allows easy adaptation to other regional and scientific domains related to real-time monitoring and forecasting.

  3. Chemozart: a web-based 3D molecular structure editor and visualizer platform.

    PubMed

    Mohebifar, Mohamad; Sajadi, Fatemehsadat

    2015-01-01

    Chemozart is a 3D molecule editor and visualizer built on top of native web components. It offers an easy-to-access service, a user-friendly graphical interface and a modular design. It is a client-centric web application which communicates with the server via a representational state transfer style web service. Both the client-side and server-side applications are written in JavaScript. A combination of JavaScript and HTML is used to draw three-dimensional structures of molecules, and WebGL provides the three-dimensional visualization tool. Using CSS3 and HTML5, a user-friendly interface is composed. More than 30 packages are used to compose this application, which gives it enough flexibility to be extended. Molecular structures can be drawn on all types of platforms, and the application is compatible with mobile devices. No installation is required and the application can be accessed through the internet. It can be extended on both the server side and the client side by implementing modules in JavaScript. Molecular compounds are drawn on the HTML5 Canvas element using a WebGL context. Chemozart is a chemical platform which is powerful, flexible, and easy to access. It provides an online web-based tool for chemical visualization along with result-oriented optimization via a cloud-based API (application programming interface). JavaScript libraries which allow the creation of web pages containing interactive three-dimensional molecular structures have also been made available. The application has been released under the Apache 2 License and is available from the project website https://chemozart.com.

  4. Tank Information System (TIS): A Case Study in Migrating a Web Mapping Application from Flex to Dojo for ArcGIS Server and then to Open Source

    NASA Astrophysics Data System (ADS)

    Pulsani, B. R.

    2017-11-01

    Tank Information System is a web application which provides comprehensive information about the minor irrigation tanks of Telangana State. As part of the program, a web mapping application using Flex and ArcGIS Server was developed to make the data available to the public. In course of time, as Flex became outdated, the client interface was migrated to the latest JavaScript-based technologies. Initially, the Flex-based application was migrated to the ArcGIS JavaScript API using the Dojo Toolkit. Both client applications used published services from ArcGIS Server. To examine the migration pattern from proprietary to open source, the JavaScript-based ArcGIS application was later migrated to OpenLayers and the Dojo Toolkit, using published services from GeoServer. The migration pattern observed in the study especially emphasizes the use of the Dojo Toolkit and a PostgreSQL database with ArcGIS Server so that migration to open source can be performed effortlessly. The current application provides a case study which could assist organizations in migrating their proprietary ArcGIS web applications to open source. Furthermore, the study reveals the cost benefits of adopting open source over commercial software.

  5. A Framework for Integrating Oceanographic Data Repositories

    NASA Astrophysics Data System (ADS)

    Rozell, E.; Maffei, A. R.; Beaulieu, S. E.; Fox, P. A.

    2010-12-01

    Oceanographic research covers a broad range of science domains and requires a tremendous amount of cross-disciplinary collaboration. Advances in cyberinfrastructure are making it easier to share data across disciplines through the use of web services and community vocabularies. Best practices in the design of web services and vocabularies to support interoperability amongst science data repositories are only starting to emerge. Strategic design decisions in these areas are crucial to the creation of end-user data and application integration tools. We present S2S, a novel framework for deploying customizable user interfaces to support the search and analysis of data from multiple repositories. Our research methods follow the Semantic Web methodology and technology development process developed by Fox et al. This methodology stresses the importance of close scientist-technologist interactions when developing scientific use cases, keeping the project well scoped and ensuring the result meets a real scientific need. The S2S framework motivates the development of standardized web services with well-described parameters, as well as the integration of existing web services and applications in the search and analysis of data. S2S also encourages the use and development of community vocabularies and ontologies to support federated search and reduce the amount of domain expertise required in the data discovery process. S2S utilizes the Web Ontology Language (OWL) to describe the components of the framework, including web service parameters, and OpenSearch as a standard description for web services, particularly search services for oceanographic data repositories. We have created search services for an oceanographic metadata database, a large set of quality-controlled ocean profile measurements, and a biogeographic search service. S2S provides an application programming interface (API) that can be used to generate custom user interfaces, supporting data and application integration across these repositories and other web resources. Although initially targeted towards a general oceanographic audience, the S2S framework shows promise in many science domains, inspired in part by the broad disciplinary coverage of oceanography. This presentation will cover the challenges addressed by the S2S framework, the research methods used in its development, and the resulting architecture for the system. It will demonstrate how S2S is remarkably extensible, and can be generalized to many science domains. Given these characteristics, the framework can simplify the process of data discovery and analysis for the end user, and can help to shift the responsibility of search interface development away from data managers.

  6. Design and Implementation of Surrounding Transaction Plotting and Management System Based on Google Map API

    NASA Astrophysics Data System (ADS)

    Cao, Y. B.; Hua, Y. X.; Zhao, J. X.; Guo, S. M.

    2013-11-01

    With China's rapid economic development and growing comprehensive national strength, border work has become a long-term and important task in China's diplomacy. Rapid plotting, real-time sharing and mapping of surrounding affairs have taken on great significance for government policy makers and diplomatic staff. However, existing boundary information systems suffer from labor-intensive geospatial data updates, a serious lack of plotting tools, and difficulty in sharing geographic events; these problems have seriously hampered border work. Advances in geographic information system technology, especially Web GIS, make it possible to solve these problems. This paper adopts a four-layer B/S architecture and, supported by the Google Maps service and its free API - with its openness, ease of use, sharing features and high-resolution imagery - designs and implements a surrounding transaction plotting and management system using the web development technologies ASP.NET, C# and Ajax. The system can provide decision support for government policy makers as well as real-time plotting and sharing of surrounding information for diplomatic staff. Practice has proved that the system has good usability and strong real-time performance.

  7. PDBe: towards reusable data delivery infrastructure at protein data bank in Europe.

    PubMed

    Mir, Saqib; Alhroub, Younes; Anyango, Stephen; Armstrong, David R; Berrisford, John M; Clark, Alice R; Conroy, Matthew J; Dana, Jose M; Deshpande, Mandar; Gupta, Deepti; Gutmanas, Aleksandras; Haslam, Pauline; Mak, Lora; Mukhopadhyay, Abhik; Nadzirin, Nurul; Paysan-Lafosse, Typhaine; Sehnal, David; Sen, Sanchayita; Smart, Oliver S; Varadi, Mihaly; Kleywegt, Gerard J; Velankar, Sameer

    2018-01-04

    The Protein Data Bank in Europe (PDBe, pdbe.org) is actively engaged in the deposition, annotation, remediation, enrichment and dissemination of macromolecular structure data. This paper describes new developments and improvements at PDBe addressing three challenging areas: data enrichment, data dissemination and functional reusability. New features of the PDBe Web site are discussed, including a context dependent menu providing links to raw experimental data and improved presentation of structures solved by hybrid methods. The paper also summarizes the features of the LiteMol suite, which is a set of services enabling fast and interactive 3D visualization of structures, with associated experimental maps, annotations and quality assessment information. We introduce a library of Web components which can be easily reused to port data and functionality available at PDBe to other services. We also introduce updates to the SIFTS resource which maps PDB data to other bioinformatics resources, and the PDBe REST API. © The Author(s) 2017. Published by Oxford University Press on behalf of Nucleic Acids Research.
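
    The PDBe REST API mentioned above is the natural entry point for reuse. The sketch below fetches an entry summary and its SIFTS UniProt mapping; the endpoint paths follow my recollection of the public PDBe API and should be checked against its documentation.

    ```python
    # Hedged sketch: query the PDBe REST API for one entry's summary and its
    # SIFTS UniProt mapping. Paths are from memory; verify against the docs.
    import requests

    API = "https://www.ebi.ac.uk/pdbe/api"
    pdb_id = "1cbs"

    summary = requests.get(f"{API}/pdb/entry/summary/{pdb_id}",
                           timeout=30).json()
    print(summary[pdb_id][0]["title"])        # responses are keyed by PDB id

    sifts = requests.get(f"{API}/mappings/uniprot/{pdb_id}",
                         timeout=30).json()
    print(list(sifts[pdb_id]["UniProt"].keys()))  # mapped UniProt accessions
    ```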

  8. Commercial Building Energy Asset Score

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    This software (Asset Scoring Tool) is designed to help building owners and managers gain insight into the as-built efficiency of their buildings. It is a web tool where users can enter their building information and obtain an asset score report. The asset score report consists of modeled building energy use (by end use and by fuel type), evaluations of building systems (envelope, lighting, heating, cooling, service hot water), and recommended energy efficiency measures. The intended users are building owners and operators who have limited knowledge of building energy efficiency. The scoring tool collects minimal building data (~20 data entries) from users and builds a full-scale energy model using the inference functionalities of the Facility Energy Decision System (FEDS). The scoring tool runs real-time building energy simulation using EnergyPlus and performs life-cycle cost analysis using FEDS. An API is also under development to allow third-party applications to exchange data with the web service of the scoring tool.

  9. A Highly Scalable Data Service (HSDS) using Cloud-based Storage Technologies for Earth Science Data

    NASA Astrophysics Data System (ADS)

    Michaelis, A.; Readey, J.; Votava, P.; Henderson, J.; Willmore, F.

    2017-12-01

    Cloud-based infrastructure may offer several key benefits, including scalability, built-in redundancy, security mechanisms and reduced total cost of ownership, as compared with a traditional data center approach. However, most of the tools and legacy software systems developed for online data repositories within the federal government were not developed with cloud-based infrastructure in mind and do not fully take advantage of commonly available cloud-based technologies. Moreover, services based on object storage are well established and provided by all the leading cloud service providers (Amazon Web Services, Microsoft Azure, Google Cloud, etc.), and can often provide unmatched "scale-out" capabilities and data availability to a large and growing consumer base at a price point unachievable with in-house solutions. We describe a system that utilizes object storage rather than traditional file-system-based storage to vend Earth science data. The system described is not only cost effective, but shows a performance advantage for running many different analytics tasks in the cloud. To enable compatibility with existing tools and applications, we outline client libraries that are API compatible with existing libraries for HDF5 and NetCDF4. Performance of the system is demonstrated using cloud services running on Amazon Web Services.

  10. A price and performance comparison of three different storage architectures for data in cloud-based systems

    NASA Astrophysics Data System (ADS)

    Gallagher, J. H. R.; Jelenak, A.; Potter, N.; Fulker, D. W.; Habermann, T.

    2017-12-01

    Providing data services based on cloud computing technology that are equivalent to those developed for traditional computing and storage systems is critical for a successful migration to cloud-based architectures for data production, scientific analysis and storage. OPeNDAP Web-service capabilities (comprising the Data Access Protocol (DAP) specification plus open-source software for realizing DAP in servers and clients) are among the most widely deployed means for achieving data-as-a-service functionality in the Earth sciences. OPeNDAP services are especially common in traditional data center environments where servers offer access to datasets stored in (very large) file systems, and a preponderance of the source data for these services is stored in the Hierarchical Data Format Version 5 (HDF5). Three candidate architectures for serving NASA satellite Earth Science HDF5 data via Hyrax running on Amazon Web Services (AWS) were developed and their performance examined for a set of representative use cases. Performance was evaluated on both runtime and incurred cost. The three architectures differ in how HDF5 files are stored in the Amazon Simple Storage Service (S3) and how the Hyrax server (as an EC2 instance) retrieves their data. Results for both serial and parallel access to HDF5 data in S3 will be presented. While the study focused on HDF5 data, OPeNDAP and the Hyrax data server, the architectures are generic and the analysis can be extrapolated to many different data formats, web APIs, and data servers.

  11. Services, Perspective and Directions of the Space Physics Data Facility

    NASA Technical Reports Server (NTRS)

    McGuire, Robert E.; Bilitza, Dieter; Candey, Reine A.; Chimiak, Reine A.; Cooper, John F.; Fung, Shing F.; Harris, Bernard T.; Johnson, Rita C.; King, Joseph H.; Kovalick, Tamara

    2008-01-01

    The multi-mission data and orbit services of NASA's Space Physics Data Facility (SPDF) project offer unique capabilities supporting the science of the Heliophysics Great Observatory that are highly complementary to other services now evolving in the international heliophysics data environment. The VSPO (Virtual Space Physics Observatory) service is an active portal to a wide range of distributed data sources. CDAWeb (Coordinated Data Analysis Web) offers plots, listings and file downloads for current data from many missions across the boundaries of missions and instrument types. CDAWeb now includes extensive new data from STEREO and THEMIS, plus new ROCSAT IPEI data, the latest data from all four TIMED instruments and high-resolution data from all DE-2 experiments. SSCWeb, Helioweb and our 3D Animated Orbit Viewer (TIPSOD) provide position data and identification of spacecraft and ground conjunctions. OMNIWeb, with its new extension to 1- and 5-minute resolution, provides interplanetary parameters at the Earth's bow shock. SPDF maintains NASA's CDF (Common Data Format) standard and a range of associated tools, including format translation services. These capabilities are all now available through web-services-based APIs, one element in SPDF's ongoing work to enable heliophysics community development of virtual discipline observatories (e.g. VITMO). We will demonstrate our latest data and capabilities, review the lessons we continue to learn about what science users need and value in this class of services, and discuss our current thinking on the future role and appropriate focus of the SPDF effort in the evolving and increasingly distributed heliophysics data environment.

  12. Enabling Cross-Platform Clinical Decision Support through Web-Based Decision Support in Commercial Electronic Health Record Systems: Proposal and Evaluation of Initial Prototype Implementations

    PubMed Central

    Zhang, Mingyuan; Velasco, Ferdinand T.; Musser, R. Clayton; Kawamoto, Kensaku

    2013-01-01

    Enabling clinical decision support (CDS) across multiple electronic health record (EHR) systems has been a desired but largely unattained aim of clinical informatics, especially in commercial EHR systems. A potential opportunity for enabling such scalable CDS is to leverage vendor-supported, Web-based CDS development platforms along with vendor-supported application programming interfaces (APIs). Here, we propose a potential staged approach for enabling such scalable CDS, starting with the use of custom EHR APIs and moving towards standardized EHR APIs to facilitate interoperability. We analyzed three commercial EHR systems for their capabilities to support the proposed approach, and we implemented prototypes in all three systems. Based on these analyses and prototype implementations, we conclude that the approach proposed is feasible, already supported by several major commercial EHR vendors, and potentially capable of enabling cross-platform CDS at scale. PMID:24551426

  13. When Will It Be …?: U.S. Naval Observatory Calendar Computers

    NASA Astrophysics Data System (ADS)

    Bartlett, Jennifer L.; Chizek Frouard, Malynda; Lesniak, Michael V.

    2016-06-01

    Sensitivity to religious calendars is increasingly expected when planning activities. Consequently, the U.S. Naval Observatory (USNO) has redesigned its on-line calendar resources to allow the computation of select religious dates for specific years via an application programming interface (API). This flexible interface returns dates in JavaScript Object Notation (JSON) that can be incorporated into third-party websites or applications. Currently, the services compute Christian, Islamic, and Jewish events. The “Dates of Ash Wednesday and Easter” service (http://aa.usno.navy.mil/data/docs/easter.php) returns the dates of these two events for years after 1582 C.E. (1582 A.D.). The method of the western Christian churches is used to determine when Easter, a moveable feast, occurs. The “Dates of Islamic New Year and Ramadan” service (http://aa.usno.navy.mil/data/docs/islamic.php) returns the approximate Gregorian dates of these two events for years after 1582 C.E. (990 A.H.); Julian dates are computed for the years 622-1582 C.E. (1-990 A.H.). The appropriate year in the Islamic calendar (anno Hegira) is also provided. Each event begins at 6 P.M. or sunset on the preceding day. These events are computed using a tabular calendar for planning purposes; in practice, the actual event is determined by observation of the appropriate new Moon. The “First Day of Passover” service (http://aa.usno.navy.mil/data/docs/passover.php) returns the Gregorian date corresponding to Nisan 15 for years after 1582 C.E. (5342 A.M.); Julian dates are computed for the years 360-1582 C.E. (4120-5342 A.M.). The appropriate year in the Jewish calendar (anno Mundi) is also provided. Passover begins at 6 P.M. or sunset on the preceding day. On-line documentation for using the API-enabled calendar computers, including sample calls, is available (http://aa.usno.navy.mil/data/docs/api.php). The same web page also describes how to reach the Complete Sun and Moon Data for One Day, Phases of the Moon, Solar Eclipse Computer, Day and Night Across the Earth, and Apparent Disk of a Solar System Object services using API calls. An “Introduction to Calendars” (http://aa.usno.navy.mil/faq/docs/calendars.php) provides an overview of the topic and links to additional resources.

  14. SeqHound: biological sequence and structure database as a platform for bioinformatics research

    PubMed Central

    2002-01-01

    Background SeqHound has been developed as an integrated biological sequence, taxonomy, annotation and 3-D structure database system. It provides a high-performance server platform for bioinformatics research in a locally-hosted environment. Results SeqHound is based on the National Center for Biotechnology Information data model and programming tools. It offers daily updated contents of all Entrez sequence databases in addition to 3-D structural data and information about sequence redundancies, sequence neighbours, taxonomy, complete genomes, functional annotation including Gene Ontology terms and literature links to PubMed. SeqHound is accessible via a web server through a Perl, C or C++ remote API or an optimized local API. It provides functionality necessary to retrieve specialized subsets of sequences, structures and structural domains. Sequences may be retrieved in FASTA, GenBank, ASN.1 and XML formats. Structures are available in ASN.1, XML and PDB formats. Emphasis has been placed on complete genomes, taxonomy, domain and functional annotation as well as 3-D structural functionality in the API, while fielded text indexing functionality remains under development. SeqHound also offers a streamlined WWW interface for simple web-user queries. Conclusions The system has proven useful in several published bioinformatics projects such as the BIND database and offers a cost-effective infrastructure for research. SeqHound will continue to develop and be provided as a service of the Blueprint Initiative at the Samuel Lunenfeld Research Institute. The source code and examples are available under the terms of the GNU public license at the Sourceforge site http://sourceforge.net/projects/slritools/ in the SLRI Toolkit. PMID:12401134

  15. Access Control of Web- and Java-Based Applications

    NASA Technical Reports Server (NTRS)

    Tso, Kam S.; Pajevski, Michael J.

    2013-01-01

    Cybersecurity has become a great concern as threats of service interruption, unauthorized access, stealing and altering of information, and spreading of viruses have become more prevalent and serious. Application-layer access control is a critical component in the overall security solution, which also includes encryption, firewalls, virtual private networks, antivirus software, and intrusion detection. An access control solution, based on an open-source access manager augmented with custom software components, was developed to provide protection to both Web-based and Java-based client and server applications. The DISA Security Service (DISA-SS) provides common access control capabilities for AMMOS software applications through a set of application programming interfaces (APIs) and network-accessible security services for authentication, single sign-on, authorization checking, and authorization policy management. The OpenAM access management technology designed for Web applications can be extended to meet the needs of Java thick clients and standalone servers that are commonly used in the JPL AMMOS environment. The DISA-SS reusable components have greatly reduced the effort for each AMMOS subsystem to develop its own access control strategy. The novelty of this work is that it leverages an open-source access management product that was designed for Web-based applications to provide access control for Java thick clients and Java standalone servers. Thick clients and standalone servers are still commonly used in businesses and government, especially for applications that require rich graphical user interfaces and high-performance visualization that cannot be met by thin clients running in Web browsers.

  16. Mental illness and principal physical diagnoses among Asian American and Pacific Islander users of emergency services.

    PubMed

    Chen, Huey Jen

    2005-12-01

    The stigma of mental illness is one of the factors that prevents Asian Americans/Pacific Islanders (APIs) from seeking formal mental health services. A somatic complaint is more acceptable in expressing psychiatric/emotional distress. Admission diagnoses in API emergency service users with secondary psychiatric diagnoses were identified from the 2001 National Inpatient Sample (NIS) of the Healthcare Cost and Utilization Project (HCUP). The sample consisted of 10,623 adult APIs. The study examined the differences in the six leading principal physical admission diagnoses between API emergency service users with psychiatric diagnoses and those without psychiatric diagnoses. Several of the study findings create concern (e.g., the higher percentage of APIs with psychiatric diagnosis who were discharged against medical advice, the high percentage admitted with medication intoxication). Further study is needed to provide guidance for clinical practice.

  17. Operational Use of OGC Web Services at the Met Office

    NASA Astrophysics Data System (ADS)

    Wright, Bruce

    2010-05-01

    The Met Office has adopted the Service-Orientated Architecture paradigm to deliver services to a range of customers through Rich Internet Applications (RIAs). The approach uses standard Open Geospatial Consortium (OGC) web services to provide information to web-based applications through a range of generic data services. "Invent", the Met Office beta site, is used to showcase the Met Office's future plans for presenting web-based weather forecasts, products and information to the public. This currently hosts a freely accessible Weather Map Viewer, written in JavaScript, which accesses a Web Map Service (WMS) to deliver innovative web-based visualizations of weather and its potential impacts to the public. The intention is to engage the public in the development of new web-based services that more accurately meet their needs. As the service is intended for public use within the UK, it has been designed to support a user base of 5 million, the analysed level of UK web traffic reaching the Met Office's public weather information site. The required scalability has been realised through the use of multi-tier tile caching: - WMS requests are made for 256x256 tiles for fixed areas and zoom levels; - a Tile Cache, developed in house, efficiently serves tiles on demand, managing WMS requests for new tiles; - Edge Servers, externally hosted by Akamai, provide a highly scalable (UK-centric) service for pre-cached tiles, passing new requests to the Tile Cache; - the Invent Weather Map Viewer uses the Google Maps API to request tiles from the Edge Servers. (We would expect to make use of the Web Map Tiling Service when it becomes an OGC standard.) The Met Office delivers specialist commercial products to market sectors such as transport, utilities and defence, which exploit a Web Feature Service (WFS) for data relating forecasts and observations to specific geographic features, and a Web Coverage Service (WCS) for sub-selections of gridded data. These are locally rendered as maps or graphs, and combined with the WMS pre-rendered images and text in a FLEX application, to provide sophisticated, user impact-based views of the weather. The OGC web services supporting these applications have been developed in collaboration with commercial companies. Visual Weather was originally a desktop application for forecasters, but IBL has developed it to expose the full range of forecast and observation data through standard web services (WCS and WMS). Forecasts and observations relating to specific locations and geographic features are held in an Oracle Database, and exposed as a WFS using Snowflake Software's GO-Publisher application. The Met Office has worked closely with both IBL and Snowflake Software to ensure that the web services provided strike a balance between conformance to the standards and performance in an operational environment. This has proved challenging in areas where the standards are rapidly evolving (e.g. WCS) or do not allow adequate description of the Met-Ocean domain (e.g. multiple time coordinates and parametric vertical coordinates). It has also become clear that careful selection of the features to expose, based on the way in which users are expected to query those features, is necessary to deliver adequate performance. These experiences are providing useful 'real-world' input into the recently launched OGC MetOcean Domain Working Group and World Meteorological Organisation (WMO) initiatives in this area.
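
    To make the tiling arrangement concrete, the sketch below shows how a client might form one of the fixed 256x256 GetMap requests described above. The server URL and layer name are placeholders; the GetMap key-value parameters follow OGC WMS 1.1.1, and the tile arithmetic assumes the usual Web Mercator slippy-map grid.

        import urllib.parse

        # Half-width of the EPSG:3857 (Web Mercator) world extent, in metres.
        WEB_MERCATOR_EXTENT = 20037508.342789244

        def tile_bbox(z, x, y):
            """Bounding box of slippy-map tile (z, x, y) in EPSG:3857 metres."""
            size = 2 * WEB_MERCATOR_EXTENT / (2 ** z)   # tile edge length at zoom z
            minx = -WEB_MERCATOR_EXTENT + x * size
            maxy = WEB_MERCATOR_EXTENT - y * size
            return minx, maxy - size, minx + size, maxy

        def wms_tile_url(base, layer, z, x, y):
            """Build a 256x256 WMS 1.1.1 GetMap URL for one fixed grid tile."""
            minx, miny, maxx, maxy = tile_bbox(z, x, y)
            params = {
                "SERVICE": "WMS", "VERSION": "1.1.1", "REQUEST": "GetMap",
                "LAYERS": layer, "STYLES": "", "SRS": "EPSG:3857",
                "BBOX": f"{minx},{miny},{maxx},{maxy}",
                "WIDTH": 256, "HEIGHT": 256, "FORMAT": "image/png",
            }
            return base + "?" + urllib.parse.urlencode(params)

        # Placeholder server and layer, for illustration only.
        print(wms_tile_url("http://example.met/wms", "precipitation", 5, 15, 10))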

  18. The HydroShare Collaborative Repository for the Hydrology Community

    NASA Astrophysics Data System (ADS)

    Tarboton, D. G.; Idaszak, R.; Horsburgh, J. S.; Ames, D. P.; Goodall, J. L.; Couch, A.; Hooper, R. P.; Dash, P. K.; Stealey, M.; Yi, H.; Bandaragoda, C.; Castronova, A. M.

    2017-12-01

    HydroShare is an online collaboration system for sharing hydrologic data, analytical tools, and models. It supports the sharing of, and collaboration around, "resources", which are defined by standardized content types for data formats and models commonly used in hydrology. With HydroShare you can: share your data and models with colleagues; manage who has access to the content that you share; share, access, visualize and manipulate a broad set of hydrologic data types and models; use the web services application programming interface (API) to program automated and client access; publish data and models and obtain a citable digital object identifier (DOI); aggregate your resources into collections; discover and access data and models published by others; and use web apps to visualize, analyze and run models on data in HydroShare. This presentation will describe the functionality and architecture of HydroShare, highlighting our approach to making this system easy to use and serving the needs of the hydrology community represented by the Consortium of Universities for the Advancement of Hydrologic Sciences, Inc. (CUAHSI). Metadata for uploaded files is harvested automatically or captured using easy-to-use web user interfaces. Users are encouraged to add or create resources in HydroShare early in the data life cycle. To encourage this, we allow users to share and collaborate on HydroShare resources privately among individual users or groups, entering metadata while doing the work. HydroShare also provides enhanced functionality for users through web apps that provide tools and computational capability for actions on resources. HydroShare's architecture broadly comprises: (1) resource storage, (2) a resource exploration website, and (3) web apps for actions on resources. System components are loosely coupled and interact through APIs, which enhances robustness, as components can be upgraded and advanced relatively independently. The full power of this paradigm is the extensibility it supports. Web apps are hosted on separate servers, which may be third-party servers. They are registered in HydroShare using a web app resource that configures the connectivity for them to be discovered and launched directly from the resource types they are associated with.
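
    As a brief sketch of the kind of automated client access the web services API enables, the snippet below lists public resources over HTTP. The /hsapi/resource/ path appears in HydroShare's public REST documentation, but the pagination and field names in the reply should be treated as assumptions to verify against that documentation.

        import requests

        BASE = "https://www.hydroshare.org/hsapi"

        def list_public_resources(page=1):
            """Fetch one page of the public resource listing as JSON."""
            resp = requests.get(f"{BASE}/resource/", params={"page": page}, timeout=30)
            resp.raise_for_status()
            return resp.json()

        # "results", "resource_id", and "resource_title" are assumed reply fields.
        for res in list_public_resources().get("results", []):
            print(res.get("resource_id"), "-", res.get("resource_title"))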

  19. From WSN towards WoT: Open API Scheme Based on oneM2M Platforms.

    PubMed

    Kim, Jaeho; Choi, Sung-Chan; Ahn, Il-Yeup; Sung, Nak-Myoung; Yun, Jaeseok

    2016-10-06

    Conventional computing systems have been able to be integrated into daily objects and connected to each other due to advances in computing and network technologies, such as wireless sensor networks (WSNs), forming a global network infrastructure, called the Internet of Things (IoT). To support the interconnection and interoperability between heterogeneous IoT systems, the availability of standardized, open application programming interfaces (APIs) is one of the key features of common software platforms for IoT devices, gateways, and servers. In this paper, we present a standardized way of extending previously-existing WSNs towards IoT systems, building the world of the Web of Things (WoT). Based on the oneM2M software platforms developed in the previous project, we introduce a well-designed open API scheme and device-specific thing adaptation software (TAS) enabling WSN elements, such as a wireless sensor node, to be accessed in a standardized way on a global scale. Three pilot services are implemented (i.e., a WiFi-enabled smart flowerpot, voice-based control for ZigBee-connected home appliances, and WiFi-connected AR.Drone control) to demonstrate the practical usability of the open API scheme and TAS modules. Full details on the method of integrating WSN elements into three example systems are described at the programming code level, which is expected to help future researchers in integrating their WSN systems in IoT platforms, such as oneM2M. We hope that the flexibly-deployable, easily-reusable common open API scheme and TAS-based integration method working with the oneM2M platforms will help the conventional WSNs in diverse industries evolve into the emerging WoT solutions.

  20. libChEBI: an API for accessing the ChEBI database.

    PubMed

    Swainston, Neil; Hastings, Janna; Dekker, Adriano; Muthukrishnan, Venkatesh; May, John; Steinbeck, Christoph; Mendes, Pedro

    2016-01-01

    ChEBI is a database and ontology of chemical entities of biological interest. It is widely used as a source of identifiers to facilitate unambiguous reference to chemical entities within biological models, databases, ontologies and literature. ChEBI contains a wealth of chemical data, covering over 46,500 distinct chemical entities, and related data such as chemical formula, charge, molecular mass, structure, synonyms and links to external databases. Furthermore, ChEBI is an ontology, and thus provides meaningful links between chemical entities. Unlike many other resources, ChEBI is fully human-curated, providing a reliable, non-redundant collection of chemical entities and related data. While ChEBI is supported by a web service for programmatic access and a number of download files, it does not have an API library to facilitate the use of ChEBI and its data in cheminformatics software. To provide this missing functionality, libChEBI, a comprehensive API library for accessing ChEBI data, is introduced. libChEBI is available in Java, Python and MATLAB versions from http://github.com/libChEBI, and provides full programmatic access to all data held within the ChEBI database through a simple and documented API. libChEBI is reliant upon the (automated) download and regular update of flat files that are held locally. As such, libChEBI can be embedded in both on- and off-line software applications. libChEBI allows better support of ChEBI and its data in the development of new cheminformatics software. Covering three key programming languages, it allows for the entirety of the ChEBI database to be accessed easily and quickly through a simple API. All code is open access and freely available.
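
    For a sense of the API's shape, here is a hypothetical usage sketch of the Python version (libchebipy). The class and method names follow the project's published examples as best recalled; verify them against http://github.com/libChEBI before relying on them.

        # Assumed import and method names; treat as illustrative, not definitive.
        from libchebipy import ChebiEntity

        entity = ChebiEntity("CHEBI:15377")   # CHEBI:15377 is water
        print(entity.get_name())              # e.g. "water"
        print(entity.get_formula())           # e.g. "H2O"
        print(entity.get_mass())              # mass as held in the local flat files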

  1. From WSN towards WoT: Open API Scheme Based on oneM2M Platforms

    PubMed Central

    Kim, Jaeho; Choi, Sung-Chan; Ahn, Il-Yeup; Sung, Nak-Myoung; Yun, Jaeseok

    2016-01-01

    Conventional computing systems have been able to be integrated into daily objects and connected to each other due to advances in computing and network technologies, such as wireless sensor networks (WSNs), forming a global network infrastructure, called the Internet of Things (IoT). To support the interconnection and interoperability between heterogeneous IoT systems, the availability of standardized, open application programming interfaces (APIs) is one of the key features of common software platforms for IoT devices, gateways, and servers. In this paper, we present a standardized way of extending previously-existing WSNs towards IoT systems, building the world of the Web of Things (WoT). Based on the oneM2M software platforms developed in the previous project, we introduce a well-designed open API scheme and device-specific thing adaptation software (TAS) enabling WSN elements, such as a wireless sensor node, to be accessed in a standardized way on a global scale. Three pilot services are implemented (i.e., a WiFi-enabled smart flowerpot, voice-based control for ZigBee-connected home appliances, and WiFi-connected AR.Drone control) to demonstrate the practical usability of the open API scheme and TAS modules. Full details on the method of integrating WSN elements into three example systems are described at the programming code level, which is expected to help future researchers in integrating their WSN systems in IoT platforms, such as oneM2M. We hope that the flexibly-deployable, easily-reusable common open API scheme and TAS-based integration method working with the oneM2M platforms will help the conventional WSNs in diverse industries evolve into the emerging WoT solutions. PMID:27782058

  2. Research on models of Digital City geo-information sharing platform

    NASA Astrophysics Data System (ADS)

    Xu, Hanwei; Liu, Zhihui; Badawi, Rami; Liu, Haiwang

    2009-10-01

    The data associated with a Digital City is large in volume, heterogeneous, and multi-dimensional. Under the original copy-based method of data sharing, application departments cannot keep data up to date or secure in real time. This paper first analyzes various patterns for sharing Digital City information and, on this basis, proposes a new sharing mechanism based on GIS services, in which data producers deliver Geographic Information Services to application users through a Web API, allowing producers and users to each focus on their own strengths. A supermarket management application system is then used as an example to demonstrate the correctness and effectiveness of the proposed method.

  3. The Live Access Server Scientific Product Generation Through Workflow Orchestration

    NASA Astrophysics Data System (ADS)

    Hankin, S.; Calahan, J.; Li, J.; Manke, A.; O'Brien, K.; Schweitzer, R.

    2006-12-01

    The Live Access Server (LAS) is a well-established Web application for the display and analysis of geoscience data sets. The software, which can be downloaded and installed by anyone, gives data providers an easy way to establish services for their on-line data holdings, so their users can make plots; create and download data subsets; compare (difference) fields; and perform simple analyses. Now at version 7.0, LAS has been in operation since 1994. The current "Armstrong" release of LAS V7 consists of three components in a tiered architecture: user interface, workflow orchestration and Web Services. The LAS user interface (UI) communicates with the LAS Product Server via an XML protocol embedded in an HTTP "get" URL. Libraries (APIs) have been developed in Java, JavaScript and Perl that can readily generate this URL. As a result of this flexibility it is common to find LAS user interfaces of radically different character, tailored to the nature of specific datasets or the mindset of specific users. When a request is received by the LAS Product Server (LPS -- the workflow orchestration component), business logic converts this request into a series of Web Service requests invoked via SOAP. These "back-end" Web services perform data access and generate products (visualizations, data subsets, analyses, etc.). LPS then packages these outputs into final products (typically HTML pages) via Jakarta Velocity templates for delivery to the end user. "Fine grained" data access is performed by back-end services that may utilize JDBC for database access; the OPeNDAP "DAPPER" protocol; or (in principle) the OGC WFS protocol. Back-end visualization services are commonly legacy science applications wrapped in Java or Python (or Perl) classes and deployed as Web Services accessible via SOAP. Ferret is the default visualization application used by LAS, though other applications such as Matlab, CDAT, and GrADS can also be used. Other back-end services may include generation of Google Earth layers using KML; generation of maps via WMS or ArcIMS protocols; and data manipulation with Unix utilities.
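
    The "XML protocol embedded in an HTTP get URL" amounts to URL-encoding a request document into a query parameter. The sketch below illustrates the idea in Python; the element and attribute names in this lasRequest are simplified placeholders, not the exact LAS schema, and the server URL is hypothetical.

        import urllib.parse

        # Illustrative request document; the real lasRequest schema is defined
        # by the LAS distribution, not by this sketch.
        las_request = """<?xml version="1.0"?>
        <lasRequest>
          <link match="/lasdata/operations/operation[@ID='Plot_2D_XY']"/>
          <args><constraint type="variable" value="sst"/></args>
        </lasRequest>"""

        # Embed the XML in the "xml" query parameter of an HTTP GET URL.
        url = ("http://example.org/las/ProductServer.do?xml="
               + urllib.parse.quote(las_request))
        print(url)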

  4. NASA GSFC Space Weather Center - Innovative Space Weather Dissemination: Web-Interfaces, Mobile Applications, and More

    NASA Technical Reports Server (NTRS)

    Maddox, Marlo; Zheng, Yihua; Rastaetter, Lutz; Taktakishvili, A.; Mays, M. L.; Kuznetsova, M.; Lee, Hyesook; Chulaki, Anna; Hesse, Michael; Mullinix, Richard

    2012-01-01

    The NASA GSFC Space Weather Center (http://swc.gsfc.nasa.gov) is committed to providing forecasts, alerts, research, and educational support to address NASA's space weather needs - in addition to the needs of the general space weather community. We provide a host of services including spacecraft anomaly resolution, historical impact analysis, real-time monitoring and forecasting, custom space weather alerts and products, weekly summaries and reports, and most recently - video casts. There are many challenges in providing accurate descriptions of past, present, and expected space weather events - and the Space Weather Center at NASA GSFC employs several innovative solutions to provide access to a comprehensive collection of both observational data, as well as space weather model/simulation data. We'll describe the challenges we've faced with managing hundreds of data streams, running models in real-time, data storage, and data dissemination. We'll also highlight several systems and tools that are utilized by the Space Weather Center in our daily operations, all of which are available to the general community as well. These systems and services include a web-based application called the Integrated Space Weather Analysis System (iSWA http://iswa.gsfc.nasa.gov), two mobile space weather applications for both iOS and Android devices, an external API for web-service style access to data, Google Earth-compatible data products, and a downloadable client-based visualization tool.

  5. Gee Fu: a sequence version and web-services database tool for genomic assembly, genome feature and NGS data.

    PubMed

    Ramirez-Gonzalez, Ricardo; Caccamo, Mario; MacLean, Daniel

    2011-10-01

    Scientists now use high-throughput sequencing technologies and short-read assembly methods to create draft genome assemblies in just days. Tools such as assemblers and workflow-management environments make it easy for a non-specialist to implement complicated pipelines that produce genome assemblies and annotations very quickly. Such accessibility results in a proliferation of assemblies and associated files, often for many organisms. These assemblies are used as working references by many different workers, from bioinformaticians doing gene prediction to bench scientists designing primers for PCR. Here we describe Gee Fu, a database tool for genomic assembly and feature data, including next-generation sequence alignments. Gee Fu is a Ruby on Rails web application over a feature database that provides web and console interfaces for input, visualization of feature data via AnnoJ, access to data through a web-service interface, an API for direct data access by Ruby scripts, and access to feature data stored in BAM files. Gee Fu provides a platform for storing and sharing different versions of an assembly and associated features that can be accessed and updated by bench biologists and bioinformaticians in ways that are easy and useful for each. http://tinyurl.com/geefu dan.maclean@tsl.ac.uk.

  6. E-DECIDER Decision Support Gateway For Earthquake Disaster Response

    NASA Astrophysics Data System (ADS)

    Glasscoe, M. T.; Stough, T. M.; Parker, J. W.; Burl, M. C.; Donnellan, A.; Blom, R. G.; Pierce, M. E.; Wang, J.; Ma, Y.; Rundle, J. B.; Yoder, M. R.

    2013-12-01

    Earthquake Data Enhanced Cyber-Infrastructure for Disaster Evaluation and Response (E-DECIDER) is a NASA-funded project developing capabilities for decision-making utilizing remote sensing data and modeling software in order to provide decision support for earthquake disaster management and response. E-DECIDER incorporates earthquake forecasting methodology and geophysical modeling tools developed through NASA's QuakeSim project in order to produce standards-compliant map data products to aid in decision-making following an earthquake. Remote sensing and geodetic data, in conjunction with modeling and forecasting tools, help provide both long-term planning information for disaster management decision makers as well as short-term information following earthquake events (i.e. identifying areas where the greatest deformation and damage has occurred and emergency services may need to be focused). E-DECIDER utilizes a service-based GIS model for its cyber-infrastructure in order to produce standards-compliant products for different user types with multiple service protocols (such as KML, WMS, WFS, and WCS). The goal is to make complex GIS processing and domain-specific analysis tools more accessible to general users through software services as well as provide system sustainability through infrastructure services. The system comprises several components, which include: a GeoServer for thematic mapping and data distribution, a geospatial database for storage and spatial analysis, web service APIs, including simple-to-use REST APIs for complex GIS functionalities, and geoprocessing tools including python scripts to produce standards-compliant data products. These are then served to the E-DECIDER decision support gateway (http://e-decider.org), the E-DECIDER mobile interface, and to the Department of Homeland Security decision support middleware UICDS (Unified Incident Command and Decision Support). The E-DECIDER decision support gateway features a web interface that delivers map data products including deformation modeling results (slope change and strain magnitude) and aftershock forecasts, with remote sensing change detection results under development. These products are event triggered (from the USGS earthquake feed) and will be posted to event feeds on the E-DECIDER webpage and accessible via the mobile interface and UICDS. E-DECIDER also features a KML service that provides infrastructure information from the FEMA HAZUS database through UICDS and the mobile interface. The back-end GIS service architecture and front-end gateway components form a decision support system that is designed for ease-of-use and extensibility for end-users.

  7. A WPS Based Architecture for Climate Data Analytic Services (CDAS) at NASA

    NASA Astrophysics Data System (ADS)

    Maxwell, T. P.; McInerney, M.; Duffy, D.; Carriere, L.; Potter, G. L.; Doutriaux, C.

    2015-12-01

    Faced with unprecedented growth in the Big Data domain of climate science, NASA has developed the Climate Data Analytic Services (CDAS) framework. This framework enables scientists to execute trusted and tested analysis operations in a high-performance environment close to the massive data stores at NASA. The data is accessed in standard formats (NetCDF, HDF, etc.) in a POSIX file system and processed using trusted climate data analysis tools (ESMF, CDAT, NCO, etc.). The framework is structured as a set of interacting modules, allowing maximal flexibility in deployment choices. The current set of module managers includes: the Staging Manager, which runs the computation locally on the WPS server or remotely using tools such as Celery or SLURM; the Compute Engine Manager, which runs the computation serially or distributed over nodes using a parallelization framework such as Celery or Spark; the Decomposition Manager, which manages strategies for distributing data over nodes; the Data Manager, which handles the import of domain data from long-term storage and manages the in-memory and disk-based caching architectures; and the Kernel Manager, where a kernel is an encapsulated computational unit that executes a processor's compute task. Each kernel is implemented in Python, exploiting existing analysis packages (e.g. CDAT), and is compatible with all CDAS compute engines and decompositions. CDAS services are accessed via a WPS API being developed in collaboration with the ESGF Compute Working Team to support server-side analytics for ESGF; an example request is sketched below. The API can be exercised using direct web-service calls, a Python script or application, or a JavaScript-based web application. Client packages in Python or JavaScript contain everything needed to make CDAS requests. The CDAS architecture brings together the tools, data storage, and high-performance computing required for timely analysis of large-scale data sets, where the data resides, to ultimately produce societal benefits. It is currently deployed at NASA in support of the Collaborative REAnalysis Technical Environment (CREATE) project, which centralizes numerous global reanalysis datasets onto a single advanced data analytics platform. This service permits decision makers to investigate climate changes around the globe, inspect model trends, compare multiple reanalysis datasets, and examine variability.
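
    A direct web-service call might look like the following sketch. The endpoint URL, process identifier, and datainputs values are placeholders invented for illustration; the key-value encoding itself follows the OGC WPS 1.0.0 convention that the CDAS API builds on.

        import requests

        params = {
            "service": "WPS",
            "version": "1.0.0",
            "request": "Execute",
            "identifier": "ensemble_mean",               # hypothetical kernel name
            "datainputs": "variable=tas;domain=global",  # hypothetical inputs
        }
        # Placeholder endpoint; the deployed CDAS service defines the real URL.
        resp = requests.get("https://cdas.example.nasa.gov/wps",
                            params=params, timeout=60)
        print(resp.status_code, resp.text[:200])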

  8. COEUS: “semantic web in a box” for biomedical applications

    PubMed Central

    2012-01-01

    Background As the “omics” revolution unfolds, the growth in data quantity and diversity is bringing about the need for pioneering bioinformatics software, capable of significantly improving the research workflow. To cope with these computer science demands, biomedical software engineers are adopting emerging semantic web technologies that better suit the life sciences domain. The latter’s complex relationships are easily mapped into semantic web graphs, enabling a superior understanding of collected knowledge. Despite increased awareness of semantic web technologies in bioinformatics, their use is still limited. Results COEUS is a new semantic web framework, aiming at a streamlined application development cycle and following a “semantic web in a box” approach. The framework provides a single package including advanced data integration and triplification tools, base ontologies, a web-oriented engine and a flexible exploration API. Resources can be integrated from heterogeneous sources, including CSV and XML files or SQL and SPARQL query results, and mapped directly to one or more ontologies. Advanced interoperability features include REST services, a SPARQL endpoint and LinkedData publication. These enable the creation of multiple applications for web, desktop or mobile environments, and empower a new knowledge federation layer. Conclusions The platform, targeted at biomedical application developers, provides a complete skeleton ready for rapid application deployment, enhancing the creation of new semantic information systems. COEUS is available as open source at http://bioinformatics.ua.pt/coeus/. PMID:23244467

  9. COEUS: "semantic web in a box" for biomedical applications.

    PubMed

    Lopes, Pedro; Oliveira, José Luís

    2012-12-17

    As the "omics" revolution unfolds, the growth in data quantity and diversity is bringing about the need for pioneering bioinformatics software, capable of significantly improving the research workflow. To cope with these computer science demands, biomedical software engineers are adopting emerging semantic web technologies that better suit the life sciences domain. The latter's complex relationships are easily mapped into semantic web graphs, enabling a superior understanding of collected knowledge. Despite increased awareness of semantic web technologies in bioinformatics, their use is still limited. COEUS is a new semantic web framework, aiming at a streamlined application development cycle and following a "semantic web in a box" approach. The framework provides a single package including advanced data integration and triplification tools, base ontologies, a web-oriented engine and a flexible exploration API. Resources can be integrated from heterogeneous sources, including CSV and XML files or SQL and SPARQL query results, and mapped directly to one or more ontologies. Advanced interoperability features include REST services, a SPARQL endpoint and LinkedData publication. These enable the creation of multiple applications for web, desktop or mobile environments, and empower a new knowledge federation layer. The platform, targeted at biomedical application developers, provides a complete skeleton ready for rapid application deployment, enhancing the creation of new semantic information systems. COEUS is available as open source at http://bioinformatics.ua.pt/coeus/.

  10. 49 CFR 195.205 - Repair, alteration and reconstruction of aboveground breakout tanks that have been in service.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ...-refrigerated and tanks built to API Standard 650 or its predecessor Standard 12C, repair, alteration, and reconstruction must be in accordance with API Standard 653. (2) For tanks built to API Specification 12F or API..., examination, and material requirements of those respective standards. (3) For high pressure tanks built to API...

  11. 49 CFR 195.205 - Repair, alteration and reconstruction of aboveground breakout tanks that have been in service.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ...-refrigerated and tanks built to API Standard 650 or its predecessor Standard 12C, repair, alteration, and reconstruction must be in accordance with API Standard 653. (2) For tanks built to API Specification 12F or API..., examination, and material requirements of those respective standards. (3) For high pressure tanks built to API...

  12. 49 CFR 195.205 - Repair, alteration and reconstruction of aboveground breakout tanks that have been in service.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ...-refrigerated and tanks built to API Standard 650 or its predecessor Standard 12C, repair, alteration, and reconstruction must be in accordance with API Standard 653. (2) For tanks built to API Specification 12F or API..., examination, and material requirements of those respective standards. (3) For high pressure tanks built to API...

  13. 49 CFR 195.205 - Repair, alteration and reconstruction of aboveground breakout tanks that have been in service.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ...-refrigerated and tanks built to API Standard 650 or its predecessor Standard 12C, repair, alteration, and reconstruction must be in accordance with API Standard 653. (2) For tanks built to API Specification 12F or API..., examination, and material requirements of those respective standards. (3) For high pressure tanks built to API...

  14. Re-Framing the World Wide Web

    ERIC Educational Resources Information Center

    Black, August

    2011-01-01

    The research presented in this dissertation studies and describes how technical standards, protocols, and application programming interfaces (APIs) shape the aesthetic, functional, and affective nature of our most dominant mode of online communication, the World Wide Web (WWW). I examine the politically charged and contentious battle over browser…

  15. Mercury: Reusable software application for Metadata Management, Data Discovery and Access

    NASA Astrophysics Data System (ADS)

    Devarakonda, Ranjeet; Palanisamy, Giri; Green, James; Wilson, Bruce E.

    2009-12-01

    Mercury is a federated metadata harvesting, data discovery and access tool based on both open source packages and custom-developed software. It was originally developed for NASA, and the Mercury development consortium now includes funding from NASA, USGS, and DOE. Mercury is itself a reusable toolset for metadata, with current use in 12 different projects. Mercury also supports the reuse of metadata by enabling searching across a range of metadata specifications and standards including XML, Z39.50, FGDC, Dublin-Core, Darwin-Core, EML, and ISO-19115. Mercury provides a single portal to information contained in distributed data management systems. It collects metadata and key data from contributing project servers distributed around the world and builds a centralized index. The Mercury search interfaces then allow users to perform simple, fielded, spatial and temporal searches across these metadata sources. One of the major goals of the recent redesign of Mercury was to improve the software reusability across the projects which currently fund its continuing development. These projects span a range of land, atmosphere, and ocean ecological communities and have a number of common needs for metadata searches, but they also have a number of needs specific to one or a few projects. To balance these common and project-specific needs, Mercury's architecture includes three major reusable components: a harvester engine, an indexing system and a user interface component. The harvester engine is responsible for harvesting metadata records from various distributed servers around the USA and around the world. The harvester software was packaged in such a way that all Mercury projects use the same harvester scripts, with each project driven by its own set of configuration files. The harvested files are then passed to the indexing system, where each of the fields in these structured metadata records is indexed properly, so that the query engine can perform simple, keyword, spatial and temporal searches across these metadata sources. The search user interface software has two API categories: a common core API which is used by all the Mercury user interfaces for querying the index, and a customized API for project-specific user interfaces. For our work in producing a reusable, portable, robust, feature-rich application, Mercury received a 2008 NASA Earth Science Data Systems Software Reuse Working Group Peer-Recognition Software Reuse Award. The new Mercury system is based on a Service Oriented Architecture and effectively reuses components for various services such as the Thesaurus Service, Gazetteer Web Service and UDDI Directory Services. The software also provides various search services including RSS, Geo-RSS, OpenSearch, Web Services and Portlets, an integrated shopping cart to order datasets from various data centers (ORNL DAAC, NSIDC), and integrated visualization tools. Other features include filtering and dynamic sorting of search results, book-markable search results, and the ability to save, retrieve, and modify search criteria.

  16. Standardized mappings--a framework to combine different semantic mappers into a standardized web-API.

    PubMed

    Neuhaus, Philipp; Doods, Justin; Dugas, Martin

    2015-01-01

    Automatic coding of medical terms is an important, but highly complicated and laborious task. To compare and evaluate different strategies, a framework with a standardized web interface was created. Two UMLS mapping strategies are compared to demonstrate the interface. The framework is a Java Spring application running on a Tomcat application server. It accepts different parameters and returns results in JSON format. To demonstrate the framework, a list of medical data items was mapped by two different methods: similarity search in a large table of terminology codes versus search in a manually curated repository. These mappings were reviewed by a specialist. The evaluation shows that the framework is flexible (due to standardized interfaces like HTTP and JSON), performant and reliable. The accuracy of automatically assigned codes is limited (up to 40%). Combining different semantic mappers into a standardized Web-API is feasible. This framework can be easily enhanced due to its modular design.
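
    A client of such a framework would see a uniform HTTP-in / JSON-out contract regardless of which mapper answers the request. The sketch below illustrates that contract in Python; the host, path, and parameter names are hypothetical stand-ins, since the paper does not publish the concrete URL scheme.

        import requests

        def map_term(term, mapper="umls-similarity"):
            """Ask one configured semantic mapper to code a free-text term."""
            resp = requests.get(
                "http://mapping.example.org/api/map",     # placeholder endpoint
                params={"term": term, "mapper": mapper},  # placeholder parameters
                timeout=30,
            )
            resp.raise_for_status()
            return resp.json()  # e.g. a list of candidate codes with scores

        print(map_term("myocardial infarction"))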

  17. Using Amazon Web Services (AWS) to enable real-time, remote sensing of biophysical and anthropogenic conditions in green infrastructure systems in Philadelphia, an ultra-urban application of the Internet of Things (IoT)

    NASA Astrophysics Data System (ADS)

    Montalto, F. A.; Yu, Z.; Soldner, K.; Israel, A.; Fritch, M.; Kim, Y.; White, S.

    2017-12-01

    Urban stormwater utilities are increasingly using decentralized green stormwater infrastructure (GSI) systems to capture stormwater and achieve compliance with regulations. Because environmental conditions and designs vary by GSI facility, monitoring GSI systems under a range of conditions is essential. Conventional monitoring efforts can be costly because in-field data logging requires high data-transmission rates. The Internet of Things (IoT) can be used to more cost-effectively collect, store, and publish GSI monitoring data. Using 3G mobile networks, a cloud-based database was built on an Amazon Web Services (AWS) EC2 virtual machine to store and publish data collected with environmental sensors deployed in the field. This database can store multi-dimensional time-series data, as well as photos and other observations logged by citizen scientists through a public-engagement mobile app, via a new Application Programming Interface (API). Also on the AWS EC2 virtual machine, a real-time QAQC flagging algorithm was developed to validate the sensor data streams; a simplified sketch of such a pass is shown below.
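
    The flagging pass might look like the following Python sketch, which applies a range check and a simple spike check to a stream of readings. The thresholds and flag labels are invented for illustration; a production system would tune them per sensor and site.

        from dataclasses import dataclass

        @dataclass
        class Reading:
            timestamp: str
            value: float

        def flag(readings, lo=0.0, hi=500.0, max_jump=50.0):
            """Yield (reading, flag) pairs: range check, then spike check."""
            prev = None
            for r in readings:
                if not (lo <= r.value <= hi):
                    yield r, "RANGE"       # physically implausible value
                elif prev is not None and abs(r.value - prev.value) > max_jump:
                    yield r, "SPIKE"       # implausible jump between samples
                else:
                    yield r, "OK"
                prev = r

        data = [Reading("t0", 12.0), Reading("t1", 300.0), Reading("t2", 999.0)]
        for reading, label in flag(data):
            print(reading.timestamp, reading.value, label)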

  18. The deegree framework - Spatial Data Infrastructure solution for end-users and developers

    NASA Astrophysics Data System (ADS)

    Kiehle, Christian; Poth, Andreas

    2010-05-01

    The open source software framework deegree is a comprehensive implementation of standards as defined by ISO and the Open Geospatial Consortium (OGC). It has been developed with two goals in mind: provide a uniform framework for implementing Spatial Data Infrastructures (SDI) and adhere to standards as strictly as possible. Although it is open source software (GNU Lesser General Public License, LGPL), deegree has been developed with a business model in mind: providing the general building blocks of SDIs without license fees, with customization, consulting and tailoring offered by specialized companies. The core of deegree is a comprehensive Java Application Programming Interface (API) offering access to spatial features, analysis, metadata and coordinate reference systems. As a library, deegree can be, and has been, integrated as a core module inside spatial information systems. It is a reference implementation for several OGC standards and is based on an ISO 19107 geometry model. For end users, deegree is shipped as a web application providing easy-to-set-up components for web mapping and spatial analysis. Since 2000, deegree has been the backbone of many productive SDIs, first and foremost for governmental stakeholders (e.g. the Federal Agency for Cartography and Geodesy in Germany and the Ministry of Housing, Spatial Planning and the Environment in the Netherlands) as well as for research and development projects as an early adopter of standards, drafts and discussion papers. Besides mature standards like the Web Map Service, Web Feature Service and Catalogue Services, deegree also implements newer standards like the Sensor Observation Service, the Web Processing Service and the Web Coordinate Transformation Service (WCTS). While a robust background in standardization (knowledge and implementation) is a must for consultancy, standard-compliant services and encodings alone do not provide solutions for customers. The added value comes from a sophisticated set of client software and desktop and web environments. One focus is on different client solutions for specific standards like the Web Processing Service and the Web Coordinate Transformation Service. On the other hand, complex geoportal solutions composed of multiple standards and enhanced by components for user management, security and map client functionality illustrate the demanding requirements of real-world solutions. The XPlan-GML standard as defined by the German spatial planning authorities is a good example of how complex real-world requirements can get. XPlan-GML is intended to provide a framework for digital spatial planning documents and requires complex Geography Markup Language (GML) features along with Symbology Encoding (SE), Filter Encoding (FE), Web Map Services (WMS) and Web Feature Services (WFS). This complex infrastructure should be usable by urban and spatial planners and therefore requires a user-friendly graphical interface hiding the complexity of the underlying infrastructure. Based on challenges faced within customer projects, the importance of easy-to-use software components is emphasized. SDI solutions should be built upon ISO/OGC standards but, more importantly, should be user-friendly and support users in spatial data management and analysis.
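
    As a short sketch of the standards-based access a deegree instance serves, the snippet below issues a WFS GetFeature request. The server URL and feature type name are placeholders; the key-value parameters follow OGC WFS 1.1.0.

        import requests

        params = {
            "service": "WFS",
            "version": "1.1.0",
            "request": "GetFeature",
            "typename": "app:ParcelBoundary",  # hypothetical feature type
            "maxfeatures": 10,
        }
        # Placeholder endpoint; a real deployment publishes its own service URL.
        resp = requests.get("https://sdi.example.de/deegree/services",
                            params=params, timeout=30)
        print(resp.status_code, resp.headers.get("Content-Type"))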

  19. A JavaScript API for the Ice Sheet System Model (ISSM) 4.11: towards an online interactive model for the cryosphere community

    NASA Astrophysics Data System (ADS)

    Larour, Eric; Cheng, Daniel; Perez, Gilberto; Quinn, Justin; Morlighem, Mathieu; Duong, Bao; Nguyen, Lan; Petrie, Kit; Harounian, Silva; Halkides, Daria; Hayes, Wayne

    2017-12-01

    Earth system models (ESMs) are becoming increasingly complex, requiring extensive knowledge and experience to deploy and use in an efficient manner. They run on high-performance architectures that are significantly different from the everyday environments that scientists use to pre- and post-process results (i.e., MATLAB, Python). This results in models that are hard to use for non-specialists and are increasingly specific in their application. It also makes them relatively inaccessible to the wider science community, not to mention to the general public. Here, we present a new software/model paradigm that attempts to bridge the gap between the science community and the complexity of ESMs by developing a new JavaScript application program interface (API) for the Ice Sheet System Model (ISSM). This API allows cryosphere scientists to run ISSM on the client side of a web page within the JavaScript environment. When combined with a web server running ISSM (using a Python API), it enables the serving of ISSM computations in an easy and straightforward way. The deep integration and similarities between all the APIs in ISSM (MATLAB, Python, and now JavaScript) significantly shorten and simplify the turnaround of state-of-the-art science runs and their use by the larger community. We demonstrate our approach via a new Virtual Earth System Laboratory (VESL) website (http://vesl.jpl.nasa.gov, VESL (2017)).

  20. 49 CFR 195.432 - Inspection of in-service breakout tanks.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... low-pressure steel aboveground breakout tanks according to API Standard 653 (incorporated by reference... breakout tanks built to API Standard 2510 according to section 6 of API 510. (d) The intervals of...

  1. 49 CFR 195.432 - Inspection of in-service breakout tanks.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... low-pressure steel aboveground breakout tanks according to API Standard 653 (incorporated by reference... breakout tanks built to API Standard 2510 according to section 6 of API 510. (d) The intervals of...

  2. 49 CFR 195.432 - Inspection of in-service breakout tanks.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... low-pressure steel aboveground breakout tanks according to API Standard 653 (incorporated by reference... breakout tanks built to API Standard 2510 according to section 6 of API 510. (d) The intervals of...

  3. 49 CFR 195.432 - Inspection of in-service breakout tanks.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... low-pressure steel aboveground breakout tanks according to API Standard 653 (incorporated by reference... breakout tanks built to API Standard 2510 according to section 6 of API 510. (d) The intervals of...

  4. 49 CFR 195.432 - Inspection of in-service breakout tanks.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... low-pressure steel aboveground breakout tanks according to API Standard 653 (incorporated by reference... breakout tanks built to API Standard 2510 according to section 6 of API 510. (d) The intervals of...

  5. Vcs.js - Visualization Control System for the Web

    NASA Astrophysics Data System (ADS)

    Chaudhary, A.; Lipsa, D.; Doutriaux, C.; Beezley, J. D.; Williams, D. N.; Fries, S.; Harris, M. B.

    2016-12-01

    VCS is a general-purpose visualization library, optimized for climate data, which is part of the UV-CDAT system. It provides a Python API for drawing 2D plots such as line plots, scatter plots, Taylor diagrams, data colored by scalar values, vector glyphs, isocontours and map projections. VCS is based on the VTK library. Vcs.js is the corresponding JavaScript API, designed to be as close as possible to the original VCS Python API and to provide similar functionality for the Web. Vcs.js includes additional functionality when compared with VCS. This additional API is used to introspect data files available on the server and variables available in a data file. Vcs.js can display plots in the browser window. It always works with a server that reads a data file, extracts variables from the file and subsets the data. From this point, two alternate paths are possible. First, the system can render the data on the server using VCS, producing an image which is sent to the browser to be displayed. This path works for all plot types and produces a reference image identical to the images produced by VCS. It uses the VTK-Web library. As an optimization, usable in certain conditions, a second path is possible: data is packed and sent to the browser, which uses a JavaScript plotting library, such as plotly, to display it. Plots that work well in the browser are line plots and scatter plots for any data, and many other plot types for small data and supported grid types. As web technology matures, more plots could be supported for rendering in the browser. Rendering can be done either on the client or on the server, and we expect that the best place to render will change depending on the available web technology, data-transfer costs, server-management costs and the value provided to users. We intend to provide a flexible solution that allows for both client- and server-side rendering and a meaningful way to choose between the two. We provide a web-based user interface called vCdat which uses Vcs.js as its visualization library. Our paper will discuss the principles guiding our design choices for Vcs.js, present our design in detail and show a sample usage of the library.
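
    For orientation, here is a minimal sketch of the server-side VCS Python path that Vcs.js mirrors. It assumes UV-CDAT is installed and that a local NetCDF file "clt.nc" containing a variable "clt" exists (both placeholders); the canvas calls follow the commonly documented VCS Python API, so treat the specifics as assumptions.

        import cdms2
        import vcs

        f = cdms2.open("clt.nc")                # placeholder dataset
        clt = f("clt")                          # read the variable
        canvas = vcs.init()                     # create a VCS canvas
        canvas.plot(clt, "default", "boxfill")  # template name, graphics method
        canvas.png("clt_plot")                  # write the rendered image to PNG
        f.close()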

  6. DEVELOPING THE NATIONAL GEOTHERMAL DATA SYSTEM ADOPTION OF CKAN FOR DOMESTIC & INTERNATIONAL DATA DEPLOYMENT

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Clark, Ryan J.; Kuhmuench, Christoph; Richard, Stephen M.

    2013-03-01

    The National Geothermal Data System (NGDS) Design and Testing Team is developing NGDS software currently referred to as the “NGDS Node-In-A-Box”. The software targets organizations or individuals who wish to host at least one of the following: an online repository containing resources for the NGDS; an online site for creating metadata to register resources with the NGDS; NGDS-conformant Web APIs that enable access to NGDS data (e.g., WMS, WFS, WCS); NGDS-conformant Web APIs that support discovery of NGDS resources via catalog service (e.g., CSW); and a web site that supports discovery and understanding of NGDS resources. A number of different frameworks for development of this online application were reviewed. The NGDS Design and Testing Team determined to use CKAN (http://ckan.org/) because it provides the closest match between out-of-the-box functionality and NGDS node-in-a-box requirements. To achieve the NGDS vision and goals, this software development project has been initiated to provide NGDS data consumers with a highly functional interface to access the system, and to ease the burden on data providers who wish to publish data in the system. It is important to note that this software package constitutes a reference implementation. The NGDS software is based on open standards, which means other server software can make resources available, and other client applications can utilize NGDS data. A number of international organizations have expressed interest in the NGDS approach to data access. The CKAN node implementation can provide a simple path for deploying this technology in other settings.
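
    One practical consequence of building on CKAN is that a node inherits CKAN's standard action API for discovery. The sketch below uses the package_search action, which is part of the published CKAN 2.x API; the host name is a placeholder for an NGDS node.

        import requests

        def search_datasets(base_url, query):
            """Search a CKAN catalog and return the 'result' block as a dict."""
            resp = requests.get(
                f"{base_url}/api/3/action/package_search",
                params={"q": query, "rows": 5},
                timeout=30,
            )
            resp.raise_for_status()
            return resp.json()["result"]

        # Placeholder node URL; any CKAN-based NGDS node would answer similarly.
        result = search_datasets("http://ngds.example.org", "well temperature")
        print(result["count"], "matching datasets")
        for pkg in result["results"]:
            print("-", pkg["title"])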

  7. Satellite Estimation of Fractional Cover in Several California Specialty Crops

    NASA Technical Reports Server (NTRS)

    Johnson, Lee; Cahn, Michael; Rosevelt, Carolyn; Guzman, Alberto; Farrara, Barry; Melton, Forrest S.

    2016-01-01

    Past research in California and elsewhere has revealed strong relationships between satellite NDVI, photosynthetically active vegetation fraction (Fc), and crop evapotranspiration (ETc). Estimation of ETc can support efficiency of irrigation practice, which enhances water security and may mitigate nitrate leaching. The U.C. Cooperative Extension previously developed the CropManage (CM) web application for evaluation of crop water requirements and irrigation scheduling for several high-value specialty crops. CM currently uses empirical equations to predict daily Fc as a function of crop type, planting date and expected harvest date. The Fc prediction is transformed to a fraction of reference ET and combined with reference data from the California Irrigation Management Information System to estimate daily ETc. In the current study, atmospherically corrected Landsat NDVI data were compared with in-situ Fc estimates on several crops in the Salinas Valley during 2011-2014. The satellite data were observed on the day of ground collection or were linearly interpolated across no more than an 8-day revisit period. Results will be presented for lettuce, spinach, celery, broccoli, cauliflower, cabbage, peppers, and strawberry. An application programming interface (API) allows CM and other clients to automatically retrieve NDVI and associated data from NASA's Satellite Irrigation Management Support (SIMS) web service. The SIMS API allows queries both by individual points and by user-defined polygons, and provides data for individual days or annual time series. Updates to the CM web app will convert these NDVI data to Fc on a crop-specific basis. The satellite observations are expected to play a support role in the Salinas Valley, and may eventually serve as a primary data source as CM is extended to crop systems or regions where Fc is less predictable.

  8. Satellite Estimation of Fractional Cover in Several California Specialty Crops

    NASA Astrophysics Data System (ADS)

    Johnson, L.; Cahn, M.; Rosevelt, C.; Guzman, A.; Lockhart, T.; Farrara, B.; Melton, F. S.

    2016-12-01

    Past research in California and elsewhere has revealed strong relationships between satellite NDVI, photosynthetically active vegetation fraction (Fc), and crop evapotranspiration (ETc). Estimation of ETc can support efficiency of irrigation practice, which enhances water security and may mitigate nitrate leaching. The U.C. Cooperative Extension previously developed the CropManage (CM) web application for evaluation of crop water requirements and irrigation scheduling for several high-value specialty crops. CM currently uses empirical equations to predict daily Fc as a function of crop type, planting date and expected harvest date. The Fc prediction is transformed to a fraction of reference ET and combined with reference data from the California Irrigation Management Information System to estimate daily ETc. In the current study, atmospherically corrected Landsat NDVI data were compared with in-situ Fc estimates on several crops in the Salinas Valley during 2011-2014. The satellite data were observed on the day of ground collection or were linearly interpolated across no more than an 8-day revisit period. Results will be presented for lettuce, spinach, celery, broccoli, cauliflower, cabbage, peppers, and strawberry. An application programming interface (API) allows CM and other clients to automatically retrieve NDVI and associated data from NASA's Satellite Irrigation Management Support (SIMS) web service. The SIMS API allows queries both by individual points and by user-defined polygons, and provides data for individual days or annual time series. Updates to the CM web app will convert these NDVI data to Fc on a crop-specific basis. The satellite observations are expected to play a support role in the Salinas Valley, and may eventually serve as a primary data source as CM is extended to crop systems or regions where Fc is less predictable.
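
    A client query against such an NDVI service might look like the hedged sketch below. The URL and parameter names are hypothetical stand-ins, since the actual SIMS interface is defined by its own documentation; only the point-or-polygon, JSON-timeseries shape of the exchange is taken from the abstract above.

        import requests

        def ndvi_timeseries(lat, lon, year):
            """Fetch an NDVI time series for one point (placeholder interface)."""
            resp = requests.get(
                "https://sims.example.nasa.gov/api/ndvi",       # placeholder URL
                params={"lat": lat, "lon": lon, "year": year},  # placeholder params
                timeout=60,
            )
            resp.raise_for_status()
            return resp.json()  # e.g. [{"date": ..., "ndvi": ...}, ...]

        series = ndvi_timeseries(36.67, -121.65, 2014)  # a Salinas Valley point
        print(len(series), "observations")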

  9. A Fine-Grained API Link Prediction Approach Supporting CMDA Mashup Recommendation

    NASA Astrophysics Data System (ADS)

    Zhang, J.; Bao, Q.; Lee, T. J.; Ramachandran, R.; Lee, S.; Pan, L.; Gatlin, P. N.; Maskey, M.

    2017-12-01

    Service (API) discovery and recommendation is key to the wide adoption of service-oriented architecture and service-oriented software engineering. Service recommendation typically relies on service linkage prediction calculated from the semantic distances (or similarities) among services, based on their collections of inherent attributes. Given a specific context (mashup goal), however, different attributes may contribute differently to a service linkage. In this work, instead of training one model for all attributes as a whole, a novel approach is presented that simultaneously trains separate models for individual attributes. Our contributions are three-fold. First, we have developed an attribute-level data model featuring scalability and extensibility. We have extended the Multiplicative Attribute Graph (MAG) model to represent node profiles featuring rich categorical attributes, while relaxing its constraint of requiring a priori knowledge of predefined attributes. Latent Dirichlet Allocation (LDA) is leveraged to dynamically identify attributes, and a multiple-Gaussian fit is applied to find globally optimal values. Second, we have seamlessly integrated the latent relationships between API attributes with the observed network structure based on historical API usage data. Such a layered information model enables us to predict the probability of a link between two APIs based on their attribute link affinities, which carry a variety of information including metadata, semantic data, historical usage data, and crowdsourced user comments and annotations. Third, we have developed a fine-grained, context-aware mashup-API recommendation technique. On top of the individual models trained for separate attributes, a dedicated layer is trained to represent the latent attribute distribution for a given mashup purpose, i.e., the sensitivity of attributes to context. Thus, given the description of an intended mashup, the attributes sensitive to the goal are identified, and the corresponding attribute models are exploited to compute the probability of API linkages under that context. Such a layered model increases search accuracy.
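
    A toy sketch of the attribute-level linkage idea: in a MAG-style model each attribute contributes an affinity term, and context sensitivity can be modeled as per-attribute weights. All matrices, attribute meanings, and weights below are invented for illustration; this is not the authors' trained model.

    ```python
    import numpy as np

    # Toy MAG-style link probability: each categorical attribute i has an
    # affinity matrix affinities[i][a, b] giving the link propensity when one
    # API has category a and the other category b. Values are illustrative.
    affinities = [
        np.array([[0.9, 0.2], [0.2, 0.7]]),   # attribute 0 (e.g., data format)
        np.array([[0.8, 0.4], [0.4, 0.6]]),   # attribute 1 (e.g., protocol style)
    ]

    def link_probability(u_attrs, v_attrs, weights):
        """Combine per-attribute affinities; weights model context sensitivity."""
        p = 1.0
        for aff, ua, va, w in zip(affinities, u_attrs, v_attrs, weights):
            # attributes sensitive to the mashup goal (w near 1) dominate
            p *= aff[ua, va] ** w
        return p

    # Two APIs described by two binary attributes; a mashup context that
    # cares mostly about attribute 0.
    print(link_probability((0, 1), (0, 0), weights=(1.0, 0.2)))
    ```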

  10. PathJam: a new service for integrating biological pathway information.

    PubMed

    Glez-Peña, Daniel; Reboiro-Jato, Miguel; Domínguez, Rubén; Gómez-López, Gonzalo; Pisano, David G; Fdez-Riverola, Florentino

    2010-10-28

    Biological pathways are crucial to much of the scientific research today, including the study of specific biological processes related to human diseases. PathJam is a new, comprehensive, and freely accessible web-server application integrating scattered human pathway annotation from several public sources. The tool has been designed both (i) to be intuitive for wet-lab users, providing statistical enrichment analysis of pathway annotations, and (ii) to support the development of new integrative pathway applications. PathJam's unique features and advantages include interactive graphs linking pathways and genes of interest, downloadable results in fully compatible formats, GSEA-compatible output files, and a standardized RESTful API.

  11. Wilber 3: A Python-Django Web Application For Acquiring Large-scale Event-oriented Seismic Data

    NASA Astrophysics Data System (ADS)

    Newman, R. L.; Clark, A.; Trabant, C. M.; Karstens, R.; Hutko, A. R.; Casey, R. E.; Ahern, T. K.

    2013-12-01

    Since 2001, the IRIS Data Management Center (DMC) WILBER II system has provided a convenient web-based interface for locating seismic data related to a particular event and requesting a subset of that data for download. Since its launch, both the scale of available data and the technology of web-based applications have developed significantly. Wilber 3 is a ground-up redesign that leverages a number of public and open-source projects to provide an event-oriented data request interface with a high level of interactivity and scalability for multiple data types. Wilber 3 uses the IRIS/Federation of Digital Seismograph Networks (FDSN) web services for event data, metadata, and time-series data. Combining a carefully optimized Google Map with the highly scalable SlickGrid data grid, the Wilber 3 client-side interface can load tens of thousands of events or networks/stations in a single request and provide instantly responsive browsing, sorting, and filtering of event data and metadata in the web browser, without further reliance on the data service. The server side of Wilber 3 is a Python-Django application, one of over a dozen developed in the last year at IRIS, whose common framework, components, and administrative overhead represent a massive savings in developer resources. Requests for assembled datasets, which may include thousands of data channels and gigabytes of data, are queued and executed using the Celery distributed Python task scheduler, giving Wilber 3 the ability to operate in parallel across a large number of nodes.
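
    The queued-request pattern in the last sentence is easy to sketch. Below is a minimal, hypothetical Celery task illustrating it; the broker URL, task body, and channel names are placeholders, not the actual Wilber 3 code.

    ```python
    from celery import Celery

    # Minimal sketch of the queued-request pattern; broker URL and task
    # internals are placeholders, not the actual Wilber 3 configuration.
    app = Celery("wilber_sketch", broker="redis://localhost:6379/0")

    @app.task
    def assemble_dataset(event_id, channels):
        """Fetch and package time-series data for one event (stubbed)."""
        # A real deployment would call FDSN dataselect web services for each
        # channel and bundle the results into a downloadable archive.
        return {"event": event_id, "n_channels": len(channels), "status": "packaged"}

    # A web view would enqueue work instead of blocking the request:
    # assemble_dataset.delay("us7000abcd", ["IU.ANMO.00.BHZ", "II.PFO.00.BHZ"])
    ```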

  12. Interfaces to PeptideAtlas: a case study of standard data access systems

    PubMed Central

    Handcock, Jeremy; Robinson, Thomas; Deutsch, Eric W.; Boyle, John

    2012-01-01

    Access to public data sets is important to the scientific community as a resource to develop new experiments or validate new data. Projects such as PeptideAtlas, Ensembl and The Cancer Genome Atlas (TCGA) offer both access to public data and a repository to share their own data. Access to these data sets is often provided through a web page form and a web service API. Access technologies based on web protocols (e.g., HTTP) have been in use for over a decade and are widely adopted across the industry for a variety of functions (e.g., search, commercial transactions, and social media). Each architecture adapts these technologies to provide users with tools to access and share data. Both commonly used web service technologies (e.g., REST and SOAP) and custom-built solutions over HTTP are utilized to provide access to research data. Providing multiple access points ensures that the community can access the data in the simplest and most effective manner for their particular needs. This article examines three common access mechanisms for web-accessible data: BioMart, caBIG, and Google Data Sources. These are illustrated by implementing each over the PeptideAtlas repository and reviewed for their suitability based on specific usages common to research. BioMart, Google Data Sources, and caBIG are each suitable for certain uses. The tradeoffs made in the development of each technology depend on the uses it was designed for (e.g., security versus speed). This means that an understanding of specific requirements and tradeoffs is necessary before selecting the access technology. PMID:22941959

  13. Dynamic Policy-Driven Quality of Service in Service-Oriented Information Management Systems

    DTIC Science & Technology

    2011-01-01

    both DiffServ and IntServ network QoS mechanisms. Wang et al. [48] provide middleware APIs to shield applications from directly interacting with...complex network QoS mechanism APIs. Middleware frameworks transparently converted the specified application QoS requirements into lower-level network...QoS mechanism APIs and provided network QoS assurances. Deployment-time resource allocation. Other prior work has focused on deploying applications

  14. Using Web Speech Technology with Language Learning Applications

    ERIC Educational Resources Information Center

    Daniels, Paul

    2015-01-01

    In this article, the author presents the history of human-to-computer interaction based upon the design of sophisticated computerized speech recognition algorithms. Advancements such as the arrival of cloud-based computing and software like Google's Web Speech API allow anyone with an Internet connection and the Chrome browser to take advantage of…

  15. Fast Deployment on the Cloud of Integrated Postgres, API and a Jupyter Notebook for Geospatial Collaboration

    NASA Astrophysics Data System (ADS)

    Fatland, R.; Tan, A.; Arendt, A. A.

    2016-12-01

    We describe a Python-based implementation of a PostgreSQL database accessed through an Application Programming Interface (API) hosted on the Amazon Web Services public cloud. The data are geospatial and concern hydrological model results in the glaciated catchment basins of southcentral and southeast Alaska. The implementation, however, is intended to generalize to other forms of geophysical data, particularly data intended to be shared across a collaborative team or publicly. An example moderate-sized dataset is provided together with the code base and a complete installation tutorial on GitHub. An enthusiastic scientist with some familiarity with software installation can replicate the example system in two hours. The installation includes the database, the API, a test client, and a supporting Jupyter Notebook, oriented towards Python 3 and markup text so as to comprise an executable paper. Installing 'on the cloud' often engenders discussion of cloud cost and safety; by treating the process as a 'cookbook' exercise, we hope first to demonstrate the feasibility of the proposition. A discussion of cost and data security is provided in this presentation and in the accompanying tutorial/documentation. This geospatial data system case study is part of a larger effort at the University of Washington to enable research teams to take advantage of the public cloud to meet challenges in data management and analysis.
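
    For concreteness, here is a minimal sketch of the database-behind-an-API pattern this record describes, using Flask and psycopg2; the table, columns, and route are hypothetical stand-ins for the tutorial's actual schema.

    ```python
    import os
    import psycopg2
    from flask import Flask, jsonify, request

    # Minimal sketch of a geospatial REST endpoint in front of PostgreSQL;
    # table and column names are hypothetical, not the published tutorial's.
    app = Flask(__name__)

    def get_conn():
        return psycopg2.connect(os.environ.get("DATABASE_URL", "dbname=hydro"))

    @app.route("/discharge")
    def discharge():
        """Return model discharge values for one catchment basin as JSON."""
        basin = request.args.get("basin", "wolverine")
        with get_conn() as conn, conn.cursor() as cur:
            cur.execute(
                "SELECT day, discharge_m3s FROM model_runs WHERE basin = %s ORDER BY day",
                (basin,),
            )
            rows = cur.fetchall()
        return jsonify([{"day": str(d), "discharge_m3s": q} for d, q in rows])

    if __name__ == "__main__":
        app.run(port=8080)
    ```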

  16. Dynamic Policy Evaluation for Containing Network Attacks (DEFCN)

    DTIC Science & Technology

    2005-03-01

    API reads policy information from the target user's ".ssh" directory and applies those policies to determine whether remote login is allowed to a...types of events that can be controlled by the threshold detectors and reported by the GAA-API include the number of failed login attempts within a given...other uses of the system. The Emerald architecture [2] includes a data-collection module integrated with the Apache Web server. The module extracts the request

  17. Internet SCADA Utilizing API's as Data Source

    NASA Astrophysics Data System (ADS)

    Robles, Rosslin John; Kim, Haeng-Kon; Kim, Tai-Hoon

    An application programming interface (API) is an interface implemented by a software program that enables it to interact with other software. Many companies provide free API services that can be utilized in control systems. SCADA is an example of a control system: it collects data from various sensors at a factory, plant, or other remote location and then sends the data to a central computer that manages and controls it. In this paper, we designed a scheme for weather conditions in an Internet SCADA environment utilizing data from external API services. The scheme was designed to double-check the weather information in SCADA.
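
    The double-checking scheme reduces to comparing a local sensor value against an external API value. A minimal sketch, assuming a hypothetical weather endpoint and JSON layout (neither is a specific vendor's API):

    ```python
    import requests

    # Hypothetical endpoint and response fields, for illustration only.
    WEATHER_API = "https://api.example-weather.com/v1/current"

    def cross_check_temperature(scada_reading_c, station_id, tolerance_c=3.0):
        """Compare a SCADA temperature sensor against an external source."""
        resp = requests.get(WEATHER_API, params={"station": station_id}, timeout=10)
        resp.raise_for_status()
        external_c = resp.json()["temperature_c"]
        # Flag the sensor if it disagrees with the external source beyond tolerance.
        return abs(scada_reading_c - external_c) <= tolerance_c, external_c
    ```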

  18. The Proteins API: accessing key integrated protein and genome information

    PubMed Central

    Antunes, Ricardo; Alpi, Emanuele; Gonzales, Leonardo; Liu, Wudong; Luo, Jie; Qi, Guoying; Turner, Edd

    2017-01-01

    Abstract The Proteins API provides searching and programmatic access to protein and associated genomics data, such as curated protein sequence positional annotations from UniProtKB, as well as mapped variation and proteomics data from large-scale data sources (LSS). Using the coordinates service, researchers are able to retrieve the genomic sequence coordinates for proteins in UniProtKB. Notably, the LSS genomics and proteomics data for UniProt proteins are available programmatically only through this service. A Swagger UI has been implemented to provide documentation and an interface that allows users with little or no programming experience to 'talk' to the services, quickly and easily formulate queries, and obtain dynamically generated source code for popular programming languages such as Java, Perl, Python and Ruby. Search results are returned as standard JSON, XML or GFF data objects. The Proteins API is a scalable, reliable, fast, easy-to-use set of RESTful services that provides a broad protein information resource, letting users ask questions based upon their field of expertise and gain an integrated overview of the protein annotations available to aid their understanding of proteins in biological processes. The Proteins API is available at http://www.ebi.ac.uk/proteins/api/doc. PMID:28383659

  19. The Proteins API: accessing key integrated protein and genome information.

    PubMed

    Nightingale, Andrew; Antunes, Ricardo; Alpi, Emanuele; Bursteinas, Borisas; Gonzales, Leonardo; Liu, Wudong; Luo, Jie; Qi, Guoying; Turner, Edd; Martin, Maria

    2017-07-03

    The Proteins API provides searching and programmatic access to protein and associated genomics data, such as curated protein sequence positional annotations from UniProtKB, as well as mapped variation and proteomics data from large-scale data sources (LSS). Using the coordinates service, researchers are able to retrieve the genomic sequence coordinates for proteins in UniProtKB. Notably, the LSS genomics and proteomics data for UniProt proteins are available programmatically only through this service. A Swagger UI has been implemented to provide documentation and an interface that allows users with little or no programming experience to 'talk' to the services, quickly and easily formulate queries, and obtain dynamically generated source code for popular programming languages such as Java, Perl, Python and Ruby. Search results are returned as standard JSON, XML or GFF data objects. The Proteins API is a scalable, reliable, fast, easy-to-use set of RESTful services that provides a broad protein information resource, letting users ask questions based upon their field of expertise and gain an integrated overview of the protein annotations available to aid their understanding of proteins in biological processes. The Proteins API is available at http://www.ebi.ac.uk/proteins/api/doc. © The Author(s) 2017. Published by Oxford University Press on behalf of Nucleic Acids Research.
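
    A short sketch of calling the service from Python; the endpoint path follows the documentation URL given in the record, though the response fields shown are illustrative and worth verifying against the live docs.

    ```python
    import requests

    # Proteins API base per the record's documentation URL; field names in the
    # JSON response are assumptions to verify against the Swagger docs.
    BASE = "https://www.ebi.ac.uk/proteins/api"

    def fetch_protein(accession):
        resp = requests.get(
            f"{BASE}/proteins/{accession}",
            headers={"Accept": "application/json"},
            timeout=30,
        )
        resp.raise_for_status()
        return resp.json()

    entry = fetch_protein("P05067")  # amyloid-beta precursor protein
    print(entry.get("id"), len(entry.get("features", [])), "annotated features")
    ```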

  20. deepTools2: a next generation web server for deep-sequencing data analysis.

    PubMed

    Ramírez, Fidel; Ryan, Devon P; Grüning, Björn; Bhardwaj, Vivek; Kilpert, Fabian; Richter, Andreas S; Heyne, Steffen; Dündar, Friederike; Manke, Thomas

    2016-07-08

    We present an update to our Galaxy-based web server for processing and visualizing deeply sequenced data. Its core tool set, deepTools, allows users to perform complete bioinformatic workflows ranging from quality control and normalization of aligned reads to integrative analyses, including clustering and visualization approaches. Since we first described our deepTools Galaxy server in 2014, we have implemented new solutions for many requests from the community and our users. Here, we introduce significant enhancements and new tools to further improve data visualization and interpretation. deepTools continues to be open to all users and freely available as a web service at deeptools.ie-freiburg.mpg.de. The new deepTools2 suite can be easily deployed within any Galaxy framework via the toolshed repository, and we also provide source code for command-line usage under Linux and Mac OS X. A public and documented API for access to deepTools functionality is also available. © The Author(s) 2016. Published by Oxford University Press on behalf of Nucleic Acids Research.

  1. SCEAPI: A unified Restful Web API for High-Performance Computing

    NASA Astrophysics Data System (ADS)

    Rongqiang, Cao; Haili, Xiao; Shasha, Lu; Yining, Zhao; Xiaoning, Wang; Xuebin, Chi

    2017-10-01

    The development of scientific computing is increasingly moving to collaborative web and mobile applications. All these applications need high-quality programming interfaces for accessing heterogeneous computing resources consisting of clusters, grid computing, or cloud computing. In this paper, we introduce our high-performance computing environment, which integrates computing resources from 16 HPC centers across China. We then present a bundle of web services called SCEAPI and describe how it can be used to access HPC resources over the HTTP or HTTPS protocols. We discuss SCEAPI from several aspects including architecture, implementation, and security, and address specific challenges in designing compatible interfaces and protecting sensitive data. We describe the functions of SCEAPI, including authentication, file transfer, and job management (creating, submitting, and monitoring jobs), and show how to use SCEAPI with minimal effort. Finally, we discuss how to quickly exploit more HPC resources for the ATLAS experiment by implementing a custom ARC compute element based on SCEAPI; our work shows that SCEAPI is an easy-to-use and effective solution for extending opportunistic HPC resources.
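
    The job life cycle described (authenticate, submit, monitor) might look roughly like the following; every route, payload field, and the token scheme here are guesses for illustration, not SCEAPI's published interface.

    ```python
    import requests

    # Hypothetical sketch of an authenticate/submit/poll cycle; SCEAPI's real
    # routes, payloads, and token scheme may differ.
    BASE = "https://sceapi.example.cn/api/v1"

    session = requests.Session()
    token = session.post(f"{BASE}/auth",
                         json={"user": "alice", "password": "..."},
                         timeout=30).json()["token"]
    session.headers["Authorization"] = f"Bearer {token}"

    job = session.post(f"{BASE}/jobs", json={
        "app": "atlas-sim",          # application registered at an HPC center
        "args": ["--events", "1000"],
        "inputs": ["input.tar.gz"],  # previously uploaded via a file service
    }, timeout=30).json()

    status = session.get(f"{BASE}/jobs/{job['id']}", timeout=30).json()["state"]
    print(job["id"], status)
    ```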

  2. Evaluating a NoSQL Alternative for Chilean Virtual Observatory Services

    NASA Astrophysics Data System (ADS)

    Antognini, J.; Araya, M.; Solar, M.; Valenzuela, C.; Lira, F.

    2015-09-01

    Currently, the standards and protocols for data access in the Virtual Observatory architecture (DAL) are generally implemented with relational databases based on SQL. In particular, the Astronomical Data Query Language (ADQL), the language used by the IVOA to represent queries to VO services, was created to satisfy the various data access protocols, such as Simple Cone Search. ADQL is based on SQL92 and has extra functionality implemented using PgSphere. An emergent alternative to SQL is the family of so-called NoSQL databases, which can be classified into several categories such as column, document, key-value, graph, and object stores, each one recommended for different scenarios. Their notable characteristics include schema-free design, easy replication support, simple APIs, and Big Data support. The Chilean Virtual Observatory (ChiVO) is developing a functional prototype based on the IVOA architecture, with the following relevant factors: performance, scalability, flexibility, complexity, and functionality. Currently, it is very difficult to compare these factors due to a lack of alternatives. The objective of this paper is to compare NoSQL alternatives with SQL through the implementation of a RESTful web API that satisfies ChiVO's needs: a SESAME-style name resolver for the data from ALMA. We therefore propose a test scenario by configuring a NoSQL database with data from different sources and evaluating the feasibility of creating a Simple Cone Search service and its performance. This comparison will help pave the way for the application of Big Data databases in the Virtual Observatory.
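
    At its core, a Simple Cone Search is an angular-separation filter around a sky position, which is straightforward to express directly; the catalogue values below are made-up test data.

    ```python
    import numpy as np

    def angular_sep_deg(ra1, dec1, ra2, dec2):
        """Great-circle separation in degrees (spherical law of cosines)."""
        ra1, dec1, ra2, dec2 = map(np.radians, (ra1, dec1, ra2, dec2))
        cos_sep = (np.sin(dec1) * np.sin(dec2)
                   + np.cos(dec1) * np.cos(dec2) * np.cos(ra1 - ra2))
        return np.degrees(np.arccos(np.clip(cos_sep, -1.0, 1.0)))

    # Made-up catalogue positions, not ALMA metadata.
    catalogue_ra = np.array([83.82, 83.90, 10.68])   # degrees
    catalogue_dec = np.array([-5.39, -5.45, 41.27])
    sep = angular_sep_deg(83.82, -5.39, catalogue_ra, catalogue_dec)
    print(np.where(sep <= 0.5)[0])  # indices inside a 0.5-degree cone
    ```

    Whether SQL with PgSphere or a NoSQL store evaluates this filter faster over large tables is precisely the comparison the paper proposes.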

  3. A Responsive Client for Distributed Visualization

    NASA Astrophysics Data System (ADS)

    Bollig, E. F.; Jensen, P. A.; Erlebacher, G.; Yuen, D. A.; Momsen, A. R.

    2006-12-01

    As grids, web services, and distributed computing continue to gain popularity in the scientific community, demand for virtual laboratories likewise increases. Today organizations such as the Virtual Laboratory for Earth and Planetary Sciences (VLab) are dedicated to developing web-based portals to perform various simulations remotely while abstracting away details of the underlying computation. Two of the biggest challenges in portal-based computing are fast visualization and smooth interrogation without overtaxing client resources. In response to this challenge, we have expanded on our previous data storage strategy and thick-client visualization scheme [1] to develop a client-centric distributed application that utilizes remote visualization of large datasets and makes use of the local graphics processor for improved interactivity. Rather than waste precious client resources on visualization, a combination of 3D graphics and 2D server bitmaps is used to simulate the look and feel of local rendering. Java Web Start and Java Bindings for OpenGL enable install-on-demand functionality as well as low-level access to client graphics on all platforms. Powerful visualization services based on VTK and auto-generated by the WATT compiler [2] are accessible through a standard web API. Data are permanently stored on compute nodes while separate visualization nodes fetch data requested by clients, caching it locally to prevent unnecessary transfers. We will demonstrate application capabilities in the context of simulated charge density visualization within the VLab portal. In addition, we will address generalizations of our application to interact with a wider number of WATT services, as well as performance bottlenecks. [1] Ananthuni, R., Karki, B.B., Bollig, E.F., da Silva, C.R.S., Erlebacher, G., "A Web-Based Visualization and Reposition Scheme for Scientific Data," In Press, Proceedings of the 2006 International Conference on Modeling Simulation and Visualization Methods (MSV'06) (2006). [2] Jensen, P.A., Yuen, D.A., Erlebacher, G., Bollig, E.F., Kigelman, D.G., Shukh, E.A., Automated Generation of Web Services for Visualization Toolkits, Eos Trans. AGU, 86(52), Fall Meet. Suppl., Abstract IN42A-06, 2005.

  4. The eNanoMapper database for nanomaterial safety information

    PubMed Central

    Chomenidis, Charalampos; Doganis, Philip; Fadeel, Bengt; Grafström, Roland; Hardy, Barry; Hastings, Janna; Hegi, Markus; Jeliazkov, Vedrin; Kochev, Nikolay; Kohonen, Pekka; Munteanu, Cristian R; Sarimveis, Haralambos; Smeets, Bart; Sopasakis, Pantelis; Tsiliki, Georgia; Vorgrimmler, David; Willighagen, Egon

    2015-01-01

    Summary Background: The NanoSafety Cluster, a cluster of projects funded by the European Commission, identified the need for a computational infrastructure for toxicological data management of engineered nanomaterials (ENMs). Ontologies, open standards, and interoperable designs were envisioned to empower a harmonized approach to European research in nanotechnology. This setting provides a number of opportunities and challenges in the representation of nanomaterials data and the integration of ENM information originating from diverse systems. Within this cluster, eNanoMapper works towards supporting the collaborative safety assessment for ENMs by creating a modular and extensible infrastructure for data sharing, data analysis, and building computational toxicology models for ENMs. Results: The eNanoMapper database solution builds on the previous experience of the consortium partners in supporting diverse data through flexible data storage, open source components and web services. We have recently described the design of the eNanoMapper prototype database along with a summary of challenges in the representation of ENM data and an extensive review of existing nano-related data models, databases, and nanomaterials-related entries in chemical and toxicogenomic databases. This paper continues with a focus on the database functionality exposed through its application programming interface (API), and its use in visualisation and modelling. Considering the preferred community practice of using spreadsheet templates, we developed a configurable spreadsheet parser facilitating user-friendly data preparation and data upload. We further present a web application able to retrieve the experimental data via the API and analyze it with multiple data preprocessing and machine learning algorithms. Conclusion: We demonstrate how the eNanoMapper database is used to import and publish online ENM and assay data from several data sources, how the “representational state transfer” (REST) API enables building user-friendly interfaces and graphical summaries of the data, and how these resources facilitate the modelling of reproducible quantitative structure–activity relationships for nanomaterials (NanoQSAR). PMID:26425413

  5. The eNanoMapper database for nanomaterial safety information.

    PubMed

    Jeliazkova, Nina; Chomenidis, Charalampos; Doganis, Philip; Fadeel, Bengt; Grafström, Roland; Hardy, Barry; Hastings, Janna; Hegi, Markus; Jeliazkov, Vedrin; Kochev, Nikolay; Kohonen, Pekka; Munteanu, Cristian R; Sarimveis, Haralambos; Smeets, Bart; Sopasakis, Pantelis; Tsiliki, Georgia; Vorgrimmler, David; Willighagen, Egon

    2015-01-01

    The NanoSafety Cluster, a cluster of projects funded by the European Commission, identified the need for a computational infrastructure for toxicological data management of engineered nanomaterials (ENMs). Ontologies, open standards, and interoperable designs were envisioned to empower a harmonized approach to European research in nanotechnology. This setting provides a number of opportunities and challenges in the representation of nanomaterials data and the integration of ENM information originating from diverse systems. Within this cluster, eNanoMapper works towards supporting the collaborative safety assessment for ENMs by creating a modular and extensible infrastructure for data sharing, data analysis, and building computational toxicology models for ENMs. The eNanoMapper database solution builds on the previous experience of the consortium partners in supporting diverse data through flexible data storage, open source components and web services. We have recently described the design of the eNanoMapper prototype database along with a summary of challenges in the representation of ENM data and an extensive review of existing nano-related data models, databases, and nanomaterials-related entries in chemical and toxicogenomic databases. This paper continues with a focus on the database functionality exposed through its application programming interface (API), and its use in visualisation and modelling. Considering the preferred community practice of using spreadsheet templates, we developed a configurable spreadsheet parser facilitating user-friendly data preparation and data upload. We further present a web application able to retrieve the experimental data via the API and analyze it with multiple data preprocessing and machine learning algorithms. We demonstrate how the eNanoMapper database is used to import and publish online ENM and assay data from several data sources, how the "representational state transfer" (REST) API enables building user-friendly interfaces and graphical summaries of the data, and how these resources facilitate the modelling of reproducible quantitative structure-activity relationships for nanomaterials (NanoQSAR).

  6. Digital Earth Watch (DEW): How Mobile Apps Are Paving The Way Towards A Federated Web-Services Architecture For Citizen Science

    NASA Astrophysics Data System (ADS)

    Carrera, F.; Schloss, A. L.; Guerin, S.; Beaudry, J.; Pickle, J.

    2011-12-01

    Dozens of web-based initiatives allow citizens to provide information to programs that monitor the health of our environment. A concerned citizen can participate on-line as a weather "spotter", provide important phenological information to national databases, update bird counts in the area, record the freezing of ponds, and much more. Many of these programs are developing mobile apps as companion tools to their web sites. Our group was involved in the development of one such companion app as an adjunct to the Picture Post project web site. Digital Earth Watch (DEW) and the Picture Post network support environmental monitoring through repeat digital photography and satellite imagery. A Picture Post is an eight-sided platform on a stand-alone post for taking a panoramic series of photographs. By taking pictures on a regular basis at Picture Post sites and by sharing these pictures on the program's web site (housed at the University of New Hampshire), citizen scientists are creating a photographic library of change over time in their local area and contributing to national monitoring programs. Our DEW Android application simplifies participation by allowing users to upload pictures instantly from their smart phone. The app also removes the constraint of the physical picture post by allowing users to create a virtual post anywhere in the world. Posts have been set up to monitor trails, forests, water, wetlands, gardens and landscapes. The app uses the phone's GPS to position the virtual post in its geographic location and guides the user through the orientations thanks to the internal accelerometers and compass. To aid in the before-and-after comparison of images taken from the same orientation, the DEW app displays an "onionskin" of the prior image overlaid onto the camera viewfinder. With the transparent onionskin as a guide, the user can align the images more accurately, thus allowing differences between pictures to be detectable and measurable. The app interacts with the UNH server via APIs (Application Programming Interfaces) that were created to allow bi-directional machine-to-machine interaction between the mobile device and the web site. Thus, the principal functions that a user can perform on the web site, such as finding post sites on a map and viewing and adding picture sets, are available on the smartphone. The development of the APIs makes it now possible not only to communicate with our own mobile app but, more importantly, opens the door for other computer systems to directly interact with our server. Our ongoing discussions with the National Phenology Network and Project Budburst have highlighted the potential (and perhaps the need) for the creation of a distributed web-service architecture whereby each national program exposes its key functionalities not only to its own mobile phone apps but also to other organizations, in a federated system of servers, all supporting citizen-based digital earth watch programs.

  7. The Ruby UCSC API: accessing the UCSC genome database using Ruby.

    PubMed

    Mishima, Hiroyuki; Aerts, Jan; Katayama, Toshiaki; Bonnal, Raoul J P; Yoshiura, Koh-ichiro

    2012-09-21

    The University of California, Santa Cruz (UCSC) genome database is among the most used sources of genomic annotation in human and other organisms. The database offers an excellent web-based graphical user interface (the UCSC genome browser) and several means for programmatic queries. A simple application programming interface (API) in a scripting language aimed at the biologist, however, was not yet available. Here, we present the Ruby UCSC API, a library to access the UCSC genome database using Ruby. The API is designed as a BioRuby plug-in and built on the ActiveRecord 3 framework for the object-relational mapping, making writing SQL statements unnecessary. The current version of the API supports databases of all organisms in the UCSC genome database including human, mammals, vertebrates, deuterostomes, insects, nematodes, and yeast. The API uses the bin index (if available) when querying for genomic intervals. The API also supports genomic sequence queries using locally downloaded *.2bit files that are not stored in the official MySQL database. The API is implemented in pure Ruby and is therefore available in different environments and with different Ruby interpreters (including JRuby). Assisted by the straightforward object-oriented design of Ruby and ActiveRecord, the Ruby UCSC API will help biologists query the UCSC genome database programmatically. The API is available through the RubyGem system. Source code and documentation are available at https://github.com/misshie/bioruby-ucsc-api/ under the Ruby license. Feedback and help are provided via the website at http://rubyucscapi.userecho.com/.

  8. The Ruby UCSC API: accessing the UCSC genome database using Ruby

    PubMed Central

    2012-01-01

    Background The University of California, Santa Cruz (UCSC) genome database is among the most used sources of genomic annotation in human and other organisms. The database offers an excellent web-based graphical user interface (the UCSC genome browser) and several means for programmatic queries. A simple application programming interface (API) in a scripting language aimed at the biologist, however, was not yet available. Here, we present the Ruby UCSC API, a library to access the UCSC genome database using Ruby. Results The API is designed as a BioRuby plug-in and built on the ActiveRecord 3 framework for the object-relational mapping, making writing SQL statements unnecessary. The current version of the API supports databases of all organisms in the UCSC genome database including human, mammals, vertebrates, deuterostomes, insects, nematodes, and yeast. The API uses the bin index (if available) when querying for genomic intervals. The API also supports genomic sequence queries using locally downloaded *.2bit files that are not stored in the official MySQL database. The API is implemented in pure Ruby and is therefore available in different environments and with different Ruby interpreters (including JRuby). Conclusions Assisted by the straightforward object-oriented design of Ruby and ActiveRecord, the Ruby UCSC API will help biologists query the UCSC genome database programmatically. The API is available through the RubyGem system. Source code and documentation are available at https://github.com/misshie/bioruby-ucsc-api/ under the Ruby license. Feedback and help are provided via the website at http://rubyucscapi.userecho.com/. PMID:22994508
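
    The two records above wrap the UCSC database for Ruby; an equivalent ad hoc interval query is possible from Python against UCSC's documented public MySQL server (host and user as published by UCSC). Table and column names below follow the public refGene schema, and for brevity this sketch skips the bin-index optimization the API uses.

    ```python
    import pymysql

    # UCSC's public read-only MySQL mirror (per UCSC documentation).
    conn = pymysql.connect(host="genome-mysql.soe.ucsc.edu", user="genome",
                           database="hg38")
    try:
        with conn.cursor() as cur:
            # Standard interval-overlap predicate: start < region_end AND
            # end > region_start. Region shown is BRCA1's hg38 locus.
            cur.execute(
                "SELECT name, chrom, txStart, txEnd FROM refGene "
                "WHERE chrom = %s AND txStart < %s AND txEnd > %s",
                ("chr17", 43125483, 43044295),
            )
            for name, chrom, start, end in cur.fetchall():
                print(name, chrom, start, end)
    finally:
        conn.close()
    ```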

  9. Physician visits and preventive care among Asian American and Pacific Islander long-term survivors of colorectal cancer, USA, 1996-2006.

    PubMed

    Steele, C Brooke; Townsend, Julie S; Tai, Eric; Thomas, Cheryll C

    2014-03-01

    Published literature on receipt of preventive healthcare services among Asian American and Pacific Islander (API) cancer survivors is scarce. We describe patterns in receipt of preventive services among API long-term colorectal cancer (CRC) survivors. Surveillance, Epidemiology, and End Results registry-Medicare data were used to identify 9,737 API and white patients who were diagnosed with CRC during 1996-2000 and who survived 5 or more years beyond their diagnoses. We examined receipt of vaccines, mammography (females), bone densitometry (females), and cholesterol screening among the survivors and how the physician specialties they visited for follow-up care correlated with services received. APIs were less likely than whites to receive mammography (52.0 vs. 69.3 %, respectively; P < 0.0001) but more likely to receive influenza vaccine, cholesterol screening, and bone densitometry. These findings remained significant in our multivariable model, except for receipt of bone densitometry. APIs visited primary care providers (PCPs) only, or both PCPs and oncologists, more frequently than whites (P < 0.0001). Women who visited both PCPs and oncologists, compared with PCPs only, were more likely to receive mammography (odds ratio = 1.40; 95 % confidence interval, 1.05-1.86). Visits to both PCPs and oncologists were associated with increased use of mammography. Although API survivors visited these specialties more frequently than white survivors, API women may need culturally appropriate outreach to increase their use of this test. Long-term cancer survivors need to be aware of recommended preventive healthcare services, as well as who will manage their primary care and cancer surveillance follow-up.

  10. Using the Semantic Web for Rapid Integration of WikiPathways with Other Biological Online Data Resources

    PubMed Central

    Waagmeester, Andra; Pico, Alexander R.

    2016-01-01

    The diversity of online resources storing biological data in different formats provides a challenge for bioinformaticians to integrate and analyse their biological data. The semantic web provides a standard to facilitate knowledge integration using statements built as triples describing a relation between two objects. WikiPathways, an online collaborative pathway resource, is now available in the semantic web through a SPARQL endpoint at http://sparql.wikipathways.org. Having biological pathways in the semantic web allows rapid integration with data from other resources that contain information about elements present in pathways using SPARQL queries. In order to convert WikiPathways content into meaningful triples we developed two new vocabularies that capture the graphical representation and the pathway logic, respectively. Each gene, protein, and metabolite in a given pathway is defined with a standard set of identifiers to support linking to several other biological resources in the semantic web. WikiPathways triples were loaded into the Open PHACTS discovery platform and are available through its Web API (https://dev.openphacts.org/docs) to be used in various tools for drug development. We combined various semantic web resources with the newly converted WikiPathways content using a variety of SPARQL query types and third-party resources, such as the Open PHACTS API. The ability to use pathway information to form new links across diverse biological data highlights the utility of integrating WikiPathways in the semantic web. PMID:27336457

  11. A browser-based event display for the CMS Experiment at the LHC using WebGL

    NASA Astrophysics Data System (ADS)

    McCauley, T.

    2017-10-01

    Modern web browsers are powerful and sophisticated applications that support an ever-wider range of uses. One such use is rendering high-quality, GPU-accelerated, interactive 2D and 3D graphics in an HTML canvas. This can be done via WebGL, a JavaScript API based on OpenGL ES. Applications delivered via the browser have several distinct benefits for the developer and user. For example, they can be implemented using well-known and well-developed technologies, while distribution and use via a browser allows for rapid prototyping and deployment and ease of installation. In addition, delivery of applications via the browser allows for easy use on mobile, touch-enabled devices such as phones and tablets. iSpy WebGL is an application for visualization of events detected and reconstructed by the CMS Experiment at the Large Hadron Collider at CERN. The first event display developed for an LHC experiment to use WebGL, iSpy WebGL is a client-side application written in JavaScript, HTML, and CSS that uses three.js, a JavaScript library built on the WebGL API. iSpy WebGL is used for monitoring of CMS detector performance, for production of images and animations of CMS collision events for the public, as a virtual reality application using Google Cardboard, and as a tool available for public education and outreach, such as in the CERN Open Data Portal and the CMS masterclasses. We describe here its design, development, and usage, as well as future plans.

  12. Using the Semantic Web for Rapid Integration of WikiPathways with Other Biological Online Data Resources.

    PubMed

    Waagmeester, Andra; Kutmon, Martina; Riutta, Anders; Miller, Ryan; Willighagen, Egon L; Evelo, Chris T; Pico, Alexander R

    2016-06-01

    The diversity of online resources storing biological data in different formats provides a challenge for bioinformaticians to integrate and analyse their biological data. The semantic web provides a standard to facilitate knowledge integration using statements built as triples describing a relation between two objects. WikiPathways, an online collaborative pathway resource, is now available in the semantic web through a SPARQL endpoint at http://sparql.wikipathways.org. Having biological pathways in the semantic web allows rapid integration with data from other resources that contain information about elements present in pathways using SPARQL queries. In order to convert WikiPathways content into meaningful triples we developed two new vocabularies that capture the graphical representation and the pathway logic, respectively. Each gene, protein, and metabolite in a given pathway is defined with a standard set of identifiers to support linking to several other biological resources in the semantic web. WikiPathways triples were loaded into the Open PHACTS discovery platform and are available through its Web API (https://dev.openphacts.org/docs) to be used in various tools for drug development. We combined various semantic web resources with the newly converted WikiPathways content using a variety of SPARQL query types and third-party resources, such as the Open PHACTS API. The ability to use pathway information to form new links across diverse biological data highlights the utility of integrating WikiPathways in the semantic web.
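
    A brief sketch of querying the endpoint mentioned above with SPARQLWrapper; the exact query path and the vocabulary prefixes follow common WikiPathways RDF usage but should be checked against the project's documentation.

    ```python
    from SPARQLWrapper import SPARQLWrapper, JSON

    # Endpoint path assumed to be /sparql under the host named in the abstract.
    sparql = SPARQLWrapper("http://sparql.wikipathways.org/sparql")
    sparql.setQuery("""
    PREFIX wp: <http://vocabularies.wikipathways.org/wp#>
    PREFIX dc: <http://purl.org/dc/elements/1.1/>
    SELECT ?pathway ?title WHERE {
      ?pathway a wp:Pathway ;
               dc:title ?title .
    } LIMIT 10
    """)
    sparql.setReturnFormat(JSON)
    for row in sparql.query().convert()["results"]["bindings"]:
        print(row["pathway"]["value"], "-", row["title"]["value"])
    ```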

  13. Feature Positioning on Google Street View Panoramas

    NASA Astrophysics Data System (ADS)

    Tsai, V. J. D.; Chang, C.-T.

    2012-07-01

    Location-based services (LBS) on web-based maps and images have been available in real time since Google launched its Street View imaging service in 2007. This research employs the Google Maps API and Web Service, GAE for JAVA, AJAX, Proj4js, CSS and HTML in developing an internet platform for accessing the orientation parameters of Google Street View (GSV) panoramas in order to determine the three-dimensional position of interest features that appear on two overlapping panoramas by geometric intersection. A pair of GSV panoramas was examined using known points located on the Library Building of National Chung Hsing University (NCHU), yielding root-mean-squared errors of ±0.522 m, ±1.230 m, and ±5.779 m for intersection and ±0.142 m, ±1.558 m, and ±5.733 m for resection in X, Y, and h (elevation), respectively. Analysis of potential error sources in GSV positioning showed that errors in the Google-provided GSV positional parameters dominate the errors in geometric intersection. The developed system is suitable for data collection in establishing LBS applications integrated with Google Maps and Google Earth for traffic sign and infrastructure inventory, by adding automatic extraction and matching techniques for points of interest (POI) from GSV panoramas.
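
    The geometric intersection step amounts to finding the point closest, in a least-squares sense, to the viewing rays from two panoramas. A self-contained sketch with invented camera positions and directions (not the paper's data):

    ```python
    import numpy as np

    def intersect_rays(origins, directions):
        """Return the 3D point minimizing squared distance to all rays."""
        A = np.zeros((3, 3))
        b = np.zeros(3)
        for o, d in zip(origins, directions):
            d = d / np.linalg.norm(d)
            P = np.eye(3) - np.outer(d, d)   # projector onto the ray's normal plane
            A += P
            b += P @ o
        return np.linalg.solve(A, b)

    # Invented positions/directions for two panoramas observing one feature.
    origins = [np.array([0.0, 0.0, 2.5]), np.array([12.0, 3.0, 2.5])]
    directions = [np.array([0.6, 0.8, 0.05]), np.array([-0.5, 0.86, 0.05])]
    print(intersect_rays(origins, directions))
    ```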

  14. Integration and Exposure of Large Scale Computational Resources Across the Earth System Grid Federation (ESGF)

    NASA Astrophysics Data System (ADS)

    Duffy, D.; Maxwell, T. P.; Doutriaux, C.; Williams, D. N.; Chaudhary, A.; Ames, S.

    2015-12-01

    As the size of remote sensing observations and model output data grows, the volume of the data has become overwhelming, even to many scientific experts. As societies are forced to better understand, mitigate, and adapt to climate change, the combination of Earth observation data and global climate model projections is crucial not only to scientists but to policy makers, downstream applications, and even the public. Scientific progress on understanding climate is critically dependent on the availability of a reliable infrastructure that promotes data access, management, and provenance. The Earth System Grid Federation (ESGF) has created such an environment for the Intergovernmental Panel on Climate Change (IPCC). ESGF provides a federated global cyber infrastructure for data access and management of model outputs generated for the IPCC Assessment Reports (AR). The current generation of the ESGF federated grid allows consumers of the data to find and download data with limited capabilities for server-side processing. Since the amount of data for future ARs is expected to grow dramatically, ESGF is working on integrating server-side analytics throughout the federation. The ESGF Compute Working Team (CWT) has created a Web Processing Service (WPS) Application Programming Interface (API) to enable access to scalable computational resources. The API is the exposure point to high performance computing resources across the federation. Specifically, the API allows users to execute simple operations, such as maximum, minimum, average, and anomalies, on ESGF data without having to download the data. These operations are executed at the ESGF data node site with access to large amounts of parallel computing capabilities. This presentation will highlight the WPS API and its capabilities, provide implementation details, and discuss future developments.
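
    Operations like "average" are invoked WPS-style. Below is a hedged sketch of an OGC WPS 1.0.0 Execute request using the standard key-value parameters; the host and process identifier are placeholders, not a live ESGF endpoint or the CWT API's actual operator names.

    ```python
    import requests

    # Placeholder host and process identifier; KVP keys follow OGC WPS 1.0.0.
    resp = requests.get("https://esgf-node.example.gov/wps", params={
        "service": "WPS",
        "version": "1.0.0",
        "request": "Execute",
        "identifier": "CDAT.average",      # hypothetical server-side operator
        "datainputs": "variable=tas;domain=time:1980-01/2010-12",
    }, timeout=60)
    print(resp.status_code)
    print(resp.text[:200])   # WPS returns an XML status/response document
    ```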

  15. The Virtual Space Physics Observatory: Quick Access to Data and Tools

    NASA Technical Reports Server (NTRS)

    Cornwell, Carl; Roberts, D. Aaron; McGuire, Robert E.

    2006-01-01

    The Virtual Space Physics Observatory (VSPO; see http://vspo.gsfc.nasa.gov) has grown to provide a way to find and access about 375 data products and services from over 100 spacecraft/observatories in space and solar physics. The datasets are mainly chosen to be the most requested, and include most of the publicly available data products from operating NASA Heliophysics spacecraft as well as from solar observatories measuring across the frequency spectrum. Service links include a "quick orbits" page that uses SSCWeb Web Services to provide a rapid answer to questions such as "What spacecraft were in orbit in July 1992?" and "Where were Geotail, Cluster, and Polar on 2 June 2001?" These queries are linked back to the data search page. The VSPO interface provides many ways of looking for data based on terms used in a registry of resources using the SPASE Data Model that will be the standard for Heliophysics Virtual Observatories. VSPO itself is accessible via an API that allows other applications to use it as a Web Service; this has been implemented in one instance using the ViSBARD visualization program. The VSPO will become part of the Space Physics Data Facility, and will continue to expand its access to data. A challenge for all VOs will be to provide uniform access to data at the variable level, and we will be addressing this question in a number of ways.

  16. Multi-Robot Systems in Military Domains (Les Systemes Multi-Robots Dans les Domaines Militaires)

    DTIC Science & Technology

    2008-12-01

    to allow him to react quickly to improve his personal safety, it is mandatory to shorten the current very long delay needed for the human operator to...Hard RT tasks; 2 OS/API: Process monitoring; 3 H/API: Flexible communication medium; 4 H/API: Networking capabilities; 5 H/API: Safety; 6 API...also be considered between high-level services and legacy systems. 4) This is one of the basic requirements for CoRoDe. 5) Safety: CRC, timeouts

  17. blend4php: a PHP API for galaxy

    PubMed Central

    Wytko, Connor; Soto, Brian; Ficklin, Stephen P.

    2017-01-01

    Galaxy is a popular framework for the execution of complex analytical pipelines, typically for large data sets, and is commonly used for (but not limited to) genomic, genetic, and related biological analyses. It provides a web front-end and integrates with high-performance computing resources. Here we report the development of the blend4php library, which wraps Galaxy's RESTful API into a PHP-based library. PHP-based web applications can use blend4php to automate the execution, monitoring, and management of a remote Galaxy server, including its users, workflows, jobs, and more. The blend4php library was specifically developed for the integration of Galaxy with Tripal, the open-source toolkit for the creation of online genomic and genetic web sites. However, it was designed as an independent library for use by any application, and is freely available under version 3 of the GNU Lesser General Public License (LGPL v3.0) at https://github.com/galaxyproject/blend4php. Database URL: https://github.com/galaxyproject/blend4php PMID:28077564

  18. Online characterization of planetary surfaces: PlanetServer, an open-source analysis and visualization tool

    NASA Astrophysics Data System (ADS)

    Marco Figuera, R.; Pham Huu, B.; Rossi, A. P.; Minin, M.; Flahaut, J.; Halder, A.

    2018-01-01

    The lack of open-source tools for hyperspectral data visualization and analysis creates a demand for new tools. In this paper we present the new PlanetServer, a set of tools comprising a web Geographic Information System (GIS) and a recently developed Python Application Programming Interface (API) capable of visualizing and analyzing a wide variety of hyperspectral data from different planetary bodies. Current open-source WebGIS tools are evaluated in order to give an overview and to contextualize how PlanetServer can help in these matters. The web client is thoroughly described, as are the datasets available in PlanetServer; the Python API and the rationale for its development are also presented. Two examples of mineral characterization of hydrosilicates, such as chlorites, prehnites, and kaolinites, in the Nili Fossae area on Mars are presented. As the results obtained compare positively with previous literature on hyperspectral analysis and visualization, we suggest using the PlanetServer approach for such investigations.

  19. The neXtProt peptide uniqueness checker: a tool for the proteomics community.

    PubMed

    Schaeffer, Mathieu; Gateau, Alain; Teixeira, Daniel; Michel, Pierre-André; Zahn-Zabal, Monique; Lane, Lydie

    2017-11-01

    The neXtProt peptide uniqueness checker allows scientists to determine which peptides can be used to validate the existence of human proteins, i.e., which peptides map uniquely, versus multiply, to human protein sequences, taking into account isobaric substitutions, alternative splicing, and single amino acid variants. The pepx program is available at https://github.com/calipho-sib/pepx and can be launched from the command line or through a CGI web interface. Indexing requires a sequence file in FASTA format. The peptide uniqueness checker tool is freely available on the web at https://www.nextprot.org/tools/peptide-uniqueness-checker and from the neXtProt API at https://api.nextprot.org/. lydie.lane@sib.swiss. © The Author(s) 2017. Published by Oxford University Press.

  20. Factor structure of the autonomy preference index in people with severe mental illness.

    PubMed

    Bonfils, Kelsey A; Adams, Erin L; Mueser, Kim T; Wright-Berryman, Jennifer L; Salyers, Michelle P

    2015-08-30

    People vary in the amount of control they want to exercise over decisions about their healthcare. Given the importance of patient-centered care, accurate measurement of these autonomy preferences is critical. This study aimed to assess the factor structure of the Autonomy Preference Index (API), used widely in general healthcare, in individuals with severe mental illness. Data came from two studies of people with severe mental illness (N=293) who were receiving mental health and/or primary care/integrated care services. Autonomy preferences were assessed with the API regarding both psychiatric and primary care services. Confirmatory factor analysis was used to evaluate fit of the hypothesized two-factor structure of the API (decision-making autonomy and information-seeking autonomy). Results indicated the hypothesized structure for the API did not adequately fit the data for either psychiatric or primary care services. Three problematic items were dropped, resulting in adequate fit for both types of treatment. These results suggest that with relatively minor modifications the API has an acceptable factor structure when asking people with severe mental illness about their preferences to be involved in decision-making. The modified API has clinical and research utility for this population in the burgeoning field of autonomy in patient-centered healthcare. Copyright © 2015 Elsevier Ireland Ltd. All rights reserved.

  1. A Driving Cycle Detection Approach Using Map Service API

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhu, Lei; Gonder, Jeffrey D

    Following advancements in smartphone and portable global positioning system (GPS) data collection, wearable GPS data have come into extensive use in transportation surveys and studies. The task of detecting driving cycles (driving or car-mode trajectory segments) from wearable GPS data has been the subject of much research; specifically, distinguishing driving cycles from other motorized trips (such as taking a bus) is the main research problem in this paper. Many mode detection methods focus only on raw GPS speed data, while some studies apply additional information, such as geographic information system (GIS) data, to obtain better detection performance. Procuring and maintaining dedicated road GIS data are costly and not trivial, whereas the technical maturity and broad use of map service application program interface (API) queries offer opportunities for mode detection tasks. The proposed driving cycle detection method takes advantage of map service APIs to obtain high-quality car-mode API route information and uses a trajectory segmentation algorithm to find the best-matched API route. The car-mode API route data, combined with the actual route information (including the actual mode information), are used to train a logistic regression machine learning model, which estimates car modes and non-car modes with probability rates. The experimental results show promise for the proposed method's ability to detect vehicle mode accurately.
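
    The classifier stage is plain logistic regression over route-match features. A self-contained toy version with synthetic features follows; the feature definitions and data here are invented for illustration and are not the paper's actual feature set.

    ```python
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)
    n = 200
    # feature 0: spatial match ratio with the best car-mode API route (0..1)
    # feature 1: mean absolute speed difference vs. that route (m/s)
    X_car = np.column_stack([rng.uniform(0.7, 1.0, n), rng.uniform(0.0, 3.0, n)])
    X_bus = np.column_stack([rng.uniform(0.3, 0.9, n), rng.uniform(2.0, 8.0, n)])
    X = np.vstack([X_car, X_bus])
    y = np.array([1] * n + [0] * n)   # 1 = car mode, 0 = other motorized mode

    clf = LogisticRegression().fit(X, y)
    # Probability that a new trajectory segment is car mode:
    print(clf.predict_proba([[0.95, 1.0]])[0, 1])
    ```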

  2. KAGLVis - On-line 3D Visualisation of Earth-observing-satellite Data

    NASA Astrophysics Data System (ADS)

    Szuba, Marek; Ameri, Parinaz; Grabowski, Udo; Maatouki, Ahmad; Meyer, Jörg

    2015-04-01

    One of the goals of the Large-Scale Data Management and Analysis project is to provide a high-performance framework facilitating management of data acquired by Earth-observing satellites such as Envisat. On the client-facing facet of this framework, we strive to provide a visualisation and basic analysis tool that can be used by scientists with minimal to no knowledge of the underlying infrastructure. Our tool, KAGLVis, is a JavaScript client-server Web application which leverages modern Web technologies to provide three-dimensional visualisation of satellite observables on a wide range of client systems. It takes advantage of the WebGL API to employ locally available GPU power for 3D rendering; this approach has been demonstrated to perform well even on relatively weak hardware, such as the integrated graphics chipsets found in modern laptop computers, and with some user-interface tuning could even be usable on embedded devices such as smartphones or tablets. Data are fetched from the database back-end using a ReST API and cached locally, both in memory and using HTML5 Web Storage, to minimise network use. Computations, for instance the calculation of cloud altitude from cloud-index measurements, can, depending on configuration, be performed on either the client or the server side. Keywords: satellite data, Envisat, visualisation, 3D graphics, Web application, WebGL, MEAN stack.

  3. An Auto-Configuration System for the GMSEC Architecture and API

    NASA Technical Reports Server (NTRS)

    Moholt, Joseph; Mayorga, Arturo

    2007-01-01

    A viewgraph presentation on an automated configuration concept for the Goddard Mission Services Evolution Center (GMSEC) architecture and Application Program Interface (API) is shown. The topics include: 1) The Goddard Mission Services Evolution Center (GMSEC); 2) Automated Configuration Concept; 3) Implementation Approach; and 4) Key Components and Benefits.

  4. Abstracting application deployment on Cloud infrastructures

    NASA Astrophysics Data System (ADS)

    Aiftimiei, D. C.; Fattibene, E.; Gargana, R.; Panella, M.; Salomoni, D.

    2017-10-01

    Deploying a complex application on a Cloud-based infrastructure can be a challenging task. In this contribution we present an approach for Cloud-based deployment of applications and its present and future implementation in the framework of several projects, such as "!CHAOS: a cloud of controls" [1], a project funded by MIUR (the Italian Ministry of Research and Education) to create a Cloud-based deployment of a control system and data acquisition framework; "INDIGO-DataCloud" [2], an EC H2020 project targeting, among other things, high-level deployment of applications on hybrid Clouds; and "Open City Platform" [3], an Italian project aiming to provide open Cloud solutions for Italian Public Administrations. We decided to use an orchestration service to hide the complex deployment of the application components, and to build an abstraction layer on top of the orchestration one. Through the Heat [4] orchestration service, we prototyped a dynamic, on-demand, scalable platform of software components based on OpenStack infrastructures. On top of the orchestration service we developed a prototype of a web interface exploiting the Heat APIs. The user can start an instance of the application without having knowledge of the underlying Cloud infrastructure and services. Moreover, the platform instance can be customized by choosing parameters related to the application, such as the size of a file system or the number of instances in a NoSQL DB cluster. As soon as the desired platform is running, the web interface offers the possibility to scale some infrastructure components. In this contribution we describe the solution design and implementation, based on the application requirements, and the details of the development of both the Heat templates and the web interface, together with possible exploitation strategies of this work in Cloud data centers.

  5. Home Energy Scoring Tools (website) and Application Programming Interfaces, APIs (aka HEScore)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mills, Evan; Bourassa, Norm; Rainer, Leo

    A web-based residential energy rating tool with APIs that runs on the LBNL website. It provides customized estimates of residential energy use and energy bills based on building description information provided by the user. Energy use is estimated using engineering models developed at LBNL. Space heating and cooling use is based on the DOE-2.1E building simulation model. Other end-uses (water heating, appliances, lighting, and misc. equipment) are based on engineering models developed by LBNL.

  6. Home Energy Scoring Tools (website) and Application Programming Interfaces, APIs (aka HEScore)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mills, Evan; Bourassa, Norm; Rainer, Leo

    2016-04-22

    A web-based residential energy rating tool with APIs that runs on the LBNL website. It provides customized estimates of residential energy use and energy bills based on building description information provided by the user. Energy use is estimated using engineering models developed at LBNL. Space heating and cooling use is based on the DOE-2.1E building simulation model. Other end-uses (water heating, appliances, lighting, and misc. equipment) are based on engineering models developed by LBNL.

  7. Fast segmentation of satellite images using SLIC, WebGL and Google Earth Engine

    NASA Astrophysics Data System (ADS)

    Donchyts, Gennadii; Baart, Fedor; Gorelick, Noel; Eisemann, Elmar; van de Giesen, Nick

    2017-04-01

    Google Earth Engine (GEE) is a parallel geospatial processing platform, which harmonizes access to petabytes of freely available satellite images. It provides a very rich API, allowing development of dedicated algorithms to extract useful geospatial information from these images. At the same time, modern GPUs provide thousands of computing cores, which are mostly not utilized in this context. In recent years, WebGL has become a popular and well-supported API, allowing fast image processing directly in web browsers. In this work, we will evaluate the applicability of WebGL to enable fast segmentation of satellite images. A new implementation of the Simple Linear Iterative Clustering (SLIC) algorithm using GPU shaders will be presented. SLIC is a simple and efficient method to decompose an image into visually homogeneous regions. It adapts a k-means clustering approach to generate superpixels efficiently. While this approach will be hard to scale, due to a significant amount of data to be transferred to the client, it should significantly improve exploratory possibilities and simplify development of dedicated algorithms for geoscience applications. Our prototype implementation will be used to improve surface water detection of reservoirs using multispectral satellite imagery.
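
    For readers who want to experiment with the clustering step itself, the snippet below runs the same SLIC decomposition using the CPU reference implementation from scikit-image rather than the WebGL shader version described above; the random input array is a stand-in for a satellite tile.

        import numpy as np
        from skimage.segmentation import slic

        # stand-in for a satellite tile (rows x cols x bands, values in [0, 1])
        image = np.random.rand(256, 256, 3)

        # ~1000 visually homogeneous regions; compactness trades colour similarity
        # against spatial proximity in the adapted k-means distance measure
        segments = slic(image, n_segments=1000, compactness=10.0)
        print(segments.shape, segments.max() + 1)  # label image and region count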

  8. The Geo Data Portal an Example Physical and Application Architecture Demonstrating the Power of the "Cloud" Concept.

    NASA Astrophysics Data System (ADS)

    Blodgett, D. L.; Booth, N.; Walker, J.; Kunicki, T.

    2012-12-01

    The U.S. Geological Survey Center for Integrated Data Analytics (CIDA), in keeping with the President's Digital Government Strategy and the Department of the Interior's IT Transformation initiative, has evolved its data center and application architecture toward the "cloud" paradigm. In this case, "cloud" refers to a goal of developing services that may be distributed to infrastructure anywhere on the Internet. This transition has taken place across the entire data management spectrum, from data center location to physical hardware configuration to software design and implementation. In CIDA's case, physical hardware resides in Madison at the Wisconsin Water Science Center, in South Dakota at the Earth Resources Observation and Science Center (EROS), and in the near future at a DOI-approved commercial vendor. Tasks normally conducted on desktop-based GIS software with local copies of data in proprietary formats are now done using browser-based interfaces to web processing services drawing on a network of standard data-source web services. Organizations are gaining economies of scale through data center consolidation and the creation of private cloud services, as well as taking advantage of the commoditization of data processing services. Leveraging open standards for data and data management takes advantage of this commoditization and provides the means to reliably build distributed service-based systems. This presentation will use CIDA's experience as an illustration of the benefits and hurdles of moving to the cloud. Replicating, reformatting, and processing large data sets, such as downscaled climate projections, traditionally present a substantial challenge to environmental science researchers who need access to data subsets and derived products. The USGS Geo Data Portal (GDP) project uses cloud concepts to help earth system scientists access subsets, spatial summaries, and derivatives of commonly needed, very large datasets. The GDP project has developed a reusable architecture and advanced processing services that currently access archives hosted at Lawrence Livermore National Lab, Oregon State University, the University Corporation for Atmospheric Research, and the U.S. Geological Survey, among others. Several examples of how the GDP project uses cloud concepts will be highlighted in this presentation: 1) The high-bandwidth network connectivity of large data centers reduces the need for data replication and storage local to processing services. 2) Standard data-serving web services, like OPeNDAP, Web Coverage Services, and Web Feature Services, allow GDP services to remotely access custom subsets of data in a variety of formats, further reducing the need for data replication and reformatting. 3) The GDP services use standard web service APIs to allow browser-based user interfaces to run complex and compute-intensive processes for users from any computer with an Internet connection. The combination of physical infrastructure and application architecture implemented for the Geo Data Portal project offers an operational example of how distributed data and processing on the cloud can be used to aid earth system science.
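
    Point 2 above is easy to demonstrate in miniature: with an OPeNDAP-served dataset, a client can pull only the subset it needs instead of replicating the whole archive. A minimal sketch follows, assuming a hypothetical dataset URL and variable and coordinate names.

        import xarray as xr

        # hypothetical OPeNDAP endpoint and variable/coordinate names
        URL = "https://data.example.gov/thredds/dodsC/downscaled_climate"

        ds = xr.open_dataset(URL)  # lazy: no bulk download happens here
        subset = ds["tasmax"].sel(lat=slice(42, 47),
                                  lon=slice(-93, -86),
                                  time=slice("2040-01-01", "2049-12-31"))
        print(subset.mean().values)  # only the requested subset crosses the network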

  9. Mobile service for open data visualization on geo-based images

    NASA Astrophysics Data System (ADS)

    Lee, Kiwon; Kim, Kwangseob; Kang, Sanggoo

    2015-12-01

    Since the early 2010s, governments in most countries have adopted and promoted open data policies and open data platforms. Korea is in the same situation, and its government and public organizations have operated publicly accessible open data portal systems since 2011. The number of open datasets and data types has been increasing every year. These trends are even more expandable or extensible in mobile environments. The purpose of this study is to design and implement a mobile application service to visualize variously typed or formatted public open data together with geo-based images on the mobile web. Open data cover downloadable data sets as well as openly accessible data application programming interfaces (APIs). Geo-based images here mean multi-sensor satellite imagery which is geo-referenced and matched with digital map sets. The system components for the mobile service are fully based on open sources and open development environments, without any commercialized tools: PostgreSQL for the database management system, OTB for remote sensing image processing, GDAL for data conversion, GeoServer for the application server, OpenLayers for mobile web mapping, R for data analysis, and D3.js for web-based data graphic processing. The client-side mobile application was implemented using HTML5 for cross-browser and cross-platform support. The result shows many advantages, such as linking open data and geo-based data, integrating open data and open source, and demonstrating mobile applications with open data. It is expected that this approach is a cost-effective and process-efficient implementation strategy for intelligent earth observing data.
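
    One link in the open-source tool chain above can be shown concretely: the hedged sketch below uses GDAL's Python bindings to convert a raw scene into a tiled, compressed GeoTIFF of the kind typically served through GeoServer. Both file names are placeholders.

        from osgeo import gdal

        gdal.UseExceptions()

        # convert a raw scene into a tiled, compressed GeoTIFF for web serving;
        # input and output file names are placeholders
        gdal.Translate(
            "scene_web.tif",
            "scene_raw.tif",
            format="GTiff",
            creationOptions=["TILED=YES", "COMPRESS=DEFLATE"],
        )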

  10. Improved Functionality and Curation Support in the ADS

    NASA Astrophysics Data System (ADS)

    Accomazzi, Alberto; Kurtz, Michael J.; Henneken, Edwin A.; Grant, Carolyn S.; Thompson, Donna; Chyla, Roman; Holachek, Alexandra; Sudilovsky, Vladimir; Murray, Stephen S.

    2015-01-01

    In this poster we describe the developments of the new ADS platform over the past year, focusing on the functionality which improves its discovery and curation capabilities. The ADS Application Programming Interface (API) is being updated to support authenticated access to the entire suite of ADS services, in addition to the search functionality itself. This allows programmatic access to resources which are specific to a user or class of users. A new interface, built directly on top of the API, now provides a more intuitive search experience and takes into account the best practices in web usability and responsive design. The interface now incorporates in-line views of graphics from the AAS Astroexplorer and the ADS All-Sky Survey image collections. The ADS Private Libraries, first introduced over 10 years ago, are now being enhanced to allow the bookmarking, tagging and annotation of records of interest. In addition, libraries can be shared with one or more ADS users, providing an easy way to collaborate in the curation of lists of papers. A library can also be explicitly made public and shared at large via the publishing of its URL. In collaboration with the AAS, the ADS plans to support the adoption of ORCID identifiers by implementing a plugin which will simplify the import of papers in ORCID via a query to the ADS API. Deeper integration between the two systems will depend on available resources and feedback from the community.
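
    As a sketch of the authenticated programmatic access described above, the following queries the ADS search endpoint as it is publicly documented; the token is a placeholder for a per-user API key, and the query string is only an example.

        import requests

        TOKEN = "YOUR_ADS_API_TOKEN"  # placeholder per-user key
        resp = requests.get(
            "https://api.adsabs.harvard.edu/v1/search/query",
            params={"q": 'author:"Accomazzi, A." year:2015', "fl": "bibcode,title"},
            headers={"Authorization": f"Bearer {TOKEN}"},
        )
        resp.raise_for_status()
        for doc in resp.json()["response"]["docs"]:
            print(doc["bibcode"], doc.get("title"))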

  11. AnnoLnc: a web server for systematically annotating novel human lncRNAs.

    PubMed

    Hou, Mei; Tang, Xing; Tian, Feng; Shi, Fangyuan; Liu, Fenglin; Gao, Ge

    2016-11-16

    Long noncoding RNAs (lncRNAs) have been shown to play essential roles in almost every important biological process through multiple mechanisms. Although the repertoire of human lncRNAs has rapidly expanded, their biological function and regulation remain largely elusive, calling for a systematic and integrative annotation tool. Here we present AnnoLnc (http://annolnc.cbi.pku.edu.cn), a one-stop portal for systematically annotating novel human lncRNAs. Based on more than 700 data sources and various tool chains, AnnoLnc enables a systematic annotation covering genomic location, secondary structure, expression patterns, transcriptional regulation, miRNA interaction, protein interaction, genetic association and evolution. An intuitive web interface is available for interactive analysis through both desktops and mobile devices, and programmers can further integrate AnnoLnc into their pipeline through standard JSON-based Web Service APIs. To the best of our knowledge, AnnoLnc is the only web server to provide on-the-fly and systematic annotation for newly identified human lncRNAs. Compared with similar tools, the annotation generated by AnnoLnc covers a much wider spectrum with intuitive visualization. Case studies demonstrate the power of AnnoLnc in not only rediscovering known functions of human lncRNAs but also inspiring novel hypotheses.

  12. GenomeD3Plot: a library for rich, interactive visualizations of genomic data in web applications.

    PubMed

    Laird, Matthew R; Langille, Morgan G I; Brinkman, Fiona S L

    2015-10-15

    A simple static image of genomes and associated metadata is very limiting, as researchers expect rich, interactive tools similar to the web applications found in the post-Web 2.0 world. GenomeD3Plot is a lightweight visualization library written in JavaScript using the D3 library. GenomeD3Plot provides a rich API to allow the rapid visualization of complex genomic data using a convenient standards-based JSON configuration file. When integrated into existing web services, GenomeD3Plot allows researchers to interact with data, dynamically alter the view, or even resize or reposition the visualization in their browser window. In addition, GenomeD3Plot has built-in functionality to export any resulting genome visualization in PNG or SVG format for easy inclusion in manuscripts or presentations. GenomeD3Plot is being utilized in the recently released IslandViewer 3 (www.pathogenomics.sfu.ca/islandviewer/) to visualize predicted genomic islands with other genome annotation data. However, its features enable it to be more widely applicable for dynamic visualization of genomic data in general. GenomeD3Plot is licensed under the GNU-GPL v3 at https://github.com/brinkmanlab/GenomeD3Plot/.

  13. Solar Irradiance Data Products at the LASP Interactive Solar IRradiance Datacenter (LISIRD)

    NASA Astrophysics Data System (ADS)

    Lindholm, D. M.; Ware DeWolfe, A.; Wilson, A.; Pankratz, C. K.; Snow, M. A.; Woods, T. N.

    2011-12-01

    The Laboratory for Atmospheric and Space Physics (LASP) has developed the LASP Interactive Solar IRradiance Datacenter (LISIRD, http://lasp.colorado.edu/lisird/) web site to provide access to a comprehensive set of solar irradiance measurements and related datasets. Current data holdings include products from NASA missions SORCE, UARS, SME, and TIMED-SEE. The data provided covers a wavelength range from soft X-ray (XUV) at 0.1 nm up to the near infrared (NIR) at 2400 nm, as well as Total Solar Irradiance (TSI). Other datasets include solar indices, spectral and flare models, solar images, and more. The LISIRD web site features updated plotting, browsing, and download capabilities enabled by dygraphs, JavaScript, and Ajax calls to the LASP Time Series Server (LaTiS). In addition to the web browser interface, most of the LISIRD datasets can be accessed via the LaTiS web service interface that supports the OPeNDAP standard. OPeNDAP clients and other programming APIs are available for making requests that subset, aggregate, or filter data on the server before it is transported to the user. This poster provides an overview of the LISIRD system, summarizes the datasets currently available, and provides details on how to access solar irradiance data products through LISIRD's interfaces.
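
    A hedged example of this server-side subsetting follows: LaTiS-style URLs carry a projection and a selection, so filtering happens on the server before the data are transported. The dataset identifier and variable names below are assumptions; the LISIRD catalog lists the real ones.

        import requests

        BASE = "https://lasp.colorado.edu/lisird/latis/dap"
        # dataset id and variable names are assumptions; see the LISIRD catalog
        url = f"{BASE}/sorce_tsi_24hr_l3.csv?time,irradiance&time>=2010-01-01"

        resp = requests.get(url, timeout=60)
        resp.raise_for_status()
        print(resp.text.splitlines()[:5])  # CSV header plus the first data rows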

  14. Incorporating Web 2.0 Technologies from an Organizational Perspective

    NASA Astrophysics Data System (ADS)

    Owens, R.

    2009-12-01

    The Arctic Research Consortium of the United States (ARCUS) provides support for the organization, facilitation, and dissemination of online educational and scientific materials and information to a wide range of stakeholders. ARCUS is currently weaving the fabric of Web 2.0 technologies—web development featuring interactive information sharing and user-centered design—into its structure, both as a tool for information management and for educational outreach. The importance of planning, developing, and maintaining a cohesive online platform in order to integrate data storage and dissemination will be discussed in this presentation, as well as some specific open source technologies and tools currently available, including:
    ○ Content Management: Any system set up to manage the content of web sites and services. Drupal is a content management system built in a modular fashion, allowing for a powerful set of features including, but not limited to, weblogs, forums, event calendars, polling, and more.
    ○ Faceted Search: Combined with full-text indexing, faceted searching allows site visitors to locate information quickly and then provides a set of 'filters' with which to narrow the search results. Apache Solr is a search server with a web-services-like API (Application Programming Interface) that has built-in support for faceted searching.
    ○ Semantic Web: The semantic web refers to the ongoing evolution of the World Wide Web as it begins to incorporate semantic components, which aid in processing requests. OpenCalais is a web service that uses natural language processing, along with other methods, to extract meaningful 'tags' from content. This metadata can then be used to connect people, places, and things throughout a website, enriching the surfing experience for the end user.
    ○ Web Widgets: A web widget is a portable 'piece of code' that can be embedded easily into web pages by an end user. Timeline is a widget developed as part of the SIMILE project at MIT (Massachusetts Institute of Technology) for displaying time-based events in a clean, horizontal timeline display.
    Numerous standards, applications, and 3rd-party integration services are also available for use in today's Web 2.0 environment. In addition to a cohesive online platform, the following tools can improve networking, information sharing, and scientific and educational collaboration:
    ○ Facebook (fan pages, social networking, etc.)
    ○ Twitter/Twitterfeed (automatic updates in 3 steps)
    ○ Mobify.me (mobile web)
    ○ Wimba, Adobe Connect, etc. (real-time conferencing)
    Increasingly, the scientific community is being asked to share data and information within and outside disciplines, with K-12 students, and with members of the public and policy-makers. Web 2.0 technologies can easily be set up and utilized to share data and other information with specific audiences in real time, and their simplicity ensures their increasing use by the science community in years to come.

  15. 49 CFR 195.307 - Pressure testing aboveground breakout tanks.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... aboveground breakout tanks. (a) For aboveground breakout tanks built to API Specification 12F and first placed in service after October 2, 2000, pneumatic testing must be in accordance with section 5.3 of API Specification 12F (incorporated by reference, see § 195.3). (b) For aboveground breakout tanks built to API...

  16. 49 CFR 195.405 - Protection against ignitions and safe access/egress involving floating roofs.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... accordance with API Recommended Practice 2003, unless the operator notes in the procedural manual (§ 195.402(c)) why compliance with all or certain provisions of API Recommended Practice 2003 is not necessary... removed from service for cleaning) are addressed in API Publication 2026. After October 2, 2000, the...

  17. 49 CFR 195.405 - Protection against ignitions and safe access/egress involving floating roofs.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... accordance with API Recommended Practice 2003, unless the operator notes in the procedural manual (§ 195.402(c)) why compliance with all or certain provisions of API Recommended Practice 2003 is not necessary... removed from service for cleaning) are addressed in API Publication 2026. After October 2, 2000, the...

  18. 49 CFR 195.405 - Protection against ignitions and safe access/egress involving floating roofs.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... accordance with API Recommended Practice 2003, unless the operator notes in the procedural manual (§ 195.402(c)) why compliance with all or certain provisions of API Recommended Practice 2003 is not necessary... removed from service for cleaning) are addressed in API Publication 2026. After October 2, 2000, the...

  19. 49 CFR 195.405 - Protection against ignitions and safe access/egress involving floating roofs.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... accordance with API Recommended Practice 2003, unless the operator notes in the procedural manual (§ 195.402(c)) why compliance with all or certain provisions of API Recommended Practice 2003 is not necessary... removed from service for cleaning) are addressed in API Publication 2026. After October 2, 2000, the...

  20. 49 CFR 195.307 - Pressure testing aboveground breakout tanks.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... aboveground breakout tanks. (a) For aboveground breakout tanks built to API Specification 12F and first placed in service after October 2, 2000, pneumatic testing must be in accordance with section 5.3 of API Specification 12F (incorporated by reference, see § 195.3). (b) For aboveground breakout tanks built to API...

  1. 49 CFR 195.405 - Protection against ignitions and safe access/egress involving floating roofs.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... accordance with API Recommended Practice 2003, unless the operator notes in the procedural manual (§ 195.402(c)) why compliance with all or certain provisions of API Recommended Practice 2003 is not necessary... removed from service for cleaning) are addressed in API Publication 2026. After October 2, 2000, the...

  2. 49 CFR 195.307 - Pressure testing aboveground breakout tanks.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... aboveground breakout tanks. (a) For aboveground breakout tanks built to API Specification 12F and first placed in service after October 2, 2000, pneumatic testing must be in accordance with section 5.3 of API Specification 12F (incorporated by reference, see § 195.3). (b) For aboveground breakout tanks built to API...

  3. 49 CFR 195.307 - Pressure testing aboveground breakout tanks.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... aboveground breakout tanks. (a) For aboveground breakout tanks built to API Specification 12F and first placed in service after October 2, 2000, pneumatic testing must be in accordance with section 5.3 of API Specification 12F (incorporated by reference, see § 195.3). (b) For aboveground breakout tanks built to API...

  4. The National 3-D Geospatial Information Web-Based Service of Korea

    NASA Astrophysics Data System (ADS)

    Lee, D. T.; Kim, C. W.; Kang, I. G.

    2013-09-01

    3D geospatial information systems should provide efficient spatial analysis tools, be able to use all capabilities of the third dimension, and offer visualization. Currently, many human activities are moving toward the third dimension, such as land use, urban and landscape planning, cadastre, environmental monitoring, transportation monitoring, the real estate market, military applications, etc. To reflect this trend, the Korean government has started to construct a 3D geospatial data and service platform. Since geospatial information was introduced in Korea, the construction of geospatial information (3D geospatial information, digital maps, aerial photographs, ortho photographs, etc.) has been led by the central government. The purpose of this study is to introduce the Korean government-led 3D geospatial information web-based service for people who are interested in this industry; we introduce not only the present state of the constructed 3D geospatial data but also methodologies and applications of 3D geospatial information. About 15% (about 3,278.74 km2) of the total urban area's 3D geospatial data was constructed by the National Geographic Information Institute (NGII) of Korea from 2005 to 2012. In particular, level of detail (LOD) 4 data (photo-realistic textured 3D models with corresponding ortho photographs) were constructed for six metropolitan cities and Dokdo (an island belonging to Korea) in 2012. In this paper, we present the composition and infrastructure of the web-based 3D map service system, and a comparison of V-world with the Google Earth service will be presented. We also present Open API based service cases and discuss the protection of location privacy when constructing 3D indoor building models. In order to prevent an invasion of privacy, we applied image blurring, elimination, and camouflage. The importance of public-private cooperation and an advanced geospatial information policy is emphasized in Korea. Thus, progress in the spatial information industry of Korea is expected in the near future.

  5. 49 CFR 195.307 - Pressure testing aboveground breakout tanks.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... placed in service after October 2, 2000, pneumatic testing must be in accordance with section 5.3 of API Specification 12F (incorporated by reference, see § 195.3). (b) For aboveground breakout tanks built to API Standard 620 and first placed in service after October 2, 2000, hydrostatic and pneumatic testing must be...

  6. QMachine: commodity supercomputing in web browsers.

    PubMed

    Wilkinson, Sean R; Almeida, Jonas S

    2014-06-09

    Ongoing advancements in cloud computing provide novel opportunities in scientific computing, especially for distributed workflows. Modern web browsers can now be used as high-performance workstations for querying, processing, and visualizing genomics "Big Data" from sources like The Cancer Genome Atlas (TCGA) and the International Cancer Genome Consortium (ICGC) without local software installation or configuration. The design of QMachine (QM) was driven by the opportunity to use this pervasive computing model in the context of the Web of Linked Data in Biomedicine. QM is an open-source, publicly available web service that acts as a messaging system for posting tasks and retrieving results over HTTP. The illustrative application described here distributes the analyses of 20 Streptococcus pneumoniae genomes for shared suffixes. Because all analytical and data retrieval tasks are executed by volunteer machines, few server resources are required. Any modern web browser can submit those tasks and/or volunteer to execute them without installing any extra plugins or programs. A client library provides high-level distribution templates including MapReduce. This stark departure from the current reliance on expensive server hardware running "download and install" software has already gathered substantial community interest, as QM received more than 2.2 million API calls from 87 countries in 12 months. QM was found adequate to deliver the sort of scalable bioinformatics solutions that computation- and data-intensive workflows require. Paradoxically, the sandboxed execution of code by web browsers was also found to enable them, as compute nodes, to address critical privacy concerns that characterize biomedical environments.
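
    The message-board pattern QM implements can be sketched generically: a submitter POSTs a task description over HTTP and later retrieves the result that a volunteer browser has written back. The endpoints and payload fields below are hypothetical illustrations of the pattern, not QM's actual wire format.

        import requests

        BOARD = "https://qm.example.org/box/demo"  # hypothetical message box URL

        # submit a task description for some volunteer browser to execute
        task = requests.post(BOARD, json={"status": "waiting",
                                          "cmd": "longest_shared_suffix",
                                          "args": ["genomeA", "genomeB"]}).json()

        # ...later, poll the board for the volunteer-computed result
        result = requests.get(f"{BOARD}/{task['id']}").json()
        print(result.get("status"), result.get("value"))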

  7. The APIS service : a tool for accessing value-added HST planetary auroral observations over 1997-2015

    NASA Astrophysics Data System (ADS)

    Lamy, L.; Henry, F.; Prangé, R.; Le Sidaner, P.

    2015-10-01

    The Auroral Planetary Imaging and Spectroscopy (APIS) service http://obspm.fr/apis/ provides an open and interactive access to processed auroral observations of the outer planets and their satellites. Such observations are of interest for a wide community at the interface between planetology, magnetospheric and heliospheric physics. APIS consists of (i) a high level database, built from planetary auroral observations acquired by the Hubble Space Telescope (HST) since 1997 with its mostly used Far-Ultraviolet spectro- imagers, (ii) a dedicated search interface aimed at browsing efficiently this database through relevant conditional search criteria (Figure 1) and (iii) the ability to interactively work with the data online through plotting tools developed by the Virtual Observatory (VO) community, such as Aladin and Specview. This service is VO compliant and can therefore also been queried by external search tools of the VO community. The diversity of available data and the capability to sort them out by relevant physical criteria shall in particular facilitate statistical studies, on long-term scales and/or multi-instrumental multispectral combined analysis [1,2]. We will present the updated capabilities of APIS with several examples. Several tutorials are available online.

  8. Using Map Service API for Driving Cycle Detection for Wearable GPS Data: Preprint

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhu, Lei; Gonder, Jeffrey D

    Following advancements in smartphone and portable global positioning system (GPS) data collection, wearable GPS data have realized extensive use in transportation surveys and studies. The task of detecting driving cycles (driving or car-mode trajectory segments) from wearable GPS data has been the subject of much research. Specifically, distinguishing driving cycles from other motorized trips (such as taking a bus) is the main research problem in this paper. Many mode detection methods only focus on raw GPS speed data, while some studies apply additional information, such as geographic information system (GIS) data, to obtain better detection performance. Procuring and maintaining dedicated road GIS data are costly and not trivial, whereas the technical maturity and broad use of map service application program interface (API) queries offer opportunities for mode detection tasks. The proposed driving cycle detection method takes advantage of map service APIs to obtain high-quality car-mode API route information and uses a trajectory segmentation algorithm to find the best-matched API route. The car-mode API route data combined with the actual route information, including the actual mode information, are used to train a logistic regression machine learning model, which estimates car modes and non-car modes with probability rates. The experimental results show promise for the proposed method's ability to detect vehicle mode accurately.
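
    The final classification step lends itself to a compact sketch. The toy example below trains a logistic regression on synthetic segment features (speed statistics plus an API-route match score); the feature set and data are stand-ins for the paper's actual inputs.

        import numpy as np
        from sklearn.linear_model import LogisticRegression

        rng = np.random.default_rng(0)
        # columns: mean speed (m/s), speed std dev, API-route match score in [0, 1]
        X = rng.random((200, 3)) * [20.0, 8.0, 1.0]
        y = (X[:, 2] > 0.5).astype(int)  # 1 = car mode (synthetic labels)

        model = LogisticRegression().fit(X, y)
        segment = [[13.4, 3.2, 0.81]]  # one summarized GPS trajectory segment
        print(model.predict(segment), model.predict_proba(segment))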

  9. The Joy of Playing with Oceanographic Data

    NASA Astrophysics Data System (ADS)

    Smith, A. T.; Xing, Z.; Armstrong, E. M.; Thompson, C. K.; Huang, T.

    2013-12-01

    The web is no longer just an afterthought. It is no longer just a presentation layer filled with HTML, CSS, JavaScript, frameworks, 3D, and more. It has become the medium of our communication. It is the database of all databases. It is the computing platform of all platforms. It has transformed the way we do science. Web services are the de facto method for communication between machines over the web. Representational State Transfer (REST) has standardized the way we architect services and their interfaces. In the Earth Science domain, we are familiar with tools and services such as the Open-Source Project for Network Data Access Protocol (OPeNDAP), Thematic Realtime Environmental Distributed Data Services (THREDDS), and the Live Access Server (LAS). We are also familiar with various data formats such as NetCDF3/4, HDF4/5, GRIB, TIFF, etc. One of the challenges for the Earth Science community is accessing information within these data. There are community-accepted readers that our users can download and install. However, the Application Programming Interface (API) between these readers is not standardized, which leads to non-portable applications. Webification (w10n) is an emerging technology, developed at the Jet Propulsion Laboratory, which exploits the hierarchical nature of a science data artifact (e.g. a granule file) to assign a URL to each element within the artifact. By embracing standards such as JSON, XML, and HTML5 and predictable URLs, w10n provides a simple interface that enables tool-builders and researchers to develop portable tools and applications to interact with artifacts of various formats. The NASA Physical Oceanographic Distributed Active Archive Center (PO.DAAC) is the designated data center for observational products relevant to the physical state of the ocean. Over the past year PO.DAAC has been evaluating w10n technology by webifying its archive holdings to provide simplified access to oceanographic science artifacts and as a service to enable future tools and services development. In this talk, we will focus on a w10n-based system called the Distributed Oceanographic Webification Service (DOWS), being developed at PO.DAAC to provide a newer and simpler method for working with observational data artifacts. As a continued effort at PO.DAAC to provide better tools and services to visualize our data, the talk will also discuss the latest in web-based data visualization tools/frameworks (such as d3.js, Three.js, Leaflet.js, and more) and techniques for working with webified oceanographic science data in both 2D and 3D web approaches.

  10. MSeqDR mvTool: A mitochondrial DNA Web and API resource for comprehensive variant annotation, universal nomenclature collation, and reference genome conversion.

    PubMed

    Shen, Lishuang; Attimonelli, Marcella; Bai, Renkui; Lott, Marie T; Wallace, Douglas C; Falk, Marni J; Gai, Xiaowu

    2018-06-01

    Accurate mitochondrial DNA (mtDNA) variant annotation is essential for the clinical diagnosis of diverse human diseases. Substantial challenges to this process include the inconsistency in mtDNA nomenclatures, the existence of multiple reference genomes, and a lack of reference population frequency data. Clinicians need a simple bioinformatics tool that is user-friendly, and bioinformaticians need a powerful informatics resource for programmatic usage. Here, we report the development and functionality of the MSeqDR mtDNA Variant Tool set (mvTool), a one-stop mtDNA variant annotation and analysis Web service. mvTool is built upon the MSeqDR infrastructure (https://mseqdr.org), with contributions of expert-curated data from MITOMAP (https://www.mitomap.org) and HmtDB (https://www.hmtdb.uniba.it/hmdb). mvTool supports all mtDNA nomenclatures, converts variants to standard rCRS- and HGVS-based nomenclatures, and annotates novel mtDNA variants. Besides generic annotations from dbNSFP and the Variant Effect Predictor (VEP), mvTool provides allele frequencies in more than 47,000 germline mitogenomes, and disease and pathogenicity classifications from MSeqDR, MITOMAP, HmtDB and ClinVar (Landrum et al., 2013). mvTool also provides mtDNA somatic variant annotations. The "mvTool API" is implemented for programmatic access using inputs in VCF, HGVS, or classical mtDNA variant nomenclatures. The results are reported as hyperlinked HTML tables, JSON, Excel, and VCF formats. MSeqDR mvTool is freely accessible at https://mseqdr.org/mvtool.php.

  11. WebDMS: A Web-Based Data Management System for Environmental Data

    NASA Astrophysics Data System (ADS)

    Ekstrand, A. L.; Haderman, M.; Chan, A.; Dye, T.; White, J. E.; Parajon, G.

    2015-12-01

    DMS is an environmental Data Management System to manage, quality-control (QC), summarize, document chain-of-custody, and disseminate data from networks ranging in size from a few sites to thousands of sites, instruments, and sensors. The server-client desktop version of DMS is used by local and regional air quality agencies (including the Bay Area Air Quality Management District, the South Coast Air Quality Management District, and the California Air Resources Board), the EPA's AirNow Program, and the EPA's AirNow-International (AirNow-I) program, which offers countries the ability to run an AirNow-like system. As AirNow's core data processing engine, DMS ingests, QCs, and stores real-time data from over 30,000 active sensors at over 5,280 air quality and meteorological sites from over 130 air quality agencies across the United States. As part of the AirNow-I program, several instances of DMS are deployed in China, Mexico, and Taiwan. The U.S. Department of State's StateAir Program also uses DMS for five regions in China and plans to expand to other countries in the future. Recent development has begun to migrate DMS from an onsite desktop application to WebDMS, a web-based application designed to take advantage of cloud hosting and computing services to increase scalability and lower costs. WebDMS will continue to provide easy-to-use data analysis tools, such as time-series graphs, scatterplots, and wind- or pollution-rose diagrams, as well as allowing data to be exported to external systems such as the EPA's Air Quality System (AQS). WebDMS will also provide new GIS analysis features and a suite of web services through a RESTful web API. These changes will better meet air agency needs and allow for broader national and international use (for example, by the AirNow-I partners). We will talk about the challenges and advantages of migrating DMS to the web, modernizing the DMS user interface, and making it more cost-effective to enhance and maintain over time.

  12. WholeCellSimDB: a hybrid relational/HDF database for whole-cell model predictions

    PubMed Central

    Karr, Jonathan R.; Phillips, Nolan C.; Covert, Markus W.

    2014-01-01

    Mechanistic ‘whole-cell’ models are needed to develop a complete understanding of cell physiology. However, extracting biological insights from whole-cell models requires running and analyzing large numbers of simulations. We developed WholeCellSimDB, a database for organizing whole-cell simulations. WholeCellSimDB was designed to enable researchers to search simulation metadata to identify simulations for further analysis, and quickly slice and aggregate simulation results data. In addition, WholeCellSimDB enables users to share simulations with the broader research community. The database uses a hybrid relational/hierarchical data format architecture to efficiently store and retrieve both simulation setup metadata and results data. WholeCellSimDB provides a graphical Web-based interface to search, browse, plot and export simulations; a JavaScript Object Notation (JSON) Web service to retrieve data for Web-based visualizations; a command-line interface to deposit simulations; and a Python API to retrieve data for advanced analysis. Overall, we believe WholeCellSimDB will help researchers use whole-cell models to advance basic biological science and bioengineering. Database URL: http://www.wholecellsimdb.org Source code repository URL: http://github.com/CovertLab/WholeCellSimDB PMID:25231498

  13. Web GIS in practice III: creating a simple interactive map of England's Strategic Health Authorities using Google Maps API, Google Earth KML, and MSN Virtual Earth Map Control

    PubMed Central

    Boulos, Maged N Kamel

    2005-01-01

    This eye-opener article aims at introducing the health GIS community to the emerging online consumer geoinformatics services from Google and Microsoft (MSN), and their potential utility in creating custom online interactive health maps. Using the programmable interfaces provided by Google and MSN, we created three interactive demonstrator maps of England's Strategic Health Authorities. These can be browsed online at – Google Maps API (Application Programming Interface) version, – Google Earth KML (Keyhole Markup Language) version, and – MSN Virtual Earth Map Control version. Google and MSN's worldwide distribution of "free" geospatial tools, imagery, and maps is to be commended as a significant step towards the ultimate "wikification" of maps and GIS. A discussion is provided of these emerging online mapping trends, their expected future implications and development directions, and associated individual privacy, national security and copyrights issues. Although ESRI have announced their planned response to Google (and MSN), it remains to be seen how their envisaged plans will materialize and compare to the offerings from Google and MSN, and also how Google and MSN mapping tools will further evolve in the near future. PMID:16176577

  14. Ground System Architectures Workshop GMSEC SERVICES SUITE (GSS): an Agile Development Story

    NASA Technical Reports Server (NTRS)

    Ly, Vuong

    2017-01-01

    The GMSEC (Goddard Mission Services Evolution Center) Services Suite (GSS) is a collection of tools and software services, along with a robust customizable web-based portal, that enables the user to capture, monitor, report, and analyze system-wide GMSEC data. Given our plug-and-play architecture and the need for rapid system development, we opted to follow the Scrum Agile methodology for software development. Being one of the first few projects to implement the Agile methodology at NASA GSFC, in this presentation we will present our approaches, tools, successes, and challenges in implementing this methodology. The GMSEC architecture provides a scalable, extensible ground and flight system for existing and future missions. GMSEC comes with a robust Application Programming Interface (GMSEC API) and a core set of Java-based GMSEC components that facilitate the development of a GMSEC-based ground system. Over the past few years, we have seen an uptick in the number of customers who are moving from a native desktop application environment to a web-based environment, particularly for data monitoring and analysis. We also see a need to provide separation of the business logic from the GUI display for our Java-based components and to consolidate all the GUI displays into one interface. This combination of separation and consolidation brings immediate value to a GMSEC-based ground system through increased ease of data access via a uniform interface, built-in security measures, centralized configuration management, and ease of feature extensibility.

  15. Creating and Sharing Understanding: GEOSS and ArcGIS Online

    NASA Astrophysics Data System (ADS)

    White, C. E.; Hogeweg, M.; Foust, J.

    2014-12-01

    The GEOSS program brokers various forms of earth observation data and information via its online platform, the Discovery and Access Broker (DAB). The platform connects relevant information systems and infrastructures throughout the world. Esri and the National Research Council of Italy Institute of Atmospheric Pollution Research (CNR-IIA) are building two-way technology between the DAB framework and ArcGIS Online using the ArcGIS Online API. Developers will engineer Esri and DAB interfaces and build interoperable web services that connect the two systems. This collaboration makes GEOSS earth observation data and services available to the ArcGIS Online community, and ArcGIS Online a significant part of the GEOSS DAB infrastructure. ArcGIS Online subscribers can discover and access the resources published by GEOSS, use GEOSS data services, and build applications. Making GEOSS content available in ArcGIS Online increases opportunities for scientists in other communities to visualize information in greater context. Moreover, because the platform supports both authoritative and crowd-sourced information, GEOSS members can build networks into other disciplines. This talk will discuss the power of interoperable service architectures that make such a collaboration possible, and the results thus far.

  16. The NERC Vocabulary Server: Version 2.0

    NASA Astrophysics Data System (ADS)

    Leadbetter, A. M.; Lowry, R. K.

    2012-12-01

    The Natural Environment Research Council (NERC) Vocabulary Server (NVS) has been used to publish controlled vocabularies of terms relevant to marine environmental sciences since 2006 (version 0), with version 1 being introduced in 2007. It has been used for:
    - metadata mark-up with verifiable content
    - populating dynamic drop-down lists
    - semantic cross-walk between metadata schemata
    - so-called smart search
    - and the semantic enablement of Open Geospatial Consortium (OGC) Web Processing Services in the NERC Data Grid and the European Commission SeaDataNet, Geo-Seas, and European Marine Observation and Data Network (EMODnet) projects.
    The NVS is based on the Simple Knowledge Organization System (SKOS) model. SKOS is based on the "concept", which it defines as a "unit of thought", that is, an idea or notion such as "oil spill". Following a version change for SKOS in 2009 there was a desire to upgrade the NVS to incorporate the changes. This version of SKOS introduces the ability to aggregate concepts in both collections and schemes. The design of version 2 of the NVS uses both types of aggregation: schemes for the discovery of content through hierarchical thesauri, and collections for the publication and addressing of content. Other desired changes from version 1 of the NVS included:
    - the removal of the potential for multiple identifiers for the same concept, to ensure consistent addressing of concepts
    - the addition of content and technical governance information in the payload documents, to provide an audit trail to users of NVS content
    - the removal of XML snippets from concept definitions, in order to correctly validate XML serializations of the SKOS
    - the addition of the ability to map into external knowledge organization systems, in order to extend the knowledge base
    - a more truly RESTful approach to URL access to the NVS, to make the development of applications on top of the NVS easier
    - and support for multiple human languages, to increase the user base of the NVS.
    Version 2 of the NVS (NVS2.0) underpins the semantic layer for the Open Service Network for Marine Environmental Data (NETMAR) project, funded by the European Commission under the Seventh Framework Programme. Within NETMAR, NVS2.0 has been used for:
    - semantic validation of inputs to chained OGC Web Processing Services
    - smart discovery of data and services
    - integration of data from distributed nodes of the International Coastal Atlas Network.
    Since its deployment, NVS2.0 has been adopted within the European SeaDataNet community's software products, which has significantly increased the usage of the NVS2.0 Application Programming Interface (API), as illustrated in Table 1. Here we present the results of upgrading the NVS to version 2 and show applications which have been built on top of the NVS2.0 API, including a SPARQL endpoint and a hierarchical catalogue of oceanographic hardware.
    [Table 1. NVS2.0 API usage by month from 467 unique IP addresses.]

  17. Open Astronomy Catalogs API

    NASA Astrophysics Data System (ADS)

    Guillochon, James; Cowperthwaite, Philip S.

    2018-05-01

    We announce the public release of the application program interface (API) for the Open Astronomy Catalogs (OACs), the OACAPI. The OACs serve near-complete collections of supernova, tidal disruption, kilonova, and fast stars data (including photometry, spectra, radio, and X-ray observations) via a user-friendly web interface that displays the data interactively and offers full data downloads. The OACAPI, by contrast, enables users to specifically download particular pieces of the OAC dataset via a flexible programmatic syntax, either via URL GET requests, or via a module within the astroquery Python package.
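
    A quick illustration of the URL GET style of access follows; the object/quantity/attribute path layout matches the pattern the OACAPI documentation describes, but the host and field names should be checked against the current docs before relying on them.

        import requests

        # object/quantity/attribute path per the OACAPI docs; verify before use
        url = "https://api.astrocats.space/SN2014J/photometry/time+band+magnitude"

        resp = requests.get(url, timeout=60)
        resp.raise_for_status()
        rows = resp.json()["SN2014J"]["photometry"]
        print(rows[:3])  # first few [time, band, magnitude] entries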

  18. Determining Appropriate Coupling between User Experiences and Earth Science Data Services

    NASA Astrophysics Data System (ADS)

    Moghaddam-Taaheri, E.; Pilone, D.; Newman, D. J.; Mitchell, A. E.; Goff, T. D.; Baynes, K.

    2012-12-01

    NASA's Earth Observing System ClearingHOuse (ECHO) is a format-agnostic metadata repository supporting over 3000 collections and 100M granules. ECHO exposes FTP and RESTful Data Ingest APIs in addition to both SOAP and RESTful search and order capabilities. Built on top of ECHO is a human-facing search and order web application named Reverb. Reverb exposes ECHO's capabilities through an interactive, Web 2.0 application designed around searching for Earth Science data and downloading or ordering data of interest. ECHO and Reverb have supported the concept of Earth Science data services for several years, but only for discovery. Invocation of these services was not a primary capability of the user experience. As more and more Earth Science data moves online and away from the concept of data ordering, progress has been made in making on-demand services available for directly accessed data. These concepts have existed through access mechanisms such as OPeNDAP but are proliferating to accommodate a wider variety of services and service providers. Recently, the EOSDIS Service Interface (ESI) was defined and integrated into the ECS system. The ESI allows data providers to expose a wide variety of service capabilities including reprojection, reformatting, spatial and band subsetting, and resampling. ECHO and Reverb were tasked with making these services available to end-users in a meaningful and usable way that integrated into the existing search and ordering workflow. This presentation discusses the challenges associated with exposing disparate service capabilities while presenting a meaningful and cohesive user experience. Specifically, we'll discuss:
    - Benefits and challenges of tightly coupling the user interface with underlying services
    - Approaches to generic service descriptions
    - Approaches to dynamic user interfaces that better describe service capabilities while minimizing application coupling
    - Challenges associated with traditional WSDL / UDDI style service descriptions
    - Walkthrough of the solution used by ECHO and Reverb to integrate and expose ESI-compliant services to our users

  19. SeaBIRD: A Flexible and Intuitive Planetary Datamining Infrastructure

    NASA Astrophysics Data System (ADS)

    Politi, R.; Capaccioni, F.; Giardino, M.; Fonte, S.; Capria, M. T.; Turrini, D.; De Sanctis, M. C.; Piccioni, G.

    2018-04-01

    Description of SeaBIRD (Searchable and Browsable Infrastructure for Repository of Data), a software and hardware infrastructure for multi-mission planetary datamining, with a web-based GUI and an API set for integration into users' software.

  20. An Approach to Function Annotation for Proteins of Unknown Function (PUFs) in the Transcriptome of Indian Mulberry.

    PubMed

    Dhanyalakshmi, K H; Naika, Mahantesha B N; Sajeevan, R S; Mathew, Oommen K; Shafi, K Mohamed; Sowdhamini, Ramanathan; N Nataraja, Karaba

    2016-01-01

    The modern sequencing technologies are generating large volumes of information at the transcriptome and genome level. Translation of this information into biological meaning lags far behind, due to which a significant portion of the proteins discovered remain proteins of unknown function (PUFs). Attempts to uncover the functional significance of PUFs are limited due to the lack of easy and high-throughput functional annotation tools. Here, we report an approach to assign putative functions to PUFs identified in the transcriptome of mulberry, a perennial tree commonly cultivated as a host of the silkworm. We utilized the mulberry PUFs generated from leaf tissues exposed to drought stress at the whole-plant level. A sequence- and structure-based computational analysis predicted the probable function of the PUFs. For rapid and easy annotation of PUFs, we developed an automated pipeline by integrating diverse bioinformatics tools, designated the PUFs Annotation Server (PUFAS), which also provides a web service API (Application Programming Interface) for large-scale analyses up to a genome. The expression analysis of three selected PUFs annotated by the pipeline revealed abiotic stress responsiveness of the genes, and hence their potential role in stress acclimation pathways. The automated pipeline developed here could be extended to assign functions to PUFs from any organism in general. The PUFAS web server is available at http://caps.ncbs.res.in/pufas/ and the web service is accessible at http://capservices.ncbs.res.in/help/pufas.

  1. uPy: a ubiquitous CG Python API with biological-modeling applications.

    PubMed

    Autin, Ludovic; Johnson, Graham; Hake, Johan; Olson, Arthur; Sanner, Michel

    2012-01-01

    The uPy Python extension module provides a uniform abstraction of the APIs of several 3D computer graphics programs (called hosts), including Blender, Maya, Cinema 4D, and DejaVu. A plug-in written with uPy can run in all uPy-supported hosts. Using uPy, researchers have created complex plug-ins for molecular and cellular modeling and visualization. uPy can simplify programming for many types of projects (not solely science applications) intended for multihost distribution. It's available at http://upy.scripps.edu. The first featured Web extra is a video that shows interactive analysis of a calcium dynamics simulation. YouTube URL: http://youtu.be/wvs-nWE6ypo. The second featured Web extra is a video that shows rotation of the HIV virus. YouTube URL: http://youtu.be/vEOybMaRoKc.

  2. Extensible Probabilistic Repository Technology (XPRT)

    DTIC Science & Technology

    2004-10-01

    ...projects, such as Centaurus and the Evidence Data Base (EDB), etc.; others were fabricated, such as INS and FED, while others contain data from the open... [Flattened table residue; recoverable rows: Google Web Report | unlimited | SOAP API; News: BBC News | unlimited | Web RSS 1.0; Centaurus Person Demographics | 204,402 people from 240 countries] ...objects of the domain ontology map to the various simulated data-sources. For example, the PersonDemographics are stored in the Centaurus database, while...

  3. The Geant4 physics validation repository

    NASA Astrophysics Data System (ADS)

    Wenzel, H.; Yarba, J.; Dotti, A.

    2015-12-01

    The Geant4 collaboration regularly performs validation and regression tests. The results are stored in a central repository and can be easily accessed via a web application. In this article we describe the Geant4 physics validation repository, which consists of a relational database storing experimental data and Geant4 test results, a Java API, and a web application. The functionality of these components and the technology choices we made are also described.

  4. Application of Microsoft's ActiveX and DirectX technologies to the visualization of physical system dynamics

    NASA Astrophysics Data System (ADS)

    Mann, Christopher; Narasimhamurthi, Natarajan

    1998-08-01

    This paper discusses a specific implementation of a web and component based simulation system. The overall simulation container is implemented within a web page viewed with Microsoft's Internet Explorer 4.0 web browser. Microsoft's ActiveX/Distributed Component Object Model object interfaces are used in conjunction with the Microsoft DirectX graphics APIs to provide visualization functionality for the simulation. The MathWorks' Matlab computer-aided control system design program is used as an ActiveX automation server to provide the compute engine for the simulations.

  5. Expanding the use of Scientific Data through Maps and Apps

    NASA Astrophysics Data System (ADS)

    Shrestha, S. R.; Zimble, D. A.; Herring, D.; Halpert, M.

    2014-12-01

    The importance of making scientific data more available can't be overstated. There is a wealth of useful scientific data available and demand for this data is only increasing; however, applying scientific data towards practical uses poses several technical challenges. These challenges can arise from difficulty in handling the data, due largely to 1) the complexity, variety and volume of scientific data and 2) applying and operating the techniques and tools needed to visualize and analyze the data. As a result, the combined knowledge required to take advantage of these data demands highly specialized skill sets that, in total, limit the ability of scientific data to be used in more practical day-to-day decision making activities. While these challenges are daunting, information technologies do exist that can help mitigate some of these issues. Many organizations for years have already been enjoying the benefits of modern service oriented architectures (SOAs) for everyday enterprise tasks. We can use this approach to modernize how we share and access our scientific data, where much of the specialized tooling needed to handle and present scientific data can be automated and executed by servers, and done so in an appropriate way. We will discuss and show an approach for preparing file-based scientific data (e.g. GRIB, netCDF) for use in standards-based scientific web services. These scientific web services are able to encapsulate the logic needed to handle and describe scientific data through a variety of service types including image, map, feature, and geoprocessing services and their respective service methods. By combining these types of services and leveraging well-documented and modern web development APIs, we can afford to focus our attention on the design and development of user-friendly maps and apps. Our scenario will include developing online maps through these services by integrating various forecast data from the Climate Forecast System (CFSv2). This presentation showcases a collaboration between the National Oceanic and Atmospheric Administration's (NOAA) Climate.gov portal, Climate Prediction Center and Esri, Inc. on the implementation of the ArcGIS platform, which is aimed at helping modernize scientific data access through a service oriented architecture.

  6. Data Services in Support of High Performance Computing-Based Distributed Hydrologic Models

    NASA Astrophysics Data System (ADS)

    Tarboton, D. G.; Horsburgh, J. S.; Dash, P. K.; Gichamo, T.; Yildirim, A. A.; Jones, N.

    2014-12-01

    We have developed web-based data services to support the application of hydrologic models on High Performance Computing (HPC) systems. The purpose of these services is to give hydrologic researchers, modelers, water managers, and users access to HPC resources without requiring them to become HPC experts or to understand the intrinsic complexities of the data services, and so to reduce the time and effort spent finding and organizing the data required to execute hydrologic models and data preprocessing tools on HPC systems. These services address some of the data challenges faced by hydrologic models that strive to take advantage of HPC. Input data are often not in the form required by such models, forcing researchers to spend time and effort on data preparation and preprocessing that inhibits or limits the application of these models. Another limitation is the difficult-to-use batch job control and queuing systems of HPC systems. We have developed a REST-based gateway application programming interface (API) for authenticated access to HPC systems that abstracts away many of the details that are barriers to HPC use and enhances accessibility from desktop programming and scripting languages such as Python and R. We have used this gateway API to establish software services that support the delineation of watersheds to define a modeling domain, and then extract terrain and land use information to automatically configure the inputs required for hydrologic models. These services support the Terrain Analysis Using Digital Elevation Models (TauDEM) tools for watershed delineation and the generation of hydrology-based terrain information such as wetness index and stream networks. They also support the derivation of inputs for the Utah Energy Balance snowmelt model, used to address questions such as how climate, land cover and land use change may affect snowmelt inputs to runoff generation. To enhance access to the time-varying climate data used to drive hydrologic models, we have developed services to downscale and re-grid nationally available climate analysis data from systems such as NLDAS and MERRA. These cases serve as examples of how this approach can be extended to other models to enhance the use of HPC for hydrologic modeling.
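
    A minimal sketch of what interacting with such a REST gateway might look like from Python is shown below. The gateway URL, endpoint paths, and job fields are assumptions for illustration, not the project's actual API.

    ```python
    import requests

    GATEWAY = "https://example.org/hpcgateway/api"  # hypothetical gateway URL
    TOKEN = "my-access-token"   # obtained from the gateway's authentication step
    HEADERS = {"Authorization": f"Bearer {TOKEN}"}

    # Submit a watershed-delineation job; the tool name and parameters are
    # illustrative placeholders.
    job = requests.post(
        f"{GATEWAY}/jobs",
        json={"tool": "taudem-delineate", "outlet": [-111.5, 41.7], "dem": "ned30m"},
        headers=HEADERS,
        timeout=60,
    ).json()

    # The gateway hides the batch queue: the client simply polls job status.
    status = requests.get(
        f"{GATEWAY}/jobs/{job['id']}", headers=HEADERS, timeout=60
    ).json()
    print(status)
    ```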

  7. An open source web interface for linking models to infrastructure system databases

    NASA Astrophysics Data System (ADS)

    Knox, S.; Mohamed, K.; Harou, J. J.; Rheinheimer, D. E.; Medellin-Azuara, J.; Meier, P.; Tilmant, A.; Rosenberg, D. E.

    2016-12-01

    Models of networked engineered resource systems, such as water or energy systems, are often built collaboratively by developers from different domains working at different locations. These models can be linked to large-scale real-world databases, and they are constantly being improved and extended. As the development and application of these models becomes more sophisticated, and the computing power required for simulations and/or optimisations increases, so too has the need grown for online services and tools that enable the efficient development and deployment of these models. Hydra Platform is an open source, web-based data management system that allows modellers of network-based models to remotely store network topology and associated data in a generalised manner, allowing it to serve multiple disciplines. Hydra Platform exposes a JSON-based web API that allows external programs (referred to as `Apps') to interact with its stored networks and perform actions such as importing data, running models, or exporting the networks to different formats. Hydra Platform supports multiple users accessing the same network and has a suite of functions for managing users and data. We present ongoing development in Hydra Platform, the Hydra Web User Interface, through which users can collaboratively manage network data and models in a web browser. The web interface allows multiple users to graphically access, edit and share their networks, run apps and view results. Through apps, which are located on the server, the web interface can give users access to external data sources and models without the need to install or configure any software. This also ensures model results can be reproduced by removing platform or version dependence. Managing data and deploying models via the web interface provides a way for multiple modellers to collaboratively manage data, deploy and monitor model runs, and analyse results.
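
    To make the App model concrete, the hedged sketch below shows the general shape of a JSON-over-HTTP exchange with a Hydra Platform server from Python. The endpoint paths, function names, and payloads are illustrative assumptions; the actual function set is defined in the Hydra Platform documentation.

    ```python
    import requests

    HYDRA = "https://example.org/hydra"  # hypothetical Hydra Platform server
    session = requests.Session()

    # Authenticate; field names here are placeholders.
    session.post(f"{HYDRA}/login", json={"username": "demo", "password": "demo"})

    # Fetch a stored network's topology (nodes, links, attached data) so an
    # external App can run a model against it. The request shape is assumed.
    network = session.post(
        f"{HYDRA}/json", json={"get_network": {"network_id": 42}}
    ).json()

    print(len(network.get("nodes", [])), "nodes retrieved")
    ```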

  8. Brandenburg 3D - a comprehensive 3D Subsurface Model, Conception of an Infrastructure Node and a Web Application

    NASA Astrophysics Data System (ADS)

    Kerschke, Dorit; Schilling, Maik; Simon, Andreas; Wächter, Joachim

    2014-05-01

    The Energiewende and the increasing scarcity of raw materials will lead to an intensified utilization of the subsurface in Germany. Within this context, geological 3D modeling is a fundamental approach for integrated decision and planning processes. Initiated by the development of the European geospatial infrastructure INSPIRE, the German State Geological Offices started digitizing their predominantly analog archive inventory. Until now, a comprehensive 3D subsurface model of Brandenburg did not exist. The project B3D therefore strove to develop a new 3D model as well as a subsequent infrastructure node to integrate all geological and spatial data within the Geodaten-Infrastruktur Brandenburg (Geospatial Infrastructure, GDI-BB) and provide it to the public through an interactive 2D/3D web application. The functionality of the web application is based on a client-server architecture. On the server side, all available spatial data are published through GeoServer. GeoServer is designed for interoperability and acts as the reference implementation of the Open Geospatial Consortium (OGC) Web Feature Service (WFS) standard, which provides the interface for requesting geographical features. In addition, GeoServer implements, among others, a high-performance, certified-compliant Web Map Service (WMS) that serves geo-referenced map images. For publishing 3D data, the OGC Web 3D Service (W3DS), a portrayal service for three-dimensional geo-data, is used. The W3DS displays elements representing the geometry, appearance, and behavior of geographic objects. On the client side, the web application is based solely on Free and Open Source Software and relies on WebGL, a JavaScript API that allows GPU-accelerated interactive rendering of 2D and 3D graphics, including physics and image processing, within the web page canvas without the use of plug-ins. WebGL is supported by most web browsers (e.g., Google Chrome, Mozilla Firefox, Safari, and Opera). The web application enables intuitive navigation through all available information and allows the visualization of geological maps (2D), seismic transects (2D/3D), wells (2D/3D), and the 3D model. These achievements will ease spatial and geological data management within the German State Geological Offices and foster the interoperability of heterogeneous systems. The project will provide guidance for systematic subsurface management across system, domain and administrative boundaries on the basis of a federated spatial data infrastructure, and include the public in decision processes (e-Governance). Yet the interoperability of the systems must be strongly propelled forward through agreements on standards, to be decided upon in the responsible committees. The project B3D is funded with resources from the European Fund for Regional Development (EFRE).
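
    For example, a client (or a script preparing figures) can fetch a rendered geological map from the GeoServer WMS with a standard GetMap request. The endpoint and layer name below are placeholders; only the OGC parameter conventions are standard.

    ```python
    import requests

    WMS = "https://example.org/geoserver/wms"  # placeholder WMS endpoint

    params = {
        "service": "WMS",
        "version": "1.3.0",
        "request": "GetMap",
        "layers": "b3d:geological_map",  # hypothetical layer name
        "styles": "",                    # default styling
        "crs": "EPSG:4326",
        "bbox": "51.3,11.2,53.6,14.8",   # lat/lon axis order in WMS 1.3.0
        "width": "1024",
        "height": "768",
        "format": "image/png",
    }

    resp = requests.get(WMS, params=params, timeout=60)
    resp.raise_for_status()
    with open("geological_map.png", "wb") as fh:
        fh.write(resp.content)
    ```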

  9. Synergistic effects of non-Apis bees and honey bees for pollination services

    PubMed Central

    Brittain, Claire; Williams, Neal; Kremen, Claire; Klein, Alexandra-Maria

    2013-01-01

    In diverse pollinator communities, interspecific interactions may modify the behaviour and increase the pollination effectiveness of individual species. Because agricultural production reliant on pollination is growing, improving pollination effectiveness could increase crop yield without any increase in agricultural intensity or area. In California almond, a crop highly dependent on honey bee pollination, we explored the foraging behaviour and pollination effectiveness of honey bees in orchards with simple (honey bee only) and diverse (non-Apis bees present) bee communities. In orchards with non-Apis bees, the foraging behaviour of honey bees changed and the pollination effectiveness of a single honey bee visit was greater than in orchards where non-Apis bees were absent. This change translated to a greater proportion of fruit set in these orchards. Our field experiments show that increased pollinator diversity can synergistically increase pollination service, through species interactions that alter the behaviour and resulting functional quality of a dominant pollinator species. These results of functional synergy between species were supported by an additional controlled cage experiment with Osmia lignaria and Apis mellifera. Our findings highlight a largely unexplored facilitative component of the benefit of biodiversity to ecosystem services, and represent a way to improve pollinator-dependent crop yields in a sustainable manner. PMID:23303545

  10. GIS Technologies For The New Planetary Science Archive (PSA)

    NASA Astrophysics Data System (ADS)

    Docasal, R.; Barbarisi, I.; Rios, C.; Macfarlane, A. J.; Gonzalez, J.; Arviset, C.; De Marchi, G.; Martinez, S.; Grotheer, E.; Lim, T.; Besse, S.; Heather, D.; Fraga, D.; Barthelemy, M.

    2015-12-01

    Geographic information systems (GIS) are increasingly used in planetary science. GIS are computerised systems for the storage, retrieval, manipulation, analysis, and display of geographically referenced data. Some data stored in the Planetary Science Archive (PSA), for instance a set of Mars Express/Venus Express data, have spatial metadata associated with them. To help users handle and visualise spatial data in GIS applications, the new PSA should support interoperability with interfaces implementing the standards approved by the Open Geospatial Consortium (OGC). These standards are followed in order to develop open interfaces and encodings that allow data to be exchanged with GIS client applications, well-known examples of which are Google Earth and NASA World Wind, as well as open source tools such as OpenLayers. The technology to store searchable geometrical data already exists within PostgreSQL databases in the form of the PostGIS extension. GeoServer, an existing open source map server of which an instance has been deployed for the new PSA, uses the OGC standards to allow, among other things, the sharing, processing and editing of spatial data through the Web Feature Service (WFS) standard, as well as the serving of georeferenced map images through the Web Map Service (WMS). The final goal of the new PSA, being developed by the European Space Astronomy Centre (ESAC) Science Data Centre (ESDC), is to create an archive that enables science exploitation of ESA's planetary mission datasets. This can be facilitated through the GIS framework, offering interfaces (both a web GUI and scriptable APIs) that can be used more easily and scientifically by the community, and that will also enable the community to build added-value services on top of the PSA.
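
    The WFS interface mentioned above is what makes the archive scriptable. As a hedged sketch, a GetFeature request against a GeoServer WFS can return observation footprints as GeoJSON; the endpoint and feature type below are placeholders.

    ```python
    import requests

    WFS = "https://example.org/psa/geoserver/wfs"  # placeholder endpoint

    params = {
        "service": "WFS",
        "version": "2.0.0",
        "request": "GetFeature",
        "typeNames": "psa:mex_footprints",   # hypothetical feature type
        "outputFormat": "application/json",  # GeoServer can emit GeoJSON
        "count": "10",
    }

    collection = requests.get(WFS, params=params, timeout=60).json()
    for feature in collection["features"]:
        print(feature["id"], feature["properties"])
    ```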

  11. From Analysis to Impact: Challenges and Outcomes from Google's Cloud-based Platforms for Analyzing and Leveraging Petapixels of Geospatial Data

    NASA Astrophysics Data System (ADS)

    Thau, D.

    2017-12-01

    For the past seven years, Google has made petabytes of Earth observation data, and the tools to analyze them, freely available to researchers around the world via cloud computing. These data and tools were initially available via Google Earth Engine and are increasingly available on the Google Cloud Platform. We have introduced a number of APIs for both the analysis and the presentation of geospatial data that have been used to create impactful datasets and web applications, including studies of global surface water availability, global tree cover change, and crop yield estimation. Each of these projects used the cloud to analyze thousands to millions of Landsat scenes. The APIs support a range of publishing options, from outputting imagery and data for inclusion in papers, to providing tools for full-scale web applications that offer analysis tools of their own. Over the course of developing these tools, we have learned a number of lessons about how to build a publicly available cloud platform for geospatial analysis, and about how the characteristics of an API can affect the kinds of impacts a platform can enable. This study will present an overview of how Google Earth Engine works and how Google's geospatial capabilities are extending to the Google Cloud Platform. We will provide a number of case studies describing how these platforms, and the data they host, have been leveraged to build impactful decision support tools used by governments, researchers, and other institutions, and we will describe how the available APIs have shaped (or constrained) those tools. [Image Credit: Tyler A. Erickson]
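
    The flavor of the Earth Engine API is worth showing: analyses are expressed client-side but executed in the cloud. The snippet below uses the public Earth Engine Python API; the region and dates are arbitrary, and credentials are assumed to be configured already.

    ```python
    import ee

    ee.Initialize()  # assumes Earth Engine credentials are already set up

    # An arbitrary region of interest (lon/lat rectangle).
    region = ee.Geometry.Rectangle([-122.6, 37.0, -121.8, 37.9])

    # Filter the Landsat 8 collection server-side; no pixels move yet.
    collection = (
        ee.ImageCollection("LANDSAT/LC08/C02/T1_L2")
        .filterBounds(region)
        .filterDate("2020-01-01", "2020-12-31")
    )

    print("Scenes found:", collection.size().getInfo())

    # Collection-scale reductions (e.g., a median composite across all
    # matching scenes) are expressed the same way and run in the cloud.
    composite = collection.median().clip(region)
    ```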

  12. caCORE: a common infrastructure for cancer informatics.

    PubMed

    Covitz, Peter A; Hartel, Frank; Schaefer, Carl; De Coronado, Sherri; Fragoso, Gilberto; Sahni, Himanso; Gustafson, Scott; Buetow, Kenneth H

    2003-12-12

    Sites with substantive bioinformatics operations are challenged to build data processing and delivery infrastructure that provides reliable access and enables data integration. Locally generated data must be processed and stored such that relationships to external data sources can be presented. Consistency and comparability across data sets require annotation with controlled vocabularies and, further, metadata standards for data representation. Programmatic access to the processed data should be supported to ensure that the maximum possible value is extracted. Confronted with these challenges at the National Cancer Institute Center for Bioinformatics, we decided to develop a robust infrastructure for data management and integration that supports advanced biomedical applications. We have developed an interconnected set of software and services called caCORE. Enterprise Vocabulary Services (EVS) provide controlled vocabulary, dictionary and thesaurus services. The Cancer Data Standards Repository (caDSR) provides a metadata registry for common data elements. Cancer Bioinformatics Infrastructure Objects (caBIO) implements an object-oriented model of the biomedical domain and provides Java, Simple Object Access Protocol and HTTP-XML application programming interfaces. caCORE has been used to develop scientific applications that bring together data from distinct genomic and clinical science sources. caCORE downloads and web interfaces can be accessed from links on the caCORE web site (http://ncicb.nci.nih.gov/core). caBIO software is distributed under an open source license that permits unrestricted academic and commercial use. Vocabulary and metadata content in the EVS and caDSR, respectively, is similarly unrestricted, and is available through web applications and FTP downloads. http://ncicb.nci.nih.gov/core/publications contains links to the caBIO 1.0 class diagram and the caCORE 1.0 Technical Guide, which provide detailed information on the present caCORE architecture, data sources and APIs. Updated information appears on a regular basis on the caCORE web site (http://ncicb.nci.nih.gov/core).

  13. A Knowledge Portal and Collaboration Environment for the Earth Sciences

    NASA Astrophysics Data System (ADS)

    D'Agnese, F. A.

    2008-12-01

    Earth Knowledge is developing a web-based 'Knowledge Portal and Collaboration Environment' that will serve as the information-technology-based foundation of a modular Internet-based Earth-Systems Monitoring, Analysis, and Management Tool. This 'Knowledge Portal' is essentially a 'mash-up' of web-based and client-based tools and services that support on-line collaboration, community discussion, and broad public dissemination of earth and environmental science information in a wide-area distributed network. In contrast to specialized knowledge-management or geographic-information systems developed for long-term and incremental scientific analysis, this system will exploit familiar software tools using industry standard protocols, formats, and APIs to discover, process, fuse, and visualize existing environmental datasets using Google Earth and Google Maps. An early form of these tools and services is being used by Earth Knowledge to facilitate the investigations and conversations of scientists, resource managers, and citizen-stakeholders addressing water resource sustainability issues in the Great Basin region of the desert southwestern United States. These ongoing projects will serve as use cases for the further development of this information-technology infrastructure. This 'Knowledge Portal' will accelerate the deployment of Earth-system data and information into an operational knowledge management system that may be used by decision-makers concerned with stewardship of water resources in the American Desert Southwest.

  14. Providing Access to a Diverse Set of Global Reanalysis Dataset Collections

    NASA Astrophysics Data System (ADS)

    Schuster, D.; Worley, S. J.

    2015-12-01

    The National Center for Atmospheric Research (NCAR) Research Data Archive (RDA, http://rda.ucar.edu) provides open access to a variety of global reanalysis dataset collections to support atmospheric and related sciences research worldwide. These include products from the European Centre for Medium-Range Weather Forecasts (ECMWF), Japan Meteorological Agency (JMA), National Centers for Environmental Prediction (NCEP), National Oceanic and Atmospheric Administration (NOAA), and NCAR. All RDA-hosted reanalysis collections are freely accessible to registered users through a variety of methods. Standard access methods include traditional browser and scripted HTTP file download. Enhanced downloads are available through the Globus GridFTP "fire and forget" data transfer service, which provides an efficient, reliable, and preferred alternative to traditional HTTP-based methods. For those who favor interoperable access using compatible tools, the Unidata THREDDS Data Server provides remote access to complete reanalysis collections via virtual dataset aggregation "files". Finally, users can request data subsets and format conversions to be prepared for them through web interface form requests or web service API batch requests. This approach uses NCAR HPC and central file systems to effectively prepare products from the high-resolution and very large reanalysis archives. The presentation will include a detailed inventory of all RDA reanalysis dataset collection holdings, and highlight access capabilities to these collections through use case examples.
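
    For instance, the THREDDS aggregations can be read remotely over OPeNDAP with the standard netCDF4 Python library (assuming it was built with DAP support); only the requested slice crosses the network. The URL and variable name below are placeholders for entries in the RDA catalog.

    ```python
    from netCDF4 import Dataset

    # Placeholder OPeNDAP endpoint for an RDA-hosted reanalysis aggregation.
    URL = "https://rda.ucar.edu/thredds/dodsC/aggregations/example_reanalysis"

    ds = Dataset(URL)             # opens the remote aggregation lazily
    t2m = ds.variables["t2m"]     # hypothetical 2 m temperature variable
    print(t2m.shape)

    # Server-side subsetting: only this slab is transferred.
    slab = t2m[0, :10, :10]
    ds.close()
    ```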

  15. Participating in the Geospatial Web: Collaborative Mapping, Social Networks and Participatory GIS

    NASA Astrophysics Data System (ADS)

    Rouse, L. Jesse; Bergeron, Susan J.; Harris, Trevor M.

    In 2005, Google, Microsoft and Yahoo! released free Web mapping applications that opened up digital mapping to mainstream Internet users. Importantly, these companies also released free APIs for their platforms, allowing users to geo-locate and map their own data. These initiatives have spurred the growth of the Geospatial Web and represent spatially aware online communities and new ways of enabling communities to share information from the bottom up. This chapter explores how the emerging Geospatial Web can meet some of the fundamental needs of Participatory GIS projects to incorporate local knowledge into GIS, as well as promote public access and collaborative mapping.

  16. QMachine: commodity supercomputing in web browsers

    PubMed Central

    2014-01-01

    Background Ongoing advancements in cloud computing provide novel opportunities in scientific computing, especially for distributed workflows. Modern web browsers can now be used as high-performance workstations for querying, processing, and visualizing genomics “Big Data” from sources like The Cancer Genome Atlas (TCGA) and the International Cancer Genome Consortium (ICGC) without local software installation or configuration. The design of QMachine (QM) was driven by the opportunity to use this pervasive computing model in the context of the Web of Linked Data in Biomedicine. Results QM is an open-source, publicly available web service that acts as a messaging system for posting tasks and retrieving results over HTTP. The illustrative application described here distributes the analyses of 20 Streptococcus pneumoniae genomes for shared suffixes. Because all analytical and data retrieval tasks are executed by volunteer machines, few server resources are required. Any modern web browser can submit those tasks and/or volunteer to execute them without installing any extra plugins or programs. A client library provides high-level distribution templates including MapReduce. This stark departure from the current reliance on expensive server hardware running “download and install” software has already gathered substantial community interest, as QM received more than 2.2 million API calls from 87 countries in 12 months. Conclusions QM was found adequate to deliver the sort of scalable bioinformatics solutions that computation- and data-intensive workflows require. Paradoxically, the sandboxed execution of code by web browsers was also found to enable them, as compute nodes, to address critical privacy concerns that characterize biomedical environments. PMID:24913605

  17. Preparing WIND for the STEREO Mission

    NASA Astrophysics Data System (ADS)

    Schroeder, P.; Ogilve, K.; Szabo, A.; Lin, R.; Luhmann, J.

    2006-05-01

    The upcoming STEREO mission's IMPACT and PLASTIC investigations will provide the first opportunity for long duration, detailed observations of 1 AU magnetic field structures, plasma ions and electrons, suprathermal electrons, and energetic particles at points bracketing Earth's heliospheric location. Stereoscopic/3D information from the STEREO SECCHI imagers and SWAVES radio experiment will make it possible to use both multipoint and quadrature studies to connect interplanetary Coronal Mass Ejections (ICME) and solar wind structures to CMEs and coronal holes observed at the Sun. To fully exploit these unique data sets, tight integration with similarly equipped missions at L1 will be essential, particularly WIND and ACE. The STEREO mission is building novel data analysis tools to take advantage of the mission's scientific potential. These tools will require reliable access and a well-documented interface to the L1 data sets. Such an interface already exists for ACE through the ACE Science Center. We plan to provide a similar service for the WIND mission that will supplement existing CDAWeb services. Building on tools also being developed for STEREO, we will create a SOAP application programming interface (API) which will allow both our STEREO/WIND/ACE interactive browser and third-party software to access WIND data as a seamless and integral part of the STEREO mission. The API will also allow for more advanced forms of data mining than currently available through other data web services. Access will be provided to WIND-specific data analysis software as well. The development of cross-spacecraft data analysis tools will allow a larger scientific community to combine STEREO's unique in-situ data with those of other missions, particularly the L1 missions, and, therefore, to maximize STEREO's scientific potential in gaining a greater understanding of the heliosphere.
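
    A SOAP API of this kind would typically be consumed through its WSDL description. As a hedged illustration (the service was still planned at the time, so the WSDL location and operation below are invented), a Python client using the zeep library might look like this:

    ```python
    from zeep import Client

    # Hypothetical WSDL for the planned WIND data service.
    client = Client("https://example.org/wind/api?wsdl")

    # Print the services and operations advertised in the WSDL.
    client.wsdl.dump()

    # Hypothetical call fetching a magnetic-field time series.
    result = client.service.GetTimeSeries(
        dataset="WIND_MFI",
        start="2006-01-01T00:00Z",
        stop="2006-01-02T00:00Z",
    )
    ```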

  18. Visualization of Vgi Data Through the New NASA Web World Wind Virtual Globe

    NASA Astrophysics Data System (ADS)

    Brovelli, M. A.; Kilsedar, C. E.; Zamboni, G.

    2016-06-01

    GeoWeb 2.0, laying the foundations of Volunteered Geographic Information (VGI) systems, has led to platforms where users can contribute to geographic knowledge that is open to access. Moreover, as a result of advancements in 3D visualization, virtual globes able to visualize geographic data even in browsers have emerged. However, the integration of VGI systems and virtual globes has not been fully realized. The study presented aims to visualize volunteered data in 3D, while also considering ease of use for the general public, using Free and Open Source Software (FOSS). NASA's new Application Programming Interface (API), Web World Wind, written in JavaScript and based on the Web Graphics Library (WebGL), is cross-platform and cross-browser, so a virtual globe created with this API is accessible through any WebGL-supported browser on different operating systems and devices. As a result, no installation or configuration is required on the client side, making the collected data more usable; this is not the case with World Wind for Java, which requires installation and configuration of the Java Virtual Machine (JVM). Furthermore, the data collected through various VGI platforms might be in different formats, stored in a traditional relational database or in a NoSQL database. The project developed aims to visualize and query data collected through the Open Data Kit (ODK) platform and a cross-platform application, where data are stored in a relational PostgreSQL database and a NoSQL CouchDB database, respectively.

  19. The Geant4 physics validation repository

    DOE PAGES

    Wenzel, H.; Yarba, J.; Dotti, A.

    2015-12-23

    The Geant4 collaboration regularly performs validation and regression tests. The results are stored in a central repository and can be easily accessed via a web application. In this article we describe the Geant4 physics validation repository, which consists of a relational database storing experimental data and Geant4 test results, a Java API and a web application. Lastly, the functionality of these components and the technology choices we made are also described.

  20. The NASA Reanalysis Ensemble Service - Advanced Capabilities for Integrated Reanalysis Access and Intercomparison

    NASA Astrophysics Data System (ADS)

    Tamkin, G.; Schnase, J. L.; Duffy, D.; Li, J.; Strong, S.; Thompson, J. H.

    2017-12-01

    NASA's efforts to advance climate analytics-as-a-service are making new capabilities available to the research community: (1) a full-featured Reanalysis Ensemble Service (RES) comprising monthly means data from multiple reanalysis data sets, accessible through an enhanced set of extraction, analytic, arithmetic, and intercomparison operations, which are exposed through NASA's climate data analytics web services and our client-side Climate Data Services Python library, CDSlib; (2) a cloud-based, high-performance Virtual Real-Time Analytics Testbed supporting a select set of climate variables, a near-real-time capability that enables advanced technologies like Spark and Hadoop-based MapReduce analytics over native NetCDF files; and (3) a WPS-compliant web service interface to our climate data analytics service that will enable greater interoperability with next-generation systems such as ESGF. The Reanalysis Ensemble Service includes the following:
    - A new API that supports full temporal, spatial, and grid-based resolution services, with sample queries
    - A Docker-ready RES application to deploy across platforms
    - Extended capabilities that enable single- and multiple-reanalysis area averages, vertical averages, re-gridding, standard deviations, and ensemble averages
    - Convenient, one-stop shopping for commonly used data products from multiple reanalyses, including basic sub-setting and arithmetic operations (e.g., avg, sum, max, min, var, count, anomaly)
    - Full support for the MERRA-2 reanalysis dataset in addition to ECMWF ERA-Interim, NCEP CFSR, JMA JRA-55 and NOAA/ESRL 20CR…
    - A Jupyter notebook-based distribution mechanism designed for client use cases that combines CDSlib documentation with interactive scenarios and personalized project management
    - Supporting analytic services for NASA GMAO Forward Processing datasets
    - Basic uncertainty quantification services that combine heterogeneous ensemble products with comparative observational products (e.g., reanalysis, observational, visualization)
    - The ability to compute and visualize multiple reanalyses for ease of intercomparison
    - Automated tools to retrieve and prepare data collections for analytic processing

  1. PrismTech Data Distribution Service Java API Evaluation

    NASA Technical Reports Server (NTRS)

    Riggs, Cortney

    2008-01-01

    My internship duties with Launch Control Systems required me to begin performance testing of PrismTech Limited's implementation of the Object Management Group (OMG) Data Distribution Service (DDS) specification through its Java application programming interface (API). DDS is a networking middleware for real-time data distribution. The performance testing involves latency, redundant publishers, extended duration, redundant failover, and read performance. Time constraints allowed only a data throughput test, but the test applications are designed to perform all of the performance tests when time allows. Performance evaluation data such as megabits per second and central processing unit (CPU) time consumption were not easily attainable through the Java programming language; they required new methods and classes created in the test applications. Evaluation of this product showed the rate at which data can be sent across the network. Performance rates are better on Linux platforms than on AIX and Sun platforms. Compared with the earlier C++ API, the evaluation also shows the language differences for the implementation: the Java API of the DDS has lower throughput performance than the C++ API.

  2. ODM2 Admin Pilot Project- a Data Management Application for Observations of the Critical Zone.

    NASA Astrophysics Data System (ADS)

    Leon, M.; McDowell, W. H.; Mayorga, E.; Setiawan, L.; Hooper, R. P.

    2017-12-01

    ODM2 Admin is a tool to manage data stored in a relational database using the Observation Data Model 2 (ODM2) information model. Originally developed by the Luquillo Critical Zone Observatory (CZO) to manage a wide range of Earth observations, it has now been deployed for six projects: the Catalina Jemez CZO, the Dry Creek Experimental Forest, the Au Sable and Manistee River sites managed by Michigan State, the Tropical Response to Altered Climate Experiment (TRACE) and the Critical Zone Integrative Microbial Ecology Activity (CZIMEA) EarthCube project; most of these deployments are hosted on a Microsoft Azure cloud server managed by CUAHSI. ODM2 Admin is a web application built on the open-source Python Django framework and available for download from GitHub and DockerHub. It provides tools for data ingestion, editing, QA/QC, data visualization, browsing, mapping and documentation of equipment deployments, methods, and citations. Additional features include the ability to generate derived data values, automatically or manually create data annotations, and create datasets from arbitrary groupings of results. Over 22 million time series values for more than 600 time series are being managed with ODM2 Admin across the six projects, as well as more than 12,000 soil profiles and other measurements. ODM2 Admin links with external identifier systems through DOIs, ORCiDs and IGSNs, so cited works, details about researchers and earth sample metadata can be accessed directly from ODM2 Admin. This application is part of a growing open source ODM2 application ecosystem under active development. ODM2 Admin can be deployed alongside other tools from the ODM2 ecosystem, including ODM2API and WOFpy, which provide access to the underlying ODM2 data through a Python API and WaterOneFlow web services.

  3. Analyzing CRISM hyperspectral imagery using PlanetServer.

    NASA Astrophysics Data System (ADS)

    Figuera, Ramiro Marco; Pham Huu, Bang; Minin, Mikhail; Flahaut, Jessica; Halder, Anik; Rossi, Angelo Pio

    2017-04-01

    Mineral characterization of planetary surfaces bears great importance for space exploration, and orbital hyperspectral imagery is widely used to perform it. In our research we use Compact Reconnaissance Imaging Spectrometer for Mars (CRISM) [1] TRDR L observations with a spectral range of 1 to 4 µm. PlanetServer comprises a server, a web client and a Python client/API. The server side uses the Array DataBase Management System (DBMS) Raster Data Manager (Rasdaman) Community Edition [2]. OGC standards such as the Web Coverage Processing Service (WCPS) [3], an SQL-like language capable of querying information along the image cube, are implemented in the PetaScope component [4]. The client side uses NASA's Web World Wind [5], allowing the user to access the data in an intuitive way. The client consists of a globe where all cubes are deployed, a main menu where projections, base maps and RGB combinations are provided, and a plot dock where the spectral information is shown. The RGB combinator tool allows band combinations, such as the CRISM products [6], to be composed using WCPS. The spectral information is retrieved using WCPS and shown in the plot dock/widget. The USGS splib06a library [7] is available for comparing CRISM spectra against laboratory spectra. The Python API provides an environment to create RGB combinations that can be embedded into existing pipelines. All employed libraries and tools are open source and can be easily adapted to other datasets. PlanetServer stands as a promising tool for spectral analysis on planetary bodies. M3/Moon and OMEGA datasets will soon be available. [1] S. Murchie et al., "Compact Reconnaissance Imaging Spectrometer for Mars (CRISM) on Mars Reconnaissance Orbiter (MRO)," J. Geophys. Res. E Planets, 2007. [2] P. Baumann, A. Dehmel, P. Furtado, R. Ritsch, and N. Widmann, "The multidimensional database system RasDaMan," ACM SIGMOD Rec., vol. 27, no. 2, pp. 575-577, Jun. 1998. [3] P. Baumann, "The OGC web coverage processing service (WCPS) standard," Geoinformatica, vol. 14, no. 4, Jul. 2010. [4] A. Aiordǎchioaie and P. Baumann, "PetaScope: An open-source implementation of the OGC WCS Geo service standards suite," Lect. Notes Comput. Sci. (including Subser. Lect. Notes Artif. Intell. Lect. Notes Bioinformatics), vol. 6187 LNCS, pp. 160-168, Jun. 2010. [5] P. Hogan, C. Maxwell, R. Kim, and T. Gaskins, "World Wind 3D Earth Viewing," Apr. 2007. [6] C. E. Viviano-Beck et al., "Revised CRISM spectral parameters and summary products based on the currently detected mineral diversity on Mars," J. Geophys. Res. E Planets, vol. 119, no. 6, pp. 1403-1431, Jun. 2014. [7] R. N. Clark et al., "USGS digital spectral library splib06a: U.S. Geological Survey, Digital Data Series 231," 2007. [Online]. Available: http://speclab.cr.usgs.gov/spectral.lib06.
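
    Since WCPS queries are plain strings sent over HTTP, they are easy to script. The sketch below follows rasdaman/PetaScope conventions for a ProcessCoverages request; the endpoint, coverage name, and band selector are placeholders rather than PlanetServer's deployed values.

    ```python
    import requests

    ENDPOINT = "https://example.org/rasdaman/ows"  # placeholder PetaScope URL

    # WCPS: extract one band of a (hypothetical) CRISM cube, encoded as PNG.
    wcps = 'for c in (crism_frt0000a425) return encode(c.band_233, "png")'

    resp = requests.get(
        ENDPOINT,
        params={
            "service": "WCS",
            "version": "2.0.1",
            "request": "ProcessCoverages",
            "query": wcps,
        },
        timeout=120,
    )
    resp.raise_for_status()
    with open("band.png", "wb") as fh:
        fh.write(resp.content)
    ```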

  4. blend4php: a PHP API for galaxy.

    PubMed

    Wytko, Connor; Soto, Brian; Ficklin, Stephen P

    2017-01-01

    Galaxy is a popular framework for the execution of complex analytical pipelines, typically for large data sets, and is commonly used for (but not limited to) genomic, genetic and related biological analyses. It provides a web front-end and integrates with high performance computing resources. Here we report the development of the blend4php library, which wraps Galaxy's RESTful API in a PHP-based library. PHP-based web applications can use blend4php to automate the execution, monitoring and management of a remote Galaxy server, including its users, workflows, jobs and more. The blend4php library was specifically developed for the integration of Galaxy with Tripal, the open-source toolkit for the creation of online genomic and genetic web sites. However, it was designed as an independent library for use by any application, and is freely available under version 3 of the GNU Lesser General Public License (LGPL v3.0) at https://github.com/galaxyproject/blend4php. Database URL: https://github.com/galaxyproject/blend4php. © The Author(s) 2017. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.
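
    Because Galaxy's API is plain HTTP+JSON, the same calls that blend4php wraps for PHP can be issued from any language. A minimal sketch in Python (with a placeholder API key) lists a user's workflows:

    ```python
    import requests

    GALAXY = "https://usegalaxy.org"  # any Galaxy server
    KEY = "YOUR_API_KEY"              # per-user key from Galaxy preferences

    workflows = requests.get(
        f"{GALAXY}/api/workflows", params={"key": KEY}, timeout=60
    ).json()

    for wf in workflows:
        print(wf["id"], wf["name"])
    ```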

  5. Asian Pacific Islander dementia care network: a model of care for underserved communities.

    PubMed

    Kally, Zina; Cherry, Debra L; Howland, Susan; Villarruel, Monica

    2014-01-01

    This study presents the results of the work of the Asian Pacific Islander Dementia Care Network (APIDCN)--a collaborative model of care created to develop community capacity to deliver dementia capable services, build community awareness about Alzheimer's disease and other dementias, and offer direct services to caregivers in the API community in Los Angeles. Through trainings, mentoring, and outreach campaigns, the APIDCN expanded the availability of culturally competent services in the API community. The knowledge that was embedded within partner organizations and in the community at large assures sustainability of the services after the project ended.

  6. Mobile Timekeeping Application Built on Reverse-Engineered JPL Infrastructure

    NASA Technical Reports Server (NTRS)

    Witoff, Robert J.

    2013-01-01

    Every year, non-exempt employees cumulatively waste over one man-year tracking their time and using the timekeeping Web page to save those times. This app eliminates that waste. The innovation is a native iPhone app built on custom libraries created by reverse-engineering the standard JPL timekeeping application and its API. It implements a punch-in/punch-out paradigm for timekeeping and emphasizes ease of access: any non-exempt employee can natively punch in and out, as well as save and view their JPL timecard. Communication is handled through custom libraries that re-route traffic through BrowserRAS (remote access service). This has value at any center where employees track their time.

  7. 47 CFR 61.43 - Annual price cap filings required.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... tariff year, that make appropriate adjustments to their PCI, API, and SBI values pursuant to §§ 61.45 through 61.47, and that incorporate new services into the PCI, API, or SBI calculations pursuant to §§ 61...

  8. 47 CFR 61.43 - Annual price cap filings required.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... tariff year, that make appropriate adjustments to their PCI, API, and SBI values pursuant to §§ 61.45 through 61.47, and that incorporate new services into the PCI, API, or SBI calculations pursuant to §§ 61...

  9. 47 CFR 61.43 - Annual price cap filings required.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... tariff year, that make appropriate adjustments to their PCI, API, and SBI values pursuant to §§ 61.45 through 61.47, and that incorporate new services into the PCI, API, or SBI calculations pursuant to §§ 61...

  10. 47 CFR 61.43 - Annual price cap filings required.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... tariff year, that make appropriate adjustments to their PCI, API, and SBI values pursuant to §§ 61.45 through 61.47, and that incorporate new services into the PCI, API, or SBI calculations pursuant to §§ 61...

  11. WholeCellSimDB: a hybrid relational/HDF database for whole-cell model predictions.

    PubMed

    Karr, Jonathan R; Phillips, Nolan C; Covert, Markus W

    2014-01-01

    Mechanistic 'whole-cell' models are needed to develop a complete understanding of cell physiology. However, extracting biological insights from whole-cell models requires running and analyzing large numbers of simulations. We developed WholeCellSimDB, a database for organizing whole-cell simulations. WholeCellSimDB was designed to enable researchers to search simulation metadata to identify simulations for further analysis, and to quickly slice and aggregate simulation results data. In addition, WholeCellSimDB enables users to share simulations with the broader research community. The database uses a hybrid relational/hierarchical data format architecture to efficiently store and retrieve both simulation setup metadata and results data. WholeCellSimDB provides a graphical Web-based interface to search, browse, plot and export simulations; a JavaScript Object Notation (JSON) Web service to retrieve data for Web-based visualizations; a command-line interface to deposit simulations; and a Python API to retrieve data for advanced analysis. Overall, we believe WholeCellSimDB will help researchers use whole-cell models to advance basic biological science and bioengineering. Database URL: http://www.wholecellsimdb.org. Source code repository: http://github.com/CovertLab/WholeCellSimDB. © The Author(s) 2014. Published by Oxford University Press.

  12. Design of Deformation Monitoring System for Volcano Mitigation

    NASA Astrophysics Data System (ADS)

    Islamy, M. R. F.; Salam, R. A.; Munir, M. M.; Irsyam, M.; Khairurrijal

    2016-08-01

    Indonesia has many active volcanoes that are potentially disastrous. Good mitigation systems are needed to protect people and reduce casualties from potential disasters caused by volcanic eruptions. Therefore, a system to monitor volcano deformation was built. The system employs telemetry combining Radio Frequency (RF) communication via XBee modules and General Packet Radio Service (GPRS) communication via the SIM900. There are two types of modules in this system: the coordinator, acting as a parent, and the nodes, acting as children. Each node connects to the coordinator, forming a Wireless Sensor Network (WSN) with a star topology, and has an inclinometer-based sensor, a Global Positioning System (GPS) receiver, and an XBee module. The coordinator polls each node one at a time to prevent data collisions between nodes, saves the data to an SD card, and transmits the data to a web server via GPRS. The inclinometer was calibrated with a self-built calibrator and tested in a high-temperature environment to check its durability. The GPS was tested by displaying its position on the web server via the Google Maps Application Programming Interface (API) v3. It was shown that the coordinator can receive data from every node and transmit it to the web server reliably, and that the system works well in a high-temperature environment.

  13. Genome-wide analysis of signatures of selection in populations of African honey bees (Apis mellifera) using new web-based tools.

    PubMed

    Fuller, Zachary L; Niño, Elina L; Patch, Harland M; Bedoya-Reina, Oscar C; Baumgarten, Tracey; Muli, Elliud; Mumoki, Fiona; Ratan, Aakrosh; McGraw, John; Frazier, Maryann; Masiga, Daniel; Schuster, Stephen; Grozinger, Christina M; Miller, Webb

    2015-07-10

    With the development of inexpensive, high-throughput sequencing technologies, it has become feasible to examine questions related to population genetics and molecular evolution of non-model species in their ecological contexts on a genome-wide scale. Here, we employed a newly developed suite of integrated, web-based programs to examine population dynamics and signatures of selection across the genome using several well-established tests, including FST, pN/pS, and McDonald-Kreitman. We applied these techniques to study populations of honey bees (Apis mellifera) in East Africa. In Kenya, there are several described A. mellifera subspecies, which are thought to be localized to distinct ecological regions. We performed whole genome sequencing of 11 worker honey bees from apiaries distributed throughout Kenya and identified 3.6 million putative single-nucleotide polymorphisms. The dense coverage allowed us to apply several computational procedures to study population structure and the evolutionary relationships among the populations, and to detect signs of adaptive evolution across the genome. While there is considerable gene flow among the sampled populations, there are clear distinctions between populations from the northern desert region and those from the temperate, savannah region. We identified several genes showing population genetic patterns consistent with positive selection within African bee populations, and between these populations and European A. mellifera or Asian Apis florea. These results lay the groundwork for future studies of adaptive ecological evolution in honey bees, and demonstrate the use of new, freely available web-based tools and workflows (http://usegalaxy.org/r/kenyanbee) that can be applied to any model system with genomic information.
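
    For readers unfamiliar with the first of these statistics, one common formulation (given here as a reminder; the paper may use a different estimator, such as Weir-Cockerham) compares heterozygosity within and across subpopulations:

    ```latex
    % Fixation index: genetic differentiation among subpopulations.
    % H_T: expected heterozygosity of the pooled (total) population.
    % H_S: mean expected heterozygosity within subpopulations.
    F_{ST} = \frac{H_T - H_S}{H_T}
    ```

    Values near 0 indicate extensive gene flow among the sampled populations, while values approaching 1 indicate strong differentiation, consistent with the contrast the authors report between desert-region and savannah-region bees.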

  14. Web3DMol: interactive protein structure visualization based on WebGL.

    PubMed

    Shi, Maoxiang; Gao, Juntao; Zhang, Michael Q

    2017-07-03

    A growing number of web-based databases and tools for protein research are being developed. There is now a widespread need for visualization tools that present the three-dimensional (3D) structure of proteins in web browsers. Here, we introduce our 3D modeling program, Web3DMol, a web application focusing on protein structure visualization in modern web browsers. Users submit a PDB identification code or select a PDB archive from their local disk, and Web3DMol displays and allows interactive manipulation of the 3D structure. Featured functions, such as a sequence plot, fragment segmentation, a measure tool and meta-information display, are offered to help users gain a better understanding of the protein structure. Easy-to-use APIs are available for developers to reuse and extend Web3DMol. Web3DMol can be freely accessed at http://web3dmol.duapp.com/, and the source code is distributed under the MIT license. © The Author(s) 2017. Published by Oxford University Press on behalf of Nucleic Acids Research.

  15. AWS-Glacier As A Storage Foundation For AWS-EC2 Hosted Scientific Data Services

    NASA Astrophysics Data System (ADS)

    Gallagher, J. H. R.; Potter, N.

    2016-12-01

    Using AWS Glacier as a base-level data store for a scientific data service presents new challenges for web-accessible data services, along with their software clients and human operators. All meaningful Glacier transactions take at least 4 hours to complete. This is in contrast to the various web APIs for data, such as WMS, WFS, WCS, DAP2, and netCDF tools, which were all written on the premise that the response will be (nearly) immediate. Only DAP4 and WPS contain an explicit asynchronous component in their respective protocols that allows for "return later" behaviors. We were able to put Hyrax (a DAP4 server) in front of Glacier-held resources, but there were significant issues. Any kind of probing of the datasets happens at the cost of the Glacier retrieval period, 4 hours. A couple of crucial things fall out of this. The first is that the service must cache metadata, including coordinate map arrays, so that a client can have enough information available in the "immediate" time frame to make decisions about what to ask for from the dataset. This type of request planning is important because a data access request will take 4 hours to complete unless the data resource has been cached. The second is that clients need to change their behavior when accessing datasets in an asynchronous system, even if the metadata is cached. Commonly, client applications request a number of data components from a DAP2 service in the course of "discovering" the dataset. This may not be a well-supported model of interaction with Glacier or any other high-latency data store.
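
    The Glacier side of this workflow is explicitly asynchronous. A hedged boto3 sketch (with placeholder vault and archive identifiers) shows the initiate/poll/fetch pattern that a data service sitting in front of Glacier must absorb:

    ```python
    import boto3

    glacier = boto3.client("glacier")  # assumes AWS credentials are configured

    # Start an archive retrieval; Glacier takes hours to stage the data.
    job = glacier.initiate_job(
        accountId="-",             # "-" means the caller's own account
        vaultName="science-data",  # placeholder vault
        jobParameters={"Type": "archive-retrieval", "ArchiveId": "EXAMPLE_ID"},
    )
    job_id = job["jobId"]

    # Hours later: check status, then fetch the staged bytes.
    desc = glacier.describe_job(
        accountId="-", vaultName="science-data", jobId=job_id
    )
    if desc["Completed"]:
        out = glacier.get_job_output(
            accountId="-", vaultName="science-data", jobId=job_id
        )
        data = out["body"].read()
    ```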

  16. User-driven Cloud Implementation of environmental models and data for all

    NASA Astrophysics Data System (ADS)

    Gurney, R. J.; Percy, B. J.; Elkhatib, Y.; Blair, G. S.

    2014-12-01

    Environmental data and models come from disparate sources over a variety of geographical and temporal scales with different resolutions and data standards, often including terabytes of data and model simulations. Unfortunately, these data and models tend to remain solely within the custody of the private and public organisations that create the data, and of the scientists who build models and generate results. Although many models and datasets are theoretically available to others, the lack of ease of access tends to keep them out of reach of many. We have developed an intuitive web-based tool that utilises environmental models and datasets located in a cloud to produce results that are appropriate to the user. Storyboards showing the interfaces and visualisations have been created for each of several exemplars. A library of virtual machine images has been prepared to serve these exemplars, each image tailored to run computer models appropriate to the end user. Two approaches have been used: first, RESTful web services conforming to the Open Geospatial Consortium (OGC) Web Processing Service (WPS) interface standard, using the Python-based PyWPS; second, a MySQL database interrogated using PHP code. In both cases, the web client sends the server an HTTP GET request to execute the process with a number of parameter values and, once execution terminates, an XML or JSON response is sent back and parsed at the client side to extract the results. All web services are stateless, i.e. application state is not maintained by the server, reducing its operational overheads and simplifying infrastructure management tasks such as load balancing and failure recovery. A hybrid cloud solution has been used, with models and data sited on both private and public clouds. The storyboards have been transformed into intuitive web interfaces at the client side using HTML, CSS and JavaScript, utilising plug-ins such as jQuery and Flot (for graphics), and the Google Maps APIs. We have demonstrated that a cloud infrastructure can be used to assemble a virtual research environment that, coupled with a user-driven development approach, is able to cater to the needs of a wide range of user groups, from domain experts to concerned members of the general public.
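
    The WPS approach in particular reduces a model run to a single HTTP GET. A hedged sketch (the process identifier and inputs are placeholders for the unnamed exemplar services) looks like this:

    ```python
    import requests

    WPS = "https://example.org/wps"  # placeholder PyWPS endpoint

    params = {
        "service": "WPS",
        "version": "1.0.0",
        "request": "Execute",
        "identifier": "run_catchment_model",           # hypothetical process
        "datainputs": "site=mysite;start=2013-01-01",  # key=value;key=value
    }

    # WPS 1.0 responds with XML describing status and outputs, which the
    # client parses to extract results for plotting.
    xml = requests.get(WPS, params=params, timeout=120).text
    print(xml[:200])
    ```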

  17. The AppScale Cloud Platform

    PubMed Central

    Krintz, Chandra

    2013-01-01

    AppScale is an open source distributed software system that implements a cloud platform as a service (PaaS). AppScale makes cloud applications easy to deploy and scale over disparate cloud fabrics, implementing a set of APIs and architecture that also makes apps portable across the services they employ. AppScale is API-compatible with Google App Engine (GAE) and thus executes GAE applications on-premise or over other cloud infrastructures, without modification. PMID:23828721

  18. OpenClimateGIS - A Web Service Providing Climate Model Data in Commonly Used Geospatial Formats

    NASA Astrophysics Data System (ADS)

    Erickson, T. A.; Koziol, B. W.; Rood, R. B.

    2011-12-01

    The goal of the OpenClimateGIS project is to make climate model datasets readily available in commonly used, modern geospatial formats used by GIS software, browser-based mapping tools, and virtual globes. The climate modeling community typically stores climate data in multidimensional gridded formats capable of efficiently storing large volumes of data (such as netCDF and GRIB), while the geospatial community typically uses flexible vector and raster formats that are capable of storing small volumes of data (relative to the multidimensional gridded formats). OpenClimateGIS seeks to address this difference in data formats by clipping climate data to user-specified vector geometries (i.e. areas of interest) and translating the gridded data on-the-fly into multiple vector formats. The OpenClimateGIS system does not store climate data archives locally, but rather works in conjunction with external climate archives that expose climate data via the OPeNDAP protocol. OpenClimateGIS provides a RESTful API web service for accessing climate data resources via HTTP, allowing a wide range of applications to access the climate data. The OpenClimateGIS system has been developed using open source development practices, and the source code is publicly available. The project integrates libraries from several other open source projects (including Django, PostGIS, numpy, Shapely, and netcdf4-python). OpenClimateGIS development is supported by a grant from NOAA's Climate Program Office.

  19. Development of Web GIS for complex processing and visualization of climate geospatial datasets as an integral part of dedicated Virtual Research Environment

    NASA Astrophysics Data System (ADS)

    Gordov, Evgeny; Okladnikov, Igor; Titov, Alexander

    2017-04-01

    For comprehensive usage of large geospatial meteorological and climate datasets it is necessary to create a distributed software infrastructure based on the spatial data infrastructure (SDI) approach. Currently, it is generally accepted that the development of client applications as integrated elements of such an infrastructure should be based on modern web and GIS technologies. The paper describes the Web GIS for complex processing and visualization of geospatial datasets (mainly in NetCDF and PostGIS formats) as an integral part of the dedicated Virtual Research Environment for comprehensive study of ongoing and possible future climate change and analysis of its implications, providing full information and computing support for the study of economic, political and social consequences of global climate change at the global and regional levels. The Web GIS consists of two basic software parts:
    1. A server-side part comprising PHP applications of the SDI geoportal, realizing the functionality of interaction with the computational core backend and the WMS/WFS/WPS cartographical services, as well as implementing an open API for browser-based client software. Being the secondary one, this part provides a limited set of procedures accessible via a standard HTTP interface.
    2. A front-end part representing the Web GIS client, developed according to the "single page application" approach based on the JavaScript libraries OpenLayers (http://openlayers.org/), ExtJS (https://www.sencha.com/products/extjs) and GeoExt (http://geoext.org/). It implements the application business logic and provides an intuitive user interface similar to that of popular desktop GIS applications such as uDig and QuantumGIS. The Boundless/OpenGeo architecture was used as a basis for the Web GIS client development.
    According to general INSPIRE requirements for data visualization, the Web GIS provides such standard functionality as data overview, image navigation, scrolling, scaling and graphical overlay, and the display of map legends and corresponding metadata information. The specialized Web GIS client contains three basic tiers:
    - A tier of NetCDF metadata in JSON format;
    - A middleware tier of JavaScript objects implementing methods to work with the NetCDF metadata, the XML file of the selected calculation configuration (XML task), and the WMS/WFS/WPS cartographical services;
    - A graphical user interface tier of JavaScript objects realizing the general application business logic.
    The Web GIS supports the launching of computational processing services for tasks in the area of environmental monitoring, as well as the presentation of calculation results in the form of WMS/WFS cartographical layers in raster (PNG, JPG, GeoTIFF), vector (KML, GML, Shape), and binary (NetCDF) formats. It has shown its effectiveness in solving real climate change research problems and disseminating investigation results in cartographical formats. The work is supported by the Russian Science Foundation grant No 16-19-10257.

  20. DGIdb 3.0: a redesign and expansion of the drug-gene interaction database.

    PubMed

    Cotto, Kelsy C; Wagner, Alex H; Feng, Yang-Yang; Kiwala, Susanna; Coffman, Adam C; Spies, Gregory; Wollam, Alex; Spies, Nicholas C; Griffith, Obi L; Griffith, Malachi

    2018-01-04

    The drug-gene interaction database (DGIdb, www.dgidb.org) consolidates, organizes and presents drug-gene interactions and gene druggability information from papers, databases and web resources. DGIdb normalizes content from 30 disparate sources and allows user-friendly advanced browsing, searching and filtering through an intuitive web user interface, an application programming interface (API) and a public cloud-based server image. DGIdb v3.0 represents a major update of the database. Nine of the previously included 24 sources were updated, and six new resources were added, bringing the total number of sources to 30. These updates and additions have cumulatively resulted in 56 309 interaction claims and have substantially expanded the comprehensive catalogue of druggable genes and anti-neoplastic drug-gene interactions included in DGIdb. Along with these content updates, v3.0 has received a major overhaul of its codebase, including an updated user interface, preset interaction search filters, consolidation of interaction information into interaction groups, greatly improved search response times and an upgraded underlying web application framework. In addition, the expanded API features new endpoints that allow users to extract more detailed information about queried drugs, genes and drug-gene interactions, including listings of PubMed IDs, interaction type and other interaction metadata.
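
    As a hedged illustration of the API, the earlier v2-style interactions endpoint could be queried as below; endpoint paths and response fields may have changed with the v3 redesign, so treat this as a sketch rather than current reference documentation.

    ```python
    import requests

    resp = requests.get(
        "https://dgidb.org/api/v2/interactions.json",  # pre-v3 style endpoint
        params={"genes": "BRAF,KRAS"},
        timeout=60,
    )
    resp.raise_for_status()

    for match in resp.json().get("matchedTerms", []):
        for interaction in match.get("interactions", []):
            print(match["geneName"], "->", interaction["drugName"])
    ```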

  1. The synergistic effects of almond protection fungicides on honey bee (Apis mellifera) forager survival

    USDA-ARS's Scientific Manuscript database

    The honey bee (Apis mellifera) contributes approximately $17 billion annually in pollination services performed for major agricultural crops in the United States including almond, which is completely dependent on honey bee pollination for nut set. Almond growers face challenges to crop productivity ...

  2. Using Clouds for MapReduce Measurement Assignments

    ERIC Educational Resources Information Center

    Rabkin, Ariel; Reiss, Charles; Katz, Randy; Patterson, David

    2013-01-01

    We describe our experiences teaching MapReduce in a large undergraduate lecture course using public cloud services and the standard Hadoop API. Using the standard API, students directly experienced the quality of industrial big-data tools. Using the cloud, every student could carry out scalability benchmarking assignments on realistic hardware,…
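    To make the programming model concrete, here is a framework-free sketch of the word-count exercise typical of such assignments, with the shuffle phase simulated by local grouping; it illustrates the map and reduce roles only and makes no use of the actual Hadoop API.

      # Illustrative MapReduce word count: the mapper emits (word, 1) pairs,
      # a simulated shuffle groups them by key, and the reducer sums counts.
      from collections import defaultdict

      def mapper(line):
          for word in line.split():
              yield word.lower(), 1

      def reducer(word, counts):
          return word, sum(counts)

      def run_mapreduce(lines):
          groups = defaultdict(list)  # stands in for Hadoop's shuffle/sort
          for line in lines:
              for key, value in mapper(line):
                  groups[key].append(value)
          return dict(reducer(k, v) for k, v in groups.items())

      print(run_mapreduce(["big data tools", "big data big clusters"]))
      # -> {'big': 3, 'data': 2, 'tools': 1, 'clusters': 1}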

  3. Metadata and network API aspects of a framework for storing and retrieving civil infrastructure monitoring data

    NASA Astrophysics Data System (ADS)

    Wong, John-Michael; Stojadinovic, Bozidar

    2005-05-01

    A framework has been defined for storing and retrieving civil infrastructure monitoring data over a network. The framework consists of two primary components: metadata and network communications. The metadata component provides the descriptions and data definitions necessary for cataloging and searching monitoring data. The communications component provides Java classes for remotely accessing the data. Packages of Enterprise JavaBeans and data handling utility classes are written to use the underlying metadata information to build real-time monitoring applications. The utility of the framework was evaluated using wireless accelerometers on a shaking table earthquake simulation test of a reinforced concrete bridge column. The NEESgrid data and metadata repository services were used as a backend storage implementation. A web interface was created to demonstrate the utility of the data model and provides an example health monitoring application.

  4. A Bloom Filter-Powered Technique Supporting Scalable Semantic Discovery in Data Service Networks

    NASA Astrophysics Data System (ADS)

    Zhang, J.; Shi, R.; Bao, Q.; Lee, T. J.; Ramachandran, R.

    2016-12-01

    More and more Earth data analytics software products are published onto the Internet as a service, in the format of either a heavyweight WSDL service or a lightweight RESTful API. Such reusable data analytics services form a data service network, which allows Earth scientists to compose (mashup) services into value-added ones. Therefore, it is important to have a technique that is capable of helping Earth scientists quickly identify appropriate candidate datasets and services in the global data service network. Most existing service discovery techniques, however, mainly rely on syntax- or semantics-based service matchmaking between service requests and available services. Since the scale of the data service network is increasing rapidly, the run-time computational cost will soon become a bottleneck. To address this issue, this project presents a way of applying a network routing mechanism to facilitate data service discovery in a service network, featuring scalability and performance. Earth data services are automatically annotated in Web Ontology Language for Services (OWL-S) based on their metadata, semantic information, and usage history. A Deterministic Annealing (DA) technique is applied to dynamically organize annotated data services into a hierarchical network, where virtual routers are created to represent semantic local networks featuring leading terms. Bloom filters are then generated over the virtual routers. A data service search request is transformed into a network routing problem in order to quickly locate candidate services through the network hierarchy. A neural network-powered technique is applied to assure network address encoding and routing performance. A series of empirical studies has been conducted to evaluate the applicability and effectiveness of the proposed approach.
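    The core data structure is easy to sketch. The self-contained example below shows the Bloom filter idea behind the routing step: each virtual router keeps a compact bit summary of the service terms reachable through it, so a lookup can prune whole branches with cheap membership tests (false positives are possible, false negatives are not). The hashing scheme and sizes are illustrative choices, not the project's parameters.

      # Illustrative Bloom filter over a virtual router's leading terms.
      import hashlib

      class BloomFilter:
          def __init__(self, size=1024, hashes=3):
              self.size, self.hashes, self.bits = size, hashes, 0

          def _positions(self, term):
              # Derive k bit positions by salting the term before hashing.
              for i in range(self.hashes):
                  digest = hashlib.sha256(f"{i}:{term}".encode()).hexdigest()
                  yield int(digest, 16) % self.size

          def add(self, term):
              for pos in self._positions(term):
                  self.bits |= 1 << pos

          def might_contain(self, term):
              return all(self.bits & (1 << pos) for pos in self._positions(term))

      router = BloomFilter()
      for term in ["precipitation", "aerosol", "subsetting"]:
          router.add(term)
      print(router.might_contain("aerosol"))       # True
      print(router.might_contain("magnetometer"))  # almost certainly False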

  5. Web Design for Space Operations: An Overview of the Challenges and New Technologies Used in Developing and Operating Web-Based Applications in Real-Time Operational Support Onboard the International Space Station, in Astronaut Mission Planning and Mission Control Operations

    NASA Technical Reports Server (NTRS)

    Khan, Ahmed

    2010-01-01

    The International Space Station (ISS) Operations Planning Team, Mission Control Centre and Mission Automation Support Network (MAS) have all evolved over the years to use commercial web-based technologies to create a configurable electronic infrastructure to manage the complex network of real-time planning, crew scheduling, resource and activity management, as well as onboard document and procedure management, required to coordinate ISS assembly, daily operations and mission support. While these web technologies are classified as non-critical in nature, they form part of an essential backbone of daily operations on the ISS and allow the crew to operate the ISS as a functioning science laboratory. The rapid evolution of the internet from 1998 (when ISS assembly began) to today, along with the nature of continuous manned operations in space, has presented a unique challenge in terms of software engineering and system development. In addition, the use of a wide array of competing internet technologies (including commercial technologies such as .NET and Java) and the special requirement of having to support this network both nationally, among the various control centres of the International Partners (IPs), and onboard the station itself, have created special challenges for the MCC Web Tools Development Team, software engineers and flight controllers who implement and maintain this system. This paper presents an overview of some of these operational challenges, the evolving nature of the solutions, and the future use of COTS-based rich internet technologies in manned space flight operations. In particular, this paper will focus on the use of Microsoft's .NET API to develop web-based operational tools, the use of XML-based service-oriented architectures (SOA) that needed to be customized to support mission operations, the maintenance of a Microsoft IIS web server onboard the ISS, the OpsLAN, and functional-oriented web design with AJAX

  6. Interactive Learning Modules: Enabling Near Real-Time Oceanographic Data Use In Undergraduate Education

    NASA Astrophysics Data System (ADS)

    Kilb, D. L.; Fundis, A. T.; Risien, C. M.

    2012-12-01

    The focus of the Education and Public Engagement (EPE) component of the NSF's Ocean Observatories Initiative (OOI) is to provide a new layer of cyber-interactivity for undergraduate educators to bring near real-time data from the global ocean into learning environments. To accomplish this, we are designing six online services including: 1) visualization tools, 2) a lesson builder, 3) a concept map builder, 4) educational web services (middleware), 5) collaboration tools and 6) an educational resource database. Here, we report on our Fall 2012 release that includes the first four of these services: 1) Interactive visualization tools allow users to interactively select data of interest, display the data in various views (e.g., maps, time-series and scatter plots) and obtain statistical measures such as mean, standard deviation and a regression line fit to select data. Specific visualization tools include a tool to compare different months of data, a time series explorer tool to investigate the temporal evolution of select data parameters (e.g., sea water temperature or salinity), a glider profile tool that displays ocean glider tracks and associated transects, and a data comparison tool that allows users to view the data either in scatter plot view comparing one parameter with another, or in time series view. 2) Our interactive lesson builder tool allows users to develop a library of online lesson units, which are collaboratively editable and sharable and provides starter templates designed from learning theory knowledge. 3) Our interactive concept map tool allows the user to build and use concept maps, a graphical interface to map the connection between concepts and ideas. This tool also provides semantic-based recommendations, and allows for embedding of associated resources such as movies, images and blogs. 4) Education web services (middleware) will provide an educational resource database API.
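    As a small, hedged illustration of the statistical measures the visualization tools report for a selected series, the sketch below computes a mean, standard deviation and least-squares trend line with NumPy; the temperature values are synthetic stand-ins for OOI data.

      # Illustrative statistics for a time series, as in the explorer tools.
      import numpy as np

      days = np.arange(30, dtype=float)
      temperature = 12.0 + 0.05 * days \
          + np.random.default_rng(0).normal(0.0, 0.2, 30)  # synthetic data

      print("mean:", temperature.mean())
      print("std: ", temperature.std(ddof=1))
      slope, intercept = np.polyfit(days, temperature, 1)  # regression line
      print(f"trend: {slope:.3f} deg/day (intercept {intercept:.2f})")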

  7. Exposing Coverage Data to the Semantic Web within the MELODIES project: Challenges and Solutions

    NASA Astrophysics Data System (ADS)

    Riechert, Maik; Blower, Jon; Griffiths, Guy

    2016-04-01

    Coverage data, typically big in data volume, assigns values to a given set of spatiotemporal positions, together with metadata on how to interpret those values. Existing storage formats like netCDF, HDF and GeoTIFF all have various restrictions that prevent them from being preferred formats for use over the web, especially the semantic web. Factors that are relevant here are the processing complexity, the semantic richness of the metadata, and the ability to request partial information, such as a subset or just the appropriate metadata. Making coverage data available within web browsers opens the door to new ways for working with such data, including new types of visualization and on-the-fly processing. As part of the European project MELODIES (http://melodiesproject.eu) we look into the challenges of exposing such coverage data in an interoperable and web-friendly way, and propose solutions using a host of emerging technologies like JSON-LD, the DCAT and GeoDCAT-AP ontologies, the CoverageJSON format, and new approaches to REST APIs for coverage data. We developed the CoverageJSON format within the MELODIES project as an additional way to expose coverage data to the web, next to having simple rendered images available using standards like OGC's WMS. CoverageJSON partially incorporates JSON-LD but does not encode individual data values as semantic resources, making use of the technology in a practical manner. The development also focused on it being a potential output format for OGC WCS. We will demonstrate how existing netCDF data can be exposed as CoverageJSON resources on the web together with a REST API that allows users to explore the data and run operations such as spatiotemporal subsetting. We will show various use cases from the MELODIES project, including reclassification of a Land Cover dataset client-side within the browser with the ability for the user to influence the reclassification result by making use of the above technologies.
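    For orientation, the sketch below composes an abridged CoverageJSON-style document for a tiny grid in Python; the member names follow the CoverageJSON specification as we understand it, while the parameter and values are invented, and optional members such as the referencing system are omitted for brevity.

      # Abridged, illustrative CoverageJSON document for a 3x2x1 grid.
      import json

      coverage = {
          "type": "Coverage",
          "domain": {
              "type": "Domain",
              "domainType": "Grid",
              "axes": {
                  "x": {"values": [-10.0, -5.0, 0.0]},
                  "y": {"values": [50.0, 55.0]},
                  "t": {"values": ["2016-01-01T00:00:00Z"]},
              },
          },
          "parameters": {
              "LC": {"type": "Parameter",
                     "observedProperty": {"label": {"en": "Land Cover"}}},
          },
          "ranges": {
              "LC": {"type": "NdArray", "dataType": "integer",
                     "axisNames": ["t", "y", "x"], "shape": [1, 2, 3],
                     "values": [1, 2, 2, 3, 1, 1]},
          },
      }
      print(json.dumps(coverage, indent=2))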

  8. The Auroral Planetary Imaging and Spectroscopy (APIS) service

    NASA Astrophysics Data System (ADS)

    Lamy, L.; Prangé, R.; Henry, F.; Le Sidaner, P.

    2015-06-01

    The Auroral Planetary Imaging and Spectroscopy (APIS) service, accessible online, provides open and interactive access to processed auroral observations of the outer planets and their satellites. Such observations are of interest for a wide community at the interface between planetology, magnetospheric and heliospheric physics. APIS consists of (i) a high-level database built from planetary auroral observations acquired by the Hubble Space Telescope (HST) since 1997 with its most frequently used far-ultraviolet spectro-imagers, (ii) a dedicated search interface aimed at browsing this database efficiently through relevant conditional search criteria and (iii) the ability to work with the data interactively online through plotting tools developed by the Virtual Observatory (VO) community, such as Aladin and Specview. This service is VO compliant and can therefore also be queried by external search tools of the VO community. The diversity of available data and the capability to sort them by relevant physical criteria shall in particular facilitate statistical studies on long time scales and/or combined multi-instrument, multi-spectral analyses.
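    Since the service is VO compliant, it can in principle also be reached from generic VO client libraries. The snippet below is a hedged sketch using pyvo's TAP client; the endpoint URL, table name and columns are assumptions for illustration only, not APIS's documented interface.

      # Hedged sketch: query a VO TAP service with pyvo.
      # URL, table and column names are hypothetical.
      import pyvo

      service = pyvo.dal.TAPService("http://example-tap.obspm.fr/tap")
      results = service.search(
          "SELECT TOP 5 target_name, time_min FROM apis.epn_core"
      )
      for row in results:
          print(row["target_name"], row["time_min"])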

  9. Electricity Data Browser

    EIA Publications

    The Electricity Data Browser shows generation, consumption, fossil fuel receipts, stockpiles, retail sales, and electricity prices. The data appear on an interactive web page and are updated each month. The Electricity Data Browser includes all the datasets collected and published in EIA's Electric Power Monthly and allows users to perform dynamic charting of data sets as well as map the data by state. The data browser includes a series of reports that appear in the Electric Power Monthly and allows readers to drill down to plant level statistics, where available. All images and datasets are available for download. Users can also link to the data series in EIA's Application Programming Interface (API). An API makes our data machine-readable and more accessible to users.
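    As a hedged example of the machine-readable access the API provides, the sketch below pulls a few monthly retail price records with requests; the v2 route and parameter names are drawn from EIA's public API documentation as we recall it, and the key is a placeholder for a registered one.

      # Minimal sketch: fetch monthly retail electricity prices for one state.
      import requests

      API_KEY = "YOUR_EIA_API_KEY"  # placeholder for a registered key
      resp = requests.get(
          "https://api.eia.gov/v2/electricity/retail-sales/data/",
          params={"api_key": API_KEY, "frequency": "monthly",
                  "data[0]": "price", "facets[stateid][]": "CA", "length": 5},
          timeout=30,
      )
      resp.raise_for_status()
      for row in resp.json()["response"]["data"]:
          print(row["period"], row["stateid"], row["price"])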

  10. Implementation of Web Processing Services (WPS) over IPSL Earth System Grid Federation (ESGF) node

    NASA Astrophysics Data System (ADS)

    Kadygrov, Nikolay; Denvil, Sebastien; Carenton, Nicolas; Levavasseur, Guillaume; Hempelmann, Nils; Ehbrecht, Carsten

    2016-04-01

    The Earth System Grid Federation (ESGF) aims to provide access to climate data for the international climate community. ESGF is a system of distributed and federated nodes that dynamically interact with each other. ESGF users may search and download climate data, geographically distributed over the world, from one common web interface and through a standardized API. With the continuous development of climate models and the beginning of the sixth phase of the Coupled Model Intercomparison Project (CMIP6), the amount of data available from ESGF will increase continuously over the next five years. IPSL holds replicas of the output of different global and regional climate models as well as observations and reanalysis data (CMIP5, CORDEX, obs4MIPs, etc.) that are available on the IPSL ESGF node. In order to let scientists perform analyses of the models without downloading vast amounts of data, Web Processing Services (WPS) were installed at the IPSL compute node. The work is part of the CONVERGENCE project funded by the French National Research Agency (ANR). The PyWPS implementation of the Open Geospatial Consortium (OGC) Web Processing Service standard is used within the framework of the birdhouse software. Processes can be run remotely by users through a web-based WPS client or a command-line tool. All calculations are performed on the server side, close to the data. If the required models/observations are not available at IPSL, they are downloaded from the ESGF network and cached by the WPS process using the synda tool. The outputs of the WPS processes are available for download as plots, tar archives or NetCDF files. We present the architecture of WPS at IPSL along with processes for evaluation of model performance, on-site diagnostics and post-analysis processing of model output, e.g.:
    - regridding/interpolation/aggregation
    - ocgis (OpenClimateGIS) based polygon subsetting of the data
    - average seasonal cycle, multi-model mean, multi-model mean bias
    - calculation of climate indices with the icclim library (CERFACS)
    - atmospheric modes of variability
    In order to evaluate the performance of any new model once it becomes available in ESGF, we implement a WPS with several model diagnostics and performance metrics calculated using ESMValTool (Eyring et al., GMDD 2015). As a further step, we are developing new WPS processes and core functions to be implemented at the IPSL ESGF compute node, following the needs of the scientific community.
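    A hedged sketch of invoking such a server-side process with OWSLib's WPS client is shown below; the endpoint URL, process identifier and inputs are hypothetical placeholders rather than the node's actual configuration.

      # Minimal sketch: discover and execute one WPS process with OWSLib.
      from owslib.wps import WebProcessingService, monitorExecution

      wps = WebProcessingService("https://example-node.ipsl.fr/wps")  # hypothetical
      for process in wps.processes:
          print(process.identifier, "-", process.title)

      execution = wps.execute(
          "polygon_subset",                         # hypothetical process id
          inputs=[("dataset", "cmip5.tas.day.nc"),  # hypothetical inputs
                  ("region", "Europe")],
      )
      monitorExecution(execution)  # poll the job until it completes
      print(execution.processOutputs[0].reference)  # URL of the result file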

  11. Using Mobile App Development Tools to Build a GIS Application

    NASA Astrophysics Data System (ADS)

    Mital, A.; Catchen, M.; Mital, K.

    2014-12-01

    Our group designed and built working web, Android, and iOS applications using different mapping libraries as bases on which to overlay fire data from NASA. The group originally planned to make app versions for Google Maps, Leaflet, and OpenLayers. However, because the Leaflet library did not load properly on Android, the group focused its efforts on the other two mapping libraries. For Google Maps, the group first designed a UI for the web app and made a working version of the app. After updating the source of fire data to one which also provided historical fire data, the design had to be modified to include the extra data. After completing a working version of the web app, the group used WebView on Android, a built-in component that allowed porting the web app to Android without rewriting its code. Upon completing this, the group found that Apple iOS devices had a similar capability, and so decided to add an iOS app to the project using a function similar to WebView. Alongside this effort, the group began implementing an OpenLayers fire map using a simpler UI. This web app was completed fairly quickly relative to the Google Maps version; however, it did not include functionality such as satellite imagery or searchable locations. The group finished the project with a working Android version of the Google Maps based app supporting API levels 14-19 and an OpenLayers based app supporting API levels 8-19, as well as a Google Maps based iOS app supporting both old and new screen formats. This project was implemented by high school and college students under an SGT Inc. STEM internship program.

  12. Access and Use of MMS Data through SPDF Services

    NASA Astrophysics Data System (ADS)

    McGuire, R. E.; Bilitza, D.; Boardsen, S. A.; Candey, R. M.; Chimiak, R.; Cooper, J. F.; Garcia, L. N.; Harris, B. T.; Johnson, R. C.; Kovalick, T. J.; Lal, N.; Leckner, H. A.; Liu, M. H.; Papitashvili, N. E.; Rao, U. R.; Roberts, D. A.; Yurow, R. E.

    2016-12-01

    In its role as a Heliophysics Active Final Archive, and in close cooperation with the MMS project and its Science Data Center, the Space Physics Data Facility (SPDF) now serves a full set of public MMS data and QuickLook plots. All SPDF services for these and all other data are available via links from the SPDF home page (http://spdf.gsfc.nasa.gov). SPDF's CDAWeb features MMS Level-2 survey and burst mode data with graphics, listing and data superset/subset functions. These capabilities are available (1) through our HTML user interface, (2) through calls to our CDAS web services API, and (3) through other interfaces and libraries using the CDAS web services or otherwise accessing our holdings, including SPDF's Heliophysics Data Portal and several external systems. As context for use of the MMS data, CDAWeb also serves current data from many other missions. These include the Van Allen Probes 1/2 and the five THEMIS/ARTEMIS spacecraft, as well as ACE, Cluster 1/2/3/4, DMSP 16/17/18, Geotail, GOES 13/14/15, NOAA/POES 15/16/18/19, MetOp POES 1/2, STEREO A/B, TWINS 1/2, Wind and more than 120 ground-based investigations. This full set of public MMS Level-2 science data and QuickLook plots, and all other public data held by SPDF, are also available for direct file download via HTTP or FTP links from the SPDF home page above. As a reminder, MMS Level-2 data are publicly available about 30 days after the data are taken, and QuickLook survey plots are available about a day after the data are taken. MMS orbits (current and predictive) are served through SPDF's SSCWeb service and our Java-based interactive 4D Orbit Viewer, along with the orbits of many other current missions. Our presentation will discuss recent enhancements to CDAWeb and other services, and our plans to support new MMS data products and upcoming heliophysics missions including ICON, GOLD and Solar Probe Plus.
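    For readers who script against the CDAS web services API mentioned above, the sketch below lists a few MMS datasets over its REST interface; the route and JSON field names follow SPDF's public documentation as we recall it and should be treated as assumptions.

      # Hedged sketch: list MMS datasets via the CDAS REST web services.
      import requests

      BASE = "https://cdaweb.gsfc.nasa.gov/WS/cdasr/1/dataviews/sp_phys"
      resp = requests.get(
          f"{BASE}/datasets",
          params={"observatoryGroup": "MMS"},
          headers={"Accept": "application/json"},
          timeout=60,
      )
      resp.raise_for_status()
      for dataset in resp.json().get("DatasetDescription", [])[:5]:
          print(dataset["Id"], "-", dataset["Label"])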

  13. Cloud-Based Computational Tools for Earth Science Applications

    NASA Astrophysics Data System (ADS)

    Arendt, A. A.; Fatland, R.; Howe, B.

    2015-12-01

    Earth scientists are increasingly required to think across disciplines and utilize a wide range of datasets in order to solve complex environmental challenges. Although significant progress has been made in distributing data, researchers must still invest heavily in developing computational tools to accommodate their specific domain. Here we document our development of lightweight computational data systems aimed at enabling rapid data distribution, analytics and problem solving tools for Earth science applications. Our goal is for these systems to be easily deployable, scalable and flexible to accommodate new research directions. As an example we describe "Ice2Ocean", a software system aimed at predicting runoff from snow and ice in the Gulf of Alaska region. Our backend components include relational database software to handle tabular and vector datasets, Python tools (NumPy, pandas and xray) for rapid querying of gridded climate data, and an energy and mass balance hydrological simulation model (SnowModel). These components are hosted in a cloud environment for direct access across research teams, and can also be accessed via API web services using a REST interface. This API is a vital component of our system architecture, as it enables quick integration of our analytical tools across disciplines, and can be accessed by any existing data distribution centers. We will showcase several data integration and visualization examples to illustrate how our system has expanded our ability to conduct cross-disciplinary research.
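    As a purely illustrative sketch of the REST-style access pattern described, the snippet below queries a hypothetical Ice2Ocean endpoint for runoff output; the URL, parameters and response fields are invented, showing only the integration pattern.

      # Hypothetical REST query against an Ice2Ocean-style API.
      import requests

      resp = requests.get(
          "https://example.org/ice2ocean/api/v1/runoff",  # hypothetical route
          params={"basin": "gulf_of_alaska",
                  "start": "2015-06-01", "end": "2015-06-30"},
          timeout=30,
      )
      resp.raise_for_status()
      for record in resp.json()["results"]:  # invented response schema
          print(record["date"], record["runoff_m3s"])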

  14. STEREO In-situ Data Analysis

    NASA Astrophysics Data System (ADS)

    Schroeder, P. C.; Luhmann, J. G.; Davis, A. J.; Russell, C. T.

    2006-12-01

    STEREO's IMPACT (In-situ Measurements of Particles and CME Transients) investigation provides the first opportunity for long-duration, detailed observations of 1 AU magnetic field structures, plasma and suprathermal electrons, and energetic particles at points bracketing Earth's heliospheric location. The PLASTIC instrument takes plasma ion composition measurements, completing STEREO's comprehensive in-situ perspective. Stereoscopic/3D information from the STEREO SECCHI imagers and SWAVES radio experiment makes it possible to use both multipoint and quadrature studies to connect interplanetary Coronal Mass Ejections (ICMEs) and solar wind structures to CMEs and coronal holes observed at the Sun. The uniqueness of the STEREO mission requires novel data analysis tools and techniques to take advantage of the mission's full scientific potential. An interactive browser with the ability to create publication-quality plots has been developed which integrates STEREO's in-situ data with data from a variety of other missions including WIND and ACE. Also, an application program interface (API) is provided allowing users to create custom software that ties directly into STEREO's data set. The API allows for more advanced forms of data mining than are currently available through most web-based data services. A variety of data access techniques and the development of cross-spacecraft data analysis tools allow the larger scientific community to combine STEREO's unique in-situ data with those of other missions, particularly the L1 missions, and, therefore, to maximize STEREO's scientific potential in gaining a greater understanding of the heliosphere.

  15. Learning the Semantics of Structured Data Sources

    ERIC Educational Resources Information Center

    Taheriyan, Mohsen

    2015-01-01

    Information sources such as relational databases, spreadsheets, XML, JSON, and Web APIs contain a tremendous amount of structured data, however, they rarely provide a semantic model to describe their contents. Semantic models of data sources capture the intended meaning of data sources by mapping them to the concepts and relationships defined by a…

  16. Using GeoRSS feeds to distribute house renting and selling information based on Google map

    NASA Astrophysics Data System (ADS)

    Nong, Yu; Wang, Kun; Miao, Lei; Chen, Fei

    2007-06-01

    Geographically Encoded Objects RSS (GeoRSS) is a way to encode location in RSS feeds. RSS is a widely supported format for syndication of news and weblogs, and is extendable to publish any sort of itemized data. As weblogs have exploded and RSS feeds have become new portals, geo-tagged feeds are needed to show the locations the stories describe. GeoRSS adopts the core of the RSS framework, expressing map annotations in the RSS XML format. The case studied illustrates that GeoRSS can be maximally concise in representation and conception, so it is simple to generate GeoRSS feeds and then mash them up with Google Maps through its API, showing real estate information together with other attributes in the information window. After subscribing to feeds on subjects of interest, users can easily check for new bulletins shown on the map through syndication. The primary design goal of GeoRSS is to make spatial data creation as easy as regular web content development. It does more than that, however, successfully bridging the gap between traditional GIS professionals and amateurs, web map hackers, and the numerous services that enable location-based content, thanks to its simplicity and effectiveness.
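    A short sketch of what such an encoding looks like in practice: the standard-library code below builds a GeoRSS-Simple item for a house listing, attaching a georss:point so map clients can place it; the listing values are invented.

      # Build one GeoRSS-Simple <item> with Python's standard library.
      import xml.etree.ElementTree as ET

      GEORSS = "http://www.georss.org/georss"
      ET.register_namespace("georss", GEORSS)

      item = ET.Element("item")
      ET.SubElement(item, "title").text = "2BR apartment for rent"
      ET.SubElement(item, "link").text = "http://example.com/listing/42"
      ET.SubElement(item, "description").text = "Monthly rent: 3200 RMB"
      ET.SubElement(item, f"{{{GEORSS}}}point").text = "31.2304 121.4737"  # lat lon

      print(ET.tostring(item, encoding="unicode"))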

  17. Automating Geospatial Visualizations with Smart Default Renderers for Data Exploration Web Applications

    NASA Astrophysics Data System (ADS)

    Ekenes, K.

    2017-12-01

    This presentation will outline the process of creating a web application for exploring large amounts of scientific geospatial data using modern automated cartographic techniques. Traditional cartographic methods, including data classification, may inadvertently hide geospatial and statistical patterns in the underlying data. This presentation demonstrates how to use smart web APIs that analyze a dataset as it loads and suggest the most appropriate visualizations based on its statistics. Because there are only a few ways to visualize any given dataset well, and because many users never move beyond default values, it is imperative to provide smart default color schemes tailored to the dataset rather than static defaults. The smart APIs offer multiple functions for automating visualizations, along with UI elements that let users create more than one visualization for a dataset, since there is no single best way to visualize it. Because bivariate and multivariate visualizations are particularly difficult to create effectively, this automated approach takes the guesswork out of the process and provides a number of ways to generate multivariate visualizations for the same variables, letting the user choose the visualization most appropriate for their presentation. The methods used in these APIs and the renderers they generate are not available elsewhere. The presentation will show how statistics can serve as the basis for automating default visualizations of data along continuous ramps, creating more refined visualizations while revealing the spread and outliers of the data. Adding interactive components that instantaneously alter visualizations allows users to unearth previously unknown spatial patterns among one or more variables. These applications may focus on a single, frequently updated dataset, or be configurable for a variety of datasets from multiple sources.
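    As a hedged illustration of the statistics-driven idea (not the presenter's actual API), the sketch below derives continuous color-ramp stops from a dataset's mean and spread, so the ramp's dynamic range covers the bulk of the data while outliers saturate; the one-sigma anchors are an illustrative choice.

      # Suggest color-ramp stops from data statistics instead of fixed breaks.
      import statistics

      def suggest_ramp_stops(values, sigmas=1.0):
          mean = statistics.fmean(values)
          sd = statistics.stdev(values)
          return {"low": mean - sigmas * sd,   # ramp start
                  "mid": mean,                 # ramp midpoint
                  "high": mean + sigmas * sd}  # values beyond this saturate

      depths = [3.1, 4.0, 4.2, 4.4, 4.8, 5.0, 5.1, 9.7]  # sample attribute
      print(suggest_ramp_stops(depths))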

  18. Oceanids command and control (C2) data system - Marine autonomous systems data for vehicle piloting, scientific data users, operational data assimilation, and big data

    NASA Astrophysics Data System (ADS)

    Buck, J. J. H.; Phillips, A.; Lorenzo, A.; Kokkinaki, A.; Hearn, M.; Gardner, T.; Thorne, K.

    2017-12-01

    The National Oceanography Centre (NOC) operates a fleet of approximately 36 autonomous marine platforms including submarine gliders, autonomous underwater vehicles, and autonomous surface vehicles. Each platform effectively has the capability to observe the ocean and collect data akin to a small research vessel. This is creating growth in data volumes and complexity while the amount of resource available to manage data remains static. The Oceanids Command and Control (C2) project aims to solve these issues by fully automating data archival, processing and dissemination. The data architecture being implemented jointly by NOC and the Scottish Association for Marine Science (SAMS) includes a single Application Programming Interface (API) gateway to handle authentication, forwarding and delivery of both metadata and data. Technicians and principal investigators will enter expedition data prior to deployment of vehicles, enabling automated data processing when vehicles are deployed. The system will support automated metadata acquisition from platforms as this technology moves towards operational implementation. The exposure of metadata to the web builds on a prototype developed by the European Commission supported SenseOCEAN project and uses open standards including World Wide Web Consortium (W3C) RDF/XML, the Semantic Sensor Network ontology and the Open Geospatial Consortium (OGC) SensorML standard. Data will be delivered in the marine-domain Everyone's Glider Observatory (EGO) format and OGC Observations and Measurements. Additional formats will be served by implementing endpoints such as the NOAA ERDDAP tool. This standardised data delivery via the API gateway enables timely near-real-time data to be served to Oceanids users, BODC users, operational users and big data systems. The use of open standards will also enable web interfaces to be built rapidly on the API gateway, and delivery to European research infrastructures that include aligned reference models for data infrastructure.

  19. Proteus - A Free and Open Source Sensor Observation Service (SOS) Client

    NASA Astrophysics Data System (ADS)

    Henriksson, J.; Satapathy, G.; Bermudez, L. E.

    2013-12-01

    The Earth's 'electronic skin' is becoming ever more sophisticated, with a growing number of sensors measuring everything from seawater salinity levels to atmospheric pressure. To further the scientific application of this data collection effort, it is important to make the data easily available to anyone who wants to use it. Making Earth Science data readily available will allow the data to be used in new and potentially groundbreaking ways. The US National Science and Technology Council made this clear in its most recent National Strategy for Civil Earth Observations report, when it remarked that Earth observations 'are often found to be useful for additional purposes not foreseen during the development of the observation system'. On the road to this goal, the Open Geospatial Consortium (OGC) is defining uniform data formats and service interfaces to facilitate the discovery and access of sensor data. This is being done through the Sensor Web Enablement (SWE) stack of standards, which includes the Sensor Observation Service (SOS), Sensor Model Language (SensorML), Observations & Measurements (O&M) and Catalog Service for the Web (CSW). End users do not have to use these standards directly, but can use smart tools that leverage and implement them. We have developed such a tool, named Proteus. Proteus is an open-source sensor data discovery client. The goal of Proteus is to be a general-purpose client that can be used by anyone for discovering and accessing sensor data via OGC-based services. Proteus is a desktop client and supports a straightforward workflow for finding sensor data. The workflow takes the user through the process of selecting appropriate services, bounding boxes, observed properties, time periods and other search facets. NASA World Wind is used to display the matching sensor offerings on a map. Data from any sensor offering can be previewed in a time series. The user can download data from a single sensor offering, or download data in bulk from all matching sensor offerings. Proteus leverages NASA World Wind's WMS capabilities and allows overlaying sensor offerings on top of any map. Specific search criteria (i.e. user discoveries) can be saved and later restored. Proteus supports two user types: 1) the researcher/scientist interested in discovering and downloading specific sensor data as input to research processes, and 2) the data manager responsible for maintaining sensor data services (e.g. SOSs) who wants to ensure proper data and metadata delivery, verify sensor data, and receive sensor data alerts. Proteus has a web-based companion product named the Community Hub that is used to generate sensor data alerts. Alerts can be received via an RSS feed, viewed in a web browser or displayed directly in Proteus via a web-based API. To advance the vision of making Earth Science data easily discoverable and accessible to end users, professional or laymen, Proteus is available as open source on GitHub (https://github.com/intelligentautomation/proteus).
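    The sketch below is a hedged example of the kind of SOS request a client like Proteus issues on the user's behalf, here with the OWSLib library; the endpoint URL, offering and observed property are hypothetical placeholders.

      # Minimal sketch: browse offerings and fetch observations from an SOS.
      from owslib.sos import SensorObservationService

      sos = SensorObservationService("https://example.org/sos")  # hypothetical
      print([offering.id for offering in sos.offerings])

      observation = sos.get_observation(
          offerings=["SEAWATER_SALINITY"],       # hypothetical offering id
          observedProperties=["salinity"],       # hypothetical property
          responseFormat="http://www.opengis.net/om/2.0",
      )
      print(observation[:200])  # raw O&M response, truncated for display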

  20. Defining Data Access Pathways for Atmosphere to Electrons Wind Energy Data

    NASA Astrophysics Data System (ADS)

    Macduff, M.; Sivaraman, C.

    2016-12-01

    Atmosphere to Electrons (A2e) is a U.S. Department of Energy (DOE) Wind Program research initiative designed to optimize the performance of wind power plants by lowering the levelized cost of energy (LCOE). The Data Archive and Portal (DAP), managed by PNNL and hosted on Amazon Web Services, is a key capability of the A2e initiative. The DAP is used to collect, store, catalog, preserve and disseminate results from experimental and computational studies representing a diverse user community requiring both open and proprietary data archival solutions (http://a2e.pnnl.gov). To enable consumer access to the data, the DAP is built on a set of publicly accessible APIs. This includes persistent references for key metadata objects as well as authenticated access to the data itself. The goal is to make the DAP catalog visible through a variety of data access paths, bringing the data and metadata closer to the consumer. By providing persistent metadata records we hope to be able to build services that capture consumer utility and make referencing datasets easier.

  1. The User Community and a Multi-Mission Data Project: Services, Experiences and Directions of the Space Physics Data Facility

    NASA Technical Reports Server (NTRS)

    Fung, Shing F.; Bilitza, D.; Candey, R.; Chimiak, R.; Cooper, John; Fung, Shing; Harris, B.; Johnson R.; King, J.; Kovalick, T.; et al.

    2008-01-01

    From a user's perspective, the multi-mission data and orbit services of NASA's Space Physics Data Facility (SPDF) project offer a unique range of important data and services highly complementary to other services presently available or now evolving in the international heliophysics data environment. The VSP (Virtual Space Physics Observatory) service is an active portal to a wide range of distributed data sources. CDAWeb (Coordinated Data Analysis Web) enables plots, listings and file downloads for current data across the boundaries of missions and instrument types (now including data from THEMIS and STEREO). SSCWeb, HelioWeb and our 3D animated orbit viewer (TIPSOD) provide position data and query logic for most missions currently important to heliophysics science. OMNIWeb, with its new extension to 1- and 5-minute resolution, provides interplanetary parameters at the Earth's bow shock as a unique value-added data product. SPDF also maintains NASA's CDF (Common Data Format) standard and a range of associated tools, including translation services. These capabilities are all now available through web-services-based APIs as well as through our direct user interfaces. In this paper, we will demonstrate the latest data and capabilities now supported in these multi-mission services, review the lessons we continue to learn about what science users need and value in this class of services, and discuss our current thinking on the future role and appropriate focus of the SPDF effort in the evolving and increasingly distributed heliophysics data environment.

  2. Dreams deferred: Contextualizing the health and psychosocial needs of undocumented Asian and Pacific Islander young adults in Northern California.

    PubMed

    Sudhinaraset, May; Ling, Irving; To, Tu My; Melo, Jason; Quach, Thu

    2017-07-01

    There are currently 1.5 million undocumented Asians and Pacific Islanders (APIs) in the US. Undocumented API young adults, in particular, come of age in a challenging political and social climate, but little is known about their health outcomes. To our knowledge, this is the first study to assess the psychosocial needs and health status of API undocumented young adults. Guided by social capital theory, this qualitative study describes the social context of API undocumented young adults (ages 18-31), including community and government perceptions, and how social relationships influence health. This study was conducted in Northern California and included four focus group discussions (FGDs) and 24 in-depth interviews (IDIs), with 32 unique participants total. FGDs used purposeful sampling by gender (two male and two female discussions) and education status (in school and out-of-school). Findings suggest low bonding and bridging social capital. Results indicate that community distrust is high, even within the API community, due to high levels of exploitation, discrimination, and threats of deportation. Participants described how documentation status is a barrier in accessing health services, particularly mental health and sexual and reproductive health services. This study identifies trusted community groups and discusses recommendations for future research, programs, and policies. Copyright © 2017 The Authors. Published by Elsevier Ltd. All rights reserved.

  3. Home Energy Management System - VOLTTRON Integration

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zandi, Helia

    In most Home Energy Management Systems (HEMS) available on the market, devices running different communication protocols cannot interact with each other and exchange information. As a result of this integration, information about devices running different communication protocols becomes accessible to other agents and devices running on the VOLTTRON platform. The integration process can be used by any HEMS on the market regardless of the programming language it uses: if the existing HEMS provides an Application Programming Interface (API) based on the RESTful architecture, that API can be used for the integration. Our candidate HEMS in this project is home-assistant (Hass). An agent is implemented which communicates with the Hass API and receives information about the devices loaded on it. The agent publishes the information it receives on the VOLTTRON message bus so other agents can have access to it. On the other side, for each type of device an agent is implemented, such as a Climate Agent, Lock Agent, Switch Agent, Light Agent, etc. Each of these agents subscribes to the messages published on the message bus about its associated devices. These agents can also change the status of the devices by sending appropriate service calls to the API. Other agents and services on the platform can likewise access this information and coordinate their decision-making based on it.
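    As a hedged sketch of the RESTful surface such an agent talks to, the snippet below reads device states from Home Assistant's REST API with requests; the host and token are placeholders, and the token-based header reflects current Home Assistant releases rather than necessarily the version used in the project.

      # Minimal sketch: read device states over Home Assistant's REST API.
      import requests

      HASS_URL = "http://hass.local:8123"                      # placeholder
      HEADERS = {"Authorization": "Bearer LONG_LIVED_TOKEN"}   # placeholder

      resp = requests.get(f"{HASS_URL}/api/states", headers=HEADERS, timeout=10)
      resp.raise_for_status()
      for entity in resp.json():
          if entity["entity_id"].startswith(("climate.", "lock.", "switch.")):
              print(entity["entity_id"], "->", entity["state"])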

  4. Agile Data Management with the Global Change Information System

    NASA Astrophysics Data System (ADS)

    Duggan, B.; Aulenbach, S.; Tilmes, C.; Goldstein, J.

    2013-12-01

    We describe experiences applying agile software development techniques to the realm of data management during the development of the Global Change Information System (GCIS), a web service and API for authoritative global change information under development by the US Global Change Research Program. Some of the challenges during system design and implementation have been: (1) balancing the need for a rigorous mechanism for ensuring information quality with the realities of large data sets whose contents are often in flux, (2) utilizing existing data to inform decisions about the scope and nature of new data, and (3) continuously incorporating new knowledge and concepts into a relational data model. The workflow for managing the content of the system has much in common with the development of the system itself. We examine various aspects of agile software development and discuss whether or how we have been able to use them for data curation as well as software development.

  5. ARDOLORES: an Arduino based motors control system for DOLORES

    NASA Astrophysics Data System (ADS)

    Gonzalez, Manuel; Ventura, H.; San Juan, J.; Di Fabrizio, L.

    2014-07-01

    We present ARDOLORES, a custom-made motor control system for the DOLORES instrument in use at the TNG telescope. ARDOLORES replaced the original PMAC-based motor control system at a fraction of the cost. The whole system is composed of one master Arduino UNO with its Ethernet shield, which handles communications with the external world through an Ethernet socket, and of one Arduino UNO with a custom motor shield for each axis to be controlled. Communication between the master and slave Arduinos takes place over the I2C bus. A Java web service has also been written to control the motors from a higher level and to provide an external API for the scientific GUI. The system has been working since January 2012, handling the DOLORES motors, and has proven stable and reliable, with easy maintenance of both the hardware and the software.

  6. HammerCloud: A Stress Testing System for Distributed Analysis

    NASA Astrophysics Data System (ADS)

    van der Ster, Daniel C.; Elmsheuser, Johannes; Úbeda García, Mario; Paladin, Massimo

    2011-12-01

    Distributed analysis of LHC data is an I/O-intensive activity which places large demands on the internal network, storage, and local disks at remote computing facilities. Commissioning and maintaining a site to provide an efficient distributed analysis service is therefore a challenge which can be aided by tools that help evaluate a variety of infrastructure designs and configurations. HammerCloud is one such tool: a stress-testing service used by central operations teams, regional coordinators, and local site admins to (a) submit an arbitrary number of analysis jobs to a number of sites, (b) maintain at a steady state a predefined number of jobs running at the sites under test, (c) produce web-based reports summarizing the efficiency and performance of the sites under test, and (d) present a web interface for historical test results to both evaluate progress and compare sites. HammerCloud was built around the distributed analysis framework Ganga, exploiting its API for grid job management. HammerCloud has been employed by the ATLAS experiment for continuous testing of many sites worldwide, and also during large-scale computing challenges such as STEP'09 and UAT'09, where the scale of the tests exceeded 10,000 concurrently running jobs and 1,000,000 total jobs over multi-day periods. In addition, HammerCloud is being adopted by the CMS experiment; the plugin structure of HammerCloud allows the execution of CMS jobs using their official tool (CRAB).

  7. HIV testing among sexually experienced Asian and Pacific Islander young women association with routine gynecologic care.

    PubMed

    Hahm, Hyeouk Chris; Song, In Han; Ozonoff, Al; Sassani, Jessica C

    2009-01-01

    To describe the proportion of HIV testing in the past 12 months among sexually experienced Asian and Pacific Islander (API) women and to investigate to what extent routine gynecologic care (RGC) increases HIV testing among API women. Data were derived from Wave III of the National Longitudinal Study of Adolescent Health (Add Health). Analyses were limited to 7,576 sexually experienced women (White, n = 4,482 [68.5%]; Black, n = 1,693 [25.6%]; Hispanic, n = 923 [13.9%]; API, n = 478 [7.2%]) aged 18-27 years. Multiple logistic regression analyses were used to estimate the association between RGC and HIV testing after controlling for predisposing, need, and enabling factors. On average, 22.8% (n = 1,504) of sexually experienced women reported HIV testing in the past year. API women had the lowest proportion of testing (17.2%), and Black women had the highest (26.2%). Overall, 60.2% of API women reported receiving RGC; however, only 15.5% of API who received RGC reported HIV testing. After controlling for covariates, significantly positive associations were found for White, Black, and Hispanic women between RGC and HIV testing; however, there was no evidence that RGC was associated with HIV testing among API women. Our data suggest that RGC does not [corrected] increase HIV testing among API women. To eliminate disparities in HIV testing service utilization among API women, appropriate efforts should be directed to better understand the barriers and facilitators of HIV testing among this population.

  8. Using Social Media and Mobile Devices to Discover and Share Disaster Data Products Derived From Satellites

    NASA Technical Reports Server (NTRS)

    Mandl, Daniel; Cappelaere, Patrice; Frye, Stuart; Evans, John; Moe, Karen

    2014-01-01

    Data products derived from Earth observing satellites are difficult to find and share without specialized software and, often, a highly paid and specialized staff. For our research effort, we prototyped a distributed architecture that depends on a standardized communication protocol and applications program interface (API) that makes it easy for anyone to discover and access disaster related data. Providers can easily supply the public with their disaster related products by building an adapter for our API. Users can use the API to browse and find products that relate to the disaster at hand, for example floods, without a centralized catalogue, and then share that data via social media. Furthermore, a longer-term goal for this architecture is to enable other users who see the shared disaster product to generate the same product for other areas of interest via simple point and click actions on the API on their mobile device. The user will also be able to edit the data with on-the-ground local observations and return the updated information to the original repository if the system is configured for this function. This architecture leverages SensorWeb functionality [1] presented at previous IGARSS conferences. The architecture is divided into two pieces: the front end, which is the GeoSocial API, and the back end, which is a standardized disaster node that knows how to talk to other disaster nodes and can also communicate with the GeoSocial API. The GeoSocial API, along with the basic functionality of the disaster node, enables crowdsourcing and can thus leverage in-situ observations by people external to a group to perform tasks such as improving water reference maps, which are maps of existing water before floods. This can lower the cost of generating precision water maps. Keywords: Data Discovery, Disaster Decision Support, Disaster Management, Interoperability, CEOS WGISS Disaster Architecture

  9. Develop 3G Application with The J2ME SATSA API

    NASA Astrophysics Data System (ADS)

    JunWu, Xu; JunLing, Liang

    This paper describes research in the use of the Security and Trust Services API for J2ME (SATSA) to develop mobile applications for 3G networks. SATSA defines a set of APIs that allows J2ME applications to communicate with and access the functionality, secure storage and cryptographic operations provided by security elements such as smart cards and Wireless Identification Modules (WIM). A Java Card application could also work as an authentication module in a J2ME-based e-bank application, which would allow its users to access their bank accounts using their cell phones.

  10. Outside the Framework of Thinkable Thought: The Modern Orchestration Project

    ERIC Educational Resources Information Center

    Gattegno, Eliot Aron

    2010-01-01

    In today's world of too much information, context--not content--is king. This proposal is for the development of an unparalleled sonic analysis tool that converts audio files into musical score notation, and a Web site (API) to collect, manage and preserve information about the musical sounds analyzed, as well as music scores, videos, and articles…

  11. HIV and AIDS in Suburban Asian and Pacific Islander Communities: Factors Influencing Self-Efficacy in HIV Risk Reduction

    ERIC Educational Resources Information Center

    Takahashi, Lois M.; Magalong, Michelle G.; DeBell, Paula; Fasudhani, Angela

    2006-01-01

    Though AIDS case rates among Asian Pacific Islander Americans (APIs) in the United States remain relatively low, the number has been steadily increasing. Scholars, policy makers, and service providers still know little about how confident APIs are in carrying out different HIV risk reduction strategies. This article addresses this gap by…

  12. ClearedLeavesDB: an online database of cleared plant leaf images

    PubMed Central

    2014-01-01

    Background: Leaf vein networks are critical to both the structure and function of leaves. A growing body of recent work has linked leaf vein network structure to the physiology, ecology and evolution of land plants. In the process, multiple institutions and individual researchers have assembled collections of cleared leaf specimens in which vascular bundles (veins) are rendered visible. In an effort to facilitate analysis and digitally preserve these specimens, high-resolution images are usually created, either of entire leaves or of magnified leaf subsections. In a few cases, collections of digital images of cleared leaves are available for use online. However, these collections do not share a common platform nor is there a means to digitally archive cleared leaf images held by individual researchers (in addition to those held by institutions). Hence, there is a growing need for a digital archive that enables online viewing, sharing and disseminating of cleared leaf image collections held by both institutions and individual researchers.
    Description: The Cleared Leaf Image Database (ClearedLeavesDB) is an online web-based resource for a community of researchers to contribute, access and share cleared leaf images. ClearedLeavesDB leverages resources of large-scale, curated collections while enabling the aggregation of small-scale collections within the same online platform. ClearedLeavesDB is built on Drupal, an open source content management platform. It allows plant biologists to store leaf images online with corresponding meta-data, share image collections with a user community and discuss images and collections via a common forum. We provide tools to upload processed images and results to the database via a web services client application that can be downloaded from the database.
    Conclusions: We developed ClearedLeavesDB, a database focusing on cleared leaf images that combines interactions between users and data via an intuitive web interface. The web interface allows storage of large collections and integrates with leaf image analysis applications via an open application programming interface (API). The open API allows uploading of processed images and other trait data to the database, further enabling distribution and documentation of analyzed data within the community. The initial database is seeded with nearly 19,000 cleared leaf images representing over 40 GB of image data. Extensible storage and growth of the database is ensured by using the data storage resources of the iPlant Discovery Environment. ClearedLeavesDB can be accessed at http://clearedleavesdb.org.
    PMID:24678985

  13. ClearedLeavesDB: an online database of cleared plant leaf images.

    PubMed

    Das, Abhiram; Bucksch, Alexander; Price, Charles A; Weitz, Joshua S

    2014-03-28

    Leaf vein networks are critical to both the structure and function of leaves. A growing body of recent work has linked leaf vein network structure to the physiology, ecology and evolution of land plants. In the process, multiple institutions and individual researchers have assembled collections of cleared leaf specimens in which vascular bundles (veins) are rendered visible. In an effort to facilitate analysis and digitally preserve these specimens, high-resolution images are usually created, either of entire leaves or of magnified leaf subsections. In a few cases, collections of digital images of cleared leaves are available for use online. However, these collections do not share a common platform nor is there a means to digitally archive cleared leaf images held by individual researchers (in addition to those held by institutions). Hence, there is a growing need for a digital archive that enables online viewing, sharing and disseminating of cleared leaf image collections held by both institutions and individual researchers. The Cleared Leaf Image Database (ClearedLeavesDB) is an online web-based resource for a community of researchers to contribute, access and share cleared leaf images. ClearedLeavesDB leverages resources of large-scale, curated collections while enabling the aggregation of small-scale collections within the same online platform. ClearedLeavesDB is built on Drupal, an open source content management platform. It allows plant biologists to store leaf images online with corresponding meta-data, share image collections with a user community and discuss images and collections via a common forum. We provide tools to upload processed images and results to the database via a web services client application that can be downloaded from the database. We developed ClearedLeavesDB, a database focusing on cleared leaf images that combines interactions between users and data via an intuitive web interface. The web interface allows storage of large collections and integrates with leaf image analysis applications via an open application programming interface (API). The open API allows uploading of processed images and other trait data to the database, further enabling distribution and documentation of analyzed data within the community. The initial database is seeded with nearly 19,000 cleared leaf images representing over 40 GB of image data. Extensible storage and growth of the database is ensured by using the data storage resources of the iPlant Discovery Environment. ClearedLeavesDB can be accessed at http://clearedleavesdb.org.
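    As a purely hypothetical sketch of the upload path the open API describes, the snippet below posts an analyzed image and trait data with requests; the endpoint, fields and authentication are invented placeholders, not the database's documented interface.

      # Hypothetical upload of a processed image plus trait metadata.
      import requests

      with open("leaf_042_cleared.png", "rb") as image:
          resp = requests.post(
              "http://clearedleavesdb.org/api/images",   # hypothetical route
              files={"image": image},
              data={"species": "Quercus rubra",          # invented metadata
                    "vein_density_mm_per_mm2": "4.7"},
              auth=("username", "password"),             # placeholder auth
              timeout=60,
          )
      print(resp.status_code)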

  14. The transcription factor encyclopedia.

    PubMed

    Yusuf, Dimas; Butland, Stefanie L; Swanson, Magdalena I; Bolotin, Eugene; Ticoll, Amy; Cheung, Warren A; Zhang, Xiao Yu Cindy; Dickman, Christopher T D; Fulton, Debra L; Lim, Jonathan S; Schnabl, Jake M; Ramos, Oscar H P; Vasseur-Cognet, Mireille; de Leeuw, Charles N; Simpson, Elizabeth M; Ryffel, Gerhart U; Lam, Eric W-F; Kist, Ralf; Wilson, Miranda S C; Marco-Ferreres, Raquel; Brosens, Jan J; Beccari, Leonardo L; Bovolenta, Paola; Benayoun, Bérénice A; Monteiro, Lara J; Schwenen, Helma D C; Grontved, Lars; Wederell, Elizabeth; Mandrup, Susanne; Veitia, Reiner A; Chakravarthy, Harini; Hoodless, Pamela A; Mancarelli, M Michela; Torbett, Bruce E; Banham, Alison H; Reddy, Sekhar P; Cullum, Rebecca L; Liedtke, Michaela; Tschan, Mario P; Vaz, Michelle; Rizzino, Angie; Zannini, Mariastella; Frietze, Seth; Farnham, Peggy J; Eijkelenboom, Astrid; Brown, Philip J; Laperrière, David; Leprince, Dominique; de Cristofaro, Tiziana; Prince, Kelly L; Putker, Marrit; del Peso, Luis; Camenisch, Gieri; Wenger, Roland H; Mikula, Michal; Rozendaal, Marieke; Mader, Sylvie; Ostrowski, Jerzy; Rhodes, Simon J; Van Rechem, Capucine; Boulay, Gaylor; Olechnowicz, Sam W Z; Breslin, Mary B; Lan, Michael S; Nanan, Kyster K; Wegner, Michael; Hou, Juan; Mullen, Rachel D; Colvin, Stephanie C; Noy, Peter John; Webb, Carol F; Witek, Matthew E; Ferrell, Scott; Daniel, Juliet M; Park, Jason; Waldman, Scott A; Peet, Daniel J; Taggart, Michael; Jayaraman, Padma-Sheela; Karrich, Julien J; Blom, Bianca; Vesuna, Farhad; O'Geen, Henriette; Sun, Yunfu; Gronostajski, Richard M; Woodcroft, Mark W; Hough, Margaret R; Chen, Edwin; Europe-Finner, G Nicholas; Karolczak-Bayatti, Magdalena; Bailey, Jarrod; Hankinson, Oliver; Raman, Venu; LeBrun, David P; Biswal, Shyam; Harvey, Christopher J; DeBruyne, Jason P; Hogenesch, John B; Hevner, Robert F; Héligon, Christophe; Luo, Xin M; Blank, Marissa Cathleen; Millen, Kathleen Joyce; Sharlin, David S; Forrest, Douglas; Dahlman-Wright, Karin; Zhao, Chunyan; Mishima, Yuriko; Sinha, Satrajit; Chakrabarti, Rumela; Portales-Casamar, Elodie; Sladek, Frances M; Bradley, Philip H; Wasserman, Wyeth W

    2012-01-01

    Here we present the Transcription Factor Encyclopedia (TFe), a new web-based compendium of mini review articles on transcription factors (TFs) that is founded on the principles of open access and collaboration. Our consortium of over 100 researchers has collectively contributed over 130 mini review articles on pertinent human, mouse and rat TFs. Notable features of the TFe website include a high-quality PDF generator and web API for programmatic data retrieval. TFe aims to rapidly educate scientists about the TFs they encounter through the delivery of succinct summaries written and vetted by experts in the field. TFe is available at http://www.cisreg.ca/tfe.

  15. 3Dmol.js: molecular visualization with WebGL.

    PubMed

    Rego, Nicholas; Koes, David

    2015-04-15

    3Dmol.js is a modern, object-oriented JavaScript library that uses the latest web technologies to provide interactive, hardware-accelerated three-dimensional representations of molecular data without the need to install browser plugins or Java. 3Dmol.js provides a full-featured API for developers as well as a straightforward declarative interface that lets users easily share and embed molecular data in websites. 3Dmol.js is distributed under the permissive BSD open source license. Source code and documentation can be found at http://3Dmol.csb.pitt.edu. Contact: dkoes@pitt.edu. © The Author 2014. Published by Oxford University Press.

  16. The tissue micro-array data exchange specification: a web based experience browsing imported data

    PubMed Central

    Nohle, David G; Hackman, Barbara A; Ayers, Leona W

    2005-01-01

    Background The AIDS and Cancer Specimen Resource (ACSR) is an HIV/AIDS tissue bank consortium sponsored by the National Cancer Institute (NCI) Division of Cancer Treatment and Diagnosis (DCTD). The ACSR offers approved researchers HIV-infected biologic samples and uninfected control tissues, including tissue cores in micro-arrays (TMA), accompanied by de-identified clinical data. Researchers interested in the type and quality of TMA tissue cores and the associated clinical data need an efficient method for viewing available TMA materials. Because each tissue sample within a TMA has separate data, including a core tissue digital image and clinical data, an organized, standard approach to producing, navigating, and publishing such data is necessary. The Association for Pathology Informatics (API) Extensible Markup Language (XML) TMA data exchange specification (TMA DES), proposed in April 2003, provides a common format for TMA data. Exporting TMA data into the proposed format offered an opportunity to implement the API TMA DES. Using our public BrowseTMA tool, we created a web site that organizes and cross-references TMA lists, digital "virtual slide" images, TMA DES export data, linked legends, and clinical details for researchers. Microsoft Excel® and Microsoft Word® are used to convert tabular clinical data and produce an XML file in the TMA DES format. The BrowseTMA tool contains Extensible Stylesheet Language Transformation (XSLT) scripts that convert XML data into HyperText Markup Language (HTML) web pages, with hyperlinks added automatically to allow rapid navigation. Results Block lists, virtual slide images, legends, clinical details, and exports have been placed on the ACSR web site for 14 blocks with 1623 cores of 2.0, 1.0, and 0.6 mm sizes. Our virtual microscope can be used to view and annotate these TMA images. Researchers can readily navigate from TMA block lists to TMA legends and to clinical details for a selected tissue core. Exports for 11 blocks with 3812 cores from three other institutions were processed with the BrowseTMA tool. Fifty common data elements (CDEs) from the TMA DES were used, and 42 more were created for site-specific data. Researchers can download TMA clinical data in the TMA DES format. Conclusion Virtual TMAs with clinical data can be viewed on the Internet by interested researchers using the BrowseTMA tool. We have organized our approach to producing, sorting, navigating, and publishing TMA information to facilitate such review. We converted Excel TMA data into TMA DES XML, and imported it, together with TMA DES XML from another institution, into BrowseTMA to produce web pages that allow browsing through the merged data. As a result of this experience with exported data from several institutions, we proposed enhancements and implemented improvements to the API TMA DES. A document type definition (DTD) was written for the API TMA DES that optionally includes the proposed enhancements, and independent validators can be used to check exports against the DTD (with or without those enhancements). Linking tissue core images to readily navigable clinical data greatly improves the value of the TMA. PMID:16086837
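
    The XML-to-HTML step described above relies on XSLT. A minimal sketch of that kind of transformation using Python's lxml library follows; the file names are placeholders, and the actual BrowseTMA stylesheets are not reproduced here.

        # Apply an XSLT stylesheet to a TMA DES export, producing an HTML page.
        # File names are placeholders for illustration.
        from lxml import etree

        xml = etree.parse("tma_des_export.xml")    # TMA DES data export
        xslt = etree.parse("browse_tma.xsl")       # stylesheet (placeholder name)
        transform = etree.XSLT(xslt)
        html = transform(xml)                      # HTML with navigation hyperlinks
        with open("tma_block_list.html", "wb") as out:
            out.write(etree.tostring(html, pretty_print=True))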

  17. EIDA Next Generation: ongoing and future developments

    NASA Astrophysics Data System (ADS)

    Strollo, Angelo; Quinteros, Javier; Sleeman, Reinoud; Trani, Luca; Clinton, John; Stammler, Klaus; Danecek, Peter; Pedersen, Helle; Ionescu, Constantin

    2015-04-01

    The European Integrated Data Archive (EIDA; http://www.orfeus-eu.org/eida/eida.html) is the distributed Data Centre system within ORFEUS, providing transparent access and services to high-quality seismic data across (currently) nine large data archives in Europe. EIDA is growing in terms of the number of participating data centres, the size of the archives, the variability of the data in the archives, the number of users, and the volume of downloads. The ongoing success of EIDA thus presents challenges that are the driving force behind the design of the next generation (NG) of EIDA, which is expected to be implemented within EPOS IP. EIDA and ORFEUS must cope with further expansion of the system and more complex user requirements by developing new techniques and extended services. EIDA NG is being designed around the standard FDSN web services plus two additional new web services: a Routing Service and a QC (quality control) service. This presentation highlights the challenges EIDA needs to address during EPOS IP and focuses on these two new services. The Routing Service can be considered the core of EIDA NG. It is designed to help users and clients locate data within a federated, decentralized data centre system (e.g., EIDA), and a detailed, FDSN-compliant specification of the service has been developed. Our implementation of this service will run at every EIDA node, but it is also capable of running on a user's computer, allowing anyone to define virtual data centres or integrate existing ones. This (meta)service needs to be queried in order to locate the data. Some smart clients (in beta status) have also been provided to offer the user an integrated view of the whole of EIDA, hiding the complexity of its internal structure. The service is open and can be queried by anyone without credentials or authentication. The QC Service addresses user requirements to query for relevant data only. The web service provides detailed information on the contents of the waveform data in an archive; in particular, the following features and quality parameters are provided: gaps, statistical values, availability, overlaps, quality flags, and more. It is a tool for quickly exploring the contents of waveform files before downloading them, or for clients to fulfill user-specific requirements. The API mirrors the FDSN dataselect service almost exactly, with some additional features. The characteristics are computed on fixed daily intervals (day boundaries); in the case of gaps, the service can additionally provide the above features for each continuous data segment in the day interval. The newly developed services, together with the mediator service to be designed and implemented in the near future, will facilitate interoperability and sustainability of the EIDA system and ensure smooth integration with other Thematic (TCS) and Integrated (ICS) Core Services within EPOS.
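
    A minimal sketch of querying an FDSN-style dataselect web service of the kind EIDA nodes expose. The IRIS endpoint and the IU.ANMO station are used purely as a well-known example of the standard FDSN URL pattern, not as an EIDA node.

        # Fetch a short miniSEED segment from an FDSN dataselect service.
        import requests

        params = {
            "net": "IU", "sta": "ANMO", "loc": "00", "cha": "BHZ",
            "starttime": "2015-01-01T00:00:00", "endtime": "2015-01-01T00:10:00",
        }
        resp = requests.get("http://service.iris.edu/fdsnws/dataselect/1/query",
                            params=params)
        resp.raise_for_status()
        with open("segment.mseed", "wb") as f:   # miniSEED waveform data
            f.write(resp.content)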

  18. A Web Portal-Based Time-Aware KML Animation Tool for Exploring Spatiotemporal Dynamics of Hydrological Events

    NASA Astrophysics Data System (ADS)

    Bao, X.; Cai, X.; Liu, Y.

    2009-12-01

    Understanding spatiotemporal dynamics of hydrological events such as storms and droughts is highly valuable for decision making on disaster mitigation and recovery. Virtual Globe-based technologies such as Google Earth and the Open Geospatial Consortium's KML standard show great promise for collaborative exploration of such events using visual analytical approaches. However, two barriers currently limit wider adoption. First, there is no easy way to use open source tools to convert legacy or existing data formats such as shapefiles, GeoTIFF, or web services-based data sources to KML, or to produce time-aware KML files. Second, an integrated web portal-based time-aware animation tool is not yet available; users typically share files in a portal but have no means to explore them visually without leaving the portal environment they are familiar with. We developed a web portal-based time-aware KML animation tool for viewing extreme hydrologic events. The tool is based on the Google Earth JavaScript API and the Java Portlet 2.0 standard (JSR-286), and it is currently deployable in Liferay, one of the most popular open source portal frameworks. We have also developed an open source toolkit, kml-soc-ncsa (http://code.google.com/p/kml-soc-ncsa/), to facilitate the conversion of multiple formats into KML and the creation of time-aware KML files. We illustrate the tool with example cases in which drought and storm events can be explored in both time and space within this web-based KML animation portlet. The tool provides an easy-to-use, browser-based portal environment in which multiple users can collaboratively share and explore their time-aware KML files, improving understanding of the spatiotemporal dynamics of hydrological events.
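
    A minimal sketch of what makes a KML file "time-aware": a TimeSpan element tells Google Earth when a feature should appear on the time slider. This is a generic illustration of the kind of output a converter like kml-soc-ncsa produces; the coordinates and dates are made up.

        # Generate a single time-aware KML placemark.
        KML_TEMPLATE = """<?xml version="1.0" encoding="UTF-8"?>
        <kml xmlns="http://www.opengis.net/kml/2.2">
          <Placemark>
            <name>{name}</name>
            <TimeSpan><begin>{begin}</begin><end>{end}</end></TimeSpan>
            <Point><coordinates>{lon},{lat},0</coordinates></Point>
          </Placemark>
        </kml>"""

        kml = KML_TEMPLATE.format(name="Drought severity station",
                                  begin="2005-06-01", end="2005-09-30",
                                  lon=-88.2, lat=40.1)
        with open("event.kml", "w") as f:
            f.write(kml)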

  19. The Arctic Research Mapping Application (ARMAP): a Geoportal for Visualizing Project-level Information About U.S. Funded Research in the Arctic

    NASA Astrophysics Data System (ADS)

    Kassin, A.; Cody, R. P.; Barba, M.; Gaylord, A. G.; Manley, W. F.; Score, R.; Escarzaga, S. M.; Tweedie, C. E.

    2016-12-01

    The Arctic Research Mapping Application (ARMAP; http://armap.org/) is a suite of online applications and data services that support Arctic science by providing project tracking information (who is doing what, when, and where in the region) for United States Government funded projects. In collaboration with 17 research agencies, project locations are displayed in a visually enhanced web mapping application. Key information about each project is presented along with links to web pages that provide additional information, including links to data where possible. The latest ARMAP iteration has (i) reworked the search user interface (UI) to enable multiple filters to be applied in user-driven queries and (ii) implemented the ArcGIS JavaScript API 4.0 to allow deployment of 3D maps directly in a user's web browser and enhanced customization of popups. Module additions include (i) a dashboard UI powered by a back-end Apache Solr engine to visualize data in intuitive, interactive charts, and (ii) a printing module that allows users to customize maps and export them to different formats (PDF, PPT, GIF, and JPG). New reference layers and an updated ship tracks layer have also been added. These improvements aim to enhance discoverability, improve logistics coordination, identify geographic gaps in research and observation effort, and foster collaboration among the research community. Additionally, ARMAP can be used to demonstrate past, present, and future research effort supported by the U.S. Government.

  20. 76 FR 77881 - Self-Regulatory Organizations; International Securities Exchange, LLC; Notice of Filing and...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-12-14

    ... login fees. The text of the proposed rule change is available on the Exchange's Web site ( http://www... Schedule of Fees regarding the Exchange's API or login fees. ISE currently charges its Members a fee for each login that a Member utilizes for quoting or order entry, with a lesser charge for logins used for...

  1. 76 FR 20752 - Self-Regulatory Organizations; International Securities Exchange, LLC; Notice of Filing and...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-04-13

    ... login fees. The text of the proposed rule change is available on the Exchange's Web site ( http://www... Schedule of Fees regarding the Exchange's API or login fees. ISE currently charges its members a fee for each login that a Member utilizes for quoting or order entry, with a lesser charge for logins used for...

  2. A Networked Learning Model for Construction of Personal Learning Environments in Seventh Grade Life Science

    ERIC Educational Resources Information Center

    Drexler, Wendy

    2010-01-01

    The purpose of this design-based research case study was to apply a networked learning approach to a seventh grade science class at a public school in the southeastern United States. Students adapted Web applications to construct personal learning environments for in-depth scientific inquiry of poisonous and venomous life forms. API widgets were…

  3. Twitter data analysis: temporal and term frequency analysis with real-time event

    NASA Astrophysics Data System (ADS)

    Yadav, Garima; Joshi, Mansi; Sasikala, R.

    2017-11-01

    Over the past few years, the World Wide Web has become a prominent and vast source of user-generated content and opinion data. Among the various social media platforms, Twitter has gained popularity because it offers a fast and effective way for users to share their perspectives on critical and other issues across many domains, such as politics, entertainment, and business. Because this data is generated at scale in the cloud, it has opened doors for researchers in the field of data science and analysis. Twitter provides several APIs for developers: (1) the Search API, which focuses on older tweets; (2) the REST API, which focuses on user details and allows collection of a user's profile, friends, and followers; and (3) the Streaming API, which collects details such as tweets, hashtags, and geolocations. In our work we access the Streaming API to fetch real-time tweets for a dynamically unfolding event. We focus on the entertainment domain, specifically sports, as the IPL T20 was the trending ongoing event. We collect these large volumes of tweets and store them in a MongoDB database as JSON documents. On these documents we perform time-series analysis and term-frequency analysis using techniques such as filtering and information extraction for text mining, fulfilling our objectives of finding interesting moments in the temporal data of the event and ranking players or teams by popularity, which helps identify key influencers on the social media platform.
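
    A minimal sketch (not the authors' code) of the pipeline the abstract describes: stream tweets matching an event keyword and archive them in MongoDB. It uses tweepy's StreamingClient and pymongo; the bearer token is a placeholder, and Twitter's API products and access tiers have changed repeatedly since this paper, so treat this as illustrative only.

        # Stream keyword-matched tweets into MongoDB as JSON-like documents.
        import tweepy
        from pymongo import MongoClient

        BEARER_TOKEN = "..."  # placeholder credential
        collection = MongoClient("mongodb://localhost:27017")["ipl"]["tweets"]

        class TweetArchiver(tweepy.StreamingClient):
            def on_tweet(self, tweet):
                # Each tweet becomes one document in the collection.
                collection.insert_one({"id": tweet.id, "text": tweet.text,
                                       "created_at": tweet.created_at})

        stream = TweetArchiver(BEARER_TOKEN)
        stream.add_rules(tweepy.StreamRule("IPL"))     # track the event keyword
        stream.filter(tweet_fields=["created_at"])     # blocks while streaming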

  4. OpenMSI: A High-Performance Web-Based Platform for Mass Spectrometry Imaging

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rubel, Oliver; Greiner, Annette; Cholia, Shreyas

    Mass spectrometry imaging (MSI) enables researchers to probe endogenous molecules directly within the architecture of the biological matrix. Unfortunately, efficient access, management, and analysis of the data generated by MSI approaches remain major challenges to this rapidly developing field. Despite the availability of numerous dedicated file formats and software packages, it is a widely held viewpoint that the biggest challenge is simply opening, sharing, and analyzing a file without loss of information. Here we present OpenMSI, a software framework and platform that addresses these challenges via an advanced, high-performance, extensible file format and web API for remote data access (http://openmsi.nersc.gov). The OpenMSI file format supports storage of raw MSI data, metadata, and derived analyses in a single, self-describing format based on HDF5 and is supported by a large range of analysis software (e.g., Matlab and R) and programming languages (e.g., C++, Fortran, and Python). Careful optimization of the storage layout of MSI data sets using chunking, compression, and data replication accelerates common, selective data access operations while minimizing data storage requirements, and is a critical enabler of rapid data I/O. The OpenMSI file format has been shown to provide a >2000-fold improvement for image access operations, enabling spectrum and image retrieval in less than 0.3 s across the Internet even for 50 GB MSI data sets. To make remote high-performance compute resources accessible for analysis and to facilitate data sharing and collaboration, we describe an easy-to-use yet powerful web API, enabling fast and convenient access to MSI data, metadata, and derived analysis results stored remotely, to facilitate high-performance data analysis and enable implementation of web-based data sharing, visualization, and analysis.
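
    A minimal sketch of the selective-access pattern the abstract describes, reading an HDF5-backed MSI file with h5py. The dataset path "/entry_0/data_0" follows the layout described in OpenMSI documentation, but treat both the path and the file name as assumptions.

        # Selective reads from a chunked HDF5 MSI data cube.
        import h5py

        with h5py.File("example_msi.h5", "r") as f:
            data = f["/entry_0/data_0"]      # (x, y, m/z) data cube
            spectrum = data[10, 20, :]       # one full spectrum; chunked storage
                                             # makes this selective read cheap
            ion_image = data[:, :, 5000]     # a single m/z slice as an image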

  5. STEREO in-situ data analysis

    NASA Astrophysics Data System (ADS)

    Schroeder, P.; Luhmann, J.; Davis, A.; Russell, C.

    STEREO's IMPACT (In-situ Measurements of Particles and CME Transients) investigation provides the first opportunity for long-duration, detailed observations of 1 AU magnetic field structures, plasma, suprathermal electrons, and energetic particles at points bracketing Earth's heliospheric location. The PLASTIC instrument will make plasma ion composition measurements, completing STEREO's comprehensive in-situ perspective. Stereoscopic 3D information from the STEREO SECCHI imagers and SWAVES radio experiment will make it possible to use both multipoint and quadrature studies to connect interplanetary coronal mass ejections (ICMEs) and solar wind structures to CMEs and coronal holes observed at the Sun. The uniqueness of the STEREO mission requires novel data analysis tools and techniques to take advantage of the mission's full scientific potential. An interactive browser with the ability to create publication-quality plots is being developed, which will integrate STEREO's in-situ data with data from a variety of other missions, including WIND and ACE. Also, an application program interface (API) will be provided, allowing users to create custom software that ties directly into STEREO's data set. The API will allow for more advanced forms of data mining than are currently available through most data web services. A variety of data access techniques and the development of cross-spacecraft data analysis tools will allow the larger scientific community to combine STEREO's unique in-situ data with those of other missions, particularly the L1 missions, and therefore to maximize the mission's scientific return.

  6. Wormhole: A Powerful Data Mashup

    NASA Technical Reports Server (NTRS)

    Widen, David

    2011-01-01

    The mobile platform is quickly becoming the standard way that users interact with online resources. The iOS operating system allows iPhone and iPad users to seamlessly access highly interactive web applications that until recently were only available via a desktop or laptop. Wormhole is an AJAX application implemented as a smart web widget that allows users to easily supplement web pages with data directly from the Instrument Operations Subsystems (IOS) division at JPL. It creates an interactive mashup of a website's core content enhanced by dynamically retrieved images and metadata supplied by IOS using the webification API. Currently, this technology is limited in scope to NASA data; however, it can easily be augmented to serve many other needs. The web widget can be delivered in various ways, including as a bookmarklet. The underlying technology that powers Wormhole also has applications for other divisions running current missions.

  7. Pathview Web: user friendly pathway visualization and data integration

    PubMed Central

    Pant, Gaurav; Bhavnasi, Yeshvant K.; Blanchard, Steven G.; Brouwer, Cory

    2017-01-01

    Pathway analysis is widely used in omics studies. Pathway-based data integration and visualization is a critical component of the analysis. To address this need, we recently developed a novel R package called Pathview. Pathview maps, integrates, and renders a large variety of biological data onto molecular pathway graphs. Here we present the Pathview Web server, which makes pathway visualization and data integration accessible to all scientists, including those without specialized computing skills or resources. Pathview Web features an intuitive graphical web interface and a user-centered design. The server not only expands the core functions of Pathview, but also provides many useful features not available in the offline R package. Importantly, the server presents a comprehensive workflow for both regular and integrated pathway analysis of multiple omics data. In addition, the server provides a RESTful API for programmatic access and convenient integration into third-party software or workflows. Pathview Web is openly and freely accessible at https://pathview.uncc.edu/. PMID:28482075

  8. MyGeoHub: A Collaborative Geospatial Research and Education Platform

    NASA Astrophysics Data System (ADS)

    Kalyanam, R.; Zhao, L.; Biehl, L. L.; Song, C. X.; Merwade, V.; Villoria, N.

    2017-12-01

    Scientific research is increasingly collaborative and globally distributed; research groups now rely on web-based scientific tools and data management systems to simplify their day-to-day collaborative workflows. However, such tools often lack seamless interfaces, requiring researchers to contend with manual data transfers, annotation, and sharing. MyGeoHub is a web platform that supports out-of-the-box, seamless workflows involving data ingestion, metadata extraction, analysis, sharing, and publication. MyGeoHub is built on the HUBzero cyberinfrastructure platform and adds general-purpose software building blocks (GABBs) for geospatial data management, visualization, and analysis. A data management building block, iData, processes geospatial files, extracting metadata for keyword and map-based search while enabling quick previews. iData is pervasive, allowing access through a web interface, through scientific tools on MyGeoHub, or even from mobile field devices via a data service API. GABBs includes a Python map library as well as map widgets that, in a few lines of code, generate complete geospatial visualization web interfaces for scientific tools. GABBs also includes powerful tools that can be used with no programming effort. The GeoBuilder tool provides an intuitive wizard for importing multi-variable, geo-located time series data (typical of sensor readings and GPS trackers) to build visualizations supporting data filtering and plotting. MyGeoHub has been used in tutorials at scientific conferences and in educational activities for K-12 students. MyGeoHub is also constantly evolving; the recently added Jupyter and R Shiny notebook environments enable reproducible, richly interactive geospatial analyses and applications ranging from simple pre-processing to published tools. MyGeoHub is not a monolithic geospatial science gateway; instead, it supports diverse needs ranging from a feature-rich data management system to complex scientific tools and workflows.

  9. MMTF-An efficient file format for the transmission, visualization, and analysis of macromolecular structures.

    PubMed

    Bradley, Anthony R; Rose, Alexander S; Pavelka, Antonín; Valasatava, Yana; Duarte, Jose M; Prlić, Andreas; Rose, Peter W

    2017-06-01

    Recent advances in experimental techniques have led to a rapid growth in the complexity, size, and number of macromolecular structures made available through the Protein Data Bank. This creates a challenge for macromolecular visualization and analysis. Macromolecular structure files, such as PDB or PDBx/mmCIF files, can be slow to transfer and parse, and hard to incorporate into third-party software tools. Here, we present a new binary and compressed data representation, the MacroMolecular Transmission Format (MMTF), as well as software implementations in several languages that have been developed around it, which address these issues. We describe the new format and its APIs and demonstrate that it is several times faster to parse, and about a quarter of the file size of, the current standard format, PDBx/mmCIF. As a consequence of the new data representation, it is now possible to visualize structures with millions of atoms in a web browser, keep the whole PDB archive in memory, or parse it within a few minutes on average computers, which opens up a new way of thinking about how to design and implement efficient algorithms in structural bioinformatics. The PDB archive is available in MMTF file format through web services, with data updated on a weekly basis.
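
    A minimal sketch of fetching and decoding an MMTF file, using the MessagePack encoding the format is built on. The URL pattern follows the RCSB MMTF service as described around the time of publication, and the field names ("structureId", "numAtoms", "numModels") come from the MMTF specification; service availability may have changed since, so treat this as illustrative.

        # Fetch an MMTF structure and decode its MessagePack payload.
        import requests
        import msgpack

        resp = requests.get("https://mmtf.rcsb.org/v1.0/full/4HHB")
        resp.raise_for_status()
        decoded = msgpack.unpackb(resp.content, raw=False)

        # Header fields defined by the MMTF specification.
        print(decoded["structureId"], decoded["numAtoms"], decoded["numModels"])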

  10. MMTF—An efficient file format for the transmission, visualization, and analysis of macromolecular structures

    PubMed Central

    Pavelka, Antonín; Valasatava, Yana; Prlić, Andreas

    2017-01-01

    Recent advances in experimental techniques have led to a rapid growth in the complexity, size, and number of macromolecular structures made available through the Protein Data Bank. This creates a challenge for macromolecular visualization and analysis. Macromolecular structure files, such as PDB or PDBx/mmCIF files, can be slow to transfer and parse, and hard to incorporate into third-party software tools. Here, we present a new binary and compressed data representation, the MacroMolecular Transmission Format (MMTF), as well as software implementations in several languages that have been developed around it, which address these issues. We describe the new format and its APIs and demonstrate that it is several times faster to parse, and about a quarter of the file size of, the current standard format, PDBx/mmCIF. As a consequence of the new data representation, it is now possible to visualize structures with millions of atoms in a web browser, keep the whole PDB archive in memory, or parse it within a few minutes on average computers, which opens up a new way of thinking about how to design and implement efficient algorithms in structural bioinformatics. The PDB archive is available in MMTF file format through web services, with data updated on a weekly basis. PMID:28574982

  11. Armenian Virtual Observatory: Services and Data Sharing

    NASA Astrophysics Data System (ADS)

    Knyazyan, A. V.; Astsatryan, H. V.; Mickaelian, A. M.

    2016-06-01

    The main aim of this article is to introduce the data management and services of the Armenian Virtual Observatory (ArVO), which consist of user-friendly data management mechanisms, a new and productive cross-correlation service, and a data sharing API based on international standards and protocols.

  12. Targeted Expansion Project for Outreach and Treatment for Substance Abuse and HIV Risk Behaviors in Asian and Pacific Islander Communities

    ERIC Educational Resources Information Center

    Nemoto, Tooru; Iwamoto, Mariko; Kamitani, Emiko; Morris, Anne; Sakata, Maria

    2011-01-01

    Access to culturally competent HIV/AIDS and substance abuse treatment and prevention services is limited for Asian and Pacific Islanders (APIs). Based on the intake data for a community outreach project in the San Francisco Bay Area (N = 1,349), HIV risk behaviors were described among the targeted API risk groups. The self-reported HIV prevalence…

  13. The cancer precision medicine knowledge base for structured clinical-grade mutations and interpretations.

    PubMed

    Huang, Linda; Fernandes, Helen; Zia, Hamid; Tavassoli, Peyman; Rennert, Hanna; Pisapia, David; Imielinski, Marcin; Sboner, Andrea; Rubin, Mark A; Kluk, Michael; Elemento, Olivier

    2017-05-01

    This paper describes the Precision Medicine Knowledge Base (PMKB; https://pmkb.weill.cornell.edu ), an interactive online application for collaborative editing, maintenance, and sharing of structured clinical-grade cancer mutation interpretations. PMKB was built using the Ruby on Rails Web application framework. Leveraging existing standards such as the Human Genome Variation Society variant description format, we implemented a data model that links variants to tumor-specific and tissue-specific interpretations. Key features of PMKB include support for all major variant types, standardized authentication, distinct user roles including high-level approvers, and detailed activity history. A REpresentational State Transfer (REST) application-programming interface (API) was implemented to query the PMKB programmatically. At the time of writing, PMKB contains 457 variant descriptions with 281 clinical-grade interpretations. The EGFR, BRAF, KRAS, and KIT genes are associated with the largest numbers of interpretable variants. PMKB's interpretations have been used in over 1500 AmpliSeq tests and 750 whole-exome sequencing tests. The interpretations are accessed either directly via the Web interface or programmatically via the existing API. An accurate and up-to-date knowledge base of genomic alterations of clinical significance is critical to the success of precision medicine programs. The open-access, programmatically accessible PMKB represents an important attempt at creating such a resource in the field of oncology. The PMKB was designed to help collect and maintain clinical-grade mutation interpretations and facilitate reporting for clinical cancer genomic testing. The PMKB was also designed to enable the creation of clinical cancer genomics automated reporting pipelines via an API. © The Author 2016. Published by Oxford University Press on behalf of the American Medical Informatics Association.
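
    A minimal, hypothetical sketch of programmatic access to a REST API like PMKB's. The base URL comes from the paper, but the endpoint path, query parameter, and response fields are illustrative assumptions rather than documented PMKB routes.

        # Hypothetical query for interpretable variants in a given gene.
        import requests

        BASE = "https://pmkb.weill.cornell.edu/api"      # base URL from the paper
        resp = requests.get(f"{BASE}/variants",          # hypothetical route
                            params={"gene": "EGFR"})     # hypothetical parameter
        resp.raise_for_status()
        for variant in resp.json():
            print(variant.get("name"), variant.get("interpretations"))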

  14. The cancer precision medicine knowledge base for structured clinical-grade mutations and interpretations

    PubMed Central

    Huang, Linda; Fernandes, Helen; Zia, Hamid; Tavassoli, Peyman; Rennert, Hanna; Pisapia, David; Imielinski, Marcin; Sboner, Andrea; Rubin, Mark A; Kluk, Michael

    2017-01-01

    Objective: This paper describes the Precision Medicine Knowledge Base (PMKB; https://pmkb.weill.cornell.edu), an interactive online application for collaborative editing, maintenance, and sharing of structured clinical-grade cancer mutation interpretations. Materials and Methods: PMKB was built using the Ruby on Rails Web application framework. Leveraging existing standards such as the Human Genome Variation Society variant description format, we implemented a data model that links variants to tumor-specific and tissue-specific interpretations. Key features of PMKB include support for all major variant types, standardized authentication, distinct user roles including high-level approvers, and detailed activity history. A REpresentational State Transfer (REST) application-programming interface (API) was implemented to query the PMKB programmatically. Results: At the time of writing, PMKB contains 457 variant descriptions with 281 clinical-grade interpretations. The EGFR, BRAF, KRAS, and KIT genes are associated with the largest numbers of interpretable variants. PMKB’s interpretations have been used in over 1500 AmpliSeq tests and 750 whole-exome sequencing tests. The interpretations are accessed either directly via the Web interface or programmatically via the existing API. Discussion: An accurate and up-to-date knowledge base of genomic alterations of clinical significance is critical to the success of precision medicine programs. The open-access, programmatically accessible PMKB represents an important attempt at creating such a resource in the field of oncology. Conclusion: The PMKB was designed to help collect and maintain clinical-grade mutation interpretations and facilitate reporting for clinical cancer genomic testing. The PMKB was also designed to enable the creation of clinical cancer genomics automated reporting pipelines via an API. PMID:27789569

  15. The Transcription Factor Encyclopedia

    PubMed Central

    2012-01-01

    Here we present the Transcription Factor Encyclopedia (TFe), a new web-based compendium of mini review articles on transcription factors (TFs) that is founded on the principles of open access and collaboration. Our consortium of over 100 researchers has collectively contributed over 130 mini review articles on pertinent human, mouse and rat TFs. Notable features of the TFe website include a high-quality PDF generator and web API for programmatic data retrieval. TFe aims to rapidly educate scientists about the TFs they encounter through the delivery of succinct summaries written and vetted by experts in the field. TFe is available at http://www.cisreg.ca/tfe. PMID:22458515

  16. VirtualDose: a software for reporting organ doses from CT for adult and pediatric patients.

    PubMed

    Ding, Aiping; Gao, Yiming; Liu, Haikuan; Caracappa, Peter F; Long, Daniel J; Bolch, Wesley E; Liu, Bob; Xu, X George

    2015-07-21

    This paper describes the development and testing of VirtualDose, a software package for reporting organ doses for adult and pediatric patients who undergo x-ray computed tomography (CT) examinations. The software is based on a comprehensive database of organ doses derived from Monte Carlo (MC) simulations involving a library of 25 anatomically realistic phantoms that represent patients of different ages, body sizes, body masses, and stages of pregnancy. Models of the GE LightSpeed Pro 16 and Siemens SOMATOM Sensation 16 scanners were carefully validated for use in MC dose calculations. The software framework follows the "software as a service" (SaaS) delivery concept, under which multiple clients can access the web-based interface simultaneously from any computer without having to install software locally. The RESTful web service API also allows third-party picture archiving and communication system (PACS) software packages to integrate seamlessly with VirtualDose's functions. Software testing showed that VirtualDose was compatible with numerous operating systems, including Windows, Linux, Apple OS X, and mobile and portable devices. The organ doses from VirtualDose were compared against those reported by CT-Expo and ImPACT, two dosimetry tools based on stylized pediatric and adult patient models known to be anatomically simple. The organ doses reported by VirtualDose differed from those reported by CT-Expo and ImPACT by as much as 300% in some of the patient models. These results confirm the conclusion from past studies that differences in anatomical realism between stylized and voxel phantoms cause significant discrepancies in CT dose estimations.

  17. Components for Maintaining and Publishing Earth Science Vocabularies

    NASA Astrophysics Data System (ADS)

    Cox, S. J. D.; Yu, J.

    2014-12-01

    Shared vocabularies are an important aid to geoscience data interoperability. Many organizations maintain useful vocabularies, with Geological Surveys having a particularly long history of vocabulary and lexicon development. However, the mode of publication is heterogeneous, ranging from PDFs and HTML web pages, spreadsheets and CSV, through various user interfaces and APIs. Update and maintenance range from tightly governed and externally opaque, through various community processes, all the way to crowd-sourcing ('folksonomies'). A general expectation, however, is for greater harmonization and vocabulary re-use. To be successful, this requires (a) standardized content formalization and APIs, and (b) transparent content maintenance and versioning. We have been trialling a combination of software dealing with registration, search, and linking. SKOS is designed for formalizing multi-lingual, hierarchical vocabularies and has been widely adopted in the earth and environmental sciences. SKOS is an RDF vocabulary, for which SPARQL is the standard low-level API. However, for interoperability between SKOS vocabulary sources, a SKOS-based API (i.e., one based on the SKOS predicates prefLabel, broader, narrower, etc.) is required. We have developed SISSvoc for this purpose and used it to deploy a number of vocabularies on behalf of the IUGS, ICS, NERC, OGC, the Australian Government, and CSIRO projects. SISSvoc Search provides a simple search UI on top of one or more SISSvoc sources. Content maintenance comprises many elements, including content formalization, definition updates, and mappings to related vocabularies, and typically a degree of expert judgement is required. To give users confidence, two requirements are paramount: (i) once published, a URI that denotes a vocabulary item must remain dereferenceable; (ii) the history and status of the content denoted by a URI must be available. These requirements match the standard 'registration' paradigm, which is implemented in the Linked Data Registry currently used by the WMO and the UK Environment Agency for publication of vocabularies. Together, these components provide a powerful and flexible system for providing earth science vocabularies to the community, consistent with semantic web and linked data principles.
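
    A minimal sketch of the kind of low-level SPARQL query a SKOS-based vocabulary API sits on top of: listing concepts and their preferred labels. The endpoint URL is a placeholder assumption; the SKOS predicates are standard.

        # Query a SPARQL endpoint for SKOS concepts and their prefLabels.
        from SPARQLWrapper import SPARQLWrapper, JSON

        sparql = SPARQLWrapper("http://example.org/vocab/sparql")  # placeholder endpoint
        sparql.setQuery("""
            PREFIX skos: <http://www.w3.org/2004/02/skos/core#>
            SELECT ?concept ?label WHERE {
                ?concept a skos:Concept ;
                         skos:prefLabel ?label .
            } LIMIT 10
        """)
        sparql.setReturnFormat(JSON)
        for row in sparql.query().convert()["results"]["bindings"]:
            print(row["concept"]["value"], row["label"]["value"])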

  18. Tailoring Earth Observation To Ranchers For Improved Land Management And Profitability: The VegMachine Online Project

    NASA Astrophysics Data System (ADS)

    Scarth, P.; Trevithick, B.; Beutel, T.

    2016-12-01

    VegMachine Online is a freely available browser application that allows ranchers across Australia to view and interact with satellite-derived ground cover state and change maps for their property, and to extract this information in graphical form using interactive tools. It supports the delivery and communication of a massive earth observation data set in an accessible, producer-friendly way. Around 250,000 Landsat TM, ETM+ and OLI images were acquired across Australia, converted to terrain-corrected surface reflectance, and masked for cloud, cloud shadow, terrain shadow, and water. More than 2,500 field sites across the Australian rangelands were used to derive endmembers for a constrained unmixing approach that estimates the per-pixel proportions of bare ground, green vegetation, and non-green vegetation for all images. A seasonal medoid compositing method was used to produce national fractional cover virtual mosaics for each three-month period since 1988. The time series of the green fraction is used to estimate the persistent green due to tree and shrub canopies, and this estimate is used to correct the fractional cover to ground cover for our mixed tree-grass rangeland systems. Finally, deciles are produced for key metrics every season to track each pixel's standing relative to the entire time series. These data are delivered through time series-enabled web mapping services and customised web processing services that allow the full time series over any spatial extent to be interrogated in seconds via a RESTful interface. These services feed a front-end browser application that provides product visualization for any date in the time series, tools to draw or import polygon boundaries, plots of time series ground cover comparisons, views of the effect of historical rainfall, and tools to run the Revised Universal Soil Loss Equation on demand to assess the effect of proposed changes in cover retention. VegMachine Online is already being used by ranchers monitoring paddock condition, by organisations supporting land management initiatives in Great Barrier Reef catchments, and by students developing tools to understand land condition and degradation; the underlying data and APIs also support several other land condition mapping tools.

  19. Digital patient: Personalized and translational data management through the MyHealthAvatar EU project.

    PubMed

    Kondylakis, Haridimos; Spanakis, Emmanouil G; Sfakianakis, Stelios; Sakkalis, Vangelis; Tsiknakis, Manolis; Marias, Kostas; Xia Zhao; Hong Qing Yu; Feng Dong

    2015-08-01

    The advancements in healthcare practice have brought to the fore the need for flexible access to health-related information and created an ever-growing demand for the design and the development of data management infrastructures for translational and personalized medicine. In this paper, we present the data management solution implemented for the MyHealthAvatar EU research project, a project that attempts to create a digital representation of a patient's health status. The platform is capable of aggregating several knowledge sources relevant for the provision of individualized personal services. To this end, state of the art technologies are exploited, such as ontologies to model all available information, semantic integration to enable data and query translation and a variety of linking services to allow connecting to external sources. All original information is stored in a NoSQL database for reasons of efficiency and fault tolerance. Then it is semantically uplifted through a semantic warehouse which enables efficient access to it. All different technologies are combined to create a novel web-based platform allowing seamless user interaction through APIs that support personalized, granular and secure access to the relevant information.

  20. WorldWide Telescope in Research and Education

    NASA Astrophysics Data System (ADS)

    Goodman, A.; Fay, J.; Muench, A.; Pepe, A.; Udompraseret, P.; Wong, C.

    2012-09-01

    The WorldWide Telescope computer program, released to researchers and the public as a free resource in 2008 by Microsoft Research, has changed the way the ever-growing Universe of online astronomical data is viewed and understood. The WWT program can be thought of as a scriptable, interactive, richly visual browser of the multi-wavelength Sky as we see it from Earth, and of the Universe as we would travel within it. In its web API format, WWT is being used as a service to display professional research data. In its desktop format, WWT works in concert (thanks to SAMP and other IVOA standards) with more traditional research applications such as ds9, Aladin and TOPCAT. The WWT Ambassadors Program (founded in 2009) recruits and trains astrophysically-literate volunteers (including retirees) who use WWT as a teaching tool in online, classroom, and informal educational settings. Early quantitative studies of WWTA indicate that student experiences with WWT enhance science learning dramatically. Thanks to the wealth of data it can access, and the growing number of services to which it connects, WWT is now a key linking technology in the Seamless Astronomy environment we seek to offer researchers, teachers, and students alike.

  1. ClinGen Pathogenicity Calculator: a configurable system for assessing pathogenicity of genetic variants.

    PubMed

    Patel, Ronak Y; Shah, Neethu; Jackson, Andrew R; Ghosh, Rajarshi; Pawliczek, Piotr; Paithankar, Sameer; Baker, Aaron; Riehle, Kevin; Chen, Hailin; Milosavljevic, Sofia; Bizon, Chris; Rynearson, Shawn; Nelson, Tristan; Jarvik, Gail P; Rehm, Heidi L; Harrison, Steven M; Azzariti, Danielle; Powell, Bradford; Babb, Larry; Plon, Sharon E; Milosavljevic, Aleksandar

    2017-01-12

    The success of the clinical use of sequencing-based tests (from single genes to genomes) depends on the accuracy and consistency of variant interpretation. Aiming to improve the interpretation process through practice guidelines, the American College of Medical Genetics and Genomics (ACMG) and the Association for Molecular Pathology (AMP) have published standards and guidelines for the interpretation of sequence variants. However, manual application of the guidelines is tedious and prone to human error. Web-based tools and software systems may not only address this problem but also document reasoning and supporting evidence, thus enabling transparency of evidence-based reasoning and resolution of discordant interpretations. In this report, we describe the design, implementation, and initial testing of the Clinical Genome Resource (ClinGen) Pathogenicity Calculator, a configurable system and web service for the assessment of pathogenicity of Mendelian germline sequence variants. The system allows users to enter the applicable ACMG/AMP-style evidence tags for a specific allele, with links to supporting data for each tag, and to generate a guideline-based pathogenicity assessment for the allele. Through automation and comprehensive documentation of evidence codes, the system facilitates more accurate application of the ACMG/AMP guidelines, improves standardization in variant classification, and facilitates collaborative resolution of discordances. The rules of reasoning are configurable with gene-specific or disease-specific guideline variations (e.g., cardiomyopathy-specific frequency thresholds and functional assays). The software is modular, equipped with robust application programming interfaces (APIs), and available under a free open source license and as a cloud-hosted web service, thus facilitating both stand-alone use and integration with existing variant curation and interpretation systems. The Pathogenicity Calculator is accessible at http://calculator.clinicalgenome.org. By enabling evidence-based reasoning about the pathogenicity of genetic variants and by documenting supporting evidence, the Calculator contributes toward the creation of a knowledge commons and more accurate interpretation of sequence variants in research and clinical care.

  2. An Automatic Web Service Composition Framework Using QoS-Based Web Service Ranking Algorithm.

    PubMed

    Mallayya, Deivamani; Ramachandran, Baskaran; Viswanathan, Suganya

    2015-01-01

    Web services have become the technology of choice for service-oriented computing to meet interoperability demands in web applications. In the Internet era, the exponential growth in the number of web services makes quality of service (QoS) an essential parameter for discriminating among them. In this paper, a user preference based web service ranking (UPWSR) algorithm is proposed to rank web services based on user preferences and the QoS aspects of the web service. When a user's request cannot be fulfilled by a single atomic service, several existing services must be composed and delivered as a composition. The proposed framework allows the user to specify local and global constraints for composite web services, which improves flexibility. The UPWSR algorithm identifies the best-fit services for each task in the user request and, by choosing the number of candidate services for each task, reduces the time to generate composition plans. To tackle the web service composition problem, a QoS-aware automatic web service composition (QAWSC) algorithm is proposed, based on the QoS aspects of the web services and user preferences. The proposed framework also allows the user to provide feedback about the composite service, which improves the reputation of its component services.
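
    A minimal sketch (not the paper's UPWSR implementation) of the general idea behind QoS-based ranking: score each candidate service by a weighted sum of its QoS attributes under user-preference weights. The attribute names, weights, and the assumption that response time is normalized to [0, 1] are all illustrative.

        # Rank candidate services by a weighted QoS score, best first.
        from typing import Dict, List

        def rank_services(candidates: List[Dict], weights: Dict[str, float]) -> List[Dict]:
            """Return candidates sorted by weighted QoS score (best first)."""
            def score(svc: Dict) -> float:
                # Higher is better for availability and reliability; lower is
                # better for response time, so invert it before weighting.
                return (weights["availability"] * svc["availability"]
                        + weights["reliability"] * svc["reliability"]
                        + weights["response_time"] * (1.0 - svc["response_time"]))
            return sorted(candidates, key=score, reverse=True)

        services = [
            {"name": "svcA", "availability": 0.99, "reliability": 0.95, "response_time": 0.2},
            {"name": "svcB", "availability": 0.97, "reliability": 0.99, "response_time": 0.1},
        ]
        prefs = {"availability": 0.5, "reliability": 0.3, "response_time": 0.2}
        print([s["name"] for s in rank_services(services, prefs)])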

  3. Flexible Web services integration: a novel personalised social approach

    NASA Astrophysics Data System (ADS)

    Metrouh, Abdelmalek; Mokhati, Farid

    2018-05-01

    Dynamic composition or integration remains one of the key objectives of Web services technology. This paper proposes an innovative approach to dynamic Web service composition based on functional and non-functional attributes and individual preferences. In this approach, social networks of Web services are used to maintain interactions between Web services in order to select and compose those most closely related to the user's preferences. We use the concept of a Web service community within a social network of Web services to considerably reduce the search space. These communities are created through the direct involvement of Web service providers.

  4. An Automatic Web Service Composition Framework Using QoS-Based Web Service Ranking Algorithm

    PubMed Central

    Mallayya, Deivamani; Ramachandran, Baskaran; Viswanathan, Suganya

    2015-01-01

    Web services have become the technology of choice for service-oriented computing to meet interoperability demands in web applications. In the Internet era, the exponential growth in the number of web services makes quality of service (QoS) an essential parameter for discriminating among them. In this paper, a user preference based web service ranking (UPWSR) algorithm is proposed to rank web services based on user preferences and the QoS aspects of the web service. When a user's request cannot be fulfilled by a single atomic service, several existing services must be composed and delivered as a composition. The proposed framework allows the user to specify local and global constraints for composite web services, which improves flexibility. The UPWSR algorithm identifies the best-fit services for each task in the user request and, by choosing the number of candidate services for each task, reduces the time to generate composition plans. To tackle the web service composition problem, a QoS-aware automatic web service composition (QAWSC) algorithm is proposed, based on the QoS aspects of the web services and user preferences. The proposed framework also allows the user to provide feedback about the composite service, which improves the reputation of its component services. PMID:26504894

  5. HydroShare: A Platform for Collaborative Data and Model Sharing in Hydrology

    NASA Astrophysics Data System (ADS)

    Tarboton, D. G.; Idaszak, R.; Horsburgh, J. S.; Ames, D. P.; Goodall, J. L.; Couch, A.; Hooper, R. P.; Dash, P. K.; Stealey, M.; Yi, H.; Bandaragoda, C.; Castronova, A. M.

    2017-12-01

    HydroShare is an online collaboration system for sharing hydrologic data, analytical tools, and models. It supports sharing of, and collaboration around, "resources" which are defined by standardized content types for data formats and models commonly used in hydrology. With HydroShare you can: Share your data and models with colleagues; Manage who has access to the content that you share; Share, access, visualize, and manipulate a broad set of hydrologic data types and models; Use the web services application programming interface (API) to program automated and client access; Publish data and models and obtain a citable digital object identifier (DOI); Aggregate your resources into collections; Discover and access data and models published by others; Use web apps to visualize, analyze, and run models on data in HydroShare. This presentation describes the functionality and architecture of HydroShare, highlighting its use as a virtual environment supporting education and research. HydroShare has components that support: (1) resource storage, (2) resource exploration, and (3) web apps for actions on resources. The HydroShare data discovery, sharing, and publishing functions, as well as the HydroShare web apps, provide the capability to analyze data and execute models completely in the cloud (on servers remote from the user), overcoming desktop platform limitations. The HydroShare GIS app provides a basic capability to visualize spatial data. The HydroShare JupyterHub Notebook app provides flexible, documentable execution of Python code snippets for analysis and modeling, in a way that lets results be shared among HydroShare users and groups to support research collaboration and education. We discuss how these developments can support different types of educational efforts in hydrology, where being completely web based is valuable in an educational setting because students all have access to the same functionality regardless of their computers.
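
    A minimal sketch of the kind of automated access the HydroShare web services API enables, listing public resources over HTTP. The endpoint follows HydroShare's published API root ("hsapi"); the response field names should be checked against the current API documentation before use.

        # List public HydroShare resources via the hsapi REST endpoint.
        import requests

        resp = requests.get("https://www.hydroshare.org/hsapi/resource/",
                            params={"page": 1})
        resp.raise_for_status()
        for res in resp.json().get("results", []):
            print(res.get("resource_id"), res.get("resource_title"))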

  6. NASA's Lunar and Planetary Mapping and Modeling Program

    NASA Astrophysics Data System (ADS)

    Law, E.; Day, B. H.; Kim, R. M.; Bui, B.; Malhotra, S.; Chang, G.; Sadaqathullah, S.; Arevalo, E.; Vu, Q. A.

    2016-12-01

    NASA's Lunar and Planetary Mapping and Modeling Program produces a suite of online visualization and analysis tools. Originally designed for mission planning and science, these portals offer great benefits for education and public outreach (EPO), providing access to data from a wide range of instruments aboard a variety of past and current missions. As a component of NASA's Science EPO Infrastructure, they are available as resources for NASA STEM EPO programs, and to the greater EPO community. As new missions are planned to a variety of planetary bodies, these tools are facilitating the public's understanding of the missions and engaging the public in the process of identifying and selecting where these missions will land. There are currently three web portals in the program: the Lunar Mapping and Modeling Portal or LMMP (http://lmmp.nasa.gov), Vesta Trek (http://vestatrek.jpl.nasa.gov), and Mars Trek (http://marstrek.jpl.nasa.gov). Portals for additional planetary bodies are planned. As web-based toolsets, the portals do not require users to purchase or install any software beyond current web browsers. The portals provide analysis tools for measurement and study of planetary terrain. They allow data to be layered and adjusted to optimize visualization. Visualizations are easily stored and shared. The portals provide 3D visualization and give users the ability to mark terrain for generation of STL files that can be directed to 3D printers. Such 3D prints are valuable tools in museums, public exhibits, and classrooms - especially for the visually impaired. Along with the web portals, the program supports additional clients, web services, and APIs that facilitate dissemination of planetary data to a range of external applications and venues. NASA challenges and hackathons are also providing members of the software development community opportunities to participate in tool development and leverage data from the portals.

  7. Activity-Centric Approach to Distributed Programming

    NASA Technical Reports Server (NTRS)

    Levy, Renato; Satapathy, Goutam; Lang, Jun

    2004-01-01

    The first phase of an effort to develop a NASA version of the Cybele software system has been completed. To give meaning to even a highly abbreviated summary of the modifications to be embodied in the NASA version, it is necessary to present the following background information on Cybele: Cybele is a proprietary software infrastructure for use by programmers in developing agent-based application programs [complex application programs that contain autonomous, interacting components (agents)]. Cybele provides support for event handling from multiple sources, multithreading, concurrency control, migration, and load balancing. A Cybele agent follows a programming paradigm, called activity-centric programming, that enables an abstraction over system-level thread mechanisms. Activity-centric programming relieves application programmers of the complex tasks of thread management, concurrency control, and event management. In order to provide such functionality, activity-centric programming demands support from other layers of software. This concludes the background information. In the first phase of the present development, a new architecture for Cybele was defined. In this architecture, Cybele follows a modular, service-based approach to coupling of the programming and service layers of the software architecture. In a service-based approach, the functionalities supported by activity-centric programming are apportioned, according to their characteristics, among several groups called services. A well-defined interface among all such services serves as a path that facilitates the maintenance and enhancement of those services without adverse effect on the whole software framework. The activity-centric application programming interface (API) is part of a kernel. The kernel API calls the services by use of their published interface. This approach makes it possible for any application code written exclusively under the API to be portable to any configuration of Cybele.

  8. Proposal for a Web Encoding Service (WES) for Spatial Data Transactions

    NASA Astrophysics Data System (ADS)

    Siew, C. B.; Peters, S.; Rahman, A. A.

    2015-10-01

    Web service utilization in Spatial Data Infrastructures (SDI) is well established and standardized by the Open Geospatial Consortium (OGC). Similar web services for 3D SDI have also been established in recent years, with extended capabilities to handle 3D spatial data. The increasing popularity of the City Geography Markup Language (CityGML) for 3D city modelling applications leads to the need to handle large spatial datasets for data delivery. This paper revisits the available OGC Web Services (OWS) and proposes the background concepts and requirements for encoding spatial data via a Web Encoding Service (WES). Furthermore, the paper discusses the data flow of the encoder within the web service, e.g., possible integration with the Web Processing Service (WPS) or Web 3D Service (W3DS). This integration could be extended to other available web services for efficient handling of spatial data, especially 3D spatial data.

  9. Helioviewer.org: Simple Solar and Heliospheric Data Visualization

    NASA Astrophysics Data System (ADS)

    Hughitt, V. K.; Ireland, J.; Mueller, D.

    2011-12-01

    Helioviewer.org is a free and open-source web application for exploring solar physics data in a simple and intuitive manner. Over the past several years, Helioviewer.org has enabled thousands of users from across the globe to explore the inner heliosphere, providing access to over ten million images from the SOHO, SDO, and STEREO missions. While Helioviewer.org has seen a surge in use by the public in recent months, it is still ultimately a science tool. The newest version of Helioviewer.org provides access to science-quality data for all available images through the Virtual Solar Observatory (VSO). In addition to providing a powerful platform for browsing heterogeneous sets of solar data, Helioviewer.org also seeks to be as flexible and extensible as possible, providing access to much of its functionality via a simple Application Programming Interface (API). Recently, the Helioviewer.org API was used for two such applications: a WordPress plugin, and a Python library for solar physics data analysis (SunPy). These applications are discussed and examples of API usage are provided. Finally, Helioviewer.org is undergoing continual development, with new features being added on a regular basis. Recent updates to Helioviewer.org are discussed, along with a preview of things to come.
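
    As an illustration of the kind of programmatic access described above, the sketch below queries a Helioviewer-style JSON API with Python's requests library. The endpoint path, parameter names, and sourceId value are assumptions for illustration rather than a definitive reference; the current Helioviewer API documentation should be treated as authoritative.

        import requests

        # Query a Helioviewer-style JSON API for the image nearest a given time.
        # NOTE: endpoint and parameter names are illustrative assumptions; check
        # the official Helioviewer API documentation for the current interface.
        BASE_URL = "https://api.helioviewer.org/v2/getClosestImage/"

        params = {
            "date": "2011-12-01T00:00:00Z",  # observation time of interest
            "sourceId": 14,                  # assumed ID for one SDO/AIA channel
        }

        response = requests.get(BASE_URL, params=params, timeout=30)
        response.raise_for_status()
        metadata = response.json()           # JSON metadata about the nearest image
        print(metadata)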

  10. Development of an asynchronous communication channel between wireless sensor nodes, smartphone devices, and web applications using RESTful Web Services for intelligent farming

    NASA Astrophysics Data System (ADS)

    De Leon, Marlene M.; Estuar, Maria Regina E.; Lim, Hadrian Paulo; Victorino, John Noel C.; Co, Jerelyn; Saddi, Ivan Lester; Paelmo, Sharlene Mae; Dela Cruz, Bon Lemuel

    2017-09-01

    Environment and agriculture related applications have been gaining ground for the past several years and have been the context for research in ubiquitous and pervasive computing. This study is part of a bigger study that uses artificial intelligence in developing models to detect, monitor, and forecast the spread of Fusarium oxysporum cubense TR4 (FOC TR4) on Cavendish bananas cultivated in the Philippines. To implement an Intelligent Farming system, 1) wireless sensor nodes (WSNs) are deployed in Philippine banana plantations to collect soil parameter data that is considered to affect the health of Cavendish bananas, 2) a custom-built smartphone application is used for collecting, storing, and transmitting soil data, plant images, and plant status data to a cloud storage, and 3) a custom-built web application is used to load and display results of physico-chemical analysis of soil, analysis of data models, and geographic locations of plants being monitored. This study discusses the issues, considerations, and solutions implemented in the development of an asynchronous communication channel to ensure that all data collected by WSNs and smartphone applications are transmitted with a high degree of accuracy and reliability. From a design standpoint, standard API documentation on data-type usage is required to avoid inconsistencies in parameter passing. From a technical standpoint, there is a need to include error-handling mechanisms, especially for delays in the transmission of data, as well as a generalized method for parsing through multidimensional arrays of data. Strategies are presented in the paper.
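
    A minimal sketch of the kind of error handling described above, assuming a hypothetical REST endpoint for sensor readings; the URL, payload fields, and retry policy are illustrative assumptions, not the study's actual implementation.

        import time
        import requests

        # Hypothetical endpoint for WSN soil readings; not the study's actual URL.
        ENDPOINT = "https://example.org/api/v1/soil-readings"

        def post_reading(reading: dict, retries: int = 3, backoff: float = 2.0) -> bool:
            """POST one sensor reading, retrying on transient failures or delays."""
            for attempt in range(1, retries + 1):
                try:
                    resp = requests.post(ENDPOINT, json=reading, timeout=10)
                    resp.raise_for_status()
                    return True                    # acknowledged by the server
                except requests.RequestException:
                    if attempt == retries:
                        return False               # caller should queue for resend
                    time.sleep(backoff * attempt)  # linear backoff between retries

        # Example reading with explicitly typed fields, since consistent
        # parameter passing is one of the design concerns raised in the paper.
        ok = post_reading({"node_id": "wsn-01", "soil_ph": 6.4, "moisture": 31.2})
        print("delivered" if ok else "queued for retry")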

  11. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jarocki, John Charles; Zage, David John; Fisher, Andrew N.

    LinkShop is a software tool for applying the method of Linkography to the analysis of time-sequence data. LinkShop provides command line, web, and application programming interfaces (APIs) for the input and processing of time-sequence data, abstraction models, and ontologies. The software creates graph representations of the abstraction model, the ontology, and the derived linkograph. Finally, the tool allows the user to perform statistical measurements of the linkograph and refine the ontology through direct manipulation of the linkograph.

  12. Aladin Lite: Lightweight sky atlas for browsers

    NASA Astrophysics Data System (ADS)

    Boch, Thomas

    2014-02-01

    Aladin Lite is a lightweight version of the Aladin tool, running in the browser and geared towards simple visualization of a sky region. It allows visualization of image surveys (JPEG multi-resolution HEALPix all-sky surveys) and permits superimposing tabular data (VOTable) and footprints (STC-S). Aladin Lite is powered by HTML5 canvas technology, is easily embeddable on any web page, and can also be controlled through a JavaScript API.

  13. Dynamic selection mechanism for quality of service aware web services

    NASA Astrophysics Data System (ADS)

    D'Mello, Demian Antony; Ananthanarayana, V. S.

    2010-02-01

    A web service is an interface of a software component that can be accessed by standard Internet protocols. Web service technology enables application-to-application communication and interoperability. The increasing number of web service providers throughout the globe has produced numerous web services providing the same or similar functionality. This necessitates the use of tools and techniques to search for suitable services available over the Web. UDDI (universal description, discovery and integration) is the first initiative for finding suitable web services based on the requester's functional demands. However, the requester's requirements may also include non-functional aspects like quality of service (QoS). In this paper, the authors define a QoS model for QoS-aware and business-driven web service publishing and selection. The authors propose a QoS requirement format for requesters to specify their complex demands on QoS for web service selection. The authors define a tree structure called the quality constraint tree (QCT) to represent the requester's variety of requirements on QoS properties with varied preferences. The paper proposes a QoS-broker-based architecture for web service selection, which facilitates requesters in specifying their QoS requirements to select the qualitatively optimal web service. A web service selection algorithm is presented, which ranks functionally similar web services based on the degree of satisfaction of the requester's QoS requirements and preferences. The paper defines web service provider qualities to distinguish qualitatively competitive web services. The paper also presents the modelling and selection mechanism for the requester's alternative constraints defined on the QoS. The authors implemented the QoS-broker-based system to prove the correctness of the proposed web service selection mechanism.
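
    The selection idea described above can be illustrated with a simple weighted-satisfaction ranking. The sketch below is a generic illustration under assumed QoS properties, weights, and normalization, not the authors' QCT-based algorithm.

        # Rank functionally similar services by how well they satisfy a
        # requester's QoS preferences. Properties, weights, and the scoring
        # scheme are illustrative assumptions.
        services = {
            "svc-A": {"latency_ms": 120, "availability": 0.999, "cost": 0.02},
            "svc-B": {"latency_ms": 45,  "availability": 0.990, "cost": 0.05},
        }

        # Requester preferences: weight per property, and direction of merit.
        prefs = {
            "latency_ms":   {"weight": 0.5, "lower_is_better": True},
            "availability": {"weight": 0.3, "lower_is_better": False},
            "cost":         {"weight": 0.2, "lower_is_better": True},
        }

        def satisfaction(qos: dict) -> float:
            """Degree of satisfaction: weighted sum of normalized scores."""
            score = 0.0
            for prop, rule in prefs.items():
                values = [s[prop] for s in services.values()]
                lo, hi = min(values), max(values)
                norm = 0.0 if hi == lo else (qos[prop] - lo) / (hi - lo)
                if rule["lower_is_better"]:
                    norm = 1.0 - norm
                score += rule["weight"] * norm
            return score

        ranked = sorted(services, key=lambda n: satisfaction(services[n]), reverse=True)
        print(ranked)  # best-satisfying service first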

  14. Secure and Resilient Cloud Computing for the Department of Defense

    DTIC Science & Technology

    2015-11-16

    platform as a service (PaaS), and software as a service (SaaS)—that target system administrators, developers, and end-users respectively (see Table 2) ... PaaS: interfaces (API) and services; Medium; Amazon Elastic MapReduce, MathWorks Cloud, Red Hat OpenShift. SaaS: full-fledged applications; Low; Google gMail

  15. Towards Monitoring-as-a-service for Scientific Computing Cloud applications using the ElasticSearch ecosystem

    NASA Astrophysics Data System (ADS)

    Bagnasco, S.; Berzano, D.; Guarise, A.; Lusso, S.; Masera, M.; Vallero, S.

    2015-12-01

    The INFN computing centre in Torino hosts a private Cloud, which is managed with the OpenNebula cloud controller. The infrastructure offers Infrastructure-as-a-Service (IaaS) and Platform-as-a-Service (PaaS) services to different scientific computing applications. The main stakeholders of the facility are a grid Tier-2 site for the ALICE collaboration at LHC, an interactive analysis facility for the same experiment, and a grid Tier-2 site for the BESIII collaboration, plus an increasing number of other small tenants. The dynamic allocation of resources to tenants is partially automated. This feature requires detailed monitoring and accounting of resource usage. We set up a monitoring framework to inspect the site activities both in terms of IaaS and of the applications running on the hosted virtual instances. For this purpose we used the ElasticSearch, Logstash and Kibana (ELK) stack. The infrastructure relies on a MySQL database back-end for data preservation and to ensure the flexibility to choose a different monitoring solution if needed. The heterogeneous accounting information is transferred from the database to the ElasticSearch engine via a custom Logstash plugin. Each use case is indexed separately in ElasticSearch, and we set up a collection of Kibana dashboards with pre-defined queries in order to monitor the relevant information in each case. For the IaaS metering, we developed sensors for the OpenNebula API. The IaaS-level information gathered through the API is sent to the MySQL database through a RESTful web service developed ad hoc for this purpose. Moreover, we have developed a billing system for our private Cloud, which relies on the RabbitMQ message queue for asynchronous communication with the database and on the ELK stack for its graphical interface. The Italian Grid accounting framework is also migrating to a similar set-up. Concerning the application level, we used the ROOT plugin TProofMonSenderSQL to collect accounting data from the interactive analysis facility. The BESIII virtual instances used to be monitored with Zabbix; as a proof of concept, we also retrieve the information contained in the Zabbix database. In this way we have achieved a uniform monitoring interface for both the IaaS and the scientific applications, mostly leveraging off-the-shelf tools. At present, we are working to define a model for monitoring-as-a-service, based on the tools described above, which the Cloud tenants can easily configure to suit their specific needs.
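
    As a sketch of the database-to-ElasticSearch transfer described above (which the paper implements as a custom Logstash plugin), the snippet below indexes one accounting record through ElasticSearch's REST API. The index name and document fields are assumptions for illustration.

        import json
        import requests

        # Index one accounting record into ElasticSearch over its REST API.
        # The index name and document fields are illustrative assumptions; the
        # paper's actual transfer is performed by a custom Logstash plugin.
        ES_URL = "http://localhost:9200/iaas-accounting/_doc"

        record = {
            "tenant": "alice-tier2",
            "vm_id": "one-4242",
            "cpu_hours": 12.5,
            "timestamp": "2015-12-01T00:00:00Z",
        }

        resp = requests.post(ES_URL, data=json.dumps(record),
                             headers={"Content-Type": "application/json"})
        resp.raise_for_status()
        print(resp.json().get("result"))  # e.g. "created"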

  16. Designing and developing portable large-scale JavaScript web applications within the Experiment Dashboard framework

    NASA Astrophysics Data System (ADS)

    Andreeva, J.; Dzhunov, I.; Karavakis, E.; Kokoszkiewicz, L.; Nowotka, M.; Saiz, P.; Tuckett, D.

    2012-12-01

    Improvements in web browser performance and web standards compliance, as well as the availability of comprehensive JavaScript libraries, provide an opportunity to develop functionally rich yet intuitive web applications that allow users to access, render and analyse data in novel ways. However, the development of such large-scale JavaScript web applications presents new challenges, in particular with regard to code sustainability and team-based work. We present an approach that meets the challenges of large-scale JavaScript web application design and development, including client-side model-view-controller architecture, design patterns, and JavaScript libraries. Furthermore, we show how the approach leads naturally to the encapsulation of the data source as a web API, allowing applications to be easily ported to new data sources. The Experiment Dashboard framework is used for the development of applications for monitoring the distributed computing activities of virtual organisations on the Worldwide LHC Computing Grid. We demonstrate the benefits of the approach for large-scale JavaScript web applications in this context by examining the design of several Experiment Dashboard applications for data processing, data transfer and site status monitoring, and by showing how they have been ported for different virtual organisations and technologies.

  17. Data Mining and the Twitter Platform for Prescribed Burn and Wildfire Incident Reporting with Geospatial Applications

    NASA Astrophysics Data System (ADS)

    Endsley, K.; McCarty, J. L.

    2012-12-01

    Data mining techniques have been applied to social media in a variety of contexts, from mapping the evolution of the Tahrir Square protests in Egypt to predicting influenza outbreaks. The Twitter platform is a particular favorite due to its robust application programming interface (API) and high throughput; Twitter, Inc. estimated in 2011 that over 2,200 messages or "tweets" are generated every second. Also helpful is Twitter's operational resemblance to the short message service (SMS), better known as "texting," which is available on cellular phones and is the most widely used means of telecommunication in many developing countries. In the United States, Twitter has been used by a number of federal, state and local officials, as well as motivated individuals, to report prescribed burns in advance (sometimes as part of a reporting obligation) or to communicate the emergence of, response to, and containment of wildfires. These reports are unstructured and, like all Twitter messages, limited to 140 UTF-8 characters. Through internal research and development at the Michigan Tech Research Institute, the authors have developed a data mining routine that gathers potential tweets of interest using the Twitter API, eliminates duplicates ("retweets"), and extracts relevant information such as the approximate size and condition of the fire. Most importantly, the message is geocoded and/or contains approximate locational information, allowing prescribed and wildland fires to be mapped. Natural language processing techniques, adapted to improve computational performance, are used to tokenize and tag these elements for each tweet. The entire routine is implemented in the Python programming language, using open-source libraries. As such, it is demonstrated in a web-based framework where prescribed burns and/or wildfires are mapped in real time, visualized through a JavaScript-based mapping client in any web browser. The practices demonstrated here generalize to an SMS platform (or any short text-based platform) and thus provide exciting opportunities for the cultivation of fire or other disaster alerts and response in the U.S. and in the developing world.
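
    A toy version of the extraction step described above, assuming simple keyword and regex rules; the patterns, sample tweets, and fields are illustrative and far cruder than the natural language processing pipeline the authors describe.

        import re

        # Toy extraction of fire reports from tweets: drop retweets, then pull
        # an approximate burn size out of the text. Patterns are illustrative.
        tweets = [
            "Prescribed burn today near Baldwin, MI, approx. 40 acres, light smoke",
            "RT @mifire: Prescribed burn today near Baldwin, MI, approx. 40 acres",
            "Wildfire contained at 120 acres east of town",
        ]

        SIZE_RE = re.compile(r"(\d+(?:\.\d+)?)\s*acres?", re.IGNORECASE)

        reports = []
        for text in tweets:
            if text.startswith("RT "):      # crude duplicate ("retweet") filter
                continue
            match = SIZE_RE.search(text)
            reports.append({
                "text": text,
                "acres": float(match.group(1)) if match else None,
                "prescribed": "prescribed burn" in text.lower(),
            })

        print(reports)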

  18. STEREO In-situ Data Analysis

    NASA Astrophysics Data System (ADS)

    Schroeder, P. C.; Luhmann, J. G.; Davis, A. J.; Russell, C. T.

    2007-05-01

    STEREO's IMPACT (In-situ Measurements of Particles and CME Transients) investigation provides the first opportunity for long duration, detailed observations of 1 AU magnetic field structures, plasma and suprathermal electrons, and energetic particles at points bracketing Earth's heliospheric location. The PLASTIC instrument takes plasma ion composition measurements completing STEREO's comprehensive in-situ perspective. Stereoscopic/3D information from the STEREO SECCHI imagers and SWAVES radio experiment make it possible to use both multipoint and quadrature studies to connect interplanetary Coronal Mass Ejections (ICME) and solar wind structures to CMEs and coronal holes observed at the Sun. The uniqueness of the STEREO mission requires novel data analysis tools and techniques to take advantage of the mission's full scientific potential. An interactive browser with the ability to create publication-quality plots has been developed which integrates STEREO's in-situ data with data from a variety of other missions including WIND and ACE. Static summary plots and a key-parameter type data set with a related online browser provide alternative data access. Finally, an application program interface (API) is provided allowing users to create custom software that ties directly into STEREO's data set. The API allows for more advanced forms of data mining than currently available through most web-based data services. A variety of data access techniques and the development of cross-spacecraft data analysis tools allow the larger scientific community to combine STEREO's unique in-situ data with those of other missions, particularly the L1 missions, and, therefore, to maximize STEREO's scientific potential in gaining a greater understanding of the heliosphere.

  19. NSLS-II HIGH LEVEL APPLICATION INFRASTRUCTURE AND CLIENT API DESIGN

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Shen, G.; Yang, L.

    2011-03-28

    The beam commissioning software framework of the NSLS-II project adopts a client/server based architecture to replace the more traditional monolithic high level application approach. It is an open-structure platform, and we try to provide a narrow API set for client applications. With this narrow API, existing applications developed in different languages under different architectures can be ported to our platform with small modifications. This paper describes the system infrastructure design, the client API, system integration, and the latest progress. As a new 3rd generation synchrotron light source with ultra-low emittance, there are new requirements and challenges in controlling and manipulating the beam. A use case study and a theoretical analysis have been performed to clarify the requirements and challenges for the high level application (HLA) software environment. To satisfy those requirements and challenges, an adequate system architecture for the software framework is critical for beam commissioning, study, and operation. The existing traditional approaches are self-consistent and monolithic. Some of them have adopted the concept of a middle layer to separate low level hardware processing from numerical algorithm computing, physics modelling, data manipulation, plotting, and error handling. However, none of the existing approaches can satisfy the requirements. A new design has been proposed by introducing service oriented architecture technology. The HLA is a combination of tools for accelerator physicists and operators, as in the traditional approach. In NSLS-II, these include monitoring applications and control routines. A scripting environment is very important for the latter part of the HLA, and both parts are designed on a common set of APIs. Physicists and operators are the users of these APIs, while control system engineers and a few accelerator physicists are their developers. With our client/server based approach, we leave how to retrieve information to the developers of the APIs, and how to use the APIs to form a physics application to the users. For example, how channels are related to a magnet, and what the current real-time setting of a magnet is in physics units, are internals of the APIs; a routine measuring chromaticities is a user of the APIs. All users of the APIs work with magnet and instrument names in physics units. Low level communications in current or voltage units are minimized. In this paper, we discuss the recent progress of our infrastructure development and the client API.

  20. An integrated WebGIS framework for volunteered geographic information and social media in soil and water conservation.

    PubMed

    Werts, Joshua D; Mikhailova, Elena A; Post, Christopher J; Sharp, Julia L

    2012-04-01

    Volunteered geographic information and social networking in a WebGIS have the potential to increase public participation in soil and water conservation, promote environmental awareness and change, and provide timely data that may be otherwise unavailable to policymakers in soil and water conservation management. The objectives of this study were: (1) to develop a framework for combining current technologies, computing advances, data sources, and social media; and (2) develop and test an online web mapping interface. The mapping interface integrates Microsoft Silverlight, Bing Maps, ArcGIS Server, Google Picasa Web Albums Data API, RSS, Google Analytics, and Facebook to create a rich user experience. The website allows the public to upload photos and attributes of their own subdivisions or sites they have identified and explore other submissions. The website was made available to the public in early February 2011 at http://www.AbandonedDevelopments.com and evaluated for its potential long-term success in a pilot study.

  1. An Integrated WebGIS Framework for Volunteered Geographic Information and Social Media in Soil and Water Conservation

    NASA Astrophysics Data System (ADS)

    Werts, Joshua D.; Mikhailova, Elena A.; Post, Christopher J.; Sharp, Julia L.

    2012-04-01

    Volunteered geographic information and social networking in a WebGIS have the potential to increase public participation in soil and water conservation, promote environmental awareness and change, and provide timely data that may be otherwise unavailable to policymakers in soil and water conservation management. The objectives of this study were: (1) to develop a framework for combining current technologies, computing advances, data sources, and social media; and (2) develop and test an online web mapping interface. The mapping interface integrates Microsoft Silverlight, Bing Maps, ArcGIS Server, Google Picasa Web Albums Data API, RSS, Google Analytics, and Facebook to create a rich user experience. The website allows the public to upload photos and attributes of their own subdivisions or sites they have identified and explore other submissions. The website was made available to the public in early February 2011 at http://www.AbandonedDevelopments.com and evaluated for its potential long-term success in a pilot study.

  2. Pathview Web: user friendly pathway visualization and data integration.

    PubMed

    Luo, Weijun; Pant, Gaurav; Bhavnasi, Yeshvant K; Blanchard, Steven G; Brouwer, Cory

    2017-07-03

    Pathway analysis is widely used in omics studies. Pathway-based data integration and visualization is a critical component of the analysis. To address this need, we recently developed a novel R package called Pathview. Pathview maps, integrates and renders a large variety of biological data onto molecular pathway graphs. Here we developed the Pathview Web server, so as to make pathway visualization and data integration accessible to all scientists, including those without special computing skills or resources. Pathview Web features an intuitive graphical web interface and a user-centered design. The server not only expands the core functions of Pathview, but also provides many useful features not available in the offline R package. Importantly, the server presents a comprehensive workflow for both regular and integrated pathway analysis of multiple omics data. In addition, the server provides a RESTful API for programmatic access and convenient integration into third-party software or workflows. Pathview Web is openly and freely accessible at https://pathview.uncc.edu/. © The Author(s) 2017. Published by Oxford University Press on behalf of Nucleic Acids Research.

  3. Targeted expansion project for outreach and treatment for substance abuse and HIV risk behaviors in Asian and Pacific Islander communities.

    PubMed

    Nemoto, Tooru; Iwamoto, Mariko; Kamitani, Emiko; Morris, Anne; Sakata, Maria

    2011-04-01

    Access to culturally competent HIV/AIDS and substance abuse treatment and prevention services is limited for Asian and Pacific Islanders (APIs). Based on the intake data for a community outreach project in the San Francisco Bay Area (N = 1,349), HIV risk behaviors were described among the targeted API risk groups. The self-reported HIV prevalence was 6% among men who have sex with men (MSM). Inconsistent condom use for vaginal sex with casual partners in the past 6 months was reported among substance users (43%) and incarcerated participants (60%), whereas 26% of MSM reported inconsistent condom use for anal sex with casual partners. Overall, 56% and 29% had engaged in sex with casual partners under the influence of alcohol and drugs in the past 6 months, respectively. Although API organizations in the Bay Area have spearheaded HIV prevention, future programs must address substance use issues in relation to sexual risk behaviors, specific to API risk groups.

  4. The development, validity, and reliability of the Addiction Profile Index (API).

    PubMed

    Ögel, Kültegin; Evren, Cüneyt; Karadağ, Figen; Gürol, Defne Tamar

    2012-01-01

    The objective of this study was to develop a practical questionnaire for multidimensional assessment of problems associated with alcohol and substance abuse that would also be useful for treatment planning. The Addiction Profile Index (API) is a self-report questionnaire consisting of 37 items and the following 5 subscales: characteristics of substance use; dependency diagnosis; the effects of substance use on the user; craving; motivation to quit using substances. The study included 345 alcohol and/or substance abusers from 2 addiction treatment clinics and a prison addiction service. The validity of the questionnaire was assessed using the Michigan Alcoholism Screening Test (MAST), Readiness to Change Questionnaire (SOCRATES), Penn Alcohol Craving Scale (PACS), Drug Craving Scale (DCS), Structured Clinical Interview for DSM-IV Axis I Disorders (SCID-I), and Addiction Severity Index (ASI). The Cronbach's alpha coefficient for the total API was 0.89, and for the subscales it ranged from 0.63 to 0.86. Item-total correlation coefficients ranged from 0.42 to 0.89. The Spearman-Brown split-half coefficient for the total API was 0.83. In all, 4 factors were obtained using exploratory factor analysis, together representing 52.3% of the total variance. The API craving subscale was observed to be consistent with the PACS, and the API motivation subscale was consistent with SOCRATES. The API total score was strongly correlated with the mean MAST score and with the composite ASI medical status, substance use, legal status, and family social relations subscale scores. Based on ROC analyses, the area under the curve was 0.90. With a total API cut-off score of 4, the scale's sensitivity and specificity were 0.85 and 0.78, respectively. The findings show that the API is a valid and reliable questionnaire that can be used to measure the severity of different dimensions of substance dependency.

  5. Mining twitter: a source for psychological wisdom of the crowds.

    PubMed

    Reips, Ulf-Dietrich; Garaizar, Pablo

    2011-09-01

    Over the last few years, microblogging has gained prominence as a form of personal broadcasting media where information and opinion are mixed together without an established order, usually tightly linked with current reality. Location awareness and promptness provide researchers using the Internet with the opportunity to create "psychological landscapes"--that is, to detect differences and changes in voiced (twittered) emotions, cognitions, and behaviors. In our article, we present iScience Maps, a free Web service for researchers, available from http://maps.iscience.deusto.es/ and http://tweetminer.eu/. Technologically, the service is based on Twitter's streaming and search application programming interfaces (APIs), accessed through several PHP libraries and a JavaScript frontend. This service allows researchers to assess via Twitter the effect of specific events in different places as they are happening and to make comparisons between cities, regions, or countries regarding psychological states and their evolution in the course of an event. In a step-by-step example, it is shown how to replicate a study on affective and personality characteristics inferred from first names (Mehrabian & Piercy, Personality and Social Psychology Bulletin, 19, 755-758, 1993) by mining Twitter data with iScience Maps. Results from the original study are replicated in both world regions we tested (the western U.S. and the U.K./Ireland); we also discovered that the base rate of names is a confound that needs to be controlled for in future research.

  6. Building a Secure and Feature-rich Mobile Mapping Service App Using HTML5: Challenges and Best Practices

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Karthik, Rajasekar; Patlolla, Dilip Reddy; Sorokine, Alexandre

    Managing a wide variety of mobile devices across multiple mobile operating systems is a security challenge for any organization [1, 2]. With the wide adoption of mobile devices to access work-related apps, there is an increase in third-party apps that might either misuse or improperly handle a user's personal or sensitive data [3]. HTML5 has been receiving wide attention for developing cross-platform mobile apps. According to International Data Corporation (IDC), by 2015, 80% of all mobile apps will be based in part or wholly upon HTML5 [4]. Though HTML5 provides a rich set of features for building an app, it is a challenge for organizations to deploy and manage HTML5 apps on a wide variety of devices while keeping security policies intact. In this paper, we describe an upcoming secure mobile environment for HTML5 apps, called Sencha Space, that addresses these issues, and discuss how it will be used to design and build a secure, cross-platform mobile mapping service app. We also describe how HTML5 and a new set of related technologies, such as the Geolocation API, WebGL, OpenLayers 3, and Local Storage, can be used to provide a high-end, high-performance experience for users of the mapping service app.

  7. WebProtégé: a collaborative Web-based platform for editing biomedical ontologies.

    PubMed

    Horridge, Matthew; Tudorache, Tania; Nuylas, Csongor; Vendetti, Jennifer; Noy, Natalya F; Musen, Mark A

    2014-08-15

    WebProtégé is an open-source Web application for editing OWL 2 ontologies. It contains several features to aid collaboration, including support for the discussion of issues, change notification and revision-based change tracking. WebProtégé also features a simple user interface, which is geared towards editing the kinds of class descriptions and annotations that are prevalent throughout biomedical ontologies. Moreover, it is possible to configure the user interface using views that are optimized for editing Open Biomedical Ontology (OBO) class descriptions and metadata. Some of these views are shown in the Supplementary Material and can be seen in WebProtégé itself by configuring the project as an OBO project. WebProtégé is freely available for use on the Web at http://webprotege.stanford.edu. It is implemented in Java and JavaScript using the OWL API and the Google Web Toolkit. All major browsers are supported. For users who do not wish to host their ontologies on the Stanford servers, WebProtégé is available as a Web app that can be run locally using a Servlet container such as Tomcat. Binaries, source code and documentation are available under an open-source license at http://protegewiki.stanford.edu/wiki/WebProtege. © The Author 2014. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.

  8. Enabling Tools and Methods for International, Inter-disciplinary and Educational Collaboration

    NASA Astrophysics Data System (ADS)

    Robinson, E. M.; Hoijarvi, K.; Falke, S.; Fialkowski, E.; Kieffer, M.; Husar, R. B.

    2008-05-01

    In the past, collaboration took place in tightly-knit workgroups whose members had direct connections to each other. Such collaboration was confined to small workgroups and person-to-person communication. Recent developments on the Internet foster virtual workgroups and organizations where dynamic, 'just-in-time' collaboration can take place on a much larger scale. The emergence of virtual workgroups has strongly influenced international, interdisciplinary, and educational activities. In this paper we present an array of enabling tools and methods that incorporate new technologies, including web services, software mashups, tag-based structuring and searching, and wikis for collaborative writing and content organization. Large monolithic, 'do-it-all' software tools are giving way to web service modules combined through service chaining. Application software can now be created using Service Oriented Architecture (SOA). In the air quality community, data providers and users are distributed in space and time, creating barriers to data access. By exposing the data on the Internet, these space and time barriers are lessened. The federated data system DataFed, developed at Washington University, accesses data from autonomous, distributed providers. Through data "wrappers", DataFed provides uniform and standards-based access services to heterogeneous, distributed data. Service orientation not only lowers the entry resistance for service providers, but also allows the creation of user-defined applications and/or mashups. For example, Google Earth's open API allowed many groups to mash their content with Google Earth. Ad hoc tagging gives a rich description of Internet resources, but it has the disadvantage of providing a fuzzy schema. The semantic uniformity of Internet resources can be improved by controlled tagging, which applies a consistent namespace and tag combinations to diverse objects. One example of this is the photo-sharing web application Flickr. Just like data, photos exposed through the Internet can be reused in ways unknown and unanticipated by the provider. For air quality applications, Flickr allowed a rich collection of images of forest fire smoke, wind-blown dust, and haze events to be tagged with controlled tags and used for evaluating subtle features of the events. Wikis, originally used just for collaboratively writing and discussing documents, now also serve as social software workflow managers. For air quality data, wikis provide the means to collaboratively create rich metadata. Wikis become a virtual meeting place to discuss ideas before a workshop or conference, to display tagged Internet resources, and to collaboratively work on documents. Wikis are also useful in the classroom: in class projects, the wiki displays harvested resources, maintains collaborative documents and discussions, and serves as the organizational memory for the project.

  9. Detecting Heap-Spraying Code Injection Attacks in Malicious Web Pages Using Runtime Execution

    NASA Astrophysics Data System (ADS)

    Choi, Younghan; Kim, Hyoungchun; Lee, Donghoon

    The growing use of web services is increasing web browser attacks exponentially. Most attacks use a technique called heap spraying because of its high success rate. Heap spraying executes a malicious code without indicating the exact address of the code by copying it into many heap objects. For this reason, the attack has a high potential to succeed if only the vulnerability is exploited. Thus, attackers have recently begun using this technique because it is easy to use JavaScript to allocate the heap memory area. This paper proposes a novel technique that detects heap spraying attacks by executing a heap object in a real environment, irrespective of the version and patch status of the web browser. This runtime execution is used to detect various forms of heap spraying attacks, such as encoding and polymorphism. Heap objects are executed after being filtered on the basis of patterns of heap spraying attacks, in order to reduce the overhead of the runtime execution. The patterns of heap spraying attacks are based on analysis of how a web browser accesses benign web sites. The heap objects are executed forcibly, after being loaded into memory, by setting the instruction register to their address. Thus, we can execute the malicious code without having to consider the version and patch status of the browser. An object is considered to contain a malicious code if the execution reaches a call instruction and the instruction then accesses the API of system libraries, such as kernel32.dll and ws2_32.dll. To change registers and monitor execution flow, we used a debugger engine. A prototype, named HERAD (HEap spRAying Detector), is implemented and evaluated. In experiments, HERAD detects various forms of exploit code that emulation cannot detect, and some heap spraying attacks that NOZZLE cannot detect. Although it has an execution overhead, HERAD produces a low number of false alarms. The processing time of several minutes is negligible because our research focuses on detecting heap spraying. This research can be applied to existing systems that collect malicious codes, such as Honeypots.

  10. Eldercare Locator

    MedlinePlus

    Welcome to the Eldercare Locator, a public service of the U.S. Administration on Aging connecting you to services for older adults and their families.

  11. GLobal Integrated Design Environment (GLIDE): A Concurrent Engineering Application

    NASA Technical Reports Server (NTRS)

    McGuire, Melissa L.; Kunkel, Matthew R.; Smith, David A.

    2010-01-01

    The GLobal Integrated Design Environment (GLIDE) is a client-server software application purpose-built to mitigate issues associated with real-time data sharing in concurrent engineering environments and to facilitate discipline-to-discipline interaction between multiple engineers and researchers. GLIDE is implemented in multiple programming languages utilizing standardized web protocols to enable secure parameter data sharing between engineers and researchers across the Internet in closed and/or widely distributed working environments. A well-defined, HyperText Transfer Protocol (HTTP) based Application Programming Interface (API) to the GLIDE client/server environment enables users to interact with GLIDE, and with each other, within common and familiar tools. One such common tool, Microsoft Excel (Microsoft Corporation), paired with its add-in API for GLIDE, is discussed in this paper. The top-level examples given demonstrate how this interface improves the efficiency of the design process of a concurrent engineering study while reducing potential errors associated with manually sharing information between study participants.
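
    A sketch of how a client might read and publish shared parameters over an HTTP API in the style described above; the endpoint layout, study name, and payload fields are hypothetical illustrations, not GLIDE's actual interface.

        import requests

        # Hypothetical parameter-sharing endpoints in the style of a GLIDE-like
        # client/server study environment; not the actual GLIDE API.
        BASE = "https://example.org/glide/api/studies/lunar-lander"

        def get_parameter(name: str) -> dict:
            """Fetch the current value (and metadata) of a shared parameter."""
            resp = requests.get(f"{BASE}/parameters/{name}", timeout=10)
            resp.raise_for_status()
            return resp.json()

        def publish_parameter(name: str, value: float, units: str) -> None:
            """Publish a new value so other disciplines see it in near real time."""
            resp = requests.put(f"{BASE}/parameters/{name}",
                                json={"value": value, "units": units}, timeout=10)
            resp.raise_for_status()

        publish_parameter("dry_mass", 1525.0, "kg")
        print(get_parameter("dry_mass"))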

  12. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fox, Kevin

    The software provides a simple web API to allow users to request a time window during which a file will not be removed from cache. HPSS provides the concept of a "purge lock": when a purge lock is set on a file, the file will not be removed from disk and enter a tape-only state. Many network file protocols assume a file is on disk, so it is good to purge-lock a file before transferring it using one of those protocols. HPSS's purge lock system is very coarse-grained, though: a file is either purge locked or not. Nothing enforces quotas, timely unlocking of purge locks, or management of the races inherent in multiple users wanting to lock/unlock the same file. The Purge Lock Server lets you, through a simple REST API, specify a list of files to purge lock and an expire time, and the system will ensure things happen properly.
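
    A minimal client sketch for the kind of REST interface described above; the URL and JSON fields are assumptions for illustration, not the Purge Lock Server's documented API.

        import requests

        # Hypothetical Purge Lock Server endpoint; the real service's URL and
        # payload schema may differ from this illustration.
        SERVER = "https://example.org/purgelock/api/locks"

        def request_purge_lock(paths: list[str], expire: str) -> dict:
            """Ask the server to purge-lock files until the given expiry time."""
            payload = {
                "files": paths,          # HPSS paths to keep on disk
                "expires": expire,       # ISO 8601 expiry timestamp
            }
            resp = requests.post(SERVER, json=payload, timeout=30)
            resp.raise_for_status()
            return resp.json()           # e.g. lock IDs and status

        result = request_purge_lock(
            ["/hpss/project/run42/data.h5"], "2016-01-01T00:00:00Z")
        print(result)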

  13. When Will It Be... USNO Seasons and Apsides Calculator

    NASA Astrophysics Data System (ADS)

    Chizek Frouard, Malynda; Bartlett, Jennifer Lynn

    2018-01-01

    The turning of the Earth's seasons (solstices and equinoxes) and apsides (perihelions and aphelions) are times often used in observational astronomy and are also of interest to the public. To avoid tedious calculations, the U.S. Naval Observatory (USNO) has developed an on-line interactive calculator, Earth's Seasons and Apsides, to provide information about events between 1600 and 2200. The new data service uses an Application Programming Interface (API), which returns values in JavaScript Object Notation (JSON) that can be incorporated into third-party websites or applications. For a requested year, the Earth's Seasons and Apsides API provides the Gregorian calendar date and time of the Vernal Equinox, Summer Solstice, Autumnal Equinox, Winter Solstice, Aphelion, and Perihelion. The user may specify the time zone for the results, including the optional addition of U.S. daylight saving time for years after 1966. On-line documentation for using the API-enabled Earth's Seasons and Apsides service is available, including sample calls (http://aa.usno.navy.mil/data/docs/api.php). A traditional forms-based interface is available as well (http://aa.usno.navy.mil/data/docs/EarthSeasons.php). This data service replaces the popular Earth's Seasons: Equinoxes, Solstices, Perihelion, and Aphelion page that provided a static list of events for 2000–2025. The USNO also provides API-enabled data services for Complete Sun and Moon Data for One Day (http://aa.usno.navy.mil/data/docs/RS_OneDay.php), Dates of the Primary Phases of the Moon (http://aa.usno.navy.mil/data/docs/MoonPhase.php), Selected Christian Observances (http://aa.usno.navy.mil/data/docs/easter.php), Selected Islamic Observances (http://aa.usno.navy.mil/data/docs/islamic.php), Selected Jewish Observances (http://aa.usno.navy.mil/data/docs/passover.php), Julian Date Conversion (http://aa.usno.navy.mil/data/docs/JulianDate.php), and Sidereal Time (http://aa.usno.navy.mil/data/docs/siderealtime.php), as well as its Solar Eclipse Computer (http://aa.usno.navy.mil/data/docs/SolarEclipses.php).
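
    The sketch below shows how such an API might be called from Python and the JSON result consumed. The endpoint path, parameter names, and response layout are assumptions for illustration only; the documentation page cited above is the authoritative reference.

        import requests

        # Hypothetical request to the USNO seasons/apsides API; parameter names
        # are illustrative. See http://aa.usno.navy.mil/data/docs/api.php for
        # the authoritative interface.
        URL = "http://aa.usno.navy.mil/api/seasons"

        params = {
            "year": 2018,   # any year from 1600 to 2200, per the service description
            "tz": -5,       # hypothetical time-zone offset parameter
            "dst": "true",  # hypothetical U.S. daylight-saving-time flag
        }

        resp = requests.get(URL, params=params, timeout=30)
        resp.raise_for_status()
        for event in resp.json().get("data", []):  # assumed JSON layout
            print(event)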

  14. Design and Implementation of a Three-Tiered Web-Based Inventory Ordering and Tracking System Prototype Using CORBA and Java

    DTIC Science & Technology

    2000-03-01

    languages yet still be able to access the legacy relational databases that businesses have huge investments in. JDBC is a low-level API designed for ... consider the return on investment. The system requirements, discussed in Chapter II, are the main source of input to developing the relational ... 1996. Inprise, Gatekeeper Guide, Inprise Corporation, 1999. Kroenke, D., Database Processing: Fundamentals, Design, and Implementation, Sixth Edition

  15. ReSEARCH: A Requirements Search Engine: Progress Report 2

    DTIC Science & Technology

    2008-09-01

    and provides a convenient user interface for the search process. Ideally, the web application would be based on Tomcat, a free Java Servlet and JSP ... Implementation issues: Lucene Java is an Open Source project, available under the Apache License, which provides an accessible API for the development of ... from the Apache Lucene website (Lucene-java Wiki, 2008). A search application developed with Lucene consists of the same two major components

  16. SPDF Data and Orbit Services Supporting Open Access, Use and Archiving of MMS Data

    NASA Astrophysics Data System (ADS)

    McGuire, R. E.; Bilitza, D.; Candey, R. M.; Chimiak, R.; Cooper, J. F.; Garcia, L. N.; Harris, B. T.; Johnson, R. C.; Kovalick, T. J.; Lal, N.; Leckner, H. A.; Liu, M. H.; Papitashvili, N. E.; Roberts, D. A.; Yurow, R. E.

    2015-12-01

    NASA's Space Physics Data Facility (SPDF) project is now serving MMS definitive and predictive interactive orbit plots, listings, and conjunction calculations through our SSCWeb and 4D Orbit Viewer services. In March 2016, and in parallel with the MMS Science Data Center (SDC) at LASP, SPDF will begin publicly serving a complete set of MMS Level-2 and higher, survey and burst-mode science data products from all four spacecraft and all instruments. The initial Level-2 data available will be from September 2015 to early February 2016, with Level-2 products subsequently validated and publicly available with an approximately one month lag. All MMS Level-2 and higher data products are produced in standard CDF format with standard ISTP/SPDF metadata and will be served by SPDF through our CDAWeb data service, including our web services and associated APIs for IDL and Matlab users, and through direct FTP/HTTP directory browse and file downloads. SPDF's ingest, archival preservation, and active serving of current MMS science data is part of our role as an active heliophysics final archive. SPDF's ingest of complete and current science data products from other active heliophysics missions will help enable coordinated and correlative MMS science analysis by the open international science community, with current data from THEMIS, the Van Allen Probes, and other missions including TWINS, Cluster, ACE, and Wind, from >120 ground magnetometer stations, and from instruments on the NOAA GOES and POES spacecraft. Please see the related Candey et al. paper on "SPDF Ancillary Services and Technologies Supporting Open Access, Use and Archiving of MMS Data" for other aspects of what SPDF is doing. All SPDF data and services are available from the SPDF home page at http://spdf.gsfc.nasa.gov.

  17. Personalization of Rule-based Web Services.

    PubMed

    Choi, Okkyung; Han, Sang Yong

    2008-04-04

    Nowadays Web users have clearly expressed their wish to receive personalized services directly. Personalization is the way to tailor services directly to the immediate requirements of the user. However, the current Web Services System does not provide any features supporting this, such as consideration of personalization of services and intelligent matchmaking. In this research, a flexible, personalized Rule-based Web Services System is proposed to address these problems and to enable efficient search, discovery and construction across general Web documents and Semantic Web documents. The system utilizes matchmaking among service requesters', service providers' and users' preferences using a Rule-based Search Method, and subsequently ranks search results. A prototype of efficient Web Services search and construction for the suggested system is developed based on the current work.

  18. VirtualDose: a software for reporting organ doses from CT for adult and pediatric patients

    NASA Astrophysics Data System (ADS)

    Ding, Aiping; Gao, Yiming; Liu, Haikuan; Caracappa, Peter F.; Long, Daniel J.; Bolch, Wesley E.; Liu, Bob; Xu, X. George

    2015-07-01

    This paper describes the development and testing of VirtualDose—a software for reporting organ doses for adult and pediatric patients who undergo x-ray computed tomography (CT) examinations. The software is based on a comprehensive database of organ doses derived from Monte Carlo (MC) simulations involving a library of 25 anatomically realistic phantoms that represent patients of different ages, body sizes, body masses, and pregnant stages. Models of GE Lightspeed Pro 16 and Siemens SOMATOM Sensation 16 scanners were carefully validated for use in MC dose calculations. The software framework is designed with the ‘software as a service (SaaS)’ delivery concept under which multiple clients can access the web-based interface simultaneously from any computer without having to install software locally. The RESTful web service API also allows a third-party picture archiving and communication system software package to seamlessly integrate with VirtualDose’s functions. Software testing showed that VirtualDose was compatible with numerous operating systems including Windows, Linux, Apple OS X, and mobile and portable devices. The organ doses from VirtualDose were compared against those reported by CT-Expo and ImPACT—two dosimetry tools that were based on the stylized pediatric and adult patient models that were known to be anatomically simple. The organ doses reported by VirtualDose differed from those reported by CT-Expo and ImPACT by as much as 300% in some of the patient models. These results confirm the conclusion from past studies that differences in anatomical realism offered by stylized and voxel phantoms have caused significant discrepancies in CT dose estimations.

  19. Web based tools for visualizing imaging data and development of XNATView, a zero footprint image viewer

    PubMed Central

    Gutman, David A.; Dunn, William D.; Cobb, Jake; Stoner, Richard M.; Kalpathy-Cramer, Jayashree; Erickson, Bradley

    2014-01-01

    Advances in web technologies now allow direct visualization of imaging data sets without necessitating the download of large file sets or the installation of software. This allows centralization of file storage and facilitates image review and analysis. XNATView is a light framework recently developed in our lab to visualize DICOM images stored in The Extensible Neuroimaging Archive Toolkit (XNAT). It consists of a PyXNAT-based framework to wrap around the REST application programming interface (API) and query the data in XNAT. XNATView was developed to simplify quality assurance, help organize imaging data, and facilitate data sharing for intra- and inter-laboratory collaborations. Its zero-footprint design allows the user to connect to XNAT from a web browser, navigate through projects, experiments, and subjects, and view DICOM images with accompanying metadata all within a single viewing instance. PMID:24904399
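
    To illustrate the PyXNAT layer mentioned above, here is a brief sketch of connecting to an XNAT server and walking its project hierarchy. The server URL and credentials are placeholders, and the calls shown reflect PyXNAT's general usage rather than XNATView's internal wrapper code.

        from pyxnat import Interface

        # Connect to an XNAT server (placeholder URL/credentials) and list data.
        # This mirrors general PyXNAT usage, not XNATView's internal code.
        xnat = Interface(server="https://central.xnat.org",
                         user="demo_user", password="demo_pass")

        for project_id in xnat.select.projects().get():   # all visible projects
            project = xnat.select.project(project_id)
            subjects = project.subjects().get()           # subject IDs in project
            print(project_id, len(subjects), "subjects")

        xnat.disconnect()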

  20. WebGL-enabled 3D visualization of a Solar Flare Simulation

    NASA Astrophysics Data System (ADS)

    Chen, A.; Cheung, C. M. M.; Chintzoglou, G.

    2016-12-01

    The visualization of magnetohydrodynamic (MHD) simulations of astrophysical systems such as solar flares often requires specialized software packages (e.g. Paraview and VAPOR). A shortcoming of using such software packages is the inability to share our findings with the public and scientific community in an interactive and engaging manner. By using the JavaScript-based WebGL application programming interface (API) and the three.js JavaScript package, we create an online, in-browser experience for rendering solar flare simulations that is interactive and accessible to the general public. The WebGL renderer displays objects such as vector flow fields, streamlines and textured isosurfaces. This allows the user to explore the spatial relation between the solar coronal magnetic field and the thermodynamic structure of the plasma in which the magnetic field is embedded. Plans for extending the features of the renderer will also be presented.

  1. Distributed spatial information integration based on web service

    NASA Astrophysics Data System (ADS)

    Tong, Hengjian; Zhang, Yun; Shao, Zhenfeng

    2008-10-01

    Spatial information systems and spatial information in different geographic locations usually belong to different organizations. They are distributed and often heterogeneous and independent from each other. This leads to the formation of many isolated spatial information islands, reducing the efficiency of information utilization. In order to address this issue, we present a method for effective spatial information integration based on web services. The method applies asynchronous invocation and dynamic invocation of web services to implement distributed, parallel execution of web map services. All isolated information islands are connected by the dispatcher of web services and its registration database to form a uniform collaborative system. According to the web service registration database, the dispatcher can dynamically invoke each web map service through an asynchronous delegating mechanism, so all of the web map services can execute at the same time. When each web map service is done, an image is returned to the dispatcher, and after all of the web services are done, the images are transparently overlaid together in the dispatcher. Thus, users can browse and analyze the integrated spatial information. Experiments demonstrate that the utilization rate of spatial information resources is significantly raised through the proposed method of distributed spatial information integration.
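
    A condensed sketch of the dispatcher pattern described above, using asyncio/aiohttp for the asynchronous, parallel map requests and Pillow to overlay the returned images. The WMS URLs are placeholders, and the compositing assumes each service returns a transparent PNG for the same bounding box and image size.

        import asyncio
        import io

        import aiohttp
        from PIL import Image

        # Placeholder WMS GetMap URLs registered with the dispatcher; each is
        # assumed to return a transparent PNG tile for the same bounding box.
        SERVICE_URLS = [
            "https://example.org/wms-a?REQUEST=GetMap&FORMAT=image/png&TRANSPARENT=true",
            "https://example.org/wms-b?REQUEST=GetMap&FORMAT=image/png&TRANSPARENT=true",
        ]

        async def fetch_layer(session: aiohttp.ClientSession, url: str) -> Image.Image:
            """Invoke one web map service and decode its PNG response."""
            async with session.get(url) as resp:
                resp.raise_for_status()
                return Image.open(io.BytesIO(await resp.read())).convert("RGBA")

        async def dispatch() -> Image.Image:
            """Invoke all registered map services in parallel, then overlay them."""
            async with aiohttp.ClientSession() as session:
                layers = await asyncio.gather(
                    *(fetch_layer(session, url) for url in SERVICE_URLS))
            base = layers[0]
            for layer in layers[1:]:
                base = Image.alpha_composite(base, layer)  # transparent overlay
            return base

        combined = asyncio.run(dispatch())
        combined.save("integrated_map.png")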

  2. Distributed spatial information integration based on web service

    NASA Astrophysics Data System (ADS)

    Tong, Hengjian; Zhang, Yun; Shao, Zhenfeng

    2009-10-01

    Spatial information systems and spatial information in different geographic locations usually belong to different organizations. They are distributed and often heterogeneous and independent from each other. This leads to the formation of many isolated spatial information islands, reducing the efficiency of information utilization. In order to address this issue, we present a method for effective spatial information integration based on web services. The method applies asynchronous invocation and dynamic invocation of web services to implement distributed, parallel execution of web map services. All isolated information islands are connected by the dispatcher of web services and its registration database to form a uniform collaborative system. According to the web service registration database, the dispatcher can dynamically invoke each web map service through an asynchronous delegating mechanism, so all of the web map services can execute at the same time. When each web map service is done, an image is returned to the dispatcher, and after all of the web services are done, the images are transparently overlaid together in the dispatcher. Thus, users can browse and analyze the integrated spatial information. Experiments demonstrate that the utilization rate of spatial information resources is significantly raised through the proposed method of distributed spatial information integration.

  3. Providing Multi-Page Data Extraction Services with XWRAPComposer

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Liu, Ling; Zhang, Jianjun; Han, Wei

    2008-04-30

    Dynamic Web data sources – sometimes known collectively as the Deep Web – increase the utility of the Web by providing intuitive access to data repositories anywhere that Web access is available. Deep Web services provide access to real-time information, like entertainment event listings, or present a Web interface to large databases or other data repositories. Recent studies suggest that the size and growth rate of the dynamic Web greatly exceed those of the static Web, yet dynamic content is often ignored by existing search engine indexers owing to the technical challenges that arise when attempting to search the Deep Web. To address these challenges, we present DYNABOT, a service-centric crawler for discovering and clustering Deep Web sources offering dynamic content. DYNABOT has three unique characteristics. First, DYNABOT utilizes a service class model of the Web implemented through the construction of service class descriptions (SCDs). Second, DYNABOT employs a modular, self-tuning system architecture for focused crawling of the Deep Web using service class descriptions. Third, DYNABOT incorporates methods and algorithms for efficient probing of the Deep Web and for discovering and clustering Deep Web sources and services through SCD-based service matching analysis. Our experimental results demonstrate the effectiveness of the service class discovery, probing, and matching algorithms and suggest techniques for efficiently managing service discovery in the face of the immense scale of the Deep Web.

  4. An Automated End-to-End Multi-Agent QoS-Based Architecture for Selection of Geospatial Web Services

    NASA Astrophysics Data System (ADS)

    Shah, M.; Verma, Y.; Nandakumar, R.

    2012-07-01

    Over the past decade, Service-Oriented Architecture (SOA) and Web services have gained wide popularity and acceptance from researchers and industries all over the world. SOA makes it easy to build business applications with common services, and it provides benefits such as reduced integration expense, better asset reuse, higher business agility, and reduced business risk. Building a framework for acquiring useful geospatial information for potential users is a crucial problem faced by the GIS domain, and geospatial Web services address this problem. With the help of web service technology, geospatial web services can provide useful geospatial information to potential users in a better way than a traditional geographic information system (GIS). A geospatial Web service is a modular application designed to enable the discovery, access, and chaining of geospatial information and services across the web; such services are often both computation- and data-intensive and involve diverse sources of data and complex processing functions. With the proliferation of web services published over the internet, multiple web services may provide similar functionality, but with different non-functional properties. Thus, Quality of Service (QoS) offers a metric to differentiate the services and their service providers. In a quality-driven selection of web services, it is important to consider the non-functional properties of the web service so as to satisfy the constraints or requirements of the end users. The main intent of this paper is to build an automated, end-to-end, multi-agent based solution that provides the best-fit web service to the service requester based on QoS.

  5. MedlinePlus Connect: Web Service

    MedlinePlus

    ... https://medlineplus.gov/connect/service.html MedlinePlus Connect: Web Service ... if you implement MedlinePlus Connect by contacting us. Web Service Overview: The parameters for the Web service ...

  6. Trident: scalable compute archives: workflows, visualization, and analysis

    NASA Astrophysics Data System (ADS)

    Gopu, Arvind; Hayashi, Soichi; Young, Michael D.; Kotulla, Ralf; Henschel, Robert; Harbeck, Daniel

    2016-08-01

    The Astronomy scientific community has embraced Big Data processing challenges, e.g. those associated with time-domain astronomy, and come up with a variety of novel and efficient data processing solutions. However, data processing is only a small part of the Big Data challenge. Efficient knowledge discovery and scientific advancement in the Big Data era requires new and equally efficient tools: modern user interfaces for searching, identifying and viewing data online without direct access to the data; tracking of data provenance; searching, plotting and analyzing metadata; interactive visual analysis, especially of (time-dependent) image data; and the ability to execute pipelines on supercomputing and cloud resources with minimal user overhead or expertise, even for novice computing users. The Trident project at Indiana University offers a comprehensive web and cloud-based microservice software suite that enables the straightforward deployment of highly customized Scalable Compute Archive (SCA) systems, including extensive visualization and analysis capabilities, with a minimal amount of additional coding. Trident seamlessly scales up or down in terms of data volumes and computational needs, and allows feature sets within a web user interface to be quickly adapted to meet individual project requirements. Domain experts only have to provide code or business logic about handling/visualizing their domain's data products and about executing their pipelines and application workflows. Trident's microservices architecture is made up of light-weight services connected by a REST API and/or a message bus; web interface elements are built using the NodeJS, AngularJS, and HighCharts JavaScript libraries among others, while backend services are written in NodeJS, PHP/Zend, and Python. The software suite currently consists of (1) a simple workflow execution framework to integrate, deploy, and execute pipelines and applications, (2) a progress service to monitor workflows and sub-workflows, (3) ImageX, an interactive image visualization service, (4) an authentication and authorization service, (5) a data service that handles archival, staging and serving of data products, and (6) a notification service that serves the statistical collation and reporting needs of various projects. Several other additional components are under development. Trident is an umbrella project that evolved from the One Degree Imager, Portal, Pipeline, and Archive (ODI-PPA) project, which we had initially refactored toward (1) a powerful analysis/visualization portal for Globular Cluster System (GCS) survey data collected by IU researchers, (2) a data search and download portal for the IU Electron Microscopy Center's data (EMC-SCA), and (3) a prototype archive for the Ludwig Maximilian University's Wide Field Imager. The new Trident software has been used to deploy (1) a metadata quality control and analytics portal (RADY-SCA) for DICOM-formatted medical imaging data produced by the IU Radiology Center, (2) several prototype workflows for different domains, (3) a snapshot tool within IU's Karst Desktop environment, and (4) a limited component set to serve GIS data within the IU GIS web portal. Trident SCA systems leverage supercomputing and storage resources at Indiana University but can be configured to make use of any cloud/grid resource, from local workstations/servers to (inter)national supercomputing facilities such as XSEDE.

  7. Focused Crawling of the Deep Web Using Service Class Descriptions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rocco, D; Liu, L; Critchlow, T

    2004-06-21

    Dynamic Web data sources--sometimes known collectively as the Deep Web--increase the utility of the Web by providing intuitive access to data repositories anywhere that Web access is available. Deep Web services provide access to real-time information, like entertainment event listings, or present a Web interface to large databases or other data repositories. Recent studies suggest that the size and growth rate of the dynamic Web greatly exceed that of the static Web, yet dynamic content is often ignored by existing search engine indexers owing to the technical challenges that arise when attempting to search the Deep Web. To address these challenges, we present DynaBot, a service-centric crawler for discovering and clustering Deep Web sources offering dynamic content. DynaBot has three unique characteristics. First, DynaBot utilizes a service class model of the Web implemented through the construction of service class descriptions (SCDs). Second, DynaBot employs a modular, self-tuning system architecture for focused crawling of the Deep Web using service class descriptions. Third, DynaBot incorporates methods and algorithms for efficient probing of the Deep Web and for discovering and clustering Deep Web sources and services through SCD-based service matching analysis. Our experimental results demonstrate the effectiveness of the service class discovery, probing, and matching algorithms and suggest techniques for efficiently managing service discovery in the face of the immense scale of the Deep Web.

  8. The IRIS Education and Outreach Program: Providing access to data and equipment for educational and public use

    NASA Astrophysics Data System (ADS)

    Taber, J.; Toigo, M.; Bravo, T. K.; Hubenthal, M.; McQuillan, P. J.; Welti, R.

    2009-12-01

    The IRIS Education and Outreach Program has been an integral part of IRIS for the past 10 years and during that time has worked to advance awareness and understanding of seismology and earth science while inspiring careers in geophysics. The focus on seismology and the use of seismic data has allowed the IRIS E&O program to develop and disseminate a unique suite of products and services for a wide range of audiences. One result of that effort has been increased access to the IRIS Data Management System by non-specialist audiences and simplified use of location and waveform data. The Seismic Monitor was one of the first Web-based tools for observing near-real-time seismicity. It continues to be the most popular IRIS web page, and thus it presents aspects of seismology to a very wide audience. For individuals interested in more detailed ground motion information, waveforms can be easily viewed using the Rapid Earthquake Viewer, developed by the University of South Carolina in collaboration with IRIS E&O. The Seismographs in Schools program gives schools the opportunity to apply for a low-cost educational seismograph and to receive training for its use in the classroom. To provide better service to the community, a new Seismographs in Schools website was developed in the past year with enhanced functions to help teachers improve their teaching of seismology. The site encourages schools to make use of seismic data and communicate with other educational seismology users throughout the world. Users can view near-real-time displays of other participating schools, upload and download data, and use the “find a teacher” tool to contact nearby schools that also may be operating seismographs. In order to promote and maintain program participation and communication, the site features a discussion forum to encourage and support the growing global community of educational seismograph users. Any data that is submitted to the Seismographs in Schools Website is also accessible in real-time via the Seismographs in Schools API. This API allows anyone to extract data, re-purpose it, and display it in various ways. This gives educators the ability to build applications on top of the IRIS Seismographs in Schools Website and also lowers the barrier for teachers who want to create their own regional school seismology networks. Recent seismic data are also provided to general audiences via our museum displays. Millions of people have interacted with IRIS/USGS museum displays, many of them at the American Museum of Natural History in New York and the Smithsonian Institution National Museum of Natural History in Washington, D.C. However, a growing number of people explore seismological concepts and near real-time data through our newest display, the Active Earth Display (AED). The AED is a smaller, more flexible version of the museum display, and is now in use at universities and visitor centers throughout the US. Served via a web browser, the display is customizable and the software is available to anyone who applies via the IRIS E&O web pages. Touch screens provide an interactive experience and new content continues to be developed, including new sets of pages focusing on the Cascadia and Basin and Range regions.

  9. Provenance-Based Approaches to Semantic Web Service Discovery and Usage

    ERIC Educational Resources Information Center

    Narock, Thomas William

    2012-01-01

    The World Wide Web Consortium defines a Web Service as "a software system designed to support interoperable machine-to-machine interaction over a network." Web Services have become increasingly important both within and across organizational boundaries. With the recent advent of the Semantic Web, web services have evolved into semantic…

  10. Cloud-Based Speech Technology for Assistive Technology Applications (CloudCAST).

    PubMed

    Cunningham, Stuart; Green, Phil; Christensen, Heidi; Atria, José Joaquín; Coy, André; Malavasi, Massimiliano; Desideri, Lorenzo; Rudzicz, Frank

    2017-01-01

    The CloudCAST platform provides a series of speech recognition services that can be integrated into assistive technology applications. The platform and the services provided by the public API are described. Several exemplar applications have been developed to demonstrate the platform to potential developers and users.

  11. HIV and AIDS in suburban Asian and Pacific Islander communities: factors influencing self-efficacy in HIV risk reduction.

    PubMed

    Takahashi, Lois M; Magalong, Michelle G; Debell, Paula; Fasudhani, Angela

    2006-12-01

    Though AIDS case rates among Asian Pacific Islander Americans (APIs) in the United States remain relatively low, the number has been steadily increasing. Scholars, policy makers, and service providers still know little about how confident APIs are in carrying out different HIV risk reduction strategies. This article addresses this gap by presenting an analysis of a survey of API women and youth in Orange County, California (N = 313), a suburban county in southern California with large concentrations of Asian residents. Multivariate logistic regression models using subsamples of API women and API youth respondents were used. Variations in reported self-efficacy for female respondents were explained by acculturation, comfort in asking medical practitioners about HIV/AIDS, and to a lesser degree, education, household size, whether respondents were currently dating, HIV knowledge, and whether respondents believed that HIV could be identified by physical appearance. For respondents younger than 25 years, variations in self-efficacy were related to gender, age, acculturation, HIV knowledge, taking over-the-counter medicines for illness, whether respondents were dating, and to a lesser degree, employment, recent serious illness, whether they believed that one could identify HIV by how one looks, and believing that illness was caused by germs. Implications for HIV prevention programs and future research are provided.

  12. Creating Mobile and Web Application Programming Interfaces (APIs) for NASA Science Data

    NASA Astrophysics Data System (ADS)

    Oostra, D.; Chambers, L. H.; Lewis, P. M.; Moore, S. W.

    2011-12-01

    The Atmospheric Science Data Center (ASDC) at the NASA Langley Research Center in Virginia houses almost three petabytes of data, a collection that increases every day. To put it into perspective, it is estimated that three petabytes of data storage could store a digitized copy of all printed material in U.S. research libraries. There are more than ten other NASA data centers like the ASDC. Scientists and the public use this data for research, science education, and to understand our environment. Most importantly, these data provide the potential for all of us to make new discoveries. NASA is about making discoveries. Galileo was quoted as saying, "All discoveries are easy to understand once they are discovered. The point is to discover them." To that end, NASA stores vast amounts of publicly available data. This paper examines an approach to create web applications that serve NASA data in ways that specifically address the mobile web application technologies that are quickly emerging. Mobile data is not a new concept. What is new is that user-driven tools have recently become available that allow users to create their own mobile applications. Through the use of these cloud-based tools, users can produce complete native mobile applications. Thus, mobile apps can now be created by everyone, regardless of their programming experience or expertise. This work will explore standards and methods for creating dynamic and malleable application programming interfaces (APIs) that allow users to access and use NASA science data for their own needs. The focus will be on experiences that broaden and increase the scope and usage of NASA science data sets.

  13. Reactome graph database: Efficient access to complex pathway data

    PubMed Central

    Korninger, Florian; Viteri, Guilherme; Marin-Garcia, Pablo; Ping, Peipei; Wu, Guanming; Stein, Lincoln; D’Eustachio, Peter

    2018-01-01

    Reactome is a free, open-source, open-data, curated and peer-reviewed knowledgebase of biomolecular pathways. One of its main priorities is to provide easy and efficient access to its high quality curated data. At present, biological pathway databases typically store their contents in relational databases. This limits access efficiency because there are performance issues associated with queries traversing highly interconnected data. The same data in a graph database can be queried more efficiently. Here we present the rationale behind the adoption of a graph database (Neo4j) as well as the new ContentService (REST API) that provides access to these data. The Neo4j graph database and its query language, Cypher, provide efficient access to the complex Reactome data model, facilitating easy traversal and knowledge discovery. The adoption of this technology greatly improved query efficiency, reducing the average query time by 93%. The web service built on top of the graph database provides programmatic access to Reactome data by object oriented queries, but also supports more complex queries that take advantage of the new underlying graph-based data storage. By adopting graph database technology we are providing a high performance pathway data resource to the community. The Reactome graph database use case shows the power of NoSQL database engines for complex biological data types. PMID:29377902
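
    For readers who want to try the ContentService, the sketch below fetches a pathway by its stable identifier over the REST API. The endpoint path follows Reactome's public documentation, but treat the exact route and response fields as assumptions to verify; the Cypher line at the end shows the equivalent lookup against the Neo4j store.

```python
import requests

# Minimal sketch: query a Reactome object by stable ID via the ContentService.
STID = "R-HSA-68886"  # example stable identifier
resp = requests.get(f"https://reactome.org/ContentService/data/query/{STID}",
                    timeout=30)
resp.raise_for_status()
obj = resp.json()
print(obj.get("displayName"), "-", obj.get("schemaClass"))

# The same lookup expressed directly in Cypher against the graph database
# (property names assumed from the Reactome graph data model):
#   MATCH (p:Pathway {stId: "R-HSA-68886"}) RETURN p.displayName
```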

  14. Reactome graph database: Efficient access to complex pathway data.

    PubMed

    Fabregat, Antonio; Korninger, Florian; Viteri, Guilherme; Sidiropoulos, Konstantinos; Marin-Garcia, Pablo; Ping, Peipei; Wu, Guanming; Stein, Lincoln; D'Eustachio, Peter; Hermjakob, Henning

    2018-01-01

    Reactome is a free, open-source, open-data, curated and peer-reviewed knowledgebase of biomolecular pathways. One of its main priorities is to provide easy and efficient access to its high quality curated data. At present, biological pathway databases typically store their contents in relational databases. This limits access efficiency because there are performance issues associated with queries traversing highly interconnected data. The same data in a graph database can be queried more efficiently. Here we present the rationale behind the adoption of a graph database (Neo4j) as well as the new ContentService (REST API) that provides access to these data. The Neo4j graph database and its query language, Cypher, provide efficient access to the complex Reactome data model, facilitating easy traversal and knowledge discovery. The adoption of this technology greatly improved query efficiency, reducing the average query time by 93%. The web service built on top of the graph database provides programmatic access to Reactome data by object oriented queries, but also supports more complex queries that take advantage of the new underlying graph-based data storage. By adopting graph database technology we are providing a high performance pathway data resource to the community. The Reactome graph database use case shows the power of NoSQL database engines for complex biological data types.

  15. SiteDB: Marshalling people and resources available to CMS

    NASA Astrophysics Data System (ADS)

    Metson, S.; Bonacorsi, D.; Dias Ferreira, M.; Egeland, R.

    2010-04-01

    In a collaboration the size of CMS (approx. 3000 users and almost 100 computing centres of varying size), communication and accurate information about the sites it has access to are vital in coordinating the multitude of computing tasks required for smooth running. SiteDB is a tool developed by CMS to track sites available to the collaboration, the allocation to CMS of resources available at those sites, and the associations between CMS members and the sites (as either a manager/operator of the site or a member of a group associated with the site). It is used to track the roles a person has for an associated site or group. SiteDB eases the coordination load for the operations teams by providing a consistent interface to manage communication with the people working at a site, by identifying who is responsible for a given task or service at a site, and by offering a uniform interface to information on CMS contacts and sites. SiteDB provides APIs and reports for other CMS tools to access the information it contains: for instance, enabling CRAB to use "user-friendly" names when black/white-listing CEs, providing role-based authentication and authorisation for other web-based services, and populating various troubleshooting squads in external ticketing systems in daily use by CMS Computing operations.

  16. The Myths of Chinese Images Revisited: Persuasive Strategies in Hong Kong Anti-Drug Public Service Announcements.

    ERIC Educational Resources Information Center

    Wong, Wendy Siuyi; Cuklanz, Lisa M.

    Hong Kong's Department of Information Services has been producing and systematically airing public service announcements (known as announcements in the public interest, or APIs) on a variety of selected themes such as cleaning up Hong Kong, road safety, crime, drugs, and health issues for over 20 years. The television announcements are shown every…

  17. Distributed nuclear medicine applications using World Wide Web and Java technology.

    PubMed

    Knoll, P; Höll, K; Mirzaei, S; Koriska, K; Köhn, H

    2000-01-01

    At present, medical applications applying World Wide Web (WWW) technology are mainly used to view static images and to retrieve some information. The Java platform is a relative new way of computing, especially designed for network computing and distributed applications which enables interactive connection between user and information via the WWW. The Java 2 Software Development Kit (SDK) including Java2D API, Java Remote Method Invocation (RMI) technology, Object Serialization and the Java Advanced Imaging (JAI) extension was used to achieve a robust, platform independent and network centric solution. Medical image processing software based on this technology is presented and adequate performance capability of Java is demonstrated by an iterative reconstruction algorithm for single photon emission computerized tomography (SPECT).

  18. The NASA Tournament Laboratory (NTL): Improving Data Access at PDS while Spreading Joy and Engaging Students through 16 Micro-Contests

    NASA Technical Reports Server (NTRS)

    LaMora, Andy; Raugh, A.; Erickson, K.; Grayzeck, E. J.; Knopf, W.; Morgan, T. H.

    2012-01-01

    NASA PDS hosts terabytes of valuable data from hundreds of data sources and spans decades of research. Data is stored on flat-file systems regulated through careful meta dictionaries. PDS's data is available to the public through its website, which supports data searches through drill-down navigation. While the system returns data quickly, result sets in response to identical input differ depending on the drill-down path a user follows. To correct this issue, to allow custom searching, and to improve general accessibility, PDS sought to create a new data structure and API, and to use them to build applications that are a joy to use and showcase the value of the data to students, teachers and citizens. PDS engaged TopCoder and Harvard Business School through the NTL to pursue these objectives in a pilot effort. Scope was limited to Small Bodies Node data. NTL analyzed data, proposed a solution, and implemented it through a series of micro-contests. Contests focused on different segments of the problem: conceptualization, architectural design, implementation, testing, etc. To demonstrate the utility of the completed solution, NTL developed web-based and mobile applications that can compare targets, regardless of mission. To further explore the potential of the solution, NTL hosted "Mash-up" challenges that integrated the API with other publicly available assets to produce consumer and teaching applications, including an Augmented Reality iPad tool. Two contests were also posted to middle and high school students via the NoNameSite.com platform, and as a result of these contests, PDS/SBN has initiated a Facebook program. These contests defined and implemented a data warehouse with the necessary migration tools to transform legacy data, produced a public web interface for the new search, developed a public API, and produced four mobile applications that we expect to appeal to users both within and without the academic community.

  19. The NASA Tournament Laboratory (“NTL”): Improving Data Access at PDS while Spreading Joy and Engaging Students through 16 Micro-Contests

    NASA Astrophysics Data System (ADS)

    LaMora, Andy; Raugh, A.; Erickson, K.; Grayzeck, E. J.; Knopf, W.; Lydon, M.; Lakhani, K.; Crusan, J.; Morgan, T. H.

    2012-10-01

    NASA PDS hosts terabytes of valuable data from hundreds of data sources and spans decades of research. Data is stored on flat-file systems regulated through careful meta dictionaries. PDS’s data is available to the public through its website, which supports data searches through drill-down navigation. While the system returns data quickly, result sets in response to identical input differ depending on the drill-down path a user follows. To correct this issue, to allow custom searching, and to improve general accessibility, PDS sought to create a new data structure and API, and to use them to build applications that are a joy to use and showcase the value of the data to students, teachers and citizens. PDS engaged TopCoder and Harvard Business School through the NTL to pursue these objectives in a pilot effort. Scope was limited to Small Bodies Node data. NTL analyzed data, proposed a solution, and implemented it through a series of micro-contests. Contests focused on different segments of the problem: conceptualization, architectural design, implementation, testing, etc. To demonstrate the utility of the completed solution, NTL developed web-based and mobile applications that can compare targets, regardless of mission. To further explore the potential of the solution, NTL hosted “Mash-up” challenges that integrated the API with other publicly available assets to produce consumer and teaching applications, including an Augmented Reality iPad tool. Two contests were also posted to middle and high school students via the NoNameSite.com platform, and as a result of these contests, PDS/SBN has initiated a Facebook program. These contests defined and implemented a data warehouse with the necessary migration tools to transform legacy data, produced a public web interface for the new search, developed a public API, and produced four mobile applications that we expect to appeal to users both within and without the academic community.

  20. A Framework for Enhancing Real-time Social Media Data to Improve Disaster Management Process

    NASA Astrophysics Data System (ADS)

    Attique Shah, Syed; Zafer Şeker, Dursun; Demirel, Hande

    2018-05-01

    Social media datasets are playing a vital role in providing information that can support decision making in nearly all domains of technology. This is because social media is a quick and economical approach for collecting data from the public through methods like crowdsourcing. Existing research has already shown that, in case of any disaster (natural or man-made), the information extracted from social media sites is very critical to disaster management systems for response and reconstruction. This study comprises two components: the first part proposes a framework that provides updated and filtered real-time input data for the disaster management system through social media, and the second part consists of a designed web user API for a structured and defined real-time data input process. This study contributes to the discipline of design science for the information systems domain. The aim of this study is to propose a framework that can filter and organize data from unstructured social media sources through recognized methods and bring this retrieved data to the same level as data taken through the structured and predefined mechanism of a web API. Both components are designed such that they can potentially collaborate and produce updated information for a disaster management system to carry out an accurate and effective response.

  1. Process model-based atomic service discovery and composition of composite semantic web services using web ontology language for services (OWL-S)

    NASA Astrophysics Data System (ADS)

    Paulraj, D.; Swamynathan, S.; Madhaiyan, M.

    2012-11-01

    Web Service composition has become indispensable as a single web service cannot satisfy complex functional requirements. Composition of services has received much interest as a means of supporting business-to-business (B2B) or enterprise application integration. An important component of the service composition is the discovery of relevant services. In Semantic Web Services (SWS), service discovery is generally achieved by using the service profile of the Ontology Web Language for Services (OWL-S). The profile of the service is a derived and concise description but not a functional part of the service. The information contained in the service profile is sufficient for atomic service discovery, but it is not sufficient for the discovery of composite semantic web services (CSWS). The purpose of this article is two-fold: first, to prove that the process model is a better choice than the service profile for service discovery; second, to facilitate the composition of inter-organisational CSWS by proposing a new composition method which uses process ontology. The proposed service composition approach uses an algorithm which performs a fine-grained match at the level of the atomic process rather than at the level of the entire service in a composite semantic web service. Many works carried out in this area have proposed solutions only for the composition of atomic services, and this article proposes a solution for the composition of composite semantic web services.

  2. Automatic geospatial information Web service composition based on ontology interface matching

    NASA Astrophysics Data System (ADS)

    Xu, Xianbin; Wu, Qunyong; Wang, Qinmin

    2008-10-01

    With Web services technology, the functions of WebGIS can be presented as a kind of geospatial information service, helping to overcome the limitations of the information-isolated situation in the geospatial information sharing field. Thus geospatial information Web service composition, which conglomerates outsourced services working in tandem to offer a value-added service, plays the key role in fully taking advantage of geospatial information services. This paper proposes an automatic geospatial information web service composition algorithm that employs the ontology dictionary WordNet to analyze semantic distances among the interfaces. By matching input/output parameters against the semantic meaning of pairs of service interfaces, a geospatial information web service chain can be created from a number of candidate services. A practical application of the algorithm is also presented; its results show the feasibility of this algorithm and its great promise for the emerging demand for geospatial information web service composition.
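
    As a rough illustration of the interface-matching idea, the sketch below scores the semantic closeness of two parameter names with WordNet path similarity via NLTK. The tokenization, threshold, and example terms are invented for the illustration, and the paper's actual distance measure may differ.

```python
# Illustrative WordNet-based matching of service interface parameters.
# Requires: pip install nltk, then nltk.download("wordnet") once.
from nltk.corpus import wordnet as wn

def semantic_match(term_a: str, term_b: str) -> float:
    """Best path similarity between any senses of the two terms (0..1)."""
    best = 0.0
    for s1 in wn.synsets(term_a):
        for s2 in wn.synsets(term_b):
            sim = s1.path_similarity(s2)
            if sim is not None and sim > best:
                best = sim
    return best

# Chain candidate services when one service's output parameter is close
# enough to the next service's input parameter (threshold is arbitrary here).
THRESHOLD = 0.5
print(semantic_match("map", "chart"))
print(semantic_match("elevation", "altitude") >= THRESHOLD)
```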

  3. Data Discovery and Access via the Heliophysics Events Knowledgebase (HEK)

    NASA Astrophysics Data System (ADS)

    Somani, A.; Hurlburt, N. E.; Schrijver, C. J.; Cheung, M.; Freeland, S.; Slater, G. L.; Seguin, R.; Timmons, R.; Green, S.; Chang, L.; Kobashi, A.; Jaffey, A.

    2011-12-01

    The HEK is an integrated system which helps direct scientists to solar events and data from a variety of providers. The system is fully operational, and adoption of HEK has been growing since the launch of NASA's SDO mission. In this presentation we describe the different components that comprise HEK. The Heliophysics Events Registry (HER) and Heliophysics Coverage Registry (HCR) form the two major databases behind the system. The HCR allows the user to search coverage event metadata for a variety of instruments. The HER allows the user to search annotated event metadata for a variety of instruments. Both the HCR and HER are accessible via a web API which can return search results in machine-readable formats (e.g. XML and JSON). A variety of SolarSoft services are also provided to allow users to search the HEK as well as obtain and manipulate data. Other components include: the Event Detection System (EDS), which continually runs feature-finding algorithms on SDO data to populate the HER with relevant events; a web form for users to request SDO data cutouts for multiple AIA channels as well as HMI line-of-sight magnetograms; iSolSearch, which allows a user to browse events in the HER and search for specific events over a specific time interval, all within a graphical web page; Panorama, the software tool used for rapid visualization of large volumes of solar image data in multiple channels/wavelengths, from which the user can also easily create WYSIWYG movies and launch the Annotator tool to describe events and features; and EVACS, which provides a JOGL-powered client for the HER and HCR. EVACS displays the searched-for events on a full-disk magnetogram of the Sun while displaying more detailed information for events.
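
    Programmatic HER queries are plain HTTP GETs returning JSON or XML. The sketch below mirrors common usage of the public HER search interface; the parameter names, the `cosec=2` JSON switch, and the response fields are all assumptions to verify against the HEK documentation.

```python
import requests

# Minimal sketch: search the Heliophysics Events Registry for flares (FL)
# in a one-day window over the full disk (parameter names assumed from
# public HER query examples; verify against the HEK docs).
params = {
    "cmd": "search", "type": "column",
    "event_type": "FL",
    "event_starttime": "2011-08-09T00:00:00",
    "event_endtime": "2011-08-10T00:00:00",
    "event_coordsys": "helioprojective",
    "x1": -1200, "x2": 1200, "y1": -1200, "y2": 1200,
    "cosec": 2,  # 2 = JSON output (assumed)
}
resp = requests.get("http://www.lmsal.com/hek/her", params=params, timeout=60)
resp.raise_for_status()
for event in resp.json().get("result", []):
    print(event.get("fl_goescls"), event.get("event_starttime"))
```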

  4. An Integrated Web-Based 3d Modeling and Visualization Platform to Support Sustainable Cities

    NASA Astrophysics Data System (ADS)

    Amirebrahimi, S.; Rajabifard, A.

    2012-07-01

    Sustainable development is seen as the key to preserving the sustainability of cities in the face of ongoing population growth and its negative impacts. This is complex and requires holistic and multidisciplinary decision making. A variety of stakeholders with different backgrounds also need to be considered and involved. Numerous web-based modeling and visualization tools have been designed and developed to support this process. There have been some success stories; however, the majority failed to provide a comprehensive platform supporting the different aspects of sustainable development. In this work, in the context of SDI and Land Administration, the CSDILA Platform, a 3D visualization and modeling platform, was proposed, which can be used to model and visualize different dimensions to facilitate the achievement of sustainability, in particular in the urban context. The methodology involved the design of a generic framework for the development of an analytical and visualization tool over the web. The CSDILA Platform was then implemented using a number of technologies following the guidelines provided by the framework. The platform has a modular structure and uses a Service-Oriented Architecture (SOA). It is capable of managing spatial objects in a 4D data store and can flexibly incorporate a variety of developed models using the platform's API. Development scenarios can be modeled and tested using the analysis and modeling component in the platform, and the results are visualized in a seamless 3D environment. The platform was further tested using a number of scenarios and showed promising results and the potential to serve wider needs. In this paper, the design process of the generic framework, the implementation of the CSDILA Platform and the technologies used, and also findings and future research directions will be presented and discussed.

  5. Wired Widgets: Agile Visualization for Space Situational Awareness

    NASA Astrophysics Data System (ADS)

    Gerschefske, K.; Witmer, J.

    2012-09-01

    Continued advancement in sensors and analysis techniques has resulted in a wealth of Space Situational Awareness (SSA) data, made available via tools and Service Oriented Architectures (SOA) such as those in the Joint Space Operations Center Mission Systems (JMS) environment. Current visualization software cannot quickly adapt to rapidly changing missions and data, preventing operators and analysts from performing their jobs effectively. The value of this wealth of SSA data is not fully realized, as the operators' existing software is not built with the flexibility to consume new or changing sources of data or to rapidly customize their visualization as the mission evolves. While tools like the JMS user-defined operational picture (UDOP) have begun to fill this gap, this paper presents a further evolution, leveraging Web 2.0 technologies for maximum agility. We demonstrate a flexible Web widget framework with inter-widget data sharing, publish-subscribe eventing, and an API providing the basis for consumption of new data sources and adaptable visualization. Wired Widgets offers cross-portal widgets along with a widget communication framework and development toolkit for rapid new-widget development, giving operators the ability to answer relevant questions as the mission evolves. Wired Widgets has been applied in a number of dynamic mission domains including disaster response, combat operations, and noncombatant evacuation scenarios. This variety of applications demonstrates that Wired Widgets provides a flexible, data-driven solution for visualization in changing environments. In this paper, we show how, deployed in the Ozone Widget Framework portal environment, Wired Widgets can provide agile, web-based visualization to support the SSA mission. Furthermore, we discuss how the tenets of agile visualization can be applied to the SSA problem space more generally, providing operators flexibility and potentially informing future acquisition and system development.

  6. Graph-Based Semantic Web Service Composition for Healthcare Data Integration.

    PubMed

    Arch-Int, Ngamnij; Arch-Int, Somjit; Sonsilphong, Suphachoke; Wanchai, Paweena

    2017-01-01

    Within the numerous and heterogeneous web services offered through different sources, automatic web services composition is the most convenient method for building complex business processes that permit invocation of multiple existing atomic services. The current solutions in functional web services composition lack autonomous queries of semantic matches within the parameters of web services, which are necessary in the composition of large-scale related services. In this paper, we propose a graph-based Semantic Web Services composition system consisting of two subsystems: management time and run time. The management-time subsystem is responsible for dependency graph preparation in which a dependency graph of related services is generated automatically according to the proposed semantic matchmaking rules. The run-time subsystem is responsible for discovering the potential web services and nonredundant web services composition of a user's query using a graph-based searching algorithm. The proposed approach was applied to healthcare data integration in different health organizations and was evaluated according to two aspects: execution time measurement and correctness measurement.
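
    The run-time search over a dependency graph can be pictured as forward chaining from the user-supplied inputs until the requested outputs are covered. The sketch below is a generic illustration of that idea, not the paper's algorithm; the service names and parameters are invented.

```python
# Illustrative forward-chaining search over a service dependency graph.
# Each service maps a set of required inputs to a set of produced outputs.
SERVICES = {  # hypothetical healthcare-integration services
    "patient_lookup":  ({"patient_id"}, {"demographics"}),
    "lab_fetch":       ({"patient_id"}, {"lab_results"}),
    "risk_assessment": ({"demographics", "lab_results"}, {"risk_score"}),
}

def compose(provided: set, goal: set):
    """Greedily add invocable services until the goal outputs are available."""
    plan, available = [], set(provided)
    progress = True
    while progress and not goal <= available:
        progress = False
        for name, (inputs, outputs) in SERVICES.items():
            if name not in plan and inputs <= available and not outputs <= available:
                plan.append(name)
                available |= outputs
                progress = True
    return plan if goal <= available else None

print(compose({"patient_id"}, {"risk_score"}))
# -> ['patient_lookup', 'lab_fetch', 'risk_assessment']
```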

  7. Graph-Based Semantic Web Service Composition for Healthcare Data Integration

    PubMed Central

    2017-01-01

    Within the numerous and heterogeneous web services offered through different sources, automatic web services composition is the most convenient method for building complex business processes that permit invocation of multiple existing atomic services. The current solutions in functional web services composition lack autonomous queries of semantic matches within the parameters of web services, which are necessary in the composition of large-scale related services. In this paper, we propose a graph-based Semantic Web Services composition system consisting of two subsystems: management time and run time. The management-time subsystem is responsible for dependency graph preparation in which a dependency graph of related services is generated automatically according to the proposed semantic matchmaking rules. The run-time subsystem is responsible for discovering the potential web services and nonredundant web services composition of a user's query using a graph-based searching algorithm. The proposed approach was applied to healthcare data integration in different health organizations and was evaluated according to two aspects: execution time measurement and correctness measurement. PMID:29065602

  8. Developing a Data Discovery Tool for Interdisciplinary Science: Leveraging a Web-based Mapping Application and Geosemantic Searching

    NASA Astrophysics Data System (ADS)

    Albeke, S. E.; Perkins, D. G.; Ewers, S. L.; Ewers, B. E.; Holbrook, W. S.; Miller, S. N.

    2015-12-01

    The sharing of data and results is paramount for advancing scientific research. The Wyoming Center for Environmental Hydrology and Geophysics (WyCEHG) is a multidisciplinary group that is driving scientific breakthroughs to help manage water resources in the Western United States. WyCEHG is mandated by the National Science Foundation (NSF) to share their data. However, the infrastructure from which to share such diverse, complex and massive amounts of data did not exist within the University of Wyoming. We developed an innovative framework to meet the data organization, sharing, and discovery requirements of WyCEHG by integrating both open and closed source software, embedded metadata tags, semantic web technologies, and a web-mapping application. The infrastructure uses a Relational Database Management System as the foundation, providing a versatile platform to store, organize, and query myriad datasets, taking advantage of both structured and unstructured formats. Detailed metadata are fundamental to the utility of datasets. We tag data with Uniform Resource Identifiers (URI's) to specify concepts with formal descriptions (i.e. semantic ontologies), thus allowing users the ability to search metadata based on the intended context rather than conventional keyword searches. Additionally, WyCEHG data are geographically referenced. Using the ArcGIS API for Javascript, we developed a web mapping application leveraging database-linked spatial data services, providing a means to visualize and spatially query available data in an intuitive map environment. Using server-side scripting (PHP), the mapping application, in conjunction with semantic search modules, dynamically communicates with the database and file system, providing access to available datasets. Our approach provides a flexible, comprehensive infrastructure from which to store and serve WyCEHG's highly diverse research-based data. This framework has not only allowed WyCEHG to meet its data stewardship requirements, but can provide a template for others to follow.

  9. Reliable Execution Based on CPN and Skyline Optimization for Web Service Composition

    PubMed Central

    Ha, Weitao; Zhang, Guojun

    2013-01-01

    With the development of SOA, complex problems can be solved by combining available individual services and ordering them to best suit the user's requirements. Web services composition is widely used in business environments. Given the inherent autonomy and heterogeneity of component web services, it is difficult to predict the behavior of the overall composite service. Therefore, transactional properties and nonfunctional quality of service (QoS) properties are crucial when selecting the web services to take part in the composition. Transactional properties ensure the reliability of the composite Web service, and QoS properties can identify the best candidate web services from a set of functionally equivalent services. In this paper we define a Colored Petri Net (CPN) model which incorporates the transactional properties of web services in the composition process. To ensure reliable and correct execution, unfolding processes of the CPN are followed. The execution of a transactional composite Web service (TCWS) is formalized by CPN properties. To identify the best services by QoS properties from the candidate service sets formed in the TCWS-CPN, we use skyline computation to retrieve the dominant Web services. This overcomes the significant information loss that occurs when individual scores are reduced to an overall similarity. We evaluate our approach experimentally using both real and synthetically generated datasets. PMID:23935431
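
    Skyline computation keeps exactly the candidates that are not Pareto-dominated on any QoS dimension. The sketch below is a minimal block-nested-loop skyline over (response time, reliability) tuples, offered as a generic illustration rather than the paper's optimized procedure; the sample data are invented.

```python
# Minimal block-nested-loop skyline over QoS vectors (illustrative).
# Lower response time ("rt") is better; higher reliability ("rel") is better.

def dominates(a, b):
    """a dominates b: at least as good on every dimension, better on one."""
    no_worse = a["rt"] <= b["rt"] and a["rel"] >= b["rel"]
    better = a["rt"] < b["rt"] or a["rel"] > b["rel"]
    return no_worse and better

def skyline(candidates):
    result = []
    for c in candidates:
        if any(dominates(s, c) for s in result):
            continue  # c is dominated by a current skyline member
        result = [s for s in result if not dominates(c, s)] + [c]
    return result

services = [  # hypothetical functionally equivalent candidates
    {"name": "s1", "rt": 120, "rel": 0.99},
    {"name": "s2", "rt": 80,  "rel": 0.95},
    {"name": "s3", "rt": 200, "rel": 0.97},  # dominated by s1
]
print([s["name"] for s in skyline(services)])  # -> ['s1', 's2']
```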

  10. Reliable execution based on CPN and skyline optimization for Web service composition.

    PubMed

    Chen, Liping; Ha, Weitao; Zhang, Guojun

    2013-01-01

    With the development of SOA, complex problems can be solved by combining available individual services and ordering them to best suit the user's requirements. Web services composition is widely used in business environments. Given the inherent autonomy and heterogeneity of component web services, it is difficult to predict the behavior of the overall composite service. Therefore, transactional properties and nonfunctional quality of service (QoS) properties are crucial when selecting the web services to take part in the composition. Transactional properties ensure the reliability of the composite Web service, and QoS properties can identify the best candidate web services from a set of functionally equivalent services. In this paper we define a Colored Petri Net (CPN) model which incorporates the transactional properties of web services in the composition process. To ensure reliable and correct execution, unfolding processes of the CPN are followed. The execution of a transactional composite Web service (TCWS) is formalized by CPN properties. To identify the best services by QoS properties from the candidate service sets formed in the TCWS-CPN, we use skyline computation to retrieve the dominant Web services. This overcomes the significant information loss that occurs when individual scores are reduced to an overall similarity. We evaluate our approach experimentally using both real and synthetically generated datasets.

  11. The Use of RESTful Web Services in Medical Informatics and Clinical Research and Its Implementation in Europe.

    PubMed

    Aerts, Jozef

    2017-01-01

    RESTful web services nowadays are state-of-the-art in business transactions over the internet. They are, however, not much used in medical informatics and in clinical research, especially not in Europe. This work aims to make an inventory of RESTful web services that can be used in medical informatics and clinical research, including those that can help in patient empowerment in the DACH region and in Europe, and to develop some new RESTful web services for use in clinical research and regulatory review. A literature search on available RESTful web services was performed, and new RESTful web services were developed on an application server using the Java language. Most of the web services found originate from institutes and organizations in the USA, whereas no similar web services could be found that are made available by European organizations. New RESTful web services were developed for LOINC code lookup, for UCUM conversions, and for use with CDISC standards. A comparison is made between "top down" and "bottom up" web services, the latter meant to answer concrete questions immediately. The lack of RESTful web services made available by European organizations in healthcare and medical informatics is striking. RESTful web services may in the near future play a major role in medical informatics and, when localized for the German language and other European languages, can help to considerably facilitate patient empowerment. This, however, requires an EU equivalent of the US National Library of Medicine.
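
    In the "bottom up" spirit described here, a terminology lookup can be a single GET. The sketch below uses the NLM Clinical Table Search Service for LOINC as an example of such a RESTful lookup; it is not the services developed in the paper itself, and the endpoint and positional response layout are assumptions to verify against the NLM documentation.

```python
import requests

# Minimal sketch: LOINC term lookup via the NLM Clinical Table Search Service
# (endpoint and response layout assumed; verify against the NLM docs).
resp = requests.get(
    "https://clinicaltables.nlm.nih.gov/api/loinc_items/v3/search",
    params={"terms": "glucose"},
    timeout=30,
)
resp.raise_for_status()
# The service is documented as returning a positional JSON array:
# [total_count, codes, extra_data, display_strings]  (assumed)
total, codes, _, display = resp.json()[:4]
print(total, list(zip(codes, display))[:3])
```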

  12. Enhancement of Mutual Discovery, Search, and Access of Data for Users of NASA and GEOSS-Cataloged Data Systems

    NASA Astrophysics Data System (ADS)

    Teng, W. L.; Maidment, D. R.; Rodell, M.; Strub, R. F.; Arctur, D. K.; Ames, D. P.; Vollmer, B.; Seiler, E.

    2014-12-01

    An ongoing NASA-funded project has removed a longstanding barrier to accessing NASA data (i.e., accessing archived time-step array data as point-time series) for selected variables of the North American and Global Land Data Assimilation Systems (NLDAS and GLDAS, respectively) and other EOSDIS (Earth Observing System Data Information System) data sets. These time series ("data rods") are pre-generated or generated on-the-fly (OTF), leveraging the NASA Simple Subset Wizard (SSW), a gateway to NASA data centers. Data rods Web services are accessible through the CUAHSI Hydrologic Information System (HIS) and the Goddard Earth Sciences Data and Information Services Center (GES DISC) but are not easily discoverable by users of other non-NASA data systems. The Global Earth Observation System of Systems (GEOSS) is a logical mechanism for providing access to the data rods, both pre-generated and OTF. There is an ongoing series of multi-organizational GEOSS Architecture Implementation Pilots, now in Phase-7 (AIP-7) and with a strong water sub-theme, that is aimed at the GEOSS Water Strategic Target "to produce [by 2015] comprehensive sets of data and information products to support decision-making for efficient management of the world's water resources, based on coordinated, sustained observations of the water cycle on multiple scales." The aim of this "GEOSS Water Services" project is to develop a distributed, global registry of water data, map, and modeling services catalogued using the standards and procedures of the Open Geospatial Consortium and the World Meteorological Organization. This project has already demonstrated that the GEOSS infrastructure can be leveraged to help provide access to time series of model grid information (e.g., NLDAS, GLDAS) or grids of information over a geographical domain for a particular time interval. A new NASA-funded project was begun, to expand on these early efforts to enhance the discovery, search, and access of NASA data by non-NASA users, comprising the following key aspects: 1. Leverage SSW API and EOS Clearing House (ECHO) 2. Register data rods services in GEOSS 3. Develop Web Feature Services (WFS) for data rods 4. Enhance metadata in WFS 5. Make non-NASA data visible to NASA users by leveraging SSW 6. Develop hydrological use cases to guide project deployment and serve as metrics

  13. PAVICS: A platform for the Analysis and Visualization of Climate Science - adopting a workflow-based analysis method for dealing with a multitude of climate data sources

    NASA Astrophysics Data System (ADS)

    Gauvin St-Denis, B.; Landry, T.; Huard, D. B.; Byrns, D.; Chaumont, D.; Foucher, S.

    2017-12-01

    As the number of scientific studies and policy decisions requiring tailored climate information continues to increase, the demand for support from climate service centers to provide the latest information in the format most helpful for the end-user is also on the rise. Ouranos, being one such organization based in Montreal, has partnered with the Centre de recherche informatique de Montreal (CRIM) to develop a platform that will offer climate data products that have been identified as most useful for users through years of consultation. The platform is built as modular components that target the various requirements of climate data analysis. The data components host and catalog NetCDF data as well as geographical and political delimitations. The analysis components are made available as atomic operations through Web Processing Service (WPS) or as workflows, whereby the operations are chained through a simple JSON structure and executed on a distributed network of computing resources. The visualization components range from Web Map Service (WMS) to a complete frontend for searching the data, launching workflows and interacting with maps of the results. Each component can easily be deployed and executed as an independent service through the use of Docker technology and a proxy is available to regulate user workspaces and access permissions. PAVICS includes various components from birdhouse, a collection of WPS initially developed by the German Climate Research Center (DKRZ) and Institut Pierre Simon Laplace (IPSL) and is designed to be highly interoperable with other WPS as well as many Open Geospatial Consortium (OGC) standards. Further connectivity is made with the Earth System Grid Federation (ESGF) nodes and local results are made searchable using the same API terminology. Other projects conducted by CRIM that integrate with PAVICS include the OGC Testbed 13 Innovation Program (IP) initiative that will enhance advanced cloud capabilities, application packaging deployment processes, as well as enabling Earth Observation (EO) processes relevant to climate. As part of its experimental agenda, working implementations of scalable machine learning on big climate data with Spark and SciSpark were delivered.
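
    Since PAVICS exposes its analysis operations as WPS, a client can discover and invoke processes with a standard OGC client library. The sketch below uses OWSLib against a placeholder service URL; the URL and process identifier are hypothetical, while `WebProcessingService`, `getcapabilities`, and `execute` are standard OWSLib calls in recent versions.

```python
# Minimal sketch of talking to a WPS endpoint with OWSLib (pip install owslib).
# The URL and process identifier below are placeholders, not real PAVICS routes.
from owslib.wps import WebProcessingService

wps = WebProcessingService("https://pavics.example.org/wps")  # hypothetical URL
wps.getcapabilities()
for process in wps.processes:
    print(process.identifier, "-", process.title)

# Executing a (hypothetical) process on a remote NetCDF resource:
# execution = wps.execute("subset_average",
#                         inputs=[("resource", "https://example.org/file.nc")])
# execution.checkStatus(sleepSecs=5)
```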

  14. Web service module for access to g-Lite

    NASA Astrophysics Data System (ADS)

    Goranova, R.; Goranov, G.

    2012-10-01

    G-Lite is a lightweight grid middleware for grid computing installed on all clusters of the European Grid Infrastructure (EGI). The middleware is partially service-oriented and does not provide well-defined Web services for job management. The existing Web services in the environment cannot be directly used by grid users for building service compositions in the EGI. In this article we present a module of well-defined Web services for job management in the EGI. We describe the architecture of the module and the design of the developed Web services. The presented Web services are composable and can participate in service compositions (workflows). An example of usage of the module with tools for service compositions in g-Lite is shown.

  15. Working with HITRAN Database Using Hapi: HITRAN Application Programming Interface

    NASA Astrophysics Data System (ADS)

    Kochanov, Roman V.; Hill, Christian; Wcislo, Piotr; Gordon, Iouli E.; Rothman, Laurence S.; Wilzewski, Jonas

    2015-06-01

    A HITRAN Application Programming Interface (HAPI) has been developed to give users much more flexibility and power on their local machines. HAPI is a programming interface for the main data-searching capabilities of the new "HITRANonline" web service (http://www.hitran.org). It provides the possibility to query spectroscopic data from the HITRAN database in a flexible manner using either functions or a query language. Some of the prominent current features of HAPI are: a) downloading line-by-line data from the HITRANonline site to a local machine; b) filtering and processing the data in an SQL-like fashion; c) conventional Python structures (lists, tuples, and dictionaries) for representing spectroscopic data; d) the possibility to use a large set of third-party Python libraries to work with the data; e) a Python implementation of the HT lineshape which can be reduced to a number of conventional line profiles; f) a Python implementation of total internal partition sums (TIPS-2011) for spectra simulations; g) high-resolution spectra calculation accounting for pressure, temperature and optical path length; h) instrumental functions to simulate experimental spectra; i) the possibility to extend HAPI's functionality with custom line profiles, partition sums and instrumental functions. Currently the API is a module written in Python and uses the Numpy library, providing fast array operations. The API is designed to deal with data in multiple formats such as ASCII, CSV, HDF5 and XSAMS. This work has been supported by NASA Aura Science Team Grant NNX14AI55G and NASA Planetary Atmospheres Grant NNX13AI59G. L.S. Rothman et al., JQSRT, Volume 130, 2013, Pages 4-50. N.H. Ngo et al., JQSRT, Volume 129, November 2013, Pages 89-100. A.L. Laraia et al., Icarus, Volume 215, Issue 1, September 2011, Pages 391-400.
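
    A typical local session looks like the sketch below: start a local database, download lines for a molecule over a wavenumber range, then compute an absorption coefficient. The call names follow the published HAPI manual, though exact signatures should be checked against the installed version.

```python
# Minimal sketch of a HAPI session (call names per the HAPI manual;
# verify signatures against your installed version).
from hapi import db_begin, fetch, absorptionCoefficient_Lorentz

db_begin("hitran_data")            # local folder for downloaded line lists
fetch("H2O", 1, 1, 3400, 4100)     # molecule 1 (H2O), isotopologue 1, numin..numax in cm-1

# Line-by-line absorption coefficient from the downloaded table:
nu, coef = absorptionCoefficient_Lorentz(
    SourceTables="H2O",
    Diluent={"air": 1.0},          # pressure broadening by air
)
print(len(nu), "grid points; peak coefficient:", max(coef))
```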

  16. BOWS (bioinformatics open web services) to centralize bioinformatics tools in web services.

    PubMed

    Velloso, Henrique; Vialle, Ricardo A; Ortega, J Miguel

    2015-06-02

    Bioinformaticians face a range of difficulties in getting locally-installed tools running and producing results; they would greatly benefit from a system that could centralize most of the tools behind an easy interface for input and output. Web services, due to their universal nature and widely known interface, constitute a very good option to achieve this goal. Bioinformatics open web services (BOWS) is a system based on generic web services produced to allow programmatic access to applications running on high-performance computing (HPC) clusters. BOWS mediates access to registered tools by providing front-end and back-end web services. Programmers can install applications on HPC clusters in any programming language and use the back-end service to check for new jobs and their parameters, and then to send the results to BOWS. Programs running on simple computers consume the BOWS front-end service to submit new processes and read results. BOWS compiles Java clients, which encapsulate the front-end web service requests, and automatically creates a web page that lists the registered applications and clients. BOWS-registered applications can be accessed from virtually any programming language through web services, or using the standard Java clients. The back-end can run in HPC clusters, allowing bioinformaticians to remotely run high-processing-demand applications directly from their machines.
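
    From the client's side, the front-end/back-end split reduces to submitting a job and polling for its result. The sketch below illustrates that pattern over plain HTTP with entirely hypothetical endpoint names and fields, since BOWS publishes its interface as web services and generated Java clients rather than the routes shown here.

```python
import time
import requests

# Illustrative submit-and-poll client for a BOWS-like front-end service.
# All URLs, fields and statuses below are hypothetical, for illustration only.
BASE = "http://bows.example.org/frontend"

job = requests.post(f"{BASE}/jobs",
                    json={"tool": "blast", "query": ">seq1\nACGT"},
                    timeout=30).json()
job_id = job["id"]

while True:
    status = requests.get(f"{BASE}/jobs/{job_id}", timeout=30).json()
    if status["state"] in ("FINISHED", "FAILED"):
        break
    time.sleep(10)  # the HPC back end picks the job up asynchronously

print(status["state"], status.get("result_url"))
```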

  17. New Data Products and Tools from STEREO's IMPACT Investigation

    NASA Astrophysics Data System (ADS)

    Schroeder, P. C.; Luhmann, J. G.; Li, Y.; Huttunen, E.; Toy, V.; Russell, C. T.; Davis, A.

    2008-05-01

    STEREO's IMPACT (In-situ Measurements of Particles and CME Transients) investigation provides the first opportunity for long-duration, detailed observations of 1 AU magnetic field structures, plasma and suprathermal electrons, and energetic particles at points bracketing Earth's heliospheric location. The IMPACT team has developed both human-friendly web portals and APIs (application programming interfaces) which provide straightforward data access to scientists and the Virtual Observatories. Several new data products and web features have been developed, including the release of more and higher-level data products, a library of TPLOT-based IDL data analysis routines, and web sites devoted to event analysis and inter-mission and inter-disciplinary collaboration. A web browser integrating STEREO and L1 in situ data with imaging and models provides a powerful venue for exploring the Sun-Earth connection. These latest releases enable the larger scientific community to use STEREO's unique in-situ data in novel ways alongside "third eye" data sets from Wind, ACE and other heliospheric missions, contributing to a better understanding of three-dimensional structures in the solar wind.

  18. Mission Services Evolution Center Message Bus

    NASA Technical Reports Server (NTRS)

    Mayorga, Arturo; Bristow, John O.; Butschky, Mike

    2011-01-01

    The Goddard Mission Services Evolution Center (GMSEC) Message Bus is a robust, lightweight, fault-tolerant middleware implementation that supports all messaging capabilities of the GMSEC API. This architecture is a distributed software system that routes messages based on message subject names and knowledge of the locations in the network of the interested software components.

  19. BioServices: a common Python package to access biological Web Services programmatically.

    PubMed

    Cokelaer, Thomas; Pultz, Dennis; Harder, Lea M; Serra-Musach, Jordi; Saez-Rodriguez, Julio

    2013-12-15

    Web interfaces provide access to numerous biological databases, and many can be accessed programmatically thanks to Web Services. Building applications that combine several of them would benefit from a single framework. BioServices is a comprehensive Python framework that provides programmatic access to major bioinformatics Web Services (e.g., KEGG, UniProt, BioModels, ChEMBLdb). Wrapping additional Web Services based on either Representational State Transfer or Simple Object Access Protocol/Web Services Description Language technologies is eased by the use of object-oriented programming. BioServices releases and documentation are available at http://pypi.python.org/pypi/bioservices under a GPL-v3 license.
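
    The following sketch shows the flavor of programmatic access BioServices provides; the exact method names and arguments may differ between releases, so check the package documentation before relying on them.

    ```python
    # Sketch of BioServices usage; the method signature is from memory
    # and should be verified against the current documentation.
    from bioservices import UniProt

    u = UniProt()
    # Search UniProt for a gene name and retrieve a few tabulated records.
    results = u.search("zap70", frmt="tab", limit=3)
    print(results)
    ```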

  20. Research on the development and preliminary application of Beijing agricultural sci-tech service hotline WebApp in agricultural consulting services

    NASA Astrophysics Data System (ADS)

    Yu, Weishui; Luo, Changshou; Zheng, Yaming; Wei, Qingfeng; Cao, Chengzhong

    2017-09-01

    To address the “last kilometer” problem in agricultural science and technology information services, we analyzed the feasibility, necessity, and advantages of WebApps for agricultural information services and, based on a requirements analysis and the functions of WebApps, discussed the modes in which they can be used. To overcome existing apps' drawbacks of difficult installation and weak compatibility across mobile operating systems, the Beijing Agricultural Sci-tech Service Hotline WebApp was developed based on HTML and JAVA technology. The WebApp offers greater compatibility and simpler operation than a native app; moreover, it can be linked to the WeChat public platform, which lets it spread easily and run directly without an installation process. The WebApp was used to provide agricultural expert consulting services and to push agricultural information, with good preliminary results. Finally, we summarize this novel application of WebApps in agricultural consulting services and consider the future development of WebApps in agricultural information services.

  1. High-Throughput and Low-Latency Network Communication with NetIO

    NASA Astrophysics Data System (ADS)

    Schumacher, Jörn; Plessl, Christian; Vandelli, Wainer

    2017-10-01

    HPC network technologies like Infiniband, TrueScale or OmniPath provide low-latency and high-throughput communication between hosts, which makes them attractive options for data-acquisition systems in large-scale high-energy physics experiments. Like HPC networks, DAQ networks are local and include a well-specified number of systems. Unfortunately, traditional network communication APIs for HPC clusters like MPI or PGAS exclusively target the HPC community and are not well suited for DAQ applications. It is possible to build distributed DAQ applications using low-level system APIs like Infiniband Verbs, but doing so requires non-negligible effort and expert knowledge. At the same time, message services like ZeroMQ have gained popularity in the HEP community. They make it possible to build distributed applications with a high-level approach and provide good performance. Unfortunately, their usage usually limits developers to TCP/IP-based networks. While it is possible to operate a TCP/IP stack on top of Infiniband and OmniPath, this approach may not be very efficient compared to direct use of the native APIs. NetIO is a simple, novel asynchronous message service that can operate on Ethernet, Infiniband, and similar network fabrics. This paper presents the design and implementation of NetIO and evaluates its use in comparison to other approaches. NetIO supports different high-level programming models and typical workloads of HEP applications. The ATLAS FELIX project [1] successfully uses NetIO as its central communication platform. The architecture of NetIO is described, including the user-level API and the internal data-flow design. The paper includes a performance evaluation of NetIO with throughput and latency measurements, compared against the state-of-the-art ZeroMQ message service. Performance measurements were performed in a lab environment with Ethernet and FDR Infiniband networks.

  2. Solar System Treks: Interactive Web Portals for STEM, Exploration and Beyond

    NASA Astrophysics Data System (ADS)

    Law, E.; Day, B. H.; Viotti, M.

    2017-12-01

    NASA's Solar System Treks project produces a suite of online visualization and analysis tools for lunar and planetary mapping and modeling. These portals offer great benefits for education and public outreach, providing access to data from a wide range of instruments aboard a variety of past and current missions. As a component of NASA's STEM Activation Infrastructure, they are available as resources for NASA STEM programs, and to the greater STEM community. As new missions are planned to a variety of planetary bodies, these tools facilitate public understanding of the missions and engage the public in the process of identifying and selecting where these missions will land. There are currently three web portals in the program: Moon Trek (https://moontrek.jpl.nasa.gov), Mars Trek (https://marstrek.jpl.nasa.gov), and Vesta Trek (https://vestatrek.jpl.nasa.gov). A new release of Mars Trek includes new tools and data products focusing on human landing site selection. Backed by evidence-based cognitive and computer science findings, an additional version is available for educational and public audiences in support of learning along novice-to-expert pathways, enabling authentic, real-world interaction with planetary data. Portals for additional planetary bodies are planned. As web-based toolsets, the portals do not require users to purchase or install any software beyond current web browsers. The portals provide analysis tools for measurement and study of planetary terrain. They allow data to be layered and adjusted to optimize visualization. Visualizations are easily stored and shared. The portals provide 3D visualization and give users the ability to mark terrain for generation of STL/OBJ files that can be directed to 3D printers. Such 3D prints are valuable tools in museums, public exhibits, and classrooms - especially for the visually impaired. The program supports additional clients, web services, and APIs facilitating dissemination of planetary data to external applications and venues. NASA challenges and hackathons also provide members of the software development community opportunities to participate in tool development and leverage data from the portals.

  3. Earth Science Mobile App Development for Non-Programmers

    NASA Astrophysics Data System (ADS)

    Oostra, D.; Crecelius, S.; Lewis, P.; Chambers, L. H.

    2012-08-01

    A number of cloud-based visual development tools have emerged that provide methods for developing mobile applications quickly and without previous programming experience. The MY NASA DATA (MND) team would like to begin a discussion on how we can best leverage current mobile app technologies and available Earth science datasets. The MY NASA DATA team is developing an approach based on two main ideas. The first is to teach our constituents how to create mobile applications that interact with NASA datasets; the second is to provide web services or Application Programming Interfaces (APIs) that create sources of data that educators, students, and scientists can use in their own mobile app development. This framework allows data providers to foster mobile application development and interaction without becoming a software clearing house. MY NASA DATA's research has included meetings with local data providers, educators, libraries, and individuals. A high level of interest has been identified from initial discussions and interviews. This overt interest, combined with the marked popularity of mobile applications in society, has created a new channel for outreach and communications with and between the science and educational communities.

  4. Information Retrieval System for Japanese Standard Disease-Code Master Using XML Web Service

    PubMed Central

    Hatano, Kenji; Ohe, Kazuhiko

    2003-01-01

    An information retrieval system for the Japanese Standard Disease-Code Master using XML Web Service has been developed. XML Web Service is a distributed processing approach built on standard Internet technologies. With the seamless remote method invocation that XML Web Service provides, users can obtain the latest disease-code master information from rich desktop applications or web sites that refer to this service. PMID:14728364
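
    As a sketch of how such a WSDL-described service might be invoked from Python today, the snippet below uses the zeep SOAP library; the WSDL URL and operation name are hypothetical placeholders, since the original service's interface is not given in the abstract.

    ```python
    # Hypothetical SOAP/WSDL invocation; the URL and operation name are
    # placeholders, not the actual disease-code master service.
    from zeep import Client

    client = Client("http://example.jp/disease-master/service?wsdl")
    # Look up a disease record by code via remote method invocation.
    record = client.service.GetDiseaseByCode("8836022")
    print(record)
    ```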

  5. Extending the GI Brokering Suite to Support New Interoperability Specifications

    NASA Astrophysics Data System (ADS)

    Boldrini, E.; Papeschi, F.; Santoro, M.; Nativi, S.

    2014-12-01

    The GI brokering suite provides the discovery, access, and semantic Brokers (i.e. GI-cat, GI-axe, GI-sem) that empower a Brokering framework for multi-disciplinary and multi-organizational interoperability. The GI suite has been successfully deployed in the framework of several programmes and initiatives, such as European Union-funded projects, NSF BCube, and the intergovernmental coordinated effort Global Earth Observation System of Systems (GEOSS). Each GI suite Broker facilitates interoperability for a particular functionality (i.e. discovery, access, semantic extension) among a set of brokered resources published by autonomous providers (e.g. data repositories, web services, semantic assets) and a set of heterogeneous consumers (e.g. client applications, portals, apps). A wide set of data models, encoding formats, and service protocols are already supported by the GI suite, such as the ones defined by international standardizing organizations like OGC and ISO (e.g. WxS, CSW, SWE, GML, netCDF) and by Community specifications (e.g. THREDDS, OpenSearch, OPeNDAP, ESRI APIs). Using the GI suite, resources published by a particular Community or organization through their specific technology (e.g. OPeNDAP/netCDF) can be transparently discovered, accessed, and used by different Communities utilizing their preferred tools (e.g. a GIS visualizing WMS layers). Since Information Technology is a moving target, new standards and technologies continuously emerge and are adopted in the Earth Science context too. Therefore, the GI brokering suite was conceived to be flexible and to accommodate new interoperability protocols and data models. For example, the suite has recently added support for widely used specifications introduced to implement Linked Data, the Semantic Web, and specific community needs. Among others, these include DCAT, an RDF vocabulary designed to facilitate interoperability between Web data catalogs; CKAN, a data management system for data distribution, particularly used by public administrations; CERIF, used by CRIS (Current Research Information System) instances; and the Hyrax server, a scientific dataset publishing component. This presentation will discuss these and other recent GI suite extensions implemented to support new interoperability protocols in use by the Earth Science communities.

  6. panMetaDocs, eSciDoc, and DOIDB - an infrastructure for the curation and publication of file-based datasets for 'GFZ Data Services'

    NASA Astrophysics Data System (ADS)

    Ulbricht, Damian; Elger, Kirsten; Bertelmann, Roland; Klump, Jens

    2016-04-01

    With the foundation of DataCite in 2009 and the technical infrastructure installed in the last six years, it has become very easy to create citable dataset DOIs. Nowadays, dataset DOIs are increasingly accepted and required by journals in reference lists of manuscripts. In addition, DataCite provides usage statistics [1] of assigned DOIs and offers a public search API to make research data count. By linking related information to the data, they become more useful for future generations of scientists. For this purpose, several identifier systems, such as ISBN for books, ISSN for journals, DOI for articles or related data, ORCID for authors, and IGSN for physical samples, can be attached to DOIs using the DataCite metadata schema [2]. While these are good preconditions to publish data, free and open solutions that help with the curation of data, the publication of research data, and the assignment of DOIs in a single piece of software seem to be rare. At GFZ Potsdam we built a modular software stack made of several free and open software solutions, and we established 'GFZ Data Services'. 'GFZ Data Services' provides storage, a metadata editor for publication, and a facility to moderate minted DOIs. All software solutions are connected through web APIs, which makes it possible to reuse and integrate established software. The core component of 'GFZ Data Services' is the eSciDoc [3] middleware, which is used as central storage and has been designed along the OAIS reference model for digital preservation. Thus, data are stored in self-contained packages that are made of binary file-based data and XML-based metadata. The eSciDoc infrastructure provides access control to data and is able to handle half-open datasets, which is useful in embargo situations when a subset of the research data is released after an adequate period. The data exchange platform panMetaDocs [4] makes use of eSciDoc's REST API to upload file-based data into eSciDoc and uses a metadata editor [5] to annotate the files with metadata. The metadata editor has a user-friendly interface with nominal lists, extensive explanations, and an interactive mapping tool to assist scientists in describing the data. It is possible to deposit metadata templates to fill certain fields with default values. The metadata editor generates metadata in the schemas ISO19139, NASA GCMD DIF, and DataCite, and could be extended for other schemas. panMetaDocs is able to mint dataset DOIs through DOIDB, which is our component to moderate dataset DOIs issued through 'GFZ Data Services'. DOIDB accepts metadata in the schemas ISO19139, DIF, and DataCite. In addition, DOIDB provides an OAI-PMH interface to disseminate all deposited metadata to data portals. The presentation of datasets on DOI landing pages is done through XSLT stylesheet transformation of the XML-based metadata. The landing pages have been designed to meet the needs of scientists. We are able to render the metadata to different layouts. Furthermore, additional information about datasets and publications is assembled into the webpage by querying public databases on the internet. The work presented here will focus on technical details of the software stack. [1] http://stats.datacite.org [2] http://www.dlib.org/dlib/january11/starr/01starr.html [3] http://www.escidoc.org [4] http://panmetadocs.sf.net [5] http://github.com/ulbricht

  7. Final Report on the Creation of the Wind Integration National Dataset (WIND) Toolkit and API: October 1, 2013 - September 30, 2015

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hodge, Bri-Mathias

    2016-04-08

    The primary objective of this work was to create a state-of-the-art national wind resource data set and to provide detailed wind plant output data for specific sites based on that data set. Corresponding retrospective wind forecasts were also included at all selected locations. The combined information from these activities was used to create the Wind Integration National Dataset (WIND), and an extraction tool was developed to allow web-based data access.

  8. The CHORDS Portal: Lowering the Barrier for Internet Collection, Archival and Distribution of Real-Time Geophysical Observations

    NASA Astrophysics Data System (ADS)

    Martin, C.; Dye, M. J.; Daniels, M. D.; Keiser, K.; Maskey, M.; Graves, S. J.; Kerkez, B.; Chandrasekar, V.; Vernon, F.

    2015-12-01

    The Cloud-Hosted Real-time Data Services for the Geosciences (CHORDS) project tackles the challenges of collecting and disseminating geophysical observational data in real-time, especially for researchers with limited IT budgets and expertise. The CHORDS Portal is a component that allows research teams to easily configure and operate a cloud-based service which can receive data from dispersed instruments, manage a rolling archive of the observations, and serve these data to any client on the Internet. The research group (user) creates a CHORDS portal simply by running a prepackaged "CHORDS appliance" on Amazon Web Services. The user has complete ownership and management of the portal. Computing expenses are typically very small. RESTful protocols are employed for delivering and fetching data from the portal, which means that any system capable of sending an HTTP GET message is capable of accessing the portal. A simple API is defined, making it straightforward for non-experts to integrate a diverse collection of field instruments. Languages with network access libraries, such as Python, sh, Matlab, R, IDL, Ruby and JavaScript (and most others) can retrieve structured data from the portal with just a few lines of code. The user's private portal provides a browser-based system for configuring, managing and monitoring the health of the integrated real-time system. This talk will highlight the design goals, architecture and agile development of the CHORDS Portal. A running portal, with operational data feeds from across the country, will be presented.
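
    Since the abstract notes that any language able to send an HTTP GET can fetch structured data from a portal, here is a minimal sketch of that pattern in Python; the host name, instrument identifier, and query parameters are hypothetical, not CHORDS' documented URL scheme.

    ```python
    # Fetching recent measurements from a hypothetical CHORDS portal;
    # the URL layout and parameter names are illustrative placeholders.
    import requests

    resp = requests.get(
        "http://my-portal.example.org/instruments/5.json",  # hypothetical
        params={"last": "1h"},  # e.g., the last hour of observations
    )
    resp.raise_for_status()
    for measurement in resp.json().get("data", []):
        print(measurement)
    ```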

  9. Integrated platform and API for electrophysiological data

    PubMed Central

    Sobolev, Andrey; Stoewer, Adrian; Leonhardt, Aljoscha; Rautenberg, Philipp L.; Kellner, Christian J.; Garbers, Christian; Wachtler, Thomas

    2014-01-01

    Recent advancements in technology and methodology have led to growing amounts of increasingly complex neuroscience data recorded from various species, modalities, and levels of study. The rapid data growth has made efficient data access and flexible, machine-readable data annotation a crucial requisite for neuroscientists. Clear and consistent annotation and organization of data is not only an important ingredient for reproducibility of results and re-use of data, but also essential for collaborative research and data sharing. In particular, efficient data management and interoperability requires a unified approach that integrates data and metadata and provides a common way of accessing this information. In this paper we describe GNData, a data management platform for neurophysiological data. GNData provides a storage system based on a data representation that is suitable to organize data and metadata from any electrophysiological experiment, with a functionality exposed via a common application programming interface (API). Data representation and API structure are compatible with existing approaches for data and metadata representation in neurophysiology. The API implementation is based on the Representational State Transfer (REST) pattern, which enables data access integration in software applications and facilitates the development of tools that communicate with the service. Client libraries that interact with the API provide direct data access from computing environments like Matlab or Python, enabling integration of data management into the scientist's experimental or analysis routines. PMID:24795616

  10. Integrated platform and API for electrophysiological data.

    PubMed

    Sobolev, Andrey; Stoewer, Adrian; Leonhardt, Aljoscha; Rautenberg, Philipp L; Kellner, Christian J; Garbers, Christian; Wachtler, Thomas

    2014-01-01

    Recent advancements in technology and methodology have led to growing amounts of increasingly complex neuroscience data recorded from various species, modalities, and levels of study. The rapid data growth has made efficient data access and flexible, machine-readable data annotation a crucial requisite for neuroscientists. Clear and consistent annotation and organization of data is not only an important ingredient for reproducibility of results and re-use of data, but also essential for collaborative research and data sharing. In particular, efficient data management and interoperability requires a unified approach that integrates data and metadata and provides a common way of accessing this information. In this paper we describe GNData, a data management platform for neurophysiological data. GNData provides a storage system based on a data representation that is suitable to organize data and metadata from any electrophysiological experiment, with a functionality exposed via a common application programming interface (API). Data representation and API structure are compatible with existing approaches for data and metadata representation in neurophysiology. The API implementation is based on the Representational State Transfer (REST) pattern, which enables data access integration in software applications and facilitates the development of tools that communicate with the service. Client libraries that interact with the API provide direct data access from computing environments like Matlab or Python, enabling integration of data management into the scientist's experimental or analysis routines.

  11. Persistence and availability of Web services in computational biology.

    PubMed

    Schultheiss, Sebastian J; Münch, Marc-Christian; Andreeva, Gergana D; Rätsch, Gunnar

    2011-01-01

    We have conducted a study on the long-term availability of bioinformatics Web services: an observation of 927 Web services published in the annual Nucleic Acids Research Web Server Issues between 2003 and 2009. We found that 72% of Web sites are still available at the published addresses, while only 9% of services are completely unavailable. Older addresses often redirect to new pages. We checked the functionality of all available services: for 33%, we could not test functionality because there was no example data or because of a related problem; 13% were truly no longer working as expected; we could positively confirm functionality for only 45% of all services. Additionally, we conducted a survey among 872 Web Server Issue corresponding authors; 274 replied. 78% of all respondents indicated that their services had been developed solely by students and researchers without a permanent position. Consequently, these services are in danger of falling into disrepair after the original developers move to another institution, and indeed, for 24% of services there is no plan for maintenance, according to the respondents. We introduce a Web service quality scoring system that correlates with the number of citations: services with a high score are cited 1.8 times more often than low-scoring services. We have identified key characteristics that are predictive of a service's survival, providing reviewers, editors, and Web service developers with the means to assess or improve Web services. A Web service conforming to these criteria receives more citations and provides more reliable service for its users. The most effective way of ensuring continued access to a service is a persistent Web address, offered either by the publishing journal or created on the authors' own initiative, for example at http://bioweb.me. The community would benefit the most from a policy requiring any source code needed to reproduce results to be deposited in a public repository.
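
    At its core, a study like this reduces to repeatedly checking whether published service URLs still respond. The sketch below shows a minimal version of such an availability probe; it illustrates the general method only and is not the authors' actual survey code.

    ```python
    # Minimal availability probe for a list of published service URLs;
    # illustrative only, not the study's actual methodology code.
    import requests

    urls = ["http://bioweb.me", "http://example.org/defunct-service"]
    available = 0
    for url in urls:
        try:
            # Follow redirects: older addresses often point to new pages.
            r = requests.get(url, timeout=10, allow_redirects=True)
            if r.status_code < 400:
                available += 1
        except requests.RequestException:
            pass  # unreachable: counts as unavailable
    print(f"{available}/{len(urls)} services reachable")
    ```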

  12. Persistence and Availability of Web Services in Computational Biology

    PubMed Central

    Schultheiss, Sebastian J.; Münch, Marc-Christian; Andreeva, Gergana D.; Rätsch, Gunnar

    2011-01-01

    We have conducted a study on the long-term availability of bioinformatics Web services: an observation of 927 Web services published in the annual Nucleic Acids Research Web Server Issues between 2003 and 2009. We found that 72% of Web sites are still available at the published addresses, while only 9% of services are completely unavailable. Older addresses often redirect to new pages. We checked the functionality of all available services: for 33%, we could not test functionality because there was no example data or because of a related problem; 13% were truly no longer working as expected; we could positively confirm functionality for only 45% of all services. Additionally, we conducted a survey among 872 Web Server Issue corresponding authors; 274 replied. 78% of all respondents indicated that their services had been developed solely by students and researchers without a permanent position. Consequently, these services are in danger of falling into disrepair after the original developers move to another institution, and indeed, for 24% of services there is no plan for maintenance, according to the respondents. We introduce a Web service quality scoring system that correlates with the number of citations: services with a high score are cited 1.8 times more often than low-scoring services. We have identified key characteristics that are predictive of a service's survival, providing reviewers, editors, and Web service developers with the means to assess or improve Web services. A Web service conforming to these criteria receives more citations and provides more reliable service for its users. The most effective way of ensuring continued access to a service is a persistent Web address, offered either by the publishing journal or created on the authors' own initiative, for example at http://bioweb.me. The community would benefit the most from a policy requiring any source code needed to reproduce results to be deposited in a public repository. PMID:21966383

  13. Search without Boundaries Using Simple APIs

    USGS Publications Warehouse

    Tong, Qi

    2009-01-01

    The U.S. Geological Survey (USGS) Library, where the author serves as the digital services librarian, is increasingly challenged to make it easier for users to find information from many heterogeneous information sources. Information is scattered throughout different software applications (i.e., library catalog, federated search engine, link resolver, and vendor websites), and each specializes in one thing. How could the library integrate the functionalities of one application with another and provide a single point of entry for users to search across? To improve the user experience, the library launched an effort to integrate the federated search engine into the library's intranet website. The result is a simple search box that leverages the federated search engine's built-in application programming interfaces (APIs). In this article, the author describes how this project demonstrated the power of APIs and their potential to be used by other enterprise search portals inside or outside of the library.

  14. Space Physics Data Facility Web Services

    NASA Technical Reports Server (NTRS)

    Candey, Robert M.; Harris, Bernard T.; Chimiak, Reine A.

    2005-01-01

    The Space Physics Data Facility (SPDF) Web services provide a distributed programming interface to a portion of the SPDF software. (A general description of Web services is available at http://www.w3.org/ and in many current software-engineering texts and articles focused on distributed programming.) The SPDF Web services distributed programming interface enables additional collaboration and integration of the SPDF software system with other software systems, in furtherance of the SPDF mission to lead collaborative efforts in the collection and utilization of space physics data and mathematical models. This programming interface conforms to all applicable Web services specifications of the World Wide Web Consortium. The interface is specified by a Web Services Description Language (WSDL) file. The SPDF Web services software consists of the following components: 1) a server program for implementation of the Web services; and 2) a software developer's kit that consists of a WSDL file, a less formal description of the interface, a Java class library (which further eases development of Java-based client software), and Java source code for an example client program that illustrates the use of the interface.

  15. The EMBRACE web service collection

    PubMed Central

    Pettifer, Steve; Ison, Jon; Kalaš, Matúš; Thorne, Dave; McDermott, Philip; Jonassen, Inge; Liaquat, Ali; Fernández, José M.; Rodriguez, Jose M.; Partners, INB-; Pisano, David G.; Blanchet, Christophe; Uludag, Mahmut; Rice, Peter; Bartaseviciute, Edita; Rapacki, Kristoffer; Hekkelman, Maarten; Sand, Olivier; Stockinger, Heinz; Clegg, Andrew B.; Bongcam-Rudloff, Erik; Salzemann, Jean; Breton, Vincent; Attwood, Teresa K.; Cameron, Graham; Vriend, Gert

    2010-01-01

    The EMBRACE (European Model for Bioinformatics Research and Community Education) web service collection is the culmination of a 5-year project that set out to investigate issues involved in developing and deploying web services for use in the life sciences. The project concluded that in order for web services to achieve widespread adoption, standards must be defined for the choice of web service technology, for semantically annotating both service function and the data exchanged, and a mechanism for discovering services must be provided. Building on this, the project developed: EDAM, an ontology for describing life science web services; BioXSD, a schema for exchanging data between services; and a centralized registry (http://www.embraceregistry.net) that collects together around 1000 services developed by the consortium partners. This article presents the current status of the collection and its associated recommendations and standards definitions. PMID:20462862

  16. Open Core Data: Connecting scientific drilling data to scientists and community data resources

    NASA Astrophysics Data System (ADS)

    Fils, D.; Noren, A. J.; Lehnert, K.; Diver, P.

    2016-12-01

    Open Core Data (OCD) is an innovative, efficient, and scalable infrastructure for data generated by scientific drilling and coring to improve discoverability, accessibility, citability, and preservation of data from the oceans and continents. OCD is building on existing community data resources that manage, store, publish, and preserve scientific drilling data, filling a critical void that currently prevents linkages between these and other data systems and tools to realize the full potential of data generated through drilling and coring. We are developing this functionality through Linked Open Data (LOD) and semantic patterns that enable data access through the use of community ontologies such as GeoLink (geolink.org, an EarthCube Building Block), a collection of protocols, formats and vocabularies from a set of participating geoscience repositories. Common shared concepts of classes such as cruise, dataset, person and others allow easier resolution of common references through shared resource IDs. These graphs are then made available via SPARQL as well as incorporated into web pages following schema.org approaches. Additionally the W3C PROV vocabulary is under evaluation for use for documentation of provenance. Further, the application of persistent identifiers for samples (IGSNs); datasets, expeditions, and projects (DOIs); and people (ORCIDs), combined with LOD approaches, provides methods to resolve and incorporate metadata and datasets. Application Program Interfaces (APIs) complement these semantic approaches to the OCD data holdings. APIs are exposed following the Swagger guidelines (swagger.io) and will be evolved into the OpenAPI (openapis.org) approach. Currently APIs are in development for the NSF funded Flyover Country mobile geoscience app (fc.umn.edu), the Neotoma Paleoecology Database (neotomadb.org), Magnetics Information Consortium (MagIC; earthref.org/MagIC), and other community tools and data systems, as well as for internal OCD use.
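
    To give a feel for the Linked Open Data access pattern described above, here is a sketch of a SPARQL query for datasets associated with a cruise; the endpoint URL is hypothetical, and the property names are loosely modeled on GeoLink-style vocabularies rather than copied from them.

    ```python
    # Hypothetical SPARQL query against an OCD-style endpoint; the URL and
    # vocabulary terms are illustrative, not Open Core Data's actual schema.
    from SPARQLWrapper import SPARQLWrapper, JSON

    sparql = SPARQLWrapper("http://opencoredata.example.org/sparql")  # hypothetical
    sparql.setQuery("""
        PREFIX glview: <http://schema.geolink.org/1.0/base/main#>
        SELECT ?dataset ?cruise WHERE {
            ?dataset glview:isPartOf ?cruise .
        } LIMIT 10
    """)
    sparql.setReturnFormat(JSON)
    for row in sparql.query().convert()["results"]["bindings"]:
        print(row["dataset"]["value"], row["cruise"]["value"])
    ```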

  17. Improving usability and accessibility of cheminformatics tools for chemists through cyberinfrastructure and education.

    PubMed

    Guha, Rajarshi; Wiggins, Gary D; Wild, David J; Baik, Mu-Hyun; Pierce And, Marlon E; Fox, Geoffrey C

    Some of the latest trends in cheminformatics, computation, and the World Wide Web are reviewed, with predictions of how these are likely to impact the field of cheminformatics in the next five years. The vision and some of the work of the Chemical Informatics and Cyberinfrastructure Collaboratory at Indiana University are described, which we base around the core concepts of e-Science and cyberinfrastructure that have proven successful in other fields. Our chemical informatics cyberinfrastructure is realized by building a flexible, generic infrastructure for cheminformatics tools and databases, exporting "best of breed" methods as easily accessible web APIs for cheminformaticians, scientists, and researchers in other disciplines, and hosting a unique chemical informatics education program aimed at scientists and cheminformatics practitioners in academia and industry.

  18. The SBOL Stack: A Platform for Storing, Publishing, and Sharing Synthetic Biology Designs.

    PubMed

    Madsen, Curtis; McLaughlin, James Alastair; Mısırlı, Göksel; Pocock, Matthew; Flanagan, Keith; Hallinan, Jennifer; Wipat, Anil

    2016-06-17

    Recently, synthetic biologists have developed the Synthetic Biology Open Language (SBOL), a data exchange standard for descriptions of genetic parts, devices, modules, and systems. The goals of this standard are to allow scientists to exchange designs of biological parts and systems, to facilitate the storage of genetic designs in repositories, and to facilitate the description of genetic designs in publications. In order to achieve these goals, the development of an infrastructure to store, retrieve, and exchange SBOL data is necessary. To address this problem, we have developed the SBOL Stack, a Resource Description Framework (RDF) database specifically designed for the storage, integration, and publication of SBOL data. This database allows users to define a library of synthetic parts and designs as a service, to share SBOL data with collaborators, and to store designs of biological systems locally. The database also allows external data sources to be integrated by mapping them to the SBOL data model. The SBOL Stack includes two Web interfaces: the SBOL Stack API and SynBioHub. While the former is designed for developers, the latter allows users to upload new SBOL biological designs, download SBOL documents, search by keyword, and visualize SBOL data. Since the SBOL Stack is based on semantic Web technology, the inherent distributed querying functionality of RDF databases can be used to allow different SBOL stack databases to be queried simultaneously, and therefore, data can be shared between different institutes, centers, or other users.

  19. Evaluation and Verification of the Global Rapid Identification of Threats System for Infectious Diseases in Textual Data Sources.

    PubMed

    Huff, Andrew G; Breit, Nathan; Allen, Toph; Whiting, Karissa; Kiley, Christopher

    2016-01-01

    The Global Rapid Identification of Threats System (GRITS) is a biosurveillance application that enables infectious disease analysts to monitor nontraditional information sources (e.g., social media, online news outlets, ProMED-mail reports, and blogs) for infectious disease threats. GRITS analyzes these textual data sources by identifying, extracting, and succinctly visualizing epidemiologic information and suggests potentially associated infectious diseases. This manuscript evaluates and verifies the diagnoses that GRITS performs and discusses novel aspects of the software package. Via GRITS' web interface, infectious disease analysts can examine dynamic visualizations of GRITS' analyses and explore historical infectious disease emergence events. The GRITS API can be used to continuously analyze information feeds, and the API enables GRITS technology to be easily incorporated into other biosurveillance systems. GRITS is a flexible tool that can be modified to conduct sophisticated medical report triaging, expanded to include customized alert systems, and tailored to address other biosurveillance needs.

  20. Evaluation and Verification of the Global Rapid Identification of Threats System for Infectious Diseases in Textual Data Sources

    PubMed Central

    Breit, Nathan

    2016-01-01

    The Global Rapid Identification of Threats System (GRITS) is a biosurveillance application that enables infectious disease analysts to monitor nontraditional information sources (e.g., social media, online news outlets, ProMED-mail reports, and blogs) for infectious disease threats. GRITS analyzes these textual data sources by identifying, extracting, and succinctly visualizing epidemiologic information and suggests potentially associated infectious diseases. This manuscript evaluates and verifies the diagnoses that GRITS performs and discusses novel aspects of the software package. Via GRITS' web interface, infectious disease analysts can examine dynamic visualizations of GRITS' analyses and explore historical infectious disease emergence events. The GRITS API can be used to continuously analyze information feeds, and the API enables GRITS technology to be easily incorporated into other biosurveillance systems. GRITS is a flexible tool that can be modified to conduct sophisticated medical report triaging, expanded to include customized alert systems, and tailored to address other biosurveillance needs. PMID:27698665

  1. Efficacy of single and multi-metric fish-based indices in tracking anthropogenic pressures in estuaries: An 8-year case study.

    PubMed

    Martinho, Filipe; Nyitrai, Daniel; Crespo, Daniel; Pardal, Miguel A

    2015-12-15

    Facing a generalized increase in water degradation, several programmes have been implemented to protect and enhance water quality and the associated wildlife; these rely on ecological indicators to assess the degree of deviation from a pristine state. Here, single-metric (species number, Shannon-Wiener H', Pielou J') and multi-metric (Estuarine Fish Assessment Index, EFAI) community-based ecological quality measures were evaluated in a temperate estuary over an 8-year period (2005-2012), and their relationships with an anthropogenic pressure index (API) were established. Single-metric indices were highly variable and concordant neither amongst themselves nor with the EFAI. The EFAI was the only index significantly correlated with the API, indicating that higher ecological quality was associated with lower anthropogenic pressure. Pressure scenarios were related to specific fish community compositions, as a result of distinct food web complexity and the nursery functioning of the estuary. Results are discussed in the scope of the implementation of water protection programmes. Copyright © 2015 Elsevier Ltd. All rights reserved.

  2. Earthdata Developer Portal poster

    NASA Astrophysics Data System (ADS)

    Plofchan, P.

    2016-12-01

    A common theme at community conferences in the Earth science domain is the need for more integration with related services, clearer documentation for available services, and a general simplification of what it takes to leverage existing tools, so that setup and administration time can be minimized and time spent on research can be maximized. NASA's Earthdata Developer Portal (EDP) is the newly created central location for documentation related to interacting with services offered by the EOSDIS community. The EDP will provide technical documentation for APIs, process documentation with real-world examples of how to use existing APIs, release notes for applications and services so that the entire community can stay up to date on recent updates, and best-practice suggestions to improve the implementation of both front-end and back-end services. Application and service owners will own their documentation, while the EDP will ingest it and serve it up in an interface using both industry-standard tools, such as Swagger, and custom "adapters". The content is then styled to ensure consistency with other documentation found throughout the site and will be made searchable from a single location. "Getting Started" paths will also provide users new to the space a simple path to follow to perform common tasks such as "searching and getting data" or "hosting an application on the Earthdata platform."

  3. Design and Implementation of KSP on the Next Generation Cryptography API

    NASA Astrophysics Data System (ADS)

    Lina, Zhang

    With seamless connectivity and higher security, KSP (Key Storage Provider) is the inevitable successor to CSP (Cryptographic Service Provider) as security requirements and development evolve. However, the study of KSP has only just started in our country, and almost no reports of its implementation can be found. Based on an analysis of the function modules and architecture of the Cryptography API: Next Generation (CNG), this paper discusses the design and implementation of a smart-card-based KSP in detail, and an example is presented to illustrate how to use the KSP in Windows Vista.

  4. Enhancing UCSF Chimera through web services

    PubMed Central

    Huang, Conrad C.; Meng, Elaine C.; Morris, John H.; Pettersen, Eric F.; Ferrin, Thomas E.

    2014-01-01

    Integrating access to web services with desktop applications allows for an expanded set of application features, including performing computationally intensive tasks and convenient searches of databases. We describe how we have enhanced UCSF Chimera (http://www.rbvi.ucsf.edu/chimera/), a program for the interactive visualization and analysis of molecular structures and related data, through the addition of several web services (http://www.rbvi.ucsf.edu/chimera/docs/webservices.html). By streamlining access to web services, including the entire job submission, monitoring and retrieval process, Chimera makes it simpler for users to focus on their science projects rather than data manipulation. Chimera uses Opal, a toolkit for wrapping scientific applications as web services, to provide scalable and transparent access to several popular software packages. We illustrate Chimera's use of web services with an example workflow that interleaves use of these services with interactive manipulation of molecular sequences and structures, and we provide an example Python program to demonstrate how easily Opal-based web services can be accessed from within an application. Web server availability: http://webservices.rbvi.ucsf.edu/opal2/dashboard?command=serviceList. PMID:24861624

  5. The SubCons webserver: A user friendly web interface for state-of-the-art subcellular localization prediction.

    PubMed

    Salvatore, M; Shu, N; Elofsson, A

    2018-01-01

    SubCons is a recently developed method that predicts the subcellular localization of a protein. It combines predictions from four predictors using a Random Forest classifier. Here, we present the user-friendly web-interface implementation of SubCons. Starting from a protein sequence, the server rapidly predicts the subcellular localization of an individual protein. In addition, the server accepts the submission of sets of proteins, either by uploading files or programmatically using command-line WSDL API scripts. This makes SubCons ideal for proteome-wide analyses, allowing the user to scan a whole proteome in a few days. From the web page it is also possible to download precalculated predictions for several eukaryotic organisms. To evaluate the performance of SubCons, we present a benchmark of LocTree3 and SubCons using two recent mass-spectrometry-based datasets of mouse and drosophila proteins. The server is available at http://subcons.bioinfo.se/. © 2017 The Protein Society.

  6. 49 CFR 195.205 - Repair, alteration and reconstruction of aboveground breakout tanks that have been in service.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ..., examination, and material requirements of those respective standards. (3) For high pressure tanks built to API... service must be capable of withstanding the internal pressure produced by the hazardous liquid to be... approximately atmospheric pressure constructed of carbon and low alloy steel, welded or riveted, and non...

  7. Real-time GIS data model and sensor web service platform for environmental data management.

    PubMed

    Gong, Jianya; Geng, Jing; Chen, Zeqiang

    2015-01-09

    Effective environmental data management is meaningful for human health. In the past, environmental data management involved developing specific environmental data management systems, but these often lack real-time data retrieval and sharing/interoperation capabilities. With the development of information technology, the Geospatial Service Web has been proposed as a framework that can be employed for environmental data management. The purpose of this study is to determine a method of realizing environmental data management under the Geospatial Service Web framework. We propose a real-time GIS (Geographic Information System) data model, which manages real-time data, and a Sensor Web service platform, which supports the realization of the data model based on Sensor Web technologies. To support the proposed data model, a Sensor Web service platform was implemented, and real-time environmental data, such as meteorological data, air quality data, soil moisture data, soil temperature data, and landslide data, are managed in it. In addition, two use cases, real-time air quality monitoring and real-time soil moisture monitoring based on the real-time GIS data model in the Sensor Web service platform, were realized and demonstrated. The total processing times in the two experiments were 3.7 s and 9.2 s, respectively. The experimental results show that integrating a real-time GIS data model with a Sensor Web service platform is an effective way to manage environmental data under the Geospatial Service Web framework.

  8. Similarity Based Semantic Web Service Match

    NASA Astrophysics Data System (ADS)

    Peng, Hui; Niu, Wenjia; Huang, Ronghuai

    Semantic web service discovery aims to return the advertised services that best match a requested service by comparing the semantics of the request with those of each advertisement. The semantics of a web service are described in terms of inputs, outputs, preconditions, and results in the Ontology Web Language for Services (OWL-S), which is formalized by the W3C. In this paper we propose an algorithm that calculates the semantic similarity of two services as a weighted average of the similarities of their inputs and outputs. A case study and applications show the effectiveness of our algorithm in service matching.
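
    The sketch below illustrates one plausible reading of such a weighted-average match score: pair each required concept with its best-matching counterpart, score the pairs with some concept-similarity function, and combine the input and output averages with weights. The similarity function, the greedy pairing, and the weights are assumptions for illustration, not taken from the paper.

    ```python
    # Illustrative weighted-average service match; the similarity function,
    # greedy pairing, and weights are assumptions, not the paper's algorithm.
    def concept_similarity(a: str, b: str) -> float:
        """Stand-in for an ontology-based concept similarity in [0, 1]."""
        if a == b:
            return 1.0
        return 0.5 if a.split(":")[0] == b.split(":")[0] else 0.0

    def avg_best_match(needed: list[str], offered: list[str]) -> float:
        """Average, over needed concepts, of the best similarity offered."""
        if not needed:
            return 1.0  # nothing needed, trivially satisfied
        if not offered:
            return 0.0
        return sum(max(concept_similarity(n, o) for o in offered)
                   for n in needed) / len(needed)

    def service_similarity(req_in, req_out, adv_in, adv_out, w_in=0.4, w_out=0.6):
        # Inputs: the advertised service's inputs must be supplied by the request.
        # Outputs: the requested outputs must be produced by the advertisement.
        # Outputs weighted higher here: the requester cares most about results.
        return (w_in * avg_best_match(adv_in, req_in)
                + w_out * avg_best_match(req_out, adv_out))

    score = service_similarity(["geo:City"], ["weather:Forecast"],
                               ["geo:City"], ["weather:Report"])
    print(f"match score: {score:.2f}")
    ```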

  9. Boverhof's App Earns Honorable Mention in Amazon's Web Services

    Science.gov Websites

    Boverhof's app earned an honorable mention in a competition run by Amazon Web Services (AWS). Amazon officially announced the winners of its EC2 Spotathon on Monday.

  10. The Marine Geoscience Data System and the Global Multi-Resolution Topography Synthesis: Online Resources for Exploring Ocean Mapping Data

    NASA Astrophysics Data System (ADS)

    Ferrini, V. L.; Morton, J. J.; Carbotte, S. M.

    2016-02-01

    The Marine Geoscience Data System (MGDS: www.marine-geo.org) provides a suite of tools and services for free public access to data acquired throughout the global oceans, including maps, grids, near-bottom photos, and geologic interpretations that are essential for habitat characterization and marine spatial planning. Users can explore, discover, and download data through a combination of APIs and front-end interfaces that include dynamic service-driven maps, a geospatially enabled search engine, and an easy-to-navigate user interface for browsing and discovering related data. MGDS offers domain-specific data curation with a team of scientists and data specialists who utilize a suite of back-end tools for introspection of data files and metadata assembly to verify data quality and ensure that data are well documented for long-term preservation and re-use. Funded by the NSF as part of the multi-disciplinary IEDA Data Facility, MGDS also offers Data DOI registration and links between data and scientific publications. MGDS produces and curates the Global Multi-Resolution Topography Synthesis (GMRT: gmrt.marine-geo.org), a continuously updated Digital Elevation Model that seamlessly integrates multi-resolution elevation data from a variety of sources including the GEBCO 2014 (~1 km resolution) and International Bathymetric Chart of the Southern Ocean (~500 m) compilations. A significant component of GMRT includes ship-based multibeam sonar data, publicly available through NOAA's National Centers for Environmental Information, that are cleaned and quality controlled by the MGDS Team and gridded at their full spatial resolution (typically ~100 m resolution in the deep sea). Additional components include gridded bathymetry products contributed by individual scientists (up to meter-scale resolution in places), publicly accessible regional bathymetry, and high-resolution terrestrial elevation data. New data are added to GMRT on an ongoing basis, with two scheduled releases per year. GMRT is available as both gridded data and images that can be viewed and downloaded directly through the Java application GeoMapApp (www.geomapapp.org) and the web-based GMRT MapTool. In addition, the GMRT GridServer API provides programmatic access to grids, imagery, profiles, and single point elevation values.
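
    As an example of the programmatic access mentioned above, the sketch below queries a single elevation value. The endpoint path and parameter names follow what GMRT's point-query service is commonly described as accepting, but treat them as assumptions to verify against the current API documentation at gmrt.marine-geo.org.

    ```python
    # Querying a single elevation value from GMRT's web services; the
    # endpoint and parameter names should be verified against the docs.
    import requests

    resp = requests.get(
        "https://www.gmrt.org/services/PointServer",  # assumed endpoint
        params={"longitude": -70.0, "latitude": 35.0, "format": "text/plain"},
        timeout=30,
    )
    resp.raise_for_status()
    print(f"elevation: {resp.text.strip()} m")  # negative values are below sea level
    ```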

  11. The Impact of an Online Crowdsourcing Diagnostic Tool on Health Care Utilization: A Case Study Using a Novel Approach to Retrospective Claims Analysis.

    PubMed

    Juusola, Jessie L; Quisel, Thomas R; Foschini, Luca; Ladapo, Joseph A

    2016-06-01

    Patients with difficult medical cases often remain undiagnosed despite visiting multiple physicians. A new online platform, CrowdMed, uses crowdsourcing to quickly and efficiently reach an accurate diagnosis for these patients. This study sought to evaluate whether CrowdMed decreased health care utilization for patients who have used the service. Novel, electronic methods of patient recruitment and data collection were utilized. Patients who completed cases on CrowdMed's platform between July 2014 and April 2015 were recruited for the study via email and screened via an online survey. After providing eConsent, participants provided identifying information used to access their medical claims data, which was retrieved through a third-party web application programming interface (API). Utilization metrics, including frequency of provider visits and medical charges, were compared pre- and post-case resolution to assess the impact of resolving a case on CrowdMed. Of 45 CrowdMed users who completed the study survey, comprehensive claims data was available via the API for 13 participants, who made up the final enrolled sample. There were a total of 221 health care provider visits collected for the study participants, with service dates ranging from September 2013 to July 2015. Frequency of provider visits was significantly lower after resolution of a case on CrowdMed (mean of 1.07 visits per month pre-resolution vs. 0.65 visits per month post-resolution, P=.01). Medical charges were also significantly lower after case resolution (mean of US $719.70 per month pre-resolution vs. US $516.79 per month post-resolution, P=.03). There was no significant relationship between study results and disease onset date, and there was no evidence of regression to the mean influencing results. This study employed technology-enabled methods to demonstrate that patients who used CrowdMed had lower health care utilization after case resolution. However, since the final sample size was limited, results should be interpreted as a case study. Despite this limitation, the statistically significant results suggest that online crowdsourcing shows promise as an efficient method of solving difficult medical cases.

  12. Monitoring of IaaS and scientific applications on the Cloud using the Elasticsearch ecosystem

    NASA Astrophysics Data System (ADS)

    Bagnasco, S.; Berzano, D.; Guarise, A.; Lusso, S.; Masera, M.; Vallero, S.

    2015-05-01

    The private Cloud at the Torino INFN computing centre offers IaaS services to different scientific computing applications. The infrastructure is managed with the OpenNebula cloud controller. The main stakeholders of the facility are a grid Tier-2 site for the ALICE collaboration at the LHC, an interactive analysis facility for the same experiment, and a grid Tier-2 site for the BES-III collaboration, plus an increasing number of other small tenants. Besides keeping track of the usage, automating the dynamic allocation of resources to tenants requires detailed monitoring and accounting of resource usage. As a first investigation towards this, we set up a monitoring system to inspect the site activities both in terms of IaaS and of the applications running on the hosted virtual instances. For this purpose we used the Elasticsearch, Logstash, and Kibana stack. In the current implementation, the heterogeneous accounting information is fed to different MySQL databases and sent to Elasticsearch via a custom Logstash plugin. For the IaaS metering, we developed sensors for the OpenNebula API. The IaaS-level information gathered through the API is sent to the MySQL database through an ad hoc RESTful web service, which is also used for other accounting purposes. Concerning the application level, we used the Root plugin TProofMonSenderSQL to collect accounting data from the interactive analysis facility. The BES-III virtual instances used to be monitored with Zabbix; as a proof of concept, we also retrieve the information contained in the Zabbix database. Each of these three cases is indexed separately in Elasticsearch. We are now starting to consider dismissing the intermediate level provided by the SQL database and are evaluating a NoSQL option as a unique central database for all the monitoring information. We set up a number of Kibana dashboards with predefined queries in order to monitor the relevant information in each case. In this way we have achieved a uniform monitoring interface for both the IaaS and the scientific applications, mostly leveraging off-the-shelf tools.
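
    To illustrate the kind of query such dashboards run, here is a sketch using the official Elasticsearch Python client to aggregate CPU hours per tenant over the last month; the index and field names are invented for the example, since the actual schema is not given.

    ```python
    # Hypothetical aggregation over accounting documents; the index and
    # field names are placeholders, not the site's actual schema.
    from elasticsearch import Elasticsearch

    es = Elasticsearch("http://localhost:9200")
    resp = es.search(index="iaas-accounting", body={
        "size": 0,  # only the aggregation, no raw documents
        "query": {"range": {"@timestamp": {"gte": "now-30d"}}},
        "aggs": {"per_tenant": {"terms": {"field": "tenant"},
                                "aggs": {"cpu": {"sum": {"field": "cpu_hours"}}}}},
    })
    for bucket in resp["aggregations"]["per_tenant"]["buckets"]:
        print(bucket["key"], bucket["cpu"]["value"])
    ```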

  13. Biological Web Service Repositories Review

    PubMed Central

    Urdidiales-Nieto, David; Navas-Delgado, Ismael

    2016-01-01

    Abstract Web services play a key role in bioinformatics, enabling the integration of database access and analysis algorithms. However, Web service repositories do not usually publish information on the changes made to their registered Web services. Dynamism is directly related to changes in the repositories (services registered or unregistered) and at the service level (annotation changes). Thus, users, software clients, or workflow-based approaches lack relevant information for deciding when they should review or re-execute a Web service or workflow to obtain updated or improved results. The dynamism of a repository can serve as a measure for workflow developers to re-check service availability and annotation changes in the services of interest to them. This paper presents a review of the most well-known Web service repositories in the life sciences, including an analysis of their dynamism. Freshness, introduced in this paper, is used as the measure of the dynamism of these repositories. PMID:27783459

  14. A Method for Transforming Existing Web Service Descriptions into an Enhanced Semantic Web Service Framework

    NASA Astrophysics Data System (ADS)

    Du, Xiaofeng; Song, William; Munro, Malcolm

    Web Services, as a new distributed system technology, have been widely adopted by industry in areas such as enterprise application integration (EAI), business process management (BPM), and virtual organisations (VO). However, the lack of semantics in the current Web Service standards has been a major barrier to service discovery and composition. In this chapter, we propose an enhanced context-based semantic service description framework (CbSSDF+) that tackles this problem and improves the flexibility of service discovery and the correctness of generated composite services. We also provide an agile transformation method to demonstrate how the various formats of Web Service descriptions on the Web can be managed and renovated, step by step, into CbSSDF+-based service descriptions without a large amount of engineering work. At the end of the chapter, we evaluate the applicability of the transformation method and the effectiveness of CbSSDF+ through a series of experiments.

  15. Use of Annotations for Component and Framework Interoperability

    NASA Astrophysics Data System (ADS)

    David, O.; Lloyd, W.; Carlson, J.; Leavesley, G. H.; Geter, F.

    2009-12-01

    The popular programming languages Java and C# provide annotations, a form of meta-data construct. Software frameworks for web integration, web services, database access, and unit testing now take advantage of annotations to reduce the complexity of APIs and the quantity of integration code between the application and framework infrastructure. Adopting annotation features in frameworks has been observed to lead to cleaner and leaner application code. The USDA Object Modeling System (OMS) version 3.0 fully embraces the annotation approach and additionally defines a meta-data standard for components and models. In version 3.0, framework/model integration previously accomplished using API calls is achieved using descriptive annotations. This enables the framework to provide additional functionality non-invasively, such as implicit multithreading and auto-documentation, while significantly reducing the size of the model source code. The non-invasive methodology leads to models and modeling components with only minimal dependencies on the modeling framework. Since models and modeling components are not directly bound to the framework by specific APIs and/or data types, they can more easily be reused both within the framework and outside of it. To compare the effectiveness of the annotation-based approach with other modeling frameworks, a framework-invasiveness study was conducted to evaluate the effects of framework design on model code quality. A monthly water balance model was implemented across several modeling frameworks and several software metrics were collected; the selected metrics measure, from a software engineering perspective, how non-invasive a modeling framework's design is. The use of annotations appears to positively impact several software quality measures. As a next step, the PRMS model was implemented in OMS 3.0 and is currently being deployed for water supply forecasting in the western United States at the USDA NRCS National Water and Climate Center. PRMS is a component-based, modular precipitation-runoff model developed to evaluate the impacts of various combinations of precipitation, climate, and land use on streamflow and general basin hydrology. The new OMS 3.0 PRMS model source code is more concise and flexible as a result of the framework's annotation-based approach. The fully annotated components now provide information directly for (i) model assembly and building, (ii) dataflow analysis for implicit multithreading, (iii) automated and comprehensive model documentation of component dependencies and physical data properties, (iv) automated model and component testing, and (v) automated audit-traceability to account for all model resources leading to a particular simulation result. Experience to date has demonstrated the multi-purpose value of using annotations. Annotations are also a feasible and practical method to enable interoperability among models and modeling frameworks. As a prototype example, model code annotations were used to generate binding and mediation code to allow the use of OMS 3.0 model components within the OpenMI context.
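
    OMS 3.0 itself is Java-based; as a language-neutral illustration of the idea (declarative metadata read by the framework via reflection, instead of components calling framework APIs), the following Python sketch uses decorators in place of Java annotations. All names are invented for the example.

      import inspect

      def role(**meta):
          """Attach declarative metadata (direction, units, description) to a member."""
          def wrap(func):
              func._meta = meta
              return func
          return wrap

      class MonthlyWaterBalance:
          @role(direction="in", units="mm", description="monthly precipitation")
          def precip(self): ...

          @role(direction="out", units="mm", description="simulated runoff")
          def runoff(self): ...

      def describe(component):
          """The 'framework': discover annotated members by reflection, non-invasively."""
          for name, member in inspect.getmembers(component, inspect.isfunction):
              meta = getattr(member, "_meta", None)
              if meta:
                  print(name, meta)

      describe(MonthlyWaterBalance)   # lists precip/runoff with their metadata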

  16. OpenFDA: an innovative platform providing access to a wealth of FDA's publicly available data.

    PubMed

    Kass-Hout, Taha A; Xu, Zhiheng; Mohebbi, Matthew; Nelsen, Hans; Baker, Adam; Levine, Jonathan; Johanson, Elaine; Bright, Roselie A

    2016-05-01

    The objective of openFDA is to facilitate access to and use of large, important Food and Drug Administration (FDA) public datasets by developers, researchers, and the public through harmonization of data across disparate FDA datasets provided via application programming interfaces (APIs). Using cutting-edge technologies deployed on FDA's new public cloud computing infrastructure, openFDA provides open data for easier, faster (over 300 requests per second per process), and better access to FDA datasets; open source code and documentation shared on GitHub for open community contributions of examples, apps and ideas; and infrastructure that can be adopted for other public health big data challenges. Since its launch on June 2, 2014, openFDA has developed four APIs for drug and device adverse events, recall information for all FDA-regulated products, and drug labeling. There have been more than 20 million API calls (more than half from outside the United States), 6000 registered users, 20,000 connected Internet Protocol addresses, and dozens of new software (mobile or web) apps developed. A case study demonstrates the use of openFDA data to understand an apparent association of a drug with an adverse event. With easier and faster access to these datasets, consumers worldwide can learn more about FDA-regulated products. © The Author 2015. Published by Oxford University Press on behalf of the American Medical Informatics Association. All rights reserved.
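
    A minimal query against the public openFDA drug adverse event API, following the documented api.fda.gov syntax; the search term is arbitrary and light use needs no API key.

      import json
      import urllib.parse
      import urllib.request

      params = urllib.parse.urlencode({
          "search": 'patient.drug.medicinalproduct:"ASPIRIN"',
          "count": "patient.reaction.reactionmeddrapt.exact",  # tally reported reactions
          "limit": 5,
      })
      url = "https://api.fda.gov/drug/event.json?" + params
      with urllib.request.urlopen(url) as resp:
          data = json.load(resp)

      for row in data["results"]:      # top reactions co-reported with the drug
          print(row["term"], row["count"])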

  17. OpenFDA: an innovative platform providing access to a wealth of FDA’s publicly available data

    PubMed Central

    Kass-Hout, Taha A; Mohebbi, Matthew; Nelsen, Hans; Baker, Adam; Levine, Jonathan; Johanson, Elaine; Bright, Roselie A

    2016-01-01

    Objective The objective of openFDA is to facilitate access to and use of large, important Food and Drug Administration (FDA) public datasets by developers, researchers, and the public through harmonization of data across disparate FDA datasets provided via application programming interfaces (APIs). Materials and Methods Using cutting-edge technologies deployed on FDA's new public cloud computing infrastructure, openFDA provides open data for easier, faster (over 300 requests per second per process), and better access to FDA datasets; open source code and documentation shared on GitHub for open community contributions of examples, apps and ideas; and infrastructure that can be adopted for other public health big data challenges. Results Since its launch on June 2, 2014, openFDA has developed four APIs for drug and device adverse events, recall information for all FDA-regulated products, and drug labeling. There have been more than 20 million API calls (more than half from outside the United States), 6000 registered users, 20,000 connected Internet Protocol addresses, and dozens of new software (mobile or web) apps developed. A case study demonstrates the use of openFDA data to understand an apparent association of a drug with an adverse event. Conclusion With easier and faster access to these datasets, consumers worldwide can learn more about FDA-regulated products. PMID:26644398

  18. Enhancing the AliEn Web Service Authentication

    NASA Astrophysics Data System (ADS)

    Zhu, Jianlin; Saiz, Pablo; Carminati, Federico; Betev, Latchezar; Zhou, Daicui; Mendez Lorenzo, Patricia; Grigoras, Alina Gabriela; Grigoras, Costin; Furano, Fabrizio; Schreiner, Steffen; Vladimirovna Datskova, Olga; Sankar Banerjee, Subho; Zhang, Guoping

    2011-12-01

    Web Services are an XML-based technology that allows applications to communicate with each other across disparate systems, and they are becoming the de facto standard for interoperability between heterogeneous processes and systems. AliEn2 is a grid environment based on web services. The AliEn2 services can be divided into three categories: central services, deployed once per organization; site services, deployed at each of the participating centers; and Job Agents, running automatically on the worker nodes. A security model to protect these services is essential for the whole system. Standard web server implementations, such as Apache, are not directly suitable for use within the grid environment: Apache with mod_ssl and OpenSSL supports only X.509 certificates, whereas the common credential in the grid environment is the proxy certificate, used to provide restricted proxying and delegation. An authentication framework was adopted for the AliEn2 web services to add to the Apache web server the ability to accept both X.509 certificates and proxy certificates from the client side. The authentication framework also allows the generation of access control policies to limit access to the AliEn2 web services.

  19. The impact of web services at the IRIS DMC

    NASA Astrophysics Data System (ADS)

    Weekly, R. T.; Trabant, C. M.; Ahern, T. K.; Stults, M.; Suleiman, Y. Y.; Van Fossen, M.; Weertman, B.

    2015-12-01

    The IRIS Data Management Center (DMC) has served the seismological community for nearly 25 years. In that time we have offered data and information from our archive through a variety of mechanisms, ranging from email-based requests to desktop applications, web applications and web services. Of these, web services have quickly become the primary method for data extraction at the DMC. In 2011, the first full year of operation, web services accounted for over 40% of the data shipped from the DMC. In 2014, approximately 450 TB of data were delivered directly to users through web services, representing nearly 70% of all shipments from the DMC that year. In addition to handling requests directly from users, the DMC switched all of its own data extraction methods to web services in 2014. On average the DMC now handles between 10 and 20 million requests per day submitted to web service interfaces. The rapid adoption of web services is attributed to the many advantages they bring. For users, they provide on-demand data using an interface technology, HTTP, that is widely supported in nearly every computing environment and language. These characteristics, combined with human-readable documentation and existing tools, make integration of data access into existing workflows relatively easy. For the DMC, the web services provide an abstraction layer over internal repositories, allowing for concentrated optimization of the extraction workflow and easier evolution of those repositories. Lending further support to the DMC's push in this direction, the core web services for station metadata, time series data and event parameters were adopted as standards by the International Federation of Digital Seismograph Networks (FDSN). We expect to continue enhancing existing services and building new capabilities on this platform. For example, the DMC has created a federation system and tools allowing researchers to discover and collect seismic data from data centers running the FDSN-standardized services. A future capability will leverage the DMC's MUSTANG project to select data based on data quality measurements. Within five years, the DMC's web services have proven to be a robust and flexible platform that enables continued growth for the DMC. We expect continued enhancements and adoption of web services.
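
    A minimal request against the FDSN-standard station service at the IRIS DMC; the fdsnws-station interface and the parameters below follow the public documentation at service.iris.edu.

      import urllib.parse
      import urllib.request

      params = urllib.parse.urlencode({
          "net": "IU", "sta": "ANMO",   # network and station codes
          "level": "channel",           # metadata detail: network/station/channel/response
          "format": "text",             # pipe-delimited text instead of StationXML
      })
      url = "https://service.iris.edu/fdsnws/station/1/query?" + params
      with urllib.request.urlopen(url) as resp:
          print(resp.read().decode())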

  20. Web service discovery among large service pools utilising semantic similarity and clustering

    NASA Astrophysics Data System (ADS)

    Chen, Fuzan; Li, Minqiang; Wu, Harris; Xie, Lingli

    2017-03-01

    With the rapid development of electronic business, Web services have attracted much attention in recent years. Enterprises can combine individual Web services to provide new value-added services. An emerging challenge is the timely discovery of close matches to service requests among large service pools. In this study, we first define a new semantic similarity measure combining functional similarity and process similarity. We then present a service discovery mechanism that utilises the new semantic similarity measure for service matching. All published Web services are pre-grouped into functional clusters prior to the matching process. For a user's service request, the discovery mechanism first identifies matching service clusters and then identifies the best-matching Web services within those clusters. Experimental results show that the proposed semantic discovery mechanism performs better than a conventional lexical similarity-based mechanism.
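
    The cluster-then-match idea can be sketched as follows. This is an illustration, not the paper's actual measure: services are represented as vectors, similarity is a weighted blend of a "functional" and a "process" component, and a request is matched first against cluster centroids and then against the members of the best cluster.

      from math import sqrt

      def cosine(a, b):
          dot = sum(x * y for x, y in zip(a, b))
          na, nb = sqrt(sum(x * x for x in a)), sqrt(sum(x * x for x in b))
          return dot / (na * nb) if na and nb else 0.0

      def similarity(s, t, w=0.6):
          # weighted blend of functional-profile and process-profile similarity
          return w * cosine(s["func"], t["func"]) + (1 - w) * cosine(s["proc"], t["proc"])

      def centroid(cluster, key):
          dims = len(cluster[0][key])
          return [sum(s[key][i] for s in cluster) / len(cluster) for i in range(dims)]

      def discover(request, clusters):
          # 1) pick the cluster whose centroid best matches the request,
          # 2) rank only the services inside that cluster
          best = max(clusters, key=lambda c: similarity(
              request, {"func": centroid(c, "func"), "proc": centroid(c, "proc")}))
          return sorted(best, key=lambda s: similarity(request, s), reverse=True)

      services = [
          {"name": "payTax",  "func": [1, 0, 1], "proc": [1, 1, 0]},
          {"name": "payBill", "func": [1, 0, 1], "proc": [1, 0, 0]},
          {"name": "geoCode", "func": [0, 1, 0], "proc": [0, 1, 1]},
      ]
      clusters = [services[:2], services[2:]]            # pre-grouped functional clusters
      request = {"func": [1, 0, 1], "proc": [1, 1, 0]}
      print([s["name"] for s in discover(request, clusters)])   # ['payTax', 'payBill']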

  1. A verification strategy for web services composition using enhanced stacked automata model.

    PubMed

    Nagamouttou, Danapaquiame; Egambaram, Ilavarasan; Krishnan, Muthumanickam; Narasingam, Poonkuzhali

    2015-01-01

    Currently, Service-Oriented Architecture (SOA) is becoming the most popular software architecture for contemporary enterprise applications, and one crucial technique of its implementation is web services. An individual service offered by a service provider may deliver limited business functionality; however, by composing individual services from different service providers, a composite service describing the complete business process of an enterprise can be built. Several standards have been defined to address the web service composition problem, notably the Business Process Execution Language (BPEL). BPEL provides an Extensible Markup Language (XML) based specification language for defining and implementing business process workflows for web services. The main problem with most realistic approaches to service composition is the verification of the composed web services, which must rely on formal verification methods to ensure correctness. Only a few studies in the literature have addressed the verification of web services, and those mostly for deterministic systems; moreover, the existing models did not address verification properties such as dead transitions, deadlock, reachability and safety. In this paper, a new model to verify composed web services using an Enhanced Stacked Automata Model (ESAM) is proposed. The correctness properties of the non-deterministic system are evaluated based on properties such as dead transitions, deadlock, safety, liveness and reachability. Web services are first composed using the Business Process Execution Language for Web Services (BPEL4WS); the composition is converted into an ESAM (a combination of Muller Automata (MA) and Pushdown Automata (PDA)) and then transformed into Promela, the input language of the Simple Promela Interpreter (SPIN) tool. The model is verified using the SPIN tool, and the results show better performance in finding dead transitions and deadlocks compared with existing models.
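
    Two of the checked properties are easy to illustrate on a toy transition system; this is not ESAM, it only shows the kind of facts SPIN establishes on the generated Promela model.

      def reachable(init, trans):
          seen, stack = {init}, [init]
          while stack:
              s = stack.pop()
              for src, _label, dst in trans:
                  if src == s and dst not in seen:
                      seen.add(dst)
                      stack.append(dst)
          return seen

      trans = [
          ("start", "invoke", "wait"),
          ("wait", "reply", "done"),     # "done" is the intended final state
          ("wait", "timeout", "stuck"),  # "stuck" has no way out: a deadlock
          ("orphan", "t", "done"),       # "orphan" is never reached
      ]
      live = reachable("start", trans)
      dead_transitions = [t for t in trans if t[0] not in live]
      deadlocks = [s for s in live
                   if s != "done" and not any(src == s for src, _, _ in trans)]
      print("dead transitions:", dead_transitions)   # the "orphan" transition
      print("deadlock states:", deadlocks)           # ['stuck']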

  2. Pragmatic Computing - A Semiotic Perspective to Web Services

    NASA Astrophysics Data System (ADS)

    Liu, Kecheng

    The web seems to have evolved from a syntactic web and a semantic web to a pragmatic web. This evolution conforms to the study of information and technology from the theory of semiotics. Pragmatics, concerned with the use of information in relation to context and intended purposes, is extremely important in web services and applications. Much research in pragmatics has been carried out; but at the same time, attempts and solutions have led to more questions. After reviewing the current work on the pragmatic web, the paper presents a semiotic approach to web services, particularly for request decomposition and service aggregation.

  3. Open-source web-enabled data management, analyses, and visualization of very large data in geosciences using Jupyter, Apache Spark, and community tools

    NASA Astrophysics Data System (ADS)

    Chaudhary, A.

    2017-12-01

    Current simulation models and sensors are producing high-resolution, high-velocity data in the geosciences domain. Knowledge discovery from these complex, large datasets requires tools that can handle very large data and provide interactive data analytics features to researchers. To this end, Kitware and its collaborators are producing the open-source tools GeoNotebook, GeoJS, Gaia, and Minerva for the geosciences, which use hardware-accelerated graphics and advances in parallel and distributed processing (Celery and Apache Spark) and can be loosely coupled to solve real-world use cases. GeoNotebook (https://github.com/OpenGeoscience/geonotebook), co-developed by Kitware and NASA-Ames, is an extension to the Jupyter Notebook. It provides interactive visualization and Python-based analysis of geospatial data and, depending on the backend (KTile or GeoPySpark), can handle data sizes from hundreds of gigabytes to terabytes. GeoNotebook uses GeoJS (https://github.com/OpenGeoscience/geojs) to render very large geospatial data on the map using WebGL and the Canvas2D API. GeoJS is more than just a GIS library: users can create scientific plots such as vector and contour plots and can embed InfoVis plots using D3.js. GeoJS aims for high-performance visualization and interactive data exploration of scientific and geospatial location-aware datasets, and supports features such as Point, Line, and Polygon, plus advanced features such as Pixelmap, Contour, Heatmap, and Choropleth. Another of our open-source tools, Minerva (https://github.com/kitware/minerva), is a geospatial application built on top of the open-source web-based data management system Girder (https://github.com/girder/girder), which provides access to data in HDFS or Amazon S3 buckets and offers capabilities to perform visualization and analyses on geosciences data in a web environment using GDAL and GeoPandas, wrapped in a unified API provided by Gaia (https://github.com/OpenDataAnalytics/gaia). In this presentation, we will discuss the core features of each of these tools and present lessons learned on handling large data in the context of data management, analyses and visualization.

  4. Smart Cities Intelligence System (SMACiSYS) Integrating Sensor Web with Spatial Data Infrastructures (sensdi)

    NASA Astrophysics Data System (ADS)

    Bhattacharya, D.; Painho, M.

    2017-09-01

    The paper endeavours to enhance the Sensor Web with crucial geospatial analysis capabilities through integration with Spatial Data Infrastructure. The objective is the development of an automated smart cities intelligence system (SMACiSYS) with sensor-web access (SENSDI), utilizing geomatics for sustainable societies. There has been a need to develop an automated, integrated system to categorize events and issue information that reaches users directly. At present, no web-enabled information system exists that can disseminate messages after event evaluation in real time. The research work formalizes the notion of an integrated, independent, generalized, and automated geo-event analysing system that makes use of geospatial data on widely used platforms. Integrating Sensor Web With Spatial Data Infrastructures (SENSDI) aims to extend SDIs with sensor web enablement, converging geospatial and built infrastructure, and to implement test cases with sensor data and SDI. The other benefit, conversely, is the expansion of spatial data infrastructure to utilize the sensor web dynamically and in real time for the smart applications that smarter cities demand nowadays. Hence, SENSDI augments existing smart cities platforms with sensor web and spatial information by coupling pairs of otherwise disjoint interfaces and APIs formulated by the Open Geospatial Consortium (OGC), keeping the entire platform open access and open source. SENSDI is based on GeoNode, QGIS and Java, which bind together most of the functionality of the Internet, the sensor web and, increasingly, the Internet of Things, which is superseding the Internet of Sensors. In a nutshell, the project delivers a generalized, real-time accessible and analysable platform for sensing the environment and mapping the captured information for optimal decision-making and societal benefit.

  5. Mercury: An Example of Effective Software Reuse for Metadata Management, Data Discovery and Access

    NASA Astrophysics Data System (ADS)

    Devarakonda, Ranjeet; Palanisamy, Giri; Green, James; Wilson, Bruce E.

    2008-12-01

    Mercury is a federated metadata harvesting, data discovery and access tool based on both open source packages and custom developed software. Though originally developed for NASA, the Mercury development consortium now includes funding from NASA, USGS, and DOE. Mercury supports the reuse of metadata by enabling searching across a range of metadata specifications and standards including XML, Z39.50, FGDC, Dublin-Core, Darwin-Core, EML, and ISO-19115. Mercury provides a single portal to information contained in distributed data management systems. It collects metadata and key data from contributing project servers distributed around the world and builds a centralized index. The Mercury search interfaces then allow users to perform simple, fielded, spatial and temporal searches across these metadata sources. One of the major goals of the recent redesign of Mercury was to improve software reusability across the 12 projects which currently fund its continuing development. These projects span a range of land, atmosphere, and ocean ecological communities and have a number of common needs for metadata searches, but they also have needs specific to one or a few projects. To balance these common and project-specific needs, Mercury's architecture has three major reusable components: a harvester engine, an indexing system and a user interface component. The harvester engine is responsible for harvesting metadata records from various distributed servers around the USA and around the world. The harvester software is packaged so that all Mercury projects use the same harvester scripts, with each project driven by a set of project-specific configuration files. The harvested files are structured metadata records that are indexed consistently through the search library API, so that the system can offer simple, fielded, spatial and temporal search capabilities. This backend component is supported by a flexible, easy-to-use graphical user interface driven by cascading style sheets, which further simplifies reusable design implementation. The new Mercury system is based on a Service Oriented Architecture and effectively reuses components for various services such as the Thesaurus Service, Gazetteer Web Service and UDDI Directory Services. The software also provides various search services including RSS, Geo-RSS, OpenSearch, Web Services and Portlets, an integrated shopping cart to order datasets from various data centers (ORNL DAAC, NSIDC), and integrated visualization tools. Other features include filtering and dynamic sorting of search results, bookmarkable search results, and the ability to save, retrieve, and modify search criteria.
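
    The shared-engine/per-project-configuration split can be sketched as follows; this is schematic, with invented endpoints and field names, not Mercury's actual harvester.

      import urllib.request
      import xml.etree.ElementTree as ET

      PROJECT_CONFIG = {   # one such configuration per funded project
          "name": "example-project",
          "endpoints": ["http://data.example.org/metadata/records.xml"],   # hypothetical
          "record_tag": "record",
          "index_fields": ["title", "abstract", "west", "east", "south", "north"],
      }

      def harvest(config):
          """Run the shared harvester engine against one project's configuration."""
          for url in config["endpoints"]:
              with urllib.request.urlopen(url) as resp:
                  tree = ET.parse(resp)
              for rec in tree.iter(config["record_tag"]):
                  # keep only the fields this project wants in the central index
                  yield {field: rec.findtext(field) for field in config["index_fields"]}

      # for doc in harvest(PROJECT_CONFIG):
      #     indexer.add(doc)   # hand each record to the indexing system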

  6. Mercury: An Example of Effective Software Reuse for Metadata Management, Data Discovery and Access

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Devarakonda, Ranjeet

    2008-01-01

    Mercury is a federated metadata harvesting, data discovery and access tool based on both open source packages and custom developed software. Though originally developed for NASA, the Mercury development consortium now includes funding from NASA, USGS, and DOE. Mercury supports the reuse of metadata by enabling searching across a range of metadata specifications and standards including XML, Z39.50, FGDC, Dublin-Core, Darwin-Core, EML, and ISO-19115. Mercury provides a single portal to information contained in distributed data management systems. It collects metadata and key data from contributing project servers distributed around the world and builds a centralized index. The Mercury search interfaces then allow users to perform simple, fielded, spatial and temporal searches across these metadata sources. One of the major goals of the recent redesign of Mercury was to improve software reusability across the 12 projects which currently fund its continuing development. These projects span a range of land, atmosphere, and ocean ecological communities and have a number of common needs for metadata searches, but they also have needs specific to one or a few projects. To balance these common and project-specific needs, Mercury's architecture has three major reusable components: a harvester engine, an indexing system and a user interface component. The harvester engine is responsible for harvesting metadata records from various distributed servers around the USA and around the world. The harvester software is packaged so that all Mercury projects use the same harvester scripts, with each project driven by a set of project-specific configuration files. The harvested files are structured metadata records that are indexed consistently through the search library API, so that the system can offer simple, fielded, spatial and temporal search capabilities. This backend component is supported by a flexible, easy-to-use graphical user interface driven by cascading style sheets, which further simplifies reusable design implementation. The new Mercury system is based on a Service Oriented Architecture and effectively reuses components for various services such as the Thesaurus Service, Gazetteer Web Service and UDDI Directory Services. The software also provides various search services including RSS, Geo-RSS, OpenSearch, Web Services and Portlets, an integrated shopping cart to order datasets from various data centers (ORNL DAAC, NSIDC), and integrated visualization tools. Other features include filtering and dynamic sorting of search results, bookmarkable search results, and the ability to save, retrieve, and modify search criteria.

  7. QoS measurement of workflow-based web service compositions using Colored Petri net.

    PubMed

    Nematzadeh, Hossein; Motameni, Homayun; Mohamad, Radziah; Nematzadeh, Zahra

    2014-01-01

    Workflow-based web service compositions (WB-WSCs) are one of the main composition categories in service-oriented architecture (SOA). Eflow, the polymorphic process model (PPM), and the business process execution language (BPEL) are the main techniques in the WB-WSC category. With the maturing of web services, measuring the quality of composite web services developed with different techniques has become one of the most important challenges in today's web environments. Providers should aim to deliver composed web services whose quality meets customers' requirements. Thus, quality of service (QoS), which refers to non-functional parameters, must be measured so that the quality of a given web service composition can be established. This paper seeks a deterministic analytical method for dependability and performance measurement using Colored Petri nets (CPNs) with explicit routing constructs and the theory of probability. A computer tool called WSET was also developed for modeling and supporting QoS measurement through simulation.
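
    For context, the standard workflow QoS aggregation rules (a generic illustration, not the paper's CPN construction) look like this: in a sequence, reliabilities multiply and response times add; in a parallel (AND) split, reliabilities multiply and the slowest branch dominates the response time.

      from math import prod

      def sequence(*services):
          return {"reliability": prod(s["reliability"] for s in services),
                  "time": sum(s["time"] for s in services)}

      def parallel(*services):
          return {"reliability": prod(s["reliability"] for s in services),
                  "time": max(s["time"] for s in services)}

      book = {"reliability": 0.99, "time": 120}   # response times in ms
      pay  = {"reliability": 0.98, "time": 200}
      ship = {"reliability": 0.97, "time": 150}

      print(sequence(book, parallel(pay, ship)))
      # {'reliability': 0.941..., 'time': 320}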

  8. Genetics Home Reference: Pallister-Killian mosaic syndrome

    MedlinePlus


  9. Registered File Support for Critical Operations Files at (Space Infrared Telescope Facility) SIRTF

    NASA Technical Reports Server (NTRS)

    Turek, G.; Handley, Tom; Jacobson, J.; Rector, J.

    2001-01-01

    The SIRTF Science Center's (SSC) Science Operations System (SOS) manages nearly one hundred critical operations files through comprehensive file management services. The management is accomplished via the registered file system (otherwise known as TFS), which manages these files in a registered file repository composed of a virtual file system accessible via a TFS server and a file registration database. The TFS server provides controlled, reliable, and secure file transfer and storage by registering all file transactions and meta-data in the file registration database. An API is provided for application programs to communicate with TFS servers and the repository. A command-line client implementing this API has been developed as a client tool. This paper describes the architecture and current implementation and, more importantly, the evolution of these services in response to evolving community use cases and emerging information system technology.

  10. Enhancing UCSF Chimera through web services.

    PubMed

    Huang, Conrad C; Meng, Elaine C; Morris, John H; Pettersen, Eric F; Ferrin, Thomas E

    2014-07-01

    Integrating access to web services with desktop applications allows for an expanded set of application features, including performing computationally intensive tasks and convenient searches of databases. We describe how we have enhanced UCSF Chimera (http://www.rbvi.ucsf.edu/chimera/), a program for the interactive visualization and analysis of molecular structures and related data, through the addition of several web services (http://www.rbvi.ucsf.edu/chimera/docs/webservices.html). By streamlining access to web services, including the entire job submission, monitoring and retrieval process, Chimera makes it simpler for users to focus on their science projects rather than data manipulation. Chimera uses Opal, a toolkit for wrapping scientific applications as web services, to provide scalable and transparent access to several popular software packages. We illustrate Chimera's use of web services with an example workflow that interleaves use of these services with interactive manipulation of molecular sequences and structures, and we provide an example Python program to demonstrate how easily Opal-based web services can be accessed from within an application. Web server availability: http://webservices.rbvi.ucsf.edu/opal2/dashboard?command=serviceList. © The Author(s) 2014. Published by Oxford University Press on behalf of Nucleic Acids Research.
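
    A minimal sketch that fetches the service list from the dashboard URL given above; the response format is whatever the Opal 2 dashboard returns, so it is simply printed.

      import urllib.request

      url = "http://webservices.rbvi.ucsf.edu/opal2/dashboard?command=serviceList"
      with urllib.request.urlopen(url) as resp:
          print(resp.read().decode(errors="replace"))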

  11. Open chemistry: RESTful web APIs, JSON, NWChem and the modern web application.

    PubMed

    Hanwell, Marcus D; de Jong, Wibe A; Harris, Christopher J

    2017-10-30

    An end-to-end platform for chemical science research has been developed that integrates data from computational and experimental approaches through a modern web-based interface. The platform offers an interactive visualization and analytics environment that functions well on mobile, laptop and desktop devices. It offers pragmatic solutions to ensure that large and complex data sets are more accessible. Existing desktop applications/frameworks were extended to integrate with high-performance computing resources and offer command-line tools to automate interaction, connecting distributed teams to this software platform on their own terms. The platform was developed openly, with all source code hosted on the GitHub platform and automated deployment possible using Ansible coupled with standard Ubuntu-based machine images deployed to cloud machines. The platform is designed to enable teams to reap the benefits of the connected web, going beyond what conventional search and analytics platforms offer in this area. It also has the goal of offering federated instances that can be customized to the sites/research performed. Data is stored as JSON, extending upon previous approaches using XML, with structures that support computational chemistry calculations. These structures were developed to make it easy to process data across different languages and to send data to a JavaScript-based web client.
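
    A sketch of what such a JSON document and its upload might look like. The field names loosely follow the Open Chemistry project's Chemical JSON drafts and the endpoint is invented, so treat both the schema and the API as assumptions rather than the platform's documented interface.

      import json
      import urllib.request

      # Field names loosely follow Chemical JSON drafts; the schema and endpoint
      # are assumptions for illustration, not the platform's documented interface.
      water = {
          "name": "water",
          "atoms": {
              "elements": {"number": [8, 1, 1]},                # O, H, H
              "coords": {"3d": [0.0, 0.0, 0.117,
                                0.0, 0.757, -0.469,
                                0.0, -0.757, -0.469]},          # angstroms
          },
          "properties": {"totalEnergy": -76.42},                # hartree (illustrative)
      }

      req = urllib.request.Request(
          "https://chemdata.example.org/api/v1/molecules",      # hypothetical endpoint
          data=json.dumps(water).encode(),
          headers={"Content-Type": "application/json"},
      )
      # urllib.request.urlopen(req)   # uncomment against a real deployment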

  12. Open chemistry: RESTful web APIs, JSON, NWChem and the modern web application

    DOE PAGES

    Hanwell, Marcus D.; de Jong, Wibe A.; Harris, Christopher J.

    2017-10-30

    An end-to-end platform for chemical science research has been developed that integrates data from computational and experimental approaches through a modern web-based interface. The platform offers an interactive visualization and analytics environment that functions well on mobile, laptop and desktop devices. It offers pragmatic solutions to ensure that large and complex data sets are more accessible. Existing desktop applications/frameworks were extended to integrate with high-performance computing resources and offer command-line tools to automate interaction, connecting distributed teams to this software platform on their own terms. The platform was developed openly, with all source code hosted on the GitHub platform and automated deployment possible using Ansible coupled with standard Ubuntu-based machine images deployed to cloud machines. The platform is designed to enable teams to reap the benefits of the connected web, going beyond what conventional search and analytics platforms offer in this area. It also has the goal of offering federated instances that can be customized to the sites/research performed. Data is stored as JSON, extending upon previous approaches using XML, with structures that support computational chemistry calculations. These structures were developed to make it easy to process data across different languages and to send data to a JavaScript-based web client.

  13. Open chemistry: RESTful web APIs, JSON, NWChem and the modern web application

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hanwell, Marcus D.; de Jong, Wibe A.; Harris, Christopher J.

    An end-to-end platform for chemical science research has been developed that integrates data from computational and experimental approaches through a modern web-based interface. The platform offers an interactive visualization and analytics environment that functions well on mobile, laptop and desktop devices. It offers pragmatic solutions to ensure that large and complex data sets are more accessible. Existing desktop applications/frameworks were extended to integrate with high-performance computing resources and offer command-line tools to automate interaction, connecting distributed teams to this software platform on their own terms. The platform was developed openly, with all source code hosted on the GitHub platform and automated deployment possible using Ansible coupled with standard Ubuntu-based machine images deployed to cloud machines. The platform is designed to enable teams to reap the benefits of the connected web, going beyond what conventional search and analytics platforms offer in this area. It also has the goal of offering federated instances that can be customized to the sites/research performed. Data is stored as JSON, extending upon previous approaches using XML, with structures that support computational chemistry calculations. These structures were developed to make it easy to process data across different languages and to send data to a JavaScript-based web client.

  14. Teaching Tectonics to Undergraduates with Web GIS

    NASA Astrophysics Data System (ADS)

    Anastasio, D. J.; Bodzin, A.; Sahagian, D. L.; Rutzmoser, S.

    2013-12-01

    Geospatial reasoning skills provide a means for manipulating, interpreting, and explaining structured information and are involved in higher-order cognitive processes that include problem solving and decision-making. Appropriately designed tools, technologies, and curricula can support spatial learning. We present Web-based visualization and analysis tools developed with JavaScript APIs that enhance tectonics curricula while promoting geospatial thinking and scientific inquiry. The Web GIS interface integrates graphics, multimedia, and animations that allow users to explore and discover geospatial patterns that are not easily recognized. Features include a swipe tool that enables users to see underneath layers, query tools useful for exploring earthquake and volcano data sets, a subduction and elevation profile tool that facilitates visualization between map and cross-sectional views, drafting tools, a location function, and interactive image dragging on the Web GIS. The Web GIS is platform-independent and can be used on tablets or computers. The GIS tool set enables learners to view, manipulate, and analyze rich data sets from local to global scales, including geology, population, heat flow, land cover, seismic hazards, fault zones, continental boundaries, and elevation, using two- and three-dimensional visualization and analytical software. Coverages that allow users to explore plate boundaries and global heat flow processes aided learning in a Structural Geology and Tectonics class in Lehigh University's Earth and environmental science department, and are freely available on the Web.

  15. User Needs of Digital Service Web Portals: A Case Study

    ERIC Educational Resources Information Center

    Heo, Misook; Song, Jung-Sook; Seol, Moon-Won

    2013-01-01

    The authors examined the needs of digital information service web portal users. More specifically, the needs of Korean cultural portal users were examined as a case study. The conceptual framework of a web-based portal is that it is a complex, web-based service application with characteristics of information systems and service agents. In…

  16. Compression-based aggregation model for medical web services.

    PubMed

    Al-Shammary, Dhiah; Khalil, Ibrahim

    2010-01-01

    Many organizations, such as hospitals, have adopted cloud Web services to deliver their network services and avoid investing heavily in computing infrastructure. SOAP (Simple Object Access Protocol), an XML-based protocol, is the basic communication protocol of cloud Web services. Web services often suffer congestion and bottlenecks as a result of the high network traffic caused by large XML overhead, and the massive load on cloud Web services from the large volume of client requests produces the same problem. In this paper, two XML-aware aggregation techniques based on compression concepts are proposed to aggregate medical Web messages and achieve greater message size reduction.
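
    The intuition behind compression-based aggregation can be shown generically (this is not one of the paper's two techniques): batching similar SOAP envelopes before compressing lets the compressor exploit their shared XML structure.

      import zlib

      envelope = ('<soap:Envelope xmlns:soap="http://www.w3.org/2003/05/soap-envelope">'
                  "<soap:Body><getRecord><patientId>{}</patientId></getRecord>"
                  "</soap:Body></soap:Envelope>")
      messages = [envelope.format(i).encode() for i in range(100)]

      separate = sum(len(zlib.compress(m)) for m in messages)   # compress each message alone
      aggregated = len(zlib.compress(b"".join(messages)))       # compress one aggregated batch
      print(f"separate: {separate} B, aggregated: {aggregated} B")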

  17. A framework to monitor activities of satellite data processing in real-time

    NASA Astrophysics Data System (ADS)

    Nguyen, M. D.; Kryukov, A. P.

    2018-01-01

    The Space Monitoring Data Center (SMDC) of SINP MSU is one of several centers in the world that collect data on radiation conditions in near-Earth orbit from various Russian (Lomonosov, Electro-L1, Electro-L2, Meteor-M1, Meteor-M2, etc.) and foreign (GOES 13, GOES 15, ACE, SDO, etc.) satellites. The primary purposes of SMDC are: aggregating heterogeneous data from different sources; providing a unified interface for data retrieval, visualization and analysis, as well as for developing and testing new space weather models; and controlling the correctness and completeness of data. Space weather models rely on data provided by SMDC to produce forecasts, so monitoring the whole data processing cycle is crucial for further success in modeling physical processes in near-Earth orbit based on the collected data. To solve this problem, we have developed a framework called Live Monitor at SMDC. Live Monitor makes it possible to watch all stages and program components involved in each data processing cycle. All activities of each stage are logged by Live Monitor and shown in real time on a web interface. When an error occurs, a notification message is sent to satellite operators via email and the Telegram messenger service so that they can take measures in time. Live Monitor's API can be used to create customized monitoring services with minimal coding.
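
    The Telegram branch of the notification step might look like the following sketch; sendMessage is the standard Telegram Bot API method, while the token, chat id and message text are placeholders.

      import urllib.parse
      import urllib.request

      def notify_operators(text, token="<bot-token>", chat_id="<chat-id>"):
          """Push an error notification to the operators' Telegram chat."""
          url = f"https://api.telegram.org/bot{token}/sendMessage"
          data = urllib.parse.urlencode({"chat_id": chat_id, "text": text}).encode()
          urllib.request.urlopen(url, data=data)

      # notify_operators("Ingest from Meteor-M2 stalled at stage 'decode': no data for 15 min")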

  18. PChopper: high throughput peptide prediction for MRM/SRM transition design.

    PubMed

    Afzal, Vackar; Huang, Jeffrey T-J; Atrih, Abdel; Crowther, Daniel J

    2011-08-15

    The use of selected reaction monitoring (SRM)-based LC-MS/MS analysis for the quantification of phosphorylation stoichiometry has been increasing rapidly. At the same time, the number of sites that can be monitored in a single LC-MS/MS experiment is also increasing. The manual processes associated with running these experiments have highlighted the need for computational assistance to quickly design MRM/SRM candidates. PChopper has been developed to predict peptides that can be produced via enzymatic protein digestion; this includes single-enzyme digests and combinations of enzymes. It also allows digests to be simulated in 'batch' mode and can combine information from these simulated digests to suggest the most appropriate enzyme(s) to use. PChopper also allows users to define the characteristics of their target peptides and can automatically identify phosphorylation sites that may be of interest. Two application endpoints are available for interacting with the system: a web-based graphical tool and an HTTP REST API. A service-oriented architecture was used to rapidly develop a system that can consume and expose several services. The graphical tool provides an easy-to-follow workflow that allows scientists to quickly identify the enzymes required to produce multiple peptides in parallel via enzymatic digests in a high-throughput manner.
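
    A minimal in-silico digest using the textbook trypsin rule (cleave after K or R unless followed by P); PChopper's actual rules, enzyme set and scoring are richer.

      import re

      def trypsin_digest(sequence, max_missed=0):
          # zero-width split after K or R when not followed by P (Python 3.7+)
          peptides = re.split(r"(?<=[KR])(?!P)", sequence)
          out = list(peptides)
          for m in range(1, max_missed + 1):   # add peptides with m missed cleavages
              out += ["".join(peptides[i:i + m + 1]) for i in range(len(peptides) - m)]
          return out

      print(trypsin_digest("MKWVTFISLLFLFSSAYSRGVFRRDAHK", max_missed=1))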

  19. Multi-Point Observations of the Inner Magnetosphere from the Van Allen Probes and Related Missions at NASA's Space Physics Data Facility (SPDF)

    NASA Astrophysics Data System (ADS)

    McGuire, R. E.; Bilitza, D.; Candey, R. M.; Chimiak, R.; Cooper, J. F.; Garcia, L. N.; Harris, B. T.; Johnson, R. C.; Kovalick, T.; Lal, N.; Leckner, H.; Liu, M.; Papitashvili, N. E.; Roberts, D. A.

    2014-12-01

    A wide range of current, public, science-quality particle and field data from the Van Allen Probes and related missions is ingested, archived and served to the international science community by SPDF. As an active heliophysics final archive, SPDF now serves 100+ Level-2 and Level-3 data products that fully span the range of measurements from particles and plasmas (RBSPICE, ECT) through magnetic-electric fields and waves (EMFISIS, EFW). This collection of mission data (in standard CDF format with standard ISTP/SPDF metadata) is available through SPDF's CDAWeb user interface, through CDAWeb's web services and associated APIs for IDL and Matlab users, and through direct FTP/HTTP download access. These data are supplemented with orbit displays through our SSCWeb and 4D Orbit Viewer services and with HDP/VSPO direct links to investigator sites and resources. This range of data in CDAWeb makes comparison of data among instruments and spacecraft much easier, as well as comparison and analysis of these data with current data from other missions including THEMIS, TWINS, Cluster, ACE, Wind and now more than 120 ground magnetometer stations. In addition, SPDF supports data from the BARREL Antarctic balloon program and new data from instruments on the NOAA GOES and POES spacecraft. SPDF will add public data from the MMS mission to this collection when it launches in 2015.

  20. A snapshot of 3649 Web-based services published between 1994 and 2017 shows a decrease in availability after 2 years.

    PubMed

    Osz, Ágnes; Pongor, Lorinc Sándor; Szirmai, Danuta; Gyorffy, Balázs

    2017-12-08

    The long-term availability of online Web services is of utmost importance to ensure the reproducibility of analytical results. However, because of lack of maintenance following acceptance, many servers become unavailable after a short period of time. Our aim was to monitor the accessibility and the decay rate of published Web services, as well as to determine the factors underlying changes in these trends. We searched PubMed to identify publications containing Web server-related terms published between 1994 and 2017. Automatic and manual screening was used to check the status of each Web service. Kruskal-Wallis, Mann-Whitney and Chi-square tests were used to evaluate various parameters, including availability, accessibility, platform, origin of authors, citation, journal impact factor and publication year. We identified 3649 publications in 375 journals, of which 2522 (69%) were currently active. Over 95% of sites were running in the first 2 years, but this rate dropped to 84% in the third year and gradually sank afterwards (P < 1e-16). The mean half-life of Web services is 10.39 years. Working Web services were published in journals with higher impact factors (P = 4.8e-04). Services published before the year 2000 received minimal attention. Offline services were cited less than online ones (P = 0.022). The majority of Web services provide analytical tools, and the proportion of databases is slowly decreasing. In conclusion, almost one-third of the Web services published to date have gone out of service. We recommend continued support of Web-based services to increase the reproducibility of published results. © The Author 2017. Published by Oxford University Press.
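
    A back-of-envelope reading of the reported half-life, assuming simple exponential decay (the paper's own early-years survival is higher than exponential, so this is only a rough long-run approximation):

      HALF_LIFE = 10.39   # years, as reported above

      def surviving_fraction(years):
          return 0.5 ** (years / HALF_LIFE)

      for t in (2, 5, 10.39, 20):
          print(f"after {t:>5} yr: {surviving_fraction(t):.1%} still online")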
