Science.gov

Sample records for achieve semantic interoperability

  1. Semantically Interoperable XML Data.

    PubMed

    Vergara-Niedermayr, Cristobal; Wang, Fusheng; Pan, Tony; Kurc, Tahsin; Saltz, Joel

    2013-09-01

XML is ubiquitously used as an information exchange platform for web-based applications in healthcare, life sciences, and many other domains. Proliferating XML data are now managed through the latest native XML database technologies. XML data sources conforming to common XML schemas can be shared and integrated with syntactic interoperability. Semantic interoperability can be achieved through semantic annotations of data models using common data elements linked to concepts from ontologies. In this paper, we present a framework and software system to support the development of semantically interoperable XML-based data sources that can be shared through a Grid infrastructure. We also present our work on supporting semantically validated XML data through semantic annotations for XML Schema, semantic validation, and semantic authoring of XML data. We demonstrate the use of the system for a biomedical database of medical image annotations and markups. PMID:25298789
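The annotation-plus-validation idea described in this abstract can be sketched in a few lines. Everything here is illustrative, not taken from the paper: the element names, the concept codes, and the in-memory concept registry are all made up for the example.

```python
import xml.etree.ElementTree as ET

# Hypothetical registry of common data elements linked to ontology concept IDs.
CONCEPTS = {
    "C25207": "Height",  # illustrative concept codes, not real terminology entries
    "C25208": "Weight",
}

DOC = """\
<record>
  <height concept="C25207" unit="cm">172</height>
  <weight concept="C25208" unit="kg">64</weight>
</record>
"""

def semantic_validate(xml_text):
    """Beyond schema validation: check that every annotated element
    references a concept known to the shared registry."""
    root = ET.fromstring(xml_text)
    errors = []
    for elem in root.iter():
        cid = elem.get("concept")
        if cid is not None and cid not in CONCEPTS:
            errors.append((elem.tag, cid))
    return errors
```

A document that passes XML Schema validation can still fail this semantic check, which is the distinction between syntactic and semantic interoperability the abstract draws.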

  2. Semantically Interoperable XML Data

    PubMed Central

    Vergara-Niedermayr, Cristobal; Wang, Fusheng; Pan, Tony; Kurc, Tahsin; Saltz, Joel

    2013-01-01

XML is ubiquitously used as an information exchange platform for web-based applications in healthcare, life sciences, and many other domains. Proliferating XML data are now managed through the latest native XML database technologies. XML data sources conforming to common XML schemas can be shared and integrated with syntactic interoperability. Semantic interoperability can be achieved through semantic annotations of data models using common data elements linked to concepts from ontologies. In this paper, we present a framework and software system to support the development of semantically interoperable XML-based data sources that can be shared through a Grid infrastructure. We also present our work on supporting semantically validated XML data through semantic annotations for XML Schema, semantic validation, and semantic authoring of XML data. We demonstrate the use of the system for a biomedical database of medical image annotations and markups. PMID:25298789

  3. Real Time Semantic Interoperability in AD HOC Networks of Geospatial Data Sources: Challenges, Achievements and Perspectives

    NASA Astrophysics Data System (ADS)

    Mostafavi, M. A.; Bakillah, M.

    2012-07-01

Recent advances in geospatial technologies have made large amounts of geospatial data available. Meanwhile, new developments in Internet and communication technologies have created a shift from isolated geospatial databases to ad hoc networks of geospatial data sources, where data sources can join or leave the network, and form groups to share data and services. However, effective integration and sharing of geospatial data among these data sources and their users are hampered by semantic heterogeneities. These heterogeneities affect the spatial, temporal, and thematic aspects of geospatial concepts. There have been many efforts to address semantic interoperability issues in the geospatial domain. These efforts have mainly focused on resolving heterogeneities caused by different and implicit representations of the concepts. However, many approaches have focused on the thematic aspects, leaving aside the explicit representation of spatial and temporal aspects. Also, most semantic interoperability approaches for networks have focused on automating the semantic mapping process. However, the ad hoc network structure is continuously modified by source addition or removal, the formation of groups, etc. This dynamic aspect is often neglected in those approaches. This paper proposes a conceptual framework for real-time semantic interoperability in ad hoc networks of geospatial data sources. The conceptual framework presents the fundamental elements of real-time semantic interoperability through a hierarchy of interrelated semantic states and processes. Then, we use the conceptual framework to frame the discussion of the achievements that have already been made, the challenges that remain to be addressed, and perspectives with respect to these challenges.

  4. Enabling Semantic Interoperability for Earth System Science

    NASA Astrophysics Data System (ADS)

    Raskin, R.

    2004-12-01

Data interoperability across heterogeneous systems can be hampered by differences in terminology, particularly when multiple scientific communities are involved. To reconcile differences in semantics, a common semantic framework was created as a collection of ontologies. Such a shared understanding of concepts enables ontology-aware software tools to understand the meaning of terms in documents and web pages. The ontologies were created as part of the Semantic Web for Earth and Environmental Terminology (SWEET) prototype. The ontologies provide a representation of Earth system science knowledge and associated data, organized in a scalable structure, building on the keywords developed by the NASA Global Change Master Directory (GCMD). An integrated search tool consults the ontologies to enable searches without an exact term match. The ontologies can be used within other applications (such as Earth Science Markup Language descriptors) and future semantic web services in Earth system science.
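The "search without an exact term match" capability amounts to expanding a query term through ontology-declared equivalences before matching. A minimal sketch follows; the tiny term graph and dataset descriptions are invented for illustration and are not the actual SWEET ontology or GCMD keywords.

```python
# Illustrative term graph: each canonical concept lists labels the
# ontology treats as equivalent or closely related.
ONTOLOGY = {
    "sea surface temperature": {"sst", "ocean temperature"},
    "precipitation": {"rainfall", "rain", "snowfall"},
}

# Hypothetical dataset catalog entries: (name, free-text description).
DATASETS = [
    ("GHRSST L4", "global SST analysis"),
    ("TRMM 3B42", "tropical rainfall estimates"),
]

def expand(term):
    """Map a query term to every label the ontology considers equivalent."""
    term = term.lower()
    for canonical, synonyms in ONTOLOGY.items():
        if term == canonical or term in synonyms:
            return {canonical} | synonyms
    return {term}

def search(term):
    """Ontology-aware search: a match on any equivalent label counts."""
    labels = expand(term)
    return [name for name, desc in DATASETS
            if any(label in desc.lower() for label in labels)]
```

A plain substring search for "precipitation" would miss a dataset described as "rainfall estimates"; the ontology-expanded search finds it.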

  5. ARGOS Policy Brief on Semantic Interoperability

    PubMed Central

    KALRA, Dipak; MUSEN, Mark; SMITH, Barry; CEUSTERS, Werner

    2016-01-01

    Semantic interoperability is one of the priority themes of the ARGOS Trans-Atlantic Observatory. This topic represents a globally recognised challenge that must be addressed if electronic health records are to be shared among heterogeneous systems, and the information in them exploited to the maximum benefit of patients, professionals, health services, research, and industry. Progress in this multi-faceted challenge has been piecemeal, and valuable lessons have been learned, and approaches discovered, in Europe and in the US that can be shared and combined. Experts from both continents have met at three ARGOS workshops during 2010 and 2011 to share understanding of these issues and how they might be tackled collectively from both sides of the Atlantic. This policy brief summarises the problems and the reasons why they are important to tackle, and also why they are so difficult. It outlines the major areas of semantic innovation that exist and that are available to help address this challenge. It proposes a series of next steps that need to be championed on both sides of the Atlantic if further progress is to be made in sharing and analysing electronic health records meaningfully. Semantic interoperability requires the use of standards, not only for EHR data to be transferred and structurally mapped into a receiving repository, but also for the clinical content of the EHR to be interpreted in conformity with the original meanings intended by its authors. Wide-scale engagement with professional bodies, globally, is needed to develop these clinical information standards. Accurate and complete clinical documentation, faithful to the patient’s situation, and interoperability between systems, require widespread and dependable access to published and maintained collections of coherent and quality-assured semantic resources, including models such as archetypes and templates that would (1) provide clinical context, (2) be mapped to interoperability standards for EHR data

  6. Providing semantic interoperability between clinical care and clinical research domains.

    PubMed

    Laleci, Gokce Banu; Yuksel, Mustafa; Dogac, Asuman

    2013-03-01

Improving the efficiency with which clinical research studies are conducted can lead to faster medication innovation and decreased time to market for new drugs. To increase this efficiency, the parties involved in a regulated clinical research study, namely, the sponsor, the clinical investigator and the regulatory body, each with their own software applications, need to exchange data seamlessly. However, currently, the clinical research and the clinical care domains are quite disconnected because each uses different standards and terminology systems. In this article, we describe an initial implementation of the Semantic Framework developed within the scope of the SALUS project to achieve interoperability between the clinical research and the clinical care domains. In our Semantic Framework, the core ontology developed for semantic mediation is based on the shared conceptual model of both of these domains provided by the BRIDG initiative. The core ontology is then aligned with the extracted semantic models of the existing clinical care and research standards as well as with the ontological representations of the terminology systems to create a model of meaning for enabling semantic mediation. Although SALUS is a research and development effort rather than a product, the current SALUS knowledge base contains around 4.7 million triples representing the BRIDG DAM, the HL7 CDA model, CDISC standards and several terminology ontologies. In order to keep the reasoning process within acceptable limits without sacrificing the quality of mediation, we took an engineering approach by developing a number of heuristic mechanisms. The results indicate that it is possible to build a robust and scalable semantic framework with a solid theoretical foundation for achieving interoperability between the clinical research and clinical care domains. PMID:23008263
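The mediation pattern described here, aligning both domains' models with one core ontology so records can be translated via that pivot rather than with pairwise mappings, can be sketched minimally. The field names and core concepts below are made up; they stand in for, not reproduce, the SALUS/BRIDG alignment.

```python
# Illustrative alignments: each domain model maps its fields to (or from)
# a shared core concept, so N models need N alignments rather than N^2.
CARE_TO_CORE = {"dx_code": "Condition", "rx_code": "Medication"}
RESEARCH_FROM_CORE = {"Condition": "MedicalCondition", "Medication": "ConcomitantMed"}

def mediate(care_record):
    """Translate a clinical-care record into research-domain field names
    by pivoting through the core concepts."""
    out = {}
    for field, value in care_record.items():
        core = CARE_TO_CORE.get(field)
        if core and core in RESEARCH_FROM_CORE:
            out[RESEARCH_FROM_CORE[core]] = value
    return out
```

Fields with no core alignment are simply dropped here; a real mediator would flag them, but the pivot structure is the point.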

  7. Achieving interoperability for accountable care.

    PubMed

    Bordenick, Jennifer Covich; Okubo, Tracy H; Kontur, Alex; Siddiqui, Nadeen

    2015-02-01

    Based on findings of a recent survey, accountable care organizations should keep eight points in mind as they seek to establish interoperability among their provider constituents: Create a shared governance structure to make IT decisions. Conduct a readiness assessment and gap analysis. Reconfigure the technology infrastructure and processes to support new value-based care delivery protocols. Consider targeting programs around high-risk groups. Develop real-time data-sharing systems. Ensure privacy and security policies and procedures are in place. Assess and address workforce issues expeditiously. Participate in broader interoperability efforts. PMID:26665540

  8. Semantic Interoperability in Clinical Decision Support Systems: A Systematic Review.

    PubMed

    Marco-Ruiz, Luis; Bellika, Johan Gustav

    2015-01-01

    The interoperability of Clinical Decision Support (CDS) systems with other health information systems has become one of the main limitations to their broad adoption. Semantic interoperability must be granted in order to share CDS modules across different health information systems. Currently, numerous standards for different purposes are available to enable the interoperability of CDS systems. We performed a literature review to identify and provide an overview of the available standards that enable CDS interoperability in the areas of clinical information, decision logic, terminology, and web service interfaces. PMID:26262260

  9. Achieving Interoperability through Data Virtualization

    NASA Astrophysics Data System (ADS)

    Xing, Z.

    2015-12-01

Data interoperability is a challenging problem, and different approaches exist. In this presentation, we would like to share our experience on webification science (w10n-sci), an information technology that virtualizes arbitrary data resources and makes them directly usable via a simple and uniform application programming interface. W10n-sci has been successfully applied to all major NASA scientific disciplines and is used by an increasing number of missions and projects. We will provide an overview of w10n-sci and elaborate on how it can help data users in a data world in which diversity always prevails.
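The core idea of data virtualization is that heterogeneous stores are exposed through one uniform, path-addressable interface. W10n-sci itself addresses resources by URL; the stand-in below resolves slash-separated paths over nested Python objects, purely to illustrate the uniform-access principle, and is not the w10n API.

```python
def fetch(store, path):
    """Resolve '/a/b/0'-style paths uniformly against nested dicts and lists,
    so the caller does not care how the underlying resource is structured."""
    node = store
    for part in path.strip("/").split("/"):
        if part == "":
            continue  # path was "/" or had redundant slashes
        if isinstance(node, list):
            node = node[int(part)]  # numeric segment indexes into a sequence
        else:
            node = node[part]       # named segment keys into a mapping
    return node
```

The same `fetch` call works whether the backing data is a mapping, an array, or a mixture, which is the interoperability benefit the abstract claims for a uniform API.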

  10. Cross border semantic interoperability for clinical research: the EHR4CR semantic resources and services.

    PubMed

Daniel, Christel; Ouagne, David; Sadou, Eric; Forsberg, Kerstin; Gilchrist, Mark Mc; Zapletal, Eric; Paris, Nicolas; Hussain, Sajjad; Jaulent, Marie-Christine; Kalra, Dipak

    2016-01-01

With the development of platforms enabling the use of routinely collected clinical data in the context of international clinical research, scalable solutions for cross-border semantic interoperability need to be developed. Within the context of the IMI EHR4CR project, we first defined the requirements and evaluation criteria of the EHR4CR semantic interoperability platform and then developed the semantic resources and supportive services and tooling to assist hospital sites in standardizing their data to allow the execution of the project use cases. The experience gained from the evaluation of the EHR4CR platform, accessing semantically equivalent data elements across 11 participating European EHR systems from 5 countries, demonstrated how far the mediation model and mapping efforts met the expected requirements of the project. Developers of semantic interoperability platforms are beginning to address a core set of requirements in order to reach the goal of developing cross-border semantic integration of data. PMID:27570649

  11. Cross border semantic interoperability for clinical research: the EHR4CR semantic resources and services

    PubMed Central

Daniel, Christel; Ouagne, David; Sadou, Eric; Forsberg, Kerstin; Gilchrist, Mark Mc; Zapletal, Eric; Paris, Nicolas; Hussain, Sajjad; Jaulent, Marie-Christine; Kalra, Dipak

    2016-01-01

With the development of platforms enabling the use of routinely collected clinical data in the context of international clinical research, scalable solutions for cross-border semantic interoperability need to be developed. Within the context of the IMI EHR4CR project, we first defined the requirements and evaluation criteria of the EHR4CR semantic interoperability platform and then developed the semantic resources and supportive services and tooling to assist hospital sites in standardizing their data to allow the execution of the project use cases. The experience gained from the evaluation of the EHR4CR platform, accessing semantically equivalent data elements across 11 participating European EHR systems from 5 countries, demonstrated how far the mediation model and mapping efforts met the expected requirements of the project. Developers of semantic interoperability platforms are beginning to address a core set of requirements in order to reach the goal of developing cross-border semantic integration of data. PMID:27570649

  12. Semantics-Based Interoperability Framework for the Geosciences

    NASA Astrophysics Data System (ADS)

    Sinha, A.; Malik, Z.; Raskin, R.; Barnes, C.; Fox, P.; McGuinness, D.; Lin, K.

    2008-12-01

Interoperability between heterogeneous data, tools, and services is required to transform data to knowledge. To meet geoscience-oriented societal challenges such as the forcing of climate change induced by volcanic eruptions, we suggest the need to develop semantic interoperability for data, services, and processes. Because such scientific endeavors require the integration of multiple databases associated with global enterprises, implicit semantics-based integration is impossible. Instead, explicit semantics are needed to facilitate interoperability and integration. Although different types of integration models are available (syntactic or semantic), we suggest that semantic interoperability is likely to be the most successful pathway. Clearly, the geoscience community would benefit from utilization of existing XML-based data models, such as GeoSciML and WaterML, to rapidly advance semantic interoperability and integration. We recognize that such integration will require a "meanings-based search, reasoning and information brokering", which will be facilitated through inter-ontology relationships (ontologies defined for each discipline). We suggest that markup languages (MLs) and ontologies can be seen as "data integration facilitators", working at different abstraction levels. Therefore, we propose to use an ontology-based data registration and discovery approach to complement markup languages through semantic data enrichment. Ontologies allow the use of formal and descriptive logic statements, which permits expressive query capabilities for data integration through reasoning. We have developed domain ontologies (EPONT) to capture the concepts behind data. EPONT ontologies are associated with existing ontologies such as SUMO, DOLCE, and SWEET. Although significant efforts have gone into developing data (object) ontologies, we advance the idea of developing semantic frameworks for additional ontologies that deal with processes and services. This evolutionary step will
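The "expressive query capabilities through reasoning" that the abstract attributes to ontologies can be illustrated with subsumption over an is-a hierarchy: a query for a broad concept also retrieves data registered under any of its subconcepts. The concept names below are invented and are not drawn from EPONT, SUMO, DOLCE, or SWEET.

```python
# Illustrative is-a hierarchy: each concept points to its direct superclass.
SUBCLASS_OF = {
    "BasaltFlow": "VolcanicRock",
    "VolcanicRock": "IgneousRock",
    "IgneousRock": "Rock",
}

def ancestors(concept):
    """All superclasses reachable by following the is-a chain upward."""
    seen = []
    while concept in SUBCLASS_OF:
        concept = SUBCLASS_OF[concept]
        seen.append(concept)
    return seen

def matches(registered_concept, query_concept):
    """A dataset registered under a specific concept satisfies a query
    for that concept or any of its superclasses (subsumption reasoning)."""
    return (query_concept == registered_concept
            or query_concept in ancestors(registered_concept))
```

A keyword search for "Rock" would miss data tagged "BasaltFlow"; the reasoner finds it because the relationship is declared, not spelled out in each record.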

  13. Open PHACTS: semantic interoperability for drug discovery.

    PubMed

    Williams, Antony J; Harland, Lee; Groth, Paul; Pettifer, Stephen; Chichester, Christine; Willighagen, Egon L; Evelo, Chris T; Blomberg, Niklas; Ecker, Gerhard; Goble, Carole; Mons, Barend

    2012-11-01

Open PHACTS is a public-private partnership between academia, publishers, small and medium sized enterprises and pharmaceutical companies. The goal of the project is to deliver and sustain an 'open pharmacological space' (OPS) using and enhancing state-of-the-art semantic web standards and technologies. It is focused on practical and robust applications to solve specific questions in drug discovery research. OPS is intended to facilitate improvements in drug discovery in academia and industry and to support open innovation and in-house non-public drug discovery research. This paper lays out the challenges and how the Open PHACTS project is hoping to address these challenges technically and socially. PMID:22683805

  14. An adaptive semantic based mediation system for data interoperability among Health Information Systems.

    PubMed

    Khan, Wajahat Ali; Khattak, Asad Masood; Hussain, Maqbool; Amin, Muhammad Bilal; Afzal, Muhammad; Nugent, Christopher; Lee, Sungyoung

    2014-08-01

Heterogeneity in the management of complex medical data obstructs the attainment of data-level interoperability among Health Information Systems (HIS). This diversity is dependent on the compliance of HISs with different healthcare standards. Its solution demands a mediation system for the accurate interpretation of data in different heterogeneous formats to achieve data interoperability. We propose an adaptive mediation system, the AdapteR Interoperability ENgine (ARIEN), that arbitrates between HISs compliant with different healthcare standards to provide accurate and seamless information exchange and achieve data interoperability. ARIEN stores the semantic mapping information between different standards in the Mediation Bridge Ontology (MBO) using ontology matching techniques. These mappings are provided by our System for Parallel Heterogeneity (SPHeRe) matching system and the Personalized-Detailed Clinical Model (P-DCM) approach to guarantee the accuracy of the mappings. The effectiveness of the mappings stored in the MBO is assessed by evaluating the accuracy of the transformation process among different standard formats. We evaluated our proposed system on the transformation of medical records between the Clinical Document Architecture (CDA) and Virtual Medical Record (vMR) standards. The transformation process achieved over 90% accuracy in conversion between the CDA and vMR standards using a pattern-oriented approach based on the MBO. The proposed mediation system improves the overall communication process between HISs. It provides accurate and seamless medical information exchange to ensure data interoperability and timely healthcare services to patients. PMID:24964780

  15. Semantic Integration for Marine Science Interoperability Using Web Technologies

    NASA Astrophysics Data System (ADS)

    Rueda, C.; Bermudez, L.; Graybeal, J.; Isenor, A. W.

    2008-12-01

The Marine Metadata Interoperability Project, MMI (http://marinemetadata.org), promotes the exchange, integration, and use of marine data through enhanced data publishing, discovery, documentation, and accessibility. A key effort is the definition of an Architectural Framework and Operational Concept for Semantic Interoperability (http://marinemetadata.org/sfc), which is complemented by the development of tools that realize critical use cases in semantic interoperability. In this presentation, we describe a set of such Semantic Web tools that support important interoperability tasks, ranging from the creation of controlled vocabularies and the mapping of terms across multiple ontologies, to the online registration, storage, and search services needed to work with the ontologies (http://mmisw.org). This set of services uses Web standards and technologies, including the Resource Description Framework (RDF), the Web Ontology Language (OWL), Web services, and toolkits for Rich Internet Application development. We will describe the following components: MMI Ontology Registry: The MMI Ontology Registry and Repository provides registry and storage services for ontologies. Entries in the registry are associated with projects defined by the registered users. Sophisticated search functions, for example by metadata items and vocabulary terms, are also provided. Client applications can submit search requests using the W3C SPARQL Query Language for RDF. Voc2RDF: This component converts an ASCII comma-delimited set of terms and definitions into an RDF file. Voc2RDF facilitates the creation of controlled vocabularies by using a simple form-based user interface. Created vocabularies and their descriptive metadata can be submitted to the MMI Ontology Registry for versioning and community access. VINE: The Vocabulary Integration Environment component allows the user to map vocabulary terms across multiple ontologies. Various relationships can be established, for example
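The Voc2RDF step, turning a comma-delimited term/definition list into RDF, is simple enough to sketch with the standard library. The namespace URI is made up for the example, and the output is N-Triples rather than whatever serialization the real tool emits.

```python
import csv
import io

# Hypothetical vocabulary namespace; the real tool would use a
# project-specific URI registered with the MMI Ontology Registry.
NS = "http://example.org/vocab#"

def voc2ntriples(csv_text):
    """Convert 'term,definition' CSV rows into RDF triples (N-Triples),
    giving each term a URI, an rdfs:label, and a skos:definition."""
    triples = []
    for row in csv.DictReader(io.StringIO(csv_text)):
        label = row["term"].strip()
        definition = row["definition"].strip()
        uri = NS + label.replace(" ", "_")
        triples.append(
            f'<{uri}> <http://www.w3.org/2000/01/rdf-schema#label> "{label}" .')
        triples.append(
            f'<{uri}> <http://www.w3.org/2004/02/skos/core#definition> "{definition}" .')
    return "\n".join(triples)
```

Once terms are URIs rather than bare strings, they can be mapped across ontologies (the VINE use case) and queried with SPARQL.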

  16. An approach to define semantics for BPM systems interoperability

    NASA Astrophysics Data System (ADS)

    Rico, Mariela; Caliusco, María Laura; Chiotti, Omar; Rosa Galli, María

    2015-04-01

This article proposes defining semantics for Business Process Management systems interoperability through the ontology of Electronic Business Documents (EBD) used to interchange the information required to perform cross-organizational processes. The semantic model generated allows aligning an enterprise's business processes to support cross-organizational processes by matching the business ontology of each business partner with the EBD ontology. The result is a flexible software architecture that allows dynamically defining cross-organizational business processes by reusing the EBD ontology. For developing the semantic model, a method is presented that is based on a strategy for discovering entity features whose interpretation depends on the context, and representing them to enrich the ontology. The proposed method complements ontology learning techniques, which cannot infer semantic features not represented in data sources. In order to improve the representation of these entity features, the method proposes using widely accepted ontologies for representing time entities and relations, physical quantities, measurement units, official country names, and currencies and funds, among others. When ontology reuse is not possible, the method identifies whether the feature is simple or complex and defines a strategy to be followed. An empirical validation of the approach has been performed through a case study.

  17. A federated semantic metadata registry framework for enabling interoperability across clinical research and care domains.

    PubMed

    Sinaci, A Anil; Laleci Erturkmen, Gokce B

    2013-10-01

In order to enable secondary use of Electronic Health Records (EHRs) by bridging the interoperability gap between the clinical care and research domains, in this paper a unified methodology and the supporting framework are introduced, which bring together the power of metadata registries (MDR) and semantic web technologies. We introduce a federated semantic metadata registry framework by extending the ISO/IEC 11179 standard, and enable the integration of data element registries through Linked Open Data (LOD) principles, whereby each Common Data Element (CDE) can be uniquely referenced, queried, and processed to enable syntactic and semantic interoperability. Each CDE and its components are maintained as LOD resources enabling semantic links with other CDEs, terminology systems, and implementation-dependent content models, hence facilitating semantic search, more effective reuse, and semantic interoperability across different application domains. There are several important efforts addressing semantic interoperability in the healthcare domain, such as the IHE DEX profile proposal, CDISC SHARE, and CDISC2RDF. Our architecture complements these by providing a framework to interlink existing data element registries and repositories, multiplying their potential for semantic interoperability to a greater extent. The open source implementation of the federated semantic MDR framework presented in this paper is the core of the semantic interoperability layer of the SALUS project, which enables the execution of post-marketing safety analysis studies on top of existing EHR systems. PMID:23751263
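The federation idea, each CDE as a uniquely referenceable LOD resource with typed links to terminology codes and to equivalent CDEs in other registries, can be sketched as follows. All URIs, field names, and the LOINC code are illustrative, not taken from the paper or any real registry.

```python
# Toy registry in the spirit of ISO/IEC 11179 + Linked Open Data: each
# common data element (CDE) has a URI, a definition, terminology bindings,
# and sameAs links to equivalent CDEs in other (federated) registries.
REGISTRY = {
    "http://mdr.example.org/cde/systolic-bp": {
        "definition": "Systolic blood pressure measurement",
        "terminology": {"LOINC": "8480-6"},  # illustrative binding
        "sameAs": ["http://other-mdr.example.org/cde/sbp"],
    },
}

def resolve(uri):
    """Answer a lookup for a CDE URI, following sameAs links so that
    federated registries behave as one logical registry."""
    if uri in REGISTRY:
        return REGISTRY[uri]
    for entry in REGISTRY.values():
        if uri in entry.get("sameAs", []):
            return entry
    return None
```

A client holding only the foreign URI still reaches the definition and terminology binding, which is the point of interlinking registries rather than duplicating them.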

  18. Using ontologies to improve semantic interoperability in health data.

    PubMed

    Liyanage, Harshana; Krause, Paul; De Lusignan, Simon

    2015-01-01

The present-day health data ecosystem comprises a wide array of complex heterogeneous data sources. A wide range of clinical, health care, social, and other clinically relevant information is stored in these data sources. These data exist either as structured data or as free text. These data are generally individual person-based records, but social care data are generally case-based, and less formal data sources may be shared by groups. The structured data may be organised in a proprietary way or be coded using one of many coding, classification or terminology systems that have often evolved in isolation and were designed to meet the needs of the context in which they were developed. This has resulted in a wide range of semantic interoperability issues that make the integration of data held on these different systems challenging. We present semantic interoperability challenges and describe a classification of these. We propose a four-step process and a toolkit for those wishing to work more ontologically, progressing from the identification and specification of concepts to validating a final ontology. The four steps are: (1) the identification and specification of data sources; (2) the conceptualisation of semantic meaning; (3) defining to what extent routine data can be used as a measure of the process or outcome of care required in a particular study or audit; and (4) the formalisation and validation of the final ontology. The toolkit is an extension of a previous schema created to formalise the development of ontologies related to chronic disease management. The extensions are focused on facilitating the rapid building of ontologies for time-critical research studies. PMID:26245245

  19. Semantic Document Model to Enhance Data and Knowledge Interoperability

    NASA Astrophysics Data System (ADS)

    Nešić, Saša

To enable document data and knowledge to be efficiently shared and reused across application, enterprise, and community boundaries, desktop documents should be completely open and queryable resources, whose data and knowledge are represented in a form understandable to both humans and machines. At the same time, these are the requirements that desktop documents need to satisfy in order to contribute to the visions of the Semantic Web. With the aim of achieving this goal, we have developed the Semantic Document Model (SDM), which turns desktop documents into Semantic Documents: uniquely identified and semantically annotated composite resources that can be instantiated into human-readable (HR) and machine-processable (MP) forms. In this paper, we present the SDM along with an RDF and ontology-based solution for the MP document instance. Moreover, on top of the proposed model, we have built the Semantic Document Management System (SDMS), which provides a set of services that exploit the model. As an application example that takes advantage of SDMS services, we have extended MS Office with a set of tools that enables users to transform MS Office documents (e.g., MS Word and MS PowerPoint) into Semantic Documents, and to search local and remote semantic document repositories for document content units (CUs) over Semantic Web protocols.

  20. CityGML - Interoperable semantic 3D city models

    NASA Astrophysics Data System (ADS)

    Gröger, Gerhard; Plümer, Lutz

    2012-07-01

CityGML is the international standard of the Open Geospatial Consortium (OGC) for the representation and exchange of 3D city models. It defines the three-dimensional geometry, topology, semantics, and appearance of the most relevant topographic objects in urban or regional contexts. These definitions are provided at different, well-defined Levels-of-Detail (multiresolution model). The focus of CityGML is on the semantic aspects of 3D city models, their structures, taxonomies, and aggregations, allowing users to employ virtual 3D city models for advanced analysis and visualization tasks in a variety of application domains such as urban planning, indoor/outdoor pedestrian navigation, environmental simulations, cultural heritage, or facility management. This is in contrast to purely geometrical/graphical models such as KML, VRML, or X3D, which do not provide sufficient semantics. CityGML is based on the Geography Markup Language (GML), which provides a standardized geometry model. Due to this model and its well-defined semantics and structures, CityGML facilitates interoperable data exchange in the context of geo web services and spatial data infrastructures. Since its standardization in 2008, CityGML has come into use worldwide: tools from notable companies in the geospatial field provide CityGML interfaces, and many applications and projects use the standard. CityGML also has a strong impact on science: numerous approaches use CityGML, particularly its semantics, for disaster management, emergency response, or energy-related applications as well as for visualizations; contribute to CityGML by improving its consistency and validity; or use CityGML, particularly its different Levels-of-Detail, as a source or target for generalizations. This paper gives an overview of CityGML, its underlying concepts, its Levels-of-Detail, how to extend it, its applications, its likely future development, and the role it plays in scientific research. Furthermore, its
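The contrast the abstract draws between purely graphical formats and semantically rich ones comes down to whether a building carries thematic attributes alongside its geometry. The heavily simplified sketch below builds a building element with a function and a measured height; the element names are abbreviated and the result is not schema-valid CityGML.

```python
import xml.etree.ElementTree as ET

def make_building(building_id, function, height_m):
    """Build a CityGML-flavoured (but simplified, non-conformant) building:
    thematic semantics (function, measured height) sit next to the geometry."""
    bldg = ET.Element("Building", id=building_id)
    ET.SubElement(bldg, "function").text = function
    ET.SubElement(bldg, "measuredHeight", uom="m").text = str(height_m)
    # In real CityGML the geometry is a GML solid; here just a placeholder.
    ET.SubElement(bldg, "lod1Solid")
    return bldg

doc = make_building("B42", "residential", 12.5)
xml_text = ET.tostring(doc, encoding="unicode")
```

A KML or X3D model of the same building would carry only the placeholder geometry; the `function` and `measuredHeight` elements are what make analyses like energy simulation possible.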

  21. Semantic interoperability between clinical and public health information systems for improving public health services.

    PubMed

    Lopez, Diego M; Blobel, Bernd G M E

    2007-01-01

Improving public health services requires comprehensively integrating all services, including medical, social, community, and public health ones. Therefore, developing integrated health information services has to start by considering the business processes, rules, and information semantics of the involved domains. The paper proposes a business and information architecture for the specification of a future-proof national integrated system, concretely the requirements for semantic integration between public health surveillance and clinical information systems. The architecture is a semantically interoperable approach because it describes business processes, rules, and information semantics based on national policy documents and expressed in a standard language such as the Unified Modeling Language (UML). With the enterprise and information models formalized, the development of semantically interoperable Health IT components/services is supported. PMID:17901617

  2. A Semantic Cooperation and Interoperability Platform for the European Chambers of Commerce

    NASA Astrophysics Data System (ADS)

    Missikoff, Michele; Taglino, Francesco

    The LD-CAST project aims at developing a semantic cooperation and interoperability platform for the European Chambers of Commerce. Some of the key issues that this platform addresses are:

    - The variety and number of different kinds of resources (i.e., business processes, concrete services) that concur to achieve a business service
    - The diversity of cultural and procedural models emerging when composing articulated cross-country services
    - The limited possibility of reusing similar services in different contexts (for instance, supporting the same service between different countries: an Italian-Romanian cooperation is different from an Italian-Polish one)

    The objective of the LD-CAST platform, and in particular of the semantic services provided therein, is to address the above problems with flexible solutions. We aim at introducing high levels of flexibility, both at the time of development of business processes and concrete services (i.e., operational services offered by service providers), with the possibility of dynamically binding c-services to the selected BP, according to user needs. To this end, an approach based on semantic services and a reference ontology has been proposed.

  3. Reporting Device Observations for semantic interoperability of surgical devices and clinical information systems.

    PubMed

    Andersen, Björn; Ulrich, Hannes; Rehmann, Daniel; Kock, Ann-Kristin; Wrage, Jan-Hinrich; Ingenerf, Josef

    2015-08-01

    Service-oriented medical device architectures make the progress from interdisciplinary research projects to international standardisation: A new set of IEEE 11073 proposals shall pave the way to industry acceptance. This expected availability of device observations in a standardised representation enables secondary usage if interoperability with clinical information systems can be achieved. The Device Observation Reporter (DOR) described in this work is a gateway that connects these realms. After a user chooses a selection of signals from different devices in the digital operating room, the DOR records these semantically described values for a specified duration. Upon completion, the signals descriptions and values are transformed to Health Level Seven version 2 messages and sent to a hospital information system/electronic health record system within the clinical IT network. The successful integration of device data for documentation and usage in clinical information systems can further leverage the novel device communication standard proposals. Complementing these, an Integrating the Healthcare Enterprise profile will aid commercial implementers in achieving interoperability. Their solutions could incorporate clinical knowledge to autonomously select signal combinations and generate reports of diagnostic and interventional procedures, thus saving time and effort for surgical documentation. PMID:26736610
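    The DOR's transformation step, mapping semantically described device signals to HL7 version 2 segments, can be sketched as follows. This is a minimal illustration, assuming invented MDC-style observation codes and placeholder sending/receiving system names; the abstract does not specify the actual message structure the DOR emits:

```python
from datetime import datetime

def build_oru_message(device_id, observations, timestamp=None):
    """Assemble a minimal HL7 v2 ORU^R01 observation report.

    `observations` is a list of (code, description, value, unit) tuples;
    all identifiers below are illustrative placeholders, not the actual
    DOR mappings.
    """
    ts = (timestamp or datetime(2015, 8, 1, 12, 0)).strftime("%Y%m%d%H%M%S")
    segments = [
        # MSH: message header (field separator, encoding chars, type, version)
        f"MSH|^~\\&|DOR|{device_id}|HIS|HOSPITAL|{ts}||ORU^R01|MSG0001|P|2.6",
        # PID: patient identification (placeholder patient)
        "PID|1||PAT12345||Doe^John",
        # OBR: observation request grouping the device signals
        f"OBR|1|||{device_id}^DeviceObservations",
    ]
    # One OBX segment per recorded signal value
    for i, (code, desc, value, unit) in enumerate(observations, start=1):
        segments.append(f"OBX|{i}|NM|{code}^{desc}||{value}|{unit}|||||F")
    return "\r".join(segments)

msg = build_oru_message(
    "OR1-MONITOR",
    [("150456", "MDC_PRESS_BLD_ART_MEAN", "92", "mmHg"),
     ("147842", "MDC_ECG_HEART_RATE", "61", "bpm")],
)
```

    A real gateway would additionally carry the full IEEE 11073 semantic descriptions and route the message into the clinical IT network rather than returning a string.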

  4. Facilitating Semantic Interoperability Among Ocean Data Systems: ODIP-R2R Student Outcomes

    NASA Astrophysics Data System (ADS)

    Stocks, K. I.; Chen, Y.; Shepherd, A.; Chandler, C. L.; Dockery, N.; Elya, J. L.; Smith, S. R.; Ferreira, R.; Fu, L.; Arko, R. A.

    2014-12-01

    With informatics providing an increasingly important set of tools for geoscientists, it is critical to train the next generation of scientists in information and data techniques. The NSF-supported Rolling Deck to Repository (R2R) Program works with the academic fleet community to routinely document, assess, and preserve the underway sensor data from U.S. research vessels. The Ocean Data Interoperability Platform (ODIP) is an EU-US-Australian collaboration fostering interoperability among regional e-infrastructures through workshops and joint prototype development. The need to align terminology between systems is a common challenge across all of the ODIP prototypes. Five R2R students were supported to address aspects of semantic interoperability within ODIP:

    - Developing a vocabulary matching service that links terms from different vocabularies with similar concepts. The service implements the Google Refine reconciliation service interface so that users can leverage the Google Refine application as a friendly user interface while linking different vocabulary terms.
    - Developing Resource Description Framework (RDF) resources that map Shipboard Automated Meteorological and Oceanographic System (SAMOS) vocabularies to internationally served vocabularies. Each SAMOS vocabulary term (data parameter and quality control flag) will be described as an RDF resource page. These RDF resources allow for enhanced discoverability and retrieval of SAMOS data by enabling data searches based on parameter.
    - Improving data retrieval and interoperability by exposing data and mapped vocabularies using Semantic Web technologies. We have collaborated with ODIP participating organizations to build a generalized data model that will be used to populate a SPARQL endpoint providing expressive querying over our data files.
    - Mapping local and regional vocabularies used by R2R to those used by ODIP partners. This work is described more fully in a companion poster.
    - Making published Linked Data
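    The vocabulary matching described in the first task amounts to scoring candidate terms by similarity, which is the core of what a Google Refine reconciliation endpoint returns. A minimal sketch, assuming invented vocabulary URIs and labels (a real service would query the actual SAMOS and internationally served vocabularies):

```python
import difflib

# Toy local and remote vocabularies; the URIs and labels are invented
# for illustration, not taken from any real vocabulary server.
SAMOS_TERMS = {"air_temperature": "TA", "sea_surface_temperature": "TS"}
REMOTE_VOCAB = {
    "http://vocab.example.org/P01/TEMPAIR": "temperature of the air",
    "http://vocab.example.org/P01/TEMPSST": "sea surface temperature",
}

def normalize(term):
    # Lowercase and replace underscores so label variants compare cleanly
    return term.lower().replace("_", " ").strip()

def reconcile(term, candidates, cutoff=0.5):
    """Rank candidate vocabulary entries by string similarity,
    mimicking the scored matches a reconciliation endpoint returns."""
    query = normalize(term)
    scored = [
        (uri, difflib.SequenceMatcher(None, query, normalize(label)).ratio())
        for uri, label in candidates.items()
    ]
    return sorted((s for s in scored if s[1] >= cutoff),
                  key=lambda s: s[1], reverse=True)

matches = reconcile("sea_surface_temperature", REMOTE_VOCAB)
```

    In practice the scoring would also use concept definitions and mappings, not just labels, but the request/response shape is the same.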

  5. A framework for semantic interoperability in healthcare: a service oriented architecture based on health informatics standards.

    PubMed

    Ryan, Amanda; Eklund, Peter

    2008-01-01

    Healthcare information is composed of many types of varying and heterogeneous data. Semantic interoperability in healthcare is especially important when all these different types of data need to interact. Presented in this paper is a solution to interoperability in healthcare based on a standards-based middleware software architecture used in enterprise solutions. This architecture has been translated into the healthcare domain using a messaging and modeling standard which upholds the ideals of the Semantic Web (HL7 V3) combined with a well-known standard terminology of clinical terms (SNOMED CT). PMID:18487823

  6. Achieving Interoperability in GEOSS - How Close Are We?

    NASA Astrophysics Data System (ADS)

    Arctur, D. K.; Khalsa, S. S.; Browdy, S. F.

    2010-12-01

    A primary goal of the Global Earth Observation System of Systems (GEOSS) is improving the interoperability between the observational, modelling, data assimilation, and prediction systems contributed by member countries. The GEOSS Common Infrastructure (GCI) comprises the elements designed to enable discovery and access to these diverse data and information sources. But to what degree can the mechanisms for accessing these data, and the data themselves, be considered interoperable? Will the separate efforts by Communities of Practice within GEO to build their own portals, such as for Energy, Biodiversity, and Air Quality, lead to fragmentation or synergy? What communication and leadership do we need with these communities to improve interoperability both within and across such communities? The Standards and Interoperability Forum (SIF) of GEO's Architecture and Data Committee has assessed progress towards achieving the goal of global interoperability and made recommendations regarding evolution of the architecture and overall data strategy to ensure fulfillment of the GEOSS vision. This presentation will highlight the results of this study, and directions for further work.

  7. An Approach to Semantic Interoperability for Improved Capability Exchanges in Federations of Systems

    ERIC Educational Resources Information Center

    Moschoglou, Georgios

    2013-01-01

    This study seeks an affirmative answer to the question whether a knowledge-based approach to system of systems interoperation using semantic web standards and technologies can provide the centralized control of the capability for exchanging data and services lacking in a federation of systems. Given the need to collect and share real-time…

  8. RuleML-Based Learning Object Interoperability on the Semantic Web

    ERIC Educational Resources Information Center

    Biletskiy, Yevgen; Boley, Harold; Ranganathan, Girish R.

    2008-01-01

    Purpose: The present paper aims to describe an approach for building the Semantic Web rules for interoperation between heterogeneous learning objects, namely course outlines from different universities, and one of the rule uses: identifying (in)compatibilities between course descriptions. Design/methodology/approach: As proof of concept, a rule…

  9. Sharing meanings: developing interoperable semantic technologies to enhance reproducibility in earth and environmental science research

    NASA Astrophysics Data System (ADS)

    Schildhauer, M.

    2015-12-01

    Earth and environmental scientists are familiar with the entities, processes, and theories germane to their field of study, and comfortable collecting and analyzing data in their area of interest. Yet, while there appears to be consistency and agreement as to the scientific "terms" used to describe features in their data and analyses, aside from a few fundamental physical characteristics, such as mass or velocity, there can be broad tolerances, if not considerable ambiguity, in how many earth science "terms" map to the underlying "concepts" that they actually represent. This ambiguity in meanings, or "semantics", creates major problems for scientific reproducibility. It greatly impedes the ability to replicate results by making it difficult to determine the specifics of the intended meanings of terms such as deforestation or carbon flux, as to scope, composition, magnitude, etc. In addition, semantic ambiguity complicates assemblage of comparable data for reproducing results, due to ambiguous or idiosyncratic labels for measurements, such as percent cover of forest, where the term "forest" is undefined; or where a reported output of "total carbon-emissions" might just include CO2 emissions, but not methane emissions. In this talk, we describe how the NSF-funded DataONE repository for earth and environmental science data (http://dataone.org), is using W3C-standard languages (RDF/OWL) to build an ontology for clarifying concepts embodied in heterogeneous data and model outputs. With an initial focus on carbon cycling concepts using terrestrial biospheric model outputs and LTER productivity data, we describe how we are achieving interoperability with "semantic vocabularies" (or ontologies) from aligned earth and life science domains, including OBO-foundry ontologies such as ENVO and BCO; the ISO/OGC O&M; and the NSF Earthcube GeoLink project. Our talk will also discuss best practices that may be helpful for other groups interested in constructing their own

  10. Interoperability and different ways of knowing: How semantics can aid in cross-cultural understanding

    NASA Astrophysics Data System (ADS)

    Pulsifer, P. L.; Parsons, M. A.; Duerr, R. E.; Fox, P. A.; Khalsa, S. S.; McCusker, J. P.; McGuinness, D. L.

    2012-12-01

    differences in its application. Furthermore, it is an analog encoding scheme whose meaning has evolved over time. By semantically modeling the egg code, its subtle variations, and how it connects to other data, we illustrate a mechanism for translating across data formats and representations. But there are limits to what semantically modeling the egg code can achieve. The egg code and common operational sea ice formats do not address community needs, notably the timing and processes of sea ice freeze-up and break-up, which have a profound impact on local hunting, shipping, oil exploration, and safety. We work with local experts from four very different Indigenous communities and scientific creators of sea ice forecasts to establish an understanding of concepts and terminology related to fall freeze-up and spring break-up in the individually represented regions. This helps expand our conceptions of sea ice while also aiding in understanding across cultures and communities, and in passing knowledge to younger generations. This is an early step in expanding concepts of interoperability to very different ways of knowing, to make data truly relevant and locally useful.

  11. Interoperable cross-domain semantic and geospatial framework for automatic change detection

    NASA Astrophysics Data System (ADS)

    Kuo, Chiao-Ling; Hong, Jung-Hong

    2016-01-01

    With the increasingly diverse types of geospatial data established over the last few decades, semantic interoperability in integrated applications has attracted much interest in the field of Geographic Information Systems (GIS). This paper proposes a new strategy and framework to process cross-domain geodata at the semantic level. The framework leverages the semantic equivalence of concepts between domains through a bridge ontology and facilitates the integrated use of data from different domains, which has long been considered an essential strength of GIS but is impeded by the lack of understanding of the semantics implicitly hidden in the data. We choose the task of change detection to demonstrate how the introduction of ontology concepts can effectively make this integration possible. We analyze the common properties of geodata and change detection factors, then construct rules and summarize possible change scenarios for making final decisions. The use of topographic map data to detect changes in land use shows promising success, as far as the improvement of efficiency and level of automation is concerned. We believe the ontology-oriented approach will enable a new way of integrating data across different domains from the perspective of semantic interoperability, and may even open a new dimension for future GIS.
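    The bridge-ontology idea can be reduced to a minimal sketch: an explicit equivalence mapping between the two domains' concepts, plus a rule that compares the bridged concept against the registered one. All class names below are hypothetical, and a real implementation would reason over ontology axioms rather than a lookup table:

```python
# Bridge ontology as an explicit mapping: each topographic-map feature
# class is linked to its semantically equivalent land-use concept.
# All class names are invented for illustration.
BRIDGE = {
    "topo:Building": "landuse:BuiltUp",
    "topo:Paddy": "landuse:Agricultural",
    "topo:Forest": "landuse:Forest",
}

def detect_change(parcel_landuse, observed_topo_class):
    """Flag a candidate land-use change when the concept bridged from
    the newly surveyed topographic feature disagrees with the parcel's
    registered land-use concept."""
    bridged = BRIDGE.get(observed_topo_class)
    if bridged is None:
        return ("unknown", None)   # no bridge axiom: defer to a human
    if bridged == parcel_landuse:
        return ("unchanged", bridged)
    return ("changed", bridged)

# A parcel registered as agricultural now shows a building on the map
status, new_class = detect_change("landuse:Agricultural", "topo:Building")
```

    The paper's rules additionally weigh geometric and attribute properties before a final decision; this sketch only shows the semantic comparison step.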

  12. Investigating the Semantic Interoperability of Laboratory Data Exchanged Using LOINC Codes in Three Large Institutions

    PubMed Central

    Lin, Ming-Chin; Vreeman, Daniel J.; Huff, Stanley M.

    2011-01-01

    LOINC codes are seeing increased use in many organizations. In this study, we examined the barriers to semantic interoperability that still exist in electronic data exchange of laboratory results even when LOINC codes are being used as the observation identifiers. We analyzed semantic interoperability of laboratory data exchanged using LOINC codes in three large institutions. To simplify the analytic process, we divided the laboratory data into quantitative and non-quantitative tests. The analysis revealed many inconsistencies even when LOINC codes are used to exchange laboratory data. For quantitative tests, the most frequent problems were inconsistencies in the use of units of measure: variations in the strings used to represent units (unrecognized synonyms), use of units that result in different magnitudes of the numeric quantity, and missing units of measure. For non-quantitative tests, the most frequent problems were acronyms/synonyms, different classes of elements in enumerated lists, and the use of free text. Our findings highlight the limitations of interoperability in current laboratory reporting. PMID:22195138
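    The unit-of-measure problems reported for quantitative tests suggest a normalization step before results can be compared. A minimal sketch with an illustrative synonym table and conversion factors (not the authors' method, and far from a complete UCUM mapping):

```python
# Synonym table for unit strings observed across institutions, plus
# magnitude conversions into a canonical unit; entries are illustrative,
# not a complete UCUM mapping.
UNIT_SYNONYMS = {"mg/dl": "mg/dL", "MG/DL": "mg/dL", "g/l": "g/L"}
TO_CANONICAL = {           # factor converting to mg/dL
    "mg/dL": 1.0,
    "g/L": 100.0,          # 1 g/L = 100 mg/dL
}

def normalize_result(value, unit):
    """Map a reported (value, unit) pair onto a canonical unit so results
    coded with the same LOINC code become directly comparable."""
    unit = UNIT_SYNONYMS.get(unit, unit)
    if unit not in TO_CANONICAL:
        raise ValueError(f"unmapped unit: {unit!r}")   # missing unit: flag, don't guess
    return value * TO_CANONICAL[unit], "mg/dL"

# Two reports of the same analyte from different senders
a = normalize_result(1.5, "g/l")      # -> (150.0, 'mg/dL')
b = normalize_result(150.0, "MG/DL")  # -> (150.0, 'mg/dL')
```

    Raising on an unmapped unit mirrors the study's finding that missing or unrecognized units are better surfaced than silently passed through.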

  13. Interoperability Between Coastal Web Atlases Using Semantic Mediation: A Case Study of the International Coastal Atlas Network (ICAN)

    NASA Astrophysics Data System (ADS)

    Wright, D. J.; Lassoued, Y.; Dwyer, N.; Haddad, T.; Bermudez, L. E.; Dunne, D.

    2009-12-01

    Coastal mapping plays an important role in informing marine spatial planning, resource management, maritime safety, hazard assessment and even national sovereignty. As such, there is now a plethora of data/metadata catalogs, pre-made maps, tabular and text information on resource availability and exploitation, and decision-making tools. A recent trend has been to encapsulate these in a special class of web-enabled geographic information systems called a coastal web atlas (CWA). While multiple benefits are derived from tailor-made atlases, there is great value added from the integration of disparate CWAs. CWAs linked to one another can be queried more successfully to optimize planning and decision-making. If a dataset is missing in one atlas, it may be immediately located in another. Similar datasets in two atlases may be combined to enhance study in either region. But how best to achieve semantic interoperability to mitigate vague data queries, concepts or natural language semantics when retrieving and integrating data and information? We report on the development of a new prototype seeking to interoperate between two initial CWAs: the Marine Irish Digital Atlas (MIDA) and the Oregon Coastal Atlas (OCA). These two mature atlases are used as a testbed for more regional connections, with the intent for the OCA to use lessons learned to develop a regional network of CWAs along the west coast, and for MIDA to do the same in building and strengthening atlas networks with the UK, Belgium, and other parts of Europe. Our prototype uses semantic interoperability via services harmonization and ontology mediation, allowing local atlases to use their own data structures and vocabularies (ontologies). We use standard technologies such as OGC Web Map Services (WMS) for delivering maps, and OGC Catalogue Service for the Web (CSW) for delivering and querying ISO-19139 metadata. The metadata records of a given CWA use a given ontology of terms called a local ontology.
Human or machine

  14. Cohort Selection and Management Application Leveraging Standards-based Semantic Interoperability and a Groovy DSL.

    PubMed

    Bucur, Anca; van Leeuwen, Jasper; Chen, Njin-Zu; Claerhout, Brecht; de Schepper, Kristof; Perez-Rey, David; Paraiso-Medina, Sergio; Alonso-Calvo, Raul; Mehta, Keyur; Krykwinski, Cyril

    2016-01-01

    This paper describes a new Cohort Selection application implemented to support streamlining the definition phase of multi-centric clinical research in oncology. Our approach aims at both ease of use and precision in defining the selection filters expressing the characteristics of the desired population. The application leverages our standards-based Semantic Interoperability Solution and a Groovy DSL to provide high expressiveness in the definition of filters and flexibility in their composition into complex selection graphs including splits and merges. Widely-adopted ontologies such as SNOMED-CT are used to represent the semantics of the data and to express concepts in the application filters, facilitating data sharing and collaboration on joint research questions in large communities of clinical users. The application supports patient data exploration and efficient collaboration in multi-site, heterogeneous and distributed data environments. PMID:27570644
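    The filter-composition idea, selection filters combined into graphs with splits and merges, can be sketched with plain predicates. The record fields and SNOMED CT codes below are illustrative, and the actual application uses a Groovy DSL rather than Python:

```python
# Cohort filters as composable predicates over patient records; the
# record fields and codes are invented stand-ins for the SNOMED-CT-coded
# criteria the application composes.
PATIENTS = [
    {"id": 1, "diagnosis": "SCT:254837009", "age": 54},  # breast cancer
    {"id": 2, "diagnosis": "SCT:254837009", "age": 71},
    {"id": 3, "diagnosis": "SCT:363358000", "age": 60},  # lung tumor
]

def where(field, pred):
    """Build a reusable filter node from a field name and a predicate."""
    return lambda cohort: [p for p in cohort if pred(p[field])]

def merge(*branches):
    """Union of several selection branches (a 'merge' in the graph),
    de-duplicated by patient id."""
    seen, out = set(), []
    for branch in branches:
        for p in branch:
            if p["id"] not in seen:
                seen.add(p["id"])
                out.append(p)
    return out

breast = where("diagnosis", lambda d: d == "SCT:254837009")(PATIENTS)
# Split the branch on age, then merge the arms back together
younger = where("age", lambda a: a < 65)(breast)
older = where("age", lambda a: a >= 65)(breast)
cohort = merge(younger, older)
```

    A DSL adds the expressiveness layer on top of exactly this kind of composition: named filters, reusable sub-graphs, and terminology-bound criteria.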

  15. Cohort Selection and Management Application Leveraging Standards-based Semantic Interoperability and a Groovy DSL

    PubMed Central

    Bucur, Anca; van Leeuwen, Jasper; Chen, Njin-Zu; Claerhout, Brecht; de Schepper, Kristof; Perez-Rey, David; Paraiso-Medina, Sergio; Alonso-Calvo, Raul; Mehta, Keyur; Krykwinski, Cyril

    2016-01-01

    This paper describes a new Cohort Selection application implemented to support streamlining the definition phase of multi-centric clinical research in oncology. Our approach aims at both ease of use and precision in defining the selection filters expressing the characteristics of the desired population. The application leverages our standards-based Semantic Interoperability Solution and a Groovy DSL to provide high expressiveness in the definition of filters and flexibility in their composition into complex selection graphs including splits and merges. Widely-adopted ontologies such as SNOMED-CT are used to represent the semantics of the data and to express concepts in the application filters, facilitating data sharing and collaboration on joint research questions in large communities of clinical users. The application supports patient data exploration and efficient collaboration in multi-site, heterogeneous and distributed data environments. PMID:27570644

  16. Implementation of a metadata architecture and knowledge collection to support semantic interoperability in an enterprise data warehouse.

    PubMed

    Dhaval, Rakesh; Borlawsky, Tara; Ostrander, Michael; Santangelo, Jennifer; Kamal, Jyoti; Payne, Philip R O

    2008-01-01

    In order to enhance interoperability between enterprise systems, and improve data validity and reliability throughout The Ohio State University Medical Center (OSUMC), we have initiated the development of an ontology-anchored metadata architecture and knowledge collection for our enterprise data warehouse. The metadata and corresponding semantic relationships stored in the OSUMC knowledge collection are intended to promote consistency and interoperability across the heterogeneous clinical, research, business and education information managed within the data warehouse. PMID:18999040

  17. Case Study for Integration of an Oncology Clinical Site in a Semantic Interoperability Solution based on HL7 v3 and SNOMED-CT: Data Transformation Needs.

    PubMed

    Ibrahim, Ahmed; Bucur, Anca; Perez-Rey, David; Alonso, Enrique; de Hoog, Matthy; Dekker, Andre; Marshall, M Scott

    2015-01-01

    This paper describes the data transformation pipeline defined to support the integration of a new clinical site in a standards-based semantic interoperability environment. The available datasets combined structured and free-text patient data in Dutch, collected in the context of radiation therapy in several cancer types. Our approach aims at both efficiency and data quality. We combine custom-developed scripts, standard tools and manual validation by clinical and knowledge experts. We identified key challenges emerging from the several sources of heterogeneity in our case study (systems, language, data structure, clinical domain) and implemented solutions that we will further generalize for the integration of new sites. We conclude that the required effort for data transformation is manageable which supports the feasibility of our semantic interoperability solution. The achieved semantic interoperability will be leveraged for the deployment and evaluation at the clinical site of applications enabling secondary use of care data for research. This work has been funded by the European Commission through the INTEGRATE (FP7-ICT-2009-6-270253) and EURECA (FP7-ICT-2011-288048) projects. PMID:26306242

  18. caCORE version 3: Implementation of a model driven, service-oriented architecture for semantic interoperability

    PubMed Central

    Komatsoulis, George A.; Warzel, Denise B.; Hartel, Frank W.; Shanbhag, Krishnakant; Chilukuri, Ram; Fragoso, Gilberto; de Coronado, Sherri; Reeves, Dianne M.; Hadfield, Jillaine B.; Ludet, Christophe; Covitz, Peter A.

    2008-01-01

    One of the requirements for a federated information system is interoperability, the ability of one computer system to access and use the resources of another system. This feature is particularly important in biomedical research systems, which need to coordinate a variety of disparate types of data. In order to meet this need, the National Cancer Institute Center for Bioinformatics (NCICB) has created the cancer Common Ontologic Representation Environment (caCORE), an interoperability infrastructure based on Model Driven Architecture. The caCORE infrastructure provides a mechanism to create interoperable biomedical information systems. Systems built using the caCORE paradigm address both aspects of interoperability: the ability to access data (syntactic interoperability) and understand the data once retrieved (semantic interoperability). This infrastructure consists of an integrated set of three major components: a controlled terminology service (Enterprise Vocabulary Services), a standards-based metadata repository (the cancer Data Standards Repository) and an information system with an Application Programming Interface (API) based on Domain Model Driven Architecture. This infrastructure is being leveraged to create a Semantic Service Oriented Architecture (SSOA) for cancer research by the National Cancer Institute’s cancer Biomedical Informatics Grid (caBIG™). PMID:17512259

  19. caCORE version 3: Implementation of a model driven, service-oriented architecture for semantic interoperability.

    PubMed

    Komatsoulis, George A; Warzel, Denise B; Hartel, Francis W; Shanbhag, Krishnakant; Chilukuri, Ram; Fragoso, Gilberto; Coronado, Sherri de; Reeves, Dianne M; Hadfield, Jillaine B; Ludet, Christophe; Covitz, Peter A

    2008-02-01

    One of the requirements for a federated information system is interoperability, the ability of one computer system to access and use the resources of another system. This feature is particularly important in biomedical research systems, which need to coordinate a variety of disparate types of data. In order to meet this need, the National Cancer Institute Center for Bioinformatics (NCICB) has created the cancer Common Ontologic Representation Environment (caCORE), an interoperability infrastructure based on Model Driven Architecture. The caCORE infrastructure provides a mechanism to create interoperable biomedical information systems. Systems built using the caCORE paradigm address both aspects of interoperability: the ability to access data (syntactic interoperability) and understand the data once retrieved (semantic interoperability). This infrastructure consists of an integrated set of three major components: a controlled terminology service (Enterprise Vocabulary Services), a standards-based metadata repository (the cancer Data Standards Repository) and an information system with an Application Programming Interface (API) based on Domain Model Driven Architecture. This infrastructure is being leveraged to create a Semantic Service-Oriented Architecture (SSOA) for cancer research by the National Cancer Institute's cancer Biomedical Informatics Grid (caBIG). PMID:17512259

  20. The Role of Ontologies for Sustainable, Semantically Interoperable and Trustworthy EHR Solutions

    PubMed Central

    BLOBEL, Bernd; KALRA, Dipak; KOEHN, Marc; LUNN, Ken; PHAROW, Peter; RUOTSALAINEN, Pekka; SCHULZ, Stefan; SMITH, Barry

    2016-01-01

    As health systems around the world turn towards highly distributed, specialized and cooperative structures to increase quality and safety of care as well as efficiency and efficacy of delivery processes, there is a growing need for supporting communication and collaboration of all parties involved with advanced ICT solutions. The Electronic Health Record (EHR) provides the information platform which is maturing towards the eHealth core application. To meet the requirements for sustainable, semantically interoperable, and trustworthy EHR solutions, different standards and different national strategies have been established. The workshop summarizes the requirements for such advanced EHR systems and their underlying architecture, presents different strategies and solutions advocated by corresponding protagonists, discusses pros and cons as well as harmonization and migration strategies for those approaches. It particularly highlights a turn towards ontology-driven architectures. The workshop is a joint activity of the EFMI Working Groups “Electronic Health Records” and “Security, Safety and Ethics”. PMID:19745454

  1. The role of ontologies for sustainable, semantically interoperable and trustworthy EHR solutions.

    PubMed

    Blobel, Bernd; Kalra, Dipak; Koehn, Marc; Lunn, Ken; Pharow, Peter; Ruotsalainen, Pekka; Schulz, Stefan; Smith, Barry

    2009-01-01

    As health systems around the world turn towards highly distributed, specialized and cooperative structures to increase quality and safety of care as well as efficiency and efficacy of delivery processes, there is a growing need for supporting communication and collaboration of all parties involved with advanced ICT solutions. The Electronic Health Record (EHR) provides the information platform which is maturing towards the eHealth core application. To meet the requirements for sustainable, semantically interoperable, and trustworthy EHR solutions, different standards and different national strategies have been established. The workshop summarizes the requirements for such advanced EHR systems and their underlying architecture, presents different strategies and solutions advocated by corresponding protagonists, discusses pros and cons as well as harmonization and migration strategies for those approaches. It particularly highlights a turn towards ontology-driven architectures. The workshop is a joint activity of the EFMI Working Groups "Electronic Health Records" and "Security, Safety and Ethics". PMID:19745454

  2. Interoperability in Personalized Adaptive Learning

    ERIC Educational Resources Information Center

    Aroyo, Lora; Dolog, Peter; Houben, Geert-Jan; Kravcik, Milos; Naeve, Ambjorn; Nilsson, Mikael; Wild, Fridolin

    2006-01-01

    Personalized adaptive learning requires semantic-based and context-aware systems to manage the Web knowledge efficiently as well as to achieve semantic interoperability between heterogeneous information resources and services. The technological and conceptual differences can be bridged either by means of standards or via approaches based on the…

  3. A Proof-of-Concept for Semantically Interoperable Federation of IoT Experimentation Facilities.

    PubMed

    Lanza, Jorge; Sanchez, Luis; Gomez, David; Elsaleh, Tarek; Steinke, Ronald; Cirillo, Flavio

    2016-01-01

    The Internet-of-Things (IoT) is unanimously identified as one of the main pillars of future smart scenarios. The potential of IoT technologies and deployments has been already demonstrated in a number of different application areas, including transport, energy, safety and healthcare. However, despite the growing number of IoT deployments, the majority of IoT applications tend to be self-contained, thereby forming application silos. A lightweight data centric integration and combination of these silos presents several challenges that still need to be addressed. Indeed, the ability to combine and synthesize data streams and services from diverse IoT platforms and testbeds, holds the promise to increase the potentiality of smart applications in terms of size, scope and targeted business context. In this article, a proof-of-concept implementation that federates two different IoT experimentation facilities by means of semantic-based technologies will be described. The specification and design of the implemented system and information models will be described together with the practical details of the developments carried out and its integration with the existing IoT platforms supporting the aforementioned testbeds. Overall, the system described in this paper demonstrates that it is possible to open new horizons in the development of IoT applications and experiments at a global scale, that transcend the (silo) boundaries of individual deployments, based on the semantic interconnection and interoperability of diverse IoT platforms and testbeds. PMID:27367695

  4. A Proof-of-Concept for Semantically Interoperable Federation of IoT Experimentation Facilities

    PubMed Central

    Lanza, Jorge; Sanchez, Luis; Gomez, David; Elsaleh, Tarek; Steinke, Ronald; Cirillo, Flavio

    2016-01-01

    The Internet-of-Things (IoT) is unanimously identified as one of the main pillars of future smart scenarios. The potential of IoT technologies and deployments has been already demonstrated in a number of different application areas, including transport, energy, safety and healthcare. However, despite the growing number of IoT deployments, the majority of IoT applications tend to be self-contained, thereby forming application silos. A lightweight data centric integration and combination of these silos presents several challenges that still need to be addressed. Indeed, the ability to combine and synthesize data streams and services from diverse IoT platforms and testbeds, holds the promise to increase the potentiality of smart applications in terms of size, scope and targeted business context. In this article, a proof-of-concept implementation that federates two different IoT experimentation facilities by means of semantic-based technologies will be described. The specification and design of the implemented system and information models will be described together with the practical details of the developments carried out and its integration with the existing IoT platforms supporting the aforementioned testbeds. Overall, the system described in this paper demonstrates that it is possible to open new horizons in the development of IoT applications and experiments at a global scale, that transcend the (silo) boundaries of individual deployments, based on the semantic interconnection and interoperability of diverse IoT platforms and testbeds. PMID:27367695

  5. A step-by-step methodology for enterprise interoperability projects

    NASA Astrophysics Data System (ADS)

    Chalmeta, Ricardo; Pazos, Verónica

    2015-05-01

    Enterprise interoperability is one of the key factors for enhancing enterprise competitiveness. Achieving enterprise interoperability is an extremely complex process which involves different technological, human and organisational elements. In this paper we present a framework to help achieve enterprise interoperability. The framework has been developed taking into account the three domains of interoperability: Enterprise Modelling; Architecture and Platform; and Ontologies. The main novelty of the framework in comparison to existing ones is that it includes a step-by-step methodology that explains how to carry out an enterprise interoperability project taking into account different interoperability views, such as business, process, human resources, technology, knowledge and semantics.

  6. Semantic Interoperability for Computational Mineralogy: Experiences of the eMinerals Consortium

    NASA Astrophysics Data System (ADS)

    Walker, A. M.; White, T. O.; Dove, M. T.; Bruin, R. P.; Couch, P. A.; Tyer, R. P.

    2006-12-01

    The use of atomic scale computer simulation of minerals to obtain information for geophysics and environmental science has grown enormously over the past couple of decades. It is now routine to probe mineral behavior in the Earth's deep interior and in the surface environment by borrowing methods and simulation codes from computational chemistry and physics. It is becoming increasingly important to use methods embodied in more than one of these codes to solve any single scientific problem. However, scientific codes are rarely designed for easy interoperability and data exchange; data formats are often code-specific, poorly documented and fragile, liable to frequent change between software versions, and even compiler versions. This means that the scientist's simple desire to use the methodological approaches offered by multiple codes is frustrated, and even the sharing of data between collaborators becomes fraught with difficulties. The eMinerals consortium was formed in the early stages of the UK eScience program with the aim of developing the tools needed to apply atomic scale simulation to environmental problems in a grid-enabled world, and to harness the computational power offered by grid technologies to address some outstanding mineralogical problems. One example of the kind of problem we can tackle is the origin of the compressibility anomaly in silica glass. By passing data directly between simulation and analysis tools we were able to probe this effect in more detail than has previously been possible and have shown how the anomaly is related to the details of the amorphous structure. In order to approach this kind of problem we have constructed a mini-grid, a small scale and extensible combined compute- and data-grid that allows the execution of many calculations in parallel, and the transparent storage of semantically-rich marked-up result data. Importantly, we automatically capture multiple kinds of metadata and key results from each calculation. We

  7. Semantic Interoperability and Dynamic Resource Discovery in P2P Systems

    NASA Astrophysics Data System (ADS)

    Bianchini, Devis; de Antonellis, Valeria; Melchiori, Michele

    Service-oriented architectures and Semantic Web technologies are widely recognized as strategic means to enable effective search and access to data and services in P2P systems. In this paper we present SERVANT, a reference architecture to support SERVice-based semANTic search, by means of a semantic distributed service registry. Specifically, SERVANT supports the automatic discovery of services, available in the P2P network, apt to satisfy user's requests for information searches. The SERVANT architecture is based on: a distributed service registry, DSR, composed of semantic-enriched peer registries and semantic links between peer registries holding similar services; a Service Knowledge Evolution Manager, to update peer knowledge; a Semantic Search Assistant, to find services satisfying a user's request, to suggest possible alternative services and to propose possible related services for composition. The proposed architecture allows for efficient semantic search based on service discovery throughout the network and is able to manage P2P network evolution.

  8. An Integrated Framework to Achieve Interoperability in Person-Centric Health Management

    PubMed Central

    Vergari, Fabio; Salmon Cinotti, Tullio; D'Elia, Alfredo; Roffia, Luca; Zamagni, Guido; Lamberti, Claudio

    2011-01-01

    The need for high-quality out-of-hospital healthcare is a known socioeconomic problem. Exploiting the evolution of ICT, ad hoc telemedicine solutions have been proposed in the past. Integrating such ad hoc solutions in order to cost-effectively support the entire healthcare cycle is still a research challenge. In order to handle the heterogeneity of relevant information and to overcome the fragmentation of out-of-hospital instrumentation in person-centric healthcare systems, a shared and open source interoperability component can be adopted, which is ontology driven and based on the semantic web data model. The feasibility and the advantages of the proposed approach are demonstrated by presenting the use case of real-time monitoring of patients' health and their environmental context. PMID:21811499

  9. A Service Oriented Architecture Approach to Achieve Interoperability between Immunization Information Systems in Iran

    PubMed Central

    Hosseini, Masoud; Ahmadi, Maryam; Dixon, Brian E.

    2014-01-01

    Clinical decision support (CDS) systems can support vaccine forecasting and immunization reminders; however, immunization decision-making requires data from fragmented, independent systems. Interoperability and accurate data exchange between immunization information systems (IIS) is an essential factor in utilizing immunization CDS systems. Service-oriented architecture (SOA) and Health Level 7 (HL7) are dominant standards for web-based exchange of clinical information. We implemented a system based on SOA and HL7 v3 to support immunization CDS in Iran. We evaluated system performance by exchanging 1500 immunization records for roughly 400 infants between two IISs. System turnaround time was less than a minute for synchronous operation calls, and the retrieved immunization histories of infants were always identical across systems. CDS-generated reports accorded with immunization guidelines, and the calculations of next visit times were accurate. Interoperability is rare or nonexistent between IIS. Since inter-state data exchange is rare in the United States, this approach could be a good prototype for achieving interoperability of immunization information. PMID:25954452

  10. A Service Oriented Architecture Approach to Achieve Interoperability between Immunization Information Systems in Iran.

    PubMed

    Hosseini, Masoud; Ahmadi, Maryam; Dixon, Brian E

    2014-01-01

    Clinical decision support (CDS) systems can support vaccine forecasting and immunization reminders; however, immunization decision-making requires data from fragmented, independent systems. Interoperability and accurate data exchange between immunization information systems (IIS) is an essential factor in utilizing immunization CDS systems. Service-oriented architecture (SOA) and Health Level 7 (HL7) are dominant standards for web-based exchange of clinical information. We implemented a system based on SOA and HL7 v3 to support immunization CDS in Iran. We evaluated system performance by exchanging 1500 immunization records for roughly 400 infants between two IISs. System turnaround time was less than a minute for synchronous operation calls, and the retrieved immunization histories of infants were always identical across systems. CDS-generated reports accorded with immunization guidelines, and the calculations of next visit times were accurate. Interoperability is rare or nonexistent between IIS. Since inter-state data exchange is rare in the United States, this approach could be a good prototype for achieving interoperability of immunization information. PMID:25954452

  11. Community-Driven Initiatives to Achieve Interoperability for Ecological and Environmental Data

    NASA Astrophysics Data System (ADS)

    Madin, J.; Bowers, S.; Jones, M.; Schildhauer, M.

    2007-12-01

    interoperability by describing the semantics of data at the level of observation and measurement (rather than the traditional focus at the level of the data set) and will define the necessary specifications and technologies to facilitate semantic interpretation and integration of observational data for the environmental sciences. As such, this initiative will focus on unifying the various existing approaches for representing and describing observation data (e.g., SEEK's Observation Ontology, CUAHSI's Observation Data Model, NatureServe's Observation Data Standard, to name a few). Products of this initiative will be compatible with existing standards and build upon recent advances in knowledge representation (e.g., W3C's recommended Web Ontology Language, OWL) that have demonstrated practical utility in enhancing scientific communication and data interoperability in other communities (e.g., the genomics community). A community-sanctioned, extensible, and unified model for observational data will support metadata standards such as EML while reducing the "babel" of scientific dialects that currently impede effective data integration, which will in turn provide a strong foundation for enabling cross-disciplinary synthetic research in the ecological and environmental sciences.

  12. The Semantic Management of Environmental Resources within the Interoperable Context of the EuroGEOSS: Alignment of GEMET and the GEOSS SBAs

    NASA Astrophysics Data System (ADS)

    Cialone, Claudia; Stock, Kristin

    2010-05-01

    EuroGEOSS is a European Commission funded project. It aims at improving scientific understanding of the complex mechanisms which drive changes affecting our planet, identifying and establishing interoperable arrangements between environmental information systems. These systems would be sustained and operated by organizations with a clear mandate and resources, and rendered available following the specifications of already existent frameworks such as GEOSS (the Global Earth Observation System of Systems) and INSPIRE (the Infrastructure for Spatial Information in the European Community). The EuroGEOSS project's infrastructure focuses on three thematic areas: forestry, drought and biodiversity. One of the important activities in the project is the retrieval, parsing and harmonization of the large amount of heterogeneous environmental data available at local, regional and global levels between these strategic areas. The challenge is to render it semantically and technically interoperable in a simple way. An initial step in achieving this semantic and technical interoperability involves the selection of appropriate classification schemes (for example, thesauri, ontologies and controlled vocabularies) to describe the resources in the EuroGEOSS framework. These classifications become a crucial part of the interoperable framework scaffolding because they allow data providers to describe their resources and thus support resource discovery, execution and orchestration of varying levels of complexity. However, at present, given the diverse range of environmental thesauri, controlled vocabularies and ontologies and the large number of resources provided by project participants, the selection of appropriate classification schemes involves a number of considerations. First of all, there is the semantic difficulty of selecting classification schemes that contain concepts that are relevant to each thematic area. Secondly, EuroGEOSS is intended to accommodate a number of

  13. Using architectures for semantic interoperability to create journal clubs for emergency response

    SciTech Connect

    Powell, James E; Collins, Linn M; Martinez, Mark L B

    2009-01-01

    In certain types of 'slow burn' emergencies, careful accumulation and evaluation of information can offer a crucial advantage. The SARS outbreak in the first decade of the 21st century was such an event, and ad hoc journal clubs played a critical role in assisting scientific and technical responders in identifying and developing various strategies for halting what could have become a dangerous pandemic. This research-in-progress paper describes a process for leveraging emerging semantic web and digital library architectures and standards to (1) create a focused collection of bibliographic metadata, (2) extract semantic information, (3) convert it to the Resource Description Framework /Extensible Markup Language (RDF/XML), and (4) integrate it so that scientific and technical responders can share and explore critical information in the collections.
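    Step (3) of the process above, serializing extracted metadata as RDF/XML, can be sketched as follows. This is an illustrative example, not the authors' implementation; the record URI, title, and subject values are invented, and Dublin Core is assumed as the descriptive vocabulary.

```python
# Wrap extracted bibliographic metadata in RDF/XML using only the standard library.
import xml.etree.ElementTree as ET

RDF_NS = "http://www.w3.org/1999/02/22-rdf-syntax-ns#"
DC_NS = "http://purl.org/dc/elements/1.1/"

def record_to_rdf(uri, title, subjects):
    """Build an rdf:RDF document describing one bibliographic record."""
    ET.register_namespace("rdf", RDF_NS)
    ET.register_namespace("dc", DC_NS)
    root = ET.Element(f"{{{RDF_NS}}}RDF")
    desc = ET.SubElement(root, f"{{{RDF_NS}}}Description",
                         {f"{{{RDF_NS}}}about": uri})
    ET.SubElement(desc, f"{{{DC_NS}}}title").text = title
    for s in subjects:
        ET.SubElement(desc, f"{{{DC_NS}}}subject").text = s
    return ET.tostring(root, encoding="unicode")

rdf_xml = record_to_rdf("http://example.org/articles/1",  # hypothetical URI
                        "SARS coronavirus genome analysis",
                        ["SARS", "genomics"])
```

    Once records share this RDF representation, responders' collections can be merged and queried uniformly regardless of which bibliographic source they came from.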

  14. An HL7-CDA wrapper for facilitating semantic interoperability to rule-based Clinical Decision Support Systems.

    PubMed

    Sáez, Carlos; Bresó, Adrián; Vicente, Javier; Robles, Montserrat; García-Gómez, Juan Miguel

    2013-03-01

    The success of Clinical Decision Support Systems (CDSS) greatly depends on their capability of being integrated in Health Information Systems (HIS). Several proposals have been published to date to permit CDSS to gather patient data from HIS. Some base the CDSS data input on the HL7 reference model; however, they are tailored to specific CDSS or clinical guideline technologies, or do not focus on standardizing the CDSS resultant knowledge. We propose a solution for facilitating semantic interoperability for rule-based CDSS, focusing on standardized input and output documents conforming to an HL7-CDA wrapper. We define the HL7-CDA restrictions in an HL7-CDA implementation guide. Patient data and rule inference results are mapped respectively to and from the CDSS by means of a binding method based on an XML binding file. As an independent clinical document, the results of a CDSS can have clinical and legal validity. The proposed solution is being applied in a CDSS providing patient-specific recommendations for the care management of outpatients with diabetes mellitus. PMID:23199936
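    The binding idea described in the abstract can be sketched in miniature: a binding table maps coded observations in the clinical document to the variable names a rule engine expects. This is a toy illustration under assumed element names and codes, not the actual CDA implementation guide or binding-file format.

```python
# Map coded observations from an XML clinical document to CDSS rule variables.
import xml.etree.ElementTree as ET

CDA_DOC = """<ClinicalDocument>
  <observation code="HBA1C" value="7.9"/>
  <observation code="SBP" value="135"/>
</ClinicalDocument>"""

# Binding: rule variable -> observation code (would live in an XML binding file).
BINDING = {"hba1c": "HBA1C", "systolic_bp": "SBP"}

def cda_to_rule_input(cda_xml, binding):
    """Extract bound observation values and rename them for the rule engine."""
    root = ET.fromstring(cda_xml)
    values = {obs.get("code"): float(obs.get("value"))
              for obs in root.iter("observation")}
    return {var: values[code] for var, code in binding.items() if code in values}

facts = cda_to_rule_input(CDA_DOC, BINDING)
# A diabetes-management rule can now fire on, e.g., facts["hba1c"] > 7.0.
```

    The same table, read in reverse, lets inference results be written back into a standardized output document, which is what gives the CDSS output its standalone clinical validity.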

  15. The Health Service Bus: an architecture and case study in achieving interoperability in healthcare.

    PubMed

    Ryan, Amanda; Eklund, Peter

    2010-01-01

    Interoperability in healthcare is a requirement for effective communication between entities, to ensure timely access to up-to-date patient information and medical knowledge, and thus facilitate consistent patient care. An interoperability framework called the Health Service Bus (HSB), based on the Enterprise Service Bus (ESB) middleware software architecture, is presented here as a solution to all three levels of interoperability as defined by the HL7 EHR Interoperability Work Group in their definitive white paper "Coming to Terms". A prototype HSB system was implemented based on the Mule open-source ESB and is outlined and discussed, followed by a clinically based example. PMID:20841819

  16. Achieving control and interoperability through unified model-based systems and software engineering

    NASA Technical Reports Server (NTRS)

    Rasmussen, Robert; Ingham, Michel; Dvorak, Daniel

    2005-01-01

    Control and interoperation of complex systems is one of the most difficult challenges facing NASA's Exploration Systems Mission Directorate. An integrated but diverse array of vehicles, habitats, and supporting facilities, evolving over the long course of the enterprise, must perform ever more complex tasks while moving steadily away from the sphere of ground support and intervention.

  17. Achieving Interoperability Through Base Registries for Governmental Services and Document Management

    NASA Astrophysics Data System (ADS)

    Charalabidis, Yannis; Lampathaki, Fenareti; Askounis, Dimitris

    As digital infrastructures increase their presence worldwide, following the efforts of governments to provide citizens and businesses with high-quality one-stop services, there is a growing need for the systematic management of those newly defined and constantly transforming processes and electronic documents. E-government Interoperability Frameworks usually cater to the technical standards of e-government systems interconnection, but do not address service composition and use by citizens, businesses, or other administrations.

  18. A cloud-based approach for interoperable electronic health records (EHRs).

    PubMed

    Bahga, Arshdeep; Madisetti, Vijay K

    2013-09-01

    We present a cloud-based approach for the design of interoperable electronic health record (EHR) systems. Cloud computing environments provide several benefits to all the stakeholders in the healthcare ecosystem (patients, providers, payers, etc.). Lack of data interoperability standards and solutions has been a major obstacle in the exchange of healthcare data between different stakeholders. We propose an EHR system, cloud health information systems technology architecture (CHISTAR), that achieves semantic interoperability through the use of a generic design methodology which uses a reference model that defines a general-purpose set of data structures and an archetype model that defines the clinical data attributes. CHISTAR application components are designed using the cloud component model approach, which comprises loosely coupled components that communicate asynchronously. In this paper, we describe the high-level design of CHISTAR and the approaches for semantic interoperability, data integration, and security. PMID:25055368
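    The two-level design the abstract describes, a generic reference model constrained by archetypes, can be sketched as follows. Archetype IDs, field names, and constraint sets here are invented for illustration; they are not CHISTAR's actual models.

```python
# Reference model: a generic entry holding untyped name/value data.
# Archetype model: per-entry-type constraints on which attributes are allowed.
from dataclasses import dataclass, field

@dataclass
class Entry:
    archetype_id: str
    data: dict = field(default_factory=dict)

ARCHETYPES = {  # archetype constraints, normally authored by clinicians
    "blood_pressure.v1": {"systolic", "diastolic", "position"},
}

def validate(entry):
    """An entry conforms if every attribute it carries is permitted by its archetype."""
    allowed = ARCHETYPES[entry.archetype_id]
    return set(entry.data) <= allowed

bp = Entry("blood_pressure.v1", {"systolic": 120, "diastolic": 80})
ok = validate(bp)  # conforms: both attributes are in the archetype
```

    The appeal of the split is that the generic storage layer never changes; clinical evolution happens entirely in the archetype definitions, which is what makes records from independently developed systems comparable.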

  19. Towards technical interoperability in telemedicine.

    SciTech Connect

    Craft, Richard Layne, II

    2004-05-01

    For telemedicine to realize the vision of anywhere, anytime access to care, the question of how to create a fully interoperable technical infrastructure must be addressed. After briefly discussing how 'technical interoperability' compares with other types of interoperability being addressed in the telemedicine community today, this paper describes reasons for pursuing technical interoperability, presents a proposed framework for realizing technical interoperability, identifies key issues that will need to be addressed if technical interoperability is to be achieved, and suggests a course of action that the telemedicine community might follow to accomplish this goal.

  20. Toward technical interoperability in telemedicine.

    PubMed

    Craft, Richard L

    2005-06-01

    For telemedicine to realize the vision of anywhere, anytime access to care, the question of how to create a fully interoperable technical infrastructure must be addressed. After briefly discussing how "technical interoperability" compares with other types of interoperability being addressed in the telemedicine community today, this paper describes reasons for pursuing technical interoperability, presents a proposed framework for realizing technical interoperability, identifies key issues that will need to be addressed if technical interoperability is to be achieved, and suggests a course of action that the telemedicine community might follow to accomplish this goal. PMID:16035933

  1. Modelling and approaching pragmatic interoperability of distributed geoscience data

    NASA Astrophysics Data System (ADS)

    Ma, Xiaogang

    2010-05-01

    Interoperability of geodata, which is essential for sharing information and discovering insights within a cyberinfrastructure, is receiving increasing attention. A key requirement of interoperability in the context of geodata sharing is that data provided by local sources can be accessed, decoded, understood and appropriately used by external users. Various researchers have identified four levels of data interoperability: system, syntax, schematics and semantics, which relate respectively to the platform, encoding, structure and meaning of geodata. Ontology-driven approaches addressing schematic and semantic interoperability issues of geodata have been studied extensively in the last decade. Ontologies come in different types (e.g., top-level, domain and application ontologies) and display forms (e.g., glossaries, thesauri, conceptual schemas and logical theories). Many geodata providers maintain their own local application ontologies in order to drive standardization in local databases. However, semantic heterogeneities often exist between these local ontologies, even when they are derived from equivalent disciplines. In contrast, common ontologies are being studied in different geoscience disciplines (e.g., NAMD, SWEET) as a standardization procedure to coordinate diverse local ontologies. Semantic mediation, e.g. mapping between local ontologies, or mapping local ontologies to common ontologies, has been studied as an effective way of achieving semantic interoperability between local ontologies, thus reconciling semantic heterogeneities in multi-source geodata. Nevertheless, confusion still exists in the research field of semantic interoperability. One problem is caused by eliminating elements of local pragmatic contexts in semantic mediation. Compared with the context-independent character of a common domain ontology, local application ontologies are closely related to elements (e.g., people, time, location
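    The semantic-mediation step described above, mapping terms from local application ontologies onto a common ontology, can be illustrated with a toy example. All terms and mappings here are invented; real mediation would operate over formal ontology alignments rather than flat dictionaries.

```python
# Semantic mediation: rewrite local vocabulary terms into a shared common
# ontology so multi-source records become comparable.
LOCAL_TO_COMMON = {  # per-source alignment tables (invented terms)
    "siteA": {"qtz": "common:Quartz", "fsp": "common:Feldspar"},
    "siteB": {"quartz_grain": "common:Quartz"},
}

def mediate(source, record):
    """Replace a record's local rock-type term with its common-ontology concept."""
    mapping = LOCAL_TO_COMMON[source]
    return {**record, "rock_type": mapping[record["rock_type"]]}

a = mediate("siteA", {"id": 1, "rock_type": "qtz"})
b = mediate("siteB", {"id": 2, "rock_type": "quartz_grain"})
assert a["rock_type"] == b["rock_type"]  # heterogeneous labels now reconciled
```

    The abstract's caution applies even to this toy: the mapping discards local pragmatic context (who recorded "qtz", where, and for what purpose), which is exactly the information loss the author argues semantic mediation alone cannot avoid.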

  2. Data interchange standards in healthcare IT--computable semantic interoperability: now possible but still difficult, do we really need a better mousetrap?

    PubMed

    Mead, Charles N

    2006-01-01

    The following article on HL7 Version 3 will give readers a glimpse into the significant differences between "what came before"--that is, HL7 Version 2.x--and "what today and the future will bring," which is the HL7 Version 3 family of data interchange specifications. The difference between V2.x and V3 is significant, and it exists because the various stakeholders in the HL7 development process believe that the increased depth, breadth, and, to some degree, complexity that characterize V3 are necessary to solve many of today's and tomorrow's increasingly wide, deep and complex healthcare information data interchange requirements. Like many healthcare or technology discussions, this discussion has its own vocabulary of somewhat obscure, but not difficult, terms. This article will define the minimum set that is necessary for readers to appreciate the relevance and capabilities of HL7 Version 3, including how it differs from HL7 Version 2. After that, there will be a brief overview of the primary motivations for HL7 Version 3 in the presence of the unequivocal success of Version 2. In this context, the article will give readers an overview of one of the prime constructs of Version 3, the Reference Information Model (RIM). There are "four pillars that are necessary but not sufficient to obtain computable semantic interoperability." These four pillars--a cross-domain information model; a robust data type specification; a methodology for separating domain-specific terms from, as well as binding them to, the common model; and a top-down interchange specification methodology and tools that use the first three to define Version 3 specifications--collectively comprise the "HL7 Version 3 Toolkit." Further, this article will present a list of questions and answers to help readers assess the scope and complexity of the problems facing healthcare IT today, and which will further enlighten readers on the "reality" of HL7 Version 3. The article will conclude with a "pseudo

  3. Semantic Registration and Discovery System of Subsystems and Services within an Interoperable Coordination Platform in Smart Cities

    PubMed Central

    Rubio, Gregorio; Martínez, José Fernán; Gómez, David; Li, Xin

    2016-01-01

    Smart subsystems like traffic, Smart Homes, the Smart Grid, outdoor lighting, etc. are built in many urban areas, each with a set of services that are offered to citizens. These subsystems are managed by self-contained embedded systems. However, coordination and cooperation between them are scarce. An integration of these systems which truly represents a “system of systems” could introduce more benefits, such as allowing the development of new applications and collective optimization. The integration should allow maximum reusability of available services provided by entities (e.g., sensors or Wireless Sensor Networks). Thus, it is of major importance to facilitate the discovery and registration of available services and subsystems in an integrated way. Therefore, an ontology-based and automatic system for subsystem and service registration and discovery is presented. Using this proposed system, heterogeneous subsystems and services could be registered and discovered in a dynamic manner with additional semantic annotations. In this way, users are able to build customized applications across different subsystems by using available services. The proposed system has been fully implemented and a case study is presented to show the usefulness of the proposed method. PMID:27347965

  4. Semantic Registration and Discovery System of Subsystems and Services within an Interoperable Coordination Platform in Smart Cities.

    PubMed

    Rubio, Gregorio; Martínez, José Fernán; Gómez, David; Li, Xin

    2016-01-01

    Smart subsystems like traffic, Smart Homes, the Smart Grid, outdoor lighting, etc. are built in many urban areas, each with a set of services that are offered to citizens. These subsystems are managed by self-contained embedded systems. However, coordination and cooperation between them are scarce. An integration of these systems which truly represents a "system of systems" could introduce more benefits, such as allowing the development of new applications and collective optimization. The integration should allow maximum reusability of the available services provided by entities (e.g., sensors or Wireless Sensor Networks). Thus, it is of major importance to facilitate the discovery and registration of available services and subsystems in an integrated way. Therefore, an ontology-based and automatic system for subsystem and service registration and discovery is presented. Using this proposed system, heterogeneous subsystems and services can be registered and discovered in a dynamic manner with additional semantic annotations. In this way, users are able to build customized applications across different subsystems by using available services. The proposed system has been fully implemented and a case study is presented to show the usefulness of the proposed method. PMID:27347965
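    The registration-and-discovery pattern in the two records above can be sketched as a minimal registry: services register with semantic annotations, and discovery matches a request against them. The annotation terms and service IDs are invented for the example; the actual system is ontology-based, with reasoning rather than plain set matching.

```python
# Minimal semantic registry: services carry annotation terms, and discovery
# returns the services whose annotations cover all required terms.
class SemanticRegistry:
    def __init__(self):
        self._services = []  # list of (service_id, annotation set)

    def register(self, service_id, annotations):
        self._services.append((service_id, set(annotations)))

    def discover(self, required):
        """Match by annotation coverage: all required terms must be present."""
        req = set(required)
        return [sid for sid, anns in self._services if req <= anns]

reg = SemanticRegistry()
reg.register("lighting-ctrl-01", {"domain:OutdoorLighting", "cap:Actuation"})
reg.register("traffic-sensor-07", {"domain:Traffic", "cap:Sensing"})

hits = reg.discover({"cap:Sensing"})  # finds the traffic sensor only
```

    With an ontology behind the annotations, a query for a broader concept (say, a superclass of `cap:Sensing`) would also match, which is what lets heterogeneous subsystems be discovered without agreeing on identical tags.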

  5. GENESIS SciFlo: Choreographing Interoperable Web Services on the Grid using a Semantically-Enabled Dataflow Execution Environment

    NASA Astrophysics Data System (ADS)

    Wilson, B. D.; Manipon, G.; Xing, Z.

    2007-12-01

    Access Protocol (OpenDAP) servers. SciFlo also publishes its own SOAP services for space/time query and subsetting of Earth Science datasets, and automated access to large datasets via lists of (FTP, HTTP, or DAP) URLs which point to on-line HDF or netCDF files. Typical distributed workflows obtain datasets by calling standard WMS/WCS servers or discovering and fetching data granules from ftp sites; invoke remote analysis operators available as SOAP services (interface described by a WSDL document); and merge results into binary containers (netCDF or HDF files) for further analysis using local executable operators. Naming conventions (HDFEOS and CF-1.0 for netCDF) are exploited to automatically understand and read on-line datasets. More interoperable conventions, and broader adoption of existing conventions, are vital if we are to "scale up" automated choreography of Web Services beyond toy applications. Recently, the ESIP Federation sponsored a collaborative activity in which several ESIP members developed some collaborative science scenarios for atmospheric and aerosol science, and then choreographed services from multiple groups into demonstration workflows using the SciFlo engine and a Business Process Execution Language (BPEL) workflow engine. We will discuss the lessons learned from this activity, the need for standardized interfaces (like WMS/WCS), the difficulty in agreeing on even simple XML formats and interfaces, the benefits of doing collaborative science analysis at the "touch of a button" once services are connected, and further collaborations that are being pursued.

  6. A Study of the Semantic Differential Based on Motivational Concepts as a Technique for Predicting Student Achievement.

    ERIC Educational Resources Information Center

    Sizemore, Oral Glen

    The purpose of this study was to develop a semantic differential scale based on achievement motivation concepts by which grade point averages could be predicted. A scale was constructed and administered to 944 freshmen at Northeastern State College in Fall 1967. Two approaches were used. One was to combine semantic differential scale scores…

  7. Interoperability and information discovery

    USGS Publications Warehouse

    Christian, E.

    2001-01-01

    In the context of information systems, there is interoperability when the distinctions between separate information systems are not a barrier to accomplishing a task that spans those systems. Interoperability so defined implies that there are commonalities among the systems involved and that one can exploit such commonalities to achieve interoperability. The challenge of a particular interoperability task is to identify relevant commonalities among the systems involved and to devise mechanisms that exploit those commonalities. The present paper focuses on the particular interoperability task of information discovery. The Global Information Locator Service (GILS) is described as a policy, standards, and technology framework for addressing interoperable information discovery on a global and long-term basis. While there are many mechanisms for people to discover and use all manner of data and information resources, GILS initiatives exploit certain key commonalities that seem to be sufficient to realize useful information discovery interoperability at a global, long-term scale. This paper describes ten of the specific commonalities that are key to GILS initiatives. It presents some of the practical implications for organizations in various roles: content provider, system engineer, intermediary, and searcher. The paper also provides examples of interoperable information discovery as deployed using GILS in four types of information communities: bibliographic, geographic, environmental, and government.

  8. Designing Information Interoperability

    SciTech Connect

    Gorman, Bryan L.; Shankar, Mallikarjun; Resseguie, David R.

    2009-01-01

    Examples of incompatible systems are offered with a discussion of the relationship between incompatibility and innovation. Engineering practices and the role of standards are reviewed as a means of resolving issues of incompatibility, with particular attention to the issue of innovation. Loosely-coupled systems are described as a means of achieving and sustaining both interoperability and innovation in heterogeneous environments. A virtual unifying layer, in terms of a standard, a best practice, and a methodology, is proposed as a modality for designing information interoperability for enterprise applications. The Uniform Resource Identifier (URI), microformats, and Joshua Porter's AOF Method are described and presented as solutions for designing interoperable information-sharing web sites. The Special Operations Force Information Access (SOFIA), a mock design, is presented as an example of information interoperability.

  9. Software interoperability for energy simulation

    SciTech Connect

    Hitchcock, Robert J.

    2002-07-31

    This paper provides an overview of software interoperability as it relates to the energy simulation of buildings. The paper begins with a discussion of the difficulties in using sophisticated analysis tools like energy simulation at various stages in the building life cycle, and the potential for interoperability to help overcome these difficulties. An overview of the Industry Foundation Classes (IFC), a common data model for supporting interoperability under continuing development by the International Alliance for Interoperability (IAI), is then given. The process of creating interoperable software is described next, followed by specific details for energy simulation tools. The paper closes with the current status of, and future plans for, the ongoing efforts to achieve software interoperability.

  10. Towards a semantic lexicon for clinical natural language processing.

    PubMed

    Liu, Hongfang; Wu, Stephen T; Li, Dingcheng; Jonnalagadda, Siddhartha; Sohn, Sunghwan; Wagholikar, Kavishwar; Haug, Peter J; Huff, Stanley M; Chute, Christopher G

    2012-01-01

    A semantic lexicon which associates words and phrases in text to concepts is critical for extracting and encoding clinical information in free text and therefore achieving semantic interoperability between structured and unstructured data in Electronic Health Records (EHRs). Directly using existing standard terminologies may have limited coverage with respect to concepts and their corresponding mentions in text. In this paper, we analyze how tokens and phrases are distributed in a large corpus and how well the UMLS captures their semantics. A corpus-driven semantic lexicon, MedLex, has been constructed, whose semantics are based on the UMLS, supplemented with variants mined and usage information gathered from clinical text. The detailed corpus analysis of tokens, chunks, and concept mentions shows the UMLS is an invaluable source for natural language processing. Increasing the semantic coverage of tokens provides a good foundation for capturing clinical information comprehensively. The study also yields some insights into developing practical NLP systems. PMID:23304329
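The core operation such a lexicon supports can be sketched as a greedy longest-match lookup that maps multi-word mentions (including variants) to concept identifiers. The toy lexicon below is illustrative, with UMLS-CUI-style identifiers; it is not MedLex data.

```python
# Toy sketch of concept extraction with a semantic lexicon:
# variant phrases map to the same concept, found by longest match.
LEXICON = {
    ("myocardial", "infarction"): "C0027051",
    ("heart", "attack"): "C0027051",   # variant mapped to the same concept
    ("aspirin",): "C0004057",
}
MAX_LEN = max(len(k) for k in LEXICON)

def extract_concepts(tokens):
    """Return (mention, concept) pairs using greedy longest match."""
    found, i = [], 0
    while i < len(tokens):
        for n in range(min(MAX_LEN, len(tokens) - i), 0, -1):
            key = tuple(t.lower() for t in tokens[i:i + n])
            if key in LEXICON:
                found.append((" ".join(tokens[i:i + n]), LEXICON[key]))
                i += n
                break
        else:
            i += 1
    return found

tokens = "Patient denies heart attack , takes aspirin daily".split()
print(extract_concepts(tokens))
# [('heart attack', 'C0027051'), ('aspirin', 'C0004057')]
```

Mapping both "heart attack" and "myocardial infarction" to one concept is what lets structured and free-text data interoperate at the semantic level.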

  11. Interoperation of heterogeneous CAD tools in Ptolemy II

    NASA Astrophysics Data System (ADS)

    Liu, Jie; Wu, Bicheng; Liu, Xiaojun; Lee, Edward A.

    1999-03-01

    Typical complex systems that involve microsensors and microactuators exhibit heterogeneity both at the implementation level and the problem level. For example, a system can be modeled using discrete events for digital circuits and SPICE-like analog descriptions for sensors. This heterogeneity exists not only across implementation domains, but also at different levels of abstraction. This naturally leads to a heterogeneous approach to system design that uses domain-specific models of computation (MoC) at various levels of abstraction to define a system, and leverages multiple CAD tools to do simulation, verification and synthesis. As the size and scope of the system increase, the integration becomes too difficult and unmanageable if different tools are coordinated using simple scripts. In addition, for MEMS devices and mixed-signal circuits, it is essential to integrate tools with different MoC to simulate the whole system. Ptolemy II, a heterogeneous system-level design tool, supports the interaction among different MoCs. This paper discusses heterogeneous CAD tool interoperability in the Ptolemy II framework. The key is to understand the semantic interface and classify the tools by their MoC and their level of abstraction. Interfaces are designed for each domain so that the external tools can be easily wrapped. Then the interoperability of the tools becomes the interoperability of the semantics. Ptolemy II can act as the standard interface among different tools to achieve the overall design modeling. A micro-accelerometer with digital feedback is studied as an example.
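The wrapping idea the abstract describes can be sketched as follows: each external tool is hidden behind a domain interface with the same firing contract, so a coordinator composes tools by their model of computation instead of by ad hoc scripts. The interfaces and numbers below are invented for illustration; this is not Ptolemy II code.

```python
# Hedged sketch: a shared semantic interface over heterogeneous tools.
class DomainInterface:
    def fire(self, t, inputs):
        raise NotImplementedError

class DiscreteEventWrapper(DomainInterface):
    """Wraps a digital-logic tool: output changes only at event times."""
    def fire(self, t, inputs):
        return {"q": 1 if inputs["clk"] else 0}

class ContinuousWrapper(DomainInterface):
    """Wraps an analog/sensor solver: output is a function of time."""
    def fire(self, t, inputs):
        return {"accel": inputs["gain"] * t}

def coordinate(t, de, ct):
    """The coordinator sees only the shared fire() contract."""
    out = ct.fire(t, {"gain": 2.0})
    out.update(de.fire(t, {"clk": out["accel"] > 1.0}))
    return out

print(coordinate(1.0, DiscreteEventWrapper(), ContinuousWrapper()))
# {'accel': 2.0, 'q': 1}
```

Once both tools speak the same interface, "interoperability of the tools becomes the interoperability of the semantics", as the abstract puts it.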

  12. Lemnos Interoperable Security Program

    SciTech Connect

    Stewart, John; Halbgewachs, Ron; Chavez, Adrian; Smith, Rhett; Teumim, David

    2012-01-31

    The manner in which control systems are being designed and operated in the energy sector is undergoing some of the most significant changes in history due to the evolution of technology and the increasing number of interconnections to other systems. With these changes, however, come two significant challenges that the energy sector must face: 1) Cyber security is more important than ever before, and 2) Cyber security is more complicated than ever before. A key requirement in helping utilities and vendors alike in meeting these challenges is interoperability. While interoperability has been present in much of the discussion relating to technology utilized within the energy sector, and especially the Smart Grid, it has been absent in the context of cyber security. The Lemnos project addresses these challenges by focusing on the interoperability of devices utilized within utility control systems which support critical cyber security functions. In theory, interoperability is possible with many of the cyber security solutions available to utilities today. The reality is that the effort required to achieve cyber security interoperability is often a barrier for utilities. For example, consider IPSec, a widely-used Internet Protocol to define Virtual Private Networks, or "tunnels", to communicate securely through untrusted public and private networks. The IPSec protocol suite has a significant number of configuration options and encryption parameters to choose from, which must be agreed upon and adopted by both parties establishing the tunnel. The exercise in getting software or devices from different vendors to interoperate is labor intensive and requires a significant amount of security expertise by the end user. Scale this effort to a significant number of devices operating over a large geographical area and the challenge becomes so overwhelming that it often leads utilities to pursue solutions from a single vendor.
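The negotiation problem the abstract describes can be made concrete with a small sketch: each peer proposes transform sets (cipher, hash, Diffie-Hellman group), and a tunnel only comes up if both ends share at least one exact combination. The parameter sets below are invented for illustration and are not Lemnos configurations.

```python
# Illustrative sketch: why IPSec interoperability is configuration-heavy.
from itertools import product

vendor_a = {
    "cipher": ["aes256", "aes128", "3des"],
    "hash": ["sha256", "sha1"],
    "dh_group": [14, 5],
}
vendor_b = {
    "cipher": ["aes128", "3des"],
    "hash": ["sha1", "md5"],
    "dh_group": [5, 2],
}

def common_proposals(a, b):
    """All (cipher, hash, dh_group) combinations both peers accept."""
    shared = {k: [v for v in a[k] if v in b[k]] for k in a}
    return list(product(shared["cipher"], shared["hash"], shared["dh_group"]))

print(common_proposals(vendor_a, vendor_b))
# [('aes128', 'sha1', 5), ('3des', 'sha1', 5)]
```

Finding and agreeing on this intersection manually, per device pair, across a large fleet is exactly the labor-intensive exercise that drives utilities toward single-vendor solutions.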
These single vendor solutions may inadvertently lock

  13. Web Feature Service Semantic Mediation

    NASA Astrophysics Data System (ADS)

    Hobona, G.; Bermudez, L. E.; Brackin, R.; Percivall, G. S.

    2012-12-01

    Scientists from different organizations and disciplines need to work together to find the solutions to complex problems. Multi-disciplinary science typically involves users with specialized tools and their own preferred view of the data including unique characteristics of the user's information model and symbology. Even though organizations use web services to expose data, there are still semantic inconsistencies that need to be solved. Recent activities within the OGC Interoperability Program (IP) have helped advance semantic mediation solutions when using OGC services to help solve complex problems. The OGC standards development process is influenced by the feedback of activities within the Interoperability Program, which conducts international interoperability initiatives such as Testbeds, Pilot Projects, Interoperability Experiments, and Interoperability Support Services. These activities are designed to encourage rapid development, testing, validation, demonstration and adoption of open, consensus based standards and best practices. Two recent Testbeds, the OGC Web Services Phase 8 and Phase 9, have advanced the use of semantic mediation approaches to increase semantic interoperability among geospatial communities. The Cross-Community Interoperability (CCI) thread within these two testbeds, advanced semantic mediation approaches for data discovery, access and use of heterogeneous data models and heterogeneous metadata models. This presentation will provide an overview of the interoperability program, the CCI Thread and will explain the methodology to mediate heterogeneous GML Application Profiles served via WFS, including discovery of services via a catalog standard interface and mediating symbology applicable to each application profile.
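The mediation of heterogeneous GML application profiles can be sketched as a mapping-driven rewrite: element names from a source community's feature schema are translated into the target community's vocabulary. The profiles and element names below are hypothetical, for illustration only.

```python
# Hedged sketch of schema mediation between two invented GML
# application profiles served via WFS.
import xml.etree.ElementTree as ET

# Hypothetical mapping from profile A element names to profile B names.
MAPPING = {"riverName": "watercourseName", "flowRate": "discharge"}

def mediate(feature_xml, mapping):
    """Rewrite element names of a feature into the target profile."""
    root = ET.fromstring(feature_xml)
    for elem in root.iter():
        elem.tag = mapping.get(elem.tag, elem.tag)
    return ET.tostring(root, encoding="unicode")

src = "<Feature><riverName>Danube</riverName><flowRate>6400</flowRate></Feature>"
print(mediate(src, MAPPING))
# <Feature><watercourseName>Danube</watercourseName><discharge>6400</discharge></Feature>
```

A real mediator must also handle namespaces, nesting differences, and unit conversions, which is why the testbeds treat mediation as its own thread rather than a simple renaming step.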

  14. A Semantic Web Service and Simulation Framework to Intelligent Distributed Manufacturing

    SciTech Connect

    Son, Young Jun; Kulvatunyou, Boonserm; Cho, Hyunbo; Feng, Shaw

    2005-11-01

    To cope with today's fluctuating markets, a virtual enterprise (VE) concept can be employed to achieve cooperation among independently operating enterprises. The success of a VE depends on reliable interoperation among trading partners. This paper proposes a framework based on a semantic web of manufacturing and simulation services to enable business and engineering collaborations between VE partners, particularly a design house and manufacturing suppliers.

  15. Improving Interoperability in ePrescribing

    PubMed Central

    Åstrand, Bengt; Petersson, Göran

    2012-01-01

    Background The increased application of eServices in health care, in general, and ePrescribing (electronic prescribing) in particular, have brought quality and interoperability to the forefront. The application of standards has been put forward as one important factor in improving interoperability. However, less focus has been placed on other factors, such as stakeholders’ involvement and the measurement of interoperability. An information system (IS) can be regarded to comprise an instrument for technology-mediated work communication. In this study, interoperability refers to the interoperation in the ePrescribing process, involving people, systems, procedures and organizations. We have focused on the quality of the ePrescription message as one component of the interoperation in the ePrescribing process. Objective The objective was to analyze how combined efforts in improving interoperability with the introduction of the new national ePrescription format (NEF) have impacted interoperability in the ePrescribing process in Sweden, with the focus on the quality of the ePrescription message. Methods Consecutive sampling of electronic prescriptions in Sweden before and after the introduction of NEF was undertaken in April 2008 (pre-NEF) and April 2009 (post-NEF). Interoperability problems were identified and classified based on message format specifications and prescription rules. Results The introduction of NEF improved the interoperability of ePrescriptions substantially. In the pre-NEF sample, a total of 98.6% of the prescriptions had errors. In the post-NEF sample, only 0.9% of the prescriptions had errors. The mean number of errors was lower for the erroneous prescriptions: 4.8 in pre-NEF compared to 1.0 in post-NEF. Conclusions We conclude that a systematic comprehensive work on interoperability, covering technical, semantic, professional, judicial and process aspects, involving the stakeholders, resulted in an improved interoperability of e
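The classification step in the Methods can be sketched as rule-based message validation: each prescription field is checked against its format specification and the violations are collected per message. The field names and rules below are invented for illustration and are not the actual NEF specification.

```python
# Minimal sketch of classifying ePrescription message errors
# against format specifications (hypothetical rules).
import re

RULES = {
    "patient_id": re.compile(r"^\d{12}$"),   # hypothetical 12-digit ID
    "drug_code": re.compile(r"^[0-9]{6}$"),  # hypothetical 6-digit code
    "dosage_text": re.compile(r".+"),        # must be non-empty
}

def classify_errors(prescription):
    """Return the list of field names violating the format rules."""
    return [field for field, rule in RULES.items()
            if not rule.match(prescription.get(field, ""))]

sample = {"patient_id": "191212121212", "drug_code": "12AB56", "dosage_text": ""}
print(classify_errors(sample))   # ['drug_code', 'dosage_text']
```

Running such checks over consecutive samples before and after a format change yields exactly the kind of error-rate comparison (98.6% vs. 0.9%) reported in the Results.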

  16. HeartDrive: A Broader Concept of Interoperability to Implement Care Processes for Heart Failure.

    PubMed

    Lettere, M; Guerri, D; La Manna, S; Groccia, M C; Lofaro, D; Conforti, D

    2016-01-01

    This paper originates from the HeartDrive project, a platform of services for a more effective, efficient and integrated management of heart failure and comorbidities. HeartDrive establishes a cooperative approach based on the concepts of continuity of care and extreme, patient-oriented customization of diagnostic, therapeutic and follow-up procedures. Definition and development of evidence based processes, migration from parceled and episode based healthcare provisioning to a workflow oriented model and increased awareness and responsibility of citizens towards their own health and wellness are key objectives of HeartDrive. In two scenarios for rehabilitation and home monitoring we show how the results are achieved by providing a solution that highlights a broader concept of cooperation that goes beyond technical interoperability towards semantic interoperability explicitly sharing process definitions, decision support strategies and information semantics. PMID:27225572

  17. CCR+: Metadata Based Extended Personal Health Record Data Model Interoperable with the ASTM CCR Standard

    PubMed Central

    Park, Yu Rang; Yoon, Young Jo; Jang, Tae Hun; Seo, Hwa Jeong

    2014-01-01

    Objectives Extension of the standard model while retaining compliance with it is a challenging issue because there is currently no method for semantically or syntactically verifying an extended data model. A metadata-based extended model, named CCR+, was designed and implemented to achieve interoperability between standard and extended models. Methods Furthermore, a multilayered validation method was devised to validate the standard and extended models. The American Society for Testing and Materials (ASTM) Continuity of Care Record (CCR) standard was selected to evaluate the CCR+ model; two CCR and one CCR+ XML files were evaluated. Results In total, 188 metadata were extracted from the ASTM CCR standard; these metadata are semantically interconnected and registered in the metadata registry. An extended-data-model-specific validation file was generated from these metadata. This file can be used in a smartphone application (Health Avatar CCR+) as a part of a multilayered validation. The new CCR+ model was successfully evaluated via a patient-centric exchange scenario involving multiple hospitals, with the results supporting both syntactic and semantic interoperability between the standard CCR and the extended CCR+ model. Conclusions A feasible method for delivering an extended model that complies with the standard model is presented herein. There is a great need to extend static standard models such as the ASTM CCR in various domains: the methods presented here represent an important reference for achieving interoperability between standard and extended models. PMID:24627817
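Metadata-driven validation of the kind described can be sketched as follows: registry entries declare each element's obligation and permitted values, and documents are checked against those declarations rather than against a hard-coded schema. The registry entries below are invented, not the 188 CCR metadata.

```python
# Illustrative sketch of metadata-registry-driven XML validation.
import xml.etree.ElementTree as ET

REGISTRY = {
    "Gender": {"required": True, "values": {"male", "female", "unknown"}},
    "BloodType": {"required": False, "values": {"A", "B", "AB", "O"}},
}

def validate(xml_text):
    """Check a document against the registry's obligations and domains."""
    doc = ET.fromstring(xml_text)
    errors = []
    for name, meta in REGISTRY.items():
        node = doc.find(name)
        if node is None:
            if meta["required"]:
                errors.append(f"missing required element: {name}")
        elif meta["values"] and node.text not in meta["values"]:
            errors.append(f"invalid value for {name}: {node.text}")
    return errors

print(validate("<Patient><Gender>male</Gender><BloodType>Z</BloodType></Patient>"))
# ['invalid value for BloodType: Z']
```

Because the rules live in a registry rather than in code, an extended model can add entries without invalidating documents that conform to the base standard, which is the compliance-preserving extension the paper targets.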

  18. SDI-Based Groundwater Information Interoperability

    NASA Astrophysics Data System (ADS)

    Brodaric, B.; Boisvert, E.

    2007-12-01

    Though groundwater data are important inputs to hydrologic decision-making, they are highly distributed and heterogeneous, and thus difficult to access in a coordinated manner. The Geological Survey of Canada (GSC) is developing an information system for coordinated groundwater data access, using the standards and technologies of Spatial Data Infrastructures (SDI). In mid-stage development, the system is designed to manage and disseminate data produced by GSC scientists, as well as potentially disseminate data produced by other groundwater agencies. The system involves a typical three-tiered, mediator-wrapper architecture that includes a data tier, a mediator tier, and an applications tier. At the data tier local data sources are wrapped by OGC web services (WFS, WMS), which deliver diversely structured data to the mediator tier. The mediator tier acts as: (1) a central registry for the distributed data and other services; (2) a translator of the local data to the standard data format, GroundWater Markup Language; and (3) a consistent set of OGC web services that enable users to access the distributed data as one source. The applications tier involves both GSC and third-party web applications, such as analysis tools or on-line atlases, that provide user interfaces to the system. Apart from the data format standards used to achieve schematic interoperability, the system also deploys some light-weight data content standards to move toward semantic interoperability. These content standards include the definition of common categories for datasets such as standard subject classifications and map layers. A demonstration of the working prototype will be available, as well as discussion of the architecture of the system and the impacts on interoperability. The intent of the development is to grow the system into a national enterprise with a broad range of contributors and users.
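The mediator-wrapper pattern in the architecture can be sketched in a few lines: each wrapper translates a local agency's record structure into one common representation, and the mediator dispatches by source so users see a single schema. The field names and the common schema below are invented stand-ins, not GroundWater Markup Language.

```python
# Hedged sketch of the mediator tier translating heterogeneous
# local records into a common (invented) groundwater schema.
def wrap_agency_a(record):
    # Agency A stores depth in feet under its own key names.
    return {"well_id": record["id"],
            "depth_m": round(record["depth_ft"] * 0.3048, 1)}

def wrap_agency_b(record):
    return {"well_id": record["wellCode"], "depth_m": record["depthMetres"]}

WRAPPERS = {"A": wrap_agency_a, "B": wrap_agency_b}

def mediate(source, record):
    """Return the record in the common schema, whatever its source."""
    return WRAPPERS[source](record)

print(mediate("A", {"id": "W-17", "depth_ft": 100.0}))
print(mediate("B", {"wellCode": "GSC-3", "depthMetres": 42.5}))
# {'well_id': 'W-17', 'depth_m': 30.5}
# {'well_id': 'GSC-3', 'depth_m': 42.5}
```

In the described system the wrappers are OGC web services (WFS/WMS) rather than functions, but the translation responsibility sits in the same place.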

  19. Turning Interoperability Operational with GST

    NASA Astrophysics Data System (ADS)

    Schaeben, Helmut; Gabriel, Paul; Gietzel, Jan; Le, Hai Ha

    2013-04-01

    GST - Geosciences in space and time is being developed and implemented as a hub to facilitate the exchange of spatially and temporally indexed multi-dimensional geoscience data and corresponding geomodels amongst partners. It originates from TUBAF's contribution to the EU project "ProMine" and its perspective extensions are TUBAF's contribution to the current EU project "GeoMol". As of today, it provides basic components of a geodata infrastructure as required to establish interoperability with respect to geosciences. Generally, interoperability means the facilitation of cross-border and cross-sector information exchange, taking into account legal, organisational, semantic and technical aspects, cf. Interoperability Solutions for European Public Administrations (ISA), cf. http://ec.europa.eu/isa/. Practical interoperability for partners of a joint geoscience project, say European Geological Surveys acting in a border region, means in particular provision of IT technology to exchange spatially and maybe additionally temporally indexed multi-dimensional geoscience data and corresponding models, i.e. the objects composing geomodels capturing the geometry, topology, and various geoscience contents. Geodata Infrastructure (GDI) and interoperability are objectives of several initiatives, e.g. INSPIRE, OneGeology-Europe, and most recently EGDI-SCOPE to name just the most prominent ones. Then there are quite a few markup languages (ML) related to geographical or geological information like GeoSciML, EarthResourceML, BoreholeML, ResqML for reservoir characterization, earth and reservoir models, and many others featuring geoscience information. Several Web Services are focused on geographical or geoscience information. The Open Geospatial Consortium (OGC) promotes specifications of a Web Feature Service (WFS), a Web Map Service (WMS), a Web Coverage Service (WCS), a Web 3D Service (W3DS), and many more.
It will be clarified how GST is related to these initiatives, especially

  20. Clinical data interoperability based on archetype transformation.

    PubMed

    Costa, Catalina Martínez; Menárguez-Tortosa, Marcos; Fernández-Breis, Jesualdo Tomás

    2011-10-01

    The semantic interoperability between health information systems is a major challenge to improve the quality of clinical practice and patient safety. In recent years many projects have faced this problem and provided solutions based on specific standards and technologies in order to satisfy the needs of a particular scenario. Most of such solutions cannot be easily adapted to new scenarios, thus more global solutions are needed. In this work, we have focused on the semantic interoperability of electronic healthcare records standards based on the dual model architecture and we have developed a solution that has been applied to ISO 13606 and openEHR. The technological infrastructure combines reference models, archetypes and ontologies, with the support of Model-driven Engineering techniques. For this purpose, the interoperability infrastructure developed in previous work by our group has been reused and extended to cover the requirements of data transformation. PMID:21645637
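Archetype-based transformation can be sketched as a path re-keying driven by the fact that both standards' archetypes constrain the same clinical concept. The paths below are invented stand-ins, not real ISO 13606 or openEHR archetype paths, and the real infrastructure uses Model-driven Engineering rather than a flat dictionary.

```python
# Minimal sketch: a shared archetype-path mapping drives data
# transformation between two dual-model EHR standards.
ISO13606_TO_OPENEHR = {
    "/ELEMENT[systolic]/value": "/items[at0004]/value/magnitude",
    "/ELEMENT[diastolic]/value": "/items[at0005]/value/magnitude",
}

def transform(iso_extract):
    """Re-key an ISO 13606 path->value extract into openEHR paths."""
    return {ISO13606_TO_OPENEHR[p]: v
            for p, v in iso_extract.items() if p in ISO13606_TO_OPENEHR}

iso_data = {"/ELEMENT[systolic]/value": 120, "/ELEMENT[diastolic]/value": 80}
print(transform(iso_data))
# {'/items[at0004]/value/magnitude': 120, '/items[at0005]/value/magnitude': 80}
```

The mapping itself is the reusable asset: once defined at the archetype level, it applies to every data instance conforming to those archetypes, which is what makes the approach more global than scenario-specific converters.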

  1. The GEOSS solution for enabling data interoperability and integrative research.

    PubMed

    Nativi, Stefano; Mazzetti, Paolo; Craglia, Max; Pirrone, Nicola

    2014-03-01

    Global sustainability research requires an integrative research effort underpinned by digital infrastructures (systems) able to harness data and heterogeneous information across disciplines. Digital data and information sharing across systems and applications is achieved by implementing interoperability: a property of a product or system to work with other products or systems, present or future. There are at least three main interoperability challenges a digital infrastructure must address: technological, semantic, and organizational. In recent years, important international programs and initiatives have focused on this ambitious objective. This manuscript presents and combines the studies and the experiences carried out by three relevant projects, focusing on the heavy metal domain: Global Mercury Observation System, Global Earth Observation System of Systems (GEOSS), and INSPIRE. This research work recognized a valuable interoperability service bus (i.e., a set of standard models, interfaces, and good practices) proposed to characterize the integrative research cyber-infrastructure of the heavy metal research community. In the paper, the GEOSS common infrastructure is discussed implementing a multidisciplinary and participatory research infrastructure, introducing a possible roadmap for the heavy metal pollution research community to join GEOSS as a new Group on Earth Observation community of practice and develop a research infrastructure for carrying out integrative research in its specific domain. PMID:24243262

  2. Semantic Research for Digital Libraries.

    ERIC Educational Resources Information Center

    Chen, Hsinchun

    1999-01-01

    Discusses the need for semantic research in digital libraries to help overcome interoperability problems. Highlights include federal initiatives; semantic analysis; knowledge representations; human-computer interactions and information visualization; and the University of Illinois DLI (Digital Libraries Initiative) project through partnership with…

  3. A methodology for the development of software agent based interoperable telemedicine systems: a tele-electrocardiography perspective.

    PubMed

    Ganguly, P; Ray, P

    2000-01-01

    Telemedicine involves the integration of information, human-machine, and healthcare technologies. Because different modalities of patient care require applications running on heterogeneous computing environments, software interoperability is a major issue in telemedicine. Software agent technology provides a range of promising techniques to solve this problem. This article discusses the development of a methodology for the design of interoperable telemedicine systems (illustrated with a tele-electrocardiography application). Software interoperability between different applications can be modeled at different levels of abstraction such as physical interoperability, data-type interoperability, specification-level interoperability, and semantic interoperability. Software agents address the issue of software interoperability at the semantic level. A popular object-oriented software development methodology - the Unified Modeling Language (UML) - has been used for this development. This research has demonstrated the feasibility of the development of agent-based interoperable telemedicine systems. More research is needed before widespread deployment of such systems can take place. PMID:10957742

  4. Challenges of Space Mission Interoperability

    NASA Technical Reports Server (NTRS)

    Martin, Warren L.; Hooke, Adrian J.

    2007-01-01

    This viewgraph presentation reviews some of the international challenges to space mission interoperability. Interoperability is the technical capability of two or more systems or components to exchange information and to use the information that has been exchanged. One of the challenges that is addressed is the problem of spectrum bandwidth, and interference. The key to interoperability is the standardization of space communications services and protocols. Various levels of international cross support are reviewed: harmony, cooperation cross support and confederation cross support. The various international bodies charged with implementing cross support are reviewed. The goal of the Interagency Operations Advisory Group (IOAG) is to achieve plug-and-play operations where all that is required is for each of the systems to use an agreed communications medium, after which the systems configure each other for the purpose of exchanging information and subsequently effect such exchange automatically.

  5. Combining Archetypes with Fast Health Interoperability Resources in Future-proof Health Information Systems.

    PubMed

    Bosca, Diego; Moner, David; Maldonado, Jose Alberto; Robles, Montserrat

    2015-01-01

    Messaging standards, and specifically HL7 v2, are heavily used for the communication and interoperability of Health Information Systems. HL7 FHIR was created as an evolution of the messaging standards to achieve semantic interoperability. FHIR is somehow similar to other approaches like the dual model methodology as both are based on the precise modeling of clinical information. In this paper, we demonstrate how we can apply the dual model methodology to standards like FHIR. We show the usefulness of this approach for data transformation between FHIR and other specifications such as HL7 CDA, EN ISO 13606, and openEHR. We also discuss the advantages and disadvantages of defining archetypes over FHIR, and the consequences and outcomes of this approach. Finally, we exemplify this approach by creating a testing data server that supports both FHIR resources and archetypes. PMID:25991126

  6. Controlled Vocabularies, Mini Ontologies and Interoperability (Invited)

    NASA Astrophysics Data System (ADS)

    King, T. A.; Walker, R. J.; Roberts, D.; Thieman, J.; Ritschel, B.; Cecconi, B.; Genot, V. N.

    2013-12-01

    Interoperability has been an elusive goal, but in recent years advances have been made using controlled vocabularies, mini-ontologies and a lot of collaboration. This has led to increased interoperability between disciplines in the U.S. and between international projects. We discuss the successful pattern followed by SPASE, IVOA and IPDA to achieve this new level of international interoperability. A key aspect of the pattern is open standards and open participation with interoperability achieved with shared services, public APIs, standard formats and open access to data. Many of these standards are expressed as controlled vocabularies and mini ontologies. To illustrate the pattern we look at SPASE related efforts and participation of North America's Heliophysics Data Environment and CDPP; Europe's Cluster Active Archive, IMPEx, EuroPlanet, ESPAS and HELIO; and Japan's magnetospheric missions. Each participating project has its own life cycle and successful standards development must always take this into account. A major challenge for sustained collaboration and interoperability is the limited lifespan of many of the participating projects. Innovative approaches and new tools and frameworks are often developed as competitively selected, limited term projects, but for sustainable interoperability successful approaches need to become part of a long term infrastructure. This is being encouraged and achieved in many domains and we are entering a golden age of interoperability.

  7. Rationale and design considerations for a semantic mediator in health information systems.

    PubMed

    Degoulet, P; Sauquet, D; Jaulent, M C; Zapletal, E; Lavril, M

    1998-11-01

    Rapid development of community health information networks raises the issue of semantic interoperability between distributed and heterogeneous systems. Indeed, operational health information systems originate from heterogeneous teams of independent developers and have to cooperate in order to exchange data and services. A good cooperation is based on a good understanding of the messages exchanged between the systems. The main issue of semantic interoperability is to ensure that the exchange is not only possible but also meaningful. The main objective of this paper is to analyze semantic interoperability from a software engineering point of view. It describes the principles for the design of a semantic mediator (SM) in the framework of a distributed object manager (DOM). The mediator is itself a component that should allow the exchange of messages independently of languages and platforms. The functional architecture of such a SM is detailed. These principles have been partly applied in the context of the HELIOS object-oriented software engineering environment. The resulting service components are presented with their current state of achievement. PMID:9865050

  8. Semantic Sensor Web

    NASA Astrophysics Data System (ADS)

    Sheth, A.; Henson, C.; Thirunarayan, K.

    2008-12-01

    Sensors are distributed across the globe leading to an avalanche of data about our environment. It is possible today to utilize networks of sensors to detect and identify a multitude of observations, from simple phenomena to complex events and situations. The lack of integration and communication between these networks, however, often isolates important data streams and intensifies the existing problem of too much data and not enough knowledge. With a view to addressing this problem, the Semantic Sensor Web (SSW) [1] proposes that sensor data be annotated with semantic metadata that will both increase interoperability and provide contextual information essential for situational knowledge. Kno.e.sis Center's approach to SSW is an evolutionary one. It adds semantic annotations to the existing standard sensor languages of the Sensor Web Enablement (SWE) defined by OGC. These annotations enhance primarily syntactic XML-based descriptions in OGC's SWE languages with microformats, and W3C's Semantic Web languages- RDF and OWL. In association with semantic annotation and semantic web capabilities including ontologies and rules, SSW supports interoperability, analysis and reasoning over heterogeneous multi-modal sensor data. In this presentation, we will also demonstrate a mashup with support for complex spatio-temporal-thematic queries [2] and semantic analysis that utilize semantic annotations, multiple ontologies and rules. It uses existing services (e.g., GoogleMap) and semantics enhanced SWE's Sensor Observation Service (SOS) over weather and road condition data from various sensors that are part of Ohio's transportation network. Our upcoming plans are to demonstrate end to end (heterogeneous sensor to application) semantics support and study scalability of SSW involving thousands of sensors to about a billion triples. 
Keywords: Semantic Sensor Web, Spatiotemporal thematic queries, Sensor Web Enablement, Sensor Observation Service [1] Amit Sheth, Cory Henson, Satya
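The annotation idea behind the SSW can be sketched without RDF tooling: an observation is described by subject-predicate-object statements over shared vocabularies, which makes it queryable by meaning rather than by XML syntax. The URIs, prefixes, and the plain-tuple triple store below are invented for illustration; real SSW annotations use RDF/OWL.

```python
# Illustrative sketch: semantic annotation of sensor observations
# as triples, queried by meaning instead of document structure.
TRIPLES = [
    ("obs/42", "rdf:type", "weather:Observation"),
    ("obs/42", "ssw:observedProperty", "weather:RoadTemperature"),
    ("obs/42", "ssw:hasLocation", "geo:Dayton_OH"),
    ("obs/43", "ssw:observedProperty", "weather:WindSpeed"),
]

def query(triples, predicate, obj):
    """All subjects related to `obj` through `predicate`."""
    return [s for s, p, o in triples if p == predicate and o == obj]

print(query(TRIPLES, "ssw:observedProperty", "weather:RoadTemperature"))
# ['obs/42']
```

Once heterogeneous sensor feeds are annotated against the same ontology terms, one query spans them all, which is the interoperability gain the abstract claims for semantic metadata.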

  9. Standards-based data interoperability in the climate sciences

    NASA Astrophysics Data System (ADS)

    Woolf, Andrew; Cramer, Ray; Gutierrez, Marta; Kleese van Dam, Kerstin; Kondapalli, Siva; Latham, Susan; Lawrence, Bryan; Lowry, Roy; O'Neill, Kevin

    2005-03-01

    Emerging developments in geographic information systems and distributed computing offer a roadmap towards an unprecedented spatial data infrastructure in the climate sciences. Key to this are the standards developments for digital geographic information being led by the International Organisation for Standardisation (ISO) technical committee on geographic information/geomatics (TC211) and the Open Geospatial Consortium (OGC). These, coupled with the evolution of standardised web services for applications on the internet by the World Wide Web Consortium (W3C), mean that opportunities for both new applications and increased interoperability exist. These are exemplified by the ability to construct ISO-compliant data models that expose legacy data sources through OGC web services. This paper concentrates on the applicability of these standards to climate data by introducing some examples and outlining the challenges ahead. An abstract data model is developed, based on ISO standards, and applied to a range of climate data both observational and modelled. An OGC Web Map Server interface is constructed for numerical weather prediction (NWP) data stored in legacy data files. A W3C web service for remotely accessing gridded climate data is illustrated. Challenges identified include the following: first, both the ISO and OGC specifications require extensions to support climate data. Secondly, OGC services need to fully comply with W3C web services, and support complex access control. Finally, to achieve real interoperability, broadly accepted community-based semantic data models are required across the range of climate data types. These challenges are being actively pursued, and broad data interoperability for the climate sciences appears within reach.

  10. The Semantic SPASE

    NASA Astrophysics Data System (ADS)

    Hughes, S.; Crichton, D.; Thieman, J.; Ramirez, P.; King, T.; Weiss, M.

    2005-12-01

    The Semantic SPASE (Space Physics Archive Search and Extract) prototype demonstrates the use of semantic web technologies to capture, document, and manage the SPASE data model, support facet- and text-based search, and provide flexible and intuitive user interfaces. The SPASE data model, under development since late 2003 by a consortium of space physics domain experts, is intended to serve as the basis for interoperability between independent data systems. To develop the Semantic SPASE prototype, the data model was first analyzed to determine the inherent object classes and their attributes. These were entered into Stanford Medical Informatics' Protege ontology tool and annotated using definitions from the SPASE documentation. Further analysis of the data model resulted in the addition of class relationships. Finally, attributes and relationships that support broad-scope interoperability were added from research associated with the Object-Oriented Data Technology task. To validate the ontology and produce a knowledge base, example data products were ingested. The capture of the data model as an ontology results in a more formal specification of the model. The Protege software is also a powerful management tool and supports plug-ins that produce several graphical notations as output. The stated purpose of the semantic web is to support machine understanding of web-based information. Protege provides an export capability to RDF/XML and RDFS/XML for this purpose. Several research efforts use RDF/XML knowledge bases to provide semantic search. MIT's Simile/Longwell project provides both facet- and text-based search using a suite of metadata browsers and the text-based search engine Lucene. Using the Protege-generated RDF knowledge base, a semantic search application was easily built and deployed to run as a web application. Configuration files specify the object attributes and values to be designated as facets (i.e. search) constraints.
Semantic web technologies provide
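
    The combination of facet constraints and free-text search described above can be illustrated with a toy in-memory knowledge base. The records, facet names, and text below are invented stand-ins for what a Longwell-style browser would extract from the RDF store; this is a sketch of the search combination only, not of the SPASE model itself.

```python
# Toy knowledge base: each resource carries facet values plus free text.
resources = [
    {"ResourceType": "NumericalData", "Observatory": "ACE",
     "text": "solar wind plasma moments"},
    {"ResourceType": "DisplayData", "Observatory": "ACE",
     "text": "magnetic field summary plots"},
    {"ResourceType": "NumericalData", "Observatory": "Wind",
     "text": "solar wind magnetic field"},
]

def search(items, facets=None, text=None):
    """Combine facet constraints (exact match) with free-text search."""
    hits = []
    for item in items:
        if facets and any(item.get(k) != v for k, v in facets.items()):
            continue  # fails a facet constraint
        if text and text.lower() not in item["text"].lower():
            continue  # fails the text query
        hits.append(item)
    return hits

numerical = search(resources, facets={"ResourceType": "NumericalData"})
wind_data = search(resources, facets={"ResourceType": "NumericalData"},
                   text="magnetic")
```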

  11. Supporting spatial data harmonization process with the use of ontologies and Semantic Web technologies

    NASA Astrophysics Data System (ADS)

    Strzelecki, M.; Iwaniak, A.; Łukowicz, J.; Kaczmarek, I.

    2013-10-01

    Nowadays, spatial information is used not only by professionals, but also by ordinary citizens, who use it in their daily activities. The Open Data initiative states that data should be freely and unreservedly available to all users; this applies to spatial data as well. As spatial data becomes widely available, it is essential to publish it in a form that guarantees the possibility of integrating it with other, heterogeneous data sources. Interoperability is the ability to combine spatial data sets from different sources in a consistent way, as well as to provide access to them. Providing syntactic interoperability based on well-known data formats is relatively simple, unlike providing semantic interoperability, due to the multiple possible interpretations of the data. One of the issues connected with the problem of achieving interoperability is data harmonization. It is the process of providing access to spatial data in a representation that allows combining it with other harmonized data in a coherent way, using a common set of data product specifications. Spatial data harmonization is performed by creating definitions of reclassification and transformation rules (a mapping schema) for a source application schema. Creating these rules is a demanding task that requires wide domain knowledge and a detailed look into the application schemas. The paper focuses on proposing methods for supporting the data harmonization process through automated or supervised creation of mapping schemas with the use of ontologies, ontology matching methods and Semantic Web technologies.
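
    A mapping schema of the kind described, once created, is mechanically applied to source records. The minimal sketch below shows the two rule types the abstract names, attribute renaming and value reclassification; all attribute names and code values are invented.

```python
# A mapping schema: attribute renames plus value reclassification rules.
schema = {
    "rename": {"rd_cls": "roadCategory", "nm": "name"},
    "reclassify": {"roadCategory": {"1": "motorway", "2": "primaryRoad"}},
}

def harmonize(record, schema):
    """Apply a mapping schema to one source record."""
    out = {}
    for attr, value in record.items():
        target = schema["rename"].get(attr, attr)      # schema transformation
        rules = schema["reclassify"].get(target, {})   # value reclassification
        out[target] = rules.get(value, value)
    return out

harmonized = harmonize({"rd_cls": "1", "nm": "A4"}, schema)
```

    The hard part, which the paper addresses with ontology matching, is deriving the `schema` dictionary itself, not applying it.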

  12. The Relationship Between Responses to Science Concepts on a Semantic Differential Instrument and Achievement in Freshman Physics and Chemistry.

    ERIC Educational Resources Information Center

    Rothman, Arthur Israel

    Students taking freshman physics and freshman chemistry at The State University of New York at Buffalo (SUNYAB) were administered a science-related semantic differential instrument. This same test was administered to physics and chemistry graduate students from SUNYAB and the University of Rochester. A scoring procedure was developed which…

  13. Towards an interoperable International Lattice Datagrid

    SciTech Connect

    G. Beckett; P. Coddington; N. Ishii; B. Joo; D. Melkumyan; R. Ostrowski; D. Pleiter; M. Sato; J. Simone; C. Watson; S. Zhang

    2007-11-01

    The International Lattice Datagrid (ILDG) is a federation of several regional grids. Since most of these grids have reached production level, an increasing number of lattice scientists are starting to benefit from this new research infrastructure. The ILDG Middleware Working Group has the task of specifying the ILDG middleware such that interoperability among the different grids is achieved. In this paper we present the architecture of the ILDG middleware and describe what has actually been achieved in recent years. Particular focus is given to interoperability and security issues. We conclude with a short overview of issues that we plan to address in the near future.

  14. Personalized-Detailed Clinical Model for Data Interoperability Among Clinical Standards

    PubMed Central

    Khan, Wajahat Ali; Hussain, Maqbool; Afzal, Muhammad; Amin, Muhammad Bilal; Saleem, Muhammad Aamir

    2013-01-01

    Abstract Objective: Data interoperability among health information exchange (HIE) systems is a major concern for healthcare practitioners to enable provisioning of telemedicine-related services. Heterogeneity exists in these systems not only at the data level but also among the different heterogeneous healthcare standards with which they comply. The relationship between healthcare organization data and different heterogeneous standards is necessary to achieve the goal of data level interoperability. We propose a personalized-detailed clinical model (P-DCM) approach for the generation of customized mappings that creates the necessary linkage between organization-conformed healthcare standards concepts and clinical model concepts to ensure data interoperability among HIE systems. Materials and Methods: We consider transformation of instances between the electronic health record (EHR) standards openEHR and HL7 CDA using P-DCM. P-DCM concepts associated with openEHR and HL7 CDA help in transformation of instances among these standards. We investigated two datasets: (1) data of 100 diabetic patients, including 50 each of type 1 and type 2, from a local hospital in Korea and (2) data of a single Alzheimer's disease patient. P-DCMs were created for both scenarios, which provided the basis for deriving instances for the HL7 CDA and openEHR standards. Results: For proof of concept, we present case studies of encounter information for type 2 diabetes mellitus patients and monitoring of daily routine activities of an Alzheimer's disease patient. These reflect P-DCM-based customized mapping generation with the openEHR and HL7 CDA standards. Customized mappings are generated based on the relationship of P-DCM concepts with CDA and openEHR concepts. Conclusions: The objective of this work is to achieve semantic data interoperability among heterogeneous standards. This would lead to effective utilization of resources and allow timely information exchange among healthcare systems.
PMID:23875730
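
    The P-DCM idea of a shared clinical concept bound to each standard's representation can be sketched as a two-way re-keying. All identifiers below are hypothetical placeholders, not real openEHR archetype paths or CDA codes; the sketch only shows how a common concept layer enables instance transformation.

```python
# One clinical concept per row, with its per-standard binding (invented names).
pdcm = {
    "BloodGlucose": {"openEHR": "OBSERVATION.blood_glucose/value",
                     "CDA":     "observation[glucose]/value"},
    "HbA1c":        {"openEHR": "OBSERVATION.hba1c/value",
                     "CDA":     "observation[hba1c]/value"},
}

def transform(record, source, target, model=pdcm):
    """Re-key an instance from one standard to another via shared concepts."""
    to_concept = {bind[source]: c for c, bind in model.items()}
    return {model[to_concept[key]][target]: val for key, val in record.items()}

cda = transform({"OBSERVATION.blood_glucose/value": 7.1}, "openEHR", "CDA")
```

    Because both directions go through the concept layer, the transformation is symmetric: mapping back to openEHR recovers the original keys.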

  15. Buildings Interoperability Landscape

    SciTech Connect

    Hardin, Dave; Stephan, Eric G.; Wang, Weimin; Corbin, Charles D.; Widergren, Steven E.

    2015-12-31

    Through its Building Technologies Office (BTO), the United States Department of Energy’s Office of Energy Efficiency and Renewable Energy (DOE-EERE) is sponsoring an effort to advance interoperability for the integration of intelligent buildings equipment and automation systems, understanding the importance of integration frameworks and product ecosystems to this cause. This is important to BTO’s mission to enhance energy efficiency and save energy for economic and environmental purposes. For ecosystems of connected-buildings products and services from various manufacturers to flourish, the information and communications technology (ICT) aspects of the equipment need to integrate and operate simply and reliably. Within the concepts of interoperability lie the specification, development, and certification of equipment with standards-based interfaces that connect and work. Beyond this, a healthy community of stakeholders that contribute to and use interoperability work products must be developed. On May 1, 2014, the DOE convened a technical meeting to take stock of the current state of interoperability of connected equipment and systems in buildings. Several insights from that meeting helped facilitate a draft description of the landscape of interoperability for connected buildings, which focuses mainly on small and medium commercial buildings. This document revises the February 2015 landscape document to address reviewer comments, incorporate important insights from the Buildings Interoperability Vision technical meeting, and capture thoughts from that meeting about the topics to be addressed in a buildings interoperability vision. In particular, greater attention is paid to the state of information modeling in buildings and the great potential for near-term benefits in this area from progress and community alignment.

  16. MENTOR: an enabler for interoperable intelligent systems

    NASA Astrophysics Data System (ADS)

    Sarraipa, João; Jardim-Goncalves, Ricardo; Steiger-Garcao, Adolfo

    2010-07-01

    A community with knowledge organisation based on ontologies will enable an increase in the computational intelligence of its information systems. However, due to the worldwide diversity of communities, a high number of knowledge representation elements, which are not semantically coincident, have appeared representing the same segment of reality, becoming a barrier to business communications. Even if a domain community uses the same kinds of technologies in its information systems, such as ontologies, this does not resolve its semantic differences. In order to solve this interoperability problem, a solution is to use a reference ontology as an intermediary in communications between the community enterprises and the outside, while allowing the enterprises to keep their own ontologies and semantics unchanged internally. This work proposes MENTOR, a methodology to support the development of a common reference ontology for a group of organisations sharing the same business domain. The methodology is based on the mediator ontology (MO) concept, which assists the semantic transformations between each enterprise's ontology and the reference one. The MO enables each organisation to keep its own terminology, glossary and ontological structures, while providing seamless communication and interaction with the others.
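
    The mediator-ontology idea reduces, at its core, to translating a local term into a shared reference concept and back into another organisation's local term. The sketch below illustrates this with invented enterprise vocabularies; it is a minimal model of the MO role, not of the MENTOR methodology itself.

```python
# Mediator ontology: each organisation's local terms mapped onto
# shared reference-ontology concepts (all terms invented).
mo = {
    "EnterpriseA": {"client": "Customer", "invoice": "BillingDocument"},
    "EnterpriseB": {"customer": "Customer", "bill": "BillingDocument"},
}

def translate(term, source, target):
    """Translate a local term via the reference concept it maps to."""
    concept = mo[source][term]              # local term -> reference concept
    for local, ref in mo[target].items():   # reference concept -> target term
        if ref == concept:
            return local
    raise KeyError("no %s term for concept %s" % (target, concept))

term = translate("invoice", "EnterpriseA", "EnterpriseB")
```

    Each enterprise keeps its own terminology; only the mediator table has to be maintained when the community grows.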

  17. Architecture for interoperable software in biology

    PubMed Central

    Baliga, Nitin S.

    2014-01-01

    Understanding biological complexity demands a combination of high-throughput data and interdisciplinary skills. One way to bring to bear the necessary combination of data types and expertise is by encapsulating domain knowledge in software and composing that software to create a customized data analysis environment. To this end, simple flexible strategies are needed for interconnecting heterogeneous software tools and enabling data exchange between them. Drawing on our own work and that of others, we present several strategies for interoperability and their consequences, in particular, a set of simple data structures—list, matrix, network, table and tuple—that have proven sufficient to achieve a high degree of interoperability. We provide a few guidelines for the development of future software that will function as part of an interoperable community of software tools for biological data analysis and visualization. PMID:23235920
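
    Two of the simple structures named above, a network and a table, are enough to let tools with different native representations exchange data. The sketch below shows a lossless conversion between them; the gene names are invented example data.

```python
# Network as adjacency dict, table as list of dicts: two of the paper's
# simple interoperability structures.
def network_to_table(network):
    """Flatten a network into an edge table for a table-oriented tool."""
    return [{"source": s, "target": t}
            for s, targets in sorted(network.items()) for t in targets]

def table_to_network(table):
    """Rebuild the network from the edge table."""
    network = {}
    for row in table:
        network.setdefault(row["source"], []).append(row["target"])
    return network

net = {"geneA": ["geneB", "geneC"], "geneB": ["geneC"]}
edges = network_to_table(net)
```

    A network-analysis tool and a spreadsheet-style tool can each work in their preferred representation, converting at the boundary.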

  18. Toward interoperable bioscience data

    PubMed Central

    Sansone, Susanna-Assunta; Rocca-Serra, Philippe; Field, Dawn; Maguire, Eamonn; Taylor, Chris; Hofmann, Oliver; Fang, Hong; Neumann, Steffen; Tong, Weida; Amaral-Zettler, Linda; Begley, Kimberly; Booth, Tim; Bougueleret, Lydie; Burns, Gully; Chapman, Brad; Clark, Tim; Coleman, Lee-Ann; Copeland, Jay; Das, Sudeshna; de Daruvar, Antoine; de Matos, Paula; Dix, Ian; Edmunds, Scott; Evelo, Chris T; Forster, Mark J; Gaudet, Pascale; Gilbert, Jack; Goble, Carole; Griffin, Julian L; Jacob, Daniel; Kleinjans, Jos; Harland, Lee; Haug, Kenneth; Hermjakob, Henning; Ho Sui, Shannan J; Laederach, Alain; Liang, Shaoguang; Marshall, Stephen; McGrath, Annette; Merrill, Emily; Reilly, Dorothy; Roux, Magali; Shamu, Caroline E; Shang, Catherine A; Steinbeck, Christoph; Trefethen, Anne; Williams-Jones, Bryn; Wolstencroft, Katherine; Xenarios, Ioannis; Hide, Winston

    2012-01-01

    To make full use of research data, the bioscience community needs to adopt technologies and reward mechanisms that support interoperability and promote the growth of an open ‘data commoning’ culture. Here we describe the prerequisites for data commoning and present an established and growing ecosystem of solutions using the shared ‘Investigation-Study-Assay’ framework to support that vision. PMID:22281772

  19. Empowering open systems through cross-platform interoperability

    NASA Astrophysics Data System (ADS)

    Lyke, James C.

    2014-06-01

    Most of the motivations for open systems lie in the expectation of interoperability, sometimes referred to as "plug-and-play". Nothing in the notion of "open-ness", however, guarantees this outcome, which makes the increased interest in open architecture more perplexing. In this paper, we explore certain themes of open architecture. We introduce the concept of "windows of interoperability", which can be used to align disparate portions of architecture. Such "windows of interoperability", which concentrate on a reduced set of protocol and interface features, might achieve many of the broader purposes assigned as benefits in open architecture. Since it is possible to engineer proprietary systems that interoperate effectively, this nuanced definition of interoperability may in fact be a more important concept to understand and nurture for effective systems engineering and maintenance.

  20. The semantic web in translational medicine: current applications and future directions

    PubMed Central

    Machado, Catia M.; Rebholz-Schuhmann, Dietrich; Freitas, Ana T.; Couto, Francisco M.

    2015-01-01

    Semantic web technologies offer an approach to data integration and sharing, even for resources developed independently or broadly distributed across the web. This approach is particularly suitable for scientific domains that profit from large amounts of data that reside in the public domain and that have to be exploited in combination. Translational medicine is such a domain, which in addition has to integrate private data from the clinical domain with proprietary data from the pharmaceutical domain. In this survey, we present the results of our analysis of translational medicine solutions that follow a semantic web approach. We assessed these solutions in terms of their target medical use case; the resources covered to achieve their objectives; and their use of existing semantic web resources for the purposes of data sharing, data interoperability and knowledge discovery. The semantic web technologies seem to fulfill their role in facilitating the integration and exploration of data from disparate sources, but it is also clear that simply using them is not enough. It is fundamental to reuse resources, to define mappings between resources, to share data and knowledge. All these aspects allow the instantiation of translational medicine at the semantic web-scale, thus resulting in a network of solutions that can share resources for a faster transfer of new scientific results into the clinical practice. The envisioned network of translational medicine solutions is on its way, but it still requires resolving the challenges of sharing protected data and of integrating semantic-driven technologies into the clinical practice. PMID:24197933

  1. The semantic web in translational medicine: current applications and future directions.

    PubMed

    Machado, Catia M; Rebholz-Schuhmann, Dietrich; Freitas, Ana T; Couto, Francisco M

    2015-01-01

    Semantic web technologies offer an approach to data integration and sharing, even for resources developed independently or broadly distributed across the web. This approach is particularly suitable for scientific domains that profit from large amounts of data that reside in the public domain and that have to be exploited in combination. Translational medicine is such a domain, which in addition has to integrate private data from the clinical domain with proprietary data from the pharmaceutical domain. In this survey, we present the results of our analysis of translational medicine solutions that follow a semantic web approach. We assessed these solutions in terms of their target medical use case; the resources covered to achieve their objectives; and their use of existing semantic web resources for the purposes of data sharing, data interoperability and knowledge discovery. The semantic web technologies seem to fulfill their role in facilitating the integration and exploration of data from disparate sources, but it is also clear that simply using them is not enough. It is fundamental to reuse resources, to define mappings between resources, to share data and knowledge. All these aspects allow the instantiation of translational medicine at the semantic web-scale, thus resulting in a network of solutions that can share resources for a faster transfer of new scientific results into the clinical practice. The envisioned network of translational medicine solutions is on its way, but it still requires resolving the challenges of sharing protected data and of integrating semantic-driven technologies into the clinical practice. PMID:24197933

  2. Model and Interoperability using Meta Data Annotations

    NASA Astrophysics Data System (ADS)

    David, O.

    2011-12-01

    Software frameworks and architectures need metadata to efficiently support model integration. Modelers have to know the context of a model, often stepping into modeling semantics and auxiliary information usually not provided in a concise structure and universal format consumable by a range of (modeling) tools. XML often seems the obvious solution for capturing metadata, but its wide adoption to facilitate model interoperability is limited by XML schema fragmentation, complexity, and verbosity outside of a data-automation process. Ontologies seem to overcome those shortcomings; however, the practical significance of their use remains to be demonstrated. OMS version 3 took a different approach to metadata representation. The fundamental building block of a modular model in OMS is a software component representing a single physical process, calibration method, or data access approach. Here, programming language features known as annotations or attributes were adopted. Within other (non-modeling) frameworks it has been observed that annotations lead to cleaner and leaner application code. Framework-supported model integration, traditionally accomplished using Application Programming Interface (API) calls, is now achieved using descriptive code annotations. Fully annotated components for various hydrological and Ag-system models now provide information directly for (i) model assembly and building, (ii) data flow analysis for implicit multi-threading or visualization, (iii) automated and comprehensive model documentation of component dependencies, physical data properties, (iv) automated model and component testing, calibration, and optimization, and (v) automated audit-traceability to account for all model resources leading to a particular simulation result. Such a non-invasive methodology leads to models and modeling components with only minimal dependencies on the modeling framework but a strong reference to its originating code. Since models and
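
    In Python, the annotation style described above can be approximated with decorators that attach metadata to a component class, which a framework can then introspect instead of requiring API calls. The decorator names, units, and toy component below are invented for illustration and are not the OMS API.

```python
# Decorator-based annotations: metadata attached directly to a component.
def description(text):
    def wrap(cls):
        cls._description = text
        return cls
    return wrap

def units(**fields):
    def wrap(cls):
        cls._units = dict(fields)  # field name -> physical unit
        return cls
    return wrap

@description("Degree-day snowmelt component")
@units(temperature="degC", melt="mm/day")
class Snowmelt:
    def execute(self, temperature):
        # Toy degree-day melt: 3 mm/day per degree above freezing.
        return max(0.0, temperature) * 3.0

# A framework can discover the metadata by introspection:
meta = (Snowmelt._description, Snowmelt._units["melt"])
```

    The component itself has no dependency on a framework; only the annotations carry the integration metadata, which is the non-invasive property the abstract emphasises.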

  3. Maturity model for enterprise interoperability

    NASA Astrophysics Data System (ADS)

    Guédria, Wided; Naudet, Yannick; Chen, David

    2015-01-01

    Historically, progress occurs when entities communicate, share information and together create something that no one individually could do alone. Moving beyond people to machines and systems, interoperability is becoming a key factor of success in all domains. In particular, interoperability has become a challenge for enterprises, to exploit market opportunities, to meet their own objectives of cooperation or simply to survive in a growing competitive world where the networked enterprise is becoming a standard. Within this context, many research works have been conducted over the past few years and enterprise interoperability has become an important area of research, ensuring the competitiveness and growth of European enterprises. Among others, enterprises have to control their interoperability strategy and enhance their ability to interoperate. This is the purpose of the interoperability assessment. Assessing interoperability maturity allows a company to know its strengths and weaknesses in terms of interoperability with its current and potential partners, and to prioritise actions for improvement. The objective of this paper is to define a maturity model for enterprise interoperability that takes into account existing maturity models while extending the coverage of the interoperability domain. The assessment methodology is also presented. Both are demonstrated with a real case study.

  4. National Flood Interoperability Experiment

    NASA Astrophysics Data System (ADS)

    Maidment, D. R.

    2014-12-01

    The National Flood Interoperability Experiment is led by the academic community in collaboration with the National Weather Service through the new National Water Center recently opened on the Tuscaloosa campus of the University of Alabama. The experiment will also involve the partners in IWRSS (Integrated Water Resources Science and Services), which include the USGS, the Corps of Engineers and FEMA. The experiment will address the following questions: (1) How can near-real-time hydrologic forecasting at high spatial resolution, covering the nation, be carried out using the NHDPlus or next generation geofabric (e.g. hillslope, watershed scales)? (2) How can this lead to improved emergency response and community resilience? (3) How can an improved interoperability framework support the first two goals and lead to sustained innovation in the research to operations process? The experiment will run from September 2014 through August 2015, in two phases. The mobilization phase from September 2014 until May 2015 will assemble the components of the interoperability framework. A Summer Institute to integrate the components will be held from June to August 2015 at the National Water Center involving faculty and students from the University of Alabama and other institutions coordinated by CUAHSI. It is intended that the insight that arises from this experiment will help lay the foundation for a new national scale, high spatial resolution, near-real-time hydrologic simulation system for the United States.

  5. Principles of data integration and interoperability in the GEO Biodiversity Observation Network

    NASA Astrophysics Data System (ADS)

    Saarenmaa, Hannu; Ó Tuama, Éamonn

    2010-05-01

    The goal of the Global Earth Observation System of Systems (GEOSS) is to link existing information systems into a global and flexible network to address nine areas of critical importance to society. One of these "societal benefit areas" is biodiversity and it will be supported by a GEOSS sub-system known as the GEO Biodiversity Observation Network (GEO BON). In planning the GEO BON, it was soon recognised that there are already a multitude of existing networks and initiatives in place worldwide. What has been lacking is a coordinated framework that allows for information sharing and exchange between the networks. Traversing across the various scales of biodiversity, in particular from the individual and species levels to the ecosystems level has long been a challenge. Furthermore, some of the major regions of the world have already taken steps to coordinate their efforts, but links between the regions have not been a priority until now. Linking biodiversity data to that of the other GEO societal benefit areas, in particular ecosystems, climate, and agriculture to produce useful information for the UN Conventions and other policy-making bodies is another need that calls for integration of information. Integration and interoperability are therefore a major theme of GEO BON, and a "system of systems" is very much needed. There are several approaches to integration that need to be considered. Data integration requires harmonising concepts, agreeing on vocabularies, and building ontologies. Semantic mediation of data using these building blocks is still not easy to achieve. Agreements on, or mappings between, the metadata standards that will be used across the networks is a major requirement that will need to be addressed early on. With interoperable metadata, service integration will be possible through registry of registries systems such as GBIF's forthcoming GBDRS and the GEO Clearinghouse. Chaining various services that build intermediate products using workflow

  6. Leveraging the Semantic Web for Adaptive Education

    ERIC Educational Resources Information Center

    Kravcik, Milos; Gasevic, Dragan

    2007-01-01

    In the area of technology-enhanced learning reusability and interoperability issues essentially influence the productivity and efficiency of learning and authoring solutions. There are two basic approaches how to overcome these problems--one attempts to do it via standards and the other by means of the Semantic Web. In practice, these approaches…

  7. Best Practices for Preparing Interoperable Geospatial Data

    NASA Astrophysics Data System (ADS)

    Wei, Y.; Santhana Vannan, S.; Cook, R. B.; Wilson, B. E.; Beaty, T. W.

    2010-12-01

    mechanisms to advertise, visualize, and distribute the standardized geospatial data. In this presentation, we summarize the experiences learned and the best practices for geospatial data standardization. The presentation will describe how diverse and historical data archived in the ORNL DAAC were converted into standard and non-proprietary formats; what tools were used to make the conversion; how the spatial and temporal information are properly captured in a consistent manner; how to name a data file or a variable to make it both human-friendly and semantically interoperable; how the NetCDF file format and CF convention can promote data usage in the ecosystem modeling user community; how those standardized geospatial data can be fed into OGC Web Services to support on-demand data visualization and access; and how the metadata should be collected and organized so that they can be discovered through standard catalog services.

  8. Interoperability of heterogeneous distributed systems

    NASA Astrophysics Data System (ADS)

    Zaschke, C.; Essendorfer, B.; Kerth, C.

    2016-05-01

    To achieve knowledge superiority in today's operations, interoperability is key. Budget restrictions, as well as the complexity and multiplicity of threats, combined with the fact that not single nations but whole areas are subject to attacks, force nations to collaborate and share information as appropriate. Multiple data and information sources produce different kinds of data, real time and non-real time, in different formats that are disseminated to the respective command and control level for further distribution. The data is usually highly sensitive and restricted in terms of sharing. The question is how to make this data available to the right people at the right time with the right granularity. The Coalition Shared Data concept aims to provide a solution to these questions. It has been developed within several multinational projects and evolved over time. A continuous improvement process was established and resulted in the adaptation of the architecture as well as the technical solution and the processes it supports. The concept began with the idea of making use of existing standards, sharing data through standardized interfaces and formats, and enabling metadata-based queries; over time it merged with a more sophisticated service-based approach. The paper addresses concepts for information sharing to facilitate interoperability between heterogeneous distributed systems. It introduces the methods that were used and the challenges that had to be overcome. Furthermore, the paper gives a perspective on how the concept could be used in the future and what measures have to be taken to successfully bring it into operations.

  9. A unified structural/terminological interoperability framework based on LexEVS: application to TRANSFoRm

    PubMed Central

    Ethier, Jean-François; Dameron, Olivier; Curcin, Vasa; McGilchrist, Mark M; Verheij, Robert A; Arvanitis, Theodoros N; Taweel, Adel; Delaney, Brendan C; Burgun, Anita

    2013-01-01

    Objective Biomedical research increasingly relies on the integration of information from multiple heterogeneous data sources. Despite the fact that structural and terminological aspects of interoperability are interdependent and rely on a common set of requirements, current efforts typically address them in isolation. We propose a unified ontology-based knowledge framework to facilitate interoperability between heterogeneous sources, and investigate if using the LexEVS terminology server is a viable implementation method. Materials and methods We developed a framework based on an ontology, the general information model (GIM), to unify structural models and terminologies, together with relevant mapping sets. This allowed a uniform access to these resources within LexEVS to facilitate interoperability by various components and data sources from implementing architectures. Results Our unified framework has been tested in the context of the EU Framework Program 7 TRANSFoRm project, where it was used to achieve data integration in a retrospective diabetes cohort study. The GIM was successfully instantiated in TRANSFoRm as the clinical data integration model, and necessary mappings were created to support effective information retrieval for software tools in the project. Conclusions We present a novel, unifying approach to address interoperability challenges in heterogeneous data sources, by representing structural and semantic models in one framework. Systems using this architecture can rely solely on the GIM that abstracts over both the structure and coding. Information models, terminologies and mappings are all stored in LexEVS and can be accessed in a uniform manner (implementing the HL7 CTS2 service functional model). The system is flexible and should reduce the effort needed from data sources personnel for implementing and managing the integration. PMID:23571850
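
    The terminological half of such a framework boils down to uniform code translation through centrally stored mapping sets. The sketch below is a stand-in for querying a terminology server; the "LOCAL" codes are invented, and the lookup interface is illustrative rather than the LexEVS/CTS2 API.

```python
# Central mapping sets between terminologies, queried via one uniform lookup.
mapping_sets = {
    ("LOCAL", "ICD10"): {"DM2": "E11", "HTN": "I10"},
    ("ICD10", "LOCAL"): {"E11": "DM2", "I10": "HTN"},
}

def translate_code(code, source, target):
    """Translate a code between terminologies via the stored mapping set."""
    try:
        return mapping_sets[(source, target)][code]
    except KeyError:
        return None  # no mapping available for this code or pair

icd = translate_code("DM2", "LOCAL", "ICD10")
```

    Keeping the mappings in one managed store, rather than hard-coded in each consumer, is what lets data sources be integrated without per-source translation logic.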

  10. A semantically rich and standardised approach enhancing discovery of sensor data and metadata

    NASA Astrophysics Data System (ADS)

    Kokkinaki, Alexandra; Buck, Justin; Darroch, Louise

    2016-04-01

    The marine environment plays an essential role in the earth's climate. To enhance the ability to monitor the health of this important system, innovative sensors are being produced and combined with state of the art sensor technology. As the number of sensors deployed is continually increasing, it is a challenge for data users to find the data that meet their specific needs. Furthermore, users need to integrate diverse ocean datasets originating from the same or even different systems. Standards provide a solution to the above mentioned challenges. The Open Geospatial Consortium (OGC) has created Sensor Web Enablement (SWE) standards that enable different sensor networks to establish syntactic interoperability. When combined with widely accepted controlled vocabularies, they become semantically rich and semantic interoperability is achievable. In addition, Linked Data is the recommended best practice for exposing, sharing and connecting information on the Semantic Web using Uniform Resource Identifiers (URIs), the Resource Description Framework (RDF) and the RDF Query Language (SPARQL). As part of the EU-funded SenseOCEAN project, the British Oceanographic Data Centre (BODC) is working on the standardisation of sensor metadata enabling 'plug and play' sensor integration. Our approach combines standards, controlled vocabularies and persistent URIs to publish sensor descriptions, their data and associated metadata as 5 star Linked Data and to the OGC SWE (SensorML, Observations & Measurements) standards. Thus sensors become readily discoverable, accessible and usable via the web. Content and context based searching is also enabled since sensor descriptions are understood by machines. Additionally, sensor data can be combined with other sensor or Linked Data datasets to form knowledge. This presentation will describe the work done in BODC to achieve syntactic and semantic interoperability in the sensor domain. It will illustrate the reuse and extension of the Semantic Sensor
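
    A Linked Data sensor description of this kind is, at its simplest, a handful of RDF triples with persistent URIs. The sketch below serialises a few by hand in N-Triples form; the sensor and vocabulary URIs are invented, while `sosa:Sensor` and `sosa:observes` are terms from the W3C SSN/SOSA ontology.

```python
# Minimal hand-rolled N-Triples serialisation of a sensor description.
RDF_TYPE = "http://www.w3.org/1999/02/22-rdf-syntax-ns#type"
SOSA_SENSOR = "http://www.w3.org/ns/sosa/Sensor"
SOSA_OBSERVES = "http://www.w3.org/ns/sosa/observes"
RDFS_LABEL = "http://www.w3.org/2000/01/rdf-schema#label"

def triple(s, p, o, literal=False):
    """Render one N-Triples statement; objects are URIs unless literal."""
    obj = '"%s"' % o if literal else "<%s>" % o
    return "<%s> <%s> %s ." % (s, p, obj)

sensor = "https://example.org/sensor/ctd-0042"
doc = "\n".join([
    triple(sensor, RDF_TYPE, SOSA_SENSOR),
    triple(sensor, SOSA_OBSERVES,
           "https://example.org/vocab/sea_water_temperature"),
    triple(sensor, RDFS_LABEL, "CTD #42", literal=True),
])
```

    Because every term is a resolvable URI, such descriptions can be crawled, merged with other Linked Data, and queried with SPARQL; in practice an RDF library would handle escaping and serialisation.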

  11. A study on heterogeneous distributed spatial information platform based on semantic Web services

    NASA Astrophysics Data System (ADS)

    Peng, Shuang-yun; Yang, Kun; Xu, Quan-li; Huang, Bang-mei

    2008-10-01

    With the development of Semantic Web technology, ontology-based spatial information services are an effective way to share and interoperate heterogeneous information resources in a distributed network environment. This paper discusses spatial information sharing and interoperability in the Semantic Web Services architecture. Using ontologies to record spatial information in a shared knowledge system makes default and concealed semantic information explicit and formal, which is a prerequisite for spatial information sharing and interoperability. Semantic Web Services technology then parses the ontologies and builds intelligent services in the network environment, forming a network of services. In order to realize practical applications of spatial information sharing and interoperation in different branches of the CDC system, a prototype system for HIV/AIDS information sharing based on geo-ontology has also been developed using the methods described above.

  12. The EuroGEOSS Brokering Framework for Multidisciplinary Interoperability

    NASA Astrophysics Data System (ADS)

    Santoro, M.; Nativi, S.; Craglia, M.; Boldrini, E.; Vaccari, L.; Papeschi, F.; Bigagli, L.

    2011-12-01

    The Global Earth Observation System of Systems (GEOSS), envisioned by the group of eight most industrialized countries (G-8) in 2003, provides the indispensable framework to integrate Earth observation efforts at a global level. The European Commission also contributes to the implementation of the GEOSS through research projects funded from its Framework Programme for Research & Development. The EuroGEOSS (A European Approach to GEOSS) project was launched in May 2009 for a three-year period with the aim of supporting the interoperability and use of existing Earth observing systems and applications within the GEOSS and INSPIRE frameworks. EuroGEOSS developed a multidisciplinary interoperability infrastructure for the three strategic areas of Drought, Forestry and Biodiversity; this operating capacity is currently being extended to other scientific domains (e.g. Climate Change, Water, Ocean, Weather). Central to the multidisciplinary infrastructure is the "EuroGEOSS Brokering Framework", which is based on a brokered SOA (Service Oriented Architecture) approach. This approach extends the typical SOA archetype by introducing "expert" components: the Brokers. The Brokers provide the mediation and distribution functionalities needed to interconnect the distributed and heterogeneous resources characterizing a System of Systems (SoS) environment. Such a solution addresses significant shortcomings of present SOA implementations for global frameworks, such as interoperability across multiple protocols and data models. Currently, the EuroGEOSS multidisciplinary infrastructure is composed of the following brokering components: 1. The Discovery Broker: providing harmonized discovery functionalities by mediating and distributing user queries against tens of heterogeneous services. 2. The Semantic Discovery Augmentation Component: enhancing the capabilities of the discovery broker with semantic query-expansion. 3. The Data Access Broker: enabling users to seamlessly
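    The mediation-and-distribution role of a discovery broker can be sketched as follows: one user query is fanned out to adapters, each of which translates its service's native record schema into one harmonized model. The two catalogue services, their schemas and records are invented for illustration and do not reflect the actual EuroGEOSS components.

```python
def csw_adapter(keyword):
    # Pretend CSW-style catalogue returning Dublin Core-flavoured records
    records = [{"dc:title": "Drought index map", "dc:identifier": "csw-1"}]
    return [{"title": r["dc:title"], "id": r["dc:identifier"], "source": "CSW"}
            for r in records if keyword.lower() in r["dc:title"].lower()]

def opensearch_adapter(keyword):
    # Pretend OpenSearch-style service with a different native schema
    entries = [{"name": "Forest cover 2010", "link": "os-9"},
               {"name": "Drought outlook", "link": "os-3"}]
    return [{"title": e["name"], "id": e["link"], "source": "OpenSearch"}
            for e in entries if keyword.lower() in e["name"].lower()]

def discovery_broker(keyword, adapters):
    """Distribute one query to every adapter and merge the harmonized results."""
    results = []
    for adapter in adapters:
        results.extend(adapter(keyword))
    return results

hits = discovery_broker("drought", [csw_adapter, opensearch_adapter])
```

The key design point is that neither the client nor the catalogues change: all protocol and schema mediation is concentrated in the broker's adapters.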

  13. Extending the GI Brokering Suite to Support New Interoperability Specifications

    NASA Astrophysics Data System (ADS)

    Boldrini, E.; Papeschi, F.; Santoro, M.; Nativi, S.

    2014-12-01

    The GI brokering suite provides the discovery, access, and semantic brokers (i.e. GI-cat, GI-axe, GI-sem) that empower a brokering framework for multi-disciplinary and multi-organizational interoperability. The GI suite has been successfully deployed in the framework of several programmes and initiatives, such as European Union funded projects, NSF BCube, and the intergovernmental coordinated effort Global Earth Observation System of Systems (GEOSS). Each GI suite broker facilitates interoperability for a particular functionality (i.e. discovery, access, semantic extension) among a set of brokered resources published by autonomous providers (e.g. data repositories, web services, semantic assets) and a set of heterogeneous consumers (e.g. client applications, portals, apps). A wide set of data models, encoding formats, and service protocols are already supported by the GI suite, such as the ones defined by international standardizing organizations like OGC and ISO (e.g. WxS, CSW, SWE, GML, netCDF) and by community specifications (e.g. THREDDS, OpenSearch, OPeNDAP, ESRI APIs). Using the GI suite, resources published by a particular community or organization through their specific technology (e.g. OPeNDAP/netCDF) can be transparently discovered, accessed, and used by different communities utilizing their preferred tools (e.g. a GIS visualizing WMS layers). Since information technology is a moving target, new standards and technologies continuously emerge and are adopted in the Earth Science context too. Therefore, the GI brokering suite was conceived to be flexible and to accommodate new interoperability protocols and data models. For example, the GI suite has recently added support for widely used specifications introduced to implement Linked Data, the Semantic Web, and specific community needs. Among others, these include: DCAT, an RDF vocabulary designed to facilitate interoperability between Web data catalogs; and CKAN, a data management system for data distribution, particularly used by

  14. Developing Interoperable Air Quality Community Portals

    NASA Astrophysics Data System (ADS)

    Falke, S. R.; Husar, R. B.; Yang, C. P.; Robinson, E. M.; Fialkowski, W. E.

    2009-04-01

    Web portals are intended to provide consolidated discovery, filtering and aggregation of content from multiple, distributed web sources targeted at particular user communities. This paper presents a standards-based information architectural approach to developing portals aimed at air quality community collaboration in data access and analysis. An important characteristic of the approach is to advance beyond the present stand-alone design of most portals to achieve interoperability with other portals and information sources. We show how using metadata standards, web services, RSS feeds and other Web 2.0 technologies, such as Yahoo! Pipes and del.icio.us, helps increase interoperability among portals. The approach is illustrated within the context of the GEOSS Architecture Implementation Pilot where an air quality community portal is being developed to provide a user interface between the portals and clearinghouse of the GEOSS Common Infrastructure and the air quality community catalog of metadata and data services.

  15. An Interoperability Testing Study: Automotive Inventory Visibility and Interoperability

    SciTech Connect

    Ivezic, Nenad; Kulvatunyou, Boonserm; Frechette, Simon; Jones, Albert

    2004-01-01

    This paper describes a collaborative effort between the NIST and Korean Business-to-Business Interoperability Test Beds to support a global, automotive-industry interoperability project. The purpose of the collaboration is to develop a methodology for validation of interoperable data-content standards implemented across inventory visibility tools within an internationally adopted testing framework. In this paper we describe methods (1) to help the vendors consistently implement prescribed message standards and (2) to assess compliance of those implementations with respect to the prescribed data content standards. We also illustrate these methods in support of an initial proof of concept for an international IV&I scenario.
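    The second of the two methods described, compliance assessment against a prescribed data-content standard, amounts to checking each exchanged message for the required elements and their types. A minimal sketch follows; the element names and types are invented and are not the actual IV&I message standard.

```python
# Hypothetical content standard: required elements and their expected types.
REQUIRED_ELEMENTS = {
    "partNumber": str,
    "quantityOnHand": int,
    "siteId": str,
}

def check_compliance(message):
    """Return a list of violations against the content standard; empty means compliant."""
    violations = []
    for name, expected_type in REQUIRED_ELEMENTS.items():
        if name not in message:
            violations.append(f"missing element: {name}")
        elif not isinstance(message[name], expected_type):
            violations.append(f"wrong type for {name}")
    return violations

ok = check_compliance({"partNumber": "A-100", "quantityOnHand": 25, "siteId": "KR-01"})
bad = check_compliance({"partNumber": "A-100", "quantityOnHand": "25"})
```

In a real testing framework the same idea is applied to XML messages against a schema plus content rules, and each vendor's implementation is run through the same battery of checks.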

  16. Neuro-Semantics and Semantics.

    ERIC Educational Resources Information Center

    Holmes, Stewart W.

    1987-01-01

    Draws distinctions between the terms semantics (dealing with such verbal parameters as dictionaries and "laws" of logic and rhetoric), general semantics (semantics, plus the complex, dynamic, organismal properties of human beings and their physical environment), and neurosemantics (names for relations-based input from the neurosensory system, and…

  17. ICD-11 (JLMMS) and SCT Inter-Operation.

    PubMed

    Mamou, Marzouk; Rector, Alan; Schulz, Stefan; Campbell, James; Solbrig, Harold; Rodrigues, Jean-Marie

    2016-01-01

    The goal of this work is to contribute to a smooth and semantically sound inter-operability between the ICD-11 (International Classification of Diseases-11th revision Joint Linearization for Mortality, Morbidity and Statistics) and SNOMED CT (SCT). To guarantee such inter-operation between a classification characterized by a single hierarchy of mutually exclusive and exhaustive classes, as is the JLMMS successor of ICD-10 on the one hand, and the multi-hierarchical, ontology-based clinical terminology SCT on the other hand, we use ontology axioms that logically express generalizable truths. This is expressed with the compositional grammar of SCT, together with queries on the axioms of SCT. We test the feasibility of the method on the circulatory chapter of ICD-11 JLMMS and present limitations and results. PMID:27139413
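    The core tension can be illustrated with a toy subsumption query: in a multi-hierarchy terminology like SCT a concept may have several parents, so two candidate sibling classes in a single-hierarchy linearization can share descendants and thus fail the mutual-exclusivity requirement. The concept names below are invented and are not actual SNOMED CT or ICD-11 content.

```python
# Toy SCT-like is-a graph: child -> parents (multiple parents allowed).
IS_A = {
    "HeartFailure": ["HeartDisease"],
    "MyocardialInfarction": ["HeartDisease", "IschaemicDisease"],
    "HeartDisease": ["CirculatoryDisease"],
    "IschaemicDisease": ["CirculatoryDisease"],
}

def ancestors(concept):
    """Transitive closure of the is-a relation above `concept`."""
    seen, stack = set(), list(IS_A.get(concept, []))
    while stack:
        c = stack.pop()
        if c not in seen:
            seen.add(c)
            stack.extend(IS_A.get(c, []))
    return seen

def descendants(concept):
    """All concepts subsumed by `concept`."""
    return {c for c in IS_A if concept in ancestors(c)}

# Two candidate sibling classes are mutually exclusive only if their
# descendant sets are disjoint; here they overlap, flagging a conflict.
overlap = descendants("HeartDisease") & descendants("IschaemicDisease")
```

A query like this, run over the real SCT axioms, is what lets one detect where the single-hierarchy assumption of a linearization would break.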

  18. 76 FR 2598 - Requests for Waiver of Various Petitioners To Allow the Establishment of 700 MHz Interoperable...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-01-14

    ... roaming capabilities and system identifiers, that are crucial to ensuring that the users of disparate... networks achieve a baseline of operability sufficient to support interoperable communications....

  19. Fusion is possible only with interoperability agreements; the GEOSS experience

    NASA Astrophysics Data System (ADS)

    Percivall, G.

    2008-12-01

    Data fusion is defined for this session as the merging of disparate data sources for multidisciplinary study. Implicit in this definition is that the data consumer may not be intimately familiar with the data sources. In order to achieve fusion of the data, there must be generalized concepts that apply to both the data sources and the consumer, and those concepts must be implemented in our information systems. The successes of GEOSS depend on data and information providers accepting and implementing a set of interoperability arrangements, including technical specifications for collecting, processing, storing, and disseminating shared data, metadata, and products. GEOSS interoperability is based on non-proprietary standards, with preference to formal international standards. GEOSS requires a scientific basis for the collection, processing and interpretation of the data; use of standards is a hallmark of a sound scientific basis. In order to communicate effectively to achieve data fusion, interoperability arrangements must be based upon sound scientific principles that have been implemented in efficient and effective tools. Establishing such interoperability arrangements depends upon social processes and technology. Through the use of interoperability arrangements based upon standards, GEOSS achieves the data fusion needed to answer humanity's critical questions. Decision making in support of societal benefit areas depends upon data fusion in multidisciplinary settings.

  20. Interoperability of clinical decision-support systems and electronic health records using archetypes: a case study in clinical trial eligibility.

    PubMed

    Marcos, Mar; Maldonado, Jose A; Martínez-Salvador, Begoña; Boscá, Diego; Robles, Montserrat

    2013-08-01

    patient recruitment in the framework of a clinical trial for colorectal cancer screening. The utilisation of archetypes not only has proved satisfactory to achieve interoperability between CDSSs and EHRs but also offers various advantages, in particular from a data model perspective. First, the VHR/data models we work with are of a high level of abstraction and can incorporate semantic descriptions. Second, archetypes can potentially deal with different EHR architectures, due to their deliberate independence of the reference model. Third, the archetype instances we obtain are valid instances of the underlying reference model, which would enable e.g. feeding back the EHR with data derived by abstraction mechanisms. Lastly, the medical and technical validity of archetype models would be assured, since in principle clinicians should be the main actors in their development. PMID:23707417

  1. Making technology talk: how interoperability can improve care, drive efficiency, and reduce waste.

    PubMed

    Cantwell, Ed; McDermott, Kerry

    2016-05-01

    Health systems and providers that search out more interoperable technology can help move the industry toward greater productivity using a multistep approach. Assemble a team of champions to address the interoperability issue. Describe the desired state of interoperability for the organization. Assess the current state of interoperability in the organization. Identify the gaps between the current state and the desired state. Develop a road map for addressing the gaps. Achieve an immediate win by selecting a quickly attainable goal. Maintain focus and communicate successes. PMID:27382711

  2. Spinning Interoperable Applications for Teaching & Learning Using the Simple Query Interface

    ERIC Educational Resources Information Center

    van Assche, Frans; Duval, Erik; Massart, David; Olmedilla, Daniel; Simon, Bernd; Sobernig, Stefan; Ternier, Stefaan; Wild, Fridolin

    2006-01-01

    The Web puts a huge number of learning resources within reach of anyone with Internet access. However, many valuable resources are difficult to find due to the lack of interoperability among learning repositories. In order to achieve interoperability, implementers require a common query framework. This paper discusses a set of methods referred to…

  3. Modeling and formal representation of geospatial knowledge for the Geospatial Semantic Web

    NASA Astrophysics Data System (ADS)

    Huang, Hong; Gong, Jianya

    2008-12-01

    GML can only achieve geospatial interoperation at the syntactic level. However, in most cases it is necessary to resolve differences of spatial cognition in the first place, so ontology was introduced to describe geospatial information and services. But it is obviously difficult and inappropriate to have users find, match and compose services themselves, especially where complicated business logics are involved. Currently, with the gradual introduction of Semantic Web technology (e.g., OWL, SWRL), the focus of the interoperation of geospatial information has shifted from the syntactic level to the semantic and even automatic, intelligent level. In this way, the Geospatial Semantic Web (GSM) can be put forward as an augmentation to the Semantic Web that additionally includes geospatial abstractions as well as related reasoning, representation and query mechanisms. To advance the implementation of the GSM, we first attempt to construct the mechanism of modeling and formal representation of geospatial knowledge, which are also the two most foundational phases in knowledge engineering (KE). Our attitude in this paper is quite pragmatic: we argue that geospatial context is a formal model of the discriminating environmental characteristics of geospatial knowledge, and that the derivation, understanding and use of geospatial knowledge are located in geospatial context. Therefore, first, we put forward a primitive hierarchy of geospatial knowledge referencing first-order logic, formal ontologies, rules and GML. Second, a metamodel of geospatial context is proposed, and we use the modeling methods and representation languages of formal ontologies to process geospatial context. Third, we extend the Web Processing Service (WPS) to be compatible with local DLLs for geoprocessing and to possess inference capability based on OWL.

  4. A Research on E - learning Resources Construction Based on Semantic Web

    NASA Astrophysics Data System (ADS)

    Rui, Liu; Maode, Deng

    Traditional e-learning platforms have the flaw that resources are usually difficult to query or locate, and that cross-platform sharing and interoperability are hard to realize. In this paper, the Semantic Web and metadata standards are discussed, and an e-learning system framework based on the Semantic Web is put forward to try to overcome the flaws of traditional e-learning platforms.

  5. Approach for ontological modeling of database schema for the generation of semantic knowledge on the web

    NASA Astrophysics Data System (ADS)

    Rozeva, Anna

    2015-11-01

    Currently there is a large quantity of content on web pages that is generated from relational databases. Conceptual domain models provide for the integration of heterogeneous content on the semantic level. The use of an ontology as the conceptual model of relational data sources makes them available to web agents and services and provides for the employment of ontological techniques for data access, navigation and reasoning. Achieving interoperability between relational databases and ontologies enriches the web with semantic knowledge. Establishing a semantic database conceptual model based on an ontology facilitates the development of data integration systems that use the ontology as a unified global view. An approach for the generation of an ontologically based conceptual model is presented. The ontology representing the database schema is obtained by matching schema elements to ontology concepts. An algorithm for the matching process is designed. An infrastructure for mediation between database and ontology, bridging legacy data with formal semantic meaning, is presented. The knowledge modeling approach is implemented on a sample database.
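    The matching step, pairing schema elements with ontology concepts, is often bootstrapped by name normalisation. The sketch below matches table names to concepts by comparing normalised forms; real matchers also use synonyms, datatypes and structural evidence, and the schema and ontology names here are illustrative only.

```python
import re

def normalise(name):
    # "product_catalogues" and "ProductCatalogue" both become "product catalogue"
    words = re.sub(r"([a-z])([A-Z])", r"\1 \2", name).replace("_", " ").lower().split()
    return " ".join(w.rstrip("s") for w in words)  # crude singularisation

ONTOLOGY_CONCEPTS = ["Customer", "Order", "ProductCatalogue"]
SCHEMA_ELEMENTS = ["customers", "orders", "product_catalogues", "audit_log"]

# Build the schema-element -> concept mapping by normalised-name equality.
mapping = {}
for element in SCHEMA_ELEMENTS:
    for concept in ONTOLOGY_CONCEPTS:
        if normalise(element) == normalise(concept):
            mapping[element] = concept

# Unmatched elements ("audit_log") would fall through to manual review
# or to the mediation infrastructure described above.
```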

  6. Smart Grid Interoperability Maturity Model

    SciTech Connect

    Widergren, Steven E.; Levinson, Alex; Mater, J.; Drummond, R.

    2010-04-28

    The integration of automation associated with electricity resources (including transmission and distribution automation and demand-side resources operated by end-users) is key to supporting greater efficiencies and incorporating variable renewable resources and electric vehicles into the power system. The integration problems faced by this community are analogous to those faced in the health industry, emergency services, and other complex communities with many stakeholders. To highlight this issue and encourage communication and the development of a smart grid interoperability community, the GridWise Architecture Council (GWAC) created an Interoperability Context-Setting Framework. This "conceptual model" has been helpful to explain the importance of organizational alignment in addition to technical and informational interface specifications for "smart grid" devices and systems. As a next step to building a community sensitive to interoperability, the GWAC is investigating an interoperability maturity model (IMM) based on work done by others to address similar circumstances. The objective is to create a tool or set of tools that encourages a culture of interoperability in this emerging community. The tools would measure status and progress, analyze gaps, and prioritize efforts to improve the situation.

  7. Advancing Smart Grid Interoperability and Implementing NIST's Interoperability Roadmap

    SciTech Connect

    Basso,T.; DeBlasio, R.

    2010-04-01

    The IEEE American National Standards project P2030TM addressing smart grid interoperability and the IEEE 1547 series of standards addressing distributed resources interconnection with the grid have been identified in priority action plans in the Report to NIST on the Smart Grid Interoperability Standards Roadmap. This paper presents the status of the IEEE P2030 development, the IEEE 1547 series of standards publications and drafts, and provides insight on systems integration and grid infrastructure. The P2030 and 1547 series of standards are sponsored by IEEE Standards Coordinating Committee 21.

  8. Internet-Based Solutions for Manufacturing Enterprise Systems Interoperability - A Standards Perspective

    SciTech Connect

    Ivezic, Nenad; Kulvatunyou, Boonserm; Jones, Albert

    2004-10-01

    This chapter reviews efforts of selected standards consortia to develop Internet-based approaches for interoperable manufacturing enterprise information systems. The focus of the chapter is on the efforts to capture common meaning of data exchanged among interoperable information systems inside and outside a manufacturing enterprise. We start this chapter by giving a general overview of the key concepts in standards approaches to enable interoperable manufacturing enterprise systems. These approaches are compared on the basis of several characteristics found in standards frameworks such as horizontal or vertical focus of the standard, the standard message content definitions, the standard process definitions, and dependence on specific standard messaging solutions. After this initial overview, we establish one basis for reasoning about interoperable information systems by recognizing key manufacturing enterprise objects managed and exchanged both inside and outside the enterprise. Such conceptual objects are coarse in granularity and are meant to drive semantic definitions of data interchanges by providing a shared context for data dictionaries detailing the semantics of these objects and interactions or processes involved in data exchange. In the case of intra-enterprise interoperability, we recognize enterprise information processing activities, responsibilities, and those high-level conceptual objects exchanged in interactions among systems to fulfill the assigned responsibilities. Here, we show a mapping of one content standard onto the identified conceptual objects. In the case of inter-enterprise interoperability, we recognize key business processes areas and enumerate high-level conceptual objects that need to be exchanged among supply chain or trading partners. Here, we also show example mappings of representative content standards onto the identified conceptual objects. We complete this chapter by providing an account of some advanced work to enhance

  9. Community-oriented Implementation of Interoperability Standards (Invited)

    NASA Astrophysics Data System (ADS)

    Falke, S. R.

    2010-12-01

    Standards are necessary for interoperability but alone they are insufficient for attaining interoperability among information systems. An important characteristic, and key challenge, of interoperability is the implementation of standards. Standards are interpreted differently by different organizations and a result is a lack of interoperability despite each organization being able to rightfully claim they support standards. In this talk, the focus is on data access standards, particularly the spatial-temporal filtering and subsetting through the Open Geospatial Consortium (OGC) Web Coverage Service (WCS), Web Map Service (WMS), and Sensor Observation Service (SOS) standards. The talk will highlight a technology infusion strategy of collaboratively working within a domain community in order to achieve standards implementation conventions - commonly accepted methods of implementing standards across a particular community. The approach of community-oriented conventions for standards implementation allows multiple groups to bring their individual approaches to the table, share experiences, identify particular aspects of the standards where they must reconcile differences, and develop sets of best practices for others in the community to follow for creating networked and interoperable web services. The primary example used to highlight the approach is the ongoing interoperability efforts of the Committee on Earth Observation Satellites (CEOS) Atmospheric Composition Portal that is collaboratively developing best practices for implementing standards in publishing and using remotely sensed atmospheric composition data. Secondary examples are provided from the Global Earth Observation System of Systems (GEOSS) Architecture Implementation Pilot (AIP) Air Quality & Health Working Group, Geo-interface for Air, Land, Earth, Oceans NetCDF Interoperability Experiment (GALEON), and Federation of Earth Science Information Partners (ESIP) Air Quality Working Group.
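    Implementation conventions of this kind typically pin down exactly how the standard request parameters are filled in, so that every portal in the community issues byte-compatible requests. The sketch below assembles a WMS 1.3.0 GetMap request using the parameters that standard defines; the endpoint, layer name and the helper itself are hypothetical.

```python
from urllib.parse import urlencode

def wms_getmap_url(endpoint, layer, bbox, crs="EPSG:4326",
                   width=512, height=512, fmt="image/png", time=None):
    """Build a WMS 1.3.0 GetMap URL with the standard-defined parameters."""
    params = {
        "SERVICE": "WMS",
        "VERSION": "1.3.0",
        "REQUEST": "GetMap",
        "LAYERS": layer,
        "STYLES": "",
        "CRS": crs,                               # 1.3.0 uses CRS, not SRS
        "BBOX": ",".join(str(v) for v in bbox),   # lat/lon order for EPSG:4326 in 1.3.0
        "WIDTH": width,
        "HEIGHT": height,
        "FORMAT": fmt,
    }
    if time is not None:  # temporal subsetting, where the server supports TIME
        params["TIME"] = time
    return endpoint + "?" + urlencode(params)

url = wms_getmap_url("https://example.org/wms", "aerosol_optical_depth",
                     (-90, -180, 90, 180), time="2010-07-01")
```

A community convention might, for instance, fix the TIME encoding and the axis order in BBOX, which are exactly the points where independent standard-compliant implementations tend to diverge.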

  10. D-ATM, a working example of health care interoperability: From dirt path to gravel road.

    PubMed

    DeClaris, John-William

    2009-01-01

    For many years, there have been calls for interoperability within health care systems. The technology currently exists and is being used in business areas like banking and commerce, to name a few. Yet the question remains, why has interoperability not been achieved in health care? This paper examines issues encountered and success achieved with interoperability during the development of the Digital Access To Medication (D-ATM) project, sponsored by the Substance Abuse and Mental Health Services Administration (SAMHSA). D-ATM is the first government funded interoperable patient management system. The goal of this paper is to provide lessons learned and propose one possible road map for health care interoperability within private industry and how government can help. PMID:19963614

  11. An ontological system for interoperable spatial generalisation in biodiversity monitoring

    NASA Astrophysics Data System (ADS)

    Nieland, Simon; Moran, Niklas; Kleinschmit, Birgit; Förster, Michael

    2015-11-01

    Semantic heterogeneity remains a barrier to data comparability and standardisation of results in different fields of spatial research. Because of its thematic complexity, differing acquisition methods and national nomenclatures, interoperability of biodiversity monitoring information is especially difficult. Since data collection methods and interpretation manuals vary broadly, there is a need for automatised, objective methodologies for the generation of comparable data-sets. Ontology-based applications offer vast opportunities in data management and standardisation. This study examines two data-sets of protected heathlands in Germany and Belgium which are based on remote sensing image classification and semantically formalised in an OWL2 ontology. The proposed methodology uses semantic relations of the two data-sets, which are (semi-)automatically derived from remote sensing imagery, to generate objective and comparable information about the status of protected areas by utilising kernel-based spatial reclassification. This automatised method suggests a generalisation approach, which is able to generate delineation of Special Areas of Conservation (SAC) of the European biodiversity Natura 2000 network. Furthermore, it is able to transfer generalisation rules between areas surveyed with varying acquisition methods in different countries by taking into account automated inference of the underlying semantics. The generalisation results were compared with the manual delineation of terrestrial monitoring. For the different habitats in the two sites an accuracy of above 70% was detected. However, it has to be highlighted that the delineation of the ground-truth data carries a high degree of uncertainty, which is discussed in this study.
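    Kernel-based spatial reclassification can be illustrated in its simplest form as a majority filter: each cell takes the most frequent class within a moving window, smoothing away isolated pixels so that classified maps from different acquisition methods become comparable. This is a generic sketch, not the paper's actual method; the grid and class labels are invented.

```python
from collections import Counter

def majority_filter(grid):
    """Reclassify each cell to the majority class within its 3x3 neighbourhood."""
    rows, cols = len(grid), len(grid[0])
    out = [row[:] for row in grid]
    for r in range(rows):
        for c in range(cols):
            window = [grid[rr][cc]
                      for rr in range(max(0, r - 1), min(rows, r + 2))
                      for cc in range(max(0, c - 1), min(cols, c + 2))]
            out[r][c] = Counter(window).most_common(1)[0][0]
    return out

heath = [
    ["H", "H", "H", "G"],
    ["H", "G", "H", "G"],   # the lone "G" is an isolated misclassified pixel
    ["H", "H", "H", "G"],
]
generalised = majority_filter(heath)
```

In the ontology-driven version described above, the reclassification rules operate on semantic classes inferred from the OWL2 ontology rather than on raw labels, which is what allows rule transfer between countries.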

  12. The Effect of Adjunct Post-Questions, Metacognitive Process Prompts, Cognitive Feedback and Training in Facilitating Student Achievement from Semantic Maps

    ERIC Educational Resources Information Center

    Yamashiro, Kelly Ann C.; Dwyer, Francis

    2006-01-01

    The purpose of this study was to examine the instructional effectiveness of adjunct post-questions, metacognitive process prompts, cognitive feedback and training in complementing semantic maps. Two hundred seventy Taiwanese subjects were randomly assigned to eight treatments. After interacting with their respective treatments each completed three…

  13. Semantic Web for Manufacturing Web Services

    SciTech Connect

    Kulvatunyou, Boonserm; Ivezic, Nenad

    2002-06-01

    As markets become unexpectedly turbulent with a shortened product life cycle and a power shift towards buyers, the need for methods to rapidly and cost-effectively develop products, production facilities and supporting software is becoming urgent. The use of a virtual enterprise plays a vital role in surviving turbulent markets. However, its success requires reliable and large-scale interoperation among trading partners via a semantic web of trading partners' services whose properties, capabilities, and interfaces are encoded in an unambiguous as well as computer-understandable form. This paper demonstrates a promising approach to integration and interoperation between a design house and a manufacturer by developing semantic web services for business and engineering transactions. To this end, detailed activity and information flow diagrams are developed, in which the two trading partners exchange messages and documents. The properties and capabilities of the manufacturer sites are defined using DARPA Agent Markup Language (DAML) ontology definition language. The prototype development of semantic webs shows that enterprises can widely interoperate in an unambiguous and autonomous manner; hence, virtual enterprise is realizable at a low cost.

  14. Exploring NASA GES DISC Data with Interoperable Services

    NASA Technical Reports Server (NTRS)

    Zhao, Peisheng; Yang, Wenli; Hegde, Mahabal; Wei, Jennifer C.; Kempler, Steven; Pham, Long; Teng, William; Savtchenko, Andrey

    2015-01-01

    Overview of NASA GES DISC (NASA Goddard Earth Science Data and Information Services Center) data with interoperable services. Open-standard, interoperable services improve data discoverability, accessibility, and usability through metadata, catalogue and portal standards, and achieve data, information and knowledge sharing across applications through standardized interfaces and protocols. The relevant Open Geospatial Consortium (OGC) data services and specifications are: Web Coverage Service (WCS) for data; Web Map Service (WMS) for pictures of data; Web Map Tile Service (WMTS) for pictures of data tiles; and Styled Layer Descriptors (SLD) for rendered styles.
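    Tile services like WMTS address pre-rendered map images by (zoom, column, row). The arithmetic below is the common Web-Mercator tiling scheme showing how a client maps a point of interest to a tile address; it is a generic sketch under that assumption, and the point of interest is hypothetical rather than an actual GES DISC endpoint convention.

```python
import math

def lonlat_to_tile(lon, lat, zoom):
    """Map a lon/lat point to its Web-Mercator tile address (zoom, x, y)."""
    n = 2 ** zoom                      # tiles per axis at this zoom level
    x = int((lon + 180.0) / 360.0 * n)
    lat_rad = math.radians(lat)
    y = int((1.0 - math.asinh(math.tan(lat_rad)) / math.pi) / 2.0 * n)
    return zoom, x, y

# Tile containing a hypothetical point of interest near Greenbelt, MD, at zoom 6:
z, x, y = lonlat_to_tile(-76.85, 38.99, 6)
```

Because every conforming client and server computes the same addresses, tiles rendered once can be cached and served to any application, which is the interoperability payoff of the tiled approach.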

  15. Potential interoperability problems facing multi-site radiation oncology centers in The Netherlands

    NASA Astrophysics Data System (ADS)

    Scheurleer, J.; Koken, Ph; Wessel, R.

    2014-03-01

    Aim: To identify potential interoperability problems facing multi-site Radiation Oncology (RO) departments in the Netherlands and solutions for unambiguous multi-system workflows. Specific challenges confronting the RO department of VUmc (RO-VUmc), which is soon to open a satellite department, were characterized. Methods: A nationwide questionnaire survey was conducted to identify possible interoperability problems and solutions. Further detailed information was obtained by in-depth interviews at 3 Dutch RO institutes that already operate in more than one site. Results: The survey had a 100% response rate (n=21). Altogether 95 interoperability problems were described. Most reported problems were on a strategic and semantic level. The majority were DICOM(-RT) and HL7 related (n=65), primarily between treatment planning and verification systems or between departmental and hospital systems. Seven were identified as being relevant for RO-VUmc. Departments have overcome interoperability problems with their own, or with tailor-made vendor solutions. There was little knowledge about or utilization of solutions developed by Integrating the Healthcare Enterprise Radiation Oncology (IHE-RO). Conclusions: Although interoperability problems are still common, solutions have been identified. Awareness of IHE-RO needs to be raised. No major new interoperability problems are predicted as RO-VUmc develops into a multi-site department.

  16. Future of unmanned systems interoperability

    NASA Astrophysics Data System (ADS)

    Ackley, John J.; Wade, Robert L.; Gehring, Daniel G.

    2006-05-01

    There are many challenges in the area of interoperability of unmanned systems: increasing levels of autonomy, teaming and collaboration, long endurance missions, integration with civilian and military spaces. Several currently available methods and technologies may aid in meeting these and other challenges: consensus standards development, formal methods, model-based engineering, knowledge and ontology representation, agent-based systems, and plan language research. We believe the future of unmanned systems interoperability depends on the integration of these methods and technologies into a domain-independent plan language for unmanned systems.

  17. CCP interoperability and system stability

    NASA Astrophysics Data System (ADS)

    Feng, Xiaobing; Hu, Haibo

    2016-09-01

    To control counterparty risk, financial regulations such as the Dodd-Frank Act increasingly require standardized derivatives trades to be cleared by central counterparties (CCPs). It is anticipated that in the near future, CCPs across the world will be linked through interoperability agreements that facilitate risk sharing but also serve as a conduit for transmitting shocks. This paper theoretically studies networks of CCPs that are linked through interoperability arrangements. The major finding is that different configurations of the interlinked CCP networks give rise to different properties of the cascading failures.

  18. Dynamic Business Networks: A Headache for Sustainable Systems Interoperability

    NASA Astrophysics Data System (ADS)

    Agostinho, Carlos; Jardim-Goncalves, Ricardo

    Collaborative networked environments emerged with the spread of the internet, contributing to overcoming past communication barriers and identifying interoperability as an essential property. When achieved seamlessly, efficiency is increased in the entire product life cycle. Nowadays, most organizations try to attain interoperability by establishing peer-to-peer mappings with the different partners, or, in optimized networks, by using international standard models as the core for information exchange. In current industrial practice, mappings are only defined once, and the morphisms that represent them are hardcoded in the enterprise systems. This solution has been effective for static environments, where enterprise and product models are valid for decades. However, with an increasingly complex and dynamic global market, models change frequently to answer new customer requirements. This paper draws concepts from complex systems science and proposes a framework for sustainable systems interoperability in dynamic networks, enabling different organizations to evolve at their own rate.

  19. A Review of Ontologies with the Semantic Web in View.

    ERIC Educational Resources Information Center

    Ding, Ying

    2001-01-01

    Discusses the movement of the World Wide Web from the first generation to the second, called the Semantic Web. Provides an overview of ontology, a philosophical theory about the nature of existence being applied to artificial intelligence that will have a crucial role in enabling content-based access, interoperability, and communication across the…

  20. Improving Groundwater Data Interoperability: Results of the Second OGC Groundwater Interoperability Experiment

    NASA Astrophysics Data System (ADS)

    Lucido, J. M.; Booth, N.

    2014-12-01

    Interoperable sharing of groundwater data across international borders is essential for the proper management of global water resources. However, storage and management of groundwater data are often distributed across many agencies or organizations. Furthermore, these data may be represented in disparate proprietary formats, posing a significant challenge for integration. For this reason, standard data models are required to achieve interoperability across geographical and political boundaries. The GroundWater Markup Language 1.0 (GWML1) was developed in 2010 as an extension of the Geography Markup Language (GML) in order to support groundwater data exchange within Spatial Data Infrastructures (SDI). In 2013, development of GWML2 was initiated under the sponsorship of the Open Geospatial Consortium (OGC) for intended adoption by the international community as the authoritative standard for the transfer of groundwater feature data, including data about water wells, aquifers, and related entities. GWML2 harmonizes GWML1 and the EU's INSPIRE models related to geology and hydrogeology. Additionally, an interoperability experiment was initiated to test the model for commercial, technical, scientific, and policy use cases. The scientific use case focuses on the delivery of data required for input into computational flow modeling software used to determine the flow of groundwater within a particular aquifer system. It involves the delivery of properties associated with hydrogeologic units, observations related to those units, and information about the related aquifers. To test this use case, web services are being implemented using GWML2 and WaterML2, the authoritative standard for water time series observations, in order to serve USGS water well and hydrogeologic data via standard OGC protocols. Furthermore, integration of these data into a computational groundwater flow model will be tested. This submission will present the GWML2 information model and results.
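    As a minimal sketch of what consuming such GML-based feature data can look like, the fragment below parses a simplified, hypothetical GWML2-like XML snippet with Python's standard library. The element names and namespace are illustrative only and do not follow the real GWML2 schema:

```python
import xml.etree.ElementTree as ET

# Hypothetical, simplified GWML2-like fragment (not the real schema).
doc = """
<gwml:GW_Well xmlns:gwml="http://example.org/gwml2">
  <gwml:wellDepth uom="m">120.5</gwml:wellDepth>
  <gwml:aquifer>Upper Floridan</gwml:aquifer>
</gwml:GW_Well>
"""

ns = {"gwml": "http://example.org/gwml2"}
root = ET.fromstring(doc)

# Namespace-aware lookups: the second argument maps prefixes to URIs.
depth = float(root.find("gwml:wellDepth", ns).text)
unit = root.find("gwml:wellDepth", ns).get("uom")
aquifer = root.find("gwml:aquifer", ns).text
print(depth, unit, aquifer)
```

A real client would fetch such documents from an OGC Web Feature Service and validate them against the published GWML2 schema before use.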

  1. Interoperability of Neuroscience Modeling Software

    PubMed Central

    Cannon, Robert C.; Gewaltig, Marc-Oliver; Gleeson, Padraig; Bhalla, Upinder S.; Cornelis, Hugo; Hines, Michael L.; Howell, Fredrick W.; Muller, Eilif; Stiles, Joel R.; Wils, Stefan; De Schutter, Erik

    2009-01-01

    Neuroscience increasingly uses computational models to assist in the exploration and interpretation of complex phenomena. As a result, considerable effort is invested in the development of software tools and technologies for numerical simulations and for the creation and publication of models. The diversity of related tools leads to the duplication of effort and hinders model reuse. Development practices and technologies that support interoperability between software systems therefore play an important role in making the modeling process more efficient and in ensuring that published models can be reliably and easily reused. Various forms of interoperability are possible including the development of portable model description standards, the adoption of common simulation languages or the use of standardized middleware. Each of these approaches finds applications within the broad range of current modeling activity. However more effort is required in many areas to enable new scientific questions to be addressed. Here we present the conclusions of the “Neuro-IT Interoperability of Simulators” workshop, held at the 11th computational neuroscience meeting in Edinburgh (July 19-20 2006; http://www.cnsorg.org). We assess the current state of interoperability of neural simulation software and explore the future directions that will enable the field to advance. PMID:17873374

  2. Semantic Desktop

    NASA Astrophysics Data System (ADS)

    Sauermann, Leo; Kiesel, Malte; Schumacher, Kinga; Bernardi, Ansgar

    This contribution shows what the workplace of the future could look like and where the Semantic Web opens up new possibilities. To this end, approaches from the areas of the Semantic Web, knowledge representation, desktop applications, and visualization are presented that make it possible to reinterpret and reuse a user's existing data. The combination of the Semantic Web with desktop computers offers particular advantages - a paradigm known as the Semantic Desktop. The possibilities for application integration described here are not limited to the desktop, however, and can equally be used in web applications.

  3. Semantic Mapping.

    ERIC Educational Resources Information Center

    Johnson, Dale D.; And Others

    1986-01-01

    Describes semantic mapping, an effective strategy for vocabulary instruction that involves the categorical structuring of information in graphic form and requires students to relate new words to their own experience and prior knowledge. (HOD)

  4. Interoperability with Moby 1.0--it's better than sharing your toothbrush!

    PubMed

    Wilkinson, Mark D; Senger, Martin; Kawas, Edward; Bruskiewich, Richard; Gouzy, Jerome; Noirot, Celine; Bardou, Philippe; Ng, Ambrose; Haase, Dirk; Saiz, Enrique de Andres; Wang, Dennis; Gibbons, Frank; Gordon, Paul M K; Sensen, Christoph W; Carrasco, Jose Manuel Rodriguez; Fernández, José M; Shen, Lixin; Links, Matthew; Ng, Michael; Opushneva, Nina; Neerincx, Pieter B T; Leunissen, Jack A M; Ernst, Rebecca; Twigger, Simon; Usadel, Bjorn; Good, Benjamin; Wong, Yan; Stein, Lincoln; Crosby, William; Karlsson, Johan; Royo, Romina; Párraga, Iván; Ramírez, Sergio; Gelpi, Josep Lluis; Trelles, Oswaldo; Pisano, David G; Jimenez, Natalia; Kerhornou, Arnaud; Rosset, Roman; Zamacola, Leire; Tarraga, Joaquin; Huerta-Cepas, Jaime; Carazo, Jose María; Dopazo, Joaquin; Guigo, Roderic; Navarro, Arcadi; Orozco, Modesto; Valencia, Alfonso; Claros, M Gonzalo; Pérez, Antonio J; Aldana, Jose; Rojano, M Mar; Fernandez-Santa Cruz, Raul; Navas, Ismael; Schiltz, Gary; Farmer, Andrew; Gessler, Damian; Schoof, Heiko; Groscurth, Andreas

    2008-05-01

    The BioMoby project was initiated in 2001 from within the model organism database community. It aimed to standardize methodologies to facilitate information exchange and access to analytical resources, using a consensus driven approach. Six years later, the BioMoby development community is pleased to announce the release of the 1.0 version of the interoperability framework, registry Application Programming Interface and supporting Perl and Java code-bases. Together, these provide interoperable access to over 1400 bioinformatics resources worldwide through the BioMoby platform, and this number continues to grow. Here we highlight and discuss the features of BioMoby that make it distinct from other Semantic Web Service and interoperability initiatives, and that have been instrumental to its deployment and use by a wide community of bioinformatics service providers. The standard, client software, and supporting code libraries are all freely available at http://www.biomoby.org/. PMID:18238804

  5. Auto-Generated Semantic Processing Services

    NASA Technical Reports Server (NTRS)

    Davis, Rodney; Hupf, Greg

    2009-01-01

    Auto-Generated Semantic Processing (AGSP) Services is a suite of software tools for automated generation of other computer programs, denoted cross-platform semantic adapters, that support interoperability of computer-based communication systems that utilize a variety of both new and legacy communication software running in a variety of operating- system/computer-hardware combinations. AGSP has numerous potential uses in military, space-exploration, and other government applications as well as in commercial telecommunications. The cross-platform semantic adapters take advantage of common features of computer- based communication systems to enforce semantics, messaging protocols, and standards of processing of streams of binary data to ensure integrity of data and consistency of meaning among interoperating systems. The auto-generation aspect of AGSP Services reduces development time and effort by emphasizing specification and minimizing implementation: In effect, the design, building, and debugging of software for effecting conversions among complex communication protocols, custom device mappings, and unique data-manipulation algorithms is replaced with metadata specifications that map to an abstract platform-independent communications model. AGSP Services is modular and has been shown to be easily integrable into new and legacy NASA flight and ground communication systems.
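    The metadata-driven generation idea can be sketched in miniature: given a declarative field specification, a decoder for a binary message stream is generated rather than hand-written. The spec format below is a hypothetical stand-in, since the actual AGSP metadata model is not described here:

```python
import struct

# Hypothetical message spec mapping field names to struct format codes;
# the real AGSP metadata specifications are far richer than this.
SPEC = [("msg_id", "H"), ("temperature", "f"), ("status", "B")]

def make_decoder(spec, byte_order=">"):
    """Generate a decoder function from a declarative field spec.

    The spec (metadata) drives the implementation, so a protocol change
    means editing the spec, not rewriting the adapter code.
    """
    fmt = byte_order + "".join(code for _, code in spec)
    names = [name for name, _ in spec]

    def decode(payload: bytes) -> dict:
        return dict(zip(names, struct.unpack(fmt, payload)))

    return decode

decode = make_decoder(SPEC)
packet = struct.pack(">HfB", 7, 21.5, 1)  # a sample big-endian message
print(decode(packet))
```

The same spec could equally drive an encoder or a validator, which is the sense in which specification replaces implementation.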

  6. The MMI Semantic Framework: Rosetta Stones for Earth Sciences

    NASA Astrophysics Data System (ADS)

    Rueda, C.; Bermudez, L. E.; Graybeal, J.; Alexander, P.

    2009-12-01

    Semantic interoperability—the exchange of meaning among computer systems—is needed to successfully share data in Ocean Science and across all Earth sciences. The best approach toward semantic interoperability requires a designed framework, and operationally tested tools and infrastructure within that framework. Currently available technologies make a scientific semantic framework feasible, but its development requires sustainable architectural vision and development processes. This presentation outlines the MMI Semantic Framework, including recent progress on it and its client applications. The MMI Semantic Framework consists of tools, infrastructure, and operational and community procedures and best practices, to meet short-term and long-term semantic interoperability goals. The design and prioritization of the semantic framework capabilities are based on real-world scenarios in Earth observation systems. We describe some key use cases, as well as the associated requirements for building the overall infrastructure, which is realized through the MMI Ontology Registry and Repository. This system includes support for community creation and sharing of semantic content, ontology registration, version management, and seamless integration of user-friendly tools and application programming interfaces. The presentation describes the architectural components for semantic mediation: the registry and repository for vocabularies, ontologies, and term mappings. We show how the technologies and approaches in the framework can address community needs for managing and exchanging semantic information. We will demonstrate how different types of users and client applications exploit the tools and services for data aggregation, visualization, archiving, and integration. Specific examples from OOSTethys (http://www.oostethys.org) and the Ocean Observatories Initiative Cyberinfrastructure (http://www.oceanobservatories.org) will be cited. Finally, we show how semantic augmentation of web

  7. The interoperability force in the ERP field

    NASA Astrophysics Data System (ADS)

    Boza, Andrés; Cuenca, Llanos; Poler, Raúl; Michaelides, Zenon

    2015-04-01

    Enterprise resource planning (ERP) systems participate in interoperability projects and this participation sometimes leads to new proposals for the ERP field. The aim of this paper is to identify the role that interoperability plays in the evolution of ERP systems. To go about this, ERP systems have been first identified within interoperability frameworks. Second, the initiatives in the ERP field driven by interoperability requirements have been identified from two perspectives: technological and business. The ERP field is evolving from classical ERP as information system integrators to a new generation of fully interoperable ERP. Interoperability is changing the way of running business, and ERP systems are changing to adapt to the current stream of interoperability.

  8. Opportunities for the Mashup of Heterogeneous Data Servers via Semantic Web Technology

    NASA Astrophysics Data System (ADS)

    Ritschel, Bernd; Seelus, Christoph; Neher, Günther; Iyemori, Toshihiko; Koyama, Yukinobu; Yatagai, Akiyo; Murayama, Yasuhiro; King, Todd; Hughes, John; Fung, Shing; Galkin, Ivan; Hapgood, Michael; Belehaki, Anna

    2015-04-01

    The European Union ESPAS, Japanese IUGONET, and GFZ ISDC data servers were developed for the ingestion, archiving, and distribution of geo- and space-science domain data. The main parts of the data managed by these servers are related to near-Earth space and geomagnetic field data. A smart mashup of the data servers would allow seamless browsing of and access to data and related context information. However, achieving a high level of interoperability is a challenge because the data servers are based on different data models and software frameworks. This paper focuses on the latest experiments and results for the mashup of the data servers using the Semantic Web approach. Besides the mashup of domain and terminological ontologies, the options for connecting data managed by relational databases using D2R Server and SPARQL technology will be addressed in particular. A successful realization of the data-server mashup will have a positive impact not only on the data users of the specific scientific domain but also on related projects, such as the development of a new interoperable version of NASA's Planetary Data System (PDS) or ICSU's World Data System alliance. ESPAS data server: https://www.espas-fp7.eu/portal/ IUGONET data server: http://search.iugonet.org/iugonet/ GFZ ISDC data server (semantic Web based prototype): http://rz-vm30.gfz-potsdam.de/drupal-7.9/ NASA PDS: http://pds.nasa.gov ICSU-WDS: https://www.icsu-wds.org

  9. Security message exchange interoperability scenarios

    SciTech Connect

    Tarman, Thomas

    1998-07-01

    This contribution describes three interoperability scenarios for the ATM Security Message Exchange (SME) protocol. These scenarios include network-wide signaling support for the Security Services Information Element (SSIE), partial signaling support where the SSIE is only supported in private or workgroup ATM networks, and the case where the SSIE is not supported by any network elements (except those that implement security services). Explanatory text is proposed for inclusion in section 2.3 of the ATM Security Specification, Version 1.0.

  10. Enabling Interoperability in Heliophysical Domains

    NASA Astrophysics Data System (ADS)

    Bentley, Robert

    2013-04-01

    There are many aspects of science in the Solar System that overlap - phenomena observed in one domain can have effects in other domains. However, there are many problems related to exploiting the data in cross-disciplinary studies because of the lack of interoperability of the data and services. The CASSIS project is a Coordination Action funded under FP7 with the objective of improving the interoperability of data and services related to Solar System science. CASSIS has been investigating how the data could be made more accessible with some relatively minor changes to the observational metadata. The project has been looking at the services that are used within the domain and determining whether they are interoperable with each other and, if not, what would be required to make them so. It has also been examining all types of metadata that are used when identifying and using observations, and trying to make them more compliant with techniques and standards developed by bodies such as the International Virtual Observatory Alliance (IVOA). Many of the lessons being learnt in the study are applicable to domains beyond those directly involved in heliophysics. Adopting some simple standards related to the design of service interfaces and metadata would make it much easier to investigate interdisciplinary science topics. We will report on our findings and describe a roadmap for the future. For more information about CASSIS, please visit the project Web site at cassis-vo.eu

  11. Semantic SenseLab: implementing the vision of the Semantic Web in neuroscience

    PubMed Central

    Samwald, Matthias; Chen, Huajun; Ruttenberg, Alan; Lim, Ernest; Marenco, Luis; Miller, Perry; Shepherd, Gordon; Cheung, Kei-Hoi

    2011-01-01

    Summary Objective Integrative neuroscience research needs a scalable informatics framework that enables semantic integration of diverse types of neuroscience data. This paper describes the use of the Web Ontology Language (OWL) and other Semantic Web technologies for the representation and integration of molecular-level data provided by several databases of the SenseLab suite of neuroscience databases. Methods Based on the original database structure, we semi-automatically translated the databases into OWL ontologies with manual addition of semantic enrichment. The SenseLab ontologies are extensively linked to other biomedical Semantic Web resources, including the Subcellular Anatomy Ontology, the Brain Architecture Management System, the Gene Ontology, BIRNLex, and UniProt. The SenseLab ontologies have also been mapped to the Basic Formal Ontology and the Relation Ontology, which helps ease interoperability with many other existing and future biomedical ontologies for the Semantic Web. In addition, approaches to representing contradictory research statements are described. The SenseLab ontologies are designed for use on the Semantic Web, which enables their integration into a growing collection of biomedical information resources. Conclusion We demonstrate that our approach can yield significant potential benefits and that the Semantic Web is rapidly becoming mature enough to realize its anticipated promises. The ontologies are available online at http://neuroweb.med.yale.edu/senselab/ PMID:20006477

  12. A semantically enriched clinical guideline model enabling deployment in heterogeneous healthcare environments.

    PubMed

    Laleci, Gokce B; Dogac, Asuman

    2009-03-01

    Clinical guidelines are developed to assist healthcare practitioners in making decisions on a patient's medical problems; as such, they communicate with external applications to retrieve patient data, to initiate medical actions through clinical workflows, and to transmit information to alert/reminder systems. The interoperability problems in the healthcare information technology domain prevent wider deployment of clinical guidelines because each deployment requires a tedious custom adaptation phase. In this paper, we provide machine-processable mechanisms that express the semantics of clinical guideline interfaces so that automated processes can be used to access the clinical resources for guideline deployment and execution. To be able to deploy the semantically extended guidelines to healthcare settings semiautomatically, the underlying application's semantics must also be available. We describe how this can be achieved based on two prominent implementation technologies in use in the eHealth domain: the Integrating the Healthcare Enterprise (IHE) Cross-Enterprise Document Sharing integration profile for discovering and exchanging electronic healthcare records, and Web service technology for interacting with the clinical workflows and wireless medical sensor devices. The system described in this paper is realized within the scope of the SAPHIRE Project. PMID:19171525

  13. Vocabulary services to support scientific data interoperability

    NASA Astrophysics Data System (ADS)

    Cox, Simon; Mills, Katie; Tan, Florence

    2013-04-01

    Shared vocabularies are a core element in interoperable systems. Vocabularies need to be available at run-time, and where the vocabularies are shared by a distributed community this implies the use of web technology to provide vocabulary services. Given the ubiquity of vocabularies or classifiers in systems, vocabulary services are effectively the base of the interoperability stack. In contemporary knowledge organization systems, a vocabulary item is considered a concept, with the "terms" denoting it appearing as labels. The Simple Knowledge Organization System (SKOS) formalizes this as an RDF Schema (RDFS) application, with a bridge to formal logic in the Web Ontology Language (OWL). For maximum utility, a vocabulary should be made available through the following interfaces: the vocabulary as a whole, at an ontology URI corresponding to a vocabulary document; each item in the vocabulary, at the item URI; summaries, subsets, and resources derived by transformation; the standard RDF web API, i.e. a SPARQL endpoint; and a query form for human users. However, the vocabulary data model may be leveraged directly in a standard vocabulary API that uses the semantics provided by SKOS. SISSvoc3 [1] accomplishes this as a standard set of URI templates for a vocabulary. Any URI conforming to the template selects a vocabulary subset based on the SKOS properties, including labels (skos:prefLabel, skos:altLabel, rdfs:label) and a subset of the semantic relations (skos:broader, skos:narrower, etc.). SISSvoc3 thus provides a RESTful SKOS API for querying a vocabulary while hiding the complexity of SPARQL. It has been implemented using the Linked Data API (LDA) [2], which connects to a SPARQL endpoint. By using LDA, we also get content negotiation, alternative views, paging, metadata, and other functionality provided in a standard way. A number of vocabularies have been formalized in SKOS and deployed by CSIRO, the Australian Bureau of Meteorology (BOM) and their
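    To illustrate the kind of query such a SKOS API answers, the sketch below walks skos:narrower links over a tiny in-memory vocabulary. A real SISSvoc deployment would resolve these via URI templates backed by a SPARQL endpoint; the concepts shown are invented examples:

```python
# Minimal in-memory stand-in for a SKOS vocabulary: each concept has a
# prefLabel and broader/narrower relations (concept names are invented).
VOCAB = {
    "igneous rock": {"prefLabel": "igneous rock",
                     "narrower": ["basalt", "granite"], "broader": []},
    "basalt": {"prefLabel": "basalt", "narrower": [],
               "broader": ["igneous rock"]},
    "granite": {"prefLabel": "granite", "narrower": [],
                "broader": ["igneous rock"]},
}

def narrower_transitive(concept):
    """All concepts below `concept`, following skos:narrower transitively."""
    found = []
    stack = list(VOCAB[concept]["narrower"])
    while stack:
        c = stack.pop()
        found.append(c)
        stack.extend(VOCAB[c]["narrower"])  # recurse into sub-concepts
    return sorted(found)

print(narrower_transitive("igneous rock"))
```

This transitive-closure query is exactly what a vocabulary service hides behind a single URI template, sparing clients from writing the equivalent SPARQL property path.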

  14. Interoperability Between Geoscience And Geospatial Catalog Protocols

    NASA Astrophysics Data System (ADS)

    Hu, C.; di, L.; Yang, W.; Lynnes, C.; Domenico, B.; Rutledge, G. K.; Enloe, Y.

    2007-12-01

    In the past several years, interoperability gaps have made cross-protocol and cross-community data access a challenge within the Earth science community. One such gap is between two protocol families developed within the geospatial and Earth science communities. The Earth science community has developed a family of related geoscience protocols that includes OPeNDAP for data access and the Thematic Real-time Environmental Distributed Data Services (THREDDS) catalog capability. The corresponding protocols in the geospatial community are the Open Geospatial Consortium (OGC) protocols: Web Coverage Service for geospatial data access and Catalog Services for the Web (CSW) for data search. We have developed a catalog gateway to mediate client/server interactions between OGC catalog clients and THREDDS servers. In essence, the gateway is an OGC catalog server that enables OGC clients to search for data registered in THREDDS catalogs. The gateway comprises two parts: the CSW server and a THREDDS-to-CSW ingestion tool. There are two key challenges in constructing such a gateway: the first is to define the mapping relationship between the catalog metadata schema of CSW and that of THREDDS, and the second is to ingest the THREDDS catalog content into the CSW server. Since our CSW server is based on the ISO 19115/ISO 19119 Application Profile, a key challenge is to semantically map the ISO 19115 metadata attributes in the ISO Application Profile to the THREDDS metadata attributes in the THREDDS Dataset Inventory Catalog Specification Version 1.0. With the mapping established, tools that translate the THREDDS catalog information model into the CSW/ISO Profile information model were developed. These dynamically poll THREDDS catalog servers and ingest the THREDDS catalog information into the CSW server's database, maintaining the hierarchical relationships inherent in the THREDDS catalogs. A prototype system has been implemented to demonstrate the concept and approach.
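    The attribute-mapping step can be sketched as a simple crosswalk table. The THREDDS keys and ISO 19115 paths below are a simplified, illustrative subset, not the actual mapping developed for the gateway:

```python
# Illustrative THREDDS-to-ISO 19115 attribute crosswalk; the element
# names are simplified stand-ins, not the full specifications.
THREDDS_TO_ISO = {
    "name":               "identificationInfo/citation/title",
    "documentation":      "identificationInfo/abstract",
    "dataType":           "contentInfo/contentType",
    "geospatialCoverage": "identificationInfo/extent",
}

def translate(thredds_record: dict) -> dict:
    """Map a flat THREDDS catalog record onto ISO 19115 metadata paths,
    silently dropping attributes with no crosswalk entry."""
    return {THREDDS_TO_ISO[k]: v
            for k, v in thredds_record.items() if k in THREDDS_TO_ISO}

iso = translate({"name": "Sea Surface Temperature",
                 "dataType": "Grid",
                 "serviceName": "odap"})  # no crosswalk entry -> dropped
print(iso)
```

A production ingestion tool would additionally preserve the THREDDS dataset hierarchy and validate the resulting ISO records against the application profile's schema.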

  15. Using ontological inference and hierarchical matchmaking to overcome semantic heterogeneity in remote sensing-based biodiversity monitoring

    NASA Astrophysics Data System (ADS)

    Nieland, Simon; Kleinschmit, Birgit; Förster, Michael

    2015-05-01

    Ontology-based applications hold promise in improving spatial data interoperability. In this work we use remote sensing-based biodiversity information and apply semantic formalisation and ontological inference to show improvements in data interoperability/comparability. The proposed methodology includes an observation-based, "bottom-up" engineering approach for remote sensing applications and gives a practical example of semantic mediation of geospatial products. We apply the methodology to three different nomenclatures used for remote sensing-based classification of two heathland nature conservation areas in Belgium and Germany. We analysed sensor nomenclatures with respect to their semantic formalisation and their bio-geographical differences. The results indicate that a hierarchical and transparent nomenclature is far more important for transferability than the sensor or study area. The inclusion of additional information, not necessarily belonging to a vegetation class description, is a key factor for the future success of using semantics for interoperability in remote sensing.

  16. Generative Semantics

    ERIC Educational Resources Information Center

    Bagha, Karim Nazari

    2011-01-01

    Generative semantics is (or perhaps was) a research program within linguistics, initiated by the work of George Lakoff, John R. Ross, Paul Postal and later McCawley. The approach developed out of transformational generative grammar in the mid 1960s, but stood largely in opposition to work by Noam Chomsky and his students. The nature and genesis of…

  17. Knowledge-oriented semantics modelling towards uncertainty reasoning.

    PubMed

    Mohammed, Abdul-Wahid; Xu, Yang; Liu, Ming

    2016-01-01

    Distributed reasoning in M2M leverages the expressive power of ontology to enable semantic interoperability between heterogeneous systems of connected devices. Ontology, however, lacks the built-in, principled support to effectively handle the uncertainty inherent in M2M application domains. Thus, efficient reasoning can be achieved by integrating the inferential reasoning power of probabilistic representations with the first-order expressiveness of ontology. But there remains a gap with current probabilistic ontologies, since the state of the art provides no compatible representation for the simultaneous handling of discrete and continuous quantities in ontology. This requirement is paramount, especially in smart homes, where continuous quantities cannot be avoided, and simply mapping continuous information to discrete states through quantization can cause a great deal of information loss. In this paper, we propose a hybrid probabilistic ontology that can simultaneously handle distributions over discrete and continuous quantities in ontology. We call this new framework HyProb-Ontology; it specifies distributions over properties of classes, which serve as templates for instances of classes to inherit as well as overwrite in some aspects. Since there can be no restriction on the dependency topology of models that HyProb-Ontology can induce across different domains, we can achieve a unified Ground Hybrid Probabilistic Model by conditional Gaussian fuzzification of the distributions of the continuous variables in ontology. The results of our experiments show that this unified model can achieve exact inference with better performance than classical Bayesian networks. PMID:27350935
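    A minimal sketch of the fuzzification idea, assuming invented state centres and widths (the paper's actual conditional Gaussian construction is richer than this): a continuous reading receives a normalised degree of membership in each discrete state, instead of being quantized to a single state with total loss of the in-between information.

```python
import math

def gaussian_membership(x, mean, sigma):
    """Unnormalised degree of membership of a continuous reading in a state."""
    return math.exp(-((x - mean) ** 2) / (2 * sigma ** 2))

# Hypothetical smart-home states for room temperature in degrees C;
# the centres and widths are illustrative, not taken from the paper.
STATES = {"cold": (10.0, 4.0), "comfortable": (21.0, 3.0), "hot": (30.0, 4.0)}

def fuzzify(x):
    """Return a normalised distribution over the discrete states."""
    raw = {s: gaussian_membership(x, m, sd) for s, (m, sd) in STATES.items()}
    total = sum(raw.values())
    return {s: v / total for s, v in raw.items()}

dist = fuzzify(22.0)
print(dist)
```

Contrast with hard quantization: a 22.0-degree reading would map solely to "comfortable", discarding the nonzero weight the soft version assigns to "hot".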

  18. HTML5 microdata as a semantic container for medical information exchange.

    PubMed

    Kimura, Eizen; Kobayashi, Shinji; Ishihara, Ken

    2014-01-01

    Achieving interoperability between clinical electronic medical records (EMR) systems and cloud computing systems is challenging because of the lack of a universal reference method as a standard for information exchange with a secure connection. Here we describe an information exchange scheme using HTML5 microdata, where the standard semantic container is an HTML document. We embed HL7 messages describing laboratory test results in the microdata. We also annotate items in the clinical research report with the microdata. We mapped the laboratory test result data into the clinical research report using an HL7 selector specified in the microdata. This scheme can provide secure cooperation between the cloud-based service and the EMR system. PMID:25160218
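    As a rough sketch of the container idea, the snippet below embeds a lab-result item as HTML5 microdata and extracts it with Python's standard HTML parser. The itemtype URL and property names are illustrative placeholders, not a published HL7 vocabulary:

```python
from html.parser import HTMLParser

# Simplified microdata fragment; itemtype and itemprop names are invented.
DOC = """
<div itemscope itemtype="http://example.org/LabResult">
  <span itemprop="testName">Hemoglobin</span>
  <span itemprop="value">13.8</span>
  <span itemprop="unit">g/dL</span>
</div>
"""

class MicrodataParser(HTMLParser):
    """Collect itemprop name/text pairs from a single microdata item."""
    def __init__(self):
        super().__init__()
        self.props = {}
        self._current = None

    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)
        if "itemprop" in attrs:
            self._current = attrs["itemprop"]

    def handle_data(self, data):
        if self._current and data.strip():
            self.props[self._current] = data.strip()
            self._current = None

parser = MicrodataParser()
parser.feed(DOC)
print(parser.props)
```

In the scheme described above, the embedded payload would be an HL7 message rather than plain spans, but the extraction pattern is the same: the HTML document acts as the semantic container, and the microdata attributes act as selectors.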

  19. Interoperability for Space Mission Monitor and Control: Applying Technologies from Manufacturing Automation and Process Control Industries

    NASA Technical Reports Server (NTRS)

    Jones, Michael K.

    1998-01-01

    Various issues associated with interoperability for space mission monitor and control are presented in viewgraph form. Specific topics include: 1) Space Project Mission Operations Control Architecture (SuperMOCA) goals and methods for achieving them; 2) Specifics on the architecture: open standards and layering, enhancing interoperability, and promoting commercialization; 3) An advertisement; 4) Status of the task - government/industry cooperation and architecture and technology demonstrations; and 5) Key features of messaging services and virtual devices.

  20. 23 CFR 950.7 - Interoperability requirements.

    Code of Federal Regulations, 2013 CFR

    2013-04-01

    ... 23 Highways 1 2013-04-01 2013-04-01 false Interoperability requirements. 950.7 Section 950.7 Highways FEDERAL HIGHWAY ADMINISTRATION, DEPARTMENT OF TRANSPORTATION INTELLIGENT TRANSPORTATION SYSTEMS ELECTRONIC TOLL COLLECTION § 950.7 Interoperability requirements. (a) For any toll facility operating pursuant to authority under a 1604 toll...

  1. Integrated semantics service platform for the Internet of Things: a case study of a smart office.

    PubMed

    Ryu, Minwoo; Kim, Jaeho; Yun, Jaeseok

    2015-01-01

    The Internet of Things (IoT) allows machines and devices in the world to connect with each other and generate a huge amount of data, which has great potential to provide useful knowledge across service domains. Combining the context of IoT with semantic technologies, we can build integrated semantic systems to support semantic interoperability. In this paper, we propose an integrated semantic service platform (ISSP) to support ontological models in various IoT-based service domains of a smart city. In particular, we address three main problems for providing integrated semantic services together with IoT systems: semantic discovery, dynamic semantic representation, and semantic data repository for IoT resources. To show the feasibility of the ISSP, we develop a prototype service for a smart office using the ISSP, which can provide a preset, personalized office environment by interpreting user text input via a smartphone. We also discuss a scenario to show how the ISSP-based method would help build a smart city, where services in each service domain can discover and exploit IoT resources that are needed across domains. We expect that our method could eventually contribute to providing people in a smart city with more integrated, comprehensive services based on semantic interoperability. PMID:25608216

  2. Integrated Semantics Service Platform for the Internet of Things: A Case Study of a Smart Office

    PubMed Central

    Ryu, Minwoo; Kim, Jaeho; Yun, Jaeseok

    2015-01-01

    The Internet of Things (IoT) allows machines and devices in the world to connect with each other and generate a huge amount of data, which has great potential to provide useful knowledge across service domains. Combining the context of IoT with semantic technologies, we can build integrated semantic systems to support semantic interoperability. In this paper, we propose an integrated semantic service platform (ISSP) to support ontological models in various IoT-based service domains of a smart city. In particular, we address three main problems for providing integrated semantic services together with IoT systems: semantic discovery, dynamic semantic representation, and semantic data repository for IoT resources. To show the feasibility of the ISSP, we develop a prototype service for a smart office using the ISSP, which can provide a preset, personalized office environment by interpreting user text input via a smartphone. We also discuss a scenario to show how the ISSP-based method would help build a smart city, where services in each service domain can discover and exploit IoT resources that are needed across domains. We expect that our method could eventually contribute to providing people in a smart city with more integrated, comprehensive services based on semantic interoperability. PMID:25608216

  3. Space Network Interoperability Panel (SNIP) study

    NASA Technical Reports Server (NTRS)

    Ryan, Thomas; Lenhart, Klaus; Hara, Hideo

    1991-01-01

    The Space Network Interoperability Panel (SNIP) study is a tripartite effort involving the National Aeronautics and Space Administration (NASA), the European Space Agency (ESA), and the National Space Development Agency (NASDA) of Japan. SNIP is an ongoing interoperability study of the Data Relay Satellite (DRS) Systems of the three organizations. The study is broken down into two phases: Phase one deals with S-band (2 GHz) interoperability, and Phase two deals with Ka-band (20/30 GHz) interoperability (in addition to S-band). In 1987 the SNIP formed a Working Group to define and study operations concepts and technical subjects to assure compatibility of the international data relay systems. Since that time a number of Panel and Working Group meetings have been held to continue the study. Interoperability is of interest to the three agencies because it offers a number of potential operational and economic benefits. This paper presents the history and status of the SNIP study.

  4. Live Social Semantics

    NASA Astrophysics Data System (ADS)

    Alani, Harith; Szomszor, Martin; Cattuto, Ciro; van den Broeck, Wouter; Correndo, Gianluca; Barrat, Alain

    Social interactions are one of the key factors in the success of conferences and similar community gatherings. This paper describes a novel application that integrates data from the semantic web, online social networks, and a real-world contact sensing platform. This application was successfully deployed at ESWC09 and actively used by 139 people. Personal profiles of the participants were automatically generated using several Web 2.0 systems and semantic academic data sources, and integrated in real-time with face-to-face contact networks derived from wearable sensors. Integration of all these heterogeneous data layers made it possible to offer conference attendees various services that enhanced their social experience, such as visualisation of contact data and a site to explore and connect with other participants. This paper describes the architecture of the application, the services we provided, and the results we achieved in this deployment.

  5. Semantic Clustering of Search Engine Results

    PubMed Central

    Soliman, Sara Saad; El-Sayed, Maged F.; Hassan, Yasser F.

    2015-01-01

    This paper presents a novel approach to search engine results clustering that relies on the semantics of the retrieved documents rather than just the terms in those documents. The proposed approach takes into consideration both lexical and semantic similarities among documents and applies a spreading-activation technique in order to generate semantically meaningful clusters. This approach allows documents that are semantically similar to be clustered together, rather than clustering documents based only on similar terms. A prototype is implemented and several experiments are conducted to test the proposed solution. The results of the experiments confirm that the proposed solution achieves remarkable results in terms of precision. PMID:26933673
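    A toy version of spreading activation over a term graph illustrates the mechanism: activation flows from a seed term along weighted semantic links and decays with distance, so semantically related terms light up even when they never co-occur. The graph, weights, decay factor, and threshold below are invented for illustration; the paper's actual algorithm and lexicon are not reproduced here.

```python
# Tiny semantic graph: edges are (neighbor, link weight).
GRAPH = {
    "jaguar": [("cat", 0.9), ("car", 0.8)],
    "cat":    [("feline", 0.9), ("pet", 0.7)],
    "car":    [("vehicle", 0.9), ("engine", 0.6)],
}

def spread(seed, decay=0.5, threshold=0.05):
    """Return node -> activation after breadth-first spreading."""
    activation = {seed: 1.0}
    frontier = [seed]
    while frontier:
        node = frontier.pop(0)
        for neighbor, weight in GRAPH.get(node, []):
            a = activation[node] * weight * decay
            # Keep only the strongest activation per node, prune weak ones.
            if a > threshold and a > activation.get(neighbor, 0.0):
                activation[neighbor] = a
                frontier.append(neighbor)
    return activation

act = spread("jaguar")
```

    Documents could then be compared on these activation vectors instead of raw term overlap, which is the intuition behind clustering on semantics rather than terms.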

  6. Comparative Studies of Semantic Structure. Final Report.

    ERIC Educational Resources Information Center

    Burton, Michael L.

    The objective of this research was to successfully model several semantic domains in English and Spanish, in order to (a) test the reliability of judged-similarities tasks in cross-cultural situations and (b) obtain information about changes in semantic organization with bilingualism and education. To achieve these goals, data were collected in…

  7. Building interoperable health information systems using agent and workflow technologies.

    PubMed

    Koufi, Vassiliki; Malamateniou, Flora; Vassilacopoulos, George

    2009-01-01

    Healthcare is an increasingly collaborative enterprise involving many individuals and organizations that coordinate their efforts toward promoting quality and efficient delivery of healthcare through the use of interoperable healthcare information systems. This paper presents a mediator-based approach for achieving data and service interoperability among disparate and geographically dispersed healthcare information systems. The proposed system architecture enables decoupling of the client applications and the server-side implementations while it ensures security in all transactions. It is a distributed system architecture based on the agent-oriented paradigm for communication and life cycle management while interactions are described according to the workflow metaphor. Thus robustness, high flexibility and fault tolerance are provided in an environment as dynamic and heterogeneous as healthcare. PMID:19745293

  8. Interoperable Solar Data and Metadata via LISIRD 3

    NASA Astrophysics Data System (ADS)

    Wilson, A.; Lindholm, D. M.; Pankratz, C. K.; Snow, M. A.; Woods, T. N.

    2015-12-01

    LISIRD 3 is a major upgrade of the LASP Interactive Solar Irradiance Data Center (LISIRD), which serves several dozen space based solar irradiance and related data products to the public. Through interactive plots, LISIRD 3 provides data browsing supported by data subsetting and aggregation. Incorporating a semantically enabled metadata repository, LISIRD 3 users see current, vetted, consistent information about the datasets offered. Users can now also search for datasets based on metadata fields such as dataset type and/or spectral or temporal range. This semantic database enables metadata browsing, so users can discover the relationships between datasets, instruments, spacecraft, mission and PI. The database also enables creation and publication of metadata records in a variety of formats, such as SPASE or ISO, making these datasets more discoverable. The database also enables the possibility of a public SPARQL endpoint, making the metadata browsable in an automated fashion. LISIRD 3's data access middleware, LaTiS, provides dynamic, on demand reformatting of data and timestamps, subsetting and aggregation, and other server side functionality via a RESTful OPeNDAP compliant API, enabling interoperability between LASP datasets and many common tools. LISIRD 3's templated front end design, coupled with the uniform data interface offered by LaTiS, allows easy integration of new datasets. Consequently the number and variety of datasets offered by LISIRD has grown to encompass several dozen, with many more to come. This poster will discuss design and implementation of LISIRD 3, including tools used, capabilities enabled, and issues encountered.
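    The kind of server-side subsetting described above can be pictured as simple URL construction against a RESTful data endpoint. The host, dataset name, and query parameter names below are placeholders invented for this sketch, not the actual LaTiS/LISIRD API.

```python
from urllib.parse import urlencode

# Hypothetical endpoint and dataset name; the real service differs.
BASE = "https://example.org/latis/irradiance.csv"

def subset_url(start, end, fields=("time", "irradiance")):
    """Compose a URL selecting fields over a closed time range."""
    query = urlencode({
        "start": start,
        "end": end,
        "fields": ",".join(fields),
    })
    return BASE + "?" + query

url = subset_url("2015-01-01", "2015-12-31")
```

    The design point is that subsetting, aggregation, and reformatting happen on the server, so a client tool only ever sees data already shaped for its needs.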

  9. Satellite-Terrestrial Network Interoperability

    NASA Technical Reports Server (NTRS)

    vonDeak, Thomas C.

    1998-01-01

    The developing national and global information infrastructures (NII/GII) are being built upon the asynchronous transfer mode (ATM) telecommunications protocol and associated protocol standards. These protocols are themselves under development through the telecommunications standards process defined by the International Telecommunications Union (ITU), which as a body is sanctioned by the United Nations. All telecommunications manufacturers use these standards to create products that can interoperate. The ITU has recognized the ATM Forum as the instrument for the development of ATM protocols. This forum is a consortium of industry, academia, and government entities formed to quickly develop standards for the ATM infrastructure. However, because the participants represent a predominately terrestrial network viewpoint, the use of satellites in the national and global information infrastructures could be severely compromised. Consequently, through an ongoing task order, the NASA Lewis Research Center asked Sterling Software, Inc., to communicate with the ATM Forum in support of the interoperability of satellite-terrestrial networks. This year, Dr. Raj Jain of the Ohio State University, under contract to Sterling, authored or coauthored 32 explanatory documents delivered to the ATM Forum in the areas of Guaranteed Frame Rate for Transmission Control Protocol/Internet Protocol (TCP/IP), Available Bit Rate, performance testing, Variable Bit Rate voice over ATM, TCP over Unspecified Bit Rate+, Virtual Source/Virtual Destination, and network management. These contributions have had a significant impact on the content of the standards that the ATM Forum is developing. Some of the more significant accomplishments have been: (1) The adoption by the ATM Forum of a new definition for Message-In, Message-Out latency; and (2) Improved text (clearer wording and newly defined terms) for measurement procedures, foreground and background traffic, and scalable configuration in the

  10. Provenance in Data Interoperability for Multi-Sensor Intercomparison

    NASA Astrophysics Data System (ADS)

    Lynnes, C.; Leptoukh, G.; Berrick, S.; Shen, S.; Prados, A.; Fox, P.; Yang, W.; Min, M.; Holloway, D.; Enloe, Y.

    2008-12-01

    As our inventory of Earth science data sets grows, the ability to compare, merge and fuse multiple datasets grows in importance. This implies a need for deeper data interoperability than we have now. Many efforts (e.g. OPeNDAP, Open Geospatial Consortium) have broken down format barriers to interoperability; the next challenge is the semantic aspects of the data. Consider the issues when satellite data are merged, cross-calibrated, validated, inter-compared and fused. We must determine how to match up data sets that are related, yet different in significant ways: the exact nature of the phenomenon being measured, measurement technique, exact location in space-time, or the quality of the measurements. If subtle distinctions between similar measurements are not clear to the user, the results can be meaningless or even lead to an incorrect interpretation of the data. Most of these distinctions trace back to how the data came to be: sensors, processing, and quality assessment. For example, monthly averages of satellite-based aerosol measurements often show significant discrepancies, which might be due to differences in spatio-temporal aggregation, sampling issues, sensor biases, algorithm differences and/or calibration issues. This provenance information must therefore be captured in a semantic framework that allows sophisticated data inter-use tools to incorporate it, and eventually aid in the interpretation of comparison or merged products. Semantic web technology allows us to encode our knowledge of measurement characteristics, phenomena measured, space-time representations, and data quality representation in a well-structured, machine-readable ontology and rulesets. An analysis tool can use this knowledge to show users the provenance-related distinctions between two variables, advising on options for further data processing and analysis. An additional problem for workflows distributed across heterogeneous systems is retrieval and transport of provenance
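    One way to picture the provenance framework described here is as a small triple store plus a rule that surfaces caveats whenever two variables differ in how they came to be. The predicates, instrument names, and aggregation labels below are invented for illustration; a real system would use a proper ontology and ruleset.

```python
# Provenance facts as (subject, predicate, object) triples.
TRIPLES = {
    ("aerosol_A", "derivedFrom", "sensor_MODIS"),
    ("aerosol_A", "aggregation", "monthly_mean"),
    ("aerosol_B", "derivedFrom", "sensor_MISR"),
    ("aerosol_B", "aggregation", "daily_mean"),
}

def value_of(subject, predicate):
    """Look up the single object for a (subject, predicate) pair."""
    return next(o for s, p, o in TRIPLES if s == subject and p == predicate)

def comparison_caveats(var_a, var_b):
    """Return human-readable caveats before comparing two variables."""
    caveats = []
    for pred in ("derivedFrom", "aggregation"):
        a, b = value_of(var_a, pred), value_of(var_b, pred)
        if a != b:
            caveats.append(f"{pred} differs: {a} vs {b}")
    return caveats

caveats = comparison_caveats("aerosol_A", "aerosol_B")
```

    An analysis tool consulting such facts could warn that the two aerosol products differ in both sensor and aggregation before the user interprets a discrepancy as physical.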

  11. Provenance in Data Interoperability for Multi-Sensor Intercomparison

    NASA Technical Reports Server (NTRS)

    Lynnes, Chris; Leptoukh, Greg; Berrick, Steve; Shen, Suhung; Prados, Ana; Fox, Peter; Yang, Wenli; Min, Min; Holloway, Dan; Enloe, Yonsook

    2008-01-01

    As our inventory of Earth science data sets grows, the ability to compare, merge and fuse multiple datasets grows in importance. This requires a deeper data interoperability than we have now. Efforts such as Open Geospatial Consortium and OPeNDAP (Open-source Project for a Network Data Access Protocol) have broken down format barriers to interoperability; the next challenge is the semantic aspects of the data. Consider the issues when satellite data are merged, cross-calibrated, validated, inter-compared and fused. We must match up data sets that are related, yet different in significant ways: the phenomenon being measured, measurement technique, location in space-time or quality of the measurements. If subtle distinctions between similar measurements are not clear to the user, results can be meaningless or lead to an incorrect interpretation of the data. Most of these distinctions trace to how the data came to be: sensors, processing and quality assessment. For example, monthly averages of satellite-based aerosol measurements often show significant discrepancies, which might be due to differences in spatio-temporal aggregation, sampling issues, sensor biases, algorithm differences or calibration issues. Provenance information must be captured in a semantic framework that allows data inter-use tools to incorporate it and aid in the interpretation of comparison or merged products. Semantic web technology allows us to encode our knowledge of measurement characteristics, phenomena measured, space-time representation, and data quality attributes in a well-structured, machine-readable ontology and rulesets. An analysis tool can use this knowledge to show users the provenance-related distinctions between two variables, advising on options for further data processing and analysis. An additional problem for workflows distributed across heterogeneous systems is retrieval and transport of provenance. Provenance may be either embedded within the data payload, or transmitted

  12. Political, policy and social barriers to health system interoperability: emerging opportunities of Web 2.0 and 3.0.

    PubMed

    Juzwishin, Donald W M

    2009-01-01

    Achieving effective health informatics interoperability in a fragmented and uncoordinated health system is by definition not possible. Interoperability requires the simultaneous integration of health care processes and information across different types and levels of care (systems thinking). The fundamental argument of this paper is that information system interoperability will remain an unfulfilled hope until health reforms effectively address the governance (accountability), structural and process barriers to interoperability of health care delivery. The ascendency of Web 2.0 and 3.0, although still unproven, signals the opportunity to accelerate patients' access to health information and their health record. Policy suggestions for simultaneously advancing health system delivery and information system interoperability are posited. PMID:20166516

  13. Open-Source Semantic and Schematic Mediation in Hydrogeologic Spatial Data Infrastructures

    NASA Astrophysics Data System (ADS)

    Boisvert, E.; Brodaric, B.

    2008-12-01

    A common task in cyber-based data environments, hydrogeologic or otherwise, is an initial search for data amongst distributed heterogeneous sources, followed by amalgamation of the multiple results into a single file organized using a common structure and perhaps standard content. For example, querying water well databases to obtain a list of the rock materials that occur beyond a certain ground depth, represented in some specific XML dialect. This task is often achieved with the aid of open geospatial technologies (OGC), which conveniently enable interoperability at the system and syntax levels by providing standard web service interfaces (WMS, WFS, WCS) and a standard data transfer language (GML). However, at present such technologies, which are mainly non-open source, provide minimal support for interoperating at the schematic and semantic levels, meaning it is difficult to query the data sources and obtain results in a common data structure populated with standard content. Classical data integration systems provide mediator and wrapper middleware to address this issue: mediators dispatch queries to distributed data repositories and integrate query results, while wrappers perform translation to common standards for both queries and results, and these actions are typically supported by ontologies. Under this classical scenario existing open geospatial services can be considered wrappers with minimal translation capacity, thus requiring a mediator to both integrate and translate. Consequently, we have used open source components to develop a re-usable mediator that operates as a virtual open geospatial web service (WFS), one that integrates and translates both query requests and results from OGC-wrapped data sources to common standards. The mediator is designed as a customizable XML processing pipeline that operates on declarative descriptions that support schematic and semantic translation. It is being implemented in virtual environments for hydrogeology to
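    The mediator/wrapper split described above can be sketched as follows: two sources expose records in their own schemas, thin wrappers translate each into a common structure, and the mediator dispatches the query, merges, and filters. Field names, sources, and the unit conversion are fabricated for this sketch; a real deployment would speak WFS/GML rather than Python dictionaries.

```python
# Two "databases" with heterogeneous schemas (one metric, one imperial).
SOURCE_A = [{"material": "sandstone", "depth_m": 12.0}]
SOURCE_B = [{"rock": "granite", "depth_ft": 60.0}]

def wrap_a(rec):
    """Wrapper: source A already matches the common schema."""
    return {"material": rec["material"], "depth_m": rec["depth_m"]}

def wrap_b(rec):
    """Wrapper: rename fields and convert feet to metres."""
    return {"material": rec["rock"], "depth_m": rec["depth_ft"] * 0.3048}

def mediate(min_depth_m):
    """Mediator: query both sources, translate, merge, filter by depth."""
    merged = [wrap_a(r) for r in SOURCE_A] + [wrap_b(r) for r in SOURCE_B]
    return [r for r in merged if r["depth_m"] >= min_depth_m]

results = mediate(15.0)
```

    The client sees one virtual service and one schema; all the schematic and semantic translation is hidden behind the mediator, which is the role the virtual WFS plays here.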

  14. Before you make the data interoperable you have to make the people interoperable

    NASA Astrophysics Data System (ADS)

    Jackson, I.

    2008-12-01

    In February 2006 a deceptively simple concept was put forward. Could we use the International Year of Planet Earth 2008 as a stimulus to begin the creation of a digital geological map of the planet at a target scale of 1:1 million? Could we design and initiate a project that uniquely mobilises geological surveys around the world to act as the drivers and sustainable data providers of this global dataset? Further, could we synergistically use this geoscientist-friendly vehicle of creating a tangible geological map to accelerate progress of an emerging global geoscience data model and interchange standard? Finally, could we use the project to transfer know-how to developing countries and reduce the length and expense of their learning curve, while at the same time producing geoscience maps and data that could attract interest and investment? These aspirations, plus the chance to generate a global digital geological dataset to assist in the understanding of global environmental problems and the opportunity to raise the profile of geoscience as part of IYPE seemed more than enough reasons to take the proposal to the next stage. In March 2007, in Brighton, UK, 81 delegates from 43 countries gathered together to consider the creation of this global interoperable geological map dataset. The participants unanimously agreed the Brighton "Accord" and kicked off "OneGeology", an initiative that now has the support of more than 85 nations. Brighton was never designed to be a scientific or technical meeting: it was overtly about people and their interaction - would these delegates, with their diverse cultural and technical backgrounds, be prepared to work together to achieve something which, while technically challenging, was not complex in the context of leading edge geoscience informatics. Could we scale up what is a simple informatics model at national level, to deliver global coverage and access? The major challenges for OneGeology (and the deployment of interoperability

  15. Report on the Second Catalog Interoperability Workshop

    NASA Technical Reports Server (NTRS)

    Thieman, James R.; James, Mary E.

    1988-01-01

    The events, resolutions, and recommendations of the Second Catalog Interoperability Workshop, held at JPL in January 1988, are discussed. This workshop dealt with the issues of standardization and communication among directories, catalogs, and inventories in the earth and space science data management environment. The Directory Interchange Format, being constructed as a standard for the exchange of directory information among participating data systems, is discussed. Involvement in the interoperability effort by NASA, NOAA, USGS, and NSF is described, and plans for future interoperability are considered. The NASA Master Directory prototype is presented and critiqued, and options for additional capabilities are debated.

  16. Impact of coalition interoperability on PKI

    NASA Astrophysics Data System (ADS)

    Krall, Edward J.

    2003-07-01

    This paper examines methods for providing PKI interoperability among units of a coalition of armed forces drawn from different nations. The area in question is tactical identity management, for the purposes of confidentiality, integrity and non-repudiation in such a dynamic coalition. The interoperating applications under consideration range from email and other forms of store-and-forward messaging to TLS and IPSEC-protected real-time communications. Six interoperability architectures are examined with advantages and disadvantages of each described in the paper.

  17. Towards Model Driven Tool Interoperability: Bridging Eclipse and Microsoft Modeling Tools

    NASA Astrophysics Data System (ADS)

    Brunelière, Hugo; Cabot, Jordi; Clasen, Cauê; Jouault, Frédéric; Bézivin, Jean

    Successful application of model-driven engineering approaches requires interchanging a lot of relevant data among the tool ecosystem employed by an engineering team (e.g., requirements elicitation tools, several kinds of modeling tools, reverse engineering tools, development platforms and so on). Unfortunately, this is not a trivial task. Poor tool interoperability makes data interchange a challenge even among tools with a similar scope. This paper presents a model-based solution to overcome such interoperability issues. With our approach, the internal schema(s) (i.e., metamodel(s)) of each tool are made explicit and used as the basis for resolving syntactic and semantic differences between the tools. Once the corresponding metamodels are aligned, model-to-model transformations are (semi)automatically derived and executed to perform the actual data interchange. We illustrate our approach by bridging the Eclipse and Microsoft (DSL Tools and SQL Server Modeling) modeling tools.
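    The core move, deriving a model-to-model transformation from an alignment between two metamodels, reduces in the simplest case to a lookup table applied to every model element. The metamodels and alignment below are invented; real bridges of this kind would use EMF/ATL-style tooling rather than this sketch.

```python
# Alignment: tool A's metaclasses mapped onto tool B's metaclasses.
ALIGNMENT = {
    "EntityType": "Class",
    "Attribute": "Property",
}

def transform(model_a):
    """Rewrite each element's metaclass via the alignment table."""
    return [
        {"metaclass": ALIGNMENT[el["metaclass"]], "name": el["name"]}
        for el in model_a
    ]

model_b = transform([
    {"metaclass": "EntityType", "name": "Customer"},
    {"metaclass": "Attribute", "name": "email"},
])
```

    Once such an alignment is written down once, the data interchange itself is mechanical, which is why the transformations can be (semi)automatically derived.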

  18. Service Knowledge Spaces for Semantic Collaboration in Web-based Systems

    NASA Astrophysics Data System (ADS)

    Bianchini, Devis; de Antonellis, Valeria; Melchiori, Michele

    Semantic Web technologies have been applied to enable collaboration in open distributed systems, where interoperability issues arise due to the absence of a global view of the shared resources. Adoption of service-oriented technologies has improved interoperability at the application level by exporting system functionalities as Web services. In fact, Service Oriented Architecture (SOA) constitutes an appropriate platform-independent approach to implementing collaboration activities by means of automatic service discovery and composition. Recently, service discovery has been applied to collaborative environments such as P2P, where independent partners need to cooperate through resource sharing without a stable network configuration and while adopting different semantic models. Model-based techniques relying on the Semantic Web need to be defined to generate semantic service descriptions, allowing collaborative partners to export their functionalities in a semantic way. Semantic-based service matchmaking techniques are in charge of effectively and efficiently evaluating similarity between service requests and service offers in a huge, dynamic distributed environment. The result is an evolving service knowledge space in which collaborative partners that provide similar services are semantically related and constitute synergic service centres in a given domain. Specific modeling requirements related to Semantic Web, service-oriented, and P2P technologies must be considered.

  19. Semantic Web Service Framework to Intelligent Distributed Manufacturing

    SciTech Connect

    Kulvatunyou, Boonserm

    2005-12-01

    As markets become unexpectedly turbulent with a shortened product life cycle and a power shift towards buyers, the need for methods to develop products, production facilities, and supporting software rapidly and cost-effectively is becoming urgent. The use of a loosely integrated virtual enterprise based framework holds the potential of surviving changing market needs. However, its success requires reliable and large-scale interoperation among trading partners via a semantic web of trading partners' services whose properties, capabilities, and interfaces are encoded in an unambiguous as well as computer-understandable form. This paper demonstrates a promising approach to integration and interoperation between a design house and a manufacturer that may or may not have a prior relationship, by developing semantic web services for business and engineering transactions. To this end, detailed activity and information flow diagrams are developed in which the two trading partners exchange messages and documents. The properties and capabilities of the manufacturer sites are defined using the DARPA Agent Markup Language (DAML) ontology definition language. The prototype development of semantic web services shows that enterprises can interoperate widely in an unambiguous and autonomous manner. This contributes towards the realization of virtual enterprises at low cost.
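    Capability matchmaking of the kind the paper encodes in DAML can be approximated with set containment: a request matches a manufacturer when every requested facet is covered by that site's published capabilities. The capability vocabulary and shop names below are invented for illustration, not taken from the paper's ontology.

```python
# Machine-readable capability descriptions per manufacturer site.
CAPABILITIES = {
    "shop_1": {"process": {"milling", "drilling"}, "material": {"aluminum"}},
    "shop_2": {"process": {"milling"}, "material": {"steel", "aluminum"}},
}

def matches(request):
    """Return shops whose capabilities cover every requested facet."""
    return sorted(
        shop for shop, caps in CAPABILITIES.items()
        if all(request[k] <= caps.get(k, set()) for k in request)
    )

hits = matches({"process": {"milling"}, "material": {"steel"}})
```

    Because the descriptions are computer-understandable, a design house with no prior relationship to either shop can discover the right partner autonomously, which is the virtual-enterprise scenario the abstract targets.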

  20. Application-Level Interoperability Across Grids and Clouds

    NASA Astrophysics Data System (ADS)

    Jha, Shantenu; Luckow, Andre; Merzky, Andre; Erdely, Miklos; Sehgal, Saurabh

    Application-level interoperability is defined as the ability of an application to utilize multiple distributed heterogeneous resources. Such interoperability is becoming increasingly important with increasing volumes of data, multiple sources of data, and multiple resource types. The primary aim of this chapter is to understand the different ways in which application-level interoperability can be provided across distributed infrastructure. We achieve this by (i) using the canonical wordcount application, based on an enhanced version of MapReduce that scales out across clusters, clouds, and HPC resources, (ii) establishing how SAGA enables the execution of the wordcount application using MapReduce and other programming models such as Sphere concurrently, and (iii) demonstrating the scale-out of ensemble-based biomolecular simulations across multiple resources. We show user-level control of the relative placement of compute and data, and also provide simple performance measures and analysis of SAGA-MapReduce when using multiple, different, heterogeneous infrastructures concurrently for the same problem instance. Finally, we discuss Azure and some of the system-level abstractions that it provides, and show how it is used to support ensemble-based biomolecular simulations.
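    The canonical wordcount mentioned above, written as explicit map/shuffle/reduce phases. This runs locally in plain Python and is meant only to mirror the programming model; it is not the SAGA-MapReduce implementation, which distributes these same phases across heterogeneous resources.

```python
from collections import defaultdict

def map_phase(chunk):
    """Map: emit (word, 1) pairs for one input chunk."""
    return [(word, 1) for word in chunk.split()]

def shuffle(pairs):
    """Shuffle: group intermediate pairs by key."""
    groups = defaultdict(list)
    for key, value in pairs:
        groups[key].append(value)
    return groups

def reduce_phase(groups):
    """Reduce: sum the counts for each word."""
    return {word: sum(counts) for word, counts in groups.items()}

# Each chunk could live on a different cluster, cloud, or HPC resource;
# only the shuffle needs to move data between them.
chunks = ["to be or", "not to be"]
pairs = [p for c in chunks for p in map_phase(c)]
counts = reduce_phase(shuffle(pairs))
```

    The reason wordcount is the canonical interoperability demo is visible here: the map phase is embarrassingly parallel over chunks, so chunks can be placed on whatever resource is available without changing the program.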

  1. Preserved Musical Semantic Memory in Semantic Dementia

    PubMed Central

    Weinstein, Jessica; Koenig, Phyllis; Gunawardena, Delani; McMillan, Corey; Bonner, Michael; Grossman, Murray

    2012-01-01

    Objective To understand the scope of semantic impairment in semantic dementia. Design Case study. Setting Academic medical center. Patient A man with semantic dementia, as demonstrated by clinical, neuropsychological, and imaging studies. Main Outcome Measures Music performance and magnetic resonance imaging results. Results Despite profoundly impaired semantic memory for words and objects due to left temporal lobe atrophy, this semiprofessional musician was creative and expressive in demonstrating preserved musical knowledge. Conclusion Long-term representations of words and objects in semantic memory may be dissociated from meaningful knowledge in other domains, such as music. PMID:21320991

  2. River Basin Standards Interoperability Pilot

    NASA Astrophysics Data System (ADS)

    Pesquer, Lluís; Masó, Joan; Stasch, Christoph

    2016-04-01

    There are many water-related data sources and tools in Europe applicable to river basin management, but fragmentation and a lack of coordination between countries still exist. The European Commission and the member states have financed several research and innovation projects in support of the Water Framework Directive. Only a few of them use the recently emerging hydrological standards, such as OGC WaterML 2.0. WaterInnEU is a Horizon 2020 project focused on creating a marketplace to enhance the exploitation of EU-funded ICT models, tools, protocols and policy briefs related to water, and to establish suitable conditions for new market opportunities based on these offerings. One of WaterInnEU's main goals is to assess the level of standardization and interoperability of these outcomes as a mechanism to integrate ICT-based tools, incorporate open data platforms and generate a palette of interchangeable components that are able to use the water data emerging from the recently proposed open data sharing processes and data models stimulated by initiatives such as the INSPIRE directive. As part of the standardization and interoperability activities in the project, the authors are designing an experiment (RIBASE, the present work) to demonstrate how current ICT-based tools and water data can work in combination with geospatial web services in the Scheldt river basin. The main structure of this experiment, which is the core of the present work, is composed of the following steps: - Extraction of information from river gauge data in OGC WaterML 2.0 format using SOS services (preferably compliant to the OGC SOS 2.0 Hydrology Profile Best Practice). - Modelling floods using WPS 2.0, with WaterML 2.0 data and weather forecast models as input. - Evaluation of the applicability of Sensor Notification Services in water emergencies. - Open distribution of the input and output data as OGC web services (WaterML / WCS / WFS) and with visualization utilities (WMS).
The architecture
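The first step listed above, extracting gauge readings through an SOS service, amounts to issuing a GetObservation request that asks for WaterML 2.0 output. A minimal sketch of building such a request follows; the endpoint URL, offering and observed property are hypothetical placeholders, not identifiers from the RIBASE experiment:

```python
from urllib.parse import urlencode

# Hypothetical SOS 2.0 endpoint for a river gauge network; the URL,
# offering and property identifiers are illustrative only.
SOS_ENDPOINT = "https://example.org/sos"

def get_observation_url(offering, observed_property, start, end):
    """Build a KVP GetObservation request asking for WaterML 2.0 output."""
    params = {
        "service": "SOS",
        "version": "2.0.0",
        "request": "GetObservation",
        "offering": offering,
        "observedProperty": observed_property,
        "temporalFilter": f"om:phenomenonTime,{start}/{end}",
        "responseFormat": "http://www.opengis.net/waterml/2.0",
    }
    return SOS_ENDPOINT + "?" + urlencode(params)

url = get_observation_url("gauge-01", "WaterLevel",
                          "2015-01-01T00:00:00Z", "2015-01-07T00:00:00Z")
```

The same key-value pattern extends to other SOS operations such as DescribeSensor; production clients would typically use a library such as OWSLib rather than hand-building URLs.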

  3. Diabetes Device Interoperability for Improved Diabetes Management

    PubMed Central

    Silk, Alain D.

    2015-01-01

    Scientific and technological advancements have led to the increasing availability and use of sophisticated devices for diabetes management, with corresponding improvements in public health. These devices are often capable of sharing data with a few other specific devices but are generally not broadly interoperable; they cannot work together with a wide variety of other devices. As a result of limited interoperability, benefits of modern diabetes devices and potential for development of innovative new diabetes technologies are not being fully realized. Here we discuss diabetes device interoperability in general, then focus on 4 examples that show how diabetes management could benefit from enhanced interoperability: remote monitoring and data sharing, integrating data from multiple devices to better inform diabetes management strategies, device consolidation, and artificial pancreas development. PMID:26178738

  4. Reminiscing about 15 years of interoperability efforts

    DOE PAGES Beta

    Van de Sompel, Herbert; Nelson, Michael L.

    2015-11-01

    Over the past fifteen years, our perspective on tackling information interoperability problems for web-based scholarship has evolved significantly. In this opinion piece, we look back at three efforts that we have been involved in that aptly illustrate this evolution: OAI-PMH, OAI-ORE, and Memento. Understanding that no interoperability specification is neutral, we attempt to characterize the perspectives and technical toolkits that provided the basis for these endeavors. In that regard, we consider repository-centric and web-centric interoperability perspectives, and the use of a Linked Data or a REST/HATEOAS technology stack, respectively. In addition, we lament the lack of interoperability across nodes that play a role in web-based scholarship, but end on a constructive note with some ideas regarding a possible path forward.

  5. Reminiscing about 15 years of interoperability efforts

    SciTech Connect

    Van de Sompel, Herbert; Nelson, Michael L.

    2015-11-01

    Over the past fifteen years, our perspective on tackling information interoperability problems for web-based scholarship has evolved significantly. In this opinion piece, we look back at three efforts that we have been involved in that aptly illustrate this evolution: OAI-PMH, OAI-ORE, and Memento. Understanding that no interoperability specification is neutral, we attempt to characterize the perspectives and technical toolkits that provided the basis for these endeavors. In that regard, we consider repository-centric and web-centric interoperability perspectives, and the use of a Linked Data or a REST/HATEOAS technology stack, respectively. In addition, we lament the lack of interoperability across nodes that play a role in web-based scholarship, but end on a constructive note with some ideas regarding a possible path forward.

  6. Scalability and interoperability within glideinWMS

    SciTech Connect

    Bradley, D.; Sfiligoi, I.; Padhi, S.; Frey, J.; Tannenbaum, T.; /Wisconsin U., Madison

    2010-01-01

    Physicists have access to thousands of CPUs in grid federations such as OSG and EGEE. With the start-up of the LHC, it is essential for individuals or groups of users to wrap together available resources from multiple sites across multiple grids under a higher user-controlled layer in order to provide a homogeneous pool of available resources. One such system is glideinWMS, which is based on the Condor batch system. A general discussion of glideinWMS can be found elsewhere. Here, we focus on recent advances in extending its reach: scalability and integration of heterogeneous compute elements. We demonstrate that the new developments exceed the design goal of over 10,000 simultaneous running jobs under a single Condor schedd, using strong security protocols across global networks, and sustaining a steady-state job completion rate of a few Hz. We also show interoperability across heterogeneous computing elements achieved using client-side methods. We discuss this technique and the challenges in direct access to NorduGrid and CREAM compute elements, in addition to Globus based systems.

  7. GEOSS interoperability for Weather, Ocean and Water

    NASA Astrophysics Data System (ADS)

    Richardson, David; Nyenhuis, Michael; Zsoter, Ervin; Pappenberger, Florian

    2013-04-01

    "Understanding the Earth system — its weather, climate, oceans, atmosphere, water, land, geodynamics, natural resources, ecosystems, and natural and human-induced hazards — is crucial to enhancing human health, safety and welfare, alleviating human suffering including poverty, protecting the global environment, reducing disaster losses, and achieving sustainable development. Observations of the Earth system constitute critical input for advancing this understanding." With this in mind, the Group on Earth Observations (GEO) started implementing the Global Earth Observation System of Systems (GEOSS). GEOWOW, short for "GEOSS interoperability for Weather, Ocean and Water", is supporting this objective. GEOWOW's main challenge is to improve Earth observation data discovery, accessibility and exploitability, and to evolve GEOSS in terms of interoperability, standardization and functionality. One of the main goals behind the GEOWOW project is to demonstrate the value of the TIGGE archive in interdisciplinary applications, providing a vast amount of useful and easily accessible information to users through the GEO Common Infrastructure (GCI). GEOWOW aims at developing functionalities that will allow easy discovery, access and use of TIGGE archive data and of in-situ observations, e.g. from the Global Runoff Data Centre (GRDC), to support applications such as river discharge forecasting. TIGGE (THORPEX Interactive Grand Global Ensemble) is a key component of THORPEX: a World Weather Research Programme to accelerate improvements in the accuracy of 1-day to 2-week high-impact weather forecasts for the benefit of humanity. The TIGGE archive consists of ensemble weather forecast data from ten global NWP centres, starting from October 2006, which has been made available for scientific research. The TIGGE archive has been used to analyse hydro-meteorological forecasts of flooding in Europe as well as in China. In general the analysis has been favourable in terms of

  8. Provenance-Based Approaches to Semantic Web Service Discovery and Usage

    ERIC Educational Resources Information Center

    Narock, Thomas William

    2012-01-01

    The World Wide Web Consortium defines a Web Service as "a software system designed to support interoperable machine-to-machine interaction over a network." Web Services have become increasingly important both within and across organizational boundaries. With the recent advent of the Semantic Web, web services have evolved into semantic…

  9. Data interoperability software solution for emergency reaction in the Europe Union

    NASA Astrophysics Data System (ADS)

    Casado, R.; Rubiera, E.; Sacristan, M.; Schütte, F.; Peters, R.

    2015-07-01

    Emergency management becomes more challenging in international crisis episodes because of cultural, semantic and linguistic differences between all stakeholders, especially first responders. Misunderstandings between first responders make decision making slower and more difficult. However, the spread and development of networks and IT-based emergency management systems (EMSs) have improved emergency responses, which have become more coordinated. Despite improvements made in recent years, EMSs have still not solved the problems related to cultural, semantic and linguistic differences, which are the real cause of slower decision making. In addition, from a technical perspective, the consolidation of current EMSs and the different formats used to exchange information pose another problem to be solved by any solution proposed for information interoperability between heterogeneous EMSs in different contexts. To overcome these problems, we present a software solution based on semantic and mediation technologies. EMERGency ELements (EMERGEL) (Fundacion CTIC and AntwortING Ingenieurbüro PartG, 2013), a common and modular ontology shared by all the stakeholders, has been defined. It offers the best solution to gather all stakeholders' knowledge in a unique and flexible data model, taking into account different countries' cultural and linguistic issues. To deal with the diversity of data protocols and formats, we have designed a service-oriented architecture for data interoperability (named DISASTER: Data Interoperability Solution At STakeholders Emergency Reaction) providing a flexible, extensible solution to the mediation issues. Web services have been adopted as the specific technology to implement this paradigm, as the one with the most significant academic and industrial visibility and attraction.
Contributions of this work have been validated through the design and development of a cross-border realistic prototype scenario, actively involving both emergency managers and emergency

  10. Maturity Model for Advancing Smart Grid Interoperability

    SciTech Connect

    Knight, Mark; Widergren, Steven E.; Mater, J.; Montgomery, Austin

    2013-10-28

    Interoperability is about the properties of devices and systems to connect and work properly. Advancing interoperability eases integration and maintenance of the resulting interconnection. This leads to faster integration, lower labor and component costs, predictability of projects and the resulting performance, and evolutionary paths for upgrade. When specifications are shared and standardized, competition and novel solutions can bring new value streams to the community of stakeholders involved. Advancing interoperability involves reaching agreement for how things join at their interfaces. The quality of the agreements and the alignment of parties involved in the agreement present challenges that are best met with process improvement techniques. The GridWise® Architecture Council (GWAC) sponsored by the United States Department of Energy is supporting an effort to use concepts from capability maturity models used in the software industry to advance interoperability of smart grid technology. An interoperability maturity model has been drafted and experience is being gained through trials on various types of projects and community efforts. This paper describes the value and objectives of maturity models, the nature of the interoperability maturity model and how it compares with other maturity models, and experiences gained with its use.

  11. A flexible integration framework for a Semantic Geospatial Web application

    NASA Astrophysics Data System (ADS)

    Yuan, Ying; Mei, Kun; Bian, Fuling

    2008-10-01

    With the growth of World Wide Web technologies, access to and use of geospatial information has changed radically in the past decade. Previously, the data processed by a GIS, as well as its methods, resided locally and contained information that was sufficiently unambiguous in the respective information community. Now, both data and methods may be retrieved and combined from anywhere in the world, escaping their local contexts. The last few years have seen a growing interest in the field of the semantic geospatial web. With the development of semantic web technologies, we have seen the possibility of solving the heterogeneity/interoperation problem in the GIS community. Semantic geospatial web applications can support a wide variety of tasks including data integration, interoperability, knowledge reuse, spatial reasoning and many others. This paper proposes a flexible framework called GeoSWF (short for Geospatial Semantic Web Framework), which supports the semantic integration of distributed and heterogeneous geospatial information resources as well as semantic query and spatial-relationship reasoning. We design the architecture of GeoSWF by extending the MVC pattern. GeoSWF uses the geo-2007.owl ontology proposed by the W3C as the reference ontology for geospatial information, and designs different application ontologies according to the situation of the heterogeneous geospatial information resources. A Geospatial Ontology Creating Algorithm (GOCA) is designed to convert geospatial information into ontology instances represented in RDF/OWL. On top of these ontology instances, GeoSWF carries out semantic reasoning using the rule set stored in the knowledge base to generate new system queries. Query results are ranked by the Euclidean distance of each ontology instance. Finally, the paper gives conclusions and future work.
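The pipeline the abstract describes (convert features to ontology instances, query them, rank results by Euclidean distance) can be illustrated with a toy triple store; the prefixes and instance names below are invented for illustration and are not from GeoSWF or geo-2007.owl:

```python
import math

# Geospatial features as ontology instances: (subject, predicate, object)
# triples, a type-based query, and ranking by Euclidean distance.
triples = [
    ("ex:lake1", "rdf:type", "geo:Feature"),
    ("ex:lake1", "geo:lat", 30.5), ("ex:lake1", "geo:long", 114.3),
    ("ex:tower1", "rdf:type", "geo:Feature"),
    ("ex:tower1", "geo:lat", 30.6), ("ex:tower1", "geo:long", 114.2),
    ("ex:doc1", "rdf:type", "geo:Note"),  # not a feature, never matched
]

def value(subject, predicate):
    """Look up the single object for a subject/predicate pair."""
    return next(o for s, p, o in triples if s == subject and p == predicate)

def features_near(lat, long_):
    """Return all geo:Feature instances ranked by distance to (lat, long_)."""
    feats = [s for s, p, o in triples if p == "rdf:type" and o == "geo:Feature"]
    return sorted(feats, key=lambda f: math.hypot(
        value(f, "geo:lat") - lat, value(f, "geo:long") - long_))

ranked = features_near(30.5, 114.3)  # nearest feature first
```

A real implementation would of course use an RDF store and SPARQL rather than Python lists, but the query-then-rank flow is the same.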

  12. SOLE: Applying Semantics and Social Web to Support Technology Enhanced Learning in Software Engineering

    NASA Astrophysics Data System (ADS)

    Colomo-Palacios, Ricardo; Jiménez-López, Diego; García-Crespo, Ángel; Blanco-Iglesias, Borja

    eLearning educative processes are a challenge for educational institutions and education professionals. In an environment in which learning resources are being produced, catalogued and stored in innovative ways, SOLE provides a platform in which exam questions can be produced supported by Web 2.0 tools, catalogued and labeled via the semantic web, and stored and distributed using eLearning standards. This paper presents SOLE, a social network for sharing exam questions, particularized for the software engineering domain, based on semantics and built using semantic web and eLearning standards such as the IMS Question and Test Interoperability specification 2.1.

  13. A Semantic Web Blackboard System

    NASA Astrophysics Data System (ADS)

    McKenzie, Craig; Preece, Alun; Gray, Peter

    In this paper, we propose a Blackboard Architecture as a means for coordinating hybrid reasoning over the Semantic Web. We describe the components of traditional blackboard systems (Knowledge Sources, Blackboard, Controller) and then explain how we have enhanced these by incorporating some of the principles of the Semantic Web to produce our Semantic Web Blackboard. Much of the framework is already in place to facilitate our research: the communication protocol (HTTP); the data representation medium (RDF); a rich expressive description language (OWL); and a method of writing rules (SWRL). We further enhance this by adding our own constraint-based formalism (CIF/SWRL) into the mix. We provide an example walk-through of our test-bed system, the AKTive Workgroup Builder and Blackboard (AWB+B), illustrating the interaction and cooperation of the Knowledge Sources and providing some context as to how the solution is achieved. We conclude with the strengths and weaknesses of the architecture.

  14. European Interoperability Assets Register and Quality Framework Implementation.

    PubMed

    Moreno-Conde, Alberto; Thienpont, Geert; Lamote, Inge; Coorevits, Pascal; Parra, Carlos; Kalra, Dipak

    2016-01-01

    Interoperability assets is the term applied to any resource that can support the design, implementation and successful adoption of eHealth services that can exchange data meaningfully. Examples include functional requirements, specifications, standards, clinical models and term lists, guidance on how standards may be used concurrently, implementation guides, and educational resources. Unfortunately, these are largely accessible in ad hoc ways and result in scattered fragments of a solution space that urgently need to be brought together. At present, it is well known that new initiatives and projects will reinvent assets of which they were unaware, while those assets which were potentially of great value are forgotten, not maintained and eventually fall into disuse. This research has defined a quality in use model and assessed the suitability of this quality framework based on the feedback and opinion of a representative sample of potential end users. The quality framework covers the following domains of asset development and adoption: (i) Development process, (ii) Maturity level, (iii) Trustworthiness, (iv) Support & skills, (v) Sustainability, (vi) Semantic interoperability, (vii) Cost & effort of adoption, (viii) Maintenance. When participants were asked to evaluate the overall quality in use framework, 70% said they would recommend using the register to their colleagues, 70% felt that it could provide relevant benefits for discovering new assets, and 50% responded that it would support their decision making about the recommended asset to adopt or implement in their organisation. Several European projects have expressed interest in using the register, which will now be sustained and promoted by the European Institute for Innovation through Health Data. PMID:27577473

  15. Consistent Inventories - the Largest Obstacle to Interoperable Data Systems (Invited)

    NASA Astrophysics Data System (ADS)

    Cornillon, P. C.; Gallagher, J.; Holloway, D.

    2010-12-01

    The OPeNDAP Data Access Protocol (DAP) provides interoperable access to a large number of Earth science data sets at the data level; i.e., once a data archive of interest has been located and its structure determined, access to the data is relatively straightforward. At its core, the DAP is discipline neutral: it requires a rigid description of the syntax of the underlying data - the syntactic portion of the protocol - while allowing for the relatively unconstrained introduction of semantic information. The lack of consistent semantic information has proven to be an obstacle to the use of DAP-accessible archives, but it is an obstacle that is being addressed with discipline-specific metadata characterizations such as the NetCDF Climate and Forecast Metadata Conventions (CF) that are seeing increasing use in the Earth sciences. A more substantial obstacle to the use of Earth science data available via network-based access protocols such as the DAP is the lack of consistent standards for describing the structure and contents of large archives, generally referred to as data set inventories. The syntactic description required by the DAP relates to the syntax of the data within a data object, the bundles transferred to the client system, not to the organization of bundles within the archive. As with the data themselves, the problem of inventories is both syntactic and semantic: consistent descriptions of the organizational structure of most Earth science archives do not exist, nor do consistent descriptions of their contents that would allow the user to easily find data objects of interest. In this presentation we discuss the generation of consistent inventories developed by crawling OPeNDAP archives, methods of classifying the resulting Uniform Resource Locators (URLs) into ‘possible’ data sets, and interactions with the data provider to refine the groups and to properly annotate them. The idea is to simplify
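The URL-classification step described above can be sketched by collapsing the variable (date-like) parts of crawled URLs, so that files belonging to one time series fall into the same ‘possible’ data set; the URLs below are invented examples, not real OPeNDAP archives:

```python
import re
from collections import defaultdict

# URLs gathered by crawling a (hypothetical) OPeNDAP archive.
urls = [
    "http://example.org/opendap/sst/2010/001/sst.2010001.nc",
    "http://example.org/opendap/sst/2010/002/sst.2010002.nc",
    "http://example.org/opendap/chl/2010/001/chl.2010001.nc",
]

def dataset_key(url):
    """Collapse runs of digits so files of one series share a key."""
    return re.sub(r"\d+", "#", url)

# Group URLs under their collapsed key: each group is a candidate data set
# to be refined and annotated with the data provider.
inventory = defaultdict(list)
for u in urls:
    inventory[dataset_key(u)].append(u)

candidate_datasets = sorted(inventory)
```

Real archives need more careful patterns (version strings, processing levels), which is exactly why the abstract stresses iterating with the data provider.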

  16. Code lists for interoperability - Principles and best practices in INSPIRE

    NASA Astrophysics Data System (ADS)

    Lutz, M.; Portele, C.; Cox, S.; Murray, K.

    2012-04-01

    external vocabulary. In the former case, for each value, an external identifier, one or more labels (possibly in different languages), a definition and other metadata should be specified. In the latter case, the external vocabulary should be characterised, e.g. by specifying the version to be used, the format(s) in which the vocabulary is available, possible constraints (e.g. if only a specific part of the external list is to be used), rules for using values in the encoding of instance data, and the maintenance rules applied to the external vocabulary. This information is crucial for enabling implementation and interoperability in distributed systems (such as SDIs) and should be made available through a code list registry. While the information on allowed code list values is thus usually managed outside the UML application schema, we recommend inclusion of «codeList»-stereotyped classes in the model for semantic clarity. Information on the obligation, extensibility and a reference to the specified values should be provided through tagged values. Acknowledgements: The authors would like to thank the INSPIRE Thematic Working Groups, the Data Specifications Drafting Team and the JRC Contact Points for their contributions to the discussions on code lists in INSPIRE and to this abstract.
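As a sketch of the first option, a register entry for a single code list value carrying an identifier, multilingual labels and a definition might be generated as follows; the element names and the example identifier are illustrative, not the actual INSPIRE registry schema:

```python
import xml.etree.ElementTree as ET

def codelist_value(identifier, labels, definition):
    """Build a register entry: external identifier, labels, definition."""
    value = ET.Element("Value", id=identifier)
    for lang, text in labels.items():
        label = ET.SubElement(value, "label", {"xml:lang": lang})
        label.text = text
    ET.SubElement(value, "definition").text = definition
    return value

entry = codelist_value(
    "http://example.org/codelist/SurfaceWater/lake",
    {"en": "Lake", "de": "See"},
    "A body of standing surface water.")
xml = ET.tostring(entry, encoding="unicode")
```

A real registry would add the further metadata the abstract lists (version, status, maintenance rules) as sibling elements of the labels and definition.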

  17. Geo-Information Catalog Services Interoperability: an Experimented Tool

    NASA Astrophysics Data System (ADS)

    Nativi, S.; Bigagli, L.; Mazzetti, P.; Mattia, U.

    2006-12-01

    Several geo-information communities (e.g. oceanography, atmospheric science, earth observation, etc.) have developed tailored metadata specifications for data discovery, evaluation and use. They conceived these models either by profiling standard models (e.g. the ISO 19115 metadata specification) or by enriching existing and well-accepted data models (e.g. the THREDDS/OPeNDAP/netCDF data model) in order to capture and describe more semantics. These metadata profiles have generated a set of related catalog services that characterize the different communities, initiatives and projects (e.g. INSPIRE, MERSEA, LEAD, etc.). In addition, specific catalog services have been generated by profiling standard catalog services designed to accomplish the general requirements of the geo-information community (e.g. OGC CS-W). Indeed, implementing catalog services interoperability is a near-term challenge in support of fully functional and useful discovery and sharing infrastructures for spatial data. It requires metadata profile harmonization as well as discovery protocol adaptation and mediation. In an over-simplified way, these solutions may be considered catalogues of catalogues or catalogue broker components. We conceived a solution for making several well-accepted catalogue services interoperable (e.g. OGC services, THREDDS, ESA EOLI, MERSEA CDI, etc.). This solution was first implemented as a stand-alone application tool, called GI-go. More recently, we re-engineered this approach as a service-oriented framework of modular components. We implemented a caching brokering catalog service which acts as a broker towards heterogeneous catalogue services dealing with IGCD (Imagery, Gridded and Coverage Data). This service is called GI-cat; it implements metadata harmonization and discovery protocol adaptation. GI-cat supports query distribution, allowing its clients to discover and evaluate the datasets managed by the federated
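The brokering idea can be reduced to a small sketch: one query is distributed to catalogs with different native metadata schemas, and each adapter harmonizes its records to a common model. The two catalog formats below are invented stand-ins for, say, a THREDDS catalog and a CS-W service:

```python
# Two catalogs with different native metadata fields (invented schemas).
CATALOG_A = [{"title": "Sea surface temperature", "bbox": (-180, -90, 180, 90)}]
CATALOG_B = [{"name": "Chlorophyll-a", "extent": (-20, 30, 40, 70)}]

def from_a(rec):
    """Harmonize a CATALOG_A record to the common model."""
    return {"title": rec["title"], "bbox": rec["bbox"]}

def from_b(rec):
    """Harmonize a CATALOG_B record to the common model."""
    return {"title": rec["name"], "bbox": rec["extent"]}

ADAPTERS = [(CATALOG_A, from_a), (CATALOG_B, from_b)]

def broker_search(keyword):
    """Distribute a keyword query and return harmonized records."""
    hits = []
    for catalog, adapt in ADAPTERS:
        for rec in catalog:
            common = adapt(rec)
            if keyword.lower() in common["title"].lower():
                hits.append(common)
    return hits

results = broker_search("chlorophyll")
```

In a service like GI-cat, the adapters would additionally translate the query itself into each catalog's native protocol and cache the harmonized responses.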

  18. Latest developments for the IAGOS database: Interoperability and metadata

    NASA Astrophysics Data System (ADS)

    Boulanger, Damien; Gautron, Benoit; Thouret, Valérie; Schultz, Martin; van Velthoven, Peter; Broetz, Bjoern; Rauthe-Schöch, Armin; Brissebrat, Guillaume

    2014-05-01

    In-service Aircraft for a Global Observing System (IAGOS, http://www.iagos.org) aims at the provision of long-term, frequent, regular, accurate, and spatially resolved in situ observations of the atmospheric composition. IAGOS observation systems are deployed on a fleet of commercial aircraft. The IAGOS database is an essential part of the global atmospheric monitoring network. Data access is handled by an open access policy based on the submission of research requests, which are reviewed by the PIs. Users can access the data through the following web sites: http://www.iagos.fr or http://www.pole-ether.fr, as the IAGOS database is part of the French atmospheric chemistry data centre ETHER (CNES and CNRS). The database is in continuous development and improvement. In the framework of the IGAS project (IAGOS for GMES/COPERNICUS Atmospheric Service), major achievements will be reached, such as metadata and format standardisation in order to interoperate with international portals and other databases, QA/QC procedures and traceability, CARIBIC (Civil Aircraft for the Regular Investigation of the Atmosphere Based on an Instrument Container) data integration within the central database, and real-time data transmission. IGAS work package 2 aims at providing the IAGOS data to users in a standardized format including the necessary metadata and information on data processing, data quality and uncertainties. We are currently redefining and standardizing the IAGOS metadata for interoperable use within GMES/Copernicus. The metadata are compliant with the ISO 19115, INSPIRE and NetCDF-CF conventions. IAGOS data will be provided to users in NetCDF or NASA Ames format. We are also implementing interoperability between all the involved IAGOS data services, including the central IAGOS database, the former MOZAIC and CARIBIC databases, the Aircraft Research DLR database and the Jülich WCS web application JOIN (Jülich OWS Interface), which combines model outputs with in situ data for

  19. Gazetteer Brokering through Semantic Mediation

    NASA Astrophysics Data System (ADS)

    Hobona, G.; Bermudez, L. E.; Brackin, R.

    2013-12-01

    A gazetteer is a geographical directory containing some information regarding places. It provides names, locations and other attributes for places, which may include points of interest (e.g. buildings, oilfields and boreholes) and other features. These features can be published via web services conforming to the Gazetteer Application Profile of the Web Feature Service (WFS) standard of the Open Geospatial Consortium (OGC). Against the backdrop of advances in geophysical surveys, there has been a significant increase in the amount of data referenced to locations. Gazetteer services have played a significant role in facilitating access to such data, including through the provision of specialized queries such as text, spatial and fuzzy search. Recent developments in the OGC have led to advances in gazetteers such as support for multilingualism, diacritics, and querying via advanced spatial constraints (e.g. radial search and nearest-neighbor search). A remaining challenge, however, is that gazetteers produced by different organizations have typically been modeled differently. Inconsistencies between gazetteers produced by different organizations may include naming the same feature in different ways, naming the attributes differently, locating the feature in a different location, and providing fewer or more attributes than the other services. The Gazetteer Application Profile of the WFS is a starting point for addressing such inconsistencies by providing a standardized interface based on rules specified in ISO 19112, the international standard for spatial referencing by geographic identifiers. The profile, however, does not provide rules to deal with semantic inconsistencies. The USGS and NGA commissioned research into the potential for a Single Point of Entry Global Gazetteer (SPEGG). The research was conducted by the Cross Community Interoperability thread of the OGC testbed, referenced OWS-9.
The testbed prototyped approaches for brokering gazetteers through use of semantic
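A gazetteer lookup through the WFS profile described above boils down to a GetFeature request filtered on a geographic identifier. The sketch below builds such a request as a KVP URL; the endpoint is hypothetical, and `cql_filter` is a common vendor extension (e.g. in GeoServer) rather than part of the OGC standard itself:

```python
from urllib.parse import urlencode

def gazetteer_query(endpoint, name):
    """Build a WFS 2.0 GetFeature request for place names starting with `name`."""
    params = {
        "service": "WFS",
        "version": "2.0.0",
        "request": "GetFeature",
        # Feature type name is illustrative; real services define their own.
        "typeNames": "gaz:SI_LocationInstance",
        # Vendor-specific filter parameter (GeoServer-style), not core OGC.
        "cql_filter": f"geographicIdentifier LIKE '{name}%'",
    }
    return endpoint + "?" + urlencode(params)

url = gazetteer_query("https://example.org/wfs", "Spring")
```

Harmonizing *what* the returned features mean across differently modeled gazetteers is the semantic-mediation problem the testbed work addresses; the query syntax alone does not solve it.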

  20. Social Semantics for an Effective Enterprise

    NASA Technical Reports Server (NTRS)

    Berndt, Sarah; Doane, Mike

    2012-01-01

    An evolution of the Semantic Web, the Social Semantic Web (s2w), facilitates knowledge sharing with "useful information based on human contributions, which gets better as more people participate." The s2w reaches beyond the search box to move us from a collection of hyperlinked facts to meaningful, real-time context. When focused through the lens of Enterprise Search, the Social Semantic Web facilitates the fluid transition of meaningful business information from the source to the user. It is the confluence of human thought and computer processing, structured with the iterative application of taxonomies, folksonomies, ontologies, and metadata schemas. The importance and nuances of human interaction are often deemphasized when focusing on automatic generation of semantic markup, which results in dissatisfied users and unrealized return on investment. Users consistently qualify the value of information sets through the act of selection, making them the de facto stakeholders of the Social Semantic Web. Employers are the ultimate beneficiaries of s2w utilization with a better-informed, more decisive workforce; one not achieved with an IT miracle technology, but by improved human-computer interactions. Johnson Space Center Taxonomist Sarah Berndt and Mike Doane, principal owner of Term Management, LLC, discuss the planning, development, and maintenance stages for components of a semantic system while emphasizing the necessity of a Social Semantic Web for the Enterprise. Identification of risks and variables associated with layering the successful implementation of a semantic system is also modeled.

  1. An Integrated Model in E-Government Based on Semantic Web, Web Service and Intelligent Agent

    NASA Astrophysics Data System (ADS)

    Zhu, Hongtao; Su, Fangli

    One urgent problem in E-government services is improving service efficiency by breaking down information islands while constructing integrated service systems. Web Service provides a set of standards for the provision of functionality over the Web, but Web Service descriptions are purely syntactic rather than semantic. The Semantic Web provides interoperability from the syntactic level to the semantic one, not only for human users but also for software agents. The Semantic Web and Intelligent Agents are highly complementary, and existing technologies have made their unification quite feasible, which presents a good opportunity for the development of E-government. Based on Semantic Web and Intelligent Agent technologies, an integrated service model of E-government is suggested in this paper.

  2. Scientific Digital Libraries, Interoperability, and Ontologies

    NASA Technical Reports Server (NTRS)

    Hughes, J. Steven; Crichton, Daniel J.; Mattmann, Chris A.

    2009-01-01

    Scientific digital libraries serve complex and evolving research communities. Justifications for the development of scientific digital libraries include the desire to preserve science data and the promises of information interconnectedness, correlative science, and system interoperability. Shared ontologies are fundamental to fulfilling these promises. We present a tool framework, some informal principles, and several case studies where shared ontologies are used to guide the implementation of scientific digital libraries. The tool framework, based on an ontology modeling tool, was configured to develop, manage, and keep shared ontologies relevant within changing domains and to promote the interoperability, interconnectedness, and correlation desired by scientists.

  3. SEMANTICS AND CRITICAL READING.

    ERIC Educational Resources Information Center

    FLANIGAN, MICHAEL C.

    PROFICIENCY IN CRITICAL READING CAN BE ACCELERATED BY MAKING STUDENTS AWARE OF VARIOUS SEMANTIC DEVICES THAT HELP CLARIFY MEANINGS AND PURPOSES. EXCERPTS FROM THE ARTICLE "TEEN-AGE CORRUPTION" FROM THE NINTH-GRADE SEMANTICS UNIT WRITTEN BY THE PROJECT ENGLISH DEMONSTRATION CENTER AT EUCLID, OHIO, ARE USED TO ILLUSTRATE HOW SEMANTICS RELATE TO…

  4. The Open Physiology workflow: modeling processes over physiology circuitboards of interoperable tissue units

    PubMed Central

    de Bono, Bernard; Safaei, Soroush; Grenon, Pierre; Nickerson, David P.; Alexander, Samuel; Helvensteijn, Michiel; Kok, Joost N.; Kokash, Natallia; Wu, Alan; Yu, Tommy; Hunter, Peter; Baldock, Richard A.

    2015-01-01

    A key challenge for the physiology modeling community is to enable the searching, objective comparison and, ultimately, re-use of models and associated data that are interoperable in terms of their physiological meaning. In this work, we outline the development of a workflow to modularize the simulation of tissue-level processes in physiology. In particular, we show how, via this approach, we can systematically extract, parcellate and annotate tissue histology data to represent component units of tissue function. These functional units are semantically interoperable, in terms of their physiological meaning. In particular, they are interoperable with respect to [i] each other and with respect to [ii] a circuitboard representation of long-range advective routes of fluid flow over which to model long-range molecular exchange between these units. We exemplify this approach through the combination of models for physiology-based pharmacokinetics and pharmacodynamics to quantitatively depict biological mechanisms across multiple scales. Links to the data, models and software components that constitute this workflow are found at http://open-physiology.org/. PMID:25759670

  5. An Interoperability Platform Enabling Reuse of Electronic Health Records for Signal Verification Studies

    PubMed Central

    Yuksel, Mustafa; Gonul, Suat; Laleci Erturkmen, Gokce Banu; Sinaci, Ali Anil; Invernizzi, Paolo; Facchinetti, Sara; Migliavacca, Andrea; Bergvall, Tomas; Depraetere, Kristof; De Roo, Jos

    2016-01-01

    Because they depend mostly on voluntarily submitted spontaneous reports, pharmacovigilance studies are hampered by the low quantity and quality of patient data. Our objective is to improve postmarket safety studies by enabling safety analysts to seamlessly access a wide range of EHR sources for collecting deidentified medical data sets of selected patient populations and tracing the reported incidents back to original EHRs. We have developed an ontological framework in which EHR sources and target clinical research systems can continue using their own local data models, interfaces, and terminology systems, while structural and semantic interoperability are handled through rule-based reasoning on formal representations of the different models and terminology systems maintained in the SALUS Semantic Resource Set. The SALUS Common Information Model, at the core of this set, acts as the common mediator. We demonstrate the capabilities of our framework through one of the SALUS safety analysis tools, the Case Series Characterization Tool, which has been deployed on top of the regional EHR Data Warehouse of the Lombardy Region, containing about 1 billion records from 16 million patients, and validated by several pharmacovigilance researchers with real-life cases. The results confirm significant improvements in signal detection and evaluation compared to traditional methods, which lack this background information. PMID:27123451

  6. An Interoperability Platform Enabling Reuse of Electronic Health Records for Signal Verification Studies.

    PubMed

    Yuksel, Mustafa; Gonul, Suat; Laleci Erturkmen, Gokce Banu; Sinaci, Ali Anil; Invernizzi, Paolo; Facchinetti, Sara; Migliavacca, Andrea; Bergvall, Tomas; Depraetere, Kristof; De Roo, Jos

    2016-01-01

    Because they depend mostly on voluntarily submitted spontaneous reports, pharmacovigilance studies are hampered by the low quantity and quality of patient data. Our objective is to improve postmarket safety studies by enabling safety analysts to seamlessly access a wide range of EHR sources for collecting deidentified medical data sets of selected patient populations and tracing the reported incidents back to original EHRs. We have developed an ontological framework in which EHR sources and target clinical research systems can continue using their own local data models, interfaces, and terminology systems, while structural and semantic interoperability are handled through rule-based reasoning on formal representations of the different models and terminology systems maintained in the SALUS Semantic Resource Set. The SALUS Common Information Model, at the core of this set, acts as the common mediator. We demonstrate the capabilities of our framework through one of the SALUS safety analysis tools, the Case Series Characterization Tool, which has been deployed on top of the regional EHR Data Warehouse of the Lombardy Region, containing about 1 billion records from 16 million patients, and validated by several pharmacovigilance researchers with real-life cases. The results confirm significant improvements in signal detection and evaluation compared to traditional methods, which lack this background information. PMID:27123451
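The mediation pattern described in this abstract, where each system keeps its local terminology while a shared resource set maps local codes to common concepts, can be sketched in a few lines of Python. All system names, codes and concept identifiers below are invented for illustration and are not the SALUS vocabulary:

```python
# Each local system keeps its own terminology; a shared resource set
# maps (system, local code) pairs to a common concept, so records can
# be compared without changing either local model. All codes invented.
TO_COMMON = {
    ("ehr", "MI-01"): "COMMON:AcuteMI",
    ("research", "C123"): "COMMON:AcuteMI",
    ("ehr", "HTN-02"): "COMMON:Hypertension",
}

def same_concept(sys_a, code_a, sys_b, code_b):
    """True if two local codes resolve to the same common concept."""
    ca = TO_COMMON.get((sys_a, code_a))
    cb = TO_COMMON.get((sys_b, code_b))
    return ca is not None and ca == cb
```

A real implementation would perform this resolution by rule-based reasoning over formal terminology representations rather than a static lookup table; the table stands in for that machinery here.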

  7. Project Integration Architecture: Formulation of Semantic Parameters

    NASA Technical Reports Server (NTRS)

    Jones, William Henry

    2005-01-01

    One of several key elements of the Project Integration Architecture (PIA) is the intention to formulate parameter objects that convey meaningful semantic information. In so doing, it is expected that a level of automation can be achieved in the consumption of information content by PIA-consuming clients outside the programmatic boundary of a presenting PIA-wrapped application. This paper discusses the steps recently taken in formulating such semantically meaningful parameters.
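As a hedged illustration of what a parameter object carrying semantic information might look like (the field names and concept URIs below are invented for this sketch and are not the PIA API):

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class SemanticParameter:
    """A value bundled with the semantics a client needs to consume it."""
    name: str
    value: float
    units: str      # e.g. "Pa" for pascals
    concept: str    # URI of the concept the parameter denotes

    def compatible_with(self, other: "SemanticParameter") -> bool:
        # A client can consume two parameters automatically only if
        # they denote the same concept in the same units.
        return self.concept == other.concept and self.units == other.units

inlet = SemanticParameter("inletPressure", 101325.0, "Pa",
                          "http://example.org/onto#StaticPressure")
outlet = SemanticParameter("outletPressure", 95000.0, "Pa",
                           "http://example.org/onto#StaticPressure")
```

The point of the design is that the consuming client checks the concept and units, not the parameter name, before acting on the value.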

  8. WS/PIDS: standard interoperable PIDS in web services environments.

    PubMed

    Vasilescu, E; Dorobanţu, M; Govoni, S; Padh, S; Mun, S K

    2008-01-01

    An electronic health record depends on the consistent handling of people's identities within and outside healthcare organizations. Currently, the Person Identification Service (PIDS), a CORBA specification, is the only well-researched standard that meets these needs. In this paper, we introduce WS/PIDS, a PIDS specification for Web Services (WS) that closely matches the original PIDS and improves on it by providing explicit support for medical multimedia attributes. WS/PIDS is currently supported by a test implementation, layered on top of a PIDS back-end, with Java-based, .NET-based, and Web clients. WS/PIDS is interoperable among platforms; it preserves PIDS semantics to a large extent, and it is intended to be fully compliant with established and emerging WS standards. The specification is open source and immediately usable in dynamic clinical systems participating in grid environments. WS/PIDS has been tested successfully with a comprehensive set of use cases, and it is being used in a clinical research setting. PMID:18270041

  9. The MED-SUV Multidisciplinary Interoperability Infrastructure

    NASA Astrophysics Data System (ADS)

    Mazzetti, Paolo; D'Auria, Luca; Reitano, Danilo; Papeschi, Fabrizio; Roncella, Roberto; Puglisi, Giuseppe; Nativi, Stefano

    2016-04-01

    In accordance with the international Supersite initiative concept, the MED-SUV (MEDiterranean SUpersite Volcanoes) European project (http://med-suv.eu/) aims to enable long-term monitoring experiments in two relevant geologically active regions of Europe prone to natural hazards: Mt. Vesuvio/Campi Flegrei and Mt. Etna. This objective requires the integration of existing components, such as monitoring systems and databases, with novel sensors for the measurement of volcanic parameters. Moreover, MED-SUV is also a direct contribution to the Global Earth Observation System of Systems (GEOSS), as one of the volcano Supersites recognized by the Group on Earth Observation (GEO). To achieve its goal, MED-SUV set up an advanced e-infrastructure allowing the discovery of and access to heterogeneous data for multidisciplinary applications, and integration with external systems like GEOSS. The MED-SUV overall infrastructure is conceived as a three-layer architecture, with the lower layer (Data level) including the identified relevant data sources, the mid-tier (Supersite level) including components for mediation and harmonization, and the upper tier (Global level) composed of the systems that MED-SUV must serve, such as GEOSS and possibly other global/community systems. The Data level is mostly composed of existing data sources, such as space agencies' satellite data archives, the UNAVCO system, and the INGV-Rome data service. They share data according to different specifications for metadata, data and service interfaces, and cannot be changed. Thus, the only relevant MED-SUV activity at this level was the creation of a MED-SUV local repository based on Web Accessible Folder (WAF) technology, deployed at the INGV site in Catania, and hosting in-situ data and products collected and generated during the project.
The Supersite level is at the core of the MED-SUV architecture, since it must mediate between the disparate data sources in the layer below, and provide a harmonized view to

  10. EVA safety: Space suit system interoperability

    NASA Technical Reports Server (NTRS)

    Skoog, A. I.; McBarron, J. W.; Abramov, L. P.; Zvezda, A. O.

    1995-01-01

    The results and recommendations of the International Academy of Astronautics extravehicular activities (IAA EVA) Committee's work are presented. IAA EVA protocols and operations were analyzed with a view to harmonizing procedures and standardizing safety-critical and operationally important interfaces. The key role of EVA and ways to improve the situation, based on the identified EVA space suit system interoperability deficiencies, were considered.

  11. Parallel mesh management using interoperable tools.

    SciTech Connect

    Tautges, Timothy James; Devine, Karen Dragon

    2010-10-01

    This presentation included a discussion of challenges arising in parallel mesh management, as well as demonstrated solutions. The presenters also described the broad range of software for mesh management and modification developed by the Interoperable Technologies for Advanced Petascale Simulations (ITAPS) team, and highlighted applications successfully using the ITAPS tool suite.

  12. Specific interoperability problems of security infrastructure services.

    PubMed

    Pharow, Peter; Blobel, Bernd

    2006-01-01

    Communication and co-operation in healthcare and welfare require a well-defined set of security services based on a standards-based interoperable security infrastructure and provided by a Trusted Third Party. Generally, the services describe the status and relation of communicating principals, corresponding keys and attributes, and the access rights to both applications and data. Legal, social, behavioral and ethical requirements demand securely stored patient information and well-established access tools and tokens. Electronic signatures as means for securing the integrity of messages and files, certified time stamps and time signatures are important for accessing and storing data in Electronic Health Record systems. The key to all these services is a secure and reliable procedure for authentication (identification and verification). While mentioning technical problems (e.g., the lifetime of storage devices, the migration of retrieval and presentation software), this paper aims at identifying harmonization and interoperability requirements for securing data items, files, messages, sets of archived items or documents, and life-long Electronic Health Records based on secure certificate-based identification. It is commonly known that merely relying on existing and emerging security standards does not necessarily guarantee interoperability of different security infrastructure approaches. Certificate separation can therefore be a key to modern interoperable security infrastructure services. PMID:17095833
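The integrity-securing role the abstract assigns to electronic signatures can be illustrated with a keyed hash (HMAC) in a few lines. A real infrastructure would use certificate-based digital signatures and trusted time stamps rather than a shared key; the key and record content here are fabricated for the sketch:

```python
import hashlib
import hmac

# Illustrative only: a shared secret standing in for the key material a
# Trusted Third Party would manage via certificates.
KEY = b"shared-secret-held-by-trusted-party"

def seal(record: bytes) -> str:
    """Produce an integrity tag for a stored record."""
    return hmac.new(KEY, record, hashlib.sha256).hexdigest()

def verify(record: bytes, tag: str) -> bool:
    """Check that a record has not been altered since it was sealed."""
    return hmac.compare_digest(seal(record), tag)

record = b"patient=12345;observation=bp-reading;date=2006-01-01"
tag = seal(record)
```

Any modification of the stored record after sealing causes verification to fail, which is the property the paper's archived items and life-long records depend on.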

  13. GIS interoperability: current activities and military implications

    NASA Astrophysics Data System (ADS)

    Lam, Sylvia

    1997-07-01

    Geographic information systems (GIS) are gaining importance in military operations because of their capability to spatially and visually integrate various kinds of information. In an era of limited resources, geospatial data must be shared efficiently whenever possible. The military-initiated Global Geospatial Information and Services (GGI&S) Project aims at developing the infrastructure for GIS interoperability for the military. Current activities in standardization and new technology have strong implications for the design and development of GGI&S. To facilitate data interoperability at both the national and international levels, standards and specifications for geospatial data sharing are being studied, developed and promoted. Of particular interest to the military community are the activities related to NATO DIGEST, ISO/TC 211 geomatics standardization and the industry-led Open Geodata Interoperability Specification (OGIS). Together with new information technology, standardization provides the infrastructure for interoperable GIS for both civilian and military environments. The first part of this paper describes the major activities in standardization. The second part presents the technologies developed at DREV in support of the GGI&S. These include the Open Geospatial Datastore Interface (OGDI) and the geospatial data warehouse. DREV has been working closely with Defence Geomatics and private industry in the research and development of new technology for the GGI&S project.

  14. Smart Grid Interoperability Maturity Model Beta Version

    SciTech Connect

    Widergren, Steven E.; Drummond, R.; Giroti, Tony; Houseman, Doug; Knight, Mark; Levinson, Alex; Longcore, Wayne; Lowe, Randy; Mater, J.; Oliver, Terry V.; Slack, Phil; Tolk, Andreas; Montgomery, Austin

    2011-12-02

    The GridWise Architecture Council was formed by the U.S. Department of Energy to promote and enable interoperability among the many entities that interact with the electric power system. This balanced team of industry representatives proposes principles for the development of interoperability concepts and standards. The Council provides industry guidance and tools that make it an available resource for smart grid implementations. In the spirit of advancing interoperability of an ecosystem of smart grid devices and systems, this document presents a model for evaluating the maturity of the artifacts and processes that specify the agreement of parties to collaborate across an information exchange interface. You are expected to have a solid understanding of large, complex system integration concepts and experience in dealing with software component interoperation. Those without this technical background should read the Executive Summary for a description of the purpose and contents of the document. Other documents, such as checklists, guides, and whitepapers, exist for targeted purposes and audiences. Please see the www.gridwiseac.org website for more products of the Council that may be of interest to you.

  15. Biomedical semantics in the Semantic Web.

    PubMed

    Splendiani, Andrea; Burger, Albert; Paschke, Adrian; Romano, Paolo; Marshall, M Scott

    2011-01-01

    The Semantic Web offers an ideal platform for representing and linking biomedical information, which is a prerequisite for the development and application of analytical tools to address problems in data-intensive areas such as systems biology and translational medicine. As for any new paradigm, the adoption of the Semantic Web offers opportunities and poses questions and challenges to the life sciences scientific community: which technologies in the Semantic Web stack will be more beneficial for the life sciences? Is biomedical information too complex to benefit from simple interlinked representations? What are the implications of adopting a new paradigm for knowledge representation? What are the incentives for the adoption of the Semantic Web, and who are the facilitators? Is there going to be a Semantic Web revolution in the life sciences? We report here a few reflections on these questions, following discussions at the SWAT4LS (Semantic Web Applications and Tools for Life Sciences) workshop series, of which this Journal of Biomedical Semantics special issue presents selected papers from the 2009 edition, held in Amsterdam on November 20th. PMID:21388570

  16. Biomedical semantics in the Semantic Web

    PubMed Central

    2011-01-01

    The Semantic Web offers an ideal platform for representing and linking biomedical information, which is a prerequisite for the development and application of analytical tools to address problems in data-intensive areas such as systems biology and translational medicine. As for any new paradigm, the adoption of the Semantic Web offers opportunities and poses questions and challenges to the life sciences scientific community: which technologies in the Semantic Web stack will be more beneficial for the life sciences? Is biomedical information too complex to benefit from simple interlinked representations? What are the implications of adopting a new paradigm for knowledge representation? What are the incentives for the adoption of the Semantic Web, and who are the facilitators? Is there going to be a Semantic Web revolution in the life sciences? We report here a few reflections on these questions, following discussions at the SWAT4LS (Semantic Web Applications and Tools for Life Sciences) workshop series, of which this Journal of Biomedical Semantics special issue presents selected papers from the 2009 edition, held in Amsterdam on November 20th. PMID:21388570

  17. Interoperability Outlook in the Big Data Future

    NASA Astrophysics Data System (ADS)

    Kuo, K. S.; Ramachandran, R.

    2015-12-01

    The establishment of distributed active archive centers (DAACs) as data warehouses and the standardization of file formats by NASA's Earth Observing System Data and Information System (EOSDIS) doubtless propelled the interoperability of NASA Earth science data to unprecedented heights in the 1990s. However, we still feel wanting two decades later. We believe the inadequate interoperability we experience results from the current practice in which data are first packaged into files before distribution, and only the metadata of these files are cataloged into databases and become searchable. Data therefore cannot be efficiently filtered. Any extensive study thus requires downloading large volumes of data files to a local system for processing and analysis. The need to download data not only creates duplication and inefficiency but also further impedes interoperability, because the analysis has to be performed locally by individual researchers in individual institutions. Each institution or researcher often has its own preferences in data management practices as well as programming languages. Analysis results (derived data) so produced are thus subject to the differences in these practices, which later form formidable barriers to interoperability. A number of Big Data technologies are currently being examined and tested to address Big Earth Data issues. These technologies share one common characteristic: exploiting compute and storage affinity to analyze large volumes and great varieties of data more efficiently. Distributed active "archive" centers are likely to evolve into distributed active "analysis" centers, which not only archive data but also provide analysis services right where the data reside. "Analysis" will become the more visible function of these centers. It is thus reasonable to expect interoperability to improve because analysis, in addition to data, becomes more centralized. Within a "distributed active analysis center
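The shift the abstract anticipates, moving the analysis to where the data reside instead of shipping whole files to each researcher, can be contrasted in a toy sketch. The dataset and predicate below are invented stand-ins:

```python
# Imagine this table resides at an active "analysis" center rather than
# being packaged into files for download.
DATASET = [
    {"lat": 10.0, "temp": 290.1},
    {"lat": 45.0, "temp": 275.3},
    {"lat": 70.0, "temp": 250.8},
]

def server_side_select(predicate):
    """Filter at the data's location: only matching records cross the
    wire, instead of every file that might contain a match."""
    return [row for row in DATASET if predicate(row)]

# A polar-region query returns one record, not the whole archive.
polar = server_side_select(lambda r: r["lat"] >= 60.0)
```

The file-per-download practice criticized above corresponds to shipping all of `DATASET` to every user and filtering locally; the sketch shows the filtering pushed to the data's side.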

  18. Enhancing Data Interoperability with Web Services

    NASA Astrophysics Data System (ADS)

    Shrestha, S. R.; Zimble, D. A.; Wang, W.; Herring, D.; Halpert, M.

    2014-12-01

    In an effort to improve the access and interoperability of climate and weather data, the National Oceanic and Atmospheric Administration's (NOAA) Climate.gov and Climate Prediction Center (CPC) are exploring various platform solutions to enhance a user's ability to locate, preview, and acquire data. The Climate.gov and CPC data team faces multiple challenges, including many kinds of data and formats, inconsistent metadata records, a variety of data service implementations, very large volumes of data, and geographically distributed locations. We have created the Data Access and Interoperability project to design a web-based platform where interoperability between systems can be leveraged to allow greater data discovery, access, visualization and delivery. In the interoperable data platform, systems can integrate with each other to support the synthesis of climate and weather data. Interoperability is the ability for users to discover the available climate and weather data, preview and interact with the data, and acquire the data in common digital formats through a simple web-based interface. The goal of the interoperable data platform is to leverage existing web services, implement established standards and integrate with existing solutions across the earth sciences domain instead of creating new technologies. Toward this effort to improve the interoperability of the platform, we are collaborating with ESRI Inc. to provide climate and weather data via web services. In this presentation, we will discuss and demonstrate how to use ArcGIS to author RESTful scientific web services using open standards. These web services are able to encapsulate the logic required to handle and describe scientific data through a variety of service types, including image, map, feature and geoprocessing services and their respective service methods. Combining these types of services and leveraging well-documented APIs, including the ArcGIS JavaScript API, we can afford to
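A client of a RESTful data service of the kind described above typically just composes a parameterized URL. The sketch below builds such a request; the host, path and exact parameter set are invented for illustration, so consult the actual service's REST documentation before relying on them:

```python
from urllib.parse import urlencode

# Hypothetical image-service endpoint; not a real NOAA URL.
BASE = "https://example.noaa.gov/arcgis/rest/services/cpc/temp/ImageServer"

def export_image_url(bbox, size=(512, 512), fmt="png"):
    """Compose a REST request for an image export over a bounding box."""
    params = {
        "bbox": ",".join(str(v) for v in bbox),   # xmin,ymin,xmax,ymax
        "size": f"{size[0]},{size[1]}",
        "format": fmt,
        "f": "json",                              # response format
    }
    return f"{BASE}/exportImage?{urlencode(params)}"

# Continental-US extent in lon/lat, as an example request.
url = export_image_url((-125, 24, -66, 50))
```

Because the request is just a URL with documented parameters, any HTTP-capable client, from a browser to a script, can discover, preview and acquire the data, which is the interoperability property the platform aims for.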

  19. Stuart Sutton, Associate Professor, University of Washington iSchool: From Discourse Communities to the Semantic Web.

    ERIC Educational Resources Information Center

    Forsythe, Kathleen

    2002-01-01

    In this interview Professor Stuart Sutton discusses proliferation of metadata schemas as an outgrowth of various discourse communities as they find their niche on the semantic Web. Highlights include interoperability; cataloging tools, including GEMCat; and the role of librarians and information science education in the development of Internet…

  20. THE Interoperability Challenge for the Geosciences: Stepping up from Interoperability between Disciplinary Siloes to Creating Transdisciplinary Data Platforms.

    NASA Astrophysics Data System (ADS)

    Wyborn, L. A.; Evans, B. J. K.; Trenham, C.; Druken, K. A.; Wang, J.

    2015-12-01

    The National Computational Infrastructure (NCI) at the Australian National University (ANU) has collocated over 10 PB of national and international data assets within a HPC facility to create the National Environmental Research Data Interoperability Platform (NERDIP). The data span a wide range of fields, from earth systems and environment (climate, coasts, oceans, and geophysics) through to astronomy, bioinformatics, and the social sciences. These diverse data collections are collocated on a major data storage node that is linked to a petascale HPC and cloud facility. Users can search across all of the collections and either log in and access the data directly, or access the data via standards-based web services. These collocated petascale data collections are theoretically a massive resource for interdisciplinary science at scales and resolutions never hitherto possible. But once collocated, multiple barriers became apparent that make cross-domain data integration very difficult and often so time consuming that either less ambitious research goals are attempted or the project is abandoned. Incompatible content is only one half of the problem: other showstoppers are differing access models, licences and issues of ownership of derived products. Brokers can enable interdisciplinary research, but in reality are we just delaying the inevitable? A call to action is required: adopt a transdisciplinary approach at the conception of new multi-disciplinary systems, whereby those across all the scientific domains, the humanities, the social sciences and beyond work together to create a unity of informatics platforms that interoperate horizontally across the multiple discipline boundaries, and also operate vertically to enable a diversity of people to access data, from high-end researchers to undergraduates, school students and the general public. Once we master such a transdisciplinary approach to our vast global information assets, we will then achieve

  1. The geographical ontology, LDAP, and the space information semantic grid

    NASA Astrophysics Data System (ADS)

    Cui, Wei; Li, Deren

    2005-10-01

    The research purpose is to discuss the development trend and theory of the semantic integration and interoperability of Geographic Information Systems in the network age, and to point out that geographic ontology is the inevitable outcome of the development of semantics-based integration and interoperability of Geographic Information Systems. After analyzing the effects of various new technologies, the paper proposes a new family of ontology classes based on the GIS knowledge built here: the basic ontology, the domain ontology and the application ontology, which are very useful for sharing and transferring semantic information between complicated distributed systems and for object abstraction. The main contributions of the paper are as follows: 1) For the first time, ontology and LDAP (Lightweight Directory Access Protocol) are used together to create and optimize the architecture of the Spatial Information Grid and to accelerate the fusion of Geographic Information Systems with other domains' information systems. 2) For the first time, a hybrid method is introduced to build geographic ontology. This hybrid method mixes the strengths of the independent domain expert and data mining; it improves the efficiency of the domain-expert method and builds ontology semi-automatically. 3) For the first time, the many-to-many relationship of the integrated ontology system is implemented through LDAP referral, creating an ontology-based virtual organization that can provide transparent service to users.
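The three-tier ontology family the abstract proposes (basic, domain, application) amounts to a subsumption hierarchy that can be walked from specific to general. The concept names below are invented examples, not taken from the paper:

```python
# A toy concept hierarchy: application-level classes specialize
# domain-level classes, which specialize a basic ontology class.
PARENT = {
    "CadastralParcel": "LandUnit",        # application ontology
    "LandUnit": "GeographicFeature",      # domain ontology
    "GeographicFeature": None,            # basic ontology (root)
}

def is_a(concept, ancestor):
    """Subsumption check: walk parent links up to the root."""
    while concept is not None:
        if concept == ancestor:
            return True
        concept = PARENT.get(concept)
    return False
```

In the paper's architecture this hierarchy would live in an LDAP directory, with LDAP referral linking entries across servers; the dictionary here stands in for that directory.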

  2. Data interoperability software solution for emergency reaction in the Europe Union

    NASA Astrophysics Data System (ADS)

    Casado, R.; Rubiera, E.; Sacristan, M.; Schütte, F.; Peters, R.

    2014-09-01

    Emergency management becomes more challenging in international crisis episodes because of cultural, semantic and linguistic differences between all stakeholders, especially first responders. Misunderstandings between first responders make decision-making slower and more difficult. However, the spread and development of networks and IT-based Emergency Management Systems (EMS) have improved emergency responses, which have become more coordinated. Despite improvements made in recent years, EMS still have not solved the problems related to cultural, semantic and linguistic differences, which are the real cause of slower decision-making. In addition, from a technical perspective, the consolidation of current EMS and the different formats used to exchange information pose another problem to be solved by any solution proposed for information interoperability between heterogeneous EMS surrounded by different contexts. To overcome these problems we present a software solution based on semantic and mediation technologies. EMERGency ELements (EMERGEL) (Fundacion CTIC and AntwortING Ingenieurbüro PartG 2013), a common and modular ontology shared by all stakeholders, has been defined. It offers the best solution for gathering all stakeholders' knowledge in a unique and flexible data model, taking into account the cultural and linguistic issues of different countries. To deal with the diversity of data protocols and formats, we have designed a Service Oriented Architecture for Data Interoperability (named DISASTER), providing a flexible, extensible solution to the mediation issues. Web Services have been adopted as the specific technology to implement this paradigm, as they have the most significant academic and industrial visibility and attraction. Contributions of this work have been validated through the design and development of a realistic cross-border prototype scenario, actively involving both emergency managers and emergency first responders: the Netherlands-Germany border fire.
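The role of a shared ontology like EMERGEL, letting responders on each side of a border keep their own language while agreeing on the underlying concept, can be sketched as label-to-concept resolution. The labels and concept IDs below are invented, not EMERGEL terms:

```python
# Language-specific labels resolved to one shared concept, so a Dutch
# and a German responder can refer to the same resource. All invented.
LABEL_TO_CONCEPT = {
    ("nl", "brandweerwagen"): "emergel:FireEngine",
    ("de", "Löschfahrzeug"): "emergel:FireEngine",
    ("en", "fire engine"): "emergel:FireEngine",
}

def translate(term, src_lang, dst_lang):
    """Map a term to the shared concept, then back out in the target
    language; returns None if the term is unknown."""
    concept = LABEL_TO_CONCEPT.get((src_lang, term))
    for (lang, label), c in LABEL_TO_CONCEPT.items():
        if c == concept and lang == dst_lang:
            return label
    return None
```

Because the pivot is the concept rather than any one language, adding a new stakeholder language only requires new label entries, not new pairwise translations.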

  3. Semantic Mappings and Locality of Nursing Diagnostic Concepts in UMLS

    PubMed Central

    Kim, Tae Youn; Coenen, Amy; Hardiker, Nicholas

    2011-01-01

    One solution for enhancing the interoperability between nursing information systems, given the availability of multiple nursing terminologies, is to cross-map existing nursing concepts. The Unified Medical Language System (UMLS), developed and distributed by the National Library of Medicine (NLM), is a knowledge resource containing cross-mappings of various terminologies in a unified framework. While the knowledge resource has been available for the last two decades, little research on the representation of nursing terminologies in UMLS has been conducted. As a first step, UMLS semantic mappings and concept locality were examined for nursing diagnostic concepts or problems selected from three terminologies (i.e., CCC, ICNP, and NANDA-I), along with corresponding SNOMED CT concepts. The evaluation of UMLS semantic mappings was conducted by measuring the proportion of concordance between UMLS and human expert mappings. The semantic locality of nursing diagnostic concepts was assessed by examining the associations of select concepts and the placement of the nursing concepts on the Semantic Network and Group. The study found that the UMLS mappings of CCC and NANDA-I concepts to SNOMED CT were highly concordant with expert mappings. The level of concordance in mappings of ICNP to SNOMED CT, CCC and NANDA-I within UMLS was relatively low, indicating the need for further research and development. Likewise, the semantic locality of ICNP concepts could be further improved. Various stakeholders need to collaborate to enhance the NLM knowledge resource and the interoperability of nursing data within the discipline as well as across health-related disciplines. PMID:21951759
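The concordance measure used in this evaluation, the proportion of source concepts for which the UMLS mapping agrees with the human expert mapping, is simple to compute. The concept names and target codes below are fabricated examples, not the study's data:

```python
def concordance(umls_map, expert_map):
    """Proportion of shared source concepts whose mapped targets agree."""
    shared = set(umls_map) & set(expert_map)
    if not shared:
        return 0.0
    agree = sum(1 for c in shared if umls_map[c] == expert_map[c])
    return agree / len(shared)

# Fabricated mappings from nursing diagnoses to target codes.
umls = {"Acute Pain": "T:274663001", "Anxiety": "T:48694002",
        "Fatigue": "T:84229001"}
expert = {"Acute Pain": "T:274663001", "Anxiety": "T:197480006",
          "Fatigue": "T:84229001"}
```

Here two of the three shared concepts agree, giving a concordance of 2/3; the study reports this proportion per terminology pair.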

  4. Processing biological literature with customizable Web services supporting interoperable formats

    PubMed Central

    Rak, Rafal; Batista-Navarro, Riza Theresa; Carter, Jacob; Rowley, Andrew; Ananiadou, Sophia

    2014-01-01

    Web services have become a popular means of interconnecting solutions for processing a body of scientific literature. This has fuelled research on high-level data exchange formats suitable for a given domain and ensuring the interoperability of Web services. In this article, we focus on the biological domain and consider four interoperability formats, BioC, BioNLP, XMI and RDF, that represent domain-specific and generic representations and include well-established as well as emerging specifications. We use the formats in the context of customizable Web services created in our Web-based, text-mining workbench Argo that features an ever-growing library of elementary analytics and capabilities to build and deploy Web services straight from a convenient graphical user interface. We demonstrate a 2-fold customization of Web services: by building task-specific processing pipelines from a repository of available analytics, and by configuring services to accept and produce a combination of input and output data interchange formats. We provide qualitative evaluation of the formats as well as quantitative evaluation of automatic analytics. The latter was carried out as part of our participation in the fourth edition of the BioCreative challenge. Our analytics built into Web services for recognizing biochemical concepts in BioC collections achieved the highest combined scores out of 10 participating teams. Database URL: http://argo.nactem.ac.uk. PMID:25006225

  5. Processing biological literature with customizable Web services supporting interoperable formats.

    PubMed

    Rak, Rafal; Batista-Navarro, Riza Theresa; Carter, Jacob; Rowley, Andrew; Ananiadou, Sophia

    2014-01-01

    Web services have become a popular means of interconnecting solutions for processing a body of scientific literature. This has fuelled research on high-level data exchange formats suitable for a given domain and ensuring the interoperability of Web services. In this article, we focus on the biological domain and consider four interoperability formats, BioC, BioNLP, XMI and RDF, that represent domain-specific and generic representations and include well-established as well as emerging specifications. We use the formats in the context of customizable Web services created in our Web-based, text-mining workbench Argo that features an ever-growing library of elementary analytics and capabilities to build and deploy Web services straight from a convenient graphical user interface. We demonstrate a 2-fold customization of Web services: by building task-specific processing pipelines from a repository of available analytics, and by configuring services to accept and produce a combination of input and output data interchange formats. We provide qualitative evaluation of the formats as well as quantitative evaluation of automatic analytics. The latter was carried out as part of our participation in the fourth edition of the BioCreative challenge. Our analytics built into Web services for recognizing biochemical concepts in BioC collections achieved the highest combined scores out of 10 participating teams. Database URL: http://argo.nactem.ac.uk. PMID:25006225
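The format customization described above, a service accepting one annotation representation and emitting another, reduces to converting between annotation structures. The "BioC-like" structure below is a simplified invention for illustration and does not follow the real BioC schema:

```python
# A simplified document-with-annotations structure (invented; not the
# actual BioC schema) and a converter to generic standoff tuples.
doc = {
    "text": "Aspirin inhibits COX-1.",
    "annotations": [
        {"offset": 0, "length": 7, "type": "Chemical"},
        {"offset": 17, "length": 5, "type": "Protein"},
    ],
}

def to_standoff(d):
    """Convert to (start, end, type, surface) tuples, recovering each
    annotation's surface text from the character offsets."""
    return [(a["offset"], a["offset"] + a["length"], a["type"],
             d["text"][a["offset"]:a["offset"] + a["length"]])
            for a in d["annotations"]]
```

A workbench like the one described would chain such converters at a service's input and output so the same processing pipeline can serve clients that speak different interchange formats.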

  6. Tool interoperability in SSE OI 2.0

    NASA Technical Reports Server (NTRS)

    Carmody, C. L.; Shotton, C. T.

    1988-01-01

    This paper presents a review of the concept and implementation of tool interoperability in the Space Station Software Support Environment (SSE) OI 2.0. By first providing a description of the SSE, the paper describes the problem at hand, that is, the nature of the SSE that gives rise to the requirement for interoperability between SSE workstations and, hence, between the tools which reside on the workstations. Specifically, word processor and graphic tool interoperability are discussed. The concept for interoperability that is implemented in OI 2.0 is described, as is an overview of the implementation strategy. Some of the significant challenges that the development team had to overcome to bring about interoperability are described, perhaps as a checklist, or warning, to others who would bring about tool interoperability. Lastly, plans to extend tool interoperability to a third class of tools in OI 3.0 are described.

  7. The Semantic eScience Framework

    NASA Astrophysics Data System (ADS)

    McGuinness, Deborah; Fox, Peter; Hendler, James

    2010-05-01

    The goal of this effort is to design and implement a configurable and extensible semantic eScience framework (SESF). Configuration requires research into accommodating different levels of semantic expressivity and user requirements from use cases. Extensibility is being achieved in a modular approach to the semantic encodings (i.e. ontologies) performed in community settings, i.e. an ontology framework into which specific applications all the way up to communities can extend the semantics for their needs. We report on how we are accommodating the rapid advances in semantic technologies and tools and the sustainable software path for the future (certain) technical advances. In addition to a generalization of the current data science interface, we will present plans for an upper-level interface suitable for use by clearinghouses, and/or educational portals, digital libraries, and other disciplines. SESF builds upon previous work in the Virtual Solar-Terrestrial Observatory. The VSTO utilizes leading edge knowledge representation, query and reasoning techniques to support knowledge-enhanced search, data access, integration, and manipulation. It encodes term meanings and their inter-relationships in ontologies and uses these ontologies and associated inference engines to semantically enable the data services. The Semantically-Enabled Science Data Integration (SESDI) project implemented data integration capabilities among three sub-disciplines (solar radiation, volcanic outgassing and atmospheric structure) using extensions to existing modular ontologies, and used the VSTO data framework while adding smart faceted search and semantic data registration tools. The Semantic Provenance Capture in Data Ingest Systems (SPCDIS) project has added explanation provenance capabilities to an observational data ingest pipeline for images of the Sun, providing a set of tools to answer diverse end-user questions such as "Why does this image look bad?". http://tw.rpi.edu/portal/SESF

  8. The Semantic eScience Framework

    NASA Astrophysics Data System (ADS)

    Fox, P. A.; McGuinness, D. L.

    2009-12-01

    The goal of this effort is to design and implement a configurable and extensible semantic eScience framework (SESF). Configuration requires research into accommodating different levels of semantic expressivity and user requirements from use cases. Extensibility is being achieved in a modular approach to the semantic encodings (i.e. ontologies) performed in community settings, i.e. an ontology framework into which specific applications all the way up to communities can extend the semantics for their needs. We report on how we are accommodating the rapid advances in semantic technologies and tools and the sustainable software path for the future (certain) technical advances. In addition to a generalization of the current data science interface, we will present plans for an upper-level interface suitable for use by clearinghouses, and/or educational portals, digital libraries, and other disciplines. SESF builds upon previous work in the Virtual Solar-Terrestrial Observatory. The VSTO utilizes leading edge knowledge representation, query and reasoning techniques to support knowledge-enhanced search, data access, integration, and manipulation. It encodes term meanings and their inter-relationships in ontologies and uses these ontologies and associated inference engines to semantically enable the data services. The Semantically-Enabled Science Data Integration (SESDI) project implemented data integration capabilities among three sub-disciplines (solar radiation, volcanic outgassing and atmospheric structure) using extensions to existing modular ontologies, and used the VSTO data framework while adding smart faceted search and semantic data registration tools. The Semantic Provenance Capture in Data Ingest Systems (SPCDIS) project has added explanation provenance capabilities to an observational data ingest pipeline for images of the Sun, providing a set of tools to answer diverse end-user questions such as "Why does this image look bad?".

  9. Semantic Networks and Social Networks

    ERIC Educational Resources Information Center

    Downes, Stephen

    2005-01-01

    Purpose: To illustrate the need for social network metadata within semantic metadata. Design/methodology/approach: Surveys properties of social networks and the semantic web, suggests that social network analysis applies to semantic content, argues that semantic content is more searchable if social network metadata is merged with semantic web…

  10. Publishing Metadata Specifications to Support Interoperability

    NASA Astrophysics Data System (ADS)

    Hedley, Mark

    2013-04-01

    Publishing a metadata specification commonly involves providing readers with a document that details the scheme. This method of publishing can significantly affect the ease of interfacing to data that use the defined scheme. There are particular impacts for developers and maintainers of data interface software and interoperability systems, for example: transcription errors in interpreters, and adapting tools to changes in metadata schemes. This presentation will investigate some of the considerations for publishing a metadata specification and the potential impacts of publishing options for developers working with file formats. It will present a possible solution to these challenges from a meteorology perspective and discuss how this approach may help to fundamentally reduce the barriers to interoperability between data formats.

  11. A Patient Safety Information Model for Interoperability.

    PubMed

    Rodrigues, Jean Marie; Dhingra-Kumar, Neelam; Schulz, Stefan; Souvignet, Julien

    2016-01-01

    Current systems that target Patient Safety (PS) like mandatory reporting systems and specific vigilance reporting systems share the same information types but are not interoperable. Ten years ago, WHO embarked on an international project to standardize quality management information systems for PS. The goal is to support interoperability between different systems in a country and to expand international sharing of data on quality and safety management particularly for less developed countries. Two approaches have been used: (i) a bottom-up one starting with existing national PS reporting and international or national vigilance systems, and (ii) a top-down approach that uses the Patient Safety Categorial Structure (PS-CAST) and the Basic Formal Ontology (BFO) upper level ontology versions 1 and 2. The output is currently tested as an integrated information system for quality and PS management in four WHO member states. PMID:27139388

  12. Network effects, cascades and CCP interoperability

    NASA Astrophysics Data System (ADS)

    Feng, Xiaobing; Hu, Haibo; Pritsker, Matthew

    2014-03-01

    To control counterparty risk, financial regulations such as the Dodd Frank Act are increasingly requiring standardized derivatives trades to be cleared by central counterparties (CCPs). It is anticipated that in the near-term future, CCPs across the world will be linked through interoperability agreements that facilitate risk-sharing but also serve as a conduit for transmitting shocks. This paper theoretically studies a network with CCPs that are linked through interoperability arrangements, and studies the properties of the network that contribute to cascading failures. The magnitude of the cascading is theoretically related to the strength of network linkages, the size of the network, the logistic mapping coefficient, a stochastic effect and CCP's defense lines. Simulations indicate that larger network effects increase systemic risk from cascading failures. The size of the network N raises the threshold value of shock sizes that are required to generate cascades. Hence, the larger the network, the more robust it will be.
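
    The cascade dynamics described above can be illustrated with a toy threshold model (the topology, loss sizes and default buffers below are invented for illustration; the paper's actual model couples a logistic map with stochastic shocks and CCP defense lines):

```python
# Toy contagion sketch: a failed CCP imposes losses on its interoperability
# partners; a partner fails when accumulated losses exceed its buffer.
# Topology, loss sizes and buffers are invented for illustration only.

def cascade(links, buffers, initial_failure, loss_per_link):
    """Return the set of CCPs that fail, starting from one initial failure."""
    failed = {initial_failure}
    losses = {n: 0.0 for n in buffers}
    frontier = [initial_failure]
    while frontier:
        nxt = []
        for f in frontier:
            for neighbor in links.get(f, ()):
                if neighbor in failed:
                    continue
                losses[neighbor] += loss_per_link
                if losses[neighbor] > buffers[neighbor]:
                    failed.add(neighbor)
                    nxt.append(neighbor)
        frontier = nxt
    return failed

# Four CCPs linked in a ring (A-B-C-D-A), one of them thinly capitalized.
links = {"A": ["B", "D"], "B": ["A", "C"], "C": ["B", "D"], "D": ["C", "A"]}
buffers = {"A": 5.0, "B": 0.5, "C": 5.0, "D": 5.0}
print(sorted(cascade(links, buffers, "A", 1.0)))  # -> ['A', 'B']
```

    In this toy run the shock stops after one hop because the remaining buffers absorb it; shrinking the buffers or raising the per-link loss extends the cascade, mirroring the paper's observation that stronger network linkages increase systemic risk.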

  13. BioC interoperability track overview

    PubMed Central

    Comeau, Donald C.; Batista-Navarro, Riza Theresa; Dai, Hong-Jie; Islamaj Doğan, Rezarta; Jimeno Yepes, Antonio; Khare, Ritu; Lu, Zhiyong; Marques, Hernani; Mattingly, Carolyn J.; Neves, Mariana; Peng, Yifan; Rak, Rafal; Rinaldi, Fabio; Tsai, Richard Tzong-Han; Verspoor, Karin; Wiegers, Thomas C.; Wu, Cathy H.; Wilbur, W. John

    2014-01-01

    BioC is a new simple XML format for sharing biomedical text and annotations and libraries to read and write that format. This promotes the development of interoperable tools for natural language processing (NLP) of biomedical text. The interoperability track at the BioCreative IV workshop featured contributions using or highlighting the BioC format. These contributions included additional implementations of BioC, many new corpora in the format, biomedical NLP tools consuming and producing the format and online services using the format. The ease of use, broad support and rapidly growing number of tools demonstrate the need for and value of the BioC format. Database URL: http://bioc.sourceforge.net/ PMID:24980129
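
    As a rough sketch of the kind of interoperable processing BioC enables, the following Python snippet builds and reads a minimal BioC-style collection (the element names follow BioC conventions, but the instance is simplified and illustrative rather than schema-validated):

```python
import xml.etree.ElementTree as ET

# A minimal BioC-style collection: a collection holds documents, documents
# hold passages, and passages carry text plus stand-off annotations.
BIOC_XML = """<collection>
  <source>example</source>
  <document>
    <id>1</id>
    <passage>
      <offset>0</offset>
      <text>Aspirin inhibits cyclooxygenase.</text>
      <annotation id="T1">
        <infon key="type">chemical</infon>
        <location offset="0" length="7"/>
        <text>Aspirin</text>
      </annotation>
    </passage>
  </document>
</collection>"""

def annotations(xml_text):
    """Yield (id, type, surface text) for every annotation in a collection."""
    root = ET.fromstring(xml_text)
    for ann in root.iter("annotation"):
        ann_type = next((i.text for i in ann.findall("infon")
                         if i.get("key") == "type"), None)
        yield ann.get("id"), ann_type, ann.findtext("text")

print(list(annotations(BIOC_XML)))  # -> [('T1', 'chemical', 'Aspirin')]
```

    Because every tool in a BioC pipeline reads and writes the same simple structure, a consumer like this needs no knowledge of which tool produced the annotations.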

  14. Semantic prosody and judgment.

    PubMed

    Hauser, David J; Schwarz, Norbert

    2016-07-01

    Some words tend to co-occur exclusively with a positive or negative context in natural language use, even though such valence patterns are not dictated by definitions or are part of the words' core meaning. These words contain semantic prosody, a subtle valenced meaning derived from co-occurrence in language. As language and thought are heavily intertwined, we hypothesized that semantic prosody can affect evaluative inferences about related ambiguous concepts. Participants inferred that an ambiguous medical outcome was more negative when it was caused, a verb with negative semantic prosody, than when it was produced, a synonymous verb with no semantic prosody (Studies 1a, 1b). Participants completed sentence fragments in a manner consistent with semantic prosody (Study 2), and semantic prosody affected various other judgments in line with evaluative inferences (estimates of an event's likelihood in Study 3). Finally, semantic prosody elicited both positive and negative evaluations of outcomes across a large set of semantically prosodic verbs (Study 4). Thus, semantic prosody can exert a strong influence on evaluative judgment. (PsycINFO Database Record) PMID:27243765

  15. Testbed for Satellite and Terrestrial Interoperability (TSTI)

    NASA Technical Reports Server (NTRS)

    Gary, J. Patrick

    1998-01-01

    Various issues associated with the "Testbed for Satellite and Terrestrial Interoperability (TSTI)" are presented in viewgraph form. Specific topics include: 1) General and specific scientific technical objectives; 2) ACTS experiment No. 118: 622 Mbps network tests between ATDNet and MAGIC via ACTS; 3) ATDNet SONET/ATM gigabit network; 4) Testbed infrastructure, collaborations and end sites in TSTI based evaluations; 5) the Trans-Pacific digital library experiment; and 6) ESDCD on-going network projects.

  16. Future Interoperability of Camp Protection Systems (FICAPS)

    NASA Astrophysics Data System (ADS)

    Caron, Sylvie; Gündisch, Rainer; Marchand, Alain; Stahl, Karl-Hermann

    2013-05-01

    The FICAPS Project has been established as a Project of the European Defence Agency based on an initiative of Germany and France. The goal of this Project was to derive Guidelines which, by proper implementation in future developments, improve Camp Protection Systems (CPS) by enabling and improving interoperability between the Camp Protection Systems and equipment of the different Nations involved in multinational missions. These Guidelines shall allow for: • Real-time information exchange between equipment and systems of different suppliers and nations (even via SatCom), • Quick and easy replacement of equipment (even of different Nations) at run-time in the field by means of plug-and-play capability, thus lowering the operational and logistic costs and making the system highly available, • Enhancement of system capabilities (open and modular systems) by adding new equipment with new capabilities (just plug in; automatic adjustment of the HMI, Human Machine Interface) without costly and time-consuming validation and test on system level (validation and test can be done on Equipment level). Four scenarios have been identified to summarize the interoperability requirements from an operational viewpoint. To prove the definitions given in the Guideline Document, a French and a German Demonstration System, based on existing national assets, were realized. Demonstrations, showing the capabilities given by the defined interoperability requirements with respect to the operational scenarios, were performed. Demonstrations included remote control of a CPS by another CPS, remote sensor control (Electro-Optic/InfraRed, EO/IR) and remote effector control. This capability can be applied to extend the protection area or to protect distant infrastructural assets. The required interoperability functionality was shown successfully. Even if the focus of the FICAPS project was on camp protection, the solution found is also appropriate for other

  17. Designing Interoperable Data Products with Community Conventions

    NASA Astrophysics Data System (ADS)

    Habermann, T.; Jelenak, A.; Lee, H.

    2015-12-01

    The HDF Product Designer (HPD) is a cloud-based client-server collaboration tool that can bring existing netCDF-3/4/CF, HDF4/5, and HDF-EOS2/5 products together to create new interoperable data products that serve the needs of the Earth Science community. The tool is designed to reduce the burden of creating and storing data in standards-compliant, interoperable HDF5 files and lower the technical and programming skill threshold needed to design such products by providing a user interface that combines the netCDF-4/HDF5 interoperable feature set with applicable metadata conventions. Users can collaborate quickly to devise new HDF5 products while at the same time seamlessly incorporating the latest best practices and conventions in their community by importing existing data products. The tool also incorporates some expert system features through CLIPS, allowing custom approaches in the file design, as well as easy transfer of preferred conventions as they are being developed. The current state of the tool and the plans for future development will be presented. Constructive input from any interested parties is always welcome.

  18. The Challenges of Interoperable Data Discovery

    NASA Technical Reports Server (NTRS)

    Meaux, Melanie F.

    2005-01-01

    The Global Change Master Directory (GCMD) assists the oceanographic community in data discovery and access through its online metadata directory. The directory also offers data holders a means to post and search their oceanographic data through the GCMD portals, i.e. online customized subset metadata directories. The Gulf of Maine Ocean Data Partnership (GoMODP) has expressed interest in using the GCMD portals to increase the visibility of their data holdings throughout the Gulf of Maine region and beyond. The purpose of the GoMODP is to "promote and coordinate the sharing, linking, electronic dissemination, and use of data on the Gulf of Maine region". The participants have decided that a "coordinated effort is needed to enable users throughout the Gulf of Maine region and beyond to discover and put to use the vast and growing quantities of data in their respective databases". GoMODP members have invited the GCMD to discuss further collaborations in view of this effort. This presentation will focus on the GCMD GoMODP Portal, demonstrating its content and use for data discovery, and will discuss the challenges of interoperable data discovery. Interoperability among metadata standards and vocabularies will be discussed. A short overview of the lessons learned at the Marine Metadata Interoperability (MMI) metadata workshop held in Boulder, Colorado on August 9-11, 2005 will be given.

  19. Interoperability of satellite-based augmentation systems for aircraft navigation

    NASA Astrophysics Data System (ADS)

    Dai, Donghai

    The Federal Aviation Administration (FAA) is pioneering a transformation of the national airspace system from its present ground based navigation and landing systems to a satellite based system using the Global Positioning System (GPS). To meet the critical safety-of-life aviation positioning requirements, a Satellite-Based Augmentation System (SBAS), the Wide Area Augmentation System (WAAS), is being implemented to support navigation for all phases of flight, including Category I precision approach. The system is designed to be used as a primary means of navigation, capable of meeting the Required Navigation Performance (RNP), and therefore must satisfy the accuracy, integrity, continuity and availability requirements. In recent years there has been international acceptance of Global Navigation Satellite Systems (GNSS), spurring widespread growth in the independent development of SBASs. Besides the FAA's WAAS, the European Geostationary Navigation Overlay Service System (EGNOS) and the Japan Civil Aviation Bureau's MTSAT-Satellite Augmentation System (MSAS) are also being actively developed. Although all of these SBASs can operate as stand-alone, regional systems, there is increasing interest in linking these SBASs together to reduce costs while improving service coverage. This research investigated the coverage and availability improvements due to cooperative efforts among regional SBAS networks. The primary goal was to identify the optimal interoperation strategies in terms of performance, complexity and practicality. The core algorithms associated with the most promising concepts were developed and demonstrated. Experimental verification of the most promising concepts was conducted using data collected from a joint international test between the National Satellite Test Bed (NSTB) and the EGNOS System Test Bed (ESTB). This research clearly shows that a simple switch between SBASs made by the airborne equipment is the most effective choice for achieving the

  20. Semantic Alignment between ICD-11 and SNOMED CT.

    PubMed

    Rodrigues, Jean-Marie; Robinson, David; Della Mea, Vincenzo; Campbell, James; Rector, Alan; Schulz, Stefan; Brear, Hazel; Üstün, Bedirhan; Spackman, Kent; Chute, Christopher G; Millar, Jane; Solbrig, Harold; Brand Persson, Kristina

    2015-01-01

    Due to fundamental differences in design and editorial policies, semantic interoperability between two de facto standard terminologies in the healthcare domain--the International Classification of Diseases (ICD) and SNOMED CT (SCT), requires combining two different approaches: (i) axiom-based, which states logically what is universally true, using an ontology language such as OWL; (ii) rule-based, expressed as queries on the axiom-based knowledge. We present the ICD-SCT harmonization process including: a) a new architecture for ICD-11, b) a protocol for the semantic alignment of ICD and SCT, and c) preliminary results of the alignment applied to more than half the domain currently covered by the draft ICD-11. PMID:26262160

  1. A Prototype Ontology Tool and Interface for Coastal Atlas Interoperability

    NASA Astrophysics Data System (ADS)

    Wright, D. J.; Bermudez, L.; O'Dea, L.; Haddad, T.; Cummins, V.

    2007-12-01

    While significant capacity has been built in the field of web-based coastal mapping and informatics in the last decade, little has been done to take stock of the implications of these efforts or to identify best practice in terms of taking lessons learned into consideration. This study reports on the second of two transatlantic workshops that bring together key experts from Europe, the United States and Canada to examine state-of-the-art developments in coastal web atlases (CWA), based on web-enabled geographic information systems (GIS), along with future needs in mapping and informatics for the coastal practitioner community. While multiple benefits are derived from these tailor-made atlases (e.g. speedy access to multiple sources of coastal data and information; economic use of time by avoiding individual contact with different data holders), the potential exists to derive added value from the integration of disparate CWAs, to optimize decision-making at a variety of levels and across themes. The second workshop focused on the development of a strategy to make coastal web atlases interoperable by way of controlled vocabularies and ontologies. The strategy is based on a web service oriented architecture and an implementation of Open Geospatial Consortium (OGC) web services, such as Web Feature Services (WFS) and Web Map Service (WMS). The atlases publish Catalogue Services for the Web (CSW) using ISO 19115 metadata and controlled vocabularies encoded as Uniform Resource Identifiers (URIs). URIs allow the terminology of each atlas to be uniquely identified and facilitate the mapping of terminologies using semantic web technologies. A domain ontology was also created to formally represent coastal erosion terminology as a use case, with a test linkage of those terms between the Marine Irish Digital Atlas and the Oregon Coastal Atlas. A web interface is being developed to discover coastal hazard themes in distributed coastal atlases as part of a broader International Coastal
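
    The URI-based mediation of atlas vocabularies can be sketched as follows (the atlas terms and concept URIs are invented examples, not the actual MIDA or Oregon Coastal Atlas vocabularies):

```python
# Sketch of cross-atlas term mediation via shared concept URIs: each atlas
# keeps its own vocabulary, and mapping both to common URIs lets a query
# phrased in one atlas's terms find records in another. All terms and URIs
# below are invented for illustration.

MIDA = {"coastal erosion": "http://example.org/concept/erosion"}   # Irish atlas
OCA = {"shoreline retreat": "http://example.org/concept/erosion"}  # Oregon atlas

def translate(term, source_vocab, target_vocab):
    """Map a term from one atlas vocabulary to another via its concept URI."""
    uri = source_vocab.get(term.lower())
    if uri is None:
        return []
    return [t for t, u in target_vocab.items() if u == uri]

print(translate("Coastal Erosion", MIDA, OCA))  # -> ['shoreline retreat']
```

    The point of the URI indirection is that neither atlas needs to know the other's terminology; both only commit to the shared concept identifiers.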

  2. IHE cross-enterprise document sharing for imaging: interoperability testing software

    PubMed Central

    2010-01-01

    Background With the deployments of Electronic Health Records (EHR), interoperability testing in healthcare is becoming crucial. EHR enables access to prior diagnostic information in order to assist in health decisions. It is a virtual system that results from the cooperation of several heterogeneous distributed systems. Interoperability between peers is therefore essential. Achieving interoperability requires various types of testing. Implementations need to be tested using software that simulates communication partners, and that provides test data and test plans. Results In this paper we describe software that is used to test systems involved in sharing medical images within the EHR. Our software is used as part of the Integrating the Healthcare Enterprise (IHE) testing process to test the Cross Enterprise Document Sharing for Imaging (XDS-I) integration profile. We describe its architecture and functionalities; we also expose the challenges encountered and discuss the elected design solutions. Conclusions EHR is being deployed in several countries. The EHR infrastructure will be continuously evolving to embrace advances in the information technology domain. Our software is built on a web framework to allow for easy evolution with web technology. The testing software is publicly available; it can be used by system implementers to test their implementations. It can also be used by site integrators to verify and test the interoperability of systems, or by developers to understand specification ambiguities, or to resolve implementation difficulties. PMID:20858241

  3. Communication: General Semantics Perspectives.

    ERIC Educational Resources Information Center

    Thayer, Lee, Ed.

    This book contains the edited papers from the eleventh International Conference on General Semantics, titled "A Search for Relevance." The conference questioned, as a central theme, the relevance of general semantics in a world of wars and human misery. Reacting to a fundamental Korzybski-ian principle that man's view of reality is distorted by…

  4. The Semantic Learning Organization

    ERIC Educational Resources Information Center

    Sicilia, Miguel-Angel; Lytras, Miltiadis D.

    2005-01-01

    Purpose: The aim of this paper is introducing the concept of a "semantic learning organization" (SLO) as an extension of the concept of "learning organization" in the technological domain. Design/methodology/approach: The paper takes existing definitions and conceptualizations of both learning organizations and Semantic Web technology to develop…

  5. Aging and Semantic Activation.

    ERIC Educational Resources Information Center

    Howard, Darlene V.

    Three studies tested the theory that long term memory consists of a semantically organized network of concept nodes interconnected by leveled associations or relations, and that when a stimulus is processed, the corresponding concept node is assumed to be temporarily activated and this activation spreads to nearby semantically related nodes. In…

  6. Developing enterprise collaboration: a methodology to implement and improve interoperability

    NASA Astrophysics Data System (ADS)

    Daclin, Nicolas; Chen, David; Vallespir, Bruno

    2016-06-01

    The aim of this article is to present a methodology for guiding enterprises to implement and improve interoperability. This methodology is based on three components: a framework of interoperability which structures specific solutions of interoperability and is composed of abstraction levels, views and approaches dimensions; a method to measure interoperability including interoperability before (maturity) and during (operational performances) a partnership; and a structured approach defining the steps of the methodology, from the expression of an enterprise's needs to implementation of solutions. The relationship which consistently relates these components forms the methodology and enables developing interoperability in a step-by-step manner. Each component of the methodology and the way it operates is presented. The overall approach is illustrated in a case study example on the basis of a process between a given company and its dealers. Conclusions and future perspectives are given at the end of the article.

  7. Toward an E-Government Semantic Platform

    NASA Astrophysics Data System (ADS)

    Sbodio, Marco Luca; Moulin, Claude; Benamou, Norbert; Barthès, Jean-Paul

    This chapter describes the major aspects of an e-government platform in which semantics underpins more traditional technologies in order to enable new capabilities and to overcome technical and cultural challenges. The design and development of such an e-government Semantic Platform has been conducted with the financial support of the European Commission through the Terregov research project: "Impact of e-government on Territorial Government Services" (Terregov 2008). The goal of this platform is to let local government and government agencies offer online access to their services in an interoperable way, and to allow them to participate in orchestrated processes involving services provided by multiple agencies. Implementing a business process through an electronic procedure is indeed a core goal in any networked organization. However, the field of e-government brings specific constraints to the operations allowed in procedures, especially concerning the flow of private citizens' data: because of legal reasons in most countries, such data are allowed to circulate only from agency to agency directly. In order to promote transparency and responsibility in e-government while respecting the specific constraints on data flows, Terregov supports the creation of centrally controlled orchestrated processes; while the cross agencies data flows are centrally managed, data flow directly across agencies.

  8. Semantically linking in silico cancer models.

    PubMed

    Johnson, David; Connor, Anthony J; McKeever, Steve; Wang, Zhihui; Deisboeck, Thomas S; Quaiser, Tom; Shochat, Eliezer

    2014-01-01

    Multiscale models are commonplace in cancer modeling, where individual models acting on different biological scales are combined within a single, cohesive modeling framework. However, model composition gives rise to challenges in understanding interfaces and interactions between them. Based on specific domain expertise, typically these computational models are developed by separate research groups using different methodologies, programming languages, and parameters. This paper introduces a graph-based model for semantically linking computational cancer models via domain graphs that can help us better understand and explore combinations of models spanning multiple biological scales. We take the data model encoded by TumorML, an XML-based markup language for storing cancer models in online repositories, and transpose its model description elements into a graph-based representation. By taking such an approach, we can link domain models, such as controlled vocabularies, taxonomic schemes, and ontologies, with cancer model descriptions to better understand and explore relationships between models. The union of these graphs creates a connected property graph that links cancer models by categorizations, by computational compatibility, and by semantic interoperability, yielding a framework in which opportunities for exploration and discovery of combinations of models become possible. PMID:25520553
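
    The linking idea can be sketched as a tiny property graph in Python, with edges drawn between models that share a category label (the model names and properties are invented; in practice they would come from TumorML model descriptions):

```python
# Sketch of a property graph over cancer-model descriptions: nodes are
# models with property labels, and edges connect models sharing a value
# for a chosen property (e.g. biological scale, implementation language).
# Model names and properties are invented for illustration.

MODELS = {
    "angiogenesis-ode": {"scale": "tissue", "language": "C++"},
    "cell-cycle-abm": {"scale": "cell", "language": "C++"},
    "vegf-diffusion": {"scale": "tissue", "language": "Python"},
}

def linked(models, key):
    """Edges between models that share the same value for a given property."""
    edges = set()
    names = sorted(models)
    for i, a in enumerate(names):
        for b in names[i + 1:]:
            if models[a][key] == models[b][key]:
                edges.add((a, b))
    return edges

print(linked(MODELS, "scale"))     # models at the same biological scale
print(linked(MODELS, "language"))  # computationally compatible models
```

    Taking the union of the per-property edge sets yields the connected property graph described in the abstract, over which compatible model combinations can be explored.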

  9. Semantically Linking In Silico Cancer Models

    PubMed Central

    Johnson, David; Connor, Anthony J; McKeever, Steve; Wang, Zhihui; Deisboeck, Thomas S; Quaiser, Tom; Shochat, Eliezer

    2014-01-01

    Multiscale models are commonplace in cancer modeling, where individual models acting on different biological scales are combined within a single, cohesive modeling framework. However, model composition gives rise to challenges in understanding interfaces and interactions between them. Based on specific domain expertise, typically these computational models are developed by separate research groups using different methodologies, programming languages, and parameters. This paper introduces a graph-based model for semantically linking computational cancer models via domain graphs that can help us better understand and explore combinations of models spanning multiple biological scales. We take the data model encoded by TumorML, an XML-based markup language for storing cancer models in online repositories, and transpose its model description elements into a graph-based representation. By taking such an approach, we can link domain models, such as controlled vocabularies, taxonomic schemes, and ontologies, with cancer model descriptions to better understand and explore relationships between models. The union of these graphs creates a connected property graph that links cancer models by categorizations, by computational compatibility, and by semantic interoperability, yielding a framework in which opportunities for exploration and discovery of combinations of models become possible. PMID:25520553

  10. Order Theoretical Semantic Recommendation

    SciTech Connect

    Joslyn, Cliff A.; Hogan, Emilie A.; Paulson, Patrick R.; Peterson, Elena S.; Stephan, Eric G.; Thomas, Dennis G.

    2013-07-23

    Mathematical concepts of order and ordering relations play multiple roles in semantic technologies. Discrete totally ordered data characterize both input streams and top-k rank-ordered recommendations and query output, while temporal attributes establish numerical total orders, either over time points or, in the more complex case, over start-end temporal intervals. But also of note are the genuinely partially ordered data, including both lattices and non-lattices, which actually dominate the semantic structure of ontological systems. Scalar semantic similarities over partially ordered semantic data are traditionally used to return rank-ordered recommendations, but these require complementation with true metrics available over partially ordered sets. In this paper we report on our work in the foundations of partial order measurement in ontologies, with application to top-k semantic recommendation in workflows.
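
    The interplay between partial orders and top-k recommendation can be sketched as follows; the tiny taxonomy, the ancestor-set symmetric-difference metric, and the ranking are illustrative choices for this note, not the paper's actual measures.

```python
# Sketch of rank-ordered recommendation over a partially ordered
# vocabulary: the distance between two terms is the size of the symmetric
# difference of their ancestor sets (one simple way to get a true metric
# on a poset, unlike a raw similarity score). The taxonomy is hypothetical.

TAXONOMY = {                      # child -> parents (a DAG, not a tree)
    "thing": [],
    "disease": ["thing"],
    "infection": ["disease"],
    "viral-infection": ["infection"],
    "bacterial-infection": ["infection"],
    "cancer": ["disease"],
}

def ancestors(term):
    """The term plus all of its ancestors under the partial order."""
    seen = {term}
    stack = [term]
    while stack:
        for parent in TAXONOMY[stack.pop()]:
            if parent not in seen:
                seen.add(parent)
                stack.append(parent)
    return seen

def distance(a, b):
    # Symmetric difference of ancestor sets: 0 iff a == b, symmetric,
    # and satisfies the triangle inequality.
    return len(ancestors(a) ^ ancestors(b))

def top_k(query, k):
    """Rank all other terms by increasing distance from the query."""
    candidates = [t for t in TAXONOMY if t != query]
    return sorted(candidates, key=lambda t: distance(query, t))[:k]

print(top_k("viral-infection", 2))
```

    The point of the metric (as opposed to a similarity) is that rankings derived from it behave consistently under the poset structure, which is what the abstract argues scalar similarities alone cannot guarantee.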

  11. Telemedicine system interoperability architecture: concept description and architecture overview.

    SciTech Connect

    Craft, Richard Layne, II

    2004-05-01

    In order for telemedicine to realize the vision of anywhere, anytime access to care, it must address the question of how to create a fully interoperable infrastructure. This paper describes the reasons for pursuing interoperability, outlines operational requirements that any interoperability approach needs to consider, proposes an abstract architecture for meeting these needs, identifies candidate technologies that might be used for rendering this architecture, and suggests a path forward that the telemedicine community might follow.

  12. Semantics-informed cartography: the case of Piemonte Geological Map

    NASA Astrophysics Data System (ADS)

    Piana, Fabrizio; Lombardo, Vincenzo; Mimmo, Dario; Giardino, Marco; Fubelli, Giandomenico

    2016-04-01

    In modern digital geological maps, namely those supported by a large geo-database and devoted to dynamical, interactive representation on WMS-WebGIS services, there is a need to make explicit the geological assumptions used in designing and compiling the Map database, and to define or adopt semantic representations and taxonomies, in order to achieve a formal and interoperable representation of the geologic knowledge. These approaches are fundamental for the integration and harmonisation of geological information and services across cultural (e.g. different scientific disciplines) and/or physical barriers (e.g. administrative boundaries). Initiatives such as GeoScience Markup Language (last version is GeoSciML 4.0, 2015, http://www.geosciml.org) and the INSPIRE "Data Specification on Geology" http://inspire.jrc.ec.europa.eu/documents/Data_Specifications/INSPIRE_DataSpecification_GE_v3.0rc3.pdf (an operative simplification of GeoSciML, last version is 3.0 rc3, 2013), as well as the recent terminological shepherding of the Geoscience Terminology Working Group (GTWG), have been promoting information exchange of the geologic knowledge. Grounded on these standard vocabularies, schemas and data models, we provide a shared semantic classification of geological data referring to the study case of the synthetic digital geological map of the Piemonte region (NW Italy), named "GEOPiemonteMap", developed by the CNR Institute of Geosciences and Earth Resources, Torino (CNR IGG TO) and hosted as a dynamical interactive map on the geoportal of ARPA Piemonte Environmental Agency. The Piemonte Geological Map is grounded on a regional-scale geo-database consisting of some hundreds of GeologicUnits whose thousands of instances (Mapped Features, polygon geometry) widely occur in the Piemonte region, each one bounded by GeologicStructures (Mapped Features, line geometry). GeologicUnits and GeologicStructures have been spatially

  13. Semantics, Pragmatics, and the Nature of Semantic Theories

    ERIC Educational Resources Information Center

    Spewak, David Charles, Jr.

    2013-01-01

    The primary concern of this dissertation is determining the distinction between semantics and pragmatics and how context sensitivity should be accommodated within a semantic theory. I approach the question of how to distinguish semantics from pragmatics from a new angle by investigating what the objects of a semantic theory are, namely…

  14. Semantic Visualization Mapping for Illustrative Volume Visualization

    NASA Astrophysics Data System (ADS)

    Rautek, P.; Bruckner, S.; Gröller, M. E.

    2009-04-01

    Measured and simulated data is usually divided into several meaningful intervals that are relevant to the domain expert. Examples from medicine are the specific semantics for different measuring modalities. A PET scan of a brain measures brain activity. It shows regions of homogeneous activity that are labeled by experts with semantic values such as low brain activity or high brain activity. Diffusion MRI data provides information about the healthiness of tissue regions and is classified by experts with semantic values like healthy, diseased, or necrotic. Medical CT data encode the measured density values in Hounsfield units. Specific intervals of the Hounsfield scale refer to different tissue types like air, soft tissue, bone, contrast enhanced vessels, etc. However, the semantic parameters from expert domains are not necessarily used to describe a mapping between the volume attributes and visual appearance. Volume rendering techniques commonly map attributes of the underlying data on visual appearance via a transfer function. Transfer functions are a powerful tool to achieve various visualization mappings. The specification of transfer functions is a complex task. The user has to have expert knowledge about the underlying rendering technique to achieve the desired results. Especially the specification of higher-dimensional transfer functions is challenging. Common user interfaces provide methods to brush in two dimensions. While brushing is an intuitive method to select regions of interest or to specify features, user interfaces for higher dimensions are more challenging and often non-intuitive. For seismic data the situation is even more difficult since the data typically consists of many more volumetric attributes than for example medical datasets. Scientific illustrators are experts in conveying information by visual means. They also make use of semantics in a natural way describing visual abstractions such as shading, tone, rendering style, saturation

  15. S3QL: A distributed domain specific language for controlled semantic integration of life sciences data

    PubMed Central

    2011-01-01

    Background The value and usefulness of data increases when it is explicitly interlinked with related data. This is the core principle of Linked Data. For life sciences researchers, harnessing the power of Linked Data to improve biological discovery is still challenged by a need to keep pace with rapidly evolving domains and requirements for collaboration and control as well as with the reference semantic web ontologies and standards. Knowledge organization systems (KOSs) can provide an abstraction for publishing biological discoveries as Linked Data without complicating transactions with contextual minutia such as provenance and access control. We have previously described the Simple Sloppy Semantic Database (S3DB) as an efficient model for creating knowledge organization systems using Linked Data best practices with explicit distinction between domain and instantiation and support for a permission control mechanism that automatically migrates between the two. In this report we present a domain specific language, the S3DB query language (S3QL), to operate on its underlying core model and facilitate management of Linked Data. Results Reflecting the data-driven nature of our approach, S3QL has been implemented as an application programming interface for S3DB systems hosting biomedical data, and its syntax was subsequently generalized beyond the S3DB core model. This achievement is illustrated with the assembly of an S3QL query to manage entities from the Simple Knowledge Organization System. The illustrative use cases include gastrointestinal clinical trials, genomic characterization of cancer by The Cancer Genome Atlas (TCGA) and molecular epidemiology of infectious diseases. Conclusions S3QL was found to provide a convenient mechanism to represent context for interoperation between public and private datasets hosted at biomedical research institutions and linked data formalisms. PMID:21756325

  16. Chandrayaan-1 Data Interoperability using PDAP

    NASA Astrophysics Data System (ADS)

    Thakkar, Navita; Crichton, Daniel; Heather, David; Gopala Krishna, Barla; Srinivasan, T. P.; Prashar, Ajay

    Indian Space Science Data Center (ISSDC) at Bangalore is the custodian of all the data sets of the current and future science missions of ISRO. Chandrayaan-1 is the first among the planetary missions launched by ISRO. The data collected from all the instruments during the lifetime of Chandrayaan-1 is peer-reviewed and archived as a Long Term Archive (LTA) using the Planetary Data System standards (PDS 3) at the ISSDC. In order to increase the use of the archived data, it needs to be made accessible to the scientific community and academia in a seamless manner across the globe. The IPDA (International Planetary Data Alliance), among its objectives, aims to allow the interoperability and interchange of planetary scientific data among the planetary community. It has recommended PDAP (Planetary Data Access Protocol) v1.0 for implementation as an interoperability protocol for accessing planetary data archives. PDAP is a simple protocol for retrieving planetary data from repositories through a uniform interface. PDAP compliance requires an access web service to be maintained with the characteristics of the Metadata Query web method and the Data Retrieval web method. The PDAP interface will provide the metadata services for Chandrayaan-1 datasets and return a list of candidate hits formatted as a VOTable. For each candidate hit, an access reference URL is used to retrieve the real data. This will be integrated with the IPDA Registry and Search Services. This paper presents the prototype of interoperable systems for Chandrayaan-1 planetary datasets using PDAP.
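
    The metadata-query/data-retrieval flow can be sketched in Python; the endpoint, query parameters, and the canned VOTable below are illustrative stand-ins, not the exact PDAP v1.0 interface (real VOTables also carry namespaces and richer metadata).

```python
# Hedged sketch of a PDAP-style flow: build a metadata query against an
# access service and read candidate hits from a VOTable. Endpoint and
# parameter names are illustrative; the VOTable is a canned example
# rather than a live response.
from urllib.parse import urlencode
import xml.etree.ElementTree as ET

ENDPOINT = "https://example.org/pdap/metadata"   # hypothetical service

def metadata_query_url(**params):
    return ENDPOINT + "?" + urlencode(params)

SAMPLE_VOTABLE = """<VOTABLE><RESOURCE><TABLE>
  <FIELD name="granule_id"/><FIELD name="access_url"/>
  <DATA><TABLEDATA>
    <TR><TD>CH1-TMC-001</TD><TD>https://example.org/data/CH1-TMC-001</TD></TR>
  </TABLEDATA></DATA>
</TABLE></RESOURCE></VOTABLE>"""

def candidate_hits(votable_xml):
    """Pair each TR row with the declared FIELD names."""
    root = ET.fromstring(votable_xml)
    fields = [f.get("name") for f in root.iter("FIELD")]
    return [dict(zip(fields, (td.text for td in tr)))
            for tr in root.iter("TR")]

url = metadata_query_url(mission="CH1", instrument="TMC")
hits = candidate_hits(SAMPLE_VOTABLE)
print(url)
print(hits[0]["access_url"])
```

    Each hit's access reference URL would then be dereferenced by the Data Retrieval web method to fetch the actual product.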

  17. Interoperability challenges in river discharge modelling

    NASA Astrophysics Data System (ADS)

    Santoro, Mattia; Schlummer, Manuela; Andres, Volker; Jirka, Simon; Looser, Ulrich; Mladek, Richard; Pappenberger, Florian; Strauch, Adrian; Utech, Michael; Zsoter, Ervin

    2014-05-01

    River discharge is a critical water cycle variable, as it integrates all the processes (e.g. runoff and evapotranspiration) occurring within a river basin and provides a hydrological output variable that can be readily measured. Its prediction is of invaluable help for many water-related areas such as water resources assessment and management, as well as flood protection and disaster mitigation. Observations of river discharge are very important for the calibration and validation of hydrological or coupled land, atmosphere and ocean models. This requires the use of data from different scientific domains (Water, Weather, etc.). Typically, such data are provided using different technological solutions and formats. This complicates the integration of new hydrological data sources into application systems. Therefore, a considerable effort is often spent on data access issues instead of the actual scientific question. In the context of the FP7 funded project GEOWOW (GEOSS Interoperability for Weather, Ocean and Water), the "River Discharge" use scenario was developed in order to combine river discharge observations data from the Global Runoff Data Center (GRDC) database and model outputs produced by the European Centre for Medium-Range Weather Forecasts (ECMWF) predicting river discharge based on weather forecast information. In this presentation we describe interoperability solutions which were adopted in order to address the technological challenges of the "River Discharge" use scenario: 1) Development of a Hydrology Profile for the OGC SOS 2.0 standard; 2) Enhancement of the GEO DAB (Discovery and Access Broker) to support the use scenario: 2.1) Develop new interoperability arrangements for GRDC and ECMWF capacities; 2.2) Select multiple time series for comparison. The development of the above functionalities and tools aims to respond to the need of Water and Weather scientists to assess river discharge forecasting models.

  18. Advancing translational research with the Semantic Web

    PubMed Central

    Ruttenberg, Alan; Clark, Tim; Bug, William; Samwald, Matthias; Bodenreider, Olivier; Chen, Helen; Doherty, Donald; Forsberg, Kerstin; Gao, Yong; Kashyap, Vipul; Kinoshita, June; Luciano, Joanne; Marshall, M Scott; Ogbuji, Chimezie; Rees, Jonathan; Stephens, Susie; Wong, Gwendolyn T; Wu, Elizabeth; Zaccagnini, Davide; Hongsermeier, Tonya; Neumann, Eric; Herman, Ivan; Cheung, Kei-Hoi

    2007-01-01

    Background A fundamental goal of the U.S. National Institute of Health (NIH) "Roadmap" is to strengthen Translational Research, defined as the movement of discoveries in basic research to application at the clinical level. A significant barrier to translational research is the lack of uniformly structured data across related biomedical domains. The Semantic Web is an extension of the current Web that enables navigation and meaningful use of digital resources by automatic processes. It is based on common formats that support aggregation and integration of data drawn from diverse sources. A variety of technologies have been built on this foundation that, together, support identifying, representing, and reasoning across a wide range of biomedical data. The Semantic Web Health Care and Life Sciences Interest Group (HCLSIG), set up within the framework of the World Wide Web Consortium, was launched to explore the application of these technologies in a variety of areas. Subgroups focus on making biomedical data available in RDF, working with biomedical ontologies, prototyping clinical decision support systems, working on drug safety and efficacy communication, and supporting disease researchers navigating and annotating the large amount of potentially relevant literature. Results We present a scenario that shows the value of the information environment the Semantic Web can support for aiding neuroscience researchers. We then report on several projects by members of the HCLSIG, in the process illustrating the range of Semantic Web technologies that have applications in areas of biomedicine. Conclusion Semantic Web technologies present both promise and challenges. Current tools and standards are already adequate to implement components of the bench-to-bedside vision. On the other hand, these technologies are young. Gaps in standards and implementations still exist and adoption is limited by typical problems with early technology, such as the need for a critical mass of

  19. A Semantic Graph Query Language

    SciTech Connect

    Kaplan, I L

    2006-10-16

    Semantic graphs can be used to organize large amounts of information from a number of sources into one unified structure. A semantic query language provides a foundation for extracting information from the semantic graph. The graph query language described here provides a simple, powerful method for querying semantic graphs.
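
    The idea of extracting information from a semantic graph can be illustrated with a toy triple-pattern matcher; the graph contents, query shape, and "?variable" convention here are invented for illustration and are not the query language the abstract describes.

```python
# A semantic graph as a set of (subject, predicate, object) triples, and
# a query as a list of triple patterns whose "?variables" must bind
# consistently across patterns. This is a toy matcher, not the language
# described in the abstract.

GRAPH = {
    ("alice", "works_at", "lab1"),
    ("bob", "works_at", "lab1"),
    ("lab1", "studies", "genomics"),
}

def match(patterns, graph):
    """Return all variable bindings satisfying every pattern."""
    bindings = [{}]
    for pattern in patterns:
        next_bindings = []
        for env in bindings:
            for triple in graph:
                new = dict(env)
                ok = True
                for p, t in zip(pattern, triple):
                    if p.startswith("?"):
                        # Bind the variable, or reject on conflict.
                        if new.setdefault(p, t) != t:
                            ok = False
                            break
                    elif p != t:
                        ok = False
                        break
                if ok:
                    next_bindings.append(new)
        bindings = next_bindings
    return bindings

# Who works at a lab that studies genomics?
results = match([("?who", "works_at", "?lab"),
                 ("?lab", "studies", "genomics")], GRAPH)
print(sorted(env["?who"] for env in results))
```

    The join across the two patterns (through the shared variable `?lab`) is what lets one query unify information drawn from multiple sources merged into the same graph.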

  20. A Defense of Semantic Minimalism

    ERIC Educational Resources Information Center

    Kim, Su

    2012-01-01

    Semantic Minimalism is a position about the semantic content of declarative sentences, i.e., the content that is determined entirely by syntax. It is defined by the following two points: "Point 1": The semantic content is a complete/truth-conditional proposition. "Point 2": The semantic content is useful to a theory of…

  1. Space network interoperability panel (SNIP) study

    NASA Astrophysics Data System (ADS)

    Fahnestock, Dale; Yamada, Shigeo; Hara, Hideo; Lenhart, Klaus; Ryan, Thomas

    1992-03-01

    The history and status of the SNIP study conducted by NASA, ESA, and NASDA are reviewed. Particular attention is given to data relay systems development plans; agency load situations; cross support; the top managers agreement about implementation of S-band interoperability and accelerating the K-alpha band high data rate exploration; testing of actual systems; NASA interim architecture for an S-band era system to make NASA spacecraft and TDRSS/TDRS-II compatible with ESA and NASDA systems; tropical rainfall measuring mission support; S-band cross support; and K-alpha band status.

  2. Interoperable PKI Data Distribution in Computational Grids

    SciTech Connect

    Pala, Massimiliano; Cholia, Shreyas; Rea, Scott A.; Smith, Sean W.

    2008-07-25

    One of the most successful working examples of virtual organizations, computational grids need authentication mechanisms that inter-operate across domain boundaries. Public Key Infrastructures (PKIs) provide sufficient flexibility to allow resource managers to securely grant access to their systems in such distributed environments. However, as PKIs grow and services are added to enhance both security and usability, users and applications must struggle to discover available resources, particularly when the Certification Authority (CA) is alien to the relying party. This article presents how to overcome these limitations of the current grid authentication model by integrating the PKI Resource Query Protocol (PRQP) into the Grid Security Infrastructure (GSI).

  3. The importance of architectures for interoperability.

    PubMed

    Blobel, Bernd; Oemig, Frank

    2015-01-01

    The paradigm changes health systems are faced with result in highly complex and distributed systems requiring flexibility, autonomy, but first of all advanced interoperability. In that context, understanding the architecture of the system to be supported, as well as the process to meet the intended business objectives, is crucial. Unfortunately, there is a lot of confusion around the term architecture, which doesn't facilitate the integration of systems. Using a reference architecture model and framework, relevant existing architectural approaches are analyzed, compared, critically discussed, and harmonized. PMID:25980847

  4. Interoperability Standards for Medical Simulation Systems

    NASA Technical Reports Server (NTRS)

    Tolk, Andreas; Diallo, Saikou Y.; Padilla, Jose J.

    2012-01-01

    The Modeling and Simulation Community successfully developed and applied interoperability standards like the Distributed Interactive Simulation (DIS) protocol (IEEE 1278) and the High Level Architecture (HLA) (IEEE 1516). These standards were applied for world-wide distributed simulation events for several years. However, this paper shows that some of the assumptions and constraints underlying the philosophy of these current standards are not valid for Medical Simulation Systems. This paper describes the standards, the philosophy and the limits for medical applications and recommends necessary extensions of the standards to support medical simulation.

  5. Master data directories and Catalog Interoperability

    NASA Technical Reports Server (NTRS)

    Thieman, J. R.

    1990-01-01

    While the 'Catalog Interoperability' (CI) project began as a NASA effort to facilitate identification, location, and access to data of interest to space and earth sciences researchers, it now has a membership encompassing numerous U.S. and international agencies as well as academic institutions. CI is creating a global network of interconnected directory, catalog, and inventory systems. Its directories contain brief summary information about data sets, and can either furnish automated links to other information systems yielding greater detail on matters of interest or indicate to whom requests for additional information can go.

  6. OTF CCSDS SM and C Interoperability Prototype

    NASA Technical Reports Server (NTRS)

    Reynolds, Walter F.; Lucord, Steven A.; Stevens, John E.

    2008-01-01

    A presentation is provided to demonstrate the interoperability between two space flight Mission Operation Centers (MOCs) and to emulate telemetry, actions, and alert flows between the two centers. One framework uses a COTS C3I system that uses CORBA to interface to the local OTF data network. The second framework relies on current Houston MCC frameworks and ad hoc clients. Messaging relies on SM and C MAL, Core and Common Service formats, while the transport layer uses AMS. A centralized SM and C Registry uses HTTP/XML for transport/encoding. The project's status and progress are reviewed.

  7. A health analytics semantic ETL service for obesity surveillance.

    PubMed

    Poulymenopoulou, M; Papakonstantinou, D; Malamateniou, F; Vassilacopoulos, G

    2015-01-01

    The increasingly large amount of data produced in healthcare (e.g. collected through health information systems such as electronic medical records - EMRs, or through novel data sources such as personal health records - PHRs, social media, and web resources) enables the creation of detailed records about people's health, sentiments and activities (e.g. physical activity, diet, sleep quality) that can be used in the public health area, among others. However, despite the transformative potential of big data in public health surveillance, there are several challenges in integrating big data. In this paper, the interoperability challenge is tackled and a semantic Extract Transform Load (ETL) service is proposed that seeks to semantically annotate big data to yield valuable data for analysis. This service is considered as part of a health analytics engine on the cloud that interacts with existing healthcare information exchange networks, like the Integrating the Healthcare Enterprise (IHE), PHRs, sensors, mobile applications, and other web resources to retrieve patient health, behavioral and daily activity data. The semantic ETL service aims at semantically integrating big data for use by analytic mechanisms. An illustrative implementation of the service on big data which is potentially relevant to human obesity enables using appropriate analytic techniques (e.g. machine learning, text mining) that are expected to assist in identifying patterns and contributing factors (e.g. genetic background, social, environmental) for this social phenomenon and, hence, drive health policy changes and promote healthy behaviors where residents live, work, learn, shop and play. PMID:25991273
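
    The semantic annotation step of such an ETL pipeline can be sketched as follows; the vocabulary, concept identifiers, and record fields are invented for illustration and do not reflect the service's actual design.

```python
# Hedged sketch of the "transform" stage of a semantic ETL pipeline:
# incoming activity records are tagged with concept identifiers from a
# tiny lookup vocabulary before loading. All terms and codes below are
# hypothetical.

ONTOLOGY = {                     # surface term -> hypothetical concept ID
    "jogging": "obo:EXAMPLE_0001",
    "cycling": "obo:EXAMPLE_0002",
    "salad": "obo:EXAMPLE_0101",
}

def extract(raw_rows):
    """Copy raw records so the source data is left untouched."""
    return [dict(row) for row in raw_rows]

def transform(rows):
    """Annotate each row with a concept ID where a term is recognized."""
    for row in rows:
        row["concept"] = ONTOLOGY.get(row["activity"])
    return rows

def load(rows, store):
    """Load only semantically annotated rows into the analytics store."""
    store.extend(r for r in rows if r["concept"] is not None)
    return store

store = []
raw = [{"activity": "jogging", "minutes": 30},
       {"activity": "unknown-entry", "minutes": 5}]
load(transform(extract(raw)), store)
print(store)
```

    Once annotated with shared concept identifiers, records from heterogeneous sources (EMRs, PHRs, sensors) can be joined by concept rather than by free-text term, which is the interoperability gain the abstract describes.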

  8. Heterogeneity and Context in Semantic-Web-Enabled HCLS Systems

    NASA Astrophysics Data System (ADS)

    Zimmermann, Antoine; Sahay, Ratnesh; Fox, Ronan; Polleres, Axel

    The need for semantics preserving integration of complex data has been widely recognized in the healthcare domain. While standards such as Health Level Seven (HL7) have been developed in this direction, they have mostly been applied in limited, controlled environments, still being used incoherently across countries, organizations, or hospitals. In a more mobile and global society, data and knowledge are going to be commonly exchanged between various systems at Web scale. Specialists in this domain have increasingly argued in favor of using Semantic Web technologies for modeling healthcare data in a well-formalized way. This paper provides a reality check on how far current Semantic Web standards can tackle interoperability issues arising in such systems, driven by the modeling of concrete use cases on exchanging clinical data and practices. Recognizing the insufficiency of standard OWL to model our scenario, we survey theoretical approaches to extend OWL by modularity and context towards handling heterogeneity in Semantic-Web-enabled health care and life sciences (HCLS) systems. We come to the conclusion that none of these approaches addresses all of our use case heterogeneity aspects in its entirety. We finally sketch paths on how better approaches could be devised by combining several existing techniques.

  9. Electronic Healthcare Record and clinical research in cardiovascular radiology. HL7 CDA and CDISC ODM interoperability.

    PubMed

    El Fadly, A; Daniel, C; Bousquet, C; Dart, T; Lastic, P-Y; Degoulet, P

    2007-01-01

    Integrating clinical research data entry with patient care data entry is a challenging issue. At the G. Pompidou European Hospital (HEGP), cardiovascular radiology reports are captured twice, first in the Electronic Health Record (EHR) and then in a national clinical research server. Informatics standards are different for the EHR (HL7 CDA) and clinical research (CDISC ODM). The objective of this work is to feed both the EHR and a Clinical Research Data Management System (CDMS) from a single multipurpose form. We adopted and compared two approaches. The first approach consists of implementing the single "care-research" form within the EHR and aligning the XML structures of the HL7 CDA document and the CDISC ODM message to export relevant data from the EHR to the CDMS. The second approach consists of displaying a single "care-research" XForms form within the EHR and generating both an HL7 CDA document and a CDISC ODM message to feed both the EHR and the CDMS. The solution based on XForms avoids overloading both the EHR and the CDMS with irrelevant information. Beyond syntactic interoperability, a perspective is to address the issue of semantic interoperability between both domains. PMID:18693829
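
    The second approach, one captured form feeding two target formats, can be sketched as follows; the element and attribute names are simplified stand-ins, not the real HL7 CDA or CDISC ODM schemas.

```python
# Sketch of the "single multipurpose form" idea: one captured data set is
# serialized both as a CDA-like clinical document and an ODM-like
# research message. Element names are simplified stand-ins, not the real
# HL7 CDA or CDISC ODM schemas.
import xml.etree.ElementTree as ET

form_data = {"patient_id": "P123", "finding": "stenosis", "grade": "2"}

def to_cda_like(data):
    doc = ET.Element("ClinicalDocument")          # simplified, not real CDA
    for key, value in data.items():
        ET.SubElement(doc, "observation", code=key, value=value)
    return ET.tostring(doc, encoding="unicode")

def to_odm_like(data):
    odm = ET.Element("ODM")                       # simplified, not real ODM
    form = ET.SubElement(odm, "FormData", FormOID="CARE-RESEARCH")
    for key, value in data.items():
        ET.SubElement(form, "ItemData", ItemOID=key, Value=value)
    return ET.tostring(odm, encoding="unicode")

print(to_cda_like(form_data))
print(to_odm_like(form_data))
```

    Generating both serializations from the same in-memory record is what removes the double data entry described in the abstract; the remaining hard problem, as the authors note, is agreeing on the meaning of the shared items across the two domains.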

  10. 78 FR 46582 - Federal Advisory Committee Act; Communications Security, Reliability, and Interoperability Council

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-08-01

    ... COMMISSION Federal Advisory Committee Act; Communications Security, Reliability, and Interoperability Council... Communications Commission's (FCC or Commission) Communications Security, Reliability, and Interoperability... to ensure the security, reliability, and interoperability of communications systems. On March...

  11. Intelligent Discovery for Learning Objects Using Semantic Web Technologies

    ERIC Educational Resources Information Center

    Hsu, I-Ching

    2012-01-01

    The concept of learning objects has been applied in the e-learning field to promote the accessibility, reusability, and interoperability of learning content. Learning Object Metadata (LOM) was developed to achieve these goals by describing learning objects in order to provide meaningful metadata. Unfortunately, the conventional LOM lacks the…

  12. A framework for semantic reconciliation of disparate earth observation thematic data

    NASA Astrophysics Data System (ADS)

    Durbha, S. S.; King, R. L.; Shah, V. P.; Younan, N. H.

    2009-04-01

    There is a growing demand for digital databases of topographic and thematic information for a multitude of applications in environmental management, and also in data integration and efficient updating of other spatially oriented data. These thematic data sets are highly heterogeneous in syntax, structure and semantics, as they are produced and provided by a variety of agencies having different definitions, standards and applications of the data. In this paper, we focus on the semantic heterogeneity in thematic information sources, as it has been widely recognized that semantic conflicts are responsible for the most serious data heterogeneity problems hindering efficient interoperability between heterogeneous information sources. In particular, we focus on the semantic heterogeneities present in the land cover classification schemes corresponding to the global land cover characterization data. We propose a framework, Semantics Enabled Thematic data Integration (SETI), that describes in depth the methodology involved in the reconciliation of such semantic conflicts by adopting the emerging semantic web technologies. Ontologies were developed for the classification schemes, and a shared-ontology approach for integrating the application-level ontologies is described. We employ description logics (DL)-based reasoning on the terminological knowledge base developed for the land cover characterization, which enables querying and retrieval that goes beyond keyword-based searches.
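
    The reconciliation idea can be sketched as a toy subsumption query over a shared ontology; all class and concept names are hypothetical, and a real system would use a DL reasoner rather than this parent-chain walk.

```python
# Toy rendition of the shared-ontology approach: two land-cover legends
# map their local classes to a small shared ontology, and a query for a
# broad concept retrieves classes from both schemes via subsumption
# rather than keyword match. All names are hypothetical.

SHARED = {                        # concept -> parent in the shared ontology
    "land-cover": None,
    "forest": "land-cover",
    "evergreen-forest": "forest",
    "deciduous-forest": "forest",
    "cropland": "land-cover",
}

MAPPINGS = {                      # (scheme, local class) -> shared concept
    ("schemeA", "Evergreen Needleleaf"): "evergreen-forest",
    ("schemeA", "Croplands"): "cropland",
    ("schemeB", "Broadleaf Deciduous Woods"): "deciduous-forest",
}

def subsumed_by(concept, ancestor):
    """Walk the parent chain to test concept <= ancestor."""
    while concept is not None:
        if concept == ancestor:
            return True
        concept = SHARED[concept]
    return False

def retrieve(query_concept):
    """All local classes, from any scheme, whose shared concept is
    subsumed by the query concept."""
    return sorted(local for (scheme, local), concept in MAPPINGS.items()
                  if subsumed_by(concept, query_concept))

print(retrieve("forest"))
```

    Note that a query for "forest" finds classes from both schemes even though neither class name contains the word, which is the kind of beyond-keyword retrieval the abstract attributes to DL reasoning.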

  13. Individualizing cancer care with interoperable information systems.

    PubMed

    McCormick, Kathleen A

    2009-01-01

    There are three levels of interoperable informatics that are co-occurring in the United States to link data to provide more comprehensive care to patients. One is the National Health Information Network (NHIN) that is establishing use case scenarios and standards for interoperability for patients with multiple conditions. The second is the National Cancer Institute's project that supports the enterprise work called the Cancer Bioinformatics Grid (caBIG) in linking clinical care with bioinformatics, tissue repositories, and imaging for patients with cancer. The third is in the area of translating the discoveries of biology to bedside care through the National Institutes of Health (NIH) translational research efforts to get these new biomedical and genomic discoveries in practice in multiple healthcare delivery environments. These developments are becoming global networks in the diagnosis and cure of cancer as the primary outcome. This paper describes the national efforts and the global connection to Europe through the caBIG program. The European program that is beginning to link to cancer research internationally is the National Cancer Research Institute (NCRI) in the United Kingdom. They are developing the NCRI Oncology Information Exchange (ONIX) to provide the cancer research community with the ability to share information. PMID:19592871

  14. Towards E-Society Policy Interoperability

    NASA Astrophysics Data System (ADS)

    Iannella, Renato

    The move towards the Policy-Oriented Web is destined to provide support for policy expression and management in the core web layers. One of the most promising areas that can drive this new technology adoption is e-Society communities. With so much user-generated content being shared by these social networks, there is the real danger that the implicit sharing rules that communities have developed over time will be lost in translation in the new digital communities. This will lead to a corresponding loss in confidence in e-Society sites. The Policy-Oriented Web attempts to turn the implicit into the explicit with a common framework for policy language interoperability and awareness. This paper reports on the policy driving factors from the Social Networks experiences using real-world use cases and scenarios. In particular, the key functions of policy-awareness - for privacy, rights, and identity - will be the driving force that enables the e-Society to appreciate new interoperable policy regimes.

  15. Food product tracing technology capabilities and interoperability.

    PubMed

    Bhatt, Tejas; Zhang, Jianrong Janet

    2013-12-01

    Despite the best efforts of food safety and food defense professionals, contaminated food continues to enter the food supply. It is imperative that contaminated food be removed from the supply chain as quickly as possible to protect public health and stabilize markets. To solve this problem, scores of technology companies purport to have the most effective, economical product tracing system. This study sought to compare and contrast the effectiveness of these systems at analyzing product tracing information to identify the contaminated ingredient and likely source, as well as distribution of the product. It also determined if these systems can work together to better secure the food supply (their interoperability). Institute of Food Technologists (IFT) hypothesized that when technology providers are given a full set of supply-chain data, even for a multi-ingredient product, their systems will generally be able to trace a contaminated product forward and backward through the supply chain. However, when provided with only a portion of supply-chain data, even for a product with a straightforward supply chain, it was expected that interoperability of the systems will be lacking and that there will be difficulty collaborating to identify sources and/or recipients of potentially contaminated product. IFT provided supply-chain data for one complex product to 9 product tracing technology providers, and then compared and contrasted their effectiveness at analyzing product tracing information to identify the contaminated ingredient and likely source, as well as distribution of the product. A vertically integrated foodservice restaurant agreed to work with IFT to secure data from its supply chain for both a multi-ingredient and a simpler product. Potential multi-ingredient products considered included canned tuna, supreme pizza, and beef tacos. IFT ensured that all supply-chain data collected did not include any proprietary information or information that would otherwise
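
    Forward and backward tracing over shared supply-chain data can be sketched as graph reachability; the facilities and lot names below are invented for illustration and are not from IFT's study data.

```python
# Sketch of tracing a contaminated lot forward and backward through a
# supply chain, modeled as a directed graph of shipments. Facility and
# lot names are invented for illustration.

SHIPMENTS = [                     # (from_facility, lot, to_facility)
    ("farm", "tomato-lot-7", "processor"),
    ("processor", "salsa-batch-3", "distributor"),
    ("distributor", "salsa-batch-3", "store-1"),
    ("distributor", "salsa-batch-3", "store-2"),
]

def trace_forward(facility):
    """Every facility reachable downstream of the given one."""
    reached, frontier = set(), {facility}
    while frontier:
        nxt = {t for (s, _, t) in SHIPMENTS if s in frontier} - reached
        reached |= nxt
        frontier = nxt
    return reached

def trace_backward(facility):
    """Every facility upstream of the given one (possible sources)."""
    reached, frontier = set(), {facility}
    while frontier:
        nxt = {s for (s, _, t) in SHIPMENTS if t in frontier} - reached
        reached |= nxt
        frontier = nxt
    return reached

print(sorted(trace_forward("farm")))      # distribution of the product
print(sorted(trace_backward("store-1")))  # candidate sources
```

    The interoperability problem in the study corresponds to each technology provider holding only a subset of SHIPMENTS: no single party can complete either traversal without exchanging records in a common format.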

  16. 47 CFR 27.75 - Basic interoperability requirement.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... 47 Telecommunication 2 2014-10-01 2014-10-01 false Basic interoperability requirement. 27.75 Section 27.75 Telecommunication FEDERAL COMMUNICATIONS COMMISSION (CONTINUED) COMMON CARRIER SERVICES MISCELLANEOUS WIRELESS COMMUNICATIONS SERVICES Technical Standards § 27.75 Basic interoperability requirement. Link to an amendment published at 79...

  17. Interoperability of Demand Response Resources Demonstration in NY

    SciTech Connect

    Wellington, Andre

    2014-03-31

    The Interoperability of Demand Response Resources Demonstration in NY (Interoperability Project) was awarded to Con Edison in 2009. The objective of the project was to develop and demonstrate methodologies to enhance the ability of customer sited Demand Response resources to integrate more effectively with electric delivery companies and regional transmission organizations.

  18. 47 CFR 90.525 - Administration of interoperability channels.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... Frequencies in the 763-775 and 793-805 MHz Bands § 90.525 Administration of interoperability channels. (a) States are responsible for administration of the Interoperability channels in the 769-775 MHz and 799-805... in the 769-775 MHz and 799-805 MHz frequency bands without a specific authorization from...

  19. NASA's Geospatial Interoperability Office (GIO) Program

    NASA Technical Reports Server (NTRS)

    Weir, Patricia

    2004-01-01

    NASA produces vast amounts of information about the Earth from satellites, supercomputer models, and other sources. These data are most useful when made easily accessible to NASA researchers and scientists, to NASA's partner Federal Agencies, and to society as a whole. A NASA goal is to apply its data for knowledge gain, decision support, and understanding of Earth and other planetary systems. The NASA Earth Science Enterprise (ESE) Geospatial Interoperability Office (GIO) Program leads the development, promotion and implementation of information technology standards that accelerate and expand the delivery of NASA's Earth system science research through integrated systems solutions. Our overarching goal is to make it easy for decision-makers, scientists and citizens to use NASA's science information. NASA's Federal partners currently participate with NASA and one another in the development and implementation of geospatial standards to ensure the most efficient and effective access to one another's data. Through the GIO, NASA participates with its Federal partners in implementing interoperability standards in support of E-Gov and the associated President's Management Agenda initiatives by collaborating on standards development. Through partnerships with government, private industry, education and communities, the GIO works towards enhancing the ESE Applications Division in the area of National Applications and decision support systems. The GIO provides geospatial standards leadership within NASA, represents NASA on the Federal Geographic Data Committee (FGDC) Coordination Working Group, chairs the FGDC's Geospatial Applications and Interoperability Working Group (GAI), and supports development and implementation efforts such as Earth Science Gateway (ESG), Space Time Tool Kit and Web Map Services (WMS) Global Mosaic.
The GIO supports NASA in the collection and dissemination of geospatial interoperability standards needs and progress throughout the agency including

  20. Trusting Crowdsourced Geospatial Semantics

    NASA Astrophysics Data System (ADS)

    Goodhue, P.; McNair, H.; Reitsma, F.

    2015-08-01

    The degree of trust one can place in information is one of the foremost limitations of crowdsourced geospatial information. As with the development of web technologies, the increased prevalence of semantics associated with geospatial information has increased accessibility and functionality. Semantics also provides an opportunity to extend indicators of trust for crowdsourced geospatial information, which have largely focused on the spatio-temporal and social aspects of that information. Comparing a feature's intrinsic and extrinsic properties to associated ontologies provides a means of semantically assessing the trustworthiness of crowdsourced geospatial information. The application of this approach to unconstrained semantic submissions then allows for a detailed assessment of the trust of these features whilst maintaining the descriptive thoroughness this mode of information submission affords. The resulting trust rating then becomes an attribute of the feature, providing not only an indication of the trustworthiness of a specific feature but also a value that can be aggregated across multiple features to illustrate the overall trustworthiness of a dataset.
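    The approach described can be sketched minimally as follows. The feature schema, ontology classes, and expected properties below are invented for illustration (the record does not publish code): a feature's attributes are compared against the properties its ontology class expects, the match ratio is stored as a trust attribute on the feature, and per-feature scores are averaged into a dataset-level rating.

```python
# Illustrative sketch of semantic trust scoring for crowdsourced
# geospatial features. The ontology and feature schema are assumptions,
# not taken from the paper.

ONTOLOGY = {
    # class -> properties a well-formed instance is expected to carry
    "Restaurant": {"name", "cuisine", "opening_hours", "address"},
    "Park": {"name", "area", "access"},
}

def feature_trust(feature):
    """Fraction of expected properties actually present on the feature."""
    expected = ONTOLOGY.get(feature["class"], set())
    if not expected:
        return 0.0  # unknown class: no semantic basis for trust
    present = expected & set(feature["properties"])
    return len(present) / len(expected)

def dataset_trust(features):
    """Aggregate per-feature trust into an overall dataset rating."""
    scores = [feature_trust(f) for f in features]
    return sum(scores) / len(scores) if scores else 0.0

features = [
    {"class": "Restaurant",
     "properties": {"name": "Cafe X", "cuisine": "thai",
                    "opening_hours": "9-17", "address": "1 Main St"}},
    {"class": "Park", "properties": {"name": "Green Park"}},
]
for f in features:
    f["trust"] = feature_trust(f)  # trust rating stored as an attribute

print(dataset_trust(features))
```

    A richer implementation would also check property values against ontology constraints (extrinsic properties), not just property presence.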

  1. Algebraic Semantics for Narrative

    ERIC Educational Resources Information Center

    Kahn, E.

    1974-01-01

    This paper uses discussion of Edmund Spenser's "The Faerie Queene" to present a theoretical framework for explaining the semantics of narrative discourse. The algebraic theory of finite automata is used. (CK)

  2. Development of a Ground Water Data Portal for Interoperable Data Exchange within the U.S. National Ground Water Monitoring Network and Beyond

    NASA Astrophysics Data System (ADS)

    Booth, N. L.; Brodaric, B.; Lucido, J. M.; Kuo, I.; Boisvert, E.; Cunningham, W. L.

    2011-12-01

    using the OGC Sensor Observation Service (SOS) standard. Ground Water Markup Language (GWML) encodes well log, lithology and construction information and is exchanged using the OGC Web Feature Service (WFS) standard. Within the NGWMN Data Portal, data exchange between distributed data provider repositories is achieved through the use of these web services and a central mediation hub, which performs both format (syntactic) and nomenclature (semantic) mediation, conforming heterogeneous inputs into common standards-based outputs. Through these common standards, interoperability between the U.S. NGWMN and Canada's Groundwater Information Network (GIN) is achieved, advancing a ground water virtual observatory across North America.
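    The mediation hub's two roles can be sketched under stated assumptions (the field names and vocabulary terms below are invented; the abstract does not give NGWMN's actual mappings): syntactic mediation renames provider-specific fields to a common schema, and semantic mediation maps provider-specific nomenclature to a common vocabulary.

```python
# Sketch of the two mediation steps a central hub performs:
# syntactic (provider field names -> common schema) and semantic
# (provider nomenclature -> common vocabulary). All names are
# illustrative assumptions, not NGWMN's.

SYNTACTIC_MAP = {  # provider field -> common field
    "wlev_ft": "water_level",
    "lith_desc": "lithology",
}
SEMANTIC_MAP = {   # provider term -> common vocabulary term
    "sand+gravel": "sand and gravel",
    "ss": "sandstone",
}

def mediate(record):
    """Conform one provider record to the common output schema."""
    out = {}
    for field, value in record.items():
        common_field = SYNTACTIC_MAP.get(field, field)
        out[common_field] = SEMANTIC_MAP.get(value, value)
    return out

print(mediate({"wlev_ft": 12.3, "lith_desc": "ss"}))
```

    Because every provider is conformed to the same common outputs, a second network (such as Canada's GIN) only needs mappings to that common layer, not to every individual repository.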

  3. Are Meaningful Use Stage 2 certified EHRs ready for interoperability? Findings from the SMART C-CDA Collaborative

    PubMed Central

    D'Amore, John D; Mandel, Joshua C; Kreda, David A; Swain, Ashley; Koromia, George A; Sundareswaran, Sumesh; Alschuler, Liora; Dolin, Robert H; Mandl, Kenneth D; Kohane, Isaac S; Ramoni, Rachel B

    2014-01-01

    Background and objective: Upgrades to electronic health record (EHR) systems scheduled to be introduced in the USA in 2014 will advance document interoperability between care providers. Specifically, the second stage of the federal incentive program for EHR adoption, known as Meaningful Use, requires use of the Consolidated Clinical Document Architecture (C-CDA) for document exchange. In an effort to examine and improve C-CDA based exchange, the SMART (Substitutable Medical Applications and Reusable Technology) C-CDA Collaborative brought together a group of certified EHR and other health information technology vendors. Materials and methods: We examined the machine-readable content of collected samples for semantic correctness and consistency. This included parsing with the open-source BlueButton.js tool, testing with a validator used in EHR certification, scoring with an automated open-source tool, and manual inspection. We also conducted group and individual review sessions with participating vendors to understand their interpretation of C-CDA specifications and requirements. Results: We contacted 107 health information technology organizations and collected 91 C-CDA sample documents from 21 distinct technologies. Manual and automated document inspection led to 615 observations of errors and data expression variation across represented technologies. Based upon our analysis and vendor discussions, we identified 11 specific areas that represent relevant barriers to the interoperability of C-CDA documents. Conclusions: We identified errors and permissible heterogeneity in C-CDA documents that will limit semantic interoperability. Our findings also point to several practical opportunities to improve C-CDA document quality and exchange in the coming years. PMID:24970839

  4. Secure Interoperable Open Smart Grid Demonstration Project

    SciTech Connect

    Magee, Thoman

    2014-12-31

    The Consolidated Edison, Inc., of New York (Con Edison) Secure Interoperable Open Smart Grid Demonstration Project (SGDP), sponsored by the United States (US) Department of Energy (DOE), demonstrated that the reliability, efficiency, and flexibility of the grid can be improved through a combination of enhanced monitoring and control capabilities using systems and resources that interoperate within a secure services framework. The project demonstrated the capability to shift, balance, and reduce load where and when needed in response to system contingencies or emergencies by leveraging controllable field assets. The range of field assets includes curtailable customer loads, distributed generation (DG), battery storage, electric vehicle (EV) charging stations, building management systems (BMS), home area networks (HANs), high-voltage monitoring, and advanced metering infrastructure (AMI). The SGDP enables the seamless integration and control of these field assets through a common, cyber-secure, interoperable control platform, which integrates a number of existing legacy control and data systems, as well as new smart grid (SG) systems and applications. By integrating advanced technologies for monitoring and control, the SGDP helps target and reduce peak load growth, improves the reliability and efficiency of Con Edison’s grid, and increases the ability to accommodate the growing use of distributed resources. Con Edison is dedicated to lowering costs, improving reliability and customer service, and reducing its impact on the environment for its customers. These objectives also align with the policy objectives of New York State as a whole. To help meet these objectives, Con Edison’s long-term vision for the distribution grid relies on the successful integration and control of a growing penetration of distributed resources, including demand response (DR) resources, battery storage units, and DG. For example, Con Edison is expecting significant long-term growth of DG

  5. NASA and Industry Benefits of ACTS High Speed Network Interoperability Experiments

    NASA Technical Reports Server (NTRS)

    Zernic, M. J.; Beering, D. R.; Brooks, D. E.

    2000-01-01

    This paper provides synopses of the design, implementation, and results of key high data rate communications experiments utilizing the technologies of NASA's Advanced Communications Technology Satellite (ACTS). Specifically, the network protocol and interoperability performance aspects will be highlighted. The objectives of these key experiments will be discussed in their context relevant to NASA missions, as well as to the broader communications industry. Discussion of the experiment implementation will highlight the technical aspects of hybrid network connectivity, a variety of high-speed interoperability architectures, a variety of network node platforms, protocol layers, internet-based applications, and new work focused on distinguishing between link errors and congestion. In addition, this paper describes the impact of leveraging government-industry partnerships to achieve technical progress and forge synergistic relationships. These relationships will be the key to success as NASA seeks to combine commercially available technology with its own internal technology developments to realize more robust and cost effective communications for space operations.

  6. Coalition readiness management system preliminary interoperability experiment (CReaMS PIE)

    NASA Astrophysics Data System (ADS)

    Clark, Peter; Ryan, Peter; Zalcman, Lucien; Robbie, Andrew

    2003-09-01

    The United States Navy (USN) has initiated the Coalition Readiness Management System (CReaMS) Initiative to enhance coalition warfighting readiness through advancing development of a team interoperability training and combined mission rehearsal capability. It integrates evolving cognitive team learning principles and processes with advanced technology innovations to produce an effective and efficient team learning environment. The JOint Air Navy Networking Environment (JOANNE) forms the Australian component of CReaMS. The ultimate goal is to link Australian Defence simulation systems with the USN Battle Force Tactical Training (BFTT) system to demonstrate and achieve coalition level warfare training in a synthetic battlespace. This paper discusses the initial Preliminary Interoperability Experiment (PIE) involving USN and Australian Defence establishments.

  7. Interoperability between biomedical ontologies through relation expansion, upper-level ontologies and automatic reasoning.

    PubMed

    Hoehndorf, Robert; Dumontier, Michel; Oellrich, Anika; Rebholz-Schuhmann, Dietrich; Schofield, Paul N; Gkoutos, Georgios V

    2011-01-01

    Researchers design ontologies as a means to accurately annotate and integrate experimental data across heterogeneous and disparate data- and knowledge bases. Formal ontologies make the semantics of terms and relations explicit such that automated reasoning can be used to verify the consistency of knowledge. However, many biomedical ontologies do not sufficiently formalize the semantics of their relations and are therefore limited with respect to automated reasoning for large scale data integration and knowledge discovery. We describe a method to improve automated reasoning over biomedical ontologies and identify several thousand contradictory class definitions. Our approach aligns terms in biomedical ontologies with foundational classes in a top-level ontology and formalizes composite relations as class expressions. We describe the semi-automated repair of contradictions and demonstrate expressive queries over interoperable ontologies. Our work forms an important cornerstone for data integration, automatic inference and knowledge discovery based on formal representations of knowledge. Our results and analysis software are available at http://bioonto.de/pmwiki.php/Main/ReasonableOntologies. PMID:21789201

  8. Semantic Services for Wikipedia

    NASA Astrophysics Data System (ADS)

    Wang, Haofen; Penin, Thomas; Fu, Linyun; Liu, Qiaoling; Xue, Guirong; Yu, Yong

    Wikipedia, a killer application in Web 2.0, has embraced the power of collaborative editing to harness collective intelligence. It features many attractive characteristics, like an entity-based link graph, abundant categorization and a semi-structured layout, and can serve as an ideal data source to extract high quality and well-structured data. In this chapter, we first propose several solutions to extract knowledge from Wikipedia. We consider not only information from the relational summaries of articles (infoboxes) but also semi-automatically extract information from the article text using the structured content available. Due to differences with information extraction from the Web, it is necessary to tackle new problems, like the lack of redundancy in Wikipedia, which is dealt with by extending traditional machine learning algorithms to work with few labeled examples. Furthermore, we also exploit the widespread categories as a complementary way to discover additional knowledge. Benefiting from both structured and textual information, we additionally provide a suggestion service for Wikipedia authoring. With the aim to facilitate semantic reuse, our proposal provides users with facilities such as link, category and infobox content suggestions. The proposed enhancements can be applied to attract more contributors and lighten the burden of professional editors. Finally, we developed an enhanced search system, which can ease the process of exploiting Wikipedia. To provide a user-friendly interface, it extends the faceted search interface with relation navigation and lets the user easily express complex information needs in an interactive way. In order to achieve efficient query answering, it extends scalable IR engines to index and search both the textual and structured information with an integrated ranking support.

  9. SHARP/PRONGHORN Interoperability: Mesh Generation

    SciTech Connect

    Avery Bingham; Javier Ortensi

    2012-09-01

    Progress toward collaboration between the SHARP and MOOSE computational frameworks has been demonstrated through sharing of mesh generation and ensuring mesh compatibility of both tools with MeshKit. MeshKit was used to build a three-dimensional, full-core very high temperature reactor (VHTR) reactor geometry with 120-degree symmetry, which was used to solve a neutron diffusion critical eigenvalue problem in PRONGHORN. PRONGHORN is an application of MOOSE that is capable of solving coupled neutron diffusion, heat conduction, and homogenized flow problems. The results were compared to a solution found on a 120-degree, reflected, three-dimensional VHTR mesh geometry generated by PRONGHORN. The ability to exchange compatible mesh geometries between the two codes is instrumental for future collaboration and interoperability. The results were found to be in good agreement between the two meshes, thus demonstrating the compatibility of the SHARP and MOOSE frameworks. This outcome makes future collaboration possible.

  10. Interoperability in encoded quantum repeater networks

    NASA Astrophysics Data System (ADS)

    Nagayama, Shota; Choi, Byung-Soo; Devitt, Simon; Suzuki, Shigeya; Van Meter, Rodney

    2016-04-01

    The future of quantum repeater networking will require interoperability between various error-correcting codes. A few specific code conversions and even a generalized method are known; however, no detailed analysis of these techniques in the context of quantum networking has been performed. In this paper we analyze a generalized procedure to create Bell pairs encoded heterogeneously between two separate codes used often in error-corrected quantum repeater network designs. We begin with a physical Bell pair and then encode each qubit in a different error-correcting code, using entanglement purification to increase the fidelity. We investigate three separate protocols for preparing the purified encoded Bell pair. We calculate the error probability of those schemes between the Steane [[7,1,3

  11. Flexible solution for interoperable cloud healthcare systems.

    PubMed

    Vida, Mihaela Marcella; Lupşe, Oana Sorina; Stoicu-Tivadar, Lăcrămioara; Bernad, Elena

    2012-01-01

    It is extremely important for the healthcare domain to have standardized communication because it will improve the quality of information and, in the end, the resulting benefits will improve the quality of patients' lives. The standards proposed to be used are HL7 CDA and CCD. For better access to the medical data, a solution based on cloud computing (CC) is investigated. CC is a technology that supports flexibility, seamless care, and reduced costs of the medical act. To ensure interoperability between healthcare information systems, a solution creating a Web Custom Control is presented. The control shows the database tables and fields used to configure the two standards. This control will facilitate the work of the medical staff and hospital administrators, because they can configure the local system easily and prepare it for communication with other systems. The resulting information will have a higher quality and will provide knowledge that will support better patient management and diagnosis. PMID:22874196

  12. Managing interoperability and complexity in health systems.

    PubMed

    Bouamrane, M-M; Tao, C; Sarkar, I N

    2015-01-01

    In recent years, we have witnessed substantial progress in the use of clinical informatics systems to support clinicians during episodes of care, manage specialised domain knowledge, perform complex clinical data analysis and improve the management of health organisations' resources. However, the vision of fully integrated health information eco-systems, which provide relevant information and useful knowledge at the point-of-care, remains elusive. This journal Focus Theme reviews some of the enduring challenges of interoperability and complexity in clinical informatics systems. Furthermore, a range of approaches are proposed in order to address, harness and resolve some of the many remaining issues towards a greater integration of health information systems and extraction of useful or new knowledge from heterogeneous electronic data repositories. PMID:25579862

  13. Issues in PCS interoperability and Internetworking

    NASA Technical Reports Server (NTRS)

    Dean, Richard A.; Estabrook, Polly

    1995-01-01

    This paper is an expansion of an earlier paper on Satellite/Terrestrial PCS which addressed issues for interoperability that included Networks, Services, Voice Coders and Mobility/Security. This paper focuses on the narrower topic of Network Reference Models and associated interfaces and protocols. The network reference models are addressed from the perspective of the User, the Cellular Carrier, the PSN Carrier, and the Radio Vendor. Each perspective is presented in the way these systems have evolved. The TIA TR46/GSM reference model will be reviewed. Variations in the use of this model that are prevalent in the industry will be discussed. These are the North American Cellular networks, the GSM networks, and the North American Carriers/Bellcore perspective. The paper concludes with the presentation of issues that develop from looking at merging satellite service into a world of many different networks.

  14. Biodiversity information platforms: From standards to interoperability.

    PubMed

    Berendsohn, W G; Güntsch, A; Hoffmann, N; Kohlbecker, A; Luther, K; Müller, A

    2011-01-01

    One of the most serious bottlenecks in the scientific workflows of biodiversity sciences is the need to integrate data from different sources, software applications, and services for analysis, visualisation and publication. For more than a quarter of a century the TDWG Biodiversity Information Standards organisation has had a central role in defining and promoting data standards and protocols supporting interoperability between disparate and locally distributed systems. Although often not sufficiently recognized, TDWG standards are the foundation of many popular Biodiversity Informatics applications and infrastructures ranging from small desktop software solutions to large-scale international data networks. However, individual scientists and groups of collaborating scientists have difficulties in fully exploiting the potential of standards that are often notoriously complex, lack non-technical documentation, and use different representations and underlying technologies. In the last few years, a series of initiatives such as Scratchpads, the EDIT Platform for Cybertaxonomy, and biowikifarm have started to implement and set up virtual work platforms for biodiversity sciences which shield their users from the complexity of the underlying standards. Apart from being practical work-horses for numerous working processes related to biodiversity sciences, they can be seen as information brokers mediating information between multiple data standards and protocols. The ViBRANT project will further strengthen the flexibility and power of virtual biodiversity working platforms by building software interfaces between them, thus facilitating essential information flows needed for comprehensive data exchange, data indexing, web-publication, and versioning. This work will make an important contribution to the shaping of an international, interoperable, and user-oriented biodiversity information infrastructure. PMID:22207807

  15. Biodiversity information platforms: From standards to interoperability

    PubMed Central

    Berendsohn, W. G.; Güntsch, A.; Hoffmann, N.; Kohlbecker, A.; Luther, K.; Müller, A.

    2011-01-01

    Abstract One of the most serious bottlenecks in the scientific workflows of biodiversity sciences is the need to integrate data from different sources, software applications, and services for analysis, visualisation and publication. For more than a quarter of a century the TDWG Biodiversity Information Standards organisation has had a central role in defining and promoting data standards and protocols supporting interoperability between disparate and locally distributed systems. Although often not sufficiently recognized, TDWG standards are the foundation of many popular Biodiversity Informatics applications and infrastructures ranging from small desktop software solutions to large-scale international data networks. However, individual scientists and groups of collaborating scientists have difficulties in fully exploiting the potential of standards that are often notoriously complex, lack non-technical documentation, and use different representations and underlying technologies. In the last few years, a series of initiatives such as Scratchpads, the EDIT Platform for Cybertaxonomy, and biowikifarm have started to implement and set up virtual work platforms for biodiversity sciences which shield their users from the complexity of the underlying standards. Apart from being practical work-horses for numerous working processes related to biodiversity sciences, they can be seen as information brokers mediating information between multiple data standards and protocols. The ViBRANT project will further strengthen the flexibility and power of virtual biodiversity working platforms by building software interfaces between them, thus facilitating essential information flows needed for comprehensive data exchange, data indexing, web-publication, and versioning. This work will make an important contribution to the shaping of an international, interoperable, and user-oriented biodiversity information infrastructure. PMID:22207807

  16. Semantics in NETMAR (open service NETwork for MARine environmental data)

    NASA Astrophysics Data System (ADS)

    Leadbetter, Adam; Lowry, Roy; Clements, Oliver

    2010-05-01

    Over recent years, there has been a proliferation of environmental data portals utilising a wide range of systems and services, many of which cannot interoperate. The European Union Framework 7 project NETMAR (which commenced February 2010) aims to provide a toolkit for building such portals in a coherent manner through the use of chained Open Geospatial Consortium Web Services (WxS), OPeNDAP file access and W3C standards controlled by a Business Process Execution Language workflow. As such, the end product will be configurable by user communities interested in developing a portal for marine environmental data, and will offer search, download and integration tools for a range of satellite, model and observed data from open ocean and coastal areas. Further processing of these data will also be available in order to provide statistics and derived products suitable for decision making in the chosen environmental domain. In order to make the resulting portals truly interoperable, the NETMAR programme requires a detailed definition of the semantics of the services being called and the data which are being requested. A key goal of the NETMAR programme is, therefore, to develop a multi-domain and multilingual ontology of marine data and services. This will allow searches across both human languages and across scientific domains. The approach taken will be to analyse existing semantic resources and provide mappings between them, gluing together the definitions, semantics and workflows of the WxS services. The mappings between terms aim to be more general than the standard "narrower than", "broader than" type seen in the thesauri or simple ontologies implemented by previous programmes. Tools for the development and population of ontologies will also be provided by NETMAR, as there will be instances in which existing resources cannot sufficiently describe newly encountered data or services.
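    The idea of mappings richer than broader/narrower links can be sketched in the spirit of SKOS mapping relations; the vocabulary prefixes and terms below are invented for illustration (including a French-language term, since the ontology is multilingual), and the closure over exact matches is one simple query such mappings enable.

```python
# Sketch of cross-vocabulary term mappings richer than plain
# broader/narrower links, in the spirit of SKOS mapping relations.
# The vocabularies and terms are invented for illustration.

MAPPINGS = [
    # (source term, relation, target term)
    ("cdi:sea_water_temperature", "exactMatch", "noaa:water_temp"),
    ("cdi:sea_water_temperature", "broadMatch", "noaa:ocean_properties"),
    ("cdi:temperature_de_l_eau", "exactMatch", "cdi:sea_water_temperature"),
]

def equivalents(term, mappings=MAPPINGS):
    """All terms reachable via exactMatch links, in either direction."""
    found, frontier = {term}, [term]
    while frontier:
        t = frontier.pop()
        for s, rel, o in mappings:
            if rel != "exactMatch":
                continue
            for n in ({o} if s == t else {s} if o == t else set()):
                if n not in found:
                    found.add(n)
                    frontier.append(n)
    return found

print(sorted(equivalents("noaa:water_temp")))
```

    A search for one community's term can then be expanded to its equivalents across languages and domains before the query is dispatched, while broadMatch/narrowMatch links support looser query broadening.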

  17. Supervised learning of semantic classes for image annotation and retrieval.

    PubMed

    Carneiro, Gustavo; Chan, Antoni B; Moreno, Pedro J; Vasconcelos, Nuno

    2007-03-01

    A probabilistic formulation for semantic image annotation and retrieval is proposed. Annotation and retrieval are posed as classification problems where each class is defined as the group of database images labeled with a common semantic label. It is shown that, by establishing this one-to-one correspondence between semantic labels and semantic classes, a minimum probability of error annotation and retrieval are feasible with algorithms that are 1) conceptually simple, 2) computationally efficient, and 3) do not require prior semantic segmentation of training images. In particular, images are represented as bags of localized feature vectors, a mixture density estimated for each image, and the mixtures associated with all images annotated with a common semantic label pooled into a density estimate for the corresponding semantic class. This pooling is justified by a multiple instance learning argument and performed efficiently with a hierarchical extension of expectation-maximization. The benefits of the supervised formulation over the more complex, and currently popular, joint modeling of semantic label and visual feature distributions are illustrated through theoretical arguments and extensive experiments. The supervised formulation is shown to achieve higher accuracy than various previously published methods at a fraction of their computational cost. Finally, the proposed method is shown to be fairly robust to parameter tuning. PMID:17224611
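    The pooling step can be sketched with a deliberate simplification: a single diagonal Gaussian per semantic class instead of the paper's hierarchical mixture estimated with EM, and synthetic feature vectors instead of localized image features. Features of all training images sharing a label are pooled into one class-conditional density, and a new image is annotated with the most likely class.

```python
# Sketch of annotation-as-classification: features of all training
# images sharing a label are pooled into one class-conditional density
# (a single diagonal Gaussian here, not the paper's hierarchical
# mixture), and a new image is annotated with the most likely class.
import numpy as np

def fit_classes(bags):
    """bags: label -> array of feature vectors pooled from all images
    carrying that label. Returns per-class (mean, variance)."""
    return {lab: (X.mean(axis=0), X.var(axis=0) + 1e-6)
            for lab, X in bags.items()}

def log_likelihood(x, mean, var):
    """Diagonal-Gaussian log density of feature vector x."""
    return -0.5 * np.sum(np.log(2 * np.pi * var) + (x - mean) ** 2 / var)

def annotate(x, classes):
    """Label whose pooled density assigns x the highest likelihood."""
    return max(classes, key=lambda lab: log_likelihood(x, *classes[lab]))

rng = np.random.default_rng(0)
bags = {  # synthetic 2-D features standing in for localized image features
    "sky":   rng.normal([0.2, 0.8], 0.05, size=(50, 2)),
    "grass": rng.normal([0.8, 0.2], 0.05, size=(50, 2)),
}
classes = fit_classes(bags)
print(annotate(np.array([0.25, 0.75]), classes))
```

    The pooling is what makes training cheap: no per-region labels are needed, only image-level labels, and each class density is estimated once from all images that mention it.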

  18. A Relation Routing Scheme for Distributed Semantic Media Query

    PubMed Central

    Liao, Zhuhua; Zhang, Guoqiang; Yi, Aiping; Zhang, Guoqing; Liang, Wei

    2013-01-01

    Performing complex semantic queries over large-scale distributed media contents is a challenging task for rich media applications. The dynamics and openness of data sources make it difficult to realize a query scheme that simultaneously achieves precision, scalability, and reliability. In this paper, a novel relation routing scheme (RRS) is proposed by renovating the routing model of Content Centric Network (CCN) for directly querying large-scale semantic media content. By using a proper query model and routing mechanism, semantic queries with complex relation constraints from users can be guided towards potential media sources through semantic guider nodes. The scattered and fragmented query results can be integrated on their way back for semantic needs or to avoid duplication. Several new techniques, such as semantic-based naming, incomplete response avoidance, timeout checking, and semantic integration, are developed in this paper to improve the accuracy, efficiency, and practicality of the proposed approach. Both analytical and experimental results show that the proposed scheme is a promising and effective solution for complex semantic queries and integration over large-scale networks. PMID:24319383

  19. Towards virtual knowledge broker services for semantic integration of life science literature and data sources.

    PubMed

    Harrow, Ian; Filsell, Wendy; Woollard, Peter; Dix, Ian; Braxenthaler, Michael; Gedye, Richard; Hoole, David; Kidd, Richard; Wilson, Jabe; Rebholz-Schuhmann, Dietrich

    2013-05-01

    Research in the life sciences requires ready access to primary data, derived information and relevant knowledge from a multitude of sources. Integration and interoperability of such resources are crucial for sharing content across research domains relevant to the life sciences. In this article we present a perspective review of data integration with emphasis on a semantics driven approach to data integration that pushes content into a shared infrastructure, reduces data redundancy and clarifies any inconsistencies. This enables much improved access to life science data from numerous primary sources. The Semantic Enrichment of the Scientific Literature (SESL) pilot project demonstrates feasibility for using already available open semantic web standards and technologies to integrate public and proprietary data resources, which span structured and unstructured content. This has been accomplished through a precompetitive consortium, which provides a cost effective approach for numerous stakeholders to work together to solve common problems. PMID:23247259

  20. A semantically-aided approach for online annotation and retrieval of medical images.

    PubMed

    Kyriazos, George K; Gerostathopoulos, Ilias Th; Kolias, Vassileios D; Stoitsis, John S; Nikita, Konstantina S

    2011-01-01

    The need for annotating the continuously increasing volume of medical image data is recognized by medical experts for a variety of purposes, whether in medical practice, research or education. The rich information content latent in medical images can be made explicit and formal with the use of well-defined ontologies. Evolution of the Semantic Web now offers a unique opportunity for a web-based, service-oriented approach. Remote access to the FMA and ICD-10 reference ontologies provides the ontological annotation framework. The proposed system utilizes this infrastructure to provide a customizable and robust annotation procedure. It also provides an intelligent search mechanism indicating the advantages of semantic over keyword search. The common representation layer discussed facilitates interoperability between institutions and systems, while semantic content enables inference and knowledge integration. PMID:22254818

  1. Enabling interoperability in Geoscience with GI-suite

    NASA Astrophysics Data System (ADS)

    Boldrini, Enrico; Papeschi, Fabrizio; Santoro, Mattia; Nativi, Stefano

    2015-04-01

    GI-suite is a brokering framework targeting interoperability of heterogeneous systems in the Geoscience domain. The framework is composed of different brokers, each focusing on a specific functionality: discovery, access and semantics (i.e. GI-cat, GI-axe, GI-sem). The brokering takes place between a set of heterogeneous publishing services and a set of heterogeneous consumer applications: the brokering target is represented by resources (e.g. coverages, features, or metadata information) required to seamlessly flow from the providers to the consumers. Different international and community standards are now supported by GI-suite, making possible its successful deployment in many international projects and initiatives (such as GEOSS, NSF BCube and several EU funded projects). On the publisher side, more than 40 standards and implementations are supported (e.g. Dublin Core, OAI-PMH, OGC W*S, Geonetwork, THREDDS Data Server, Hyrax Server, etc.); support for each individual standard is provided by means of specific GI-suite components, called accessors. On the consumer side, more than 15 standards and implementations are supported (e.g. ESRI ArcGIS, Openlayers, OGC W*S, OAI-PMH clients, etc.); support for each individual standard is provided by means of specific profiler components. The GI-suite can be used in different scenarios by different actors: - A data provider with a pre-existing data repository can deploy and configure GI-suite to broker it, thus making its data resources available through different protocols to many different users (e.g. for data discovery and/or data access) - A data consumer can use GI-suite to discover and/or access resources from a variety of publishing services that are already publishing data according to well-known standards. - A community can deploy and configure GI-suite to build a community (or project-specific) broker: GI-suite can broker a set of community related repositories and

  2. ICHI Categorial Structure: a WHO-FIC Tool for Semantic Interoperability of Procedures Classifications.

    PubMed

    Aljunid, Syed M; Rodrigues, Jean Marie; Best, Linda; Ahmed, Zafar; Reeza Mustaffa, Hasrul; Trombert, Béatrice; Hamzah Aljunid, Syed M; Souvignet, Julien; Kim, Sukil

    2015-01-01

    Casemix grouping using procedures classifications has become an important use case for health care terminologies. There are so many different national procedures classifications used for Casemix grouping that it is not possible to agree on a worldwide standard. ICHI (International Classification of Health Interventions) is proposing an approach that standardises only the terminologies' model structure. The poster shows the use of the ICHI alpha to replace ICD9 CM Volume 3 in the UNU-CBG International Casemix grouper. PMID:26262389

  3. Semantic Interoperable Electronic Patient Records: The Unfolding of Consensus based Archetypes.

    PubMed

    Pedersen, Rune; Wynn, Rolf; Ellingsen, Gunnar

    2015-01-01

    This paper is a status report from a large-scale openEHR-based EPR project of the North Norway Regional Health Authority, encouraged by the unfolding of a national repository for openEHR archetypes. Clinicians need to engage in, and be responsible for, the production of archetypes. The consensus processes have so far been challenged by a low number of active clinicians, a lack of the critical specialties needed to reach consensus, and a cumbersome review process (3 or 4 review rounds) for each archetype. The goal is to have several clinicians from each specialty as a backup if one is unable to participate. Archetypes and their importance for structured data and the sharing of information have to become more visible to clinicians through a sharper information practice. PMID:25991124

  4. Interoperability in Collaborative Processes: Requirements Characterisation and Proof Approach

    NASA Astrophysics Data System (ADS)

    Roque, Matthieu; Chapurlat, Vincent

    Interoperability problems that occur during collaboration between several enterprises can endanger that collaboration. Consequently, it is necessary to be able to anticipate these problems. The approach proposed in this paper is based on the specification of properties, representing interoperability requirements, and their analysis on enterprise models. Due to the conceptual limits of existing modeling languages, formalizing these requirements and translating them into properties requires adding conceptual enrichments to these languages. Finally, the analysis of the properties on enriched enterprise models, by formal checking techniques, aims to provide tools for reasoning on enterprise models in order to detect interoperability problems in an anticipatory manner.
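
    As a toy illustration of checking an interoperability requirement against a model: here the "enterprise model" is just a pair of produces/consumes declarations and the property is "every format a partner consumes is produced by the other partners". All names are invented; real enterprise models are far richer.

```python
# Toy interoperability-property check on a minimal collaboration model.
# Property: every format a partner consumes must be produced by some
# other partner. A violation flags an anticipated interoperability problem.

produces = {"firmA": {"step-file"}, "firmB": {"xml-order"}}
consumes = {"firmA": {"xml-order"}, "firmB": {"step-file", "pdf-plan"}}

def violations(produces, consumes):
    out = []
    for partner, needed in consumes.items():
        offered = set().union(*(f for p, f in produces.items() if p != partner))
        out.extend((partner, fmt) for fmt in needed - offered)
    return out

print(violations(produces, consumes))  # -> [('firmB', 'pdf-plan')]
```

    Here firmB expects a pdf-plan that no partner produces, so the problem is detected before the collaboration starts rather than during it.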

  5. Towards Interoperable Data Access through Climate.gov

    NASA Astrophysics Data System (ADS)

    Shrestha, S. R.; Marshall, J.; Stewart, J.; Ansari, S.; O'Brien, K.; Phillips, M. B.; Herring, D.

    2012-12-01

    The National Oceanic and Atmospheric Administration's (NOAA) Climate.gov team is enhancing users' ability to locate, preview, and acquire climate data. The Climate.gov team has created the Data Access and Interoperability project to design a web-based platform where interoperability between systems can be leveraged to allow greater data discovery, access, visualization and delivery. The team envisions an Interoperable Data Platform wherein systems can integrate with each other to support the synthesis of climate data. Interoperability is the ability for users to discover the available climate data, preview and interact with the data, and acquire the data in common digital formats through a simple web-based interface. The Climate.gov Interoperable Data Platform uses the concepts of Representational State Transfer (REST) and common best practices for Web Services. Emerging standards for automation of machine-to-machine operations, such as OpenSearch autodiscovery, are being implemented throughout the Data Platform to ensure harmonization between data service providers, integrators and consumers. Implementation of common specifications will ensure compatibility between NOAA and non-NOAA systems. The goal of the Interoperable Data Platform is to leverage existing web services, standards and existing solutions across the Earth sciences domain instead of creating new technologies. The Data Platform strives to become an integral part of the integration mechanisms supporting a system-of-systems ecosystem for Earth sciences information. As the team works across the organization, it will evaluate the capabilities of the participating systems to capture and assess the relative maturity of each system according to the Technology Infusion Working Group (TIWG) Interoperability Readiness Levels (IRL) as the reference for the interoperability mapping within NOAA. This will help establish the gaps and opportunities for integrating systems across a common set of
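
    OpenSearch autodiscovery, mentioned above, works by publishing a description document whose URL template is expanded with the client's query parameters (required parameters like {searchTerms}, optional ones suffixed with ?). A minimal sketch of that expansion, with a made-up endpoint URL rather than an actual Climate.gov service address:

```python
import re
from urllib.parse import quote

# Expand an OpenSearch-style URL template. Optional parameters ({name?})
# may be left empty; required ones must be supplied. The endpoint below
# is an invented example.
TEMPLATE = "https://example.gov/search?q={searchTerms}&page={startPage?}&count={count?}"

def expand(template: str, **params) -> str:
    def repl(match):
        key = match.group(1)
        optional = key.endswith("?")
        key = key.rstrip("?")
        if key in params:
            return quote(str(params[key]), safe="")
        if optional:
            return ""  # optional OpenSearch parameters default to empty
        raise KeyError(f"required OpenSearch parameter missing: {key}")
    return re.sub(r"\{([^{}]+)\}", repl, template)

print(expand(TEMPLATE, searchTerms="sea surface temperature", startPage=1))
# -> https://example.gov/search?q=sea%20surface%20temperature&page=1&count=
```

    Because every provider advertises the same template grammar, a harvester can query heterogeneous catalogs without per-provider client code, which is the harmonization the Data Platform is after.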

  6. Semantic aspects of the International Classification of Functioning, Disability and Health: towards sharing knowledge and unifying information.

    PubMed

    Andronache, Adrian Stefan; Simoncello, Andrea; Della Mea, Vincenzo; Daffara, Carlo; Francescutti, Carlo

    2012-02-01

    During the last decade, under the World Health Organization's direction, the International Classification of Functioning, Disability and Health (ICF) has become a reference tool for monitoring and developing various policies addressing people with disability. This article presents three steps to increase the semantic interoperability of ICF: first, the representation of ICF using ontology tools; second, the alignment to upper-level ontologies; and third, the use of these tools to implement semantic mappings between ICF and other tools, such as disability assessment instruments, health classifications, and at least partially formalized terminologies. PMID:22193319

  7. COEUS: “semantic web in a box” for biomedical applications

    PubMed Central

    2012-01-01

    Background As the “omics” revolution unfolds, the growth in data quantity and diversity is bringing about the need for pioneering bioinformatics software, capable of significantly improving the research workflow. To cope with these computer science demands, biomedical software engineers are adopting emerging semantic web technologies that better suit the life sciences domain. The latter’s complex relationships are easily mapped into semantic web graphs, enabling a superior understanding of collected knowledge. Despite increased awareness of semantic web technologies in bioinformatics, their use is still limited. Results COEUS is a new semantic web framework, aiming at a streamlined application development cycle and following a “semantic web in a box” approach. The framework provides a single package including advanced data integration and triplification tools, base ontologies, a web-oriented engine and a flexible exploration API. Resources can be integrated from heterogeneous sources, including CSV and XML files or SQL and SPARQL query results, and mapped directly to one or more ontologies. Advanced interoperability features include REST services, a SPARQL endpoint and LinkedData publication. These enable the creation of multiple applications for web, desktop or mobile environments, and empower a new knowledge federation layer. Conclusions The platform, targeted at biomedical application developers, provides a complete skeleton ready for rapid application deployment, enhancing the creation of new semantic information systems. COEUS is available as open source at http://bioinformatics.ua.pt/coeus/. PMID:23244467

  8. Towards Data Repository Interoperability: The Data Conservancy Data Packaging Specification

    NASA Astrophysics Data System (ADS)

    DiLauro, T.; Duerr, R.; Thessen, A. E.; Rippin, M.; Pralle, B.; Choudhury, G. S.

    2013-12-01

    description, the DCS instance will be able to provide default mappings for the directories and files within the package payload and enable support for deposited content at a lower level of service. Internally, the DCS will map these hybrid package serializations to its own internal business objects and their properties. Thus, this approach is highly extensible, as other packaging formats could be mapped in a similar manner. In addition, this scheme supports establishing the fixity of the payload while still supporting update of the semantic overlay data. This allows a data producer with scarce resources or an archivist who acquires a researcher's data to package the data for deposit with the intention of augmenting the resource description in the future. The Data Conservancy is partnering with the Sustainable Environment Actionable Data[4] project to test the interoperability of this new packaging mechanism. [1] Data Conservancy: http://dataconservancy.org/ [2] BagIt: https://datatracker.ietf.org/doc/draft-kunze-bagit/ [3] OAI-ORE: http://www.openarchives.org/ore/1.0/ [4] SEAD: http://sead-data.net/

  9. e-Science and biological pathway semantics

    PubMed Central

    Luciano, Joanne S; Stevens, Robert D

    2007-01-01

    Background The development of e-Science presents a major set of opportunities and challenges for the future progress of biological and life scientific research. Major new tools are required and corresponding demands are placed on the high-throughput data generated and used in these processes. Nowhere is the demand greater than in the semantic integration of these data. Semantic Web tools and technologies afford the chance to achieve this semantic integration. Since pathway knowledge is central to much of the scientific research today it is a good test-bed for semantic integration. Within the context of biological pathways, the BioPAX initiative, part of a broader movement towards the standardization and integration of life science databases, forms a necessary prerequisite for its successful application of e-Science in health care and life science research. This paper examines whether BioPAX, an effort to overcome the barrier of disparate and heterogeneous pathway data sources, addresses the needs of e-Science. Results We demonstrate how BioPAX pathway data can be used to ask and answer some useful biological questions. We find that BioPAX comes close to meeting a broad range of e-Science needs, but certain semantic weaknesses mean that these goals are missed. We make a series of recommendations for re-modeling some aspects of BioPAX to better meet these needs. Conclusion Once these semantic weaknesses are addressed, it will be possible to integrate pathway information in a manner that would be useful in e-Science. PMID:17493286

  10. Temporal Representation in Semantic Graphs

    SciTech Connect

    Levandoski, J J; Abdulla, G M

    2007-08-07

    A wide range of knowledge discovery and analysis applications, ranging from business to biological, make use of semantic graphs when modeling relationships and concepts. Most of the semantic graphs used in these applications are assumed to be static pieces of information, meaning that the temporal evolution of concepts and relationships is not taken into account. Guided by the need for more advanced semantic graph queries involving temporal concepts, this paper surveys the existing work involving temporal representations in semantic graphs.
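
    One common temporal representation is to attach a validity interval to each edge, so the graph can be queried as it existed at a given time. A minimal sketch with an invented schema (the survey covers several richer representations):

```python
from dataclasses import dataclass

# Semantic graph edges carrying validity intervals, enabling
# "as of time t" snapshot queries. Data is invented for illustration.

@dataclass(frozen=True)
class Edge:
    subject: str
    predicate: str
    obj: str
    start: int  # valid from (e.g., year)
    end: int    # valid until (inclusive)

EDGES = [
    Edge("alice", "works_for", "acme", 2001, 2005),
    Edge("alice", "works_for", "globex", 2006, 2010),
    Edge("acme", "located_in", "boston", 1990, 2010),
]

def snapshot(edges, t):
    """Edges of the graph as it existed at time t."""
    return [e for e in edges if e.start <= t <= e.end]

print([(e.subject, e.predicate, e.obj) for e in snapshot(EDGES, 2004)])
# -> [('alice', 'works_for', 'acme'), ('acme', 'located_in', 'boston')]
```

    A purely static graph would instead keep both works_for edges with no way to tell that they never held simultaneously.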

  11. Semantic Webs and Study Skills.

    ERIC Educational Resources Information Center

    Hoover, John J.; Rabideau, Debra K.

    1995-01-01

    Principles for ensuring effective use of semantic webbing in meeting study skill needs of students with learning problems are noted. Important study skills are listed, along with suggested semantic web topics for which subordinate ideas may be developed. Two semantic webs are presented, illustrating the study skills of multiple choice test-taking…

  12. Semantic Search of Web Services

    ERIC Educational Resources Information Center

    Hao, Ke

    2013-01-01

    This dissertation addresses semantic search of Web services using natural language processing. We first survey various existing approaches, focusing on the fact that the expensive costs of current semantic annotation frameworks result in limited use of semantic search for large scale applications. We then propose a vector space model based service…
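
    The vector space model mentioned in the abstract can be sketched as term-frequency vectors over service descriptions, ranked by cosine similarity against the query. The service names and descriptions below are invented, and real systems would add weighting (e.g. TF-IDF) and semantic expansion:

```python
import math
from collections import Counter

# Toy vector space model for Web service search: term-frequency
# vectors compared by cosine similarity. Descriptions are invented.

SERVICES = {
    "weather": "get current weather forecast for a city",
    "geocode": "convert a city name to latitude and longitude",
    "stocks":  "get current stock price quote for a ticker",
}

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def search(query: str) -> str:
    """Return the service whose description best matches the query."""
    qv = Counter(query.lower().split())
    vecs = {name: Counter(desc.split()) for name, desc in SERVICES.items()}
    return max(vecs, key=lambda name: cosine(qv, vecs[name]))

print(search("weather forecast"))  # -> weather
```

    The appeal of this approach over frameworks requiring manual semantic annotation is exactly that the vectors are built automatically from existing text.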

  13. Interoperability framework for communication between processes running on different mobile operating systems

    NASA Astrophysics Data System (ADS)

    Gal, A.; Filip, I.; Dragan, F.

    2016-02-01

    As we live in an era where mobile communication is everywhere around us, the necessity to communicate between the variety of devices we have available becomes ever more urgent. The major impediment to achieving communication between the available devices is the incompatibility between the operating systems running on them. In the present paper we propose a framework that makes it possible to inter-operate between processes running on different mobile operating systems. The interoperability process makes use of any communication environment made available by the mobile devices where the processes are installed. The communication environment is chosen so that the transfer of data between the mobile devices is optimal. The paper defines the architecture of the framework, expanding on the functionality and interrelation of the modules that make up the framework. For the proof of concept, we propose to use three different mobile operating systems installed on three different types of mobile devices. Depending on various factors related to the structure of the mobile devices and the type of data to be transferred, the framework will establish the data transfer protocol to be used. The framework automates the interoperability process, user intervention being limited to a simple selection from the options that the framework suggests based on a full analysis of the structural and functional elements of the mobile devices involved in the process.


  14. Using a single content model for eHealth interoperability and secondary use.

    PubMed

    Atalag, Koray

    2013-01-01

    This chapter describes a middle-out approach to eHealth interoperability, with strong oversight on public health and health research, enabled by a uniform and shared content model to which all health information exchange conforms. As described in New Zealand's Interoperability Reference Architecture, the content model borrows its top level organization from the Continuity of Care Record (CCR) standard and is underpinned by the openEHR formalism. This provides a canonical model for representing a variety of clinical information, and serves as reference when determining payload in health information exchange. The main premise of this approach is that since all exchanged data conforms to the same model, interoperability of clinical information can readily be achieved. Use of Archetypes ensures preservation of clinical context which is critical for secondary use. The content model is envisaged to grow incrementally by adding new or specialised archetypes as finer details are needed in real projects. The consistency and long term viability of this approach critically depends on effective governance which requires new models of collaboration, decision making and appropriate tooling to support the process. PMID:24018523
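
    The premise that "since all exchanged data conforms to the same model, interoperability can readily be achieved" can be illustrated with a toy conformance check. The content model below is an invented stand-in, vastly simpler than openEHR archetypes, which additionally carry clinical context and constraints:

```python
# Toy shared content model: every payload exchanged between systems
# must carry exactly these fields with these types. Field names are
# invented for illustration.

CONTENT_MODEL = {
    "patient_id": str,
    "systolic_bp": int,
    "diastolic_bp": int,
}

def conforms(payload: dict) -> bool:
    """True if the payload has exactly the modeled fields, correctly typed."""
    return (set(payload) == set(CONTENT_MODEL)
            and all(isinstance(payload[k], t) for k, t in CONTENT_MODEL.items()))

msg = {"patient_id": "NZ-123", "systolic_bp": 120, "diastolic_bp": 80}
print(conforms(msg))                       # -> True
print(conforms({"patient_id": "NZ-123"}))  # -> False
```

    A receiving system that validates against the shared model can reject non-conforming messages at the boundary instead of negotiating formats pairwise with every sender.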

  15. Enabling Interoperable Space Robots With the Joint Technical Architecture for Robotic Systems (JTARS)

    NASA Technical Reports Server (NTRS)

    Bradley, Arthur; Dubowsky, Steven; Quinn, Roger; Marzwell, Neville

    2005-01-01

    Robots that operate independently of one another will not be adequate to accomplish the future exploration tasks of long-distance autonomous navigation, habitat construction, resource discovery, and material handling. Such activities will require that systems widely share information, plan and divide complex tasks, share common resources, and physically cooperate to manipulate objects. Recognizing the need for interoperable robots to accomplish the new exploration initiative, NASA's Office of Exploration Systems Research & Technology recently funded the development of the Joint Technical Architecture for Robotic Systems (JTARS). JTARS' charter is to identify the interface standards necessary to achieve interoperability among space robots. A JTARS working group (JTARS-WG) has been established comprising recognized leaders in the field of space robotics including representatives from seven NASA centers along with academia and private industry. The working group's early accomplishments include addressing key issues required for interoperability, defining which systems are within the project's scope, and framing the JTARS manuals around classes of robotic systems.

  16. Hera: Engineering Web Applications Using Semantic Web-based Models

    NASA Astrophysics Data System (ADS)

    van der Sluijs, Kees; Houben, Geert-Jan; Leonardi, Erwin; Hidders, Jan

    In this chapter, we consider the contribution of models and model-driven approaches based on the Semantic Web to the development of Web applications. The model-driven web engineering approach, which separates concerns at different abstraction levels in the application design process, allows for a more robust and structured design of web applications. This is illustrated by the use of Hera, an approach from the class of Web engineering methods that relies on models expressed using RDF(S) and an RDF(S) query language. It illustrates how models, in particular models that fit the ideas and concepts of the Semantic Web, allow one to approach the design and engineering of modern, open and heterogeneous Web based systems. In the presented approach, adaptation and personalization are a main aspect, and it is illustrated how they are expressed using semantic data models and languages. Specific features of Hera are also discussed, such as interoperability between applications in user modeling, aspect orientation in Web design and graphical tool support for Web application design.

  17. iPad: Semantic annotation and markup of radiological images.

    PubMed

    Rubin, Daniel L; Rodriguez, Cesar; Shah, Priyanka; Beaulieu, Chris

    2008-01-01

    Radiological images contain a wealth of information, such as anatomy and pathology, which is often not explicit and computationally accessible. Information schemes are being developed to describe the semantic content of images, but such schemes can be unwieldy to operationalize because there are few tools to enable users to capture structured information easily as part of the routine research workflow. We have created iPad, an open source tool enabling researchers and clinicians to create semantic annotations on radiological images. iPad hides the complexity of the underlying image annotation information model from users, permitting them to describe images and image regions using a graphical interface that maps their descriptions to structured ontologies semi-automatically. Image annotations are saved in a variety of formats, enabling interoperability among medical records systems, image archives in hospitals, and the Semantic Web. Tools such as iPad can help reduce the burden of collecting structured information from images, and it could ultimately enable researchers and physicians to exploit images on a very large scale and glean the biological and physiological significance of image content. PMID:18999144

  18. iPad: Semantic Annotation and Markup of Radiological Images

    PubMed Central

    Rubin, Daniel L.; Rodriguez, Cesar; Shah, Priyanka; Beaulieu, Chris

    2008-01-01

    Radiological images contain a wealth of information, such as anatomy and pathology, which is often not explicit and computationally accessible. Information schemes are being developed to describe the semantic content of images, but such schemes can be unwieldy to operationalize because there are few tools to enable users to capture structured information easily as part of the routine research workflow. We have created iPad, an open source tool enabling researchers and clinicians to create semantic annotations on radiological images. iPad hides the complexity of the underlying image annotation information model from users, permitting them to describe images and image regions using a graphical interface that maps their descriptions to structured ontologies semi-automatically. Image annotations are saved in a variety of formats, enabling interoperability among medical records systems, image archives in hospitals, and the Semantic Web. Tools such as iPad can help reduce the burden of collecting structured information from images, and it could ultimately enable researchers and physicians to exploit images on a very large scale and glean the biological and physiological significance of image content. PMID:18999144

  19. CCSDS SM and C Mission Operations Interoperability Prototype

    NASA Technical Reports Server (NTRS)

    Lucord, Steven A.

    2010-01-01

    This slide presentation reviews the prototype of the Spacecraft Monitor and Control (SM&C) Operations for interoperability among other space agencies. This particular prototype uses the German Space Agency (DLR) to test the ideas for interagency coordination.

  20. Interoperability of Repositories: The Simple Query Interface in ARIADNE

    ERIC Educational Resources Information Center

    Ternier, Stefaan; Duval, Erik

    2006-01-01

    This article reports on our experiences in providing interoperability between the ARIADNE knowledge pool system (KPS) (Duval, Forte, Cardinaels, Verhoeven, Van Durm, Hendrickx et al., 2001) and several other heterogeneous learning object repositories and referatories.

  1. Reuse and Interoperability of Avionics for Space Systems

    NASA Technical Reports Server (NTRS)

    Hodson, Robert F.

    2007-01-01

    The space environment presents unique challenges for avionics. Launch survivability, thermal management, radiation protection, and other factors are important for successful space designs. Many existing avionics designs use custom hardware and software to meet the requirements of space systems. Although some space vendors have moved more towards a standard product line approach to avionics, the space industry still lacks similar standards and common practices for avionics development. This lack of commonality manifests itself in limited reuse and a lack of interoperability. To address NASA's need for interoperable avionics that facilitate reuse, several hardware and software approaches are discussed. Experiences with existing space boards and the application of terrestrial standards is outlined. Enhancements and extensions to these standards are considered. A modular stack-based approach to space avionics is presented. Software and reconfigurable logic cores are considered for extending interoperability and reuse. Finally, some of the issues associated with the design of reusable interoperable avionics are discussed.

  2. A Proposed Information Architecture for Telehealth System Interoperability

    SciTech Connect

    Warren, S.; Craft, R.L.; Parks, R.C.; Gallagher, L.K.; Garcia, R.J.; Funkhouser, D.R.

    1999-04-07

    Telemedicine technology is rapidly evolving. Whereas early telemedicine consultations relied primarily on video conferencing, consultations today may utilize video conferencing, medical peripherals, store-and-forward capabilities, electronic patient record management software, and/or a host of other emerging technologies. These remote care systems rely increasingly on distributed, collaborative information technology, in its many forms, during the care delivery process. While these leading-edge systems are bellwethers for highly advanced telemedicine, the remote care market today is still immature. Most telemedicine systems are custom-designed and do not interoperate with other commercial offerings. Users are limited to a set of functionality that a single vendor provides and must often pay high prices to obtain this functionality, since vendors in this marketplace must deliver entire systems in order to compete. Besides increasing corporate research and development costs, this inhibits the ability of the user to make intelligent purchasing decisions regarding best-of-breed technologies. We propose a secure, object-oriented information architecture for telemedicine systems that promotes plug-and-play interaction between system components through standardized interfaces, communication protocols, messaging formats, and data definitions. In this architecture, each component functions as a black box, and components plug together in a lego-like fashion to achieve the desired device or system functionality. The architecture will support various ongoing standards work in the medical device arena.

  3. Image sharing: evolving solutions in the age of interoperability.

    PubMed

    Mendelson, David S; Erickson, Bradley J; Choy, Garry

    2014-12-01

    Interoperability is a major focus of the quickly evolving world of Health IT. Easy, yet secure and confidential exchange of imaging exams and the associated reports must be a part of the solutions that are implemented. The availability of historical exams is essential in providing a quality interpretation and reducing inappropriate utilization of imaging services. Today, the exchange of imaging exams is most often achieved via a compact disc. We describe the virtues of this solution as well as challenges that have surfaced. Internet- and cloud-based technologies employed for many consumer services can provide a better solution. Vendors are making these solutions available. Standards for Internet-based exchange are emerging. Just as radiology converged on DICOM as a standard to store and view images, we need a common exchange standard. We will review the existing standards and how they are organized into useful workflows through Integrating the Healthcare Enterprise profiles. Integrating the Healthcare Enterprise and standards development processes are discussed. Health care and the domain of radiology must stay current with quickly evolving Internet standards. The successful use of the "cloud" will depend on both the technologies and the policies put into place around them, both of which we discuss. The radiology community must lead the way and provide a solution that works for radiologists and clinicians with use of the electronic medical record. We describe features we believe radiologists should consider when adding Internet-based exchange solutions to their practice. PMID:25467903

  4. MPEG-4 IPMP Extension for Interoperable Protection of Multimedia Content

    NASA Astrophysics Data System (ADS)

    Ji, Ming; Shen, SM; Zeng, Wenjun; Senoh, Taka; Ueno, Takafumi; Aoki, Terumasa; Hiroshi, Yasuda; Kogure, Takuyo

    2004-12-01

    To ensure secure content delivery, the Motion Picture Experts Group (MPEG) has dedicated significant effort to the digital rights management (DRM) issues. MPEG is now moving from defining only hooks to proprietary systems (e.g., in MPEG-2, MPEG-4 Version 1) to specifying a more encompassing standard in intellectual property management and protection (IPMP). MPEG feels that this is necessary in order to achieve MPEG's most important goal: interoperability. The design of the IPMP Extension framework also considers the complexity of the MPEG-4 standard and the diversity of its applications. This architecture leaves the details of the design of IPMP tools in the hands of applications developers, while ensuring the maximum flexibility and security. This paper first briefly describes the background of the development of the MPEG-4 IPMP Extension. It then presents an overview of the MPEG-4 IPMP Extension, including its architecture, the flexible protection signaling, and the secure messaging framework for the communication between the terminal and the tools. Two sample usage scenarios are also provided to illustrate how an MPEG-4 IPMP Extension compliant system works.

  5. Global interoperability in the oceanographic sea surface temperature community

    NASA Astrophysics Data System (ADS)

    Armstrong, E. M.; Casey, K. S.; Vazquez, J.; Habermann, T.; Bingham, A.; Thompson, C. K.; Donlon, C. J.

    2010-12-01

    The Group for High Resolution Sea Surface Temperature (GHRSST) Project is an international consortium of data providers coordinated across four continents providing sea surface temperature (SST) products from nearly every SST observing satellite in common data and metadata formats since 2005. It currently provides Level 2P data for 13 unique sensors with over 40 combined Level 2, 3 and 4 products. The entire project produces on the order of 35 Gbytes/day and distributes over 3 Tbytes/month from a variety of access nodes. Although these combined data throughputs are modest by the standards of future NASA Decadal missions, GHRSST has achieved a large measure of success by implementing a regional/global task sharing framework built on self-describing data formats, standardized metadata content and data access protocols early in its mission. We will present some of these implementation strategies, lessons learned and history with regard to standardizing products while reducing barriers to interoperability that the project undertook leading up to the present. We will also discuss recent revisions of data and metadata product specifications, and new tools and services that the project will implement in the near future to further reduce barriers, and improve discovery, metadata and access.

  6. An incremental database access method for autonomous interoperable databases

    NASA Technical Reports Server (NTRS)

    Roussopoulos, Nicholas; Sellis, Timos

    1994-01-01

    We investigated a number of design and performance issues of interoperable database management systems (DBMS's). The major results of our investigation were obtained in the areas of client-server database architectures for heterogeneous DBMS's, incremental computation models, buffer management techniques, and query optimization. We finished a prototype of an advanced client-server workstation-based DBMS which allows access to multiple heterogeneous commercial DBMS's. Experiments and simulations were then run to compare its performance with the standard client-server architectures. The focus of this research was on adaptive optimization methods for heterogeneous database systems. Adaptive buffer management accounts for the random and object-oriented access methods for which no known characterization of the access patterns exists. Adaptive query optimization means that value distributions and selectivities, which play the most significant role in query plan evaluation, are continuously refined to reflect the actual values, as opposed to static ones computed off-line. Query feedback is a concept that was first introduced to the literature by our group. We employed query feedback both for adaptive buffer management and for computing value distributions and selectivities. For adaptive buffer management, we use the page faults of prior executions to achieve more 'informed' management decisions. For the estimation of the distributions and the selectivities, we use curve-fitting techniques, such as least squares and splines, to regress on these values.
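The query-feedback loop described above can be sketched in a few lines. The feedback pairs below are invented for illustration, and a plain least-squares line stands in for the least-squares and spline regressions the authors describe: selectivity estimates are refined from observed executions rather than computed off-line.

```python
# Hypothetical query feedback from prior executions:
# (attribute value in a predicate, selectivity actually observed).
feedback = [(10.0, 0.02), (20.0, 0.05), (30.0, 0.11), (40.0, 0.19), (50.0, 0.30)]

def fit_least_squares(points):
    """Ordinary least-squares line through the feedback points."""
    n = len(points)
    sx = sum(x for x, _ in points)
    sy = sum(y for _, y in points)
    sxx = sum(x * x for x, _ in points)
    sxy = sum(x * y for x, y in points)
    slope = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    return slope, (sy - slope * sx) / n

slope, intercept = fit_least_squares(feedback)

def estimate_selectivity(value):
    """Selectivity estimate for a new predicate value, refined by feedback."""
    return slope * value + intercept

print(round(estimate_selectivity(35.0), 3))
```

Each new execution would append to `feedback` and re-fit, keeping the optimizer's statistics current.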

  7. A Proposed Information Architecture for Telehealth System Interoperability

    SciTech Connect

    Craft, R.L.; Funkhouser, D.R.; Gallagher, L.K.; Garica, R.J.; Parks, R.C.; Warren, S.

    1999-04-20

    We propose an object-oriented information architecture for telemedicine systems that promotes secure 'plug-and-play' interaction between system components through standardized interfaces, communication protocols, messaging formats, and data definitions. In this architecture, each component functions as a black box, and components plug together in a 'lego-like' fashion to achieve the desired device or system functionality. Telemedicine systems today rely increasingly on distributed, collaborative information technology during the care delivery process. While these leading-edge systems are bellwethers for highly advanced telemedicine, most are custom-designed and do not interoperate with other commercial offerings. Users are limited to the set of functionality that a single vendor provides and must often pay high prices to obtain this functionality, since vendors in this marketplace must deliver entire systems in order to compete. Besides increasing corporate research and development costs, this inhibits the ability of the user to make intelligent purchasing decisions regarding best-of-breed technologies. This paper proposes a reference architecture for plug-and-play telemedicine systems that addresses these issues.

  8. A semantically-aided architecture for a web-based monitoring system for carotid atherosclerosis.

    PubMed

    Kolias, Vassileios D; Stamou, Giorgos; Golemati, Spyretta; Stoitsis, Giannis; Gkekas, Christos D; Liapis, Christos D; Nikita, Konstantina S

    2015-08-01

    Carotid atherosclerosis is a multifactorial disease and its clinical diagnosis depends on the evaluation of heterogeneous clinical data, such as imaging exams, biochemical tests and the patient's clinical history. The lack of interoperability between Health Information Systems (HIS) does not allow physicians to acquire all the necessary data for the diagnostic process. In this paper, a semantically-aided architecture is proposed for a web-based monitoring system for carotid atherosclerosis that is able to gather and unify heterogeneous data with the use of an ontology and to create a common interface for data access, enhancing the interoperability of HIS. The architecture is based on an application ontology of carotid atherosclerosis that is used to (a) integrate heterogeneous data sources on the basis of semantic representation and ontological reasoning and (b) access the critical information using SPARQL query rewriting and ontology-based data access services. The architecture was tested over a carotid atherosclerosis dataset consisting of the imaging exams and the clinical profiles of 233 patients, using a set of complex queries constructed by the physicians. The proposed architecture was evaluated with respect to the complexity of the queries that the physicians could make and the retrieval speed. It gave promising results in terms of interoperability, ontology-based integration of heterogeneous data sources, and expanded query and retrieval capabilities in HIS. PMID:26736524
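The ontology-backed query pattern the paper describes can be illustrated with a toy in-memory triple store; the class and property names below are invented, since the paper's carotid ontology is not reproduced here.

```python
# Heterogeneous records unified as (subject, predicate, object) triples,
# queried through one common interface. All terms are illustrative.
triples = {
    ("patient1", "type", "Patient"),
    ("patient1", "hasStenosisDegree", 72),
    ("patient2", "type", "Patient"),
    ("patient2", "hasStenosisDegree", 45),
}

def query(predicate, obj_filter):
    """Return subjects whose value for `predicate` satisfies `obj_filter`."""
    return sorted(s for s, p, o in triples if p == predicate and obj_filter(o))

# A clinician-style question: which patients exceed 70% stenosis?
print(query("hasStenosisDegree", lambda d: isinstance(d, int) and d > 70))
```

In the actual system this question would be posed in SPARQL and rewritten against the ontology, rather than evaluated against a hand-built set.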

  9. ISAIA: Interoperable Systems for Archival Information Access

    NASA Technical Reports Server (NTRS)

    Hanisch, Robert J.

    2002-01-01

    The ISAIA project was originally proposed in 1999 as a successor to the informal AstroBrowse project. AstroBrowse, which provided a data location service for astronomical archives and catalogs, was a first step toward data system integration and interoperability. The goals of ISAIA were ambitious: '...To develop an interdisciplinary data location and integration service for space science. Building upon existing data services and communications protocols, this service will allow users to transparently query hundreds or thousands of WWW-based resources (catalogs, data, computational resources, bibliographic references, etc.) from a single interface. The service will collect responses from various resources and integrate them in a seamless fashion for display and manipulation by the user.' Funding was approved only for a one-year pilot study, a decision that in retrospect was wise given the rapid changes in information technology in the past few years and the emergence of the Virtual Observatory initiatives in the US and worldwide. Indeed, the ISAIA pilot study was influential in shaping the science goals, system design, metadata standards, and technology choices for the virtual observatory. The ISAIA pilot project also helped to cement working relationships among the NASA data centers, US ground-based observatories, and international data centers. The ISAIA project was formed as a collaborative effort between thirteen institutions that provided data to astronomers, space physicists, and planetary scientists. Among the fruits we ultimately hoped would come from this project would be a central site on the Web that any space scientist could use to efficiently locate existing data relevant to a particular scientific question. Furthermore, we hoped that the needed technology would be general enough that smaller, more focused communities within space science could use the same technologies and standards to provide more specialized services.
A major challenge to searching

  10. Semantator: annotating clinical narratives with semantic web ontologies.

    PubMed

    Song, Dezhao; Chute, Christopher G; Tao, Cui

    2012-01-01

    To facilitate clinical research, clinical data needs to be stored in a machine-processable and understandable way. Manually annotating clinical data is time-consuming. Automatic approaches (e.g., Natural Language Processing systems) have been adopted to convert such data into structured formats; however, the quality of such automatically extracted data may not always be satisfactory. In this paper, we propose Semantator, a semi-automatic tool for document annotation with Semantic Web ontologies. With a loaded free-text document and an ontology, Semantator supports the creation/deletion of ontology instances for any document fragment, linking/disconnecting instances with the properties in the ontology, and also enables automatic annotation by connecting to the NCBO Annotator and cTAKES. By representing annotations in Semantic Web standards, Semantator supports reasoning based upon the underlying semantics of the owl:disjointWith and owl:equivalentClass predicates. We present discussions based on user experiences of using Semantator. PMID:22779043
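The kind of consistency check that owl:disjointWith enables can be sketched without a full reasoner: if two classes are declared disjoint, a text fragment annotated as an instance of both is inconsistent. Class and fragment names here are illustrative, not taken from any particular ontology.

```python
# Disjointness axioms and annotations, in miniature. A real system would
# obtain these from the loaded OWL ontology and the tool's annotations.
disjoint_pairs = {frozenset({"Medication", "Symptom"})}

annotations = {
    "fragment-17": {"Medication", "Symptom"},  # typed with disjoint classes
    "fragment-42": {"Symptom"},
}

def inconsistent_fragments(annotations, disjoint_pairs):
    """Fragments whose class assertions violate a disjointness axiom."""
    return sorted(frag for frag, classes in annotations.items()
                  if any(pair <= classes for pair in disjoint_pairs))

print(inconsistent_fragments(annotations, disjoint_pairs))
```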

  11. Semantator: Annotating Clinical Narratives with Semantic Web Ontologies

    PubMed Central

    Song, Dezhao; Chute, Christopher G.; Tao, Cui

    2012-01-01

    To facilitate clinical research, clinical data needs to be stored in a machine-processable and understandable way. Manually annotating clinical data is time-consuming. Automatic approaches (e.g., Natural Language Processing systems) have been adopted to convert such data into structured formats; however, the quality of such automatically extracted data may not always be satisfactory. In this paper, we propose Semantator, a semi-automatic tool for document annotation with Semantic Web ontologies. With a loaded free-text document and an ontology, Semantator supports the creation/deletion of ontology instances for any document fragment, linking/disconnecting instances with the properties in the ontology, and also enables automatic annotation by connecting to the NCBO Annotator and cTAKES. By representing annotations in Semantic Web standards, Semantator supports reasoning based upon the underlying semantics of the owl:disjointWith and owl:equivalentClass predicates. We present discussions based on user experiences of using Semantator. PMID:22779043

  12. Ensuring Sustainable Data Interoperability Across the Natural and Social Sciences

    NASA Astrophysics Data System (ADS)

    Downs, R. R.; Chen, R. S.

    2015-12-01

    Both the natural and social science data communities are attempting to address the long-term sustainability of their data infrastructures in rapidly changing research, technological, and policy environments. Many parts of these communities are also considering how to improve the interoperability and integration of their data and systems across natural, social, health, and other domains. However, these efforts have generally been undertaken in parallel, with little thought about how different sustainability approaches may impact long-term interoperability from scientific, legal, or economic perspectives, or vice versa, i.e., how improved interoperability could enhance—or threaten—infrastructure sustainability. Scientific progress depends substantially on the ability to learn from the legacy of previous work available for current and future scientists to study, often by integrating disparate data not previously assembled. Digital data are less likely than scientific publications to be usable in the future unless they are managed by science-oriented repositories that can support long-term data access with the documentation and services needed for future interoperability. We summarize recent discussions in the social and natural science communities on emerging approaches to sustainability and relevant interoperability activities, including efforts by the Belmont Forum E-Infrastructures project to address global change data infrastructure needs; the Group on Earth Observations to further implement data sharing and improve data management across diverse societal benefit areas; and the Research Data Alliance to develop legal interoperability principles and guidelines and to address challenges faced by domain repositories. We also examine emerging needs for data interoperability in the context of the post-2015 development agenda and the expected set of Sustainable Development Goals (SDGs), which set ambitious targets for sustainable development, poverty reduction, and

  13. Planetary Sciences Interoperability at VO Paris Data Centre

    NASA Astrophysics Data System (ADS)

    Le Sidaner, P.; Aboudarham, J.; Birlan, M.; Briot, D.; Bonnin, X.; Cecconi, B.; Chauvin, C.; Erard, S.; Henry, F.; Lamy, L.; Mancini, M.; Normand, J.; Popescu, F.; Roques, F.; Savalle, R.; Schneider, J.; Shih, A.; Thuillot, W.; Vinatier, S.

    2015-10-01

    The Astronomy community has been developing interoperability for more than 10 years by standardizing data access, data formats, and metadata. This international action is led by the International Virtual Observatory Alliance (IVOA). Observatoire de Paris is an active participant in this project. All actions on interoperability, data and service provision are centralized in and managed by VOParis Data Centre (VOPDC). VOPDC is a coordinated project of all scientific departments of Observatoire de Paris.

  14. The HDF Product Designer - Interoperability in the First Mile

    NASA Astrophysics Data System (ADS)

    Lee, H.; Jelenak, A.; Habermann, T.

    2014-12-01

    Interoperable data have been a long-time goal in many scientific communities. The recent growth in analysis, visualization and mash-up applications that expect data stored in a standardized manner has brought the interoperability issue to the fore. On the other hand, producing interoperable data is often regarded as a sideline task in a typical research team, for which resources are not readily available. The HDF Group is developing a software tool aimed at lessening the burden of creating data in standards-compliant, interoperable HDF5 files. The tool, named HDF Product Designer, lowers the threshold needed to design such files by providing a user interface that combines the rich HDF5 feature set with applicable metadata conventions. Users can quickly devise new HDF5 files while seamlessly incorporating the latest best practices and conventions from their community. That is what the term 'interoperability in the first mile' means: enabling generation of interoperable data in HDF5 files from the onset of their production. The tool also incorporates collaborative features, allowing a team approach to file design, as well as easy transfer of best practices as they are developed. The current state of the tool and plans for future development will be presented. Constructive input from interested parties is always welcome.
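The design-first workflow can be sketched as a pre-flight check: the file layout and its attributes are described up front and validated against a convention before any data are written. The required-attribute rule below is a simplified, CF-style stand-in, not the tool's actual rule set or API.

```python
# A proposed HDF5 layout: dataset path -> attributes to be written.
design = {
    "/sst": {"units": "kelvin", "long_name": "sea surface temperature"},
    "/time": {"units": "seconds since 1981-01-01"},
}

# Simplified convention: every dataset needs these attributes.
REQUIRED = ("units", "long_name")

def missing_attributes(design, required=REQUIRED):
    """(dataset, attribute) pairs the design must add to be compliant."""
    return [(path, attr)
            for path, attrs in sorted(design.items())
            for attr in required if attr not in attrs]

print(missing_attributes(design))
```

Catching the missing `long_name` at design time, before any files exist, is the "first mile" idea in miniature.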

  15. Universal Semantics in Translation

    ERIC Educational Resources Information Center

    Wang, Zhenying

    2009-01-01

    What and how we translate are questions often argued about. No matter what kind of answers one may give, priority in translation should be granted to meaning, especially those meanings that exist in all concerned languages. In this paper the author defines them as universal sememes, and the study of them as universal semantics, of which…

  16. Latent Semantic Analysis.

    ERIC Educational Resources Information Center

    Dumais, Susan T.

    2004-01-01

    Presents a literature review that covers the following topics related to Latent Semantic Analysis (LSA): (1) LSA overview; (2) applications of LSA, including information retrieval (IR), information filtering, cross-language retrieval, and other IR-related LSA applications; (3) modeling human memory, including the relationship of LSA to other…

  17. Learning Semantic Query Suggestions

    NASA Astrophysics Data System (ADS)

    Meij, Edgar; Bron, Marc; Hollink, Laura; Huurnink, Bouke; de Rijke, Maarten

    An important application of semantic web technology is recognizing human-defined concepts in text. Query transformation is a strategy often used in search engines to derive queries that return more useful search results than the original query, and most popular search engines provide facilities that let users complete, specify, or reformulate their queries. We study the problem of semantic query suggestion, a special type of query transformation based on identifying semantic concepts contained in user queries. We use a feature-based approach in conjunction with supervised machine learning, augmenting term-based features with search-history-based and concept-specific features. We apply our method to the task of linking queries from real-world query logs (the transaction logs of the Netherlands Institute for Sound and Vision) to the DBpedia knowledge base. We evaluate the utility of different machine learning algorithms, features, and feature types in identifying semantic concepts using a manually developed test bed and show significant improvements over an already high baseline. The resources developed for this paper, i.e., queries, human assessments, and extracted features, are available for download.
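The feature-based ranking at the heart of this approach can be sketched as a weighted score over candidate concepts. The candidates, feature values, and weights below are invented; in the paper the weighting is effectively learned by supervised machine learning rather than fixed by hand.

```python
# Candidate DBpedia-style concepts for a query, with term-overlap and
# search-history features (both hypothetical values in [0, 1]).
candidates = {
    "dbpedia:Amsterdam": {"overlap": 1.0, "history": 0.7},
    "dbpedia:Amstel":    {"overlap": 0.5, "history": 0.1},
}
weights = {"overlap": 0.6, "history": 0.4}

def rank(candidates, weights):
    """Concepts ordered by weighted feature score, best first."""
    score = lambda feats: sum(weights[k] * feats[k] for k in weights)
    return sorted(candidates, key=lambda c: score(candidates[c]), reverse=True)

print(rank(candidates, weights))
```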

  18. Environmental Attitudes Semantic Differential.

    ERIC Educational Resources Information Center

    Mehne, Paul R.; Goulard, Cary J.

    This booklet is an evaluation instrument which utilizes semantic differential data to assess environmental attitudes. Twelve concepts are included: regulated access to beaches, urban planning, dune vegetation, wetlands, future cities, reclaiming wetlands for building development, city parks, commercial development of beaches, existing cities,…

  19. Semantic Space Analyst

    2004-04-15

    The Semantic Space Analyst (SSA) is software for analyzing a text corpus, discovering relationships among terms, and allowing the user to explore that information in different ways. It includes features for displaying and laying out terms and relationships visually, for generating such maps from manual queries, and for discovering differences between corpora. Data can also be exported to Microsoft Excel.

  20. The advanced microgrid. Integration and interoperability

    SciTech Connect

    Bower, Ward Isaac; Ton, Dan T.; Guttromson, Ross; Glover, Steven F; Stamp, Jason Edwin; Bhatnagar, Dhruv; Reilly, Jim

    2014-02-01

    This white paper focuses on "advanced microgrids," but sections do, out of necessity, reference today's commercially available systems and installations in order to clearly distinguish the differences and advances. Advanced microgrids have been identified as a necessary part of the modern electrical grid through two DOE microgrid workshops, the National Institute of Standards and Technology Smart Grid Interoperability Panel, and other related sources. With their grid-interconnectivity advantages, advanced microgrids will improve system energy efficiency and reliability and provide enabling technologies for grid-independence to end-user sites. One popular definition that has evolved and is used in multiple references is that a microgrid is a group of interconnected loads and distributed-energy resources within clearly defined electrical boundaries that acts as a single controllable entity with respect to the grid. A microgrid can connect to and disconnect from the grid, enabling it to operate in either grid-connected or island mode. Further, an advanced microgrid can then be loosely defined as a dynamic microgrid.

  1. Recent ARC developments: Through modularity to interoperability

    NASA Astrophysics Data System (ADS)

    Smirnova, O.; Cameron, D.; Dóbé, P.; Ellert, M.; Frågåt, T.; Grønager, M.; Johansson, D.; Jönemo, J.; Kleist, J.; Kočan, M.; Konstantinov, A.; Kónya, B.; Márton, I.; Möller, S.; Mohn, B.; Nagy, Zs; Nilsen, J. K.; Ould Saada, F.; Qiang, W.; Read, A.; Rosendahl, P.; Roczei, G.; Savko, M.; Skou Andersen, M.; Stefán, P.; Szalai, F.; Taga, A.; Toor, S. Z.; Wäänänen, A.

    2010-04-01

    The Advanced Resource Connector (ARC) middleware introduced by NorduGrid is one of the basic Grid solutions used by scientists worldwide. While being well-proven in daily use by a wide variety of scientific applications at large-scale infrastructures like the Nordic DataGrid Facility (NDGF) and smaller scale projects, production ARC of today is still largely based on conventional Grid technologies and custom interfaces introduced a decade ago. In order to guarantee sustainability, true cross-system portability and standards-compliance based interoperability, the ARC community undertakes a massive effort of implementing modular Web Service (WS) approach into the middleware. With support from the EU KnowARC project, new components were introduced and the existing key ARC services got extended with WS technology based standard-compliant interfaces following a service-oriented architecture. Such components include the hosting environment framework, the resource-coupled execution service, the re-engineered client library, the self-healing storage solution and the peer-to-peer information system, to name a few. Gradual introduction of these new services and client tools into the production middleware releases is carried out together with NDGF and thus ensures a smooth transition to the next generation Grid middleware. Standard interfaces and modularity of the new component design are essential for ARC contributions to the planned Universal Middleware Distribution of the European Grid Initiative.

  2. Interoperable Data Access Services for NOAA IOOS

    NASA Astrophysics Data System (ADS)

    de La Beaujardiere, J.

    2008-12-01

    The Integrated Ocean Observing System (IOOS) is intended to enhance our ability to collect, deliver, and use ocean information. The goal is to support research and decision-making by providing data on our open oceans, coastal waters, and Great Lakes in the formats, rates, and scales required by scientists, managers, businesses, governments, and the public. The US National Oceanic and Atmospheric Administration (NOAA) is the lead agency for IOOS. NOAA's IOOS office supports the development of regional coastal observing capability and promotes data management efforts to increase data accessibility. Geospatial web services have been established at NOAA data providers including the National Data Buoy Center (NDBC), the Center for Operational Oceanographic Products and Services (CO-OPS), and CoastWatch, and at regional data provider sites. Services established include Open-source Project for a Network Data Access Protocol (OpenDAP), Open Geospatial Consortium (OGC) Sensor Observation Service (SOS), and OGC Web Coverage Service (WCS). These services provide integrated access to data holdings that have been aggregated at each center from multiple sources. We wish to collaborate with other groups to improve our service offerings to maximize interoperability and enhance cross-provider data integration, and to share common service components such as registries, catalogs, data conversion, and gateways. This paper will discuss the current status of NOAA's IOOS efforts and possible next steps.

  3. Interoperable Data Sharing for Diverse Scientific Disciplines

    NASA Astrophysics Data System (ADS)

    Hughes, John S.; Crichton, Daniel; Martinez, Santa; Law, Emily; Hardman, Sean

    2016-04-01

    For diverse scientific disciplines to interoperate, they must be able to exchange information based on a shared understanding. To capture this shared understanding, we have developed a knowledge representation framework using ontologies and ISO-level archive and metadata registry reference models. This framework provides multi-level governance, evolves independently of implementation technologies, and promotes agile development, namely adaptive planning, evolutionary development, early delivery, continuous improvement, and rapid and flexible response to change. The knowledge representation framework is populated through knowledge acquisition from discipline experts. It is also extended to meet specific discipline requirements. The result is a formalized and rigorous knowledge base that addresses data representation, integrity, provenance, context, quantity, and their relationships within the community. The contents of the knowledge base are translated and written to files in appropriate formats to configure system software and services, provide user documentation, validate ingested data, and support data analytics. This presentation will provide an overview of the framework, present the Planetary Data System's PDS4 as a use case that has been adopted by the international planetary science community, describe how the framework is being applied to other disciplines, and share some important lessons learned.

  4. Semantic Shot Classification in Sports Video

    NASA Astrophysics Data System (ADS)

    Duan, Ling-Yu; Xu, Min; Tian, Qi

    2003-01-01

    In this paper, we present a unified framework for semantic shot classification in sports videos. Unlike previous approaches, which focus on clustering by aggregating shots with similar low-level features, the proposed scheme makes use of domain knowledge of a specific sport to perform a top-down video shot classification, including identification of video shot classes for each sport, and supervised learning and classification of the given sports video with low-level and middle-level features extracted from the sports video. It is observed that for each sport we can predefine a small number of semantic shot classes, about 5~10, which cover 90~95% of sports broadcasting video. With the supervised learning method, we can map the low-level features to middle-level semantic video shot attributes such as dominant object motion (a player), camera motion patterns, and court shape. On the basis of the appropriate fusion of those middle-level attributes, we classify video shots into the predefined video shot classes, each of which has a clear semantic meaning. The proposed method has been tested over 4 types of sports videos: tennis, basketball, volleyball and soccer. Good classification accuracy of 85~95% has been achieved. With correctly classified sports video shots, further structural and temporal analysis, such as event detection, video skimming, and table-of-contents generation, will be greatly facilitated.
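The two-stage mapping can be caricatured in a few lines: low-level features are first lifted to middle-level attributes, which are then fused into one of the predefined shot classes. The thresholds and rules below are invented stand-ins for the paper's supervised classifiers.

```python
def middle_level(features):
    """Map low-level features to middle-level semantic attributes."""
    return {
        "camera": "pan" if features["motion"] > 0.5 else "static",
        "court_visible": features["court_color_ratio"] > 0.4,
    }

def shot_class(attrs):
    """Fuse middle-level attributes into a predefined semantic shot class."""
    if attrs["court_visible"] and attrs["camera"] == "static":
        return "court-view"
    return "close-up"

print(shot_class(middle_level({"motion": 0.2, "court_color_ratio": 0.6})))
```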

  5. Semantic Web meets Integrative Biology: a survey.

    PubMed

    Chen, Huajun; Yu, Tong; Chen, Jake Y

    2013-01-01

    Integrative Biology (IB) uses experimental or computational quantitative technologies to characterize biological systems at the molecular, cellular, tissue and population levels. IB typically involves the integration of data, knowledge and capabilities across disciplinary boundaries in order to solve complex problems. We identify a series of bioinformatics problems posed by interdisciplinary integration: (i) data integration that interconnects structured data across related biomedical domains; (ii) ontology integration that brings jargons, terminologies and taxonomies from various disciplines into a unified network of ontologies; (iii) knowledge integration that integrates disparate knowledge elements from multiple sources; (iv) service integration that builds applications out of services provided by different vendors. We argue that IB can benefit significantly from the integration solutions enabled by Semantic Web (SW) technologies. The SW enables scientists to share content beyond the boundaries of applications and websites, resulting in a web of data that is meaningful and understandable to any computer. In this review, we provide insight into how SW technologies can be used to build open, standardized and interoperable solutions for interdisciplinary integration on a global basis. We present a rich set of case studies in systems biology, integrative neuroscience, bio-pharmaceutics and translational medicine, to highlight the technical features and benefits of SW applications in IB. PMID:22492191

  6. Key pillars of data interoperability in Earth Sciences - INSPIRE and beyond

    NASA Astrophysics Data System (ADS)

    Tomas, Robert; Lutz, Michael

    2013-04-01

    The well-known heterogeneity and fragmentation of data models, formats and controlled vocabularies of environmental data limit potential data users from utilising the wealth of environmental information available today across Europe. The main aim of INSPIRE is to improve this situation and give users the possibility to access, use and correctly interpret environmental data. Over the past years, a number of INSPIRE technical guidelines (TG) and implementing rules (IR) for interoperability have been developed, involving hundreds of domain experts from across Europe. The data interoperability specifications, which have been developed for all 34 INSPIRE spatial data themes, are the central component of the TG and IR. Several of these themes are related to the earth sciences, e.g. geology (including hydrogeology, geophysics and geomorphology), mineral and energy resources, soil science, natural hazards, meteorology, oceanography, hydrology and land cover. The following main pillars for data interoperability and harmonisation have been identified during the development of the specifications: Conceptual data models describe the spatial objects and their properties and relationships for the different spatial data themes. To achieve cross-domain harmonisation, the data models for all themes are based on a common modelling framework (the INSPIRE Generic Conceptual Model) and managed in a common UML repository. Harmonised vocabularies (or code lists) are to be used in data exchange in order to overcome interoperability issues caused by heterogeneous free-text and/or multi-lingual content. Since a mapping to a harmonised vocabulary can be difficult, the INSPIRE data models typically allow the provision of more specific terms from local vocabularies in addition to the harmonised terms, utilising either the extensibility options or additional terminological attributes. Encoding: currently, specific XML profiles of the Geography Markup Language (GML) are promoted as the standard

  7. Context-Aware Adaptive Hybrid Semantic Relatedness in Biomedical Science

    NASA Astrophysics Data System (ADS)

    Emadzadeh, Ehsan

    Text mining of biomedical literature and clinical notes is a very active field of research in biomedical science. Semantic analysis is one of the core modules of different Natural Language Processing (NLP) solutions. Methods for calculating the semantic relatedness of two concepts can be very useful in solutions to different problems such as relationship extraction, ontology creation and question answering [1-6]. Several techniques exist for calculating the semantic relatedness of two concepts, utilizing different knowledge sources and corpora. So far, researchers have attempted to find the best hybrid method for each domain by combining semantic relatedness techniques and data sources manually. This work attempts to eliminate the need for manually combining semantic relatedness methods for each new context or resource by proposing an automated method that finds the combination of semantic relatedness techniques and resources achieving the best semantic relatedness score in every context. This may help the research community find the best hybrid method for each context, given the available algorithms and resources.
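The adaptive selection idea can be sketched as scoring each candidate measure against gold-standard human ratings for the current context and keeping the best performer; a full version would also search over weighted combinations of measures. All scores below are invented for illustration.

```python
# Gold-standard human relatedness ratings for concept pairs in one context.
gold = {("heart", "artery"): 0.9, ("heart", "keyboard"): 0.1}

# Scores produced by two candidate relatedness techniques (hypothetical).
methods = {
    "path_based":   {("heart", "artery"): 0.60, ("heart", "keyboard"): 0.40},
    "corpus_based": {("heart", "artery"): 0.85, ("heart", "keyboard"): 0.15},
}

def best_method(methods, gold):
    """Method whose scores have the least squared error against gold."""
    def err(scores):
        return sum((scores[pair] - g) ** 2 for pair, g in gold.items())
    return min(methods, key=lambda m: err(methods[m]))

print(best_method(methods, gold))
```

Re-running the selection whenever the context (corpus, ontology, gold data) changes is what makes the combination adaptive rather than hand-tuned.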

  8. Moving Controlled Vocabularies into the Semantic Web

    NASA Astrophysics Data System (ADS)

    Thomas, R.; Lowry, R. K.; Kokkinaki, A.

    2015-12-01

    Having placed Linked Data tooling over a single SPARQL end point (https://vocab.nerc.ac.uk/sparql), the obvious future development for this system is to support semantic interoperability outside NVS through the incorporation of federated SPARQL end points in the USA and Australia during the ODIP II project. (Vocabulary search: https://www.bodc.ac.uk/data/codes_and_formats/vocabulary_search/)

  9. Academic Research Library as Broker in Addressing Interoperability Challenges for the Geosciences

    NASA Astrophysics Data System (ADS)

    Smith, P., II

    2015-12-01

    Data capture is an important process in the research lifecycle. Complete descriptive and representative information about the data or database is necessary during data collection, whether in the field or in the research lab. The National Science Foundation's (NSF) Public Access Plan (2015) mandates that federally funded projects make their research data more openly available. Developing, implementing, and integrating metadata workflows into the research process of the data lifecycle facilitates improved data access while also addressing interoperability challenges for the geosciences such as data description and representation. Lack of metadata or data curation can contribute to (1) semantic, (2) ontology, and (3) data integration issues within and across disciplinary domains and projects. Some researchers of EarthCube-funded projects have identified these issues as gaps. These gaps can contribute to data access, discovery, and integration problems between domain-specific and general data repositories. Academic research libraries have expertise in providing long-term discovery and access through the use of metadata standards and provision of access to research data, datasets, and publications via institutional repositories. Metadata crosswalks, open archival information systems (OAIS), trusted repositories, the Data Seal of Approval, persistent URLs, and the linking of data, objects, resources, and publications in institutional repositories and digital content management systems are common components in the library discipline. These components contribute to a library perspective on data access and discovery that can benefit the geosciences. The USGS Community for Data Integration (CDI) has developed the Science Support Framework (SSF) for data management and integration within its community of practice for contribution to improved understanding of the Earth's physical and biological systems.
The USGS CDI SSF can be used as a reference model to map to Earth

  10. A core observational data model for enhancing the interoperability of ontologically annotated environmental data

    NASA Astrophysics Data System (ADS)

    Schildhauer, M.; Bermudez, L. E.; Bowers, S.; Dibner, P. C.; Gries, C.; Jones, M. B.; McGuinness, D. L.; Cao, H.; Cox, S. J.; Kelling, S.; Lagoze, C.; Lapp, H.; Madin, J.

    2010-12-01

    Research in the environmental sciences often requires accessing diverse data, collected by numerous data providers over varying spatiotemporal scales, incorporating specialized measurements from a range of instruments. These measurements are typically documented using idiosyncratic, disciplinary specific terms, and stored in management systems ranging from desktop spreadsheets to the Cloud, where the information is often further decomposed or stylized in unpredictable ways. This situation creates major informatics challenges for broadly discovering, interpreting, and merging the data necessary for integrative earth science research. A number of scientific disciplines have recognized these issues, and been developing semantically enhanced data storage frameworks, typically based on ontologies, to enable communities to better circumscribe and clarify the content of data objects within their domain of practice. There is concern, however, that cross-domain compatibility of these semantic solutions could become problematic. We describe here our efforts to address this issue by developing a core, unified Observational Data Model, that should greatly facilitate interoperability among the semantic solutions growing organically within diverse scientific domains. Observational Data Models have emerged independently from several distinct scientific communities, including the biodiversity sciences, ecology, evolution, geospatial sciences, and hydrology, to name a few. Informatics projects striving for data integration within each of these domains had converged on identifying "observations" and "measurements" as fundamental abstractions that provide useful "templates" through which scientific data can be linked— at the structural, composited, or even cell value levels— to domain terms stored in ontologies or other forms of controlled vocabularies. The Scientific Observations Network, SONet (http://sonet.ecoinformatics.org) brings together a number of these observational

  11. GEO Standard and Interoperability Forum (SIF) European Team

    NASA Astrophysics Data System (ADS)

    Nativi, Stefano

    2010-05-01

    The European GEO SIF has been initiated by the GIGAS project in an effort to better coordinate European requirements for GEO and GEOSS related activities, and is recognised by GEO as a regional SIF. To help advance the interoperability goals of the Global Earth Observing System of Systems (GEOSS), the Group on Earth Observations (GEO) Architecture and Data Committee (ADC) has established a Standards and Interoperability Forum (SIF) to support GEO organizations offering components and services to GEOSS. The SIF will help GEOSS contributors understand how to work with the GEOSS interoperability guidelines and how to enter their "interoperability arrangements" (standards or other ad hoc arrangements for interoperability) into the GEOSS registries. This will greatly facilitate the utility of GEOSS and encourage significant increase in participation. To carry out its work most effectively, the SIF promotes to form Regional Teams. They will help to organize and optimize the support coming from the different parts of the World and reach out regional and multi-disciplinary Scientific Communities. This will allow to have true global representation in supporting GEOSS interoperability. A SIF European Team is foreseen. The main role of the SIF is facilitating interoperability and working with members and participating organizations as they offer data and information services to the users of GEOSS. In this framework, the purpose of having a European Regional Team is to increase efficiency in carrying out the work of the SIF. Experts can join the SIF European Team by registering at the SIF European Team wiki site: http://www.thegigasforum.eu/sif/

  12. Optimal Care Mother-Baby and Outcomes through Community-wide Data Sharing, Interoperability and Connectivity.

    PubMed

    Shaha, Steven H; Gilbert-Bradley, Diane

    2015-01-01

    The power of interoperable systems with data/information integration, central to achieving the goals of Telehealth, is illustrated through mutually beneficial sharing between Labor & Delivery (L&D) and Obstetrics (OBs) Clinics. Data shared between L&D and OB brought improved practice patterns and outcomes, and increased satisfaction at both. Staffing and skillsets were significantly improved by knowing complications arriving and anticipated volumes. OBs increased clinic efficiencies and improved patient-direct care time with improved clinical and cost outcomes. PMID:25980718

  13. From Data to Semantic Information

    NASA Astrophysics Data System (ADS)

    Floridi, Luciano

    2003-06-01

    There is no consensus yet on the definition of semantic information. This paper contributes to the current debate by criticising and revising the Standard Definition of semantic Information (SDI) as meaningful data, in favour of the Dretske-Grice approach: meaningful and well-formed data constitute semantic information only if they also qualify as contingently truthful. After a brief introduction, SDI is criticised for providing necessary but insufficient conditions for the definition of semantic information. SDI is incorrect because truth-values do not supervene on semantic information, and misinformation (that is, false semantic information) is not a type of semantic information, but pseudo-information, that is not semantic information at all. This is shown by arguing that none of the reasons for interpreting misinformation as a type of semantic information is convincing, whilst there are compelling reasons to treat it as pseudo-information. As a consequence, SDI is revised to include a necessary truth-condition. The last section summarises the main results of the paper and indicates the important implications of the revised definition for the analysis of the deflationary theories of truth, the standard definition of knowledge and the classic, quantitative theory of semantic information.

  14. Advances in Multi-disciplinary Interoperability

    NASA Astrophysics Data System (ADS)

    Pearlman, J.; Nativi, S.; Craglia, M.; Huerta, J.; Rubio-Iglesias, J. M.; Serrano, J. J.

    2012-04-01

    The challenge for addressing issues such as climate change, food security or ecosystem sustainability is that they require multi-disciplinary collaboration and the ability to integrate information across scientific domains. Multidisciplinary collaborations are difficult because each discipline has its own "language", protocols and formats for communicating within its community and handling data and information. EuroGEOSS demonstrates the added value to the scientific community and to society of making existing systems and applications interoperable and useful within the GEOSS and INSPIRE frameworks. In 2010, the project built an initial operating capacity of a multi-disciplinary Information System addressing three areas: drought, forestry and biodiversity. It is now furthering this development into an advanced operating capacity (http://www.eurogeoss.eu). The key to this capability is the creation of a broker that supports access to multiple resources through a common user interface and the automation of data search and access using state of the art information technology. EuroGEOSS hosted a conference on information systems and multi-disciplinary applications of science and technology. "EuroGEOSS: advancing the vision of GEOSS" provided a forum for developers, users and decision-makers working with advanced multi-disciplinary information systems to improve science and decisions for complex societal issues. In particular, the Conference addressed: Information systems for supporting multi-disciplinary research; Information systems and modeling for biodiversity, drought, forestry and related societal benefit areas; and Case studies of multi-disciplinary applications and outcomes. This paper will discuss the major finding of the conference and the directions for future development.

  15. An Open Source Tool to Test Interoperability

    NASA Astrophysics Data System (ADS)

    Bermudez, L. E.

    2012-12-01

    Scientists interact with information at various levels from gathering of the raw observed data to accessing portrayed processed quality control data. Geoinformatics tools help scientist on the acquisition, storage, processing, dissemination and presentation of geospatial information. Most of the interactions occur in a distributed environment between software components that take the role of either client or server. The communication between components includes protocols, encodings of messages and managing of errors. Testing of these communication components is important to guarantee proper implementation of standards. The communication between clients and servers can be adhoc or follow standards. By following standards interoperability between components increase while reducing the time of developing new software. The Open Geospatial Consortium (OGC), not only coordinates the development of standards but also, within the Compliance Testing Program (CITE), provides a testing infrastructure to test clients and servers. The OGC Web-based Test Engine Facility, based on TEAM Engine, allows developers to test Web services and clients for correct implementation of OGC standards. TEAM Engine is a JAVA open source facility, available at Sourceforge that can be run via command line, deployed in a web servlet container or integrated in developer's environment via MAVEN. The TEAM Engine uses the Compliance Test Language (CTL) and TestNG to test HTTP requests, SOAP services and XML instances against Schemas and Schematron based assertions of any type of web service, not only OGC services. For example, the OGC Web Feature Service (WFS) 1.0.0 test has more than 400 test assertions. Some of these assertions includes conformance of HTTP responses, conformance of GML-encoded data; proper values for elements and attributes in the XML; and, correct error responses. 
This presentation will provide an overview of TEAM Engine, introduction of how to test via the OGC Testing web site and

  16. A Conceptual Framework to Enhance the Interoperability of Observatories among Countries, Continents and the World

    NASA Astrophysics Data System (ADS)

    Loescher, H.; Fundamental Instrument Unit

    2013-05-01

    , GEO-BON, NutNet, etc.) and domestically, (e.g., NSF-CZO, USDA-LTAR, DOE-NGEE, Soil Carbon Network, etc.), there is a strong and mutual desire to assure interoperability of data. Developing interoperability is the degree by which each of the following is mapped between observatories (entities), defined by linking i) science requirements with science questions, ii) traceability of measurements to nationally and internationally accepted standards, iii) how data product are derived, i.e., algorithms, procedures, and methods, and iv) the bioinformatics which broadly include data formats, metadata, controlled vocabularies, and semantics. Here, we explore the rationale and focus areas for interoperability, the governance and work structures, example projects (NSF-NEON, EU-ICOS, and AU-TERN), and the emergent roles of scientists in these endeavors.

  17. Improving Volunteered Geographic Data Quality Using Semantic Similarity Measurements

    NASA Astrophysics Data System (ADS)

    Vandecasteele, A.; Devillers, R.

    2013-05-01

    Studies have analysed the quality of volunteered geographic information (VGI) datasets, assessing the positional accuracy of features and the completeness of specific attributes. While it has been shown that VGI can, in some context, reach a high positional accuracy, these works have also highlighted a large spatial heterogeneity in positional accuracy, completeness but also with regards to the semantics of the objects. Such high semantic heterogeneity of VGI datasets becomes a significant obstacle to a number of possible uses that could be made of the data. This paper proposes an approach for both improving the semantic quality and reducing the semantic heterogeneity of VGI dat asets. The improvement of the semantic quality is achieved by automatically suggesting attributes to contributors during the editing process. The reduction of semantic heterogeneity is achieved by automatically notifying contributors when two attributes are too similar or too dissimilar. The approach was implemented into a plugin for OpenStreetMap and different examples illustrate how this plugin can be used to improve the quality of VGI data.

  18. EXACT2: the semantics of biomedical protocols

    PubMed Central

    2014-01-01

    Background The reliability and reproducibility of experimental procedures is a cornerstone of scientific practice. There is a pressing technological need for the better representation of biomedical protocols to enable other agents (human or machine) to better reproduce results. A framework that ensures that all information required for the replication of experimental protocols is essential to achieve reproducibility. Methods We have developed the ontology EXACT2 (EXperimental ACTions) that is designed to capture the full semantics of biomedical protocols required for their reproducibility. To construct EXACT2 we manually inspected hundreds of published and commercial biomedical protocols from several areas of biomedicine. After establishing a clear pattern for extracting the required information we utilized text-mining tools to translate the protocols into a machine amenable format. We have verified the utility of EXACT2 through the successful processing of previously 'unseen' (not used for the construction of EXACT2) protocols. Results The paper reports on a fundamentally new version EXACT2 that supports the semantically-defined representation of biomedical protocols. The ability of EXACT2 to capture the semantics of biomedical procedures was verified through a text mining use case. In this EXACT2 is used as a reference model for text mining tools to identify terms pertinent to experimental actions, and their properties, in biomedical protocols expressed in natural language. An EXACT2-based framework for the translation of biomedical protocols to a machine amenable format is proposed. Conclusions The EXACT2 ontology is sufficient to record, in a machine processable form, the essential information about biomedical protocols. EXACT2 defines explicit semantics of experimental actions, and can be used by various computer applications. It can serve as a reference model for for the translation of biomedical protocols in natural language into a semantically

  19. Semantic Web integration of Cheminformatics resources with the SADI framework

    PubMed Central

    2011-01-01

    Background The diversity and the largely independent nature of chemical research efforts over the past half century are, most likely, the major contributors to the current poor state of chemical computational resource and database interoperability. While open software for chemical format interconversion and database entry cross-linking have partially addressed database interoperability, computational resource integration is hindered by the great diversity of software interfaces, languages, access methods, and platforms, among others. This has, in turn, translated into limited reproducibility of computational experiments and the need for application-specific computational workflow construction and semi-automated enactment by human experts, especially where emerging interdisciplinary fields, such as systems chemistry, are pursued. Fortunately, the advent of the Semantic Web, and the very recent introduction of RESTful Semantic Web Services (SWS) may present an opportunity to integrate all of the existing computational and database resources in chemistry into a machine-understandable, unified system that draws on the entirety of the Semantic Web. Results We have created a prototype framework of Semantic Automated Discovery and Integration (SADI) framework SWS that exposes the QSAR descriptor functionality of the Chemistry Development Kit. Since each of these services has formal ontology-defined input and output classes, and each service consumes and produces RDF graphs, clients can automatically reason about the services and available reference information necessary to complete a given overall computational task specified through a simple SPARQL query. We demonstrate this capability by carrying out QSAR analysis backed by a simple formal ontology to determine whether a given molecule is drug-like. Further, we discuss parameter-based control over the execution of SADI SWS. 
Finally, we demonstrate the value of computational resource envelopment as SADI services through

  20. Living With Semantic Dementia

    PubMed Central

    Sage, Karen; Wilkinson, Ray; Keady, John

    2014-01-01

    Semantic dementia is a variant of frontotemporal dementia and is a recently recognized diagnostic condition. There has been some research quantitatively examining care partner stress and burden in frontotemporal dementia. There are, however, few studies exploring the subjective experiences of family members caring for those with frontotemporal dementia. Increased knowledge of such experiences would allow service providers to tailor intervention, support, and information better. We used a case study design, with thematic narrative analysis applied to interview data, to describe the experiences of a wife and son caring for a husband/father with semantic dementia. Using this approach, we identified four themes: (a) living with routines, (b) policing and protecting, (c) making connections, and (d) being adaptive and flexible. Each of these themes were shared and extended, with the importance of routines in everyday life highlighted. The implications for policy, practice, and research are discussed. PMID:24532121

  1. Semantic interpretation of nominalizations

    SciTech Connect

    Hull, R.D.; Gomez, F.

    1996-12-31

    A computational approach to the semantic interpretation of nominalizations is described. Interpretation of normalizations involves three tasks: deciding whether the normalization is being used in a verbal or non-verbal sense; disambiguating the normalized verb when a verbal sense is used; and determining the fillers of the thematic roles of the verbal concept or predicate of the nominalization. A verbal sense can be recognized by the presence of modifiers that represent the arguments of the verbal concept. It is these same modifiers which provide the semantic clues to disambiguate the normalized verb. In the absence of explicit modifiers, heuristics are used to discriminate between verbal and non-verbal senses. A correspondence between verbs and their nominalizations is exploited so that only a small amount of additional knowledge is needed to handle the nominal form. These methods are tested in the domain of encyclopedic texts and the results are shown.

  2. Practical Semantic Astronomy

    NASA Astrophysics Data System (ADS)

    Graham, Matthew; Gray, N.; Burke, D.

    2010-01-01

    Many activities in the era of data-intensive astronomy are predicated upon some transference of domain knowledge and expertise from human to machine. The semantic infrastructure required to support this is no longer a pipe dream of computer science but a set of practical engineering challenges, more concerned with deployment and performance details than AI abstractions. The application of such ideas promises to help in such areas as contextual data access, exploiting distributed annotation and heterogeneous sources, and intelligent data dissemination and discovery. In this talk, we will review the status and use of semantic technologies in astronomy, particularly to address current problems in astroinformatics, with such projects as SKUA and AstroCollation.

  3. caBIG™ Compatibility Review System: Software to Support the Evaluation of Applications Using Defined Interoperability Criteria

    PubMed Central

    Freimuth, Robert R.; Schauer, Michael W.; Lodha, Preeti; Govindrao, Poornima; Nagarajan, Rakesh; Chute, Christopher G.

    2008-01-01

    The caBIG™ Compatibility Review System (CRS) is a web-based application to support compatibility reviews, which certify that software applications that pass the review meet a specific set of criteria that allow them to interoperate. The CRS contains workflows that support both semantic and syntactic reviews, which are performed by the caBIG Vocabularies and Common Data Elements (VCDE) and Architecture workspaces, respectively. The CRS increases the efficiency of compatibility reviews by reducing administrative overhead and it improves uniformity by ensuring that each review is conducted according to a standard process. The CRS provides metrics that allow the review team to evaluate the level of data element reuse in an application, a first step towards quantifying the extent of harmonization between applications. Finally, functionality is being added that will provide automated validation of checklist criteria, which will further simplify the review process. PMID:18999296

  4. Restructuring an EHR system and the Medical Markup Language (MML) standard to improve interoperability by archetype technology.

    PubMed

    Kobayashi, Shinji; Kume, Naoto; Yoshihara, Hiroyuki

    2015-01-01

    In 2001, we developed an EHR system for regional healthcare information inter-exchange and to provide individual patient data to patients. This system was adopted in three regions in Japan. We also developed a Medical Markup Language (MML) standard for inter- and intra-hospital communications. The system was built on a legacy platform, however, and had not been appropriately maintained or updated to meet clinical requirements. To improve future maintenance costs, we reconstructed the EHR system using archetype technology on the Ruby on Rails platform, and generated MML equivalent forms from archetypes. The system was deployed as a cloud-based system for preliminary use as a regional EHR. The system now has the capability to catch up with new requirements, maintaining semantic interoperability with archetype technology. It is also more flexible than the legacy EHR system. PMID:26262183

  5. Personal Health Records: Is Rapid Adoption Hindering Interoperability?

    PubMed Central

    Studeny, Jana; Coustasse, Alberto

    2014-01-01

    The establishment of the Meaningful Use criteria has created a critical need for robust interoperability of health records. A universal definition of a personal health record (PHR) has not been agreed upon. Standardized code sets have been built for specific entities, but integration between them has not been supported. The purpose of this research study was to explore the hindrance and promotion of interoperability standards in relationship to PHRs to describe interoperability progress in this area. The study was conducted following the basic principles of a systematic review, with 61 articles used in the study. Lagging interoperability has stemmed from slow adoption by patients, creation of disparate systems due to rapid development to meet requirements for the Meaningful Use stages, and rapid early development of PHRs prior to the mandate for integration among multiple systems. Findings of this study suggest that deadlines for implementation to capture Meaningful Use incentive payments are supporting the creation of PHR data silos, thereby hindering the goal of high-level interoperability. PMID:25214822

  6. Attention trees and semantic paths

    NASA Astrophysics Data System (ADS)

    Giusti, Christian; Pieroni, Goffredo G.; Pieroni, Laura

    2007-02-01

    In the last few decades several techniques for image content extraction, often based on segmentation, have been proposed. It has been suggested that under the assumption of very general image content, segmentation becomes unstable and classification becomes unreliable. According to recent psychological theories, certain image regions attract the attention of human observers more than others and, generally, the image main meaning appears concentrated in those regions. Initially, regions attracting our attention are perceived as a whole and hypotheses on their content are formulated; successively the components of those regions are carefully analyzed and a more precise interpretation is reached. It is interesting to observe that an image decomposition process performed according to these psychological visual attention theories might present advantages with respect to a traditional segmentation approach. In this paper we propose an automatic procedure generating image decomposition based on the detection of visual attention regions. A new clustering algorithm taking advantage of the Delaunay- Voronoi diagrams for achieving the decomposition target is proposed. By applying that algorithm recursively, starting from the whole image, a transformation of the image into a tree of related meaningful regions is obtained (Attention Tree). Successively, a semantic interpretation of the leaf nodes is carried out by using a structure of Neural Networks (Neural Tree) assisted by a knowledge base (Ontology Net). Starting from leaf nodes, paths toward the root node across the Attention Tree are attempted. The task of the path consists in relating the semantics of each child-parent node pair and, consequently, in merging the corresponding image regions. The relationship detected in this way between two tree nodes generates, as a result, the extension of the interpreted image area through each step of the path. 
The construction of several Attention Trees has been performed and partial

  7. OMOGENIA: A Semantically Driven Collaborative Environment

    NASA Astrophysics Data System (ADS)

    Liapis, Aggelos

    Ontology creation can be thought of as a social procedure. Indeed the concepts involved in general need to be elicited from communities of domain experts and end-users by teams of knowledge engineers. Many problems in ontology creation appear to resemble certain problems in software design, particularly with respect to the setup of collaborative systems. For instance, the resolution of conceptual conflicts between formalized ontologies is a major engineering problem as ontologies move into widespread use on the semantic web. Such conflict resolution often requires human collaboration and cannot be achieved by automated methods with the exception of simple cases. In this chapter we discuss research in the field of computer-supported cooperative work (CSCW) that focuses on classification and which throws light on ontology building. Furthermore, we present a semantically driven collaborative environment called OMOGENIA as a natural way to display and examine the structure of an evolving ontology in a collaborative setting.

  8. Mashup of Tools through Interoperability Standards RSS, RDF, KML and XSL

    NASA Astrophysics Data System (ADS)

    Robinson, E. M.; Kieffer, M.; Kovacs, S.; Falke, S. R.; Husar, R. B.

    2007-12-01

    Considerable effort is being devoted to the development, and testing of interoperability standards for data access, such as the OGC Web Services WMS/WCS/WFS. The federated data system, DataFed, developed as a broad partnership utilizes these data access standards to deliver a rich array of quantitative air quality and geospatial data to any client application. Recent advances in standardization also allow the linking of distributed applications through the exchange of additional data types: (1)The Really Simple Syndication (RSS) standard facilitates the exchange of simple text records (e.g. annotated bookmarks); (2) The Resource Description Framework (RDF) standard allows the formal description of semantically rich data structures; (3) The Google Keyhole Markup Language (KML) is the standard way to encode/render geospatial data and metadata; (4) The EMBED standard facilitates the incorporation of web content from one server into the web page hosted on another server. Use of these standards now permits the easy creation of user-defined application "mashups" that transform/integrate various data/metadata streams. Wikis, originally used to collaboratively write documents, are now also used as a host for integrating mashups of this type. In this paper, we will illustrate these application mashups. We will show an example where the wiki host receives RSS feeds from Del.icio.us, blogs, etc. A semantically enhanced wiki is used to create and manage structured metadata, which then can be shared through the RDF standard feed. Content from Google Maps, videos from YouTube, PPT slides from SlideShare are also integrated into wiki pages through the EMBED standard. Mashups between the DataFed data access system, the wiki and GoogleEarth using KML, XSL and RDF will also be demonstrated.

  9. Metaworkflows and Workflow Interoperability for Heliophysics

    NASA Astrophysics Data System (ADS)

    Pierantoni, Gabriele; Carley, Eoin P.

    2014-06-01

    Heliophysics is a relatively new branch of physics that investigates the relationship between the Sun and the other bodies of the solar system. To investigate such relationships, heliophysicists can rely on various tools developed by the community. Some of these tools are on-line catalogues that list events (such as Coronal Mass Ejections, CMEs) and their characteristics as they were observed on the surface of the Sun or on the other bodies of the Solar System. Other tools offer on-line data analysis and access to images and data catalogues. During their research, heliophysicists often perform investigations that need to coordinate several of these services and to repeat these complex operations until the phenomena under investigation are fully analyzed. Heliophysicists combine the results of these services; this service orchestration is best suited for workflows. This approach has been investigated in the HELIO project. The HELIO project developed an infrastructure for a Virtual Observatory for Heliophysics and implemented service orchestration using TAVERNA workflows. HELIO developed a set of workflows that proved to be useful but lacked flexibility and re-usability. The TAVERNA workflows also needed to be executed directly in TAVERNA workbench, and this forced all users to learn how to use the workbench. Within the SCI-BUS and ER-FLOW projects, we have started an effort to re-think and re-design the heliophysics workflows with the aim of fostering re-usability and ease of use. We base our approach on two key concepts, that of meta-workflows and that of workflow interoperability. We have divided the produced workflows in three different layers. The first layer is Basic Workflows, developed both in the TAVERNA and WS-PGRADE languages. They are building blocks that users compose to address their scientific challenges. They implement well-defined Use Cases that usually involve only one service. The second layer is Science Workflows usually developed in TAVERNA. 
They

  10. Catalog Federation and Interoperability for Geoinformatics

    NASA Astrophysics Data System (ADS)

    Memon, A.; Lin, K.; Baru, C.

    2008-12-01

    With the increasing proliferation of online resources in the geosciences, including data, tools, and software services, there is also a proliferation of catalogs containing metadata that describe these resources. To realize the vision articulated in the NSF Workshop on Building a National Geoinformatics System, March 2007-where a user can sit at a terminal and easily search, discover, integrate and use distributed geoscience resources-it will be essential that a search request be able to traverse these multiple metadata catalogs. In this paper, we describe our effort at prototyping catalog interoperability across multiple metadata catalogs. An example of a metadata catalog is the one employed in the GEON Project (www.geongrid.org). The central GEON catalog can be searched using spatial, temporal, and other metadata-based search criteria. The search can be invoked as a Web service and, therefore, can be imbedded in any software application. There has been a requirement from some of the GEON collaborators (for example, at the University of Hyderabad, India and the Navajo Technical College, New Mexico) to deploy their own catalogs, to store information about their resources locally, while they publish some of this information for broader access and use. Thus, a search must now be able to span multiple, independent GEON catalogs. Next, some of our collaborators-e.g. GEO Grid (Global Earth Observations Grid) in Japan-are implementing the Catalog Services for the Web (CS-W) standard for their catalog, thereby requiring the search to span across catalogs implemented using the CS-W standard as well. Finally, we have recently deployed a search service to access all EarthScope data products, which are distributed across organizations in Seattle, WA (IRIS), Boulder, CO (UNAVCO), and Potsdam, Germany (ICDP/GFZ). This service essentially implements a virtual catalog (the actual catalogs and data are stored at the remote locations). 
So, there is the need to incorporate such 3rd
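
    The multi-catalog search this record describes reduces to a fan-out-and-merge pattern: the same query is sent to every registered catalog and the hits are combined behind one virtual-catalog interface. A minimal sketch, in which the catalog names, record fields, and the de-duplication key are all invented for illustration:

```python
# Hedged sketch of a "virtual catalog": fan one search out to several
# independent metadata catalogs and merge the hits. All names are hypothetical.

def _inside(lon, lat, bbox):
    west, south, east, north = bbox
    return west <= lon <= east and south <= lat <= north

def search_catalog(records, bbox=None, keyword=None):
    """Filter one catalog's records by bounding box and/or keyword."""
    hits = []
    for rec in records:
        if bbox and not _inside(rec["lon"], rec["lat"], bbox):
            continue
        if keyword and keyword.lower() not in rec["title"].lower():
            continue
        hits.append(rec)
    return hits

def virtual_catalog_search(catalogs, **criteria):
    """Send the same query to every catalog, merge, and de-duplicate on id."""
    seen, merged = set(), []
    for name, records in catalogs.items():
        for rec in search_catalog(records, **criteria):
            if rec["id"] not in seen:
                seen.add(rec["id"])
                merged.append({**rec, "source": name})
    return merged

catalogs = {
    "geon-central": [{"id": "g1", "title": "Seismic survey", "lon": -105.3, "lat": 40.0}],
    "geon-hyderabad": [{"id": "h1", "title": "Seismic refraction", "lon": 78.5, "lat": 17.4}],
}
hits = virtual_catalog_search(catalogs, keyword="seismic")
```

    A real deployment would replace the in-memory catalogs with Web service or CS-W endpoints, but the fan-out and merge logic stays the same.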

  11. Operational Interoperability Challenges on the Example of GEOSS and WIS

    NASA Astrophysics Data System (ADS)

    Heene, M.; Buesselberg, T.; Schroeder, D.; Brotzer, A.; Nativi, S.

    2015-12-01

    The following poster highlights the operational interoperability challenges using the example of the Global Earth Observation System of Systems (GEOSS) and the World Meteorological Organization Information System (WIS). At the heart of both systems is a catalogue of earth observation data, products and services, but with different metadata management concepts. While WIS has strong governance, with its own metadata profile for its hundreds of thousands of metadata records, GEOSS adopted a more open approach for its ten million records. Furthermore, the development of WIS - as an operational system - follows a roadmap with committed downwards compatibility, while the GEOSS development process is more agile. The poster discusses how interoperability can be achieved across these different metadata management concepts and how a proxy concept helps to couple two systems that follow different development methodologies. Furthermore, the poster highlights the importance of monitoring and backup concepts as a verification method for operational interoperability.

  12. OGC and Grid Interoperability in enviroGRIDS Project

    NASA Astrophysics Data System (ADS)

    Gorgan, Dorian; Rodila, Denisa; Bacu, Victor; Giuliani, Gregory; Ray, Nicolas

    2010-05-01

    EnviroGRIDS (Black Sea Catchment Observation and Assessment System supporting Sustainable Development) [1] is a 4-year FP7 project aiming to address the subjects of ecologically unsustainable development and inadequate resource management. The project develops a Spatial Data Infrastructure of the Black Sea Catchment region. Geospatial technologies offer highly specialized functionality for Earth Science oriented applications, while Grid oriented technology is able to support distributed and parallel processing. One challenge of the enviroGRIDS project is interoperability between geospatial and Grid infrastructures, providing both the basic and the extended features of the two technologies. Geospatial interoperability technology has been promoted as a way of dealing with large volumes of geospatial data in distributed environments through the development of interoperable Web service specifications proposed by the Open Geospatial Consortium (OGC), with applications spread across multiple fields, but especially in Earth observation research. Because of the huge volumes of data available in the geospatial domain and the additional issues they introduce (data management, secure data transfer, data distribution and data computation), an infrastructure capable of managing all these problems becomes an important requirement. The Grid promotes and facilitates secure interoperation of heterogeneous distributed geospatial data within a distributed environment, supports the creation and management of large distributed computational jobs, and assures a security level for communication and message transfer based on certificates. This presentation analyses and discusses the most significant use cases for enabling OGC Web service interoperability with the Grid environment, and focuses on the description and implementation of the most promising one. 
In these use cases we give a special attention to issues such as: the relations between computational grid and

  13. Data Access, Discovery and Interoperability in the European Context

    NASA Astrophysics Data System (ADS)

    Genova, Francoise

    2015-12-01

    European Virtual Observatory (VO) activities have been coordinated by a series of projects funded by the European Commission. Three pillars were identified: support to data providers for implementing their data in the VO framework; support to the astronomical community in using VO-enabled data and tools; and technological work for updating the VO framework of interoperability standards and tools. A new phase is beginning with the ASTERICS cluster project. The ASTERICS Work Package "Data Access, Discovery and Interoperability" aims at making the data from the ESFRI projects and their pathfinders available for discovery and usage, interoperable in the VO framework, and accessible with VO-enabled common tools. VO teams and representatives of ESFRI and pathfinder projects and of EGO/VIRGO are engaged together in the Work Package. ESO is associated with the project, which is also working closely with ESA. The three pillars identified for coordinating European VO activities are all tackled.

  14. Plugfest 2009: Global Interoperability in Telerobotics and Telemedicine

    PubMed Central

    King, H. Hawkeye; Hannaford, Blake; Kwok, Ka-Wai; Yang, Guang-Zhong; Griffiths, Paul; Okamura, Allison; Farkhatdinov, Ildar; Ryu, Jee-Hwan; Sankaranarayanan, Ganesh; Arikatla, Venkata; Tadano, Kotaro; Kawashima, Kenji; Peer, Angelika; Schauß, Thomas; Buss, Martin; Miller, Levi; Glozman, Daniel; Rosen, Jacob; Low, Thomas

    2014-01-01

    Despite the great diversity of teleoperator designs and applications, their underlying control systems have many similarities. These similarities can be exploited to enable interoperability between heterogeneous systems. We have developed a network data specification, the Interoperable Telerobotics Protocol, that can be used for Internet-based control of a wide range of teleoperators. In this work we test interoperable telerobotics on the global Internet, focusing on the telesurgery application domain. Fourteen globally dispersed telerobotic master and slave systems were connected in thirty trials over one twenty-four-hour period. Users performed common manipulation tasks to demonstrate effective master-slave operation. With twenty-eight (93%) successful, unique connections, the results show a high potential for standardizing telerobotic operation. Furthermore, new paradigms for telesurgical operation and training are presented, including a networked surgery trainer and upper-limb exoskeleton control of micro-manipulators. PMID:24748993

  15. RESTFul based heterogeneous Geoprocessing workflow interoperation for Sensor Web Service

    NASA Astrophysics Data System (ADS)

    Yang, Chao; Chen, Nengcheng; Di, Liping

    2012-10-01

    Advanced sensors on board satellites offer detailed Earth observations. A workflow is one approach for designing, implementing and constructing a flexible, live link between these sensors' resources and users. It can coordinate, organize and aggregate distributed sensor Web services to meet the requirements of a complex Earth observation scenario. A RESTful workflow interoperation method is proposed to integrate heterogeneous workflows into an interoperable unit. The Atom protocols are applied to describe and manage workflow resources. The XML Process Definition Language (XPDL) and Business Process Execution Language (BPEL) workflow standards are applied to structure a workflow that accesses sensor information and one that processes it, respectively. A scenario for nitrogen dioxide (NO2) from a volcanic eruption is then used to investigate the feasibility of the proposed method. The RESTful workflow interoperation system can describe, publish, discover, access and coordinate heterogeneous geoprocessing workflows.
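
    The Atom-based resource management this record describes amounts to exposing each workflow as an Atom entry that clients can list, fetch, and follow links from. A minimal sketch of building such an entry with the Python standard library; the workflow id, title, and link URL are hypothetical:

```python
import xml.etree.ElementTree as ET

ATOM = "http://www.w3.org/2005/Atom"  # Atom syndication namespace (RFC 4287)

def workflow_entry(wf_id, title, standard, href):
    """Build an Atom <entry> describing one workflow resource.
    The category term records which workflow standard (e.g. XPDL, BPEL) it uses."""
    ET.register_namespace("", ATOM)  # serialize Atom as the default namespace
    entry = ET.Element(f"{{{ATOM}}}entry")
    ET.SubElement(entry, f"{{{ATOM}}}id").text = wf_id
    ET.SubElement(entry, f"{{{ATOM}}}title").text = title
    ET.SubElement(entry, f"{{{ATOM}}}category", term=standard)
    ET.SubElement(entry, f"{{{ATOM}}}link", rel="alternate", href=href)
    return entry

entry = workflow_entry("urn:wf:no2-retrieval", "NO2 retrieval workflow",
                       "XPDL", "http://example.org/workflows/no2")
xml_text = ET.tostring(entry, encoding="unicode")
```

    A REST server would return such entries from GET requests on a workflow collection, which is what makes heterogeneous XPDL and BPEL workflows discoverable through one uniform interface.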

  16. Semantic Retrieval Based on Feature Element Constructional Model and Bias Competition Mechanism

    NASA Astrophysics Data System (ADS)

    Xu, Yin; Zhang, Yujin

    2003-01-01

    An original image retrieval framework is proposed and developed. To move toward semantic retrieval, a novel cognitive model, the feature element constructional model, is proposed. With its hierarchical constructional structure and bias competition mechanism, the new model provides great power for semantic retrieval. Two retrieval modes are presented in the new system, both of which try to analyze the semantic concept in the query image or semantic command. Matching from the object to the feature element is then carried out to obtain the final result, and our understanding of retrieval, "to provide the way of approaching the accurate result", is also embodied.

  17. The Semantic Automated Discovery and Integration (SADI) Web service Design-Pattern, API and Reference Implementation

    PubMed Central

    2011-01-01

    Background The complexity and inter-related nature of biological data poses a difficult challenge for data and tool integration. There has been a proliferation of interoperability standards and projects over the past decade, none of which has been widely adopted by the bioinformatics community. Recent attempts have focused on the use of semantics to assist integration, and Semantic Web technologies are being welcomed by this community. Description SADI - Semantic Automated Discovery and Integration - is a lightweight set of fully standards-compliant Semantic Web service design patterns that simplify the publication of services of the type commonly found in bioinformatics and other scientific domains. Using Semantic Web technologies at every level of the Web services "stack", SADI services consume and produce instances of OWL Classes following a small number of very straightforward best-practices. In addition, we provide codebases that support these best-practices, and plug-in tools to popular developer and client software that dramatically simplify deployment of services by providers, and the discovery and utilization of those services by their consumers. Conclusions SADI Services are fully compliant with, and utilize only foundational Web standards; are simple to create and maintain for service providers; and can be discovered and utilized in a very intuitive way by biologist end-users. In addition, the SADI design patterns significantly improve the ability of software to automatically discover appropriate services based on user-needs, and automatically chain these into complex analytical workflows. We show that, when resources are exposed through SADI, data compliant with a given ontological model can be automatically gathered, or generated, from these distributed, non-coordinating resources - a behaviour we have not observed in any other Semantic system. 
Finally, we show that, using SADI, data dynamically generated from Web services can be explored in a manner
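
    The core SADI pattern described above, a service that consumes an instance of its declared input OWL class and returns new triples attached to the same subject URI, can be sketched without any RDF library. The class and property names below are invented for illustration, not real SADI vocabularies:

```python
# Hedged sketch of the SADI design pattern: the service receives a graph whose
# root node is typed with its input OWL class, and its output attaches new
# properties to that *same* URI, so client software can merge results trivially.
# Graphs are plain sets of (subject, predicate, object) triples here.

def sadi_service(input_graph):
    """Toy 'gene annotation' service: for every node typed as ex:Gene,
    attach an ex:hasFunction property to that same node."""
    output = set()
    for s, p, o in input_graph:
        if p == "rdf:type" and o == "ex:Gene":
            output.add((s, "ex:hasFunction", "ex:DNA_repair"))  # invented annotation
    return output

request = {("ex:BRCA1", "rdf:type", "ex:Gene")}
response = sadi_service(request)
```

    Because input and output share the subject URI, a client can union the response with its own graph and immediately chain the result into the next service, which is what enables the automatic workflow assembly the abstract reports.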

  18. Non-semantic contributions to "semantic" redundancy gain.

    PubMed

    Shepherdson, Peter; Miller, Jeff

    2016-08-01

    Recently, two groups of researchers have reported redundancy gains (enhanced performance with multiple, redundant targets) in tasks requiring semantic categorization. Here we report two experiments aimed at determining whether the gains found by one of these groups resulted from some form of semantic coactivation. We asked undergraduate psychology students to complete choice RT tasks requiring the semantic categorization of visually presented words, and compared performance with redundant targets from the same semantic category to performance with redundant targets from different semantic categories. If the redundancy gains resulted from the combination of information at a semantic level, they should have been greater in the former than the latter situation. However, our results showed no significant differences in redundancy gain (for latency and accuracy) between same-category and different-category conditions, despite gains appearing in both conditions. Thus, we suggest that redundancy gain in the semantic categorization task may result entirely from statistical facilitation or combination of information at non-semantic levels. PMID:26339718
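
    Statistical facilitation, the non-semantic account the authors favor, follows from a simple race model: with two redundant targets, the response is triggered by whichever processing channel finishes first, so mean reaction time drops even though no semantic information is pooled. A small simulation (the distribution parameters are arbitrary, chosen only to illustrate the effect):

```python
import random
from statistics import mean

def simulate(n=20000, seed=1):
    """Compare mean RT for single-target vs. redundant-target trials
    under a pure race model with two independent channels."""
    rng = random.Random(seed)
    single, redundant = [], []
    for _ in range(n):
        a = rng.gauss(500, 60)       # channel A finishing time in ms (arbitrary)
        b = rng.gauss(500, 60)       # channel B, identically distributed
        single.append(a)             # single-target trial: only channel A runs
        redundant.append(min(a, b))  # redundant trial: fastest channel wins
    return mean(single), mean(redundant)

m_single, m_redundant = simulate()
gain = m_single - m_redundant  # positive even though nothing "semantic" was pooled
```

    The expected minimum of two independent draws is always below the expected value of either draw alone, so a redundancy gain by itself is no evidence of semantic coactivation, which is exactly the point of the study.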

  19. Interoperable Archetypes With a Three Folded Terminology Governance.

    PubMed

    Pederson, Rune; Ellingsen, Gunnar

    2015-01-01

    The use of openEHR archetypes increases the interoperability of clinical terminology, and in doing so improves the availability of clinical terminology for both primary and secondary purposes. Where clinical terminology is employed in the EPR system, research reports conflicting results for the use of structuring and standardization as measurements of success. In order to elucidate this concept, this paper focuses on the effort to establish a national repository for openEHR-based archetypes in Norway, in which clinical terminology could be included with a three-folded benefit for interoperability. PMID:26262236

  20. 76 FR 72922 - Federal Advisory Committee Act; Communications Security, Reliability, and Interoperability Council

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-11-28

    ... COMMISSION Federal Advisory Committee Act; Communications Security, Reliability, and Interoperability Council... Communications Commission's (FCC) third Communications Security, Reliability, and Interoperability Council (CSRIC... FCC regarding best practices and actions the FCC can take to ensure the security, reliability,...

  1. 75 FR 66752 - Smart Grid Interoperability Standards; Notice of Technical Conference

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-10-29

    ... Energy Regulatory Commission Smart Grid Interoperability Standards; Notice of Technical Conference... regulatory authorities that also are considering the adoption of Smart Grid Interoperability Standards.../FERC Collaborative on Smart Response (Collaborative), in the International D Ballroom at the Omni...

  2. 75 FR 417 - National Protection and Programs Directorate; Statewide Communication Interoperability Plan...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-01-05

    ... SECURITY National Protection and Programs Directorate; Statewide Communication Interoperability Plan...: Statewide Communication Interoperability Plan Implementation Report. Form: Not Applicable. OMB Number: 1670... Emergency Communications Grant Program (IECGP) (6 U.S.C. 579) comply with the Statewide...

  3. 75 FR 21011 - National Protection and Programs Directorate; Statewide Communication Interoperability Plan...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-04-22

    ... SECURITY National Protection and Programs Directorate; Statewide Communication Interoperability Plan... concerning New Information Collection Request, Statewide Communication Interoperability Plan Implementation... January 5, 2010, at 75 FR 417, for a 60-day public comment period. DHS received no comments. The...

  4. Clever generation of rich SPARQL queries from annotated relational schema: application to Semantic Web Service creation for biological databases

    PubMed Central

    2013-01-01

    Background In recent years, a large amount of “-omics” data have been produced. However, these data are stored in many different species-specific databases that are managed by different institutes and laboratories. Biologists often need to find and assemble data from disparate sources to perform certain analyses. Searching for these data and assembling them is a time-consuming task. The Semantic Web helps to facilitate interoperability across databases. A common approach involves the development of wrapper systems that map a relational database schema onto existing domain ontologies. However, few attempts have been made to automate the creation of such wrappers. Results We developed a framework, named BioSemantic, for the creation of Semantic Web Services that are applicable to relational biological databases. This framework makes use of both Semantic Web and Web Services technologies and can be divided into two main parts: (i) the generation and semi-automatic annotation of an RDF view; and (ii) the automatic generation of SPARQL queries and their integration into Semantic Web Services backbones. We have used our framework to integrate genomic data from different plant databases. Conclusions BioSemantic is a framework that was designed to speed integration of relational databases. We present how it can be used to speed the development of Semantic Web Services for existing relational biological databases. Currently, it creates and annotates RDF views that enable the automatic generation of SPARQL queries. Web Services are also created and deployed automatically, and the semantic annotations of our Web Services are added automatically using SAWSDL attributes. BioSemantic is downloadable at http://southgreen.cirad.fr/?q=content/Biosemantic. PMID:23586394
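
    The automatic SPARQL generation step can be pictured as assembling triple patterns from a table-to-ontology annotation map: each annotated column becomes one pattern on a shared subject variable. A toy sketch of that idea, not the actual BioSemantic algorithm; the class and property URIs are invented:

```python
def build_sparql(class_uri, column_map, filters=None):
    """Assemble a SELECT query from an annotation map of
    {variable name: ontology property URI}, with optional equality filters."""
    vars_ = " ".join(f"?{v}" for v in column_map)
    patterns = "\n  ".join(f"?s <{p}> ?{v} ." for v, p in column_map.items())
    filter_clause = ""
    if filters:
        conds = " && ".join(f'?{v} = "{val}"' for v, val in filters.items())
        filter_clause = f"\n  FILTER({conds})"
    return (f"SELECT {vars_} WHERE {{\n"
            f"  ?s a <{class_uri}> .\n  {patterns}{filter_clause}\n}}")

query = build_sparql(
    "http://example.org/onto#Gene",
    {"name": "http://example.org/onto#geneName",
     "chrom": "http://example.org/onto#chromosome"},
    filters={"chrom": "5"})
```

    Wrapping such generated queries behind Web Service endpoints, as the framework does, lets biologists query a relational database in ontology terms without writing SPARQL themselves.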

  5. Development of high performance scientific components for interoperability of computing packages

    SciTech Connect

    Gulabani, Teena Pratap

    2008-01-01

    Three major high performance quantum chemistry computational packages, NWChem, GAMESS and MPQC, have been developed by different research efforts following different design patterns. The goal is to achieve interoperability among these packages by overcoming the challenges caused by the different communication patterns and software designs of each of these packages. A chemistry algorithm is hard and time-consuming to develop; integration of large quantum chemistry packages will allow resource sharing and thus avoid reinventing the wheel. Creating connections between these incompatible packages is the major motivation of the proposed work. This interoperability is achieved by bringing the benefits of Component Based Software Engineering through a plug-and-play component framework called the Common Component Architecture (CCA). In this thesis, I present a strategy and process used for interfacing two widely used and important computational chemistry methodologies: Quantum Mechanics and Molecular Mechanics. To show the feasibility of the proposed approach, the Tuning and Analysis Utility (TAU) has been coupled with the NWChem code and its CCA components. Results show that the overhead is negligible when compared to the ease and potential of organizing and coping with large-scale software applications.
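
    CCA-style frameworks achieve plug-and-play coupling by wiring components together through named provides/uses ports instead of direct linkage: one component registers an implementation of a port, and another asks the framework for it. A toy illustration of that pattern, with invented component and port names and a placeholder in place of real chemistry:

```python
# Hedged sketch of the provides/uses port pattern behind CCA-like frameworks.
# Port names and the "energy" computation are invented for illustration.

class Framework:
    """Minimal stand-in for the framework that wires ports together."""
    def __init__(self):
        self.provided = {}

    def register_provides(self, port_name, implementation):
        self.provided[port_name] = implementation

    def connect(self, port_name):
        # A "uses" port is satisfied by handing back the registered provider.
        return self.provided[port_name]

class EnergyPort:
    """Provided by a quantum-mechanics component."""
    def energy(self, geometry):
        return -1.17 * len(geometry)  # placeholder value, not real chemistry

fw = Framework()
fw.register_provides("qm.energy", EnergyPort())
qm = fw.connect("qm.energy")  # e.g. a molecular-mechanics driver uses this port
e = qm.energy(["H", "H"])
```

    Because the driver only sees the port interface, the QM provider behind it could be NWChem, GAMESS or MPQC without the driver changing, which is the interoperability the thesis targets.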

  6. Postmarketing Safety Study Tool: A Web Based, Dynamic, and Interoperable System for Postmarketing Drug Surveillance Studies

    PubMed Central

    Sinaci, A. Anil; Laleci Erturkmen, Gokce B.; Gonul, Suat; Yuksel, Mustafa; Invernizzi, Paolo; Thakrar, Bharat; Pacaci, Anil; Cinar, H. Alper; Cicekli, Nihan Kesim

    2015-01-01

    Postmarketing drug surveillance is a crucial aspect of the clinical research activities in pharmacovigilance and pharmacoepidemiology. Successful utilization of available Electronic Health Record (EHR) data can complement and strengthen postmarketing safety studies. In terms of the secondary use of EHRs, access and analysis of patient data across different domains are a critical factor; we address this data interoperability problem between EHR systems and clinical research systems in this paper. We demonstrate that this problem can be solved in an upper level with the use of common data elements in a standardized fashion so that clinical researchers can work with different EHR systems independently of the underlying information model. Postmarketing Safety Study Tool lets the clinical researchers extract data from different EHR systems by designing data collection set schemas through common data elements. The tool interacts with a semantic metadata registry through IHE data element exchange profile. Postmarketing Safety Study Tool and its supporting components have been implemented and deployed on the central data warehouse of the Lombardy region, Italy, which contains anonymized records of about 16 million patients with over 10-year longitudinal data on average. Clinical researchers in Roche validate the tool with real life use cases. PMID:26543873

  7. Postmarketing Safety Study Tool: A Web Based, Dynamic, and Interoperable System for Postmarketing Drug Surveillance Studies.

    PubMed

    Sinaci, A Anil; Laleci Erturkmen, Gokce B; Gonul, Suat; Yuksel, Mustafa; Invernizzi, Paolo; Thakrar, Bharat; Pacaci, Anil; Cinar, H Alper; Cicekli, Nihan Kesim

    2015-01-01

    Postmarketing drug surveillance is a crucial aspect of the clinical research activities in pharmacovigilance and pharmacoepidemiology. Successful utilization of available Electronic Health Record (EHR) data can complement and strengthen postmarketing safety studies. In terms of the secondary use of EHRs, access and analysis of patient data across different domains are a critical factor; we address this data interoperability problem between EHR systems and clinical research systems in this paper. We demonstrate that this problem can be solved in an upper level with the use of common data elements in a standardized fashion so that clinical researchers can work with different EHR systems independently of the underlying information model. Postmarketing Safety Study Tool lets the clinical researchers extract data from different EHR systems by designing data collection set schemas through common data elements. The tool interacts with a semantic metadata registry through IHE data element exchange profile. Postmarketing Safety Study Tool and its supporting components have been implemented and deployed on the central data warehouse of the Lombardy region, Italy, which contains anonymized records of about 16 million patients with over 10-year longitudinal data on average. Clinical researchers in Roche validate the tool with real life use cases. PMID:26543873
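
    The common-data-element indirection described in this record can be sketched as a registry that maps each CDE to the local field name used by every EHR system, so one researcher-defined schema resolves against any of them. The registry contents and field names below are invented for illustration:

```python
# Hedged sketch: resolving a researcher's data-collection schema, expressed in
# common data elements (CDEs), against two EHR systems with different local
# field names. All identifiers are hypothetical, not a real metadata registry.

CDE_REGISTRY = {
    "CDE:SystolicBP": {"ehr_a": "sys_bp_mmhg", "ehr_b": "BP_SYS"},
    "CDE:BirthDate":  {"ehr_a": "dob",         "ehr_b": "BIRTH_DT"},
}

def extract(schema, ehr_system, record):
    """Pull CDE-named values out of one EHR record via the registry,
    so the caller never sees the system-specific field names."""
    return {cde: record[CDE_REGISTRY[cde][ehr_system]] for cde in schema}

schema = ["CDE:SystolicBP", "CDE:BirthDate"]
row_a = {"sys_bp_mmhg": 128, "dob": "1975-04-02"}
row_b = {"BP_SYS": 131, "BIRTH_DT": "1980-11-19"}
a = extract(schema, "ehr_a", row_a)
b = extract(schema, "ehr_b", row_b)
```

    The tool itself fetches such mappings from a semantic metadata registry over the IHE data element exchange profile rather than hard-coding them, but the resolution step is the same idea.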

  8. Semantic enrichment of medical forms - semi-automated coding of ODM-elements via web services.

    PubMed

    Breil, Bernhard; Watermann, Andreas; Haas, Peter; Dziuballe, Philipp; Dugas, Martin

    2012-01-01

    Semantic interoperability is an unsolved problem that arises when working with medical forms from different information systems or institutions. Standards like ODM or CDA assure structural homogenization, but in order to compare elements from different data models it is necessary to use semantic concepts and codes at the item level of those structures. We developed and implemented a web-based tool which enables a domain expert to perform semi-automated coding of ODM files. For each item it is possible to query web services which return unique concept codes without leaving the context of the document. Although fully automated coding was not feasible, we have implemented a dialog-based method to perform efficient coding of all data elements in the context of the whole document. The proportion of codable items was comparable to results from previous studies. PMID:22874367
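
    The dialog-based coding loop can be pictured as: query a terminology service for each form item, auto-accept unambiguous single matches, and queue ambiguous items for the expert. A toy sketch with a stand-in lookup; the terms and codes are invented, not real terminology content:

```python
def lookup_codes(item_text):
    """Stand-in for a terminology web service; the mini-dictionary
    and codes here are invented for the sketch."""
    index = {
        "body weight": ["C0005910"],
        "pressure": ["C0033095", "C0460139"],
    }
    return [c for term, codes in index.items()
            if term in item_text.lower() for c in codes]

def code_items(items):
    """Semi-automated coding: unambiguous items are coded automatically,
    ambiguous ones are queued for expert review in a dialog."""
    coded, needs_review = {}, []
    for item in items:
        candidates = lookup_codes(item)
        if len(candidates) == 1:
            coded[item] = candidates[0]
        else:
            needs_review.append((item, candidates))
    return coded, needs_review

coded, review = code_items(["Body weight (kg)", "Blood pressure"])
```

    Keeping the review queue inside the document context, as the tool does, is what makes the expert's disambiguation step efficient.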

  9. Semantic Analysis in Machine Translation.

    ERIC Educational Resources Information Center

    Skorokhodko, E. F.

    1970-01-01

    In many cases machine translation does not produce satisfactory results within the framework of purely formal (morphological and syntactic) analysis, particularly in cases of syntactic and lexical homonymy. An algorithm for syntactic-semantic analysis is proposed, and its principles of operation are described. The syntactico-semantic structure is…

  10. Semantic Feature Distinctiveness and Frequency

    ERIC Educational Resources Information Center

    Lamb, Katherine M.

    2012-01-01

    Lexical access is the process in which basic components of meaning in language, the lexical entries (words) are activated. This activation is based on the organization and representational structure of the lexical entries. Semantic features of words, which are the prominent semantic characteristics of a word concept, provide important information…

  11. Semantic Tools in Information Retrieval.

    ERIC Educational Resources Information Center

    Rubinoff, Morris; Stone, Don C.

    This report discusses the problem of the meanings of words used in information retrieval systems, and shows how semantic tools can aid in the communication which takes place between indexers and searchers via index terms. After treating the differing use of semantic tools in different types of systems, two tools (classification tables and…

  12. Semantic Processing of Mathematical Gestures

    ERIC Educational Resources Information Center

    Lim, Vanessa K.; Wilson, Anna J.; Hamm, Jeff P.; Phillips, Nicola; Iwabuchi, Sarina J.; Corballis, Michael C.; Arzarello, Ferdinando; Thomas, Michael O. J.

    2009-01-01

    Objective: To examine whether or not university mathematics students semantically process gestures depicting mathematical functions (mathematical gestures) similarly to the way they process action gestures and sentences. Semantic processing was indexed by the N400 effect. Results: The N400 effect elicited by words primed with mathematical gestures…

  13. The semantic planetary data system

    NASA Technical Reports Server (NTRS)

    Hughes, J. Steven; Crichton, Daniel; Kelly, Sean; Mattmann, Chris

    2005-01-01

    This paper will provide a brief overview of the PDS data model and the PDS catalog. It will then describe the implementation of the Semantic PDS, including the development of the formal ontology, the generation of RDFS/XML and RDF/XML data sets, and the building of the semantic search application.

  14. Semantic Sensor Observation Networks in a Billion-Sensor World

    NASA Astrophysics Data System (ADS)

    Bermudez, L. E.; Bogden, P.; Creager, G.; Graybeal, J.

    2008-12-01

    In 2010, there will be 10,000 telemetric devices for every human on the planet (a prediction by Ernst & Young). Some of these devices will be collecting data from coastal phenomena. Some will be connected to adaptive sampling systems, which allow observing a phenomenon, forecasting its advance, and triggering other numerical models, new missions, or changes to the sampling frequency of other sensors. These highly sophisticated autonomous and adaptive sensors will help improve the understanding of coastal phenomena; however, collaborative arrangements among communities need to happen to be able to interoperate in a world of billions of sensors. Arrangements will allow discovery and sharing of sensor descriptions and the understanding and usage of observed data. OOSTethys is an open source collaborative project that helps implement ocean observing system components. Some of these components include sensor interfaces, catalogs of services, and semantic mediators. The OOSTethys team seeks to speed up collaborative arrangements by studying the best standards available, creating easy-to-adopt toolkits, and publishing guides that facilitate the implementation of these components. The interaction of some observing system components, and lessons learned about developing Semantic Sensor Networks using OGC Sensor Observation Services and ontologies, will be discussed.

  15. 76 FR 51271 - Implementing a Nationwide, Broadband, Interoperable Public Safety Network in the 700 MHz Band

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-08-18

    ... COMMISSION 47 CFR Part 90 Implementing a Nationwide, Broadband, Interoperable Public Safety Network in the... interoperable public safety broadband network. The establishment of a common air interface for 700 MHz public safety broadband networks will create a foundation for interoperability and provide a clear path for...

  16. EarthCube - Earth System Bridge: Spanning Scientific Communities with Interoperable Modeling Frameworks

    NASA Astrophysics Data System (ADS)

    Peckham, S. D.; DeLuca, C.; Gochis, D. J.; Arrigo, J.; Kelbert, A.; Choi, E.; Dunlap, R.

    2014-12-01

    In order to better understand and predict environmental hazards of weather/climate, ecology and deep earth processes, geoscientists develop and use physics-based computational models. These models are used widely both in academic and federal communities. Because of the large effort required to develop and test models, there is widespread interest in component-based modeling, which promotes model reuse and simplified coupling to tackle problems that often cross discipline boundaries. In component-based modeling, the goal is to make relatively small changes to models that make it easy to reuse them as "plug-and-play" components. Sophisticated modeling frameworks exist to rapidly couple these components to create new composite models. They allow component models to exchange variables while accommodating different programming languages, computational grids, time-stepping schemes, variable names and units. Modeling frameworks have arisen in many modeling communities. CSDMS (Community Surface Dynamics Modeling System) serves the academic earth surface process dynamics community, while ESMF (Earth System Modeling Framework) serves many federal Earth system modeling projects. Others exist in both the academic and federal domains and each satisfies design criteria that are determined by the community they serve. While they may use different interface standards or semantic mediation strategies, they share fundamental similarities. The purpose of the Earth System Bridge project is to develop mechanisms for interoperability between modeling frameworks, such as the ability to share a model or service component. This project has three main goals: (1) Develop a Framework Description Language (ES-FDL) that allows modeling frameworks to be described in a standard way so that their differences and similarities can be assessed. (2) Demonstrate that if a model is augmented with a framework-agnostic Basic Model Interface (BMI), then simple, universal adapters can go from BMI to a
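
    A Basic Model Interface wrapper, as mentioned in this record, exposes a model through a small fixed set of calls (initialize, update, get_value, finalize) so that a simple framework adapter can drive any conforming model. A toy sketch following that convention; the method names follow the published BMI pattern, but the model itself is invented:

```python
# Hedged sketch of a BMI-style wrapper around a trivial, invented "model".
# Only the initialize/update/get_value/finalize call pattern is the point.

class ToyHeatModelBMI:
    def initialize(self, config=None):
        self.time = 0.0
        self.dt = (config or {}).get("dt", 1.0)
        self.temperature = [10.0, 10.0, 20.0]  # 1-D field, fixed boundary values

    def update(self):
        # One diffusion-like smoothing step, purely illustrative.
        t = self.temperature
        self.temperature = [t[0]] + [
            (t[i - 1] + t[i + 1]) / 2 for i in range(1, len(t) - 1)
        ] + [t[-1]]
        self.time += self.dt

    def get_current_time(self):
        return self.time

    def get_value(self, name):
        assert name == "plate_surface__temperature"  # invented standard name
        return self.temperature

    def finalize(self):
        pass  # release resources in a real model

m = ToyHeatModelBMI()
m.initialize({"dt": 0.5})
m.update()
```

    Because every framework adapter only needs these few calls, a single BMI implementation lets one model plug into multiple modeling frameworks, which is the interoperability mechanism the project proposes.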

  17. Conscious and unconscious detection of semantic anomalies.

    PubMed

    Hannon, Brenda

    2015-01-01

    When asked What superhero is associated with bats, Robin, the Penguin, Metropolis, Catwoman, the Riddler, the Joker, and Mr. Freeze? people frequently fail to notice the anomalous word Metropolis. The goals of this study were to determine whether detection of semantic anomalies, like Metropolis, is conscious or unconscious and whether this detection is immediate or delayed. To achieve these goals, participants answered anomalous and nonanomalous questions as their reading times for words were recorded. Comparisons between detected versus undetected anomalies revealed slower reading times for detected anomalies, a finding that suggests that people immediately and consciously detected anomalies. Further, comparisons between first and second words following undetected anomalies versus nonanomalous controls revealed some slower reading times for first and second words, a finding that suggests that people may have unconsciously detected anomalies but this detection was delayed. Taken together, these findings support the idea that when we are immediately aware of a semantic anomaly (i.e., immediate conscious detection) our language processes make immediate adjustments in order to reconcile contradictory information of anomalies with surrounding text; however, even when we are not consciously aware of semantic anomalies, our language processes still make these adjustments, although these adjustments are delayed (i.e., delayed unconscious detection). PMID:25624136

  18. Semantic Annotation for Biological Information Retrieval System

    PubMed Central

    Oshaiba, Mohamed Marouf Z.; El Houby, Enas M. F.; Salah, Akram

    2015-01-01

    Online literature is increasing at a tremendous rate, and the biological domain is one of the fastest growing. Biological researchers face the problem of finding what they are searching for effectively and efficiently. The aim of this research is to find documents that contain any combination of biological process and/or molecular function and/or cellular component. This research proposes a framework that helps researchers retrieve meaningful documents related to their asserted terms based on the Gene Ontology (GO). The system utilizes GO by semantically decomposing it into three subontologies (cellular component, biological process, and molecular function). The researcher has the flexibility to choose search terms from any combination of the three subontologies. Document annotation takes place in this research to create an index of biological terms in documents to speed the searching process. Query expansion is used to infer terms semantically related to the asserted terms; it increases meaningful search results using term synonyms and term relationships. The system uses a ranking method to order the retrieved documents based on ranking weights. The proposed system achieves researchers' needs to find documents that fit the asserted terms semantically. PMID:25737720
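
    The query-expansion and ranking steps described in this record can be sketched as: expand each asserted term with its synonyms, then score documents by how many expanded terms they contain. A toy sketch with an invented mini-ontology (not real GO content):

```python
# Hedged sketch of synonym-based query expansion and count-based ranking.
# The synonym table is invented for illustration.

SYNONYMS = {
    "apoptosis": {"apoptosis", "programmed cell death"},
    "kinase activity": {"kinase activity", "phosphotransferase activity"},
}

def expand(terms):
    """Replace each asserted term with the set of itself plus its synonyms."""
    expanded = set()
    for t in terms:
        expanded |= SYNONYMS.get(t, {t})
    return expanded

def rank(docs, terms):
    """Score each document by how many expanded terms it mentions,
    and return the matching documents in descending score order."""
    query = expand(terms)
    scored = [(sum(t in doc.lower() for t in query), doc) for doc in docs]
    return [d for score, d in sorted(scored, reverse=True) if score > 0]

docs = ["Programmed cell death in neurons", "Membrane transport proteins"]
hits = rank(docs, ["apoptosis"])
```

    The expansion is what lets a document that never uses the literal word "apoptosis" still match an apoptosis query, which is the semantic retrieval behavior the framework aims for.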

  19. Towards sustainability: An interoperability outline for a Regional ARC based infrastructure in the WLCG and EGEE infrastructures

    NASA Astrophysics Data System (ADS)

    Field, L.; Gronager, M.; Johansson, D.; Kleist, J.

    2010-04-01

    Interoperability of grid infrastructures is becoming increasingly important in the emergence of large scale grid infrastructures based on national and regional initiatives. To achieve interoperability of grid infrastructures, the adaptation and bridging of many different systems and services needs to be tackled. A grid infrastructure offers services for authentication, authorization, accounting, monitoring, and operation, in addition to the services for handling data and computations. This paper presents an outline of the work done to integrate the Nordic Tier-1 and Tier-2 sites, which for the compute part are based on the ARC middleware, into the WLCG grid infrastructure co-operated by the EGEE project. In particular, a thorough description of the integration of the compute services is presented.

  20. The role of markup for enabling interoperability in health informatics.

    PubMed

    McKeever, Steve; Johnson, David

    2015-01-01

    Interoperability is the faculty of making information systems work together. In this paper we will distinguish a number of different forms that interoperability can take and show how they are realized on a variety of physiological and health care use cases. The last 15 years has seen the rise of very cheap digital storage both on and off site. With the advent of the Internet of Things people's expectations are for greater interconnectivity and seamless interoperability. The potential impact these technologies have on healthcare are dramatic: from improved diagnoses through immediate access to a patient's electronic health record, to in silico modeling of organs and early stage drug trials, to predictive medicine based on top-down modeling of disease progression and treatment. We will begin by looking at the underlying technology, classify the various kinds of interoperability that exist in the field, and discuss how they are realized. We conclude with a discussion on future possibilities that big data and further standardizations will enable. PMID:26042043

  2. Interoperability Gap Challenges for Learning Object Repositories & Learning Management Systems

    ERIC Educational Resources Information Center

    Mason, Robert T.

    2011-01-01

    An interoperability gap exists between Learning Management Systems (LMSs) and Learning Object Repositories (LORs). Learning Objects (LOs) and the associated Learning Object Metadata (LOM) that is stored within LORs adhere to a variety of LOM standards. A common LOM standard found in LORs is the Sharable Content Object Reference Model (SCORM)…

  3. 47 CFR 0.192 - Emergency Response Interoperability Center.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... Public Safety and Homeland Security Bureau to develop, recommend, and administer policy goals, objectives... Organization Public Safety and Homeland Security Bureau § 0.192 Emergency Response Interoperability Center. (a... extent permitted by applicable law, the Chief of the Public Safety and Homeland Security Bureau...

  4. 47 CFR 0.192 - Emergency Response Interoperability Center.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... Public Safety and Homeland Security Bureau to develop, recommend, and administer policy goals, objectives... Organization Public Safety and Homeland Security Bureau § 0.192 Emergency Response Interoperability Center. (a... extent permitted by applicable law, the Chief of the Public Safety and Homeland Security Bureau...

  5. Putting the School Interoperability Framework to the Test

    ERIC Educational Resources Information Center

    Mercurius, Neil; Burton, Glenn; Hopkins, Bill; Larsen, Hans

    2004-01-01

    The Jurupa Unified School District in Southern California recently partnered with Microsoft, Dell and the Zone Integration Group for the implementation of a School Interoperability Framework (SIF) database repository model throughout the district (Magner 2002). A two-week project--the Integrated District Education Applications System, better known…

  6. 75 FR 28206 - Establishment of an Emergency Response Interoperability Center

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-05-20

    ... delegates authority to the Chief of the Public Safety and Homeland Security Bureau to establish advisory bodies and select appropriate representatives from federal agencies, the public safety community, and... the 700 MHz public safety broadband wireless network will be fully operable and interoperable on...

  7. Interoperability, Scaling, and the Digital Libraries Research Agenda.

    ERIC Educational Resources Information Center

    Lynch, Clifford; Garcia-Molina, Hector

    1996-01-01

    Summarizes reports and activities at the Information Infrastructure Technology and Applications workshop on digital libraries (Reston, Virginia, August 22, 1995). Defines digital library roles and identifies areas of needed research, including: interoperability; protocols for digital objects; collection management; interface design; human-computer…

  8. The Next Generation of Interoperability Agents in Healthcare

    PubMed Central

    Cardoso, Luciana; Marins, Fernando; Portela, Filipe; Santos, Manuel; Abelha, António; Machado, José

    2014-01-01

    Interoperability in health information systems is increasingly a requirement rather than an option. Standards and technologies, such as multi-agent systems, have proven to be powerful tools in interoperability issues. In the last few years, the authors have worked on developing the Agency for Integration, Diffusion and Archive of Medical Information (AIDA), which is an intelligent, agent-based platform to ensure interoperability in healthcare units. It is increasingly important to ensure the high availability and reliability of systems. The functions provided by the systems that treat interoperability cannot fail. This paper shows the importance of monitoring and controlling intelligent agents as a tool to anticipate problems in health information systems. The interaction between humans and agents through an interface that allows the user to create new agents easily and to monitor their activities in real time is also an important feature, as health systems evolve by adopting more features and solving new problems. A module was installed in Centro Hospitalar do Porto, increasing the functionality and the overall usability of AIDA. PMID:24840351

  9. Documentation and Reporting of Nutrition - Interoperability, Standards, Practice and Procedures.

    PubMed

    Rotegård, Ann Kristin

    2016-01-01

    Interoperability, fragmentation, standardization and data integrity are key challenges in efforts to improve documentation, streamline reporting and ensure quality of care. This workshop aims at demonstrating and discussing health politics and solutions aimed at improving nutritional status in the elderly. PMID:27332331

  10. 47 CFR 90.547 - Narrowband Interoperability channel capability requirement.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... Frequencies in the 763-775 and 793-805 MHz Bands § 90.547 Narrowband Interoperability channel capability... channels in the 769-775 MHz and 799-805 MHz frequency bands must be capable of operating on all of...

  11. 47 CFR 90.548 - Interoperability Technical Standards.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... 1 CFR part 51. Copies of the standards listed in this section that are incorporated by reference may... the 763-775 and 793-805 MHz Bands § 90.548 Interoperability Technical Standards. (a) Transmitters operating on those narrowband channels in the 769-775 and 799-805 MHz band designated for...

  12. Global Interoperability of Broadband Networks (GIBN): Project Overview

    NASA Technical Reports Server (NTRS)

    DePaula, Ramon P.

    1998-01-01

    Various issues associated with the Global Interoperability of Broadband Networks (GIBN) are presented in viewgraph form. Specific topics include GIBN principles, objectives and goals, and background. GIBN/NASA status, the Transpacific High Definition Video experiment, GIBN experiment selection criteria, satellite industry involvement, and current experiments associated with GIBN are also discussed.

  13. Exploring Interoperability as a Multidimensional Challenge for Effective Emergency Response

    ERIC Educational Resources Information Center

    Santisteban, Hiram

    2010-01-01

    Purpose. The purpose of this research was to further an understanding of how the federal government is addressing the challenges of interoperability for emergency response or crisis management (FEMA, 2009) by informing the development of standards through the review of current congressional law, commissions, studies, executive orders, and…

  14. Toward an Open and Interoperable e-Learning Portal: OEPortal

    ERIC Educational Resources Information Center

    Hsu, Kevin Chihcheng; Yang, Fang-Chuan Ou

    2008-01-01

    With the rapid advance of stand-alone e-learning systems, we believe a sharable and interoperable portal platform capable of integrating various existing learning systems is critical for the future development of e-learning systems. We highlight two problems as the root causes for current ineffective sharing of learning resources: learning object…

  15. Nexus: An interoperability layer for parallel and distributed computer systems

    SciTech Connect

    Foster, I.; Kesselman, C.; Olson, R.; Tuecke, S.

    1994-05-01

    Nexus is a set of services that can be used to implement various task-parallel languages, data-parallel languages, and message-passing libraries. Nexus is designed to permit the efficient portable implementation of individual parallel programming systems and the interoperability of programs developed with different tools. Nexus supports lightweight threading and active message technology, allowing integration of message passing and threads.

  16. Interoperability Is the Foundation for Successful Internet Telephony.

    ERIC Educational Resources Information Center

    Fromm, Larry

    1997-01-01

    More than 40 leading computer and telephony companies have united to lead the charge toward open standards and universal interoperability for Internet telephony products. The Voice over IP (VoIP) Forum is working to define technical guidelines for two-party, real-time communications over IP networks, including provisions for compatibility with…

  17. Sensor Web Interoperability Testbed Results Incorporating Earth Observation Satellites

    NASA Technical Reports Server (NTRS)

    Frye, Stuart; Mandl, Daniel J.; Alameh, Nadine; Bambacus, Myra; Cappelaere, Pat; Falke, Stefan; Derezinski, Linda; Zhao, Piesheng

    2007-01-01

    This paper describes an Earth Observation Sensor Web scenario based on the Open Geospatial Consortium's Sensor Web Enablement and Web Services interoperability standards. The scenario demonstrates the application of standards in describing, discovering, accessing and tasking satellites and ground-based sensor installations in a sequence of analysis activities that deliver information required by decision makers in response to national, regional or local emergencies.

  18. Disorders of semantic memory.

    PubMed

    McCarthy, R A; Warrington, E K

    1994-10-29

    It is now established that selective disorders of semantic memory may arise after focal cerebral lesions. Debate and dissension remain on three principal issues: category specificity, the status of modality-dependent knowledge, and the stability and sufficiency of stored information. Theories of category specificity have focused on the frequently reported dissociation between living things and man-made objects. However, other dimensions need theoretical integration. Impairments can be both finer-grain and broader in range. A second variable of importance is stimulus modality. Reciprocal interactive dissociations between vision and language and between animals and objects will be described. These indicate that the derivation of semantic information is constrained by input modality: we appear to have evolved separable databases for the visual and the verbal world. Thirdly, an orthogonal distinction has been drawn between degradation disorders, where representations are insufficient for comprehension, and access deficits, in which representations have become unstable. These issues may have their parallel in the acquisition of knowledge by the developing child. PMID:7886158

  19. Latent semantic analysis.

    PubMed

    Evangelopoulos, Nicholas E

    2013-11-01

    This article reviews latent semantic analysis (LSA), a theory of meaning as well as a method for extracting that meaning from passages of text, based on statistical computations over a collection of documents. LSA as a theory of meaning defines a latent semantic space where documents and individual words are represented as vectors. LSA as a computational technique uses linear algebra to extract dimensions that represent that space. This representation enables the computation of similarity among terms and documents, categorization of terms and documents, and summarization of large collections of documents using automated procedures that mimic the way humans perform similar cognitive tasks. We present some technical details, various illustrative examples, and discuss a number of applications from linguistics, psychology, cognitive science, education, information science, and analysis of textual data in general. WIREs Cogn Sci 2013, 4:683-692. doi: 10.1002/wcs.1254 CONFLICT OF INTEREST: The author has declared no conflicts of interest for this article. For further resources related to this article, please visit the WIREs website. PMID:26304272
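The linear-algebra core of LSA mentioned above can be demonstrated on a toy term-document matrix (the terms and documents below are invented for illustration): truncated SVD projects documents into a low-dimensional latent space where similarity reflects shared meaning rather than shared words.

```python
import numpy as np

# Rows are terms, columns are documents d1..d4 (toy data).
terms = ["ship", "boat", "ocean", "wood", "tree"]
A = np.array([
    [1, 0, 1, 0],   # ship:  d1, d3
    [0, 1, 1, 0],   # boat:  d2, d3
    [1, 1, 0, 0],   # ocean: d1, d2
    [0, 0, 0, 1],   # wood:  d4
    [0, 0, 0, 1],   # tree:  d4
])

# Full SVD, then keep only the k largest latent dimensions.
U, s, Vt = np.linalg.svd(A, full_matrices=False)
k = 2
docs = (np.diag(s[:k]) @ Vt[:k]).T   # documents as vectors in latent space

def cosine(u, v):
    """Cosine similarity between two latent-space vectors."""
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))
```

In this toy space the nautical documents d1 and d2 end up nearly identical even though they share only the single word "ocean", while d4 (wood, tree) is orthogonal to them, which is the behaviour the abstract attributes to LSA.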

  20. "Pre-Semantic" Cognition Revisited: Critical Differences between Semantic Aphasia and Semantic Dementia

    ERIC Educational Resources Information Center

    Jefferies, Elizabeth; Rogers, Timothy T.; Hopper, Samantha; Lambon Ralph, Matthew A.

    2010-01-01

    Patients with semantic dementia show a specific pattern of impairment on both verbal and non-verbal "pre-semantic" tasks, e.g., reading aloud, past tense generation, spelling to dictation, lexical decision, object decision, colour decision and delayed picture copying. All seven tasks are characterised by poorer performance for items that are…

  1. Semantic Mediation via Access Broker: the OWS-9 experiment

    NASA Astrophysics Data System (ADS)

    Santoro, Mattia; Papeschi, Fabrizio; Craglia, Massimo; Nativi, Stefano

    2013-04-01

    Even with the use of common data model standards to publish and share geospatial data, users may still face semantic inconsistencies when they use Spatial Data Infrastructures - especially in multidisciplinary contexts. Several semantic mediation solutions exist to address this issue; they span from simple XSLT documents that transform one data model schema into another, to more complex services based on the use of ontologies. This work presents the activity done in the context of the OGC Web Services Phase 9 (OWS-9) Cross Community Interoperability thread to develop a semantic mediation solution by enhancing the GEOSS Discovery and Access Broker (DAB). This is a middleware component that provides harmonized access to geospatial datasets according to client applications' preferred service interfaces (Nativi et al. 2012, Vaccari et al. 2012). Given a set of remote feature data encoded in different feature schemas, the objective of the activity was to use the DAB to enable client applications to transparently access the feature data according to one single schema. Due to the flexible architecture of the Access Broker, it was possible to introduce a new transformation type into the configured chain of transformations. In fact, the Access Broker already provided the following transformations: coordinate reference system (CRS), spatial resolution, spatial extent (e.g., a subset of a data set), and data encoding format. A new software module was developed to invoke the needed external semantic mediation service and harmonize the accessed features. In OWS-9 the Access Broker invokes a SPARQL WPS to retrieve mapping rules for the OWS-9 schemas: the USGS and NGA schemas. The solution implemented to address this problem shows the flexibility and extensibility of the brokering framework underpinning the GEO DAB: new services can be added to augment the number of supported schemas without the need to modify other components and/or software modules. Moreover, all other transformations (CRS
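The mediation step described in this record can be pictured as renaming feature elements from one schema's tags to another's using a table of mapping rules. In OWS-9 the rules were fetched from a SPARQL WPS; in this minimal sketch both the rules and the tag names are invented for illustration.

```python
import xml.etree.ElementTree as ET

# Hypothetical mapping rules: source-schema tag -> target-schema tag.
MAPPING_RULES = {"roadName": "name", "surface": "surfaceType"}

def mediate(feature_xml):
    """Return the feature re-expressed with the target schema's tags."""
    root = ET.fromstring(feature_xml)
    for elem in root.iter():
        # Tags without a rule pass through unchanged.
        elem.tag = MAPPING_RULES.get(elem.tag, elem.tag)
    return ET.tostring(root, encoding="unicode")

src = "<road><roadName>Route 1</roadName><surface>asphalt</surface></road>"
print(mediate(src))
```

A real broker chains this step with the CRS, resolution, extent, and format transformations the abstract lists, but the pass-through behaviour above is what lets new schema pairs be added without touching the other components.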

  2. Requirements Development for Interoperability Simulation Capability for Law Enforcement

    SciTech Connect

    Holter, Gregory M.

    2004-05-19

    The National Counterdrug Center (NCC) was initially authorized by Congress in FY 1999 appropriations to create a simulation-based counterdrug interoperability training capability. As the lead organization for Research and Analysis to support the NCC, the Pacific Northwest National Laboratory (PNNL) was responsible for developing the requirements for this interoperability simulation capability. These requirements were structured to address the hardware and software components of the system, as well as the deployment and use of the system. The original set of requirements was developed through a process of conducting a user-based survey of requirements for the simulation capability, coupled with an analysis of similar development efforts. The user-based approach ensured that existing concerns with respect to interoperability within the law enforcement community would be addressed. Law enforcement agencies within the designated pilot area of Cochise County, Arizona, were surveyed using interviews and ride-alongs during actual operations. The results of this survey were then accumulated, organized, and validated with the agencies to ensure the accuracy of the results. These requirements were then supplemented by adapting operational requirements from existing systems to ensure system reliability and operability. The NCC adopted a development approach providing incremental capability through the fielding of a phased series of progressively more capable versions of the system. This allowed for feedback from system users to be incorporated into subsequent revisions of the system requirements, and also allowed the addition of new elements as needed to adapt the system to broader geographic and geopolitical areas, including areas along the southwest and northwest U.S. borders. This paper addresses the processes used to develop and refine requirements for the NCC interoperability simulation capability, as well as the response of the law enforcement community to the use of

  3. Ocean Data Interoperability Platform (ODIP): developing a common approach to marine data management

    NASA Astrophysics Data System (ADS)

    Glaves, H.; Schaap, D.

    2013-12-01

    Ecosystem level marine research necessitates that large amounts of interoperable data are readily available for use in a wide range of new and complex multidisciplinary applications. Significant amounts of marine data and information are available throughout the world due to the implementation of e-infrastructures at a regional level to manage and deliver this data to the end user. However, each of these initiatives has been developed to address specific regional requirements and independently of other regions. To establish a common framework for marine data management on a global scale that supports this ecosystem level approach to marine research, there is a need to develop interoperability across these existing data infrastructures. To address these issues, the ODIP project is creating a co-ordination platform between a number of these existing regional e-infrastructures, which include Rolling Deck to Repository (R2R) in the USA, SeaDataNet and Geo-Seas in Europe, IMOS in Australia and the international IODE initiative. To demonstrate this co-ordinated approach, several prototypes will be developed to test and evaluate potential interoperability solutions for solving the incompatibilities identified between the different regional data infrastructures. These prototypes will be used to underpin the development of a common approach to the management of marine data which can also be promoted to the wider marine research community with a view to expanding this framework to include other regional marine data infrastructures. To achieve these objectives, relevant domain experts will come together at a series of workshops where areas of commonality between the regional infrastructures will be identified which can then be used as the foundation for the development of the prototype solutions. As a result, six topics, identified and analysed during the first ODIP workshop, are currently being addressed by the ODIP project. These topics are: use of controlled

  4. Interoperability Barriers in NASA Earth Science Data Systems from the Perspective of a Science User (Invited)

    NASA Astrophysics Data System (ADS)

    Kuo, K.

    2010-12-01

    As a practitioner in the field of atmospheric remote sensing, the author, like many other similar science users, depends on and uses heavily NASA Earth Science remote sensing data. Thus the author is asked by the NASA Earth Science Data Information System Project (ESDIS) to assess the capabilities of the Earth Observing System Data and Information System (EOSDIS) in order to provide suggestions and recommendations for the evolution of EOSDIS in the path towards its 2015 Vision Tenets. As NASA's Earth science data system, EOSDIS provides data processing and data archiving and distribution services for EOS missions. The science operations of EOSDIS are the focus of this report, i.e. data archiving and distribution, which are performed within a distributed system of many interconnected nodes, namely the Science Investigator-led Processing Systems, or SIPS, and distributed data centers. Since its inception in the early 1990s, EOSDIS has represented a democratization of data, a break from the past when data dissemination was at the discretion of project scientists. Its “open data” policy is so highly valued and well received by its user communities that it has influenced other agencies, even those of other countries, to adopt the same open policy. In the last ~10 years EOSDIS has matured to serve very well users of any given science community in which the varieties of data being used change infrequently. The unpleasant effects of interoperability barriers are now more often felt by users who try to use new data outside their existing familiar set. This paper first defines interoperability and identifies the purposes for achieving interoperability. The sources of interoperability barriers, classified by the author into software, hardware, and human categories, are examined. For a subset of issues related to software, it presents diagnoses obtained from experience of the author and his survey of the EOSDIS data finding, ordering, retrieving, and extraction services

  5. Mapping the Structure of Semantic Memory

    ERIC Educational Resources Information Center

    Morais, Ana Sofia; Olsson, Henrik; Schooler, Lael J.

    2013-01-01

    Aggregating snippets from the semantic memories of many individuals may not yield a good map of an individual's semantic memory. The authors analyze the structure of semantic networks that they sampled from individuals through a new snowball sampling paradigm during approximately 6 weeks of 1-hr daily sessions. The semantic networks of individuals…

  6. Semantic enrichment for medical ontologies.

    PubMed

    Lee, Yugyung; Geller, James

    2006-04-01

    The Unified Medical Language System (UMLS) contains two separate but interconnected knowledge structures, the Semantic Network (upper level) and the Metathesaurus (lower level). In this paper, we have attempted to work out better how the use of such a two-level structure in the medical field has led to notable advances in terminologies and ontologies. However, most ontologies and terminologies do not have such a two-level structure. Therefore, we present a method, called semantic enrichment, which generates a two-level ontology from a given one-level terminology and an auxiliary two-level ontology. During semantic enrichment, concepts of the one-level terminology are assigned to semantic types, which are the building blocks of the upper level of the auxiliary two-level ontology. The result of this process is the desired new two-level ontology. We discuss semantic enrichment of two example terminologies and how we approach the implementation of semantic enrichment in the medical domain. This implementation performs a major part of the semantic enrichment process with the medical terminologies, with difficult cases left to a human expert. PMID:16185937
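The semantic-enrichment process described above reduces, at its core, to assigning each concept of a flat terminology to a semantic type drawn from the upper level of an auxiliary two-level ontology. The sketch below is a toy rendering of that idea; the concept names, types, and assignments are illustrative, not taken from the UMLS.

```python
# Upper level of the auxiliary two-level ontology: semantic types.
SEMANTIC_TYPES = {"Disease or Syndrome", "Pharmacologic Substance"}

# Concept-to-type assignments derived from the auxiliary ontology's
# lower level (hypothetical examples).
AUXILIARY_ASSIGNMENTS = {
    "diabetes mellitus": "Disease or Syndrome",
    "insulin": "Pharmacologic Substance",
}

def enrich(flat_terminology):
    """Build a two-level ontology: semantic type -> list of concepts.

    Concepts with no known type are collected separately; the paper
    notes such difficult cases are left to a human expert.
    """
    enriched = {t: [] for t in SEMANTIC_TYPES}
    unresolved = []
    for concept in flat_terminology:
        stype = AUXILIARY_ASSIGNMENTS.get(concept)
        if stype in SEMANTIC_TYPES:
            enriched[stype].append(concept)
        else:
            unresolved.append(concept)
    return enriched, unresolved

two_level, todo = enrich(["diabetes mellitus", "insulin", "gene xyz"])
```

The `two_level` result is the desired upper/lower structure, while `todo` holds the concepts that need expert review.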

  7. The 3rd DBCLS BioHackathon: improving life science data integration with Semantic Web technologies

    PubMed Central

    2013-01-01

    Background BioHackathon 2010 was the third in a series of meetings hosted by the Database Center for Life Sciences (DBCLS) in Tokyo, Japan. The overall goal of the BioHackathon series is to improve the quality and accessibility of life science research data on the Web by bringing together representatives from public databases, analytical tool providers, and cyber-infrastructure researchers to jointly tackle important challenges in the area of in silico biological research. Results The theme of BioHackathon 2010 was the 'Semantic Web', and all attendees gathered with the shared goal of producing Semantic Web data from their respective resources, and/or consuming or interacting with those data using their tools and interfaces. We discussed topics including guidelines for designing semantic data and interoperability of resources. We consequently developed tools and clients for analysis and visualization. Conclusion We provide a meeting report from BioHackathon 2010, in which we describe the discussions, decisions, and breakthroughs made as we moved towards compliance with Semantic Web technologies - from source provider, through middleware, to the end-consumer. PMID:23398680

  8. Semantic Modeling of Requirements: Leveraging Ontologies in Systems Engineering

    ERIC Educational Resources Information Center

    Mir, Masood Saleem

    2012-01-01

    The interdisciplinary nature of "Systems Engineering" (SE), having "stakeholders" from diverse domains with orthogonal facets, and need to consider all stages of "lifecycle" of system during conception, can benefit tremendously by employing "Knowledge Engineering" (KE) to achieve semantic agreement among all…

  9. Exploiting Recurring Structure in a Semantic Network

    NASA Technical Reports Server (NTRS)

    Wolfe, Shawn R.; Keller, Richard M.

    2004-01-01

    With the growing popularity of the Semantic Web, an increasing amount of information is becoming available in machine interpretable, semantically structured networks. Within these semantic networks are recurring structures that could be mined by existing or novel knowledge discovery methods. The mining of these semantic structures represents an interesting area that focuses on mining both for and from the Semantic Web, with surprising applicability to problems confronting the developers of Semantic Web applications. In this paper, we present representative examples of recurring structures and show how these structures could be used to increase the utility of a semantic repository deployed at NASA.

  10. Flexible procedural interoperability across security and coalition boundaries using rapidly reconfigurable boundary protection definitions

    NASA Astrophysics Data System (ADS)

    Peach, Nicholas

    2013-05-01

    Existing configuration of boundary protection devices, which validate the content and context of information crossing between security domains, uses a set of accreditor-agreed steps individually agreed for every situation. This has traditionally been a slow and exacting process of negotiation between integrators and accreditors. The Decentralized Operation Procedure (DOP) technique allows interoperability definitions of system interactions to be created as XML files and deployed across the battlefield environment. By extending the security information definitions within the DOP technique, it is intended to provide sufficient incorporated information to allow boundary protection devices to also immediately load and utilize a DOP XML file and then apply established standards of security. This allows boundary devices to be updated with the same dynamism as the deployment of new DOPs and DOP interoperability definitions to also exploit coalitional capabilities having crossed security boundaries. The proposal describes an open and published boundary definition to support the aims of the MOD 23-13 Generic Base Architecture Defense Standard when working with coalition partners. The research aims are; a) to identify each element within a DOP that requires security characteristics to be described; b) create a means to define security characteristics using XML; c) determine whether external validation of an approved DOP requires additional authentication; d) determine the actions that end users will have to perform on boundary protection devices in support of these aims. The paper will present the XML security extensions and the results of a practical implementation achieved through the modification of an existing accredited barrier device.
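A boundary protection device that loads a DOP XML definition, as described above, essentially reads the security characteristics attached to each interaction and decides what may cross the domain boundary. The miniature below is a hypothetical illustration: the element names, attribute names, and classification values are invented, not drawn from the actual DOP schema.

```python
import xml.etree.ElementTree as ET

# Classifications this (toy) boundary device is accredited to release.
APPROVED = {"UNCLASSIFIED", "RESTRICTED"}

# Invented fragment of a DOP-style interoperability definition.
DOP_XML = """
<dop>
  <message name="positionReport" classification="UNCLASSIFIED"/>
  <message name="orders" classification="SECRET"/>
</dop>
"""

def releasable_messages(dop_xml):
    """Return the names of messages the boundary device may pass."""
    root = ET.fromstring(dop_xml)
    return [m.get("name") for m in root.findall("message")
            if m.get("classification") in APPROVED]

print(releasable_messages(DOP_XML))  # only positionReport is releasable
```

The point of the paper's open XML definition is that loading a new file like this reconfigures the device immediately, without the slow per-situation negotiation between integrators and accreditors.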

  11. From autopoiesis to semantic closure.

    PubMed

    Stewart, J

    2000-01-01

    This article addresses the question of providing an adequate mathematical formulation for the concepts of autopoiesis and closure under efficient cause. What is required is metaphorically equivalent to reducing the act of writing to a set of mathematical equations, habitually effected by a human mathematician, within the ongoing function of the system itself. This, in turn, raises the question of the relationship between autopoiesis and semantics. The hypothesis suggested is that whereas semantics clearly requires autopoiesis, it may also be the case that autopoiesis itself can only be materially realized in a system that is characterized by a semantic dimension. PMID:10818567

  12. Workspaces in the Semantic Web

    NASA Technical Reports Server (NTRS)

    Wolfe, Shawn R.; Keller, RIchard M.

    2005-01-01

    Due to the recency and relatively limited adoption of Semantic Web technologies, practical issues related to technology scaling have received less attention than foundational issues. Nonetheless, these issues must be addressed if the Semantic Web is to realize its full potential. In particular, we concentrate on the lack of scoping methods that reduce the size of semantic information spaces so they are more efficient to work with and more relevant to an agent's needs. We provide some intuition to motivate the need for such reduced information spaces, called workspaces, give a formal definition, and suggest possible methods of deriving them.

  13. High Performance Descriptive Semantic Analysis of Semantic Graph Databases

    SciTech Connect

    Joslyn, Cliff A.; Adolf, Robert D.; al-Saffar, Sinan; Feo, John T.; Haglin, David J.; Mackey, Greg E.; Mizell, David W.

    2011-06-02

    As semantic graph database technology grows to address components ranging from extant large triple stores to SPARQL endpoints over SQL-structured relational databases, it will become increasingly important to be able to understand their inherent semantic structure, whether codified in explicit ontologies or not. Our group is researching novel methods for what we call descriptive semantic analysis of RDF triplestores, to serve purposes of analysis, interpretation, visualization, and optimization. But data size and computational complexity make it increasingly necessary to bring high performance computational resources to bear on this task. Our research group built a novel high performance hybrid system comprising computational capability for semantic graph database processing utilizing the large multi-threaded architecture of the Cray XMT platform, conventional servers, and large data stores. In this paper we describe that architecture and our methods, and present the results of our analyses of basic properties, connected components, namespace interaction, and typed paths for the Billion Triple Challenge 2010 dataset.
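    Two of the analyses named above, namespace interaction and connected components, can be sketched in a few lines over an in-memory list of triples. This is a toy illustration only: the prefix-based namespace handling and the example triples are hypothetical, and the paper's actual analyses run over full IRIs at billion-triple scale on the Cray XMT.

    ```python
    from collections import defaultdict

    # Toy triple store: (subject, predicate, object), with prefixes standing in for namespaces.
    triples = [
        ("ex:alice", "foaf:knows", "ex:bob"),
        ("ex:bob", "foaf:knows", "ex:carol"),
        ("ex:d1", "dc:creator", "ex:alice"),
        ("ex:x", "rdf:type", "ex:Thing"),
    ]

    def namespace(iri):
        """Prefix before the first ':' stands in for the namespace."""
        return iri.split(":", 1)[0]

    def namespace_interaction(triples):
        """Count how often each subject namespace links to each object namespace."""
        counts = defaultdict(int)
        for s, _, o in triples:
            counts[(namespace(s), namespace(o))] += 1
        return dict(counts)

    def connected_components(triples):
        """Undirected connected components over subjects and objects (union-find)."""
        parent = {}
        def find(x):
            parent.setdefault(x, x)
            while parent[x] != x:
                parent[x] = parent[parent[x]]  # path halving
                x = parent[x]
            return x
        for s, _, o in triples:
            parent[find(s)] = find(o)
        comps = defaultdict(set)
        for node in parent:
            comps[find(node)].add(node)
        return list(comps.values())
    ```

    On the toy data, the alice/bob/carol/d1 nodes form one component and x/Thing form another.
    
    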

  14. Does semantic redundancy gain result from multiple semantic priming?

    PubMed

    Schröter, Hannes; Bratzke, Daniel; Fiedler, Anja; Birngruber, Teresa

    2015-10-01

    Fiedler, Schröter, and Ulrich (2013) reported faster responses to a single written word when the semantic content of this word (e.g., "elephant") matched both targets (e.g., "animal", "gray") as compared to a single target (e.g., "animal", "brown"). This semantic redundancy gain was explained by statistical facilitation due to a race of independent memory retrieval processes. The present experiment addresses one alternative explanation, namely that semantic redundancy gain results from multiple pre-activation of words that match both targets. In different blocks of trials, participants performed a redundant-targets task and a lexical decision task. The targets of the redundant-targets task served as primes in the lexical decision task. Replicating the findings of Fiedler et al., a semantic redundancy gain was observed in the redundant-targets task. Crucially, however, there was no evidence of a multiple semantic priming effect in the lexical decision task. This result suggests that semantic redundancy gain cannot be explained by multiple pre-activation of words that match both targets. PMID:26342771
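    The statistical-facilitation account mentioned above predicts faster responses for redundant targets simply because the response is driven by the winner of a race between two independent retrieval processes. A minimal simulation shows the effect; the exponential finishing-time distribution is an arbitrary modeling choice, not the authors' model.

    ```python
    import random

    random.seed(0)

    def retrieval_time(mean):
        # Exponentially distributed finishing time as a stand-in for memory retrieval.
        return random.expovariate(1.0 / mean)

    def simulate(n=10000, mean=300.0):
        # Single target: one retrieval process determines the response time.
        single = [retrieval_time(mean) for _ in range(n)]
        # Redundant targets: two independent races; the faster one drives the response.
        redundant = [min(retrieval_time(mean), retrieval_time(mean)) for _ in range(n)]
        return sum(single) / n, sum(redundant) / n

    mean_single, mean_redundant = simulate()
    ```

    With independent exponential racers the minimum of two has half the mean, so the redundancy gain emerges without any cross-activation between the retrieval processes.
    
    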

  15. Constructing a semantic predication gold standard from the biomedical literature

    PubMed Central

    2011-01-01

    Background Semantic relations increasingly underpin biomedical text mining and knowledge discovery applications. The success of such practical applications crucially depends on the quality of extracted relations, which can be assessed against a gold standard reference. Most such references in biomedical text mining focus on narrow subdomains and adopt different semantic representations, rendering them difficult to use for benchmarking independently developed relation extraction systems. In this article, we present a multi-phase gold standard annotation study, in which we annotated 500 sentences randomly selected from MEDLINE abstracts on a wide range of biomedical topics with 1371 semantic predications. The UMLS Metathesaurus served as the main source for conceptual information and the UMLS Semantic Network for relational information. We measured interannotator agreement and analyzed the annotations closely to identify some of the challenges in annotating biomedical text with relations based on an ontology or a terminology. Results We obtain fair to moderate interannotator agreement in the practice phase (0.378-0.475). With improved guidelines and additional semantic equivalence criteria, the agreement increases by 12% (0.415 to 0.536) in the main annotation phase. In addition, we find that agreement increases to 0.688 when the agreement calculation is limited to those predications that are based only on the explicitly provided UMLS concepts and relations. Conclusions While interannotator agreement in the practice phase confirms that conceptual annotation is a challenging task, the increasing agreement in the main annotation phase points out that an acceptable level of agreement can be achieved in multiple iterations, by setting stricter guidelines and establishing semantic equivalence criteria. Mapping text to ontological concepts emerges as the main challenge in conceptual annotation. Annotating predications involving biomolecular entities and processes is
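    The abstract does not name its agreement statistic, but Cohen's kappa is a standard chance-corrected choice for two annotators and can be computed directly from two label sequences:

    ```python
    from collections import Counter

    def cohens_kappa(a, b):
        """Cohen's kappa for two annotators' label sequences of equal length."""
        assert len(a) == len(b)
        n = len(a)
        observed = sum(x == y for x, y in zip(a, b)) / n
        # Chance agreement from each annotator's marginal label frequencies.
        ca, cb = Counter(a), Counter(b)
        expected = sum(ca[k] * cb[k] for k in set(ca) | set(cb)) / (n * n)
        return (observed - expected) / (1 - expected)
    ```

    For example, two annotators agreeing on 4 of 5 binary labels with skewed marginals yield a kappa well below the raw 0.8 agreement, which is why chance-corrected values like the 0.378-0.688 range above are reported.
    
    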

  16. Community-Based Services that Facilitate Interoperability and Intercomparison of Precipitation Datasets from Multiple Sources

    NASA Technical Reports Server (NTRS)

    Liu, Zhong; Kempler, Steven; Teng, William; Leptoukh, Gregory; Ostrenga, Dana

    2010-01-01

    perform the complicated data access and match-up processes. In addition, PDISC tool and service capabilities being adapted for GPM data will be described, including the Google-like Mirador data search and access engine; semantic technology to help manage large amounts of multi-sensor data and their relationships; data access through various Web services (e.g., OPeNDAP, GDS, WMS, WCS); conversion to various formats (e.g., netCDF, HDF, KML (for Google Earth)); visualization and analysis of Level 2 data profiles and maps; parameter and spatial subsetting; time and temporal aggregation; regridding; data version control and provenance; continuous archive verification; and expertise in data-related standards and interoperability. The goal of providing these services is to further the progress towards a common framework by which data analysis/validation can be more easily accomplished.

  17. The 2nd DBCLS BioHackathon: interoperable bioinformatics Web services for integrated applications

    PubMed Central

    2011-01-01

    Background The interaction between biological researchers and the bioinformatics tools they use is still hampered by incomplete interoperability between such tools. To ensure interoperability initiatives are effectively deployed, end-user applications need to be aware of, and support, best practices and standards. Here, we report on an initiative in which software developers and genome biologists came together to explore and raise awareness of these issues: BioHackathon 2009. Results Developers in attendance came from diverse backgrounds, with experts in Web services, workflow tools, text mining and visualization. Genome biologists provided expertise and exemplar data from the domains of sequence and pathway analysis and glyco-informatics. One goal of the meeting was to evaluate the ability to address real world use cases in these domains using the tools that the developers represented. This resulted in i) a workflow to annotate 100,000 sequences from an invertebrate species; ii) an integrated system for analysis of the transcription factor binding sites (TFBSs) enriched based on differential gene expression data obtained from a microarray experiment; iii) a workflow to enumerate putative physical protein interactions among enzymes in a metabolic pathway using protein structure data; iv) a workflow to analyze glyco-gene-related diseases by searching for human homologs of glyco-genes in other species, such as fruit flies, and retrieving their phenotype-annotated SNPs. Conclusions Beyond deriving prototype solutions for each use-case, a second major purpose of the BioHackathon was to highlight areas of insufficiency. We discuss the issues raised by our exploration of the problem/solution space, concluding that there are still problems with the way Web services are modeled and annotated, including: i) the absence of several useful data or analysis functions in the Web service "space"; ii) the lack of documentation of methods; iii) lack of compliance with the SOAP

  18. Active maintenance of semantic representations.

    PubMed

    Nishiyama, Ryoji

    2014-12-01

    In research on verbal working memory, articulatory rehearsal, a maintenance mechanism for phonological representations, has been studied intensively. Possible maintenance mechanisms for semantic representations have received less attention. However, several studies have reported a double dissociation in types of memory deficits (semantic memory difficulties vs. phonological memory difficulties). This suggests the separability of two maintenance mechanisms. The present study focused on this separability in individuals with normal memory abilities, using a dual-task interference paradigm. The results indicate a crossover interaction between memory and interference task effects: Preventing articulatory rehearsal more strongly disrupted the phonological memory task, whereas performing a tapping task that interfered with attentional control more strongly disrupted semantic memory. These results suggest that semantic representations are actively maintained by a mechanism other than phonological maintenance. PMID:24687734

  19. Distributed semantic networks and CLIPS

    NASA Technical Reports Server (NTRS)

    Snyder, James; Rodriguez, Tony

    1991-01-01

    Semantic networks of frames are commonly used as a method of reasoning in many problems. In most of these applications the semantic network exists as a single entity in a single process environment. Advances in workstation hardware provide support for more sophisticated applications involving multiple processes, interacting in a distributed environment. In these applications the semantic network may well be distributed over several concurrently executing tasks. This paper describes the design and implementation of a frame based, distributed semantic network in which frames are accessed both through C Language Integrated Production System (CLIPS) expert systems and procedural C++ language programs. The application area is a knowledge based, cooperative decision making model utilizing both rule based and procedural experts.

  20. Problem Solving with General Semantics.

    ERIC Educational Resources Information Center

    Hewson, David

    1996-01-01

    Discusses how to use general semantics formulations to improve problem solving at home or at work--methods come from the areas of artificial intelligence/computer science, engineering, operations research, and psychology. (PA)

  1. Semantic wireless body area networks.

    PubMed

    Nimmala, Venkatarama S R; Penders, Julien; van Hyfte, Dirk; Brands, Michael; Gyselinckx, Bert

    2008-01-01

    In this paper we introduce the concept of semantic Wireless Body Area Network (sWBAN). First the method for semantic interpretation of body sensor data is developed. This method is then illustrated for the case of ECG monitoring, providing the user with real-time monitoring and interpretation of heart activity. Finally, possible extensions of the method to data fusion and context-aware monitoring are discussed. PMID:19163441

  2. NASA and The Semantic Web

    NASA Technical Reports Server (NTRS)

    Ashish, Naveen

    2005-01-01

    We provide an overview of several ongoing NASA endeavors based on concepts, systems, and technology from the Semantic Web arena. Indeed NASA has been one of the early adopters of Semantic Web Technology and we describe ongoing and completed R&D efforts for several applications ranging from collaborative systems to airspace information management to enterprise search to scientific information gathering and discovery systems at NASA.

  3. linkedISA: semantic representation of ISA-Tab experimental metadata

    PubMed Central

    2014-01-01

    Background Reporting and sharing experimental metadata, such as the experimental design, characteristics of the samples, and procedures applied, along with the analysis results, in a standardised manner ensures that datasets are comprehensible and, in principle, reproducible, comparable and reusable. Furthermore, sharing datasets in formats designed for consumption by humans and machines will also maximize their use. The Investigation/Study/Assay (ISA) open source metadata tracking framework facilitates standards-compliant collection, curation, visualization, storage and sharing of datasets, leveraging other platforms to enable analysis and publication. The ISA software suite includes several components used in an increasingly diverse set of life science and biomedical domains; it is underpinned by a general-purpose format, ISA-Tab, and conversions exist into formats required by public repositories. While ISA-Tab works well mainly as a human readable format, we have also implemented a linked data approach to semantically define the ISA-Tab syntax. Results We present a semantic web representation of the ISA-Tab syntax that complements ISA-Tab's syntactic interoperability with semantic interoperability. We introduce the linkedISA conversion tool from ISA-Tab to the Resource Description Framework (RDF), supporting mappings from the ISA syntax to multiple community-defined, open ontologies and capitalising on user-provided ontology annotations in the experimental metadata. We describe insights from the implementation and how annotations can be expanded driven by the metadata. We applied the conversion tool as part of Bio-GraphIIn, a web-based application supporting integration of the semantically-rich experimental descriptions. 
Designed in a user-friendly manner, the Bio-GraphIIn interface hides most of the complexities to the users, exposing a familiar tabular view of the experimental description to allow seamless interaction with the RDF representation, and visualising

  4. Temporal video segmentation using unsupervised clustering and semantic object tracking

    NASA Astrophysics Data System (ADS)

    Guensel, Bilge; Ferman, Ahmet M.; Tekalp, A. Murat

    1998-07-01

    This paper proposes a content-based temporal video segmentation system that integrates syntactic (domain-independent) and semantic (domain-dependent) features for automatic management of video data. Temporal video segmentation includes scene change detection and shot classification. The proposed scene change detection method consists of two steps: detection and tracking of semantic objects of interest specified by the user, and an unsupervised method for detection of cuts and edit effects. Object detection and tracking is achieved using a region matching scheme, where the region of interest is defined by the boundary of the object. A new unsupervised scene change detection method based on two-class clustering is introduced to eliminate the data dependency of threshold selection. The proposed shot classification approach relies on semantic image features and exploits domain-dependent visual properties such as shape, color, and spatial configuration of tracked semantic objects. The system has been applied to segmentation and classification of TV programs collected from different channels. Although the paper focuses on news programs, the method can easily be applied to other TV programs with distinct semantic structure.
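    The idea of replacing a hand-picked cut threshold with two-class clustering can be sketched as a 1-D k-means over frame-difference values: one cluster gathers ordinary inter-frame differences, the other gathers the large differences at cuts, and the decision boundary falls out of the data. Function names and the toy difference values are hypothetical, not the paper's implementation.

    ```python
    def two_class_kmeans(values, iters=20):
        """1-D k-means with two clusters; returns the two centroids (low, high)."""
        lo, hi = min(values), max(values)
        for _ in range(iters):
            a = [v for v in values if abs(v - lo) <= abs(v - hi)]
            b = [v for v in values if abs(v - lo) > abs(v - hi)]
            if a:
                lo = sum(a) / len(a)
            if b:
                hi = sum(b) / len(b)
        return lo, hi

    def detect_cuts(frame_diffs):
        """Frames whose difference falls in the high cluster are declared cuts."""
        lo, hi = two_class_kmeans(frame_diffs)
        threshold = (lo + hi) / 2  # data-dependent, not hand-picked
        return [i for i, d in enumerate(frame_diffs) if d > threshold]
    ```

    Given differences like `[1, 2, 1, 2, 50, 1, 2, 1, 60, 2]`, the two spikes are separated from the baseline without any preset threshold.
    
    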

  5. COTARD SYNDROME IN SEMANTIC DEMENTIA

    PubMed Central

    Mendez, Mario F.; Ramírez-Bermúdez, Jesús

    2011-01-01

    Background Semantic dementia is a neurodegenerative disorder characterized by the loss of meaning of words or concepts. Semantic dementia can offer potential insights into the mechanisms of content-specific delusions. Objective The authors present a rare case of semantic dementia with Cotard syndrome, a delusion characterized by nihilism or self-negation. Method The semantic deficits and other features of semantic dementia were evaluated in relation to the patient's Cotard syndrome. Results Mrs. A developed the delusional belief that she was wasting and dying. This occurred after she lost knowledge of her somatic discomforts and sensations and of the organs that were the source of these sensations. Her nihilistic beliefs appeared to emerge from her misunderstanding of her somatic sensations. Conclusion This unique patient suggests that a mechanism for Cotard syndrome is difficulty interpreting the nature and source of internal pains and sensations. We propose that loss of semantic knowledge about one's own body may lead to the delusion of nihilism or death. PMID:22054629

  6. Semantic preview benefit during reading.

    PubMed

    Hohenstein, Sven; Kliegl, Reinhold

    2014-01-01

    Word features in parafoveal vision influence eye movements during reading. The question of whether readers extract semantic information from parafoveal words was studied in 3 experiments by using a gaze-contingent display change technique. Subjects read German sentences containing 1 of several preview words that were replaced by a target word during the saccade to the preview (boundary paradigm). In the 1st experiment the preview word was semantically related or unrelated to the target. Fixation durations on the target were shorter for semantically related than unrelated previews, consistent with a semantic preview benefit. In the 2nd experiment, half the sentences were presented following the rules of German spelling (i.e., previews and targets were printed with an initial capital letter), and the other half were presented completely in lowercase. A semantic preview benefit was obtained under both conditions. In the 3rd experiment, we introduced 2 further preview conditions, an identical word and a pronounceable nonword, while also manipulating the text contrast. Whereas the contrast had negligible effects, fixation durations on the target were reliably different for all 4 types of preview. Semantic preview benefits were greater for pretarget fixations closer to the boundary (large preview space) and, although not as consistently, for long pretarget fixation durations (long preview time). The results constrain theoretical proposals about eye movement control in reading. (PsycINFO Database Record (c) 2013 APA, all rights reserved). PMID:23895448

  7. Interoperability between Publications, Reference Data and Visualisation Tools

    NASA Astrophysics Data System (ADS)

    Allen, Mark G.; Ocvirk, Pierre; Genova, Francoise

    2015-08-01

    Astronomy research is becoming more and more inter-connected, and there is a high expectation for our publications, reference data and tools to be interoperable. Publications are the hard earned final results of scientific endeavour, and technology allows us to enable publications as useable resources, going beyond their traditional role as a readable document. There is strong demand for simple access to the data associated with publications, and that links and references in publications are strongly connected to online resources, and are useable in visualisation tools. We highlight the capabilities of the CDS reference services for interoperability between the reference data obtained from publications, the connections between Journal and literature services, and combination of these data and information in Aladin and other CDS services. (In support of the abstract submitted by P. Ocvirk)

  8. Reconfigurable point-of-care systems designed with interoperability standards.

    PubMed

    Warren, Steve; Yao, Jianchu; Schmitz, Ryan; Lebak, Jeff

    2004-01-01

    Interoperability standards, if properly applied to medical system design, have the potential to decrease the cost of point-of-care monitoring systems while better matching systems to patient needs. This paper presents a brief editorial overview of future monitoring environments, followed by a short listing of smart-home and wearable-device efforts. This is followed by a summary of recent efforts in the Medical Component Design Laboratory at Kansas State University to address interoperability issues in point-of-care systems by incorporating the Bluetooth Host Controller Interface, the IEEE 1073 Medical Information Bus, and Health Level 7 (HL7) into a monitoring system that hosts wearable or nearby wireless devices. This wireless demonstration system includes a wearable electrocardiogram, wearable pulse oximeter, wearable data logger, weight scale, and LabVIEW base station. Data are exchanged between local and remote MySQL databases using the HL7 standard for medical information exchange. PMID:17270979
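    The HL7 exchange step described above amounts to serializing an observation into a pipe-delimited v2.x message. The sketch below follows HL7 v2.x segment conventions (MSH header, PID patient, OBX observation), but the field contents, application names, and function name are illustrative assumptions, not the paper's actual interface.

    ```python
    def hl7_oru(patient_id, obs_id, value, units, timestamp):
        """Assemble a minimal HL7 v2.x ORU^R01 observation message (pipe-delimited)."""
        segments = [
            # MSH: sending/receiving application and facility, timestamp, type, version.
            "MSH|^~\\&|WearableECG|KSU|BaseStation|KSU|" + timestamp + "||ORU^R01|0001|P|2.3",
            # PID: patient identification.
            "PID|1||" + patient_id,
            # OBX: one numeric (NM) observation with units; 'F' marks a final result.
            "OBX|1|NM|" + obs_id + "||" + str(value) + "|" + units + "|||||F",
        ]
        return "\r".join(segments)  # HL7 v2 separates segments with carriage returns

    msg = hl7_oru("12345", "8867-4^HeartRate", 72, "bpm", "20240101120000")
    ```

    A receiving system (or a MySQL-backed gateway as in the paper) parses such messages segment by segment to populate its records.
    
    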

  9. Interoperable mesh and geometry tools for advanced petascale simulations

    SciTech Connect

    Diachin, L; Bauer, A; Fix, B; Kraftcheck, J; Jansen, K; Luo, X; Miller, M; Ollivier-Gooch, C; Shephard, M; Tautges, T; Trease, H

    2007-07-04

    SciDAC applications have a demonstrated need for advanced software tools to manage the complexities associated with sophisticated geometry, mesh, and field manipulation tasks, particularly as computer architectures move toward the petascale. The Center for Interoperable Technologies for Advanced Petascale Simulations (ITAPS) will deliver interoperable and interchangeable mesh, geometry, and field manipulation services that are of direct use to SciDAC applications. The premise of our technology development goal is to provide such services as libraries that can be used with minimal intrusion into application codes. To develop these technologies, we focus on defining a common data model and data-structure-neutral interfaces that unify a number of different services such as mesh generation and improvement, front tracking, adaptive mesh refinement, shape optimization, and solution transfer operations. We highlight the use of several ITAPS services in SciDAC applications.

  10. A STANAG for NATO imagery interoperable data links

    NASA Astrophysics Data System (ADS)

    Peckham, H. M.

    1993-12-01

    NATO, under the direction of Air Group IV (A/C 224) of the Air Force Armament Group, is writing a Standardization Agreement (STANAG) for an Imagery Interoperable Data Link. This is the last segment of the NATO Imagery Interoperable Architecture (NIIA) to be completed. This paper will briefly review the background of the development of the NIIA and the inter-relationships of the three segments, and then describe the approach being taken to the preparation of the data link STANAG. The concept of the data link, described by a layered model that uses Open Systems Interconnection concepts to define interfaces between the layers, will be discussed, and then the specific interfaces being used for the STANAG development will be described.

  11. Interoperability And Value Added To Earth Observation Data

    NASA Astrophysics Data System (ADS)

    Gasperi, J.

    2012-04-01

    Geospatial web services technology has provided a new means for geospatial data interoperability. Open Geospatial Consortium (OGC) services such as Web Map Service (WMS) to request maps on the Internet, Web Feature Service (WFS) to exchange vectors or Catalog Service for the Web (CSW) to search for geospatialized data have been widely adopted in the Geosciences community in general and in the remote sensing community in particular. These services make Earth Observation data available to a wider range of public users than ever before. The mapshup web client offers an innovative and efficient user interface that takes advantage of the power of interoperability. This presentation will demonstrate how mapshup can be effectively used in the context of natural disasters management.
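    An OGC WMS request is just a parameterized HTTP GET, which is what makes clients like mapshup interoperable with any conforming server. The sketch below composes a WMS 1.3.0 GetMap URL; the endpoint and layer name are placeholders.

    ```python
    from urllib.parse import urlencode

    def wms_getmap_url(base, layer, bbox, width=512, height=512):
        """Compose an OGC WMS 1.3.0 GetMap request URL."""
        params = {
            "SERVICE": "WMS",
            "VERSION": "1.3.0",
            "REQUEST": "GetMap",
            "LAYERS": layer,
            "CRS": "EPSG:4326",                          # axis order lat/lon in 1.3.0
            "BBOX": ",".join(str(c) for c in bbox),      # minlat,minlon,maxlat,maxlon
            "WIDTH": width,
            "HEIGHT": height,
            "FORMAT": "image/png",
        }
        return base + "?" + urlencode(params)

    url = wms_getmap_url("http://example.org/wms", "precipitation", (-90, -180, 90, 180))
    ```

    The same pattern applies to WFS (`REQUEST=GetFeature`) and CSW queries, differing only in the parameter set.
    
    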

  12. Smartfiles: An OO approach to data file interoperability

    NASA Technical Reports Server (NTRS)

    Haines, Matthew; Mehrotra, Piyush; Vanrosendale, John

    1995-01-01

    Data files for scientific and engineering codes typically consist of a series of raw data values whose descriptions are buried in the programs that interact with these files. In this situation, making even minor changes in the file structure or sharing files between programs (interoperability) can only be done after careful examination of the data file and the I/O statements of the programs interacting with this file. In short, scientific data files lack self-description, and existing self-describing data techniques are not always appropriate or useful for scientific data files. By applying an object-oriented methodology to data files, we can add the intelligence required to improve data interoperability and provide an elegant mechanism for supporting complex, evolving, or multidisciplinary applications, while still supporting legacy codes. As a result, scientists and engineers should be able to share datasets with far greater ease, simplifying multidisciplinary applications and greatly facilitating remote collaboration between scientists.
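    The core idea, attaching a machine-readable description to otherwise raw values, can be illustrated with a minimal class that writes a metadata header ahead of the data and uses it to interpret the values on read. This is a sketch of self-description in general, not the Smartfiles design itself; the class and field names are hypothetical.

    ```python
    import io
    import json

    class SmartFile:
        """Minimal self-describing data file: a JSON header describing the raw values."""

        def __init__(self, fields):
            self.fields = fields  # e.g. [("time", "s"), ("pressure", "Pa")]

        def write(self, stream, rows):
            header = {"fields": [{"name": n, "units": u} for n, u in self.fields]}
            stream.write(json.dumps(header) + "\n")   # description travels with the data
            for row in rows:
                stream.write(" ".join(str(v) for v in row) + "\n")

        @staticmethod
        def read(stream):
            header = json.loads(stream.readline())    # no knowledge buried in the reader
            names = [f["name"] for f in header["fields"]]
            rows = [dict(zip(names, map(float, line.split()))) for line in stream]
            return header, rows

    # Demo: round-trip through an in-memory stream.
    buf = io.StringIO()
    SmartFile([("time", "s"), ("pressure", "Pa")]).write(buf, [(0.0, 101.3), (1.0, 101.1)])
    buf.seek(0)
    header, rows = SmartFile.read(buf)
    ```

    Because the reader learns the field names and units from the file itself, a second program can consume the file without inspecting the writer's I/O code.
    
    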

  13. TRIGA: Telecommunications Protocol Processing Subsystem Using Reconfigurable Interoperable Gate Arrays

    NASA Technical Reports Server (NTRS)

    Pang, Jackson; Pingree, Paula J.; Torgerson, J. Leigh

    2006-01-01

    We present the Telecommunications protocol processing subsystem using Reconfigurable Interoperable Gate Arrays (TRIGA), a novel approach that unifies fault tolerance, error correction coding and interplanetary communication protocol off-loading to implement CCSDS File Delivery Protocol and Datalink layers. The new reconfigurable architecture offers more than one order of magnitude throughput increase while reducing footprint requirements in memory, command and data handling processor utilization, communication system interconnects and power consumption.

  14. Technical Data Interoperability (TDI) Pathfinder Via Emerging Standards

    NASA Technical Reports Server (NTRS)

    Conroy, Mike; Gill, Paul; Hill, Bradley; Ibach, Brandon; Jones, Corey; Ungar, David; Barch, Jeffrey; Ingalls, John; Jacoby, Joseph; Manning, Josh; Bengtsson, Kjell; Falls, Mark; Kent, Peter; Heath, Shaun; Kennedy, Steven

    2014-01-01

    The Technical Data Interoperability (TDI) project investigates trending technical data standards for applicability to NASA vehicles, space stations, payloads, facilities, and equipment. TDI tested COTS software compatible with a suite of related industry standards, assessing both the capabilities of individual products and their interoperability. These standards not only enable Information Technology (IT) efficiencies, but also address efficient structures and standard content for business processes. We used source data from generic industry samples as well as NASA and European Space Agency (ESA) data from space systems.

  15. Secure and interoperable communication infrastructures for PPDR organisations

    NASA Astrophysics Data System (ADS)

    Müller, Wilmuth; Marques, Hugo; Pereira, Luis; Rodriguez, Jonathan; Brouwer, Frank; Bouwers, Bert; Politis, Ilias; Lykourgiotis, Asimakis; Ladas, Alexandros; Adigun, Olayinka; Jelenc, David

    2016-05-01

    The growing number of events affecting public safety and security (PS&S) on a regional scale, with the potential to grow into large-scale cross-border disasters, puts increased pressure on agencies and organisations responsible for PS&S. In order to respond timely and in an adequate manner to such events, Public Protection and Disaster Relief (PPDR) organisations need to cooperate, align their procedures and activities, share the needed information and be interoperable. Existing PPDR/PMR technologies such as TETRA, TETRAPOL or P25 do not currently provide broadband capability, nor are such technologies expected to be upgraded in the future. This presents a major limitation in supporting new services and information flows. Furthermore, there is no known standard that addresses interoperability of these technologies. In this contribution the design of a next generation communication infrastructure for PPDR organisations which fulfills the requirements of secure and seamless end-to-end communication and interoperable information exchange within the deployed communication networks is presented. Based on the Enterprise Architecture of PPDR organisations, a next generation PPDR network that is backward compatible with legacy communication technologies is designed and implemented, capable of providing security, privacy, seamless mobility, QoS and reliability support for mission-critical Private Mobile Radio (PMR) voice and broadband data services. The designed solution provides a robust, reliable, and secure mobile broadband communications system for a wide variety of PMR applications and services on PPDR broadband networks, including the ability of inter-system, interagency and cross-border operations with emphasis on interoperability between users in PMR and LTE.

  16. On the feasibility of interoperable schemes in hand biometrics.

    PubMed

    Morales, Aythami; González, Ester; Ferrer, Miguel A

    2012-01-01

    Personal recognition through hand-based biometrics has attracted the interest of many researchers in the last twenty years. A significant number of proposals based on different procedures and acquisition devices have been published in the literature. However, comparisons between devices and their interoperability have not been thoroughly studied. This paper tries to fill this gap by proposing procedures to improve the interoperability among different hand biometric schemes. The experiments were conducted on a database made up of 8,320 hand images acquired from six different hand biometric schemes, including a flat scanner, webcams at different wavelengths, high quality cameras, and contactless devices. Acquisitions on both sides of the hand were included. Our experiment includes four feature extraction methods which determine the best performance among the different scenarios for two of the most popular hand biometrics: hand shape and palm print. We propose smoothing techniques at the image and feature levels to reduce interdevice variability. Results suggest that comparative hand shape offers better performance in terms of interoperability than palm prints, but palm prints can be more effective when using similar sensors. PMID:22438714

  17. On the Feasibility of Interoperable Schemes in Hand Biometrics

    PubMed Central

    Morales, Aythami; González, Ester; Ferrer, Miguel A.

    2012-01-01

    Personal recognition through hand-based biometrics has attracted the interest of many researchers in the last twenty years. A significant number of proposals based on different procedures and acquisition devices have been published in the literature. However, comparisons between devices and their interoperability have not been thoroughly studied. This paper tries to fill this gap by proposing procedures to improve the interoperability among different hand biometric schemes. The experiments were conducted on a database made up of 8,320 hand images acquired from six different hand biometric schemes, including a flat scanner, webcams at different wavelengths, high quality cameras, and contactless devices. Acquisitions on both sides of the hand were included. Our experiment includes four feature extraction methods which determine the best performance among the different scenarios for two of the most popular hand biometrics: hand shape and palm print. We propose smoothing techniques at the image and feature levels to reduce interdevice variability. Results suggest that comparative hand shape offers better performance in terms of interoperability than palm prints, but palm prints can be more effective when using similar sensors. PMID:22438714

  18. Web Image Re-Ranking Using Query-Specific Semantic Signatures.

    PubMed

    Wang, Xiaogang; Qiu, Shi; Liu, Ke; Tang, Xiaoou

    2014-04-01

    Image re-ranking, as an effective way to improve the results of web-based image search, has been adopted by current commercial search engines such as Bing and Google. Given a query keyword, a pool of images is first retrieved based on textual information. By asking the user to select a query image from the pool, the remaining images are re-ranked based on their visual similarities with the query image. A major challenge is that the similarities of visual features do not correlate well with images' semantic meanings, which reflect users' search intention. Recent work has proposed matching images in a semantic space built on attributes or reference classes closely related to the semantic meanings of images. However, learning a universal visual semantic space to characterize highly diverse images from the web is difficult and inefficient. In this paper, we propose a novel image re-ranking framework, which automatically learns different semantic spaces for different query keywords offline. The visual features of images are projected into their related semantic spaces to get semantic signatures. At the online stage, images are re-ranked by comparing their semantic signatures obtained from the semantic space specified by the query keyword. The proposed query-specific semantic signatures significantly improve both the accuracy and efficiency of image re-ranking. The original visual features of thousands of dimensions can be projected to semantic signatures as short as 25 dimensions. Experimental results show that a 25-40 percent relative improvement in re-ranking precision has been achieved compared with state-of-the-art methods. PMID:26353202
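    As a rough illustration of the signature idea described above, the sketch below projects high-dimensional visual features through an assumed, already-learned query-specific projection matrix and re-ranks candidates by cosine similarity of the resulting short signatures. The projection matrix and feature dimensions are invented for illustration; the paper's offline learning of semantic spaces from reference classes is not reproduced here.

```python
import numpy as np

def semantic_signature(visual_feature, projection):
    # Project a high-dimensional visual feature into a short signature
    # (e.g. 25 dimensions) using a query-specific projection matrix.
    return projection @ visual_feature

def rerank(query_feature, candidate_features, projection):
    # Re-rank candidates by cosine similarity of semantic signatures.
    q = semantic_signature(query_feature, projection)
    sims = []
    for i, f in enumerate(candidate_features):
        s = semantic_signature(f, projection)
        cos = float(np.dot(q, s) / (np.linalg.norm(q) * np.linalg.norm(s) + 1e-9))
        sims.append((cos, i))
    return [i for _, i in sorted(sims, reverse=True)]
```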

  19. Advances in Using Opensearch for Earth Science Data Discovery and Interoperability

    NASA Astrophysics Data System (ADS)

    Newman, D. J.; Mitchell, A. E.

    2014-12-01

    As per www.opensearch.org, "OpenSearch is a collection of simple formats for the sharing of search results." A number of organizations (NASA, ESA, CEOS) have begun to adopt this standard as a means of allowing both the discovery of earth science data and the aggregation of results from disparate data archives. OpenSearch has proven simpler and more effective at achieving these goals than previous efforts (Catalog Service for the Web, for example). This talk will outline: the basic ideas behind OpenSearch; the ways in which we have extended the basic specification to accommodate the Earth Science use case (two-step searching, relevancy ranking, facets); a case study of the above in action (CWICSmart + IDN OpenSearch + CWIC OpenSearch); the potential for interoperability this simple standard affords; and a discussion of where we can go in the future.
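    An OpenSearch client's work amounts to filling a URL template with query parameters. The sketch below builds such a query against a hypothetical granule endpoint; the endpoint URL is invented, and the parameter names loosely follow the OpenSearch Geo and Time extensions (searchTerms, geo:box, time:start, time:end), flattened into plain query keys for illustration.

```python
from urllib.parse import urlencode

# Hypothetical granule-search endpoint; a real client would read the
# template from the provider's OpenSearch Description Document (OSDD).
TEMPLATE = "https://example.org/opensearch/granules?{params}"

def build_query(keyword, bbox, time_start, time_end, count=10):
    params = {
        "q": keyword,                            # searchTerms
        "geoBox": ",".join(str(v) for v in bbox),  # west,south,east,north
        "timeStart": time_start,
        "timeEnd": time_end,
        "numberOfResults": count,
    }
    return TEMPLATE.format(params=urlencode(params))
```

In the two-step pattern mentioned above, a first query against a collection endpoint (e.g. the IDN) would return dataset entries, each advertising its own granule OSDD against which a query like this one is then issued.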

  20. Direct2Experts: a pilot national network to demonstrate interoperability among research-networking platforms

    PubMed Central

    Barnett, William; Conlon, Mike; Eichmann, David; Kibbe, Warren; Falk-Krzesinski, Holly; Halaas, Michael; Johnson, Layne; Meeks, Eric; Mitchell, Donald; Schleyer, Titus; Stallings, Sarah; Warden, Michael; Kahlon, Maninder

    2011-01-01

    Research-networking tools use data-mining and social networking to enable expertise discovery, matchmaking and collaboration, which are important facets of team science and translational research. Several commercial and academic platforms have been built, and many institutions have deployed these products to help their investigators find local collaborators. Recent studies, though, have shown the growing importance of multiuniversity teams in science. Unfortunately, the lack of a standard data-exchange model and resistance of universities to share information about their faculty have presented barriers to forming an institutionally supported national network. This case report describes an initiative, which, in only 6 months, achieved interoperability among seven major research-networking products at 28 universities by taking an approach that focused on addressing institutional concerns and encouraging their participation. With this necessary groundwork in place, the second phase of this effort can begin, which will expand the network's functionality and focus on the end users. PMID:22037890

  1. Direct2Experts: a pilot national network to demonstrate interoperability among research-networking platforms.

    PubMed

    Weber, Griffin M; Barnett, William; Conlon, Mike; Eichmann, David; Kibbe, Warren; Falk-Krzesinski, Holly; Halaas, Michael; Johnson, Layne; Meeks, Eric; Mitchell, Donald; Schleyer, Titus; Stallings, Sarah; Warden, Michael; Kahlon, Maninder

    2011-12-01

    Research-networking tools use data-mining and social networking to enable expertise discovery, matchmaking and collaboration, which are important facets of team science and translational research. Several commercial and academic platforms have been built, and many institutions have deployed these products to help their investigators find local collaborators. Recent studies, though, have shown the growing importance of multiuniversity teams in science. Unfortunately, the lack of a standard data-exchange model and resistance of universities to share information about their faculty have presented barriers to forming an institutionally supported national network. This case report describes an initiative, which, in only 6 months, achieved interoperability among seven major research-networking products at 28 universities by taking an approach that focused on addressing institutional concerns and encouraging their participation. With this necessary groundwork in place, the second phase of this effort can begin, which will expand the network's functionality and focus on the end users. PMID:22037890

  2. Progress Toward Standards for the Seamless Interoperability of Broadband Satellite Communication Networks

    NASA Technical Reports Server (NTRS)

    Ivancic, William D.; Glover, Daniel R.; vonDeak, Thomas C.; Bhasin, Kul B.

    1998-01-01

    The realization of the full potential of the National Information Infrastructure (NII) and Global Information Infrastructure (GII) requires seamless interoperability of emerging satellite networks with terrestrial networks. This requires a cooperative effort among industry, academia, and government agencies to develop and advocate new, satellite-friendly communication protocols and modifications to existing communication protocol standards. These groups have recently come together to actively participate in a number of standards-making bodies, including the Internet Engineering Task Force (IETF), the Asynchronous Transfer Mode (ATM) Forum, the International Telecommunication Union (ITU), and the Telecommunications Industry Association (TIA), to ensure that issues regarding efficient use of these protocols over satellite links are not overlooked. This paper will summarize the progress made toward standards development to achieve seamless integration and accelerate the deployment of multimedia applications.

  3. Use of Annotations for Component and Framework Interoperability

    NASA Astrophysics Data System (ADS)

    David, O.; Lloyd, W.; Carlson, J.; Leavesley, G. H.; Geter, F.

    2009-12-01

    The popular programming languages Java and C# provide annotations, a form of meta-data construct. Software frameworks for web integration, web services, database access, and unit testing now take advantage of annotations to reduce the complexity of APIs and the quantity of integration code between the application and framework infrastructure. Adopting annotation features in frameworks has been observed to lead to cleaner and leaner application code. The USDA Object Modeling System (OMS) version 3.0 fully embraces the annotation approach and additionally defines a meta-data standard for components and models. In version 3.0, framework/model integration previously accomplished using API calls is now achieved using descriptive annotations. This enables the framework to provide additional functionality non-invasively, such as implicit multithreading and auto-documenting capabilities, while achieving a significant reduction in the size of the model source code. Using a non-invasive methodology leads to models and modeling components with only minimal dependencies on the modeling framework. Since models and modeling components are not directly bound to the framework by the use of specific APIs and/or data types, they can more easily be reused both within the framework and outside of it. To study the effectiveness of an annotation-based framework approach relative to other modeling frameworks, a framework-invasiveness study was conducted to evaluate the effects of framework design on model code quality. A monthly water balance model was implemented across several modeling frameworks and several software metrics were collected. The metrics selected were measures of non-invasive design methods for modeling frameworks from a software engineering perspective. It appears that the use of annotations positively impacts several software quality measures. In a next step, the PRMS model was implemented in OMS 3.0 and is currently being implemented for water supply forecasting in the
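    The non-invasive pattern described above can be sketched in Python, where decorators play a role similar to Java/C# annotations: metadata is attached declaratively and the framework discovers components by introspection instead of the component calling framework APIs. All names here are illustrative, not the actual OMS 3.0 API.

```python
REGISTRY = {}

def component(name, doc=""):
    # Attach descriptive metadata and register the class with the
    # "framework"; the component itself never imports the framework.
    def wrap(cls):
        cls._meta = {"name": name, "doc": doc}
        REGISTRY[name] = cls
        return cls
    return wrap

@component("WaterBalance", doc="Monthly water balance model")
class WaterBalance:
    def execute(self, precip, pet):
        # Trivial placeholder: runoff is precipitation minus potential
        # evapotranspiration, floored at zero.
        return max(precip - pet, 0.0)

# The framework runs every registered component via metadata alone.
results = {name: cls().execute(100.0, 60.0) for name, cls in REGISTRY.items()}
```

Because the component carries no framework-specific calls or data types, the same class can be executed standalone, which is the reuse property the abstract attributes to annotation-based designs.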

  4. Schoolbook Texts: Behavioral Achievement Priming in Math and Language

    PubMed Central

    Engeser, Stefan; Baumann, Nicola; Baum, Ingrid

    2016-01-01

    Prior research found reliable and considerably strong effects of semantic achievement primes on subsequent performance. In order to simulate a more natural priming condition to better understand the practical relevance of semantic achievement priming effects, running texts of schoolbook excerpts with and without achievement primes were used as priming stimuli. Additionally, we manipulated the achievement context; some subjects received no feedback about their achievement and others received feedback according to a social or individual reference norm. As expected, we found a reliable (albeit small) positive behavioral priming effect of semantic achievement primes on achievement in math (Experiment 1) and language tasks (Experiment 2). Feedback moderated the behavioral priming effect less consistently than we expected. The implication that achievement primes in schoolbooks can foster performance is discussed along with general theoretical implications. PMID:26938446

  5. SOA approach to battle command: simulation interoperability

    NASA Astrophysics Data System (ADS)

    Mayott, Gregory; Self, Mid; Miller, Gordon J.; McDonnell, Joseph S.

    2010-04-01

    NVESD is developing a Sensor Data and Management Services (SDMS) Service Oriented Architecture (SOA) that provides an innovative approach to achieve seamless application functionality across simulation and battle command systems. In 2010, CERDEC will conduct an SDMS Battle Command demonstration that will highlight the SDMS SOA capability to couple simulation applications to existing Battle Command systems. The demonstration will leverage RDECOM MATREX simulation tools and TRADOC Maneuver Support Battle Laboratory Virtual Base Defense Operations Center facilities. The battle command systems are those specific to the operation of a base defense operations center in support of force protection missions. The SDMS SOA consists of four components that will be discussed. An Asset Management Service (AMS) will automatically discover the existence, state, and interface definition required to interact with a named asset (a sensor or a sensor platform, a process such as level-1 fusion, or an interface to a sensor or other network endpoint). A Streaming Video Service (SVS) will automatically discover the existence, state, and interfaces required to interact with a named video stream, and abstract the consumers of the video stream from the originating device. A Task Manager Service (TMS) will be used to automatically discover the existence of a named mission task, and will interpret, translate and transmit a mission command for the blue force unit(s) described in a mission order. JC3IEDM data objects and a software development kit (SDK) will be utilized as the basic data object definition for the implemented web services.
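    The discovery pattern shared by the AMS, SVS, and TMS can be reduced to a registry that maps asset names to state and interface descriptions. The minimal sketch below is an assumption about that pattern, not the SDMS API; all names and fields are invented.

```python
class AssetManager:
    """Registers named assets and answers discovery queries (illustrative)."""

    def __init__(self):
        self._assets = {}

    def register(self, name, state, interface):
        self._assets[name] = {"state": state, "interface": interface}

    def discover(self, name):
        # Returns state and interface definition for a named asset,
        # or None if no such asset has been registered.
        return self._assets.get(name)

ams = AssetManager()
ams.register("perimeter_cam_1", "online", {"protocol": "rtsp", "port": 554})
info = ams.discover("perimeter_cam_1")
```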

  6. Semantic Similarity in Biomedical Ontologies

    PubMed Central

    Pesquita, Catia; Faria, Daniel; Falcão, André O.; Lord, Phillip; Couto, Francisco M.

    2009-01-01

    In recent years, ontologies have become a mainstream topic in biomedical research. When biological entities are described using a common schema, such as an ontology, they can be compared by means of their annotations. This type of comparison is called semantic similarity, since it assesses the degree of relatedness between two entities by the similarity in meaning of their annotations. The application of semantic similarity to biomedical ontologies is recent; nevertheless, several studies have been published in the last few years describing and evaluating diverse approaches. Semantic similarity has become a valuable tool for validating the results drawn from biomedical studies such as gene clustering, gene expression data analysis, prediction and validation of molecular interactions, and disease gene prioritization. We review semantic similarity measures applied to biomedical ontologies and propose their classification according to the strategies they employ: node-based versus edge-based and pairwise versus groupwise. We also present comparative assessment studies and discuss the implications of their results. We survey the existing implementations of semantic similarity measures, and we describe examples of applications to biomedical research. This will clarify how biomedical researchers can benefit from semantic similarity measures and help them choose the approach most suitable for their studies. Biomedical ontologies are evolving toward increased coverage, formality, and integration, and their use for annotation is increasingly becoming a focus of both effort by biomedical experts and application of automated annotation procedures to create corpora of higher quality and completeness than are currently available. Given that semantic similarity measures are directly dependent on these evolutions, we can expect to see them gaining more relevance and even becoming as essential as sequence similarity is today in biomedical research. PMID:19649320
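    The node-based family of measures surveyed above can be illustrated with a Resnik-style score: the information content (IC) of the most informative common ancestor of two terms. The mini-ontology and annotation counts below are invented for illustration, not drawn from any real biomedical ontology.

```python
import math

# Toy ontology as a child -> parents DAG (illustrative only).
PARENTS = {
    "molecular_function": [],
    "binding": ["molecular_function"],
    "dna_binding": ["binding"],
    "rna_binding": ["binding"],
    "catalysis": ["molecular_function"],
}
# Invented annotation counts used to estimate term probabilities.
COUNTS = {"molecular_function": 100, "binding": 40, "dna_binding": 10,
          "rna_binding": 10, "catalysis": 30}

def ancestors(term):
    # All ancestors of a term, including the term itself.
    seen, stack = set(), [term]
    while stack:
        t = stack.pop()
        if t not in seen:
            seen.add(t)
            stack.extend(PARENTS[t])
    return seen

def ic(term):
    # Information content: -log p(term), with p from annotation frequency.
    return -math.log(COUNTS[term] / COUNTS["molecular_function"])

def resnik(t1, t2):
    # Node-based similarity: IC of the most informative common ancestor.
    return max(ic(t) for t in ancestors(t1) & ancestors(t2))
```

An edge-based measure would instead count edges on the shortest path between the terms; the groupwise variants aggregate such scores over whole annotation sets.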

  7. Standardized Semantic Markup for Reference Terminologies, Thesauri and Coding Systems: Benefits for distributed E-Health Applications.

    PubMed

    Hoelzer, Simon; Schweiger, Ralf K; Liu, Raymond; Rudolf, Dirk; Rieger, Joerg; Dudeck, Joachim

    2005-01-01

    With the introduction of the ICD-10 as the standard for diagnosis, the development of an electronic representation of its complete content, inherent semantics and coding rules is necessary. Our concept refers to current efforts of the CEN/TC 251 to establish a European standard for hierarchical classification systems in healthcare. We have developed an electronic representation of the ICD-10 with the extensible Markup Language (XML) that facilitates the integration in current information systems or coding software taking into account different languages and versions. In this context, XML offers a complete framework of related technologies and standard tools for processing that helps to develop interoperable applications. PMID:15923766
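    An XML representation of a hierarchical classification makes lookups and traversals straightforward with standard tooling. The fragment below is hypothetical: the element names are illustrative and do not follow the CEN/TC 251 schema, and only two real ICD-10 codes are shown.

```python
import xml.etree.ElementTree as ET

# Hypothetical XML encoding of part of an ICD-10 chapter (element and
# attribute names invented for illustration).
ICD_XML = """
<chapter code="X" title="Diseases of the respiratory system">
  <block code="J00-J06" title="Acute upper respiratory infections">
    <category code="J00" title="Acute nasopharyngitis [common cold]"/>
    <category code="J06" title="Acute upper respiratory infections of multiple sites"/>
  </block>
</chapter>
"""

def lookup(code):
    # Walk the hierarchy and return the title for a category code.
    root = ET.fromstring(ICD_XML)
    for cat in root.iter("category"):
        if cat.get("code") == code:
            return cat.get("title")
    return None
```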

  8. Policy Issues in Accessibility and Interoperability of Scientific Data: Experiences from the Carbon Modeling Field

    NASA Astrophysics Data System (ADS)

    Kishor, P.; Peckham, S. D.; Gower, S. T.; Batzli, S.

    2010-12-01

    Large-scale terrestrial ecosystem modeling is highly parameterized, and requires lots of historical data. Routine model runs can easily utilize hundreds of Gigabytes, even Terabytes, of data on tens, perhaps hundreds, of parameters. It is a given that no one modeler can or does collect all the required data. All modelers depend upon other scientists, and governmental and research agencies, for their data needs. This is where data accessibility and interoperability become crucial for the success of the project. Having well-documented and quality data available in a timely fashion can greatly assist a project's progress, while the converse can bring the project to a standstill, leading to a large amount of wasted staff time and resources. Data accessibility is a complex issue -- at best, it is an unscientific composite of a variety of factors: technological, legal, cultural, semantic, and economic. In reality, it is a concept that most scientists only worry about when they need some data, and mostly never after their project is complete. The exigencies of the vetting, review and publishing processes overtake the long-term view of making one's own data available to others with the same ease and openness that was desired when seeking data from others. This presentation describes our experience with acquiring data for our carbon modeling efforts, dealing with federal, state and local agencies, a variety of data formats, some published, some not so easy to find, and documentation that ranges from excellent to non-existent. A set of indicators is proposed to place and determine the accessibility of scientific data -- those we are seeking and those we are producing -- in order to bring some transparency and clarity that can make data acquisition and sharing easier. The paper concludes with a proposal to utilize free, open and well-recognized data marks such as CC0 (CC-Zero), the Public Domain Dedication License, and CC-BY created by Creative Commons that would advertise the

  9. The agent-based spatial information semantic grid

    NASA Astrophysics Data System (ADS)

    Cui, Wei; Zhu, YaQiong; Zhou, Yong; Li, Deren

    2006-10-01

    management layer establishes a virtual environment that seamlessly integrates all GIS nodes. 2) When the resource management system searches data on different spatial information systems, it transfers the meaning of different Local Ontology Agents rather than accessing data directly, so search and query operate at the semantic level. 3) The data access procedure is transparent to guests; they can access information from a remote site as if it were a local disk, because the General Ontology Agent automatically links data through the Data Agents that connect Ontology concepts to GIS data. 4) The capability of processing massive spatial data: storing, accessing, and managing massive spatial data from TB to PB; efficiently analyzing and processing spatial data to produce models, information, and knowledge; and providing 3D and multimedia visualization services. 5) The capability of high-performance computing and processing of spatial information: solving spatial problems with high precision, high quality, and on a large scale, and processing spatial information in real time or on time, with high speed and high efficiency. 6) The capability of sharing spatial resources: distributed heterogeneous spatial information resources are shared, integrated, and inter-operated at the semantic level, so as to make the best use of spatial information resources such as computing resources, storage devices, spatial data (integrated from GIS, RS, and GPS), spatial applications and services, and GIS platforms. 7) The capability of integrating legacy GIS systems: an ASISG can not only be used to construct new advanced spatial application systems but also integrate legacy GIS systems, so as to preserve extensibility and inheritance and protect users' investment. 8) The capability of collaboration: large-scale spatial information applications and services always involve different departments in different geographic places, so remote and uniform services are needed. 
9) The

  10. Accelerating Cancer Systems Biology Research through Semantic Web Technology

    PubMed Central

    Wang, Zhihui; Sagotsky, Jonathan; Taylor, Thomas; Shironoshita, Patrick; Deisboeck, Thomas S.

    2012-01-01

    Cancer systems biology is an interdisciplinary, rapidly expanding research field in which collaborations are a critical means to advance the field. Yet the prevalent database technologies often isolate data rather than making it easily accessible. The Semantic Web has the potential to help facilitate web-based collaborative cancer research by presenting data in a manner that is self-descriptive, human and machine readable, and easily sharable. We have created a semantically linked online Digital Model Repository (DMR) for storing, managing, executing, annotating, and sharing computational cancer models. Within the DMR, distributed, multidisciplinary, and inter-organizational teams can collaborate on projects, without forfeiting intellectual property. This is achieved by the introduction of a new stakeholder to the collaboration workflow, the institutional licensing officer, part of the Technology Transfer Office. Furthermore, the DMR has achieved silver level compatibility with the National Cancer Institute’s caBIG®, so users can not only interact with the DMR through a web browser but also through a semantically annotated and secure web service. We also discuss the technology behind the DMR leveraging the Semantic Web, ontologies, and grid computing to provide secure inter-institutional collaboration on cancer modeling projects, online grid-based execution of shared models, and the collaboration workflow protecting researchers’ intellectual property. PMID:23188758

  11. Accelerating cancer systems biology research through Semantic Web technology.

    PubMed

    Wang, Zhihui; Sagotsky, Jonathan; Taylor, Thomas; Shironoshita, Patrick; Deisboeck, Thomas S

    2013-01-01

    Cancer systems biology is an interdisciplinary, rapidly expanding research field in which collaborations are a critical means to advance the field. Yet the prevalent database technologies often isolate data rather than making it easily accessible. The Semantic Web has the potential to help facilitate web-based collaborative cancer research by presenting data in a manner that is self-descriptive, human and machine readable, and easily sharable. We have created a semantically linked online Digital Model Repository (DMR) for storing, managing, executing, annotating, and sharing computational cancer models. Within the DMR, distributed, multidisciplinary, and inter-organizational teams can collaborate on projects, without forfeiting intellectual property. This is achieved by the introduction of a new stakeholder to the collaboration workflow, the institutional licensing officer, part of the Technology Transfer Office. Furthermore, the DMR has achieved silver level compatibility with the National Cancer Institute's caBIG, so users can interact with the DMR not only through a web browser but also through a semantically annotated and secure web service. We also discuss the technology behind the DMR leveraging the Semantic Web, ontologies, and grid computing to provide secure inter-institutional collaboration on cancer modeling projects, online grid-based execution of shared models, and the collaboration workflow protecting researchers' intellectual property. PMID:23188758

  12. A Semantic Relatedness Approach for Traceability Link Recovery

    SciTech Connect

    Mahmoud, Anas M.; Niu, Nan; Xu, Songhua

    2012-01-01

    Human analysts working with automated tracing tools need to directly vet candidate traceability links in order to determine the true traceability information. Currently, human intervention happens at the end of the traceability process, after candidate traceability links have already been generated. This often leads to a decline in the accuracy of the results. In this paper, we propose an approach, based on semantic relatedness (SR), which brings human judgment to an earlier stage of the tracing process by integrating it into the underlying retrieval mechanism. SR tries to mimic the human mental model of relevance by considering a broad range of semantic relations, hence producing more semantically meaningful results. We evaluated our approach using three datasets from different application domains, and assessed the tracing results via six different performance measures concerning both result quality and browsability. The empirical evaluation results show that our SR approach achieves significantly better performance in recovering true links than a standard Vector Space Model (VSM) on all datasets. Our approach also achieves significantly better precision than Latent Semantic Indexing (LSI) on two of our datasets.
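    The VSM baseline the paper compares against ranks candidate artifacts by cosine similarity of TF-IDF vectors. A minimal sketch over a tiny invented corpus (the requirement and code texts are made up for illustration):

```python
import math
from collections import Counter

def tfidf_vectors(docs):
    # Build sparse TF-IDF vectors for a small corpus.
    tokenized = [d.lower().split() for d in docs]
    df = Counter(t for doc in tokenized for t in set(doc))
    n = len(docs)
    return [{t: c * math.log(n / df[t]) for t, c in Counter(doc).items()}
            for doc in tokenized]

def cosine(u, v):
    dot = sum(w * v.get(t, 0.0) for t, w in u.items())
    nu = math.sqrt(sum(w * w for w in u.values()))
    nv = math.sqrt(sum(w * w for w in v.values()))
    return dot / (nu * nv) if nu and nv else 0.0

# Rank candidate code artifacts against one requirement.
reqs = ["user login authentication"]
code = ["authenticate user login session", "render report chart"]
vecs = tfidf_vectors(reqs + code)
scores = [cosine(vecs[0], v) for v in vecs[1:]]
```

The SR approach described above replaces this purely lexical matching with a broader set of semantic relations, which is where the reported gains over VSM come from.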

  13. Leveraging electronic healthcare record standards and semantic web technologies for the identification of patient cohorts

    PubMed Central

    Fernández-Breis, Jesualdo Tomás; Maldonado, José Alberto; Marcos, Mar; Legaz-García, María del Carmen; Moner, David; Torres-Sospedra, Joaquín; Esteban-Gil, Angel; Martínez-Salvador, Begoña; Robles, Montserrat

    2013-01-01

    Background The secondary use of electronic healthcare records (EHRs) often requires the identification of patient cohorts. In this context, an important problem is the heterogeneity of clinical data sources, which can be overcome with the combined use of standardized information models, virtual health records, and semantic technologies, since each of them contributes to solving aspects related to the semantic interoperability of EHR data. Objective To develop methods allowing for a direct use of EHR data for the identification of patient cohorts leveraging current EHR standards and semantic web technologies. Materials and methods We propose to take advantage of the best features of working with EHR standards and ontologies. Our proposal is based on our previous results and experience working with both technological infrastructures. Our main principle is to perform each activity at the abstraction level with the most appropriate technology available. This means that part of the processing will be performed using archetypes (ie, data level) and the rest using ontologies (ie, knowledge level). Our approach will start working with EHR data in proprietary format, which will be first normalized and elaborated using EHR standards and then transformed into a semantic representation, which will be exploited by automated reasoning. Results We have applied our approach to protocols for colorectal cancer screening. The results comprise the archetypes, ontologies, and datasets developed for the standardization and semantic analysis of EHR data. Anonymized real data have been used and the patients have been successfully classified by the risk of developing colorectal cancer. Conclusions This work provides new insights in how archetypes and ontologies can be effectively combined for EHR-driven phenotyping. The methodological approach can be applied to other problems provided that suitable archetypes, ontologies, and classification rules can be designed. PMID:23934950

  14. Convergence of Health Level Seven Version 2 Messages to Semantic Web Technologies for Software-Intensive Systems in Telemedicine Trauma Care

    PubMed Central

    Cook, Timothy Wayne; Cavalini, Luciana Tricai

    2016-01-01

    Objectives To present the technical background and the development of a procedure that enriches the semantics of Health Level Seven version 2 (HL7v2) messages for software-intensive systems in telemedicine trauma care. Methods This study followed a multilevel model-driven approach for the development of semantically interoperable health information systems. The Pre-Hospital Trauma Life Support (PHTLS) ABCDE protocol was adopted as the use case. A prototype application embedded the semantics into an HL7v2 message as an eXtensible Markup Language (XML) file, which was validated against an XML schema that defines constraints on a common reference model. This message was exchanged with a second prototype application, developed on the Mirth middleware, which was also used to parse and validate both the original and the hybrid messages. Results Both versions of the data instance (one pure XML, one embedded in the HL7v2 message) were equally validated, and the RDF-based semantics were recovered by the receiving side of the prototype from the shared XML schema. Conclusions This study demonstrated the semantic enrichment of HL7v2 messages for software-intensive telemedicine systems for trauma care, by validating components of extracts generated in various computing environments. The adoption of the method proposed in this study ensures compliance of the HL7v2 standard with Semantic Web technologies. PMID:26893947
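    The embedding step can be sketched as carrying an XML payload inside an HL7v2 message segment and recovering it on the receiving side. The sketch below is deliberately simplified: the segment/field layout is illustrative rather than a conformant HL7v2 encoding, the element names are invented stand-ins for the PHTLS ABCDE fields, and no schema or RDF validation is shown.

```python
import xml.etree.ElementTree as ET

def make_payload(airway, breathing):
    # Build a tiny XML document for two of the ABCDE assessments
    # (element names invented for illustration).
    root = ET.Element("PHTLS")
    ET.SubElement(root, "A").text = airway      # Airway
    ET.SubElement(root, "B").text = breathing   # Breathing
    return ET.tostring(root, encoding="unicode")

def embed_in_hl7v2(xml_payload):
    # Carry the XML in an OBX observation segment of a pipe-delimited
    # message (simplified field layout, not a full HL7v2 grammar).
    msh = "MSH|^~\\&|TRAUMA_APP|FIELD|HOSP|ED|20160101||ORU^R01|1|P|2.5"
    obx = f"OBX|1|ED|PHTLS^ABCDE||{xml_payload}||||||F"
    return "\r".join([msh, obx])

def recover_payload(message):
    # Parse the OBX segment back out and re-validate well-formedness.
    obx = [s for s in message.split("\r") if s.startswith("OBX")][0]
    xml_payload = obx.split("|")[5]
    return ET.fromstring(xml_payload)  # raises if not well-formed XML
```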

  15. Data federation in the Biomedical Informatics Research Network: tools for semantic annotation and query of distributed multiscale brain data.

    PubMed

    Bug, William; Astahkov, Vadim; Boline, Jyl; Fennema-Notestine, Christine; Grethe, Jeffrey S; Gupta, Amarnath; Kennedy, David N; Rubin, Daniel L; Sanders, Brian; Turner, Jessica A; Martone, Maryann E

    2008-01-01

    The broadly defined mission of the Biomedical Informatics Research Network (BIRN, www.nbirn.net) is to better understand the causes of human disease and the specific ways in which animal models inform that understanding. To construct the community-wide infrastructure for gathering, organizing and managing this knowledge, BIRN is developing a federated architecture for linking multiple databases across sites contributing data and knowledge. Navigating across these distributed data sources requires a shared semantic scheme and supporting software framework to actively link the disparate repositories. At the core of this knowledge organization is BIRNLex, a formally-represented ontology facilitating data exchange. Source curators enable database interoperability by mapping their schema and data to BIRNLex semantic classes, thereby providing a means to cast BIRNLex-based queries against specific data sources in the federation. We will illustrate use of the source registration, term mapping, and query tools. PMID:18999211

  16. The semantics of biological forms.

    PubMed

    Albertazzi, Liliana; Canal, Luisa; Dadam, James; Micciolo, Rocco

    2014-01-01

    This study analyses how certain qualitative perceptual appearances of biological forms are correlated with expressions of natural language. Making use of the Osgood semantic differential, we presented the subjects with 32 drawings of biological forms and a list of 10 pairs of connotative adjectives to be correlated with them purely by subjective judgment. The principal components analysis made it possible to group the semantics of forms according to two distinct axes of variability: harmony and dynamicity. Specifically, the nonspiculed, nonholed, and flat forms were perceived as harmonic and static; the rounded ones were harmonic and dynamic. The elongated forms were somewhat disharmonious and somewhat static. The results suggest the existence in the general population of a correspondence between perceptual and semantic processes, and of a nonsymbolic relation between visual forms and their adjectival expressions in natural language. PMID:25669053
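    The analysis pipeline above (semantic-differential ratings reduced to principal axes) can be sketched with a principal components analysis via SVD. The ratings matrix below is entirely invented to mimic the reported pattern (harmonic/static flat forms, harmonic/dynamic rounded forms); it is not the study's data.

```python
import numpy as np

# Invented ratings: rows = forms, columns = two adjective-pair scales
# (roughly "harmony" and "dynamicity" on a 1-5 scale).
ratings = np.array([
    [4.5, 1.2], [4.1, 1.0],   # flat forms: harmonic, static
    [4.4, 4.6], [4.2, 4.8],   # rounded forms: harmonic, dynamic
    [2.0, 1.5], [2.2, 1.3],   # elongated forms: disharmonious, static
])

# Principal components via SVD of the centered data matrix.
centered = ratings - ratings.mean(axis=0)
_, s, vt = np.linalg.svd(centered, full_matrices=False)
explained = (s ** 2) / (s ** 2).sum()   # variance explained per axis
scores = centered @ vt.T                # each form projected onto the axes
```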

  17. Data Access Services interoperability in the Geosciences by means of the GI-axe Brokering Framework

    NASA Astrophysics Data System (ADS)

    Boldrini, Enrico; Santoro, Mattia; Papeschi, Fabrizio; Nativi, Stefano

    2013-04-01

    Many software tools are in use in the different Geosciences domains with the aim of publishing, accessing, evaluating and using available datasets in a service-based environment. These tools/services are often domain-specific and usually support a small and disciplinary set of protocols and data models. On the other hand, multidisciplinary applications need to access many of these tools/services belonging to different domains in order to retrieve heterogeneous datasets (e.g. satellite-acquired gridded coverages and in-situ sensor time series), then "uniformly process them" and achieve a deeper insight. Moreover, datasets, to be easily processed, should be available according to a given Common Grid Environment (CGE): i.e. a geospatial environment characterized by a common spatio-temporal CRS (Coordinate Reference System), resolution, and extension, and by a common format encoding. Now, the interoperability effort needed by multidisciplinary applications is ordinarily in the charge of data provider servers or user clients; in both cases, this represents a high entry barrier. The GI-axe Access Broker addresses this interoperability issue by taking charge of the needed implementation effort. It acts as an intermediation service between the User Clients and the Data Provider Services, placing itself in a third-party (Broker) Layer. Indeed, the Access Broker can access datasets available through well-known access services in use by the Geosciences communities (e.g. OGC WCS, WMS, WFS, OPeNDAP, FTP, REST APIs, …) and republish them according to the application client interfaces. Moreover, GI-axe transforms datasets according to the CGE specified by Users. In doing so it may resort to external processing services already in use by the community, supplementing the functionalities already supported by the data provider services. The external processing services list can be configured by Users. GI-axe is also a flexible framework, composed of extensible components. This architecture

  18. Chinese Character Decoding: A Semantic Bias?

    ERIC Educational Resources Information Center

    Williams, Clay; Bever, Thomas

    2010-01-01

    The effects of semantic and phonetic radicals on Chinese character decoding were examined. Our results suggest that semantic and phonetic radicals are each available for access when a corresponding task emphasizes one or the other kind of radical. But in a more neutral lexical recognition task, the semantic radical is more informative. Semantic…

  19. Examining Lateralized Semantic Access Using Pictures

    ERIC Educational Resources Information Center

    Lovseth, Kyle; Atchley, Ruth Ann

    2010-01-01

    A divided visual field (DVF) experiment examined the semantic processing strategies employed by the cerebral hemispheres to determine if strategies observed with written word stimuli generalize to other media for communicating semantic information. We employed picture stimuli and varied the degree of semantic relatedness between the picture pairs.…

  20. Semantic and Visual Memory After Alcohol Abuse.

    ERIC Educational Resources Information Center

    Donat, Dennis C.

    1986-01-01

    Compared the relative performance of 40 patients with a history of alcohol abuse on tasks of short-term semantic and visual memory. Performance on the visual memory tasks was impaired significantly relative to the semantic memory task in a within-subjects analysis of variance. Semantic memory was unimpaired. (Author/ABB)

  1. Semantic Weight and Verb Retrieval in Aphasia

    ERIC Educational Resources Information Center

    Barde, Laura H. F.; Schwartz, Myrna F.; Boronat, Consuelo B.

    2006-01-01

    Individuals with agrammatic aphasia may have difficulty with verb production in comparison to nouns. Additionally, they may have greater difficulty producing verbs that have fewer semantic components (i.e., are semantically "light") compared to verbs that have greater semantic weight. A connectionist verb-production model proposed by Gordon and…

  2. Semantic Relatedness for Evaluation of Course Equivalencies

    ERIC Educational Resources Information Center

    Yang, Beibei

    2012-01-01

    Semantic relatedness, or its inverse, semantic distance, measures the degree of closeness between two pieces of text determined by their meaning. Related work typically measures semantics based on a sparse knowledge base such as WordNet or Cyc that requires intensive manual efforts to build and maintain. Other work is based on a corpus such as the…

  3. Metasemantics: On the Limits of Semantic Theory

    ERIC Educational Resources Information Center

    Parent, T.

    2009-01-01

    METASEMANTICS is a wake-up call for semantic theory: It reveals that some semantic questions have no adequate answer. (This is meant to be the "epistemic" point that certain semantic questions cannot be "settled"--not a metaphysical point about whether there is a fact-of-the-matter.) METASEMANTICS thus checks our default "optimism" that any…

  4. Bootstrapping to a Semantic Grid

    SciTech Connect

    Schwidder, Jens; Talbott, Tara; Myers, James D.

    2005-02-28

    The Scientific Annotation Middleware (SAM) is a set of components and services that enable researchers, applications, problem solving environments (PSE) and software agents to create metadata and annotations about data objects and document the semantic relationships between them. Developed starting in 2001, SAM allows applications to encode metadata within files or to manage metadata at the level of individual relationships as desired. SAM then provides mechanisms to expose metadata and relationships encoded either way as WebDAV properties. In this paper, we report on work to further map this metadata into RDF and discuss the role of middleware such as SAM in bridging between traditional and semantic grid applications.
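    The property-to-RDF mapping SAM performs can be illustrated with a toy serializer that emits one N-Triples statement per WebDAV-style property; the namespace and property names below are invented, not SAM's actual vocabulary.

```python
def webdav_props_to_ntriples(resource_uri, props):
    """Emit one N-Triples line per ((namespace, name), value) property."""
    lines = []
    for (ns, name), value in props.items():
        predicate = f"{ns}{name}"
        # N-Triples requires escaping backslashes and quotes in literals.
        escaped = value.replace("\\", "\\\\").replace('"', '\\"')
        lines.append(f'<{resource_uri}> <{predicate}> "{escaped}" .')
    return "\n".join(lines)

# Invented example: two annotation properties on a data object.
props = {
    ("http://example.org/sam#", "creator"): "J. Myers",
    ("http://example.org/sam#", "derivedFrom"): "run-042/input.dat",
}
print(webdav_props_to_ntriples("http://example.org/data/output.dat", props))
```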

  5. Geospatial Data Provenance in the Semantic Web Environment

    NASA Astrophysics Data System (ADS)

    di, L.; Yue, P.

    2008-12-01

    Geospatial data will grow to multi-exabytes very soon. The major form of geospatial data is imagery collected by the Earth observing community through remote sensing methods. Those data, along with their derived products and model outputs, are archived in many data centers around the world. Geospatial data have to be converted to user-specific information and knowledge before they become useful. Such user-specific information and knowledge are normally derived from multi-source data through a set of geoprocessing steps. Recent technology advances in the unified representation of geospatial data, information, and knowledge, the geospatial semantic web, geospatial interoperability, and artificial intelligence have made possible the automatic derivation of user-specific information and knowledge from diverse data sources in the web service environment. A prototype system for proving such technologies has been constructed and successfully demonstrated. An operational system is being developed. With ontology support, the system automatically constructs an executable workflow based on users' descriptions of what they want, the available services, and the input data over the web, and executes the workflow to generate the user-specific product. In order for users to have the confidence to use such automatically generated products in real applications, complete and accurate provenance information must be provided to users, even before such user-specific products are generated. In this presentation, we will discuss the representation of geospatial data provenance, the automatic capturing of geospatial data provenance in the semantic web environment, and the management of geospatial data provenance. We will also discuss a prototype provenance management system that allows users to query and access provenance information.
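    As a rough illustration of capturing provenance while a workflow executes, the sketch below threads a lineage record through invented processing steps; the step names, dataset identifier, and record fields are hypothetical, not the prototype's actual design.

```python
def run_workflow(steps, source):
    """Apply each (name, fn) step in order, recording one lineage
    entry per derived entity as the workflow executes."""
    data = source["data"]
    lineage = [{"entity": source["id"], "activity": None}]
    for name, fn in steps:
        data = fn(data)
        lineage.append({"entity": f"{lineage[-1]['entity']}->{name}",
                        "activity": name})
    return data, lineage

# Invented two-step workflow over a toy "dataset".
steps = [("calibrate", lambda d: [x * 2 for x in d]),
         ("subset",    lambda d: d[:2])]
product, lineage = run_workflow(steps, {"id": "scene_001", "data": [1, 2, 3]})
print(product)                               # [2, 4]
print([e["activity"] for e in lineage])      # [None, 'calibrate', 'subset']
```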

  6. A Semantic Web Management Model for Integrative Biomedical Informatics

    PubMed Central

    Deus, Helena F.; Stanislaus, Romesh; Veiga, Diogo F.; Behrens, Carmen; Wistuba, Ignacio I.; Minna, John D.; Garner, Harold R.; Swisher, Stephen G.; Roth, Jack A.; Correa, Arlene M.; Broom, Bradley; Coombes, Kevin; Chang, Allen; Vogel, Lynn H.; Almeida, Jonas S.

    2008-01-01

    Background Data, data everywhere. The diversity and magnitude of the data generated in the Life Sciences defies automated articulation among complementary efforts. The additional need in this field for managing property and access permissions compounds the difficulty very significantly. This is particularly the case when the integration involves multiple domains and disciplines, even more so when it includes clinical and high-throughput molecular data. Methodology/Principal Findings The emergence of Semantic Web technologies brings the promise of meaningful interoperation between data and analysis resources. In this report we identify a core model for biomedical Knowledge Engineering applications and demonstrate how this new technology can be used to weave a management model where multiple intertwined data structures can be hosted and managed by multiple authorities in a distributed management infrastructure. Specifically, the demonstration is performed by linking data sources associated with the Lung Cancer SPORE awarded to The University of Texas MD Anderson Cancer Center at Houston and the Southwestern Medical Center at Dallas. A software prototype, available as open source at www.s3db.org, was developed and its proposed design has been made publicly available as an open source instrument for shared, distributed data management. Conclusions/Significance Semantic Web technologies have the potential to address the need for distributed and evolvable representations that are critical for systems biology and translational biomedical research. As this technology is incorporated into application development we can expect that both general-purpose productivity software and domain-specific software installed on our personal computers will become increasingly integrated with the relevant remote resources. In this scenario, the acquisition of a new dataset should automatically trigger the delegation of its analysis. PMID:18698353

  7. International Planetary Science Interoperability: The Venus Express Interface Prototype

    NASA Astrophysics Data System (ADS)

    Sanford Bussard, Stephen; Chanover, N.; Huber, L.; Trejo, I.; Hughes, J. S.; Kelly, S.; Guinness, E.; Heather, D.; Salgado, J.; Osuna, P.

    2009-09-01

    NASA's Planetary Data System (PDS) and ESA's Planetary Science Archive (PSA) have successfully demonstrated interoperability between planetary science data archives with the Venus Express (VEX) Interface prototype. Because VEX is an ESA mission, there is no memorandum of understanding to archive the data in the PDS. However, using a common communications protocol and common data standards, VEX mission science data ingested into the PSA can be accessed from a user interface at the Atmospheres Node of the PDS, making the science data accessible globally through two established planetary science data portals. The PSA makes scientific and engineering data from ESA's planetary missions accessible to the worldwide scientific community. The PSA consists of online services incorporating search, preview, download, notification and delivery basket functionality. Mission data included in the archive aside from VEX include data from the Giotto, Mars Express, Smart-1, Huygens, and Rosetta spacecraft and several ground-based cometary observations. All data are compatible with the Planetary Data System data standard. The PDS archives and distributes scientific data from NASA planetary missions, astronomical observations, and laboratory measurements. The PDS is sponsored by NASA's Science Mission Directorate. Its purpose is to ensure the long-term usability of NASA data and to stimulate advanced research. The architecture of the VEX prototype interface leverages components from both the PSA and PDS information system infrastructures, a user interface developed at the New Mexico State University, and the International Planetary Data Alliance (IPDA) Planetary Data Access Protocol (PDAP). The VEX Interoperability Project was a key project of the IPDA, whose objective is to ensure world-wide access to planetary data regardless of which agency collects and archives the data. A follow-on IPDA project will adapt the VEX Interoperability protocol for access in JAXA to the Venus Climate

  8. Progress of Interoperability in Planetary Research for Geospatial Data Analysis

    NASA Astrophysics Data System (ADS)

    Hare, T. M.; Gaddis, L. R.

    2015-12-01

    For nearly a decade there has been a push in the planetary science community to support interoperable methods of accessing and working with geospatial data. Common geospatial data products for planetary research include image mosaics, digital elevation or terrain models, geologic maps, geographic location databases (i.e., craters, volcanoes) or any data that can be tied to the surface of a planetary body (including moons, comets or asteroids). Several U.S. and international cartographic research institutions have converged on mapping standards that embrace standardized image formats that retain geographic information (e.g., GeoTiff, GeoJpeg2000), digital geologic mapping conventions, planetary extensions for symbols that comply with U.S. Federal Geographic Data Committee cartographic and geospatial metadata standards, and notably on-line mapping services as defined by the Open Geospatial Consortium (OGC). The latter includes defined standards such as the OGC Web Mapping Services (simple image maps), Web Feature Services (feature streaming), Web Coverage Services (rich scientific data streaming), and Catalog Services for the Web (data searching and discoverability). While these standards were developed for application to Earth-based data, they have been modified to support the planetary domain. The motivation to support common, interoperable data format and delivery standards is not only to improve access for higher-level products but also to address the increasingly distributed nature of the rapidly growing volumes of data. The strength of using an OGC approach is that it provides consistent access to data that are distributed across many facilities. While data-streaming standards are well-supported by both the more sophisticated tools used in the Geographic Information System (GIS) and remote sensing industries, they are also supported by many light-weight browsers, which facilitates large and small focused science applications and public use. Here we provide an

  9. Making OGC standards work - interoperability testing between meteorological web services

    NASA Astrophysics Data System (ADS)

    Siemen, Stephan; Little, Chris; Voidrot, Marie-Françoise

    2015-04-01

    The Meteorology and Oceanography Domain Working Group (MetOcean DWG) is a community-oriented working group of the Open Geospatial Consortium (OGC). The group does not directly revise OGC standards, but rather enables collaboration and communication between groups with meteorological and oceanographic interests. The MetOcean DWG maintains a list of topics of interest to the meteorological and oceanographic communities for discussion, prioritises activities, defines feedback to the OGC Standards Working Groups (SWGs), and performs interoperability experiments. One of the activities of the MetOcean DWG is the definition of Best Practices documents for common OGC standards, such as WMS and WCS. This is necessary since meteorological data have additional complexities in time, elevation and multi-model runs, including ensembles. To guarantee interoperability in practice it is important to test each other's systems and ensure standards are implemented correctly, but also to make recommendations to the DWG on the establishment of Best Practices guides. The European Working Group on Operational meteorological Workstations (EGOWS) was founded in 1990 as an informal forum for people working on the development of operational meteorological workstations. The annual EGOWS meeting offers an excellent platform for exchanging information and furthering co-operation among experts from NMSs, ECMWF and other institutes in the work with OGC standards. The presentation will give an update on the testing that was done during the June 2014 EGOWS meeting in Oslo and what has happened since. The presenter will also give an overview of the online resources for following the tests and how interested parties can contribute to future interoperability tests.
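    A flavor of what such interoperability tests exercise: a WMS 1.3.0 GetMap request carrying the TIME and ELEVATION dimensions that make meteorological layers hard to serve consistently across implementations. The server URL and layer name below are made up.

```python
from urllib.parse import urlencode

def getmap_url(base, layer, bbox, time, elevation, size=(800, 600)):
    """Build a WMS 1.3.0 GetMap URL with TIME/ELEVATION dimensions."""
    params = {
        "SERVICE": "WMS", "VERSION": "1.3.0", "REQUEST": "GetMap",
        "LAYERS": layer, "CRS": "EPSG:4326",
        "BBOX": ",".join(str(v) for v in bbox),   # axis order per CRS
        "WIDTH": size[0], "HEIGHT": size[1],
        "FORMAT": "image/png",
        "TIME": time,            # forecast valid time (ISO 8601)
        "ELEVATION": elevation,  # vertical level, e.g. pressure in hPa
    }
    return base + "?" + urlencode(params)

url = getmap_url("https://example.org/wms", "temperature_2m",
                 (-90, -180, 90, 180), "2015-04-01T12:00:00Z", 850)
print(url)
```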

  10. Building Interoperable Learning Objects Using Reduced Learning Object Metadata

    ERIC Educational Resources Information Center

    Saleh, Mostafa S.

    2005-01-01

    The new e-learning generation depends on Semantic Web technology to produce learning objects. As the production of these components is very costly, they should be produced and registered once, and reused and adapted in the same context or in other contexts as often as possible. To produce those components, developers should use learning standards…

  11. Semantic Web repositories for genomics data using the eXframe platform

    PubMed Central

    2014-01-01

    Background With the advent of inexpensive assay technologies, there has been an unprecedented growth in genomics data as well as in the number of databases in which they are stored. In these databases, sample annotation using ontologies and controlled vocabularies is becoming more common. However, the annotation is rarely available as Linked Data, in a machine-readable format, or for standardized queries using SPARQL. This makes large-scale reuse, or integration with other knowledge bases, very difficult. Methods To address this challenge, we have developed the second generation of our eXframe platform, a reusable framework for creating online repositories of genomics experiments. This second generation model now publishes Semantic Web data. To accomplish this, we created an experiment model that covers provenance, citations, external links, assays, biomaterials used in the experiment, and the data collected during the process. The elements of our model are mapped to classes and properties from various established biomedical ontologies. Resource Description Framework (RDF) data is automatically produced using these mappings and indexed in an RDF store with a built-in SPARQL Protocol and RDF Query Language (SPARQL) endpoint. Conclusions Using the open-source eXframe software, institutions and laboratories can create Semantic Web repositories of their experiments, integrate them with heterogeneous resources and make them interoperable with the vast Semantic Web of biomedical knowledge. PMID:25093072
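    The kind of standardized query such a SPARQL endpoint answers can be mimicked with a toy in-memory triple store and a single-pattern matcher; the triples and term prefixes below are invented for illustration, not eXframe's actual model.

```python
# Invented triples describing a genomics experiment and its assay.
triples = [
    ("exp:GSE1", "rdf:type", "sio:Experiment"),
    ("exp:GSE1", "obi:has_assay", "assay:1"),
    ("assay:1", "obi:uses_biomaterial", "cell:K562"),
]

def match(pattern):
    """Return variable bindings for one (s, p, o) pattern;
    terms starting with '?' are variables, as in SPARQL."""
    results = []
    for triple in triples:
        binding = {}
        ok = True
        for term, value in zip(pattern, triple):
            if term.startswith("?"):
                binding[term] = value
            elif term != value:
                ok = False
                break
        if ok:
            results.append(binding)
    return results

# Analogous to: SELECT ?a WHERE { exp:GSE1 obi:has_assay ?a }
print(match(("exp:GSE1", "obi:has_assay", "?a")))  # [{'?a': 'assay:1'}]
```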

  12. Extracting semantic lexicons from discharge summaries using machine learning and the C-Value method.

    PubMed

    Jiang, Min; Denny, Josh C; Tang, Buzhou; Cao, Hongxin; Xu, Hua

    2012-01-01

    Semantic lexicons that link words and phrases to specific semantic types such as diseases are valuable assets for clinical natural language processing (NLP) systems. Although terminological terms with predefined semantic types can be generated easily from existing knowledge bases such as the Unified Medical Language System (UMLS), they are often limited and do not have good coverage for narrative clinical text. In this study, we developed a method for building semantic lexicons from a clinical corpus. It extracts candidate semantic terms using a conditional random field (CRF) classifier and then selects terms using the C-Value algorithm. We applied the method to a corpus containing 10 years of discharge summaries from Vanderbilt University Hospital (VUH) and extracted 44,957 new terms for three semantic groups: Problem, Treatment, and Test. A manual analysis of 200 randomly selected terms not found in the UMLS demonstrated that 59% of them were meaningful new clinical concepts and 25% were lexical variants of existing concepts in the UMLS. Furthermore, we compared the effectiveness of corpus-derived and UMLS-derived semantic lexicons in the concept extraction task of the 2010 i2b2 clinical NLP challenge. Our results showed that the classifier with corpus-derived semantic lexicons as features achieved a better performance (F-score 82.52%) than that with UMLS-derived semantic lexicons as features (F-score 82.04%). We conclude that such corpus-based methods are effective for generating semantic lexicons, which may improve named entity recognition tasks and may aid in augmenting synonymy within existing terminologies. PMID:23304311
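    A minimal sketch of the C-Value ranking step described above: longer candidate terms score by length and frequency, while nested terms are discounted by the mean frequency of the candidates that contain them. The CRF extraction step is omitted, and the candidate terms and frequencies are invented.

```python
import math

def c_value(term, freq, candidates):
    """C-Value: log2(term length in words) * frequency, discounted by
    the mean frequency of longer candidates containing this term."""
    words = term.split()
    containers = [f for t, f in candidates.items()
                  if t != term and f" {term} " in f" {t} "]
    if containers:
        return math.log2(len(words)) * (freq - sum(containers) / len(containers))
    return math.log2(len(words)) * freq   # note: log2(1) = 0 for single words

# Invented candidate terms with corpus frequencies.
candidates = {"basal cell carcinoma": 5, "cell carcinoma": 9, "carcinoma": 20}
for term, freq in candidates.items():
    print(term, round(c_value(term, freq, candidates), 2))
```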

  13. Knowledge representation and management enabling intelligent interoperability - principles and standards.

    PubMed

    Blobel, Bernd

    2013-01-01

    Based on the paradigm changes for health, health services and underlying technologies as well as the need for at best comprehensive and increasingly automated interoperability, the paper addresses the challenge of knowledge representation and management for medical decision support. After introducing related definitions, a system-theoretical, architecture-centric approach to decision support systems (DSSs) and appropriate ways for representing them using systems of ontologies is given. Finally, existing and emerging knowledge representation and management standards are presented. The paper focuses on the knowledge representation and management part of DSSs, excluding the reasoning part from consideration. PMID:23542959

  14. ODIP - Ocean Data Interoperability Platform - developing interoperability Pilot project 1

    NASA Astrophysics Data System (ADS)

    Schaap, D.

    2014-12-01

    Europe, the USA, Australia and IOC/IODE are making significant progress in facilitating the discovery and access of marine data through the development, implementation, population and operation of national, regional or international distributed ocean and marine observing and data management infrastructures such as SeaDataNet, Geo-Seas, IOOS, the Australian Ocean Portal and the IODE Ocean Data Portal. All of these developments are resulting in the development and implementation of standards for the formats of metadata, data, data products, quality control methods and flags, and common vocabularies. They are also providing services for data discovery, viewing and downloading, and software tools for editing, conversions, communication, analysis and presentation, all of which are increasingly being adopted and used by their national and regional marine communities. The Ocean Data Interoperability Platform (ODIP) project is supported by the EU FP7 Research Infrastructures programme, the National Science Foundation (USA) and the Australian government, and started on 1 October 2012. ODIP includes all the major organisations engaged in ocean data management in the EU, US, and Australia. ODIP is also supported by the IOC-IODE, closely linking this activity with its Ocean Data Portal (ODP) and Ocean Data Standards (ODS) projects. The ODIP platform aims to ease interoperability between the regional marine data management infrastructures. Therefore it facilitates an organised dialogue between the key infrastructure representatives by means of publishing best practice, organising international workshops and fostering the development of common standards and interoperability solutions. These are evaluated and tested by means of prototype projects. The ODIP Prototype project 1 aims at establishing interoperability between the regional EU, USA and Australia data discovery and access services (SeaDataNet CDI, US NODC, and IMOS MCP) and contributing to the global GEOSS and IODE-ODP Portals. Use is

  15. Language interoperability mechanisms for high-performance scientific applications

    SciTech Connect

    Cleary, A; Kohn, S; Smith, S G; Smolinski, B

    1998-09-18

    Language interoperability is a difficult problem facing the developers and users of large numerical software packages. Language choices often hamper the reuse and sharing of numerical libraries, especially in a scientific computing environment that uses a breadth of programming languages, including C, C++, Java, various Fortran dialects, and scripting languages such as Python. In this paper, we propose a new approach to language interoperability for high-performance scientific applications based on Interface Definition Language (IDL) techniques. We investigate the modifications necessary to adapt traditional IDL approaches for use by the scientific community, including IDL extensions for numerical computing and issues involved in mapping IDLs to Fortran 77 and Fortran 90.

  16. Leveraging Python Interoperability Tools to Improve Sapphire's Usability

    SciTech Connect

    Gezahegne, A; Love, N S

    2007-12-10

    The Sapphire project at the Center for Applied Scientific Computing (CASC) develops and applies an extensive set of data mining algorithms for the analysis of large data sets. Sapphire's algorithms are currently available as a set of C++ libraries. However many users prefer higher level scripting languages such as Python for their ease of use and flexibility. In this report, we evaluate four interoperability tools for the purpose of wrapping Sapphire's core functionality with Python. Exposing Sapphire's functionality through a Python interface would increase its usability and connect its algorithms to existing Python tools.
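    As a minimal taste of the C-to-Python bridging such tools automate, ctypes (Python's standard foreign-function interface) can call a C function directly; on Unix, `ctypes.CDLL(None)` exposes symbols already loaded into the process, such as the C runtime's `strlen`. Wrapping Sapphire's richer C++ class libraries is what motivates higher-level wrapper generators rather than raw ctypes; this sketch only illustrates the general mechanism.

```python
import ctypes

# On Unix, CDLL(None) gives access to symbols in the running process,
# including the C library. Declaring argtypes/restype makes the call safe.
libc = ctypes.CDLL(None)
libc.strlen.argtypes = [ctypes.c_char_p]
libc.strlen.restype = ctypes.c_size_t

print(libc.strlen(b"interoperability"))  # 16
```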

  17. CCSDS Spacecraft Monitor and Control Mission Operations Interoperability Prototype

    NASA Technical Reports Server (NTRS)

    Lucord, Steve; Martinez, Lindolfo

    2009-01-01

    We are entering a new era in space exploration. Reduced operating budgets require innovative solutions to leverage existing systems to implement the capabilities of future missions. Custom solutions to fulfill mission objectives are no longer viable. Can NASA adopt international standards to reduce costs and increase interoperability with other space agencies? Can legacy systems be leveraged in a service oriented architecture (SOA) to further reduce operations costs? The Operations Technology Facility (OTF) at the Johnson Space Center (JSC) is collaborating with Deutsches Zentrum für Luft- und Raumfahrt (DLR) to answer these very questions. The Mission Operations and Information Management Services Area (MOIMS) Spacecraft Monitor and Control (SM&C) Working Group within the Consultative Committee for Space Data Systems (CCSDS) is developing the Mission Operations standards to address this problem space. The set of proposed standards presents a service oriented architecture to increase the level of interoperability among space agencies. The OTF and DLR are developing independent implementations of the standards as part of an interoperability prototype. This prototype will address three key components: validation of the SM&C Mission Operations protocol, exploration of the Object Management Group (OMG) Data Distribution Service (DDS), and the incorporation of legacy systems in a SOA. The OTF will implement the service providers described in the SM&C Mission Operations standards to create a portal for interaction with a spacecraft simulator. DLR will implement the service consumers to perform the monitor and control of the spacecraft. The specifications insulate the applications from the underlying transport layer. We will gain experience with a DDS transport layer as we delegate responsibility to the middleware and explore transport bridges to connect disparate middleware products. A SOA facilitates the reuse of software components. The prototype will leverage the

  18. Exploiting and developing interoperability between multidisciplinary environmental research infrastructures in Europe - step toward international collaboration

    NASA Astrophysics Data System (ADS)

    Sorvari, S.; Asmi, A.; Konijn, J.; Pursula, A.; Los, W.; Laj, P.; Kutsch, W. L.

    2014-12-01

    Environmental research infrastructures are long-term facilities, resources, and related services used by research communities to conduct environmental research in their respective fields. The focus of the European environmental research infrastructures is on in-situ or short-range remote sensing infrastructures. Each environmental research infrastructure (RI) has its own particular set of science questions and foci that it must address to achieve its objectives; however, every RI also provides its data and services to wider user communities and thus contributes to broader trans- and interdisciplinary science questions and grand environmental challenges. Thus, there are many issues that most of the RIs share, e.g. data collection, preservation, quality control, integration and availability, as well as providing computational capability to researchers. ENVRI - Common operation of European Research Infrastructures - was a collaborative project of major European environmental RIs working towards increased cooperation and interoperability between the infrastructures (www.envri.eu). From the technological point of view, one of the major results is the development of the common Environmental RIs Reference Model, a tool to effectively enhance interoperability among RIs. In addition to common technical solutions, cultural and human-related topics need to be tackled in parallel with the technical ones: topics such as open access, data policy issues (licenses, citation agreements, IPR agreements), technologies for machine-to-machine interaction, workflows, metadata, data annotations, and the training of the data scientists and research generalists needed to make it all work. These three interdependent resource capitals (technological, including the ENVRI Reference Model, cultural and human) will be discussed in the presentation.

  19. CINERGI: Community Inventory of EarthCube Resources for Geoscience Interoperability

    NASA Astrophysics Data System (ADS)

    Zaslavsky, Ilya; Bermudez, Luis; Grethe, Jeffrey; Gupta, Amarnath; Hsu, Leslie; Lehnert, Kerstin; Malik, Tanu; Richard, Stephen; Valentine, David; Whitenack, Thomas

    2014-05-01

    Organizing geoscience data resources to support cross-disciplinary data discovery, interpretation, analysis and integration is challenging because of different information models, semantic frameworks, metadata profiles, catalogs, and services used in different geoscience domains, not to mention different research paradigms and methodologies. The central goal of CINERGI, a new project supported by the US National Science Foundation through its EarthCube Building Blocks program, is to create a methodology and assemble a large inventory of high-quality information resources capable of supporting data discovery needs of researchers in a wide range of geoscience domains. The key characteristics of the inventory are: 1) collaboration with and integration of metadata resources from a number of large data facilities; 2) reliance on international metadata and catalog service standards; 3) assessment of resource "interoperability-readiness"; 4) ability to cross-link and navigate data resources, projects, models, researcher directories, publications, usage information, etc.; 5) efficient inclusion of "long-tail" data, which are not appearing in existing domain repositories; 6) data registration at feature level where appropriate, in addition to common dataset-level registration, and 7) integration with parallel EarthCube efforts, in particular focused on EarthCube governance, information brokering, service-oriented architecture design and management of semantic information. We discuss challenges associated with accomplishing CINERGI goals, including defining the inventory scope; managing different granularity levels of resource registration; interaction with search systems of domain repositories; explicating domain semantics; metadata brokering, harvesting and pruning; managing provenance of the harvested metadata; and cross-linking resources based on the linked open data (LOD) approaches. 
At the higher level of the inventory, we register domain-wide resources such as domain

  20. Semantic Fission through Dialect Fusion.

    ERIC Educational Resources Information Center

    Linn, Michael D.

    The linguistic atlas projects have provided much information on the regional distribution of pronunciation, vocabulary, and syntax and have given important evidence for a greater understanding of problems involved in semantic change, particularly in pointing out transition areas where dialects become fused. In a study supplementary to that…