Science.gov

Sample records for achieve semantic interoperability

  1. Achieving clinical statement interoperability using R-MIM and archetype-based semantic transformations.

    PubMed

    Kilic, Ozgur; Dogac, Asuman

    2009-07-01

    Effective use of electronic healthcare records (EHRs) has the potential to positively influence both the quality and the cost of health care. Consequently, sharing patients' EHRs is becoming a global priority in the healthcare information technology domain. This paper addresses the interoperability of EHR structure and content. It describes how two different EHR standards derived from the same reference information model (RIM) can be mapped to each other by using archetypes, refined message information model (R-MIM) derivations, and semantic tools. It is also demonstrated that well-defined R-MIM derivation rules help trace class properties back to their origins when the R-MIMs of two EHR standards are derived from the same RIM. Using well-defined rules also enables finding equivalences in the properties of the source and target EHRs. Yet an R-MIM still defines the concepts at a generic level. Archetypes (or templates), on the other hand, constrain an R-MIM to domain-specific concepts and hence provide finer-granularity semantics. Therefore, while mapping clinical statements between EHRs, we also make use of the archetype semantics. Derivation statements are inferred from the Web Ontology Language definitions of the RIM, the R-MIMs, and the archetypes. Finally, we show how to transform Health Level Seven clinical statement instances to EHRcom clinical statement instances and vice versa by using the generated mapping definitions.
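
    The derivation-based mapping described above lends itself to OWL reasoning. The sketch below (all IRIs invented, not the paper's actual definitions) shows the general pattern with rdflib and owlrl: two standard-specific properties are declared as derivations of the same RIM attribute, asserted equivalent, and instance data then becomes readable through either property.

```python
# Hypothetical sketch (IRIs invented, not the paper's definitions) of the
# derivation-based mapping idea: two standard-specific properties derive
# from the same RIM attribute, are asserted equivalent, and OWL-RL
# reasoning then lets instance data be read through either property.
# Requires: pip install rdflib owlrl
from rdflib import Graph, Namespace
from rdflib.namespace import OWL, RDFS
import owlrl

EX = Namespace("http://example.org/mapping#")
g = Graph()

# Derivation statements: both properties trace back to the same RIM origin.
g.add((EX.cda_effectiveTime, RDFS.subPropertyOf, EX.rim_Act_effectiveTime))
g.add((EX.ehrcom_time, RDFS.subPropertyOf, EX.rim_Act_effectiveTime))
# Archetype-level constraints narrow both to the same clinical concept,
# licensing an equivalence assertion between them.
g.add((EX.cda_effectiveTime, OWL.equivalentProperty, EX.ehrcom_time))
# One HL7 clinical statement instance, expressed with the CDA property.
g.add((EX.obs1, EX.cda_effectiveTime, EX.t20090701))

owlrl.DeductiveClosure(owlrl.OWLRL_Semantics).expand(g)
print((EX.obs1, EX.ehrcom_time, EX.t20090701) in g)  # True: mapped instance
```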

  2. MIDST: Interoperability for Semantic Annotations

    NASA Astrophysics Data System (ADS)

    Atzeni, Paolo; Del Nostro, Pierluigi; Paolozzi, Stefano

    In recent years, the interoperability of ontologies and databases has received considerable attention. However, most of the work has concentrated on specific problems (such as storing an ontology in a database or making database data available to ontologies) and referred to specific models for each of the two. Here, we propose an approach that aims at being more general and model independent. In fact, it works for different dialects for ontologies and for various data models for databases. Also, it supports translations in both directions (ontologies to databases and vice versa), and it allows for flexibility in the translations, so that customization is possible. The proposal extends recent work on schema and data translation (the MIDST project, which implements the ModelGen operator proposed in model management), which relies on a metamodel approach, where data models and variations thereof are described in a common framework and translations are built as compositions of elementary ones.

  3. Providing semantic interoperability between clinical care and clinical research domains.

    PubMed

    Laleci, Gokce Banu; Yuksel, Mustafa; Dogac, Asuman

    2013-03-01

    Improving the efficiency with which clinical research studies are conducted can lead to faster medication innovation and decreased time to market for new drugs. To increase this efficiency, the parties involved in a regulated clinical research study, namely, the sponsor, the clinical investigator and the regulatory body, each with their own software applications, need to exchange data seamlessly. However, currently, the clinical research and the clinical care domains are quite disconnected because each uses different standards and terminology systems. In this article, we describe an initial implementation of the Semantic Framework developed within the scope of the SALUS project to achieve interoperability between the clinical research and the clinical care domains. In our Semantic Framework, the core ontology developed for semantic mediation is based on the shared conceptual model of both of these domains provided by the BRIDG initiative. The core ontology is then aligned with the extracted semantic models of the existing clinical care and research standards as well as with the ontological representations of the terminology systems to create a model of meaning for enabling semantic mediation. Although SALUS is a research and development effort rather than a product, the current SALUS knowledge base contains around 4.7 million triples representing the BRIDG DAM, the HL7 CDA model, CDISC standards and several terminology ontologies. In order to keep the reasoning process within acceptable limits without sacrificing the quality of mediation, we took an engineering approach by developing a number of heuristic mechanisms. The results indicate that it is possible to build a robust and scalable semantic framework with a solid theoretical foundation for achieving interoperability between the clinical research and clinical care domains. PMID:23008263
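
    To make the mediation idea concrete, the following minimal sketch queries a SALUS-style triple store for care-domain concepts aligned with a research-domain concept. The endpoint URL, prefixes and IRIs are illustrative assumptions, not the project's actual vocabulary.

```python
# Hypothetical sketch: querying a SALUS-style knowledge base for clinical
# care concepts aligned with a research-domain (BRIDG-based) concept.
# Endpoint URL and IRIs are illustrative, not the project's own.
# Requires: pip install SPARQLWrapper
from SPARQLWrapper import SPARQLWrapper, JSON

sparql = SPARQLWrapper("http://localhost:3030/salus/sparql")  # assumed endpoint
sparql.setQuery("""
    PREFIX skos: <http://www.w3.org/2004/02/skos/core#>
    SELECT ?careConcept ?label WHERE {
        ?careConcept skos:exactMatch <http://example.org/bridg#AdverseEvent> ;
                     skos:prefLabel ?label .
    }
""")
sparql.setReturnFormat(JSON)
for row in sparql.query().convert()["results"]["bindings"]:
    print(row["careConcept"]["value"], row["label"]["value"])
```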

  4. Achieving interoperability for accountable care.

    PubMed

    Bordenick, Jennifer Covich; Okubo, Tracy H; Kontur, Alex; Siddiqui, Nadeen

    2015-02-01

    Based on findings of a recent survey, accountable care organizations should keep eight points in mind as they seek to establish interoperability among their provider constituents: Create a shared governance structure to make IT decisions. Conduct a readiness assessment and gap analysis. Reconfigure the technology infrastructure and processes to support new value-based care delivery protocols. Consider targeting programs around high-risk groups. Develop real-time data-sharing systems. Ensure privacy and security policies and procedures are in place. Assess and address workforce issues expeditiously. Participate in broader interoperability efforts. PMID:26665540

  5. Processes for Achieving Interoperability in GEOSS

    NASA Astrophysics Data System (ADS)

    Thomas, D.; Khalsa, S. S.; Nativi, S.; Ahern, T.; Shibasaki, R.

    2007-12-01

    GEOSS, the Global Earth Observing System of Systems, is being built from existing systems and initiatives, with an emphasis on the creation of synergies among GEOSS components that provide increased benefits to society. The goal is to leverage existing programs and established standards wherever possible, and to broaden convergence of systems based on agreed interoperability arrangements. This talk will describe the specific approaches that GEOSS has proposed for achieving interoperability among its component systems and will give an overview of the GEOSS Interoperability Process Pilot Project (IP3). The IP3 was conceived as a way to exercise the process that has been defined for reaching interoperability arrangements. We describe the phases and status of the IP3, which begins with identification of the system components, and the standards, interface protocols and interoperability agreements currently in use by these systems. This information is captured in web-accessible catalogs and registries that are part of the core GEOSS architecture. Four systems/disciplines were initially identified as sources for the pilot project, covering weather and climate, seismology, biodiversity, and the water cycle. This selection was based on the desire to have participation from diverse disciplines and the commitments of representatives from the disciplines to actively support the process. Systems contributed to GEOSS are built to serve particular needs, but those systems should also be designed or adapted so their inputs and outputs support interoperability with other systems. Consequently, we focus on interoperability situations that are surfaced by actual requirements to interface with other GEOSS-affiliated systems through what are termed GEOSS interoperability arrangements. Use case scenarios were developed that required the exchange of data and information between the identified systems. In designing interfaces to support interoperability among two or more component systems of

  6. Semantic Interoperability in Clinical Decision Support Systems: A Systematic Review.

    PubMed

    Marco-Ruiz, Luis; Bellika, Johan Gustav

    2015-01-01

    The interoperability of Clinical Decision Support (CDS) systems with other health information systems has become one of the main limitations to their broad adoption. Semantic interoperability must be guaranteed in order to share CDS modules across different health information systems. Currently, numerous standards for different purposes are available to enable the interoperability of CDS systems. We performed a literature review to identify and provide an overview of the available standards that enable CDS interoperability in the areas of clinical information, decision logic, terminology, and web service interfaces. PMID:26262260

  7. Achieving Interoperability through Data Virtualization

    NASA Astrophysics Data System (ADS)

    Xing, Z.

    2015-12-01

    Data interoperability is a challenging problem, and different approaches exist. In this presentation, we would like to share our experience on webification science (w10n-sci), an information technology that virtualizes arbitrary data resources and makes them directly usable via a simple and uniform application programming interface. W10n-sci has been successfully applied to all major NASA scientific disciplines and is used by an increasing number of missions and projects. We will provide an overview of w10n-sci and elaborate on how it can help data users in a data world where diversity always prevails.
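
    The abstract does not spell out the w10n-sci URL grammar, so the following is only a plausible illustration of the access pattern it describes: a data granule is exposed as a URL, and both metadata and array slices are fetched uniformly over HTTP. The host, path and parameter names are assumptions.

```python
# Hypothetical sketch of calling a w10n-style API: data resources are
# exposed as URLs, with metadata and array slices fetched uniformly.
# The host, path, and URL grammar here are illustrative only.
import requests

base = "https://example.nasa.gov/w10n/granule.nc"  # assumed virtualized granule

meta = requests.get(base + "/", params={"output": "json"}).json()
print(meta)   # e.g. names and types of the variables inside the granule

slab = requests.get(base + "/temperature[0:10]", params={"output": "json"}).json()
print(slab)   # first ten values of the 'temperature' variable
```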

  8. Cross border semantic interoperability for clinical research: the EHR4CR semantic resources and services.

    PubMed

    Daniel, Christel; Ouagne, David; Sadou, Eric; Forsberg, Kerstin; McGilchrist, Mark; Zapletal, Eric; Paris, Nicolas; Hussain, Sajjad; Jaulent, Marie-Christine; Kalra, Dipak

    2016-01-01

    With the development of platforms enabling the use of routinely collected clinical data in the context of international clinical research, scalable solutions for cross-border semantic interoperability need to be developed. Within the context of the IMI EHR4CR project, we first defined the requirements and evaluation criteria of the EHR4CR semantic interoperability platform and then developed the semantic resources and supportive services and tooling to assist hospital sites in standardizing their data to allow the execution of the project use cases. The experience gained from the evaluation of the EHR4CR platform, accessing semantically equivalent data elements across 11 European participating EHR systems from 5 countries, demonstrated how far the mediation model and mapping efforts met the expected requirements of the project. Developers of semantic interoperability platforms are beginning to address a core set of requirements in order to reach the goal of developing cross-border semantic integration of data. PMID:27570649

  9. Semantics-Based Interoperability Framework for the Geosciences

    NASA Astrophysics Data System (ADS)

    Sinha, A.; Malik, Z.; Raskin, R.; Barnes, C.; Fox, P.; McGuinness, D.; Lin, K.

    2008-12-01

    Interoperability between heterogeneous data, tools and services is required to transform data to knowledge. To meet geoscience-oriented societal challenges such as the forcing of climate change induced by volcanic eruptions, we suggest the need to develop semantic interoperability for data, services, and processes. Because such scientific endeavors require integration of multiple databases associated with global enterprises, implicit semantic-based integration is impossible. Instead, explicit semantics are needed to facilitate interoperability and integration. Although different types of integration models are available (syntactic or semantic), we suggest that semantic interoperability is likely to be the most successful pathway. Clearly, the geoscience community would benefit from utilization of existing XML-based data models, such as GeoSciML, WaterML, etc., to rapidly advance semantic interoperability and integration. We recognize that such integration will require a "meanings-based search, reasoning and information brokering", which will be facilitated through inter-ontology relationships (ontologies defined for each discipline). We suggest that markup languages (MLs) and ontologies can be seen as "data integration facilitators", working at different abstraction levels. Therefore, we propose to use an ontology-based data registration and discovery approach to complement markup languages through semantic data enrichment. Ontologies allow the use of formal and descriptive logic statements which permit expressive query capabilities for data integration through reasoning. We have developed domain ontologies (EPONT) to capture the concepts behind data. EPONT ontologies are associated with existing ontologies such as SUMO, DOLCE and SWEET. Although significant efforts have gone into developing data (object) ontologies, we advance the idea of developing semantic frameworks for additional ontologies that deal with processes and services. This evolutionary step will

  10. Conceptual Model Formalization in a Semantic Interoperability Service Framework: Transforming Relational Database Schemas to OWL.

    PubMed

    Bravo, Carlos; Suarez, Carlos; González, Carolina; López, Diego; Blobel, Bernd

    2014-01-01

    Healthcare information is distributed through multiple heterogeneous and autonomous systems. Access to, and sharing of, distributed information sources are a challenging task. To contribute to meeting this challenge, this paper presents a formal, complete and semi-automatic transformation service from relational databases to the Web Ontology Language (OWL). The proposed service makes use of an algorithm that allows the transformation of several data models of different domains by deploying mainly inheritance rules. The paper emphasizes the relevance of integrating the proposed approach into an ontology-based interoperability service to achieve semantic interoperability.
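
    The paper's exact algorithm is not reproduced here, but the core RDB-to-OWL idea it builds on can be sketched in a few lines: tables map to owl:Class, columns to owl:DatatypeProperty, and foreign keys to owl:ObjectProperty. The schema and namespace below are invented for illustration.

```python
# Minimal illustration of the general RDB-to-OWL pattern (not the paper's
# exact algorithm): tables become OWL classes, columns become datatype
# properties, and foreign keys become object properties linking classes.
# Schema and namespace are invented. Requires: pip install rdflib
from rdflib import Graph, Namespace
from rdflib.namespace import OWL, RDF, RDFS

EX = Namespace("http://example.org/hospital#")
g = Graph()
g.bind("ex", EX)

schema = {
    "Patient": {"columns": ["name", "birth_date"], "fks": {}},
    "Encounter": {"columns": ["date"], "fks": {"patient_id": "Patient"}},
}

for table, spec in schema.items():
    cls = EX[table]
    g.add((cls, RDF.type, OWL.Class))
    for col in spec["columns"]:
        prop = EX[f"{table}_{col}"]
        g.add((prop, RDF.type, OWL.DatatypeProperty))
        g.add((prop, RDFS.domain, cls))
    for fk, target in spec["fks"].items():
        prop = EX[f"{table}_{fk}"]
        g.add((prop, RDF.type, OWL.ObjectProperty))
        g.add((prop, RDFS.domain, cls))
        g.add((prop, RDFS.range, EX[target]))

print(g.serialize(format="turtle"))
```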

  11. An adaptive semantic based mediation system for data interoperability among Health Information Systems.

    PubMed

    Khan, Wajahat Ali; Khattak, Asad Masood; Hussain, Maqbool; Amin, Muhammad Bilal; Afzal, Muhammad; Nugent, Christopher; Lee, Sungyoung

    2014-08-01

    Heterogeneity in the management of complex medical data obstructs the attainment of data-level interoperability among Health Information Systems (HIS). This diversity is dependent on the compliance of HISs with different healthcare standards. Its solution demands a mediation system for the accurate interpretation of data in different heterogeneous formats for achieving data interoperability. We propose ARIEN (AdapteR Interoperability ENgine), an adaptive mediation system that arbitrates between HISs compliant with different healthcare standards for accurate and seamless information exchange to achieve data interoperability. ARIEN stores the semantic mapping information between different standards in the Mediation Bridge Ontology (MBO) using ontology matching techniques. These mappings are provided by our System for Parallel Heterogeneity (SPHeRe) matching system and Personalized-Detailed Clinical Model (P-DCM) approach to guarantee accuracy of mappings. The effectiveness of the mappings stored in the MBO is realized by evaluating the accuracy of the transformation process among different standard formats. We evaluated our proposed system with the transformation process of medical records between the Clinical Document Architecture (CDA) and Virtual Medical Record (vMR) standards. The transformation process achieved over 90% accuracy in conversion between the CDA and vMR standards using a pattern-oriented approach from the MBO. The proposed mediation system improves the overall communication process between HISs. It provides an accurate and seamless medical information exchange to ensure data interoperability and timely healthcare services to patients.

  12. A secure semantic interoperability infrastructure for inter-enterprise sharing of electronic healthcare records.

    PubMed

    Boniface, Mike; Watkins, E Rowland; Saleh, Ahmed; Dogac, Asuman; Eichelberg, Marco

    2006-01-01

    Healthcare professionals need access to accurate and complete healthcare records for effective assessment, diagnosis and treatment of patients. The non-interoperability of healthcare information systems means that inter-enterprise access to a patient's history over many distributed encounters is difficult to achieve. The ARTEMIS project has developed a secure semantic web service infrastructure for the interoperability of healthcare information systems. Healthcare professionals share services and medical information using a web service annotation and mediation environment based on functional and clinical semantics derived from healthcare standards. Healthcare professionals discover medical information about individuals using a patient identification protocol based on pseudonymous information. The management of care pathways and access to medical information is based on a well-defined business process allowing healthcare providers to negotiate collaboration and data access agreements within the context of strict legislative frameworks.

  13. Semantic Integration for Marine Science Interoperability Using Web Technologies

    NASA Astrophysics Data System (ADS)

    Rueda, C.; Bermudez, L.; Graybeal, J.; Isenor, A. W.

    2008-12-01

    The Marine Metadata Interoperability Project, MMI (http://marinemetadata.org) promotes the exchange, integration, and use of marine data through enhanced data publishing, discovery, documentation, and accessibility. A key effort is the definition of an Architectural Framework and Operational Concept for Semantic Interoperability (http://marinemetadata.org/sfc), which is complemented with the development of tools that realize critical use cases in semantic interoperability. In this presentation, we describe a set of such Semantic Web tools that support important interoperability tasks, ranging from the creation of controlled vocabularies and the mapping of terms across multiple ontologies, to the online registration, storage, and search services needed to work with the ontologies (http://mmisw.org). This set of services uses Web standards and technologies, including the Resource Description Framework (RDF), the Web Ontology Language (OWL), Web services, and toolkits for Rich Internet Application development. We will describe the following components: MMI Ontology Registry: The MMI Ontology Registry and Repository provides registry and storage services for ontologies. Entries in the registry are associated with projects defined by the registered users. Also, sophisticated search functions, for example according to metadata items and vocabulary terms, are provided. Client applications can submit search requests using the W3C SPARQL Query Language for RDF. Voc2RDF: This component converts an ASCII comma-delimited set of terms and definitions into an RDF file. Voc2RDF facilitates the creation of controlled vocabularies by using a simple form-based user interface. Created vocabularies and their descriptive metadata can be submitted to the MMI Ontology Registry for versioning and community access. VINE: The Vocabulary Integration Environment component allows the user to map vocabulary terms across multiple ontologies. Various relationships can be established, for example
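
    The Voc2RDF component described above is easy to picture with a small sketch (not MMI's actual implementation): a comma-delimited list of terms and definitions becomes a set of SKOS concepts that can then be registered and versioned. The namespace and sample row are invented.

```python
# Sketch of a Voc2RDF-like conversion (not MMI's implementation): turn a
# comma-delimited list of terms and definitions into SKOS concepts.
# Requires: pip install rdflib
import csv, io
from rdflib import Graph, Literal, Namespace, RDF
from rdflib.namespace import SKOS

VOCAB = Namespace("http://example.org/vocab/")  # illustrative namespace

raw = io.StringIO("term,definition\nsalinity,Amount of dissolved salt in water\n")
g = Graph()
g.bind("skos", SKOS)
for row in csv.DictReader(raw):
    concept = VOCAB[row["term"]]
    g.add((concept, RDF.type, SKOS.Concept))
    g.add((concept, SKOS.prefLabel, Literal(row["term"])))
    g.add((concept, SKOS.definition, Literal(row["definition"])))

print(g.serialize(format="turtle"))
```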

  14. An approach to define semantics for BPM systems interoperability

    NASA Astrophysics Data System (ADS)

    Rico, Mariela; Caliusco, María Laura; Chiotti, Omar; Rosa Galli, María

    2015-04-01

    This article proposes defining semantics for Business Process Management systems interoperability through an ontology of the Electronic Business Documents (EBD) used to interchange the information required to perform cross-organizational processes. The semantic model generated allows aligning enterprises' business processes to support cross-organizational processes by matching the business ontology of each business partner with the EBD ontology. The result is a flexible software architecture that allows dynamically defining cross-organizational business processes by reusing the EBD ontology. For developing the semantic model, a method is presented which is based on a strategy for discovering entity features whose interpretation depends on the context, and on representing them to enrich the ontology. The proposed method complements ontology learning techniques that cannot infer semantic features not represented in data sources. In order to improve the representation of these entity features, the method proposes using widely accepted ontologies for representing time entities and relations, physical quantities, measurement units, official country names, and currencies and funds, among others. When ontology reuse is not possible, the method proposes identifying whether the feature is simple or complex, and defines a strategy to be followed. An empirical validation of the approach has been performed through a case study.

  15. A federated semantic metadata registry framework for enabling interoperability across clinical research and care domains.

    PubMed

    Sinaci, A Anil; Laleci Erturkmen, Gokce B

    2013-10-01

    In order to enable secondary use of Electronic Health Records (EHRs) by bridging the interoperability gap between the clinical care and research domains, in this paper a unified methodology and the supporting framework are introduced, bringing together the power of metadata registries (MDR) and semantic web technologies. We introduce a federated semantic metadata registry framework by extending the ISO/IEC 11179 standard, and enable integration of data element registries through Linked Open Data (LOD) principles, where each Common Data Element (CDE) can be uniquely referenced, queried and processed to enable syntactic and semantic interoperability. Each CDE and its components are maintained as LOD resources enabling semantic links with other CDEs, terminology systems and implementation-dependent content models, hence facilitating semantic search, more effective reuse and semantic interoperability across different application domains. There are several important efforts addressing semantic interoperability in the healthcare domain, such as the IHE DEX profile proposal, CDISC SHARE and CDISC2RDF. Our architecture complements these by providing a framework to interlink existing data element registries and repositories, multiplying their potential for semantic interoperability to a greater extent. The open source implementation of the federated semantic MDR framework presented in this paper is the core of the semantic interoperability layer of the SALUS project, which enables the execution of post-marketing safety analysis studies on top of existing EHR systems. PMID:23751263
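
    A minimal sketch of the central idea, with invented IRIs and an invented mdr: vocabulary rather than the actual SALUS model: each Common Data Element is a dereferenceable linked-data resource carrying semantic links to terminology systems, so registries can be interlinked and queried uniformly.

```python
# Sketch of a Common Data Element exposed as a Linked Open Data resource
# (IRIs and the mdr: vocabulary are invented, not the actual SALUS model).
# Requires: pip install rdflib
from rdflib import Graph, URIRef
from rdflib.namespace import SKOS

ttl = """
@prefix mdr:  <http://example.org/mdr#> .
@prefix skos: <http://www.w3.org/2004/02/skos/core#> .

<http://example.org/cde/systolic-bp>
    a mdr:DataElement ;
    skos:prefLabel "Systolic blood pressure" ;
    skos:exactMatch <http://snomed.info/id/271649006> .
"""
g = Graph().parse(data=ttl, format="turtle")

# A federated registry can now find every CDE linked to the SNOMED CT
# concept 271649006 (systolic blood pressure), wherever it is published.
snomed = URIRef("http://snomed.info/id/271649006")
for cde in g.subjects(SKOS.exactMatch, snomed):
    print(cde)
```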

  16. CityGML - Interoperable semantic 3D city models

    NASA Astrophysics Data System (ADS)

    Gröger, Gerhard; Plümer, Lutz

    2012-07-01

    CityGML is the international standard of the Open Geospatial Consortium (OGC) for the representation and exchange of 3D city models. It defines the three-dimensional geometry, topology, semantics and appearance of the most relevant topographic objects in urban or regional contexts. These definitions are provided in different, well-defined Levels-of-Detail (multiresolution model). The focus of CityGML is on the semantic aspects of 3D city models, their structures, taxonomies and aggregations, allowing users to employ virtual 3D city models for advanced analysis and visualization tasks in a variety of application domains such as urban planning, indoor/outdoor pedestrian navigation, environmental simulations, cultural heritage, or facility management. This is in contrast to purely geometrical/graphical models such as KML, VRML, or X3D, which do not provide sufficient semantics. CityGML is based on the Geography Markup Language (GML), which provides a standardized geometry model. Due to this model and its well-defined semantics and structures, CityGML facilitates interoperable data exchange in the context of geo web services and spatial data infrastructures. Since its standardization in 2008, CityGML has come into use on a worldwide scale: tools from notable companies in the geospatial field provide CityGML interfaces. Many applications and projects use this standard. CityGML also has a strong impact on science: numerous approaches use CityGML, particularly its semantics, for disaster management, emergency responses, or energy-related applications as well as for visualizations, or they contribute to CityGML, improving its consistency and validity, or use CityGML, particularly its different Levels-of-Detail, as a source or target for generalizations. This paper gives an overview of CityGML, its underlying concepts, its Levels-of-Detail, how to extend it, its applications, its likely future development, and the role it plays in scientific research. Furthermore, its

  17. Achieving Interoperability in GEOSS - How Close Are We?

    NASA Astrophysics Data System (ADS)

    Arctur, D. K.; Khalsa, S. S.; Browdy, S. F.

    2010-12-01

    A primary goal of the Global Earth Observing System of Systems (GEOSS) is improving the interoperability between the observational, modelling, data assimilation, and prediction systems contributed by member countries. The GEOSS Common Infrastructure (GCI) comprises the elements designed to enable discovery of and access to these diverse data and information sources. But to what degree can the mechanisms for accessing these data, and the data themselves, be considered interoperable? Will the separate efforts by Communities of Practice within GEO to build their own portals, such as for Energy, Biodiversity, and Air Quality, lead to fragmentation or synergy? What communication and leadership do we need with these communities to improve interoperability both within and across such communities? The Standards and Interoperability Forum (SIF) of GEO's Architecture and Data Committee has assessed progress towards achieving the goal of global interoperability and made recommendations regarding evolution of the architecture and overall data strategy to ensure fulfillment of the GEOSS vision. This presentation will highlight the results of this study and directions for further work.

  1. Reporting Device Observations for semantic interoperability of surgical devices and clinical information systems.

    PubMed

    Andersen, Björn; Ulrich, Hannes; Rehmann, Daniel; Kock, Ann-Kristin; Wrage, Jan-Hinrich; Ingenerf, Josef

    2015-08-01

    Service-oriented medical device architectures are making the step from interdisciplinary research projects to international standardisation: a new set of IEEE 11073 proposals shall pave the way to industry acceptance. This expected availability of device observations in a standardised representation enables secondary usage if interoperability with clinical information systems can be achieved. The Device Observation Reporter (DOR) described in this work is a gateway that connects these realms. After a user chooses a selection of signals from different devices in the digital operating room, the DOR records these semantically described values for a specified duration. Upon completion, the signal descriptions and values are transformed to Health Level Seven version 2 messages and sent to a hospital information system/electronic health record system within the clinical IT network. The successful integration of device data for documentation and usage in clinical information systems can further leverage the novel device communication standard proposals. Complementing these, an Integrating the Healthcare Enterprise profile will aid commercial implementers in achieving interoperability. Their solutions could incorporate clinical knowledge to autonomously select signal combinations and generate reports of diagnostic and interventional procedures, thus saving time and effort for surgical documentation.
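
    The gateway's output format, HL7 version 2, is a pipe-delimited segment syntax, so the transformation step can be pictured with a small sketch. The segment layout below follows standard HL7 v2 conventions, but the field values, device identifier and observation code are illustrative, not the DOR's actual output.

```python
# Sketch of the kind of HL7 version 2 message a DOR-like gateway emits.
# Segment layout follows standard HL7 v2; the values are illustrative.
from datetime import datetime, timezone

def oru_r01(device_id: str, code: str, label: str, value: float, unit: str) -> str:
    """Assemble a minimal ORU^R01 message for one device observation."""
    ts = datetime.now(timezone.utc).strftime("%Y%m%d%H%M%S")
    segments = [
        f"MSH|^~\\&|{device_id}|OR1|HIS|HOSPITAL|{ts}||ORU^R01|MSG0001|P|2.6",
        "PID|1||12345^^^HOSPITAL^MR",                 # illustrative patient ID
        f"OBR|1|||{code}^{label}",
        f"OBX|1|NM|{code}^{label}||{value}|{unit}|||||F",  # one numeric result
    ]
    return "\r".join(segments)  # HL7 v2 separates segments with carriage returns

print(oru_r01("PUMP01", "MDC_FLOW_FLUID_PUMP", "Pump flow rate", 4.2, "mL/h"))
```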

  2. Facilitating Semantic Interoperability Among Ocean Data Systems: ODIP-R2R Student Outcomes

    NASA Astrophysics Data System (ADS)

    Stocks, K. I.; Chen, Y.; Shepherd, A.; Chandler, C. L.; Dockery, N.; Elya, J. L.; Smith, S. R.; Ferreira, R.; Fu, L.; Arko, R. A.

    2014-12-01

    With informatics providing an increasingly important set of tools for geoscientists, it is critical to train the next generation of scientists in information and data techniques. The NSF-supported Rolling Deck to Repository (R2R) Program works with the academic fleet community to routinely document, assess, and preserve the underway sensor data from U.S. research vessels. The Ocean Data Interoperability Platform (ODIP) is an EU-US-Australian collaboration fostering interoperability among regional e-infrastructures through workshops and joint prototype development. The need to align terminology between systems is a common challenge across all of the ODIP prototypes. Five R2R students were supported to address aspects of semantic interoperability within ODIP: (1) developing a vocabulary matching service that links terms from different vocabularies with similar concepts; the service implements the Google Refine reconciliation service interface so that users can leverage the Google Refine application as a friendly user interface while linking different vocabulary terms; (2) developing Resource Description Framework (RDF) resources that map Shipboard Automated Meteorological and Oceanographic System (SAMOS) vocabularies to internationally served vocabularies; each SAMOS vocabulary term (data parameter and quality control flag) is described as an RDF resource page, and these RDF resources allow for enhanced discoverability and retrieval of SAMOS data by enabling data searches based on parameter; (3) improving data retrieval and interoperability by exposing data and mapped vocabularies using Semantic Web technologies; we have collaborated with ODIP participating organizations to build a generalized data model that will be used to populate a SPARQL endpoint and provide expressive querying over our data files; (4) mapping local and regional vocabularies used by R2R to those used by ODIP partners; this work is described more fully in a companion poster. Making published Linked Data
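
    The first outcome builds on the Google Refine (now OpenRefine) reconciliation protocol, in which a client posts a JSON "queries" object and receives candidate matches ranked by score. A minimal client sketch follows; the endpoint URL and query term are assumptions.

```python
# Sketch of a client for a Google Refine / OpenRefine reconciliation
# endpoint, like the vocabulary matching service described above.
# The endpoint URL is hypothetical; the wire format is the standard
# reconciliation API (a JSON "queries" object, results ranked by score).
import json
import requests

ENDPOINT = "https://example.org/ont/reconcile"  # assumed service URL

queries = {"q0": {"query": "sea surface temperature"}}
resp = requests.post(ENDPOINT, data={"queries": json.dumps(queries)})
for match in resp.json()["q0"]["result"]:
    print(match["id"], match["name"], match["score"])
```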

  3. RuleML-Based Learning Object Interoperability on the Semantic Web

    ERIC Educational Resources Information Center

    Biletskiy, Yevgen; Boley, Harold; Ranganathan, Girish R.

    2008-01-01

    Purpose: The present paper aims to describe an approach for building the Semantic Web rules for interoperation between heterogeneous learning objects, namely course outlines from different universities, and one of the rule uses: identifying (in)compatibilities between course descriptions. Design/methodology/approach: As proof of concept, a rule…

  4. An Approach to Semantic Interoperability for Improved Capability Exchanges in Federations of Systems

    ERIC Educational Resources Information Center

    Moschoglou, Georgios

    2013-01-01

    This study seeks an affirmative answer to the question whether a knowledge-based approach to system of systems interoperation using semantic web standards and technologies can provide the centralized control of the capability for exchanging data and services lacking in a federation of systems. Given the need to collect and share real-time…

  5. Sharing meanings: developing interoperable semantic technologies to enhance reproducibility in earth and environmental science research

    NASA Astrophysics Data System (ADS)

    Schildhauer, M.

    2015-12-01

    Earth and environmental scientists are familiar with the entities, processes, and theories germane to their field of study, and comfortable collecting and analyzing data in their area of interest. Yet, while there appears to be consistency and agreement as to the scientific "terms" used to describe features in their data and analyses, aside from a few fundamental physical characteristics, such as mass or velocity, there can be broad tolerances, if not considerable ambiguity, in how many earth science "terms" map to the underlying "concepts" that they actually represent. This ambiguity in meanings, or "semantics", creates major problems for scientific reproducibility. It greatly impedes the ability to replicate results by making it difficult to determine the specifics of the intended meanings of terms such as deforestation or carbon flux, as to scope, composition, magnitude, etc. In addition, semantic ambiguity complicates assemblage of comparable data for reproducing results, due to ambiguous or idiosyncratic labels for measurements, such as percent cover of forest, where the term "forest" is undefined; or where a reported output of "total carbon emissions" might include CO2 emissions but not methane emissions. In this talk, we describe how the NSF-funded DataONE repository for earth and environmental science data (http://dataone.org) is using W3C-standard languages (RDF/OWL) to build an ontology for clarifying concepts embodied in heterogeneous data and model outputs. With an initial focus on carbon cycling concepts using terrestrial biospheric model outputs and LTER productivity data, we describe how we are achieving interoperability with "semantic vocabularies" (or ontologies) from aligned earth and life science domains, including OBO Foundry ontologies such as ENVO and BCO, the ISO/OGC O&M, and the NSF EarthCube GeoLink project. Our talk will also discuss best practices that may be helpful for other groups interested in constructing their own

  6. Interoperability and different ways of knowing: How semantics can aid in cross-cultural understanding

    NASA Astrophysics Data System (ADS)

    Pulsifer, P. L.; Parsons, M. A.; Duerr, R. E.; Fox, P. A.; Khalsa, S. S.; McCusker, J. P.; McGuinness, D. L.

    2012-12-01

    differences in its application. Furthermore, it is an analog encoding scheme whose meaning has evolved over time. By semantically modeling the egg code, its subtle variations, and how it connects to other data, we illustrate a mechanism for translating across data formats and representations. But there are limits to what semantically modeling the egg code can achieve. The egg code and common operational sea ice formats do not address community needs, notably the timing and processes of sea ice freeze-up and break-up, which have profound impact on local hunting, shipping, oil exploration, and safety. We work with local experts from four very different Indigenous communities and scientific creators of sea ice forecasts to establish an understanding of concepts and terminology related to fall freeze-up and spring break-up from the individually represented regions. This helps expand our conceptions of sea ice while also aiding in understanding across cultures and communities, and in passing knowledge to younger generations. This is an early step to expanding concepts of interoperability to very different ways of knowing, to make data truly relevant and locally useful.

  7. Interoperable cross-domain semantic and geospatial framework for automatic change detection

    NASA Astrophysics Data System (ADS)

    Kuo, Chiao-Ling; Hong, Jung-Hong

    2016-01-01

    With the increasingly diverse types of geospatial data established over the last few decades, semantic interoperability in integrated applications has attracted much interest in the field of Geographic Information Systems (GIS). This paper proposes a new strategy and framework to process cross-domain geodata at the semantic level. This framework leverages the semantic equivalence of concepts between domains through a bridge ontology and facilitates the integrated use of different domain data, which has long been considered an essential strength of GIS but is impeded by the lack of understanding of the semantics implicitly hidden in the data. We choose the task of change detection to demonstrate how the introduction of the ontology concept can effectively make the integration possible. We analyze the common properties of geodata and change detection factors, then construct rules and summarize possible change scenarios for making final decisions. The use of topographic map data to detect changes in land use shows promising success, as far as the improvement of efficiency and level of automation is concerned. We believe the ontology-oriented approach will enable a new way of data integration across different domains from the perspective of semantic interoperability, and even open a new dimension for future GIS.

  8. Using software interoperability to achieve a virtual design environment

    NASA Astrophysics Data System (ADS)

    Gregory, G. Groot; Koshel, R. John

    2005-09-01

    A variety of simulation tools, including optical design and analysis, have benefited from many years of evolution in software functionality and computing power, thus making the notion of virtual design environments a reality. To simulate the optical characteristics of a system, one needs to include optical performance, mechanical design and manufacturing aspects simultaneously. To date, no single software program offers a universal solution. One approach to achieving an integrated environment is to select tools that offer a high degree of interoperability. This allows the selection of the best tools for each aspect of the design, working in concert to solve the problem. This paper discusses the issues of how to assemble a design environment and provides an example of a combination of tools for illumination design. We begin by offering a broad definition of interoperability from an optical analysis perspective. This definition includes aspects of file interchange formats, software communications protocols and customized applications. One example solution is proposed by combining SolidWorks for computer-aided design (CAD), TracePro for optical analysis and MATLAB as the mathematical engine for tolerance analysis. The resulting virtual tool will be applied to a lightpipe design task to illustrate how such a system can be used.

  9. Interoperability Between Coastal Web Atlases Using Semantic Mediation: A Case Study of the International Coastal Atlas Network (ICAN)

    NASA Astrophysics Data System (ADS)

    Wright, D. J.; Lassoued, Y.; Dwyer, N.; Haddad, T.; Bermudez, L. E.; Dunne, D.

    2009-12-01

    Coastal mapping plays an important role in informing marine spatial planning, resource management, maritime safety, hazard assessment and even national sovereignty. As such, there is now a plethora of data/metadata catalogs, pre-made maps, tabular and text information on resource availability and exploitation, and decision-making tools. A recent trend has been to encapsulate these in a special class of web-enabled geographic information systems called a coastal web atlas (CWA). While multiple benefits are derived from tailor-made atlases, there is great value added from the integration of disparate CWAs. CWAs linked to one another can be queried more successfully to optimize planning and decision-making. If a dataset is missing in one atlas, it may be immediately located in another. Similar datasets in two atlases may be combined to enhance study in either region. But how best to achieve semantic interoperability to mitigate vague data queries, concepts or natural language semantics when retrieving and integrating data and information? We report on the development of a new prototype seeking to interoperate between two initial CWAs: the Marine Irish Digital Atlas (MIDA) and the Oregon Coastal Atlas (OCA). These two mature atlases are used as a testbed for more regional connections, with the intent for the OCA to use lessons learned to develop a regional network of CWAs along the west coast, and for MIDA to do the same in building and strengthening atlas networks with the UK, Belgium, and other parts of Europe. Our prototype uses semantic interoperability via services harmonization and ontology mediation, allowing local atlases to use their own data structures and vocabularies (ontologies). We use standard technologies such as OGC Web Map Services (WMS) for delivering maps, and the OGC Catalogue Service for the Web (CSW) for delivering and querying ISO-19139 metadata. The metadata records of a given CWA use a given ontology of terms called the local ontology. Human or machine
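
    The catalogue side of this prototype uses standard OGC interfaces, so a query against a CWA's CSW endpoint can be sketched with OWSLib. The endpoint URL is hypothetical; the free-text term below is exactly the kind of query where ontology mediation is needed, since "shoreline" in one atlas may be "coastline" in another.

```python
# Sketch of querying a coastal web atlas catalogue via OGC CSW using
# OWSLib (pip install OWSLib). The catalogue URL is illustrative only.
from owslib.csw import CatalogueServiceWeb
from owslib.fes import PropertyIsLike

csw = CatalogueServiceWeb("https://example.org/atlas/csw")  # assumed endpoint

# Full-text search across all ISO-19139 metadata fields.
query = PropertyIsLike("csw:AnyText", "%shoreline%")
csw.getrecords2(constraints=[query], maxrecords=5, esn="full")
for rec in csw.records.values():
    print(rec.title)
```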

  10. Does achievement motivation mediate the semantic achievement priming effect?

    PubMed

    Engeser, Stefan; Baumann, Nicola

    2014-10-01

    The aim of our research was to understand the processes of the prime-to-behavior effects with semantic achievement primes. We extended existing models with a perspective from achievement motivation theory and additionally used achievement primes embedded in the running text of excerpts of school textbooks to simulate a more natural priming condition. Specifically, we proposed that achievement primes affect implicit achievement motivation and conducted pilot experiments and 3 main experiments to explore this proposition. We found no reliable positive effect of achievement primes on implicit achievement motivation. In light of these findings, we tested whether explicit (instead of implicit) achievement motivation is affected by achievement primes and found this to be the case. In the final experiment, we found support for the assumption that higher explicit achievement motivation implies that achievement priming affects the outcome expectations. The implications of the results are discussed, and we conclude that primes affect achievement behavior by heightening explicit achievement motivation and outcome expectancies. PMID:24820250

  11. Implementation of a metadata architecture and knowledge collection to support semantic interoperability in an enterprise data warehouse.

    PubMed

    Dhaval, Rakesh; Borlawsky, Tara; Ostrander, Michael; Santangelo, Jennifer; Kamal, Jyoti; Payne, Philip R O

    2008-11-06

    In order to enhance interoperability between enterprise systems, and improve data validity and reliability throughout The Ohio State University Medical Center (OSUMC), we have initiated the development of an ontology-anchored metadata architecture and knowledge collection for our enterprise data warehouse. The metadata and corresponding semantic relationships stored in the OSUMC knowledge collection are intended to promote consistency and interoperability across the heterogeneous clinical, research, business and education information managed within the data warehouse.

  12. Cohort Selection and Management Application Leveraging Standards-based Semantic Interoperability and a Groovy DSL

    PubMed Central

    Bucur, Anca; van Leeuwen, Jasper; Chen, Njin-Zu; Claerhout, Brecht; de Schepper, Kristof; Perez-Rey, David; Paraiso-Medina, Sergio; Alonso-Calvo, Raul; Mehta, Keyur; Krykwinski, Cyril

    2016-01-01

    This paper describes a new Cohort Selection application implemented to support streamlining the definition phase of multi-centric clinical research in oncology. Our approach aims at both ease of use and precision in defining the selection filters expressing the characteristics of the desired population. The application leverages our standards-based Semantic Interoperability Solution and a Groovy DSL to provide high expressiveness in the definition of filters and flexibility in their composition into complex selection graphs including splits and merges. Widely-adopted ontologies such as SNOMED-CT are used to represent the semantics of the data and to express concepts in the application filters, facilitating data sharing and collaboration on joint research questions in large communities of clinical users. The application supports patient data exploration and efficient collaboration in multi-site, heterogeneous and distributed data environments. PMID:27570644
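
    As a rough sketch of the expressiveness described, the following Python code (standing in for the project's Groovy DSL, whose syntax is not given here) shows cohort filters as composable predicates that can be combined into selection graphs; the record shape and SNOMED CT codes are illustrative.

```python
# Hypothetical sketch (Python standing in for the project's Groovy DSL) of
# composable cohort filters: predicates over patient records combined into
# selection graphs with "all of" / "any of" nodes.
from typing import Callable, Dict, List

Patient = Dict[str, object]
Filter = Callable[[Patient], bool]

def has_diagnosis(snomed_code: str) -> Filter:
    """Filter on a SNOMED CT diagnosis code (record shape is illustrative)."""
    return lambda p: snomed_code in p.get("diagnoses", [])

def age_at_least(years: int) -> Filter:
    return lambda p: p.get("age", 0) >= years

def all_of(*filters: Filter) -> Filter:   # a "merge" node in the graph
    return lambda p: all(f(p) for f in filters)

def any_of(*filters: Filter) -> Filter:   # a "split/union" node in the graph
    return lambda p: any(f(p) for f in filters)

cohort_filter = all_of(
    has_diagnosis("254837009"),                           # breast cancer
    any_of(age_at_least(50), has_diagnosis("73211009")),  # age 50+, or diabetes
)

patients: List[Patient] = [
    {"age": 62, "diagnoses": ["254837009"]},
    {"age": 41, "diagnoses": ["73211009"]},
]
print([p for p in patients if cohort_filter(p)])  # only the first matches
```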

  13. Case Study for Integration of an Oncology Clinical Site in a Semantic Interoperability Solution based on HL7 v3 and SNOMED-CT: Data Transformation Needs.

    PubMed

    Ibrahim, Ahmed; Bucur, Anca; Perez-Rey, David; Alonso, Enrique; de Hoog, Matthy; Dekker, Andre; Marshall, M Scott

    2015-01-01

    This paper describes the data transformation pipeline defined to support the integration of a new clinical site in a standards-based semantic interoperability environment. The available datasets combined structured and free-text patient data in Dutch, collected in the context of radiation therapy in several cancer types. Our approach aims at both efficiency and data quality. We combine custom-developed scripts, standard tools and manual validation by clinical and knowledge experts. We identified key challenges emerging from the several sources of heterogeneity in our case study (systems, language, data structure, clinical domain) and implemented solutions that we will further generalize for the integration of new sites. We conclude that the required effort for data transformation is manageable, which supports the feasibility of our semantic interoperability solution. The achieved semantic interoperability will be leveraged for the deployment and evaluation at the clinical site of applications enabling secondary use of care data for research. This work has been funded by the European Commission through the INTEGRATE (FP7-ICT-2009-6-270253) and EURECA (FP7-ICT-2011-288048) projects.

  14. Interoperability in Personalized Adaptive Learning

    ERIC Educational Resources Information Center

    Aroyo, Lora; Dolog, Peter; Houben, Geert-Jan; Kravcik, Milos; Naeve, Ambjorn; Nilsson, Mikael; Wild, Fridolin

    2006-01-01

    Personalized adaptive learning requires semantic-based and context-aware systems to manage the Web knowledge efficiently as well as to achieve semantic interoperability between heterogeneous information resources and services. The technological and conceptual differences can be bridged either by means of standards or via approaches based on the…

  15. The role of ontologies for sustainable, semantically interoperable and trustworthy EHR solutions.

    PubMed

    Blobel, Bernd; Kalra, Dipak; Koehn, Marc; Lunn, Ken; Pharow, Peter; Ruotsalainen, Pekka; Schulz, Stefan; Smith, Barry

    2009-01-01

    As health systems around the world turn towards highly distributed, specialized and cooperative structures to increase quality and safety of care as well as efficiency and efficacy of delivery processes, there is a growing need for supporting communication and collaboration of all parties involved with advanced ICT solutions. The Electronic Health Record (EHR) provides the information platform which is maturing towards the eHealth core application. To meet the requirements for sustainable, semantically interoperable, and trustworthy EHR solutions, different standards and different national strategies have been established. The workshop summarizes the requirements for such advanced EHR systems and their underlying architecture, presents different strategies and solutions advocated by corresponding protagonists, discusses pros and cons as well as harmonization and migration strategies for those approaches. It particularly highlights a turn towards ontology-driven architectures. The workshop is a joint activity of the EFMI Working Groups "Electronic Health Records" and "Security, Safety and Ethics".

  16. A Proof-of-Concept for Semantically Interoperable Federation of IoT Experimentation Facilities.

    PubMed

    Lanza, Jorge; Sanchez, Luis; Gomez, David; Elsaleh, Tarek; Steinke, Ronald; Cirillo, Flavio

    2016-06-29

    The Internet-of-Things (IoT) is unanimously identified as one of the main pillars of future smart scenarios. The potential of IoT technologies and deployments has already been demonstrated in a number of different application areas, including transport, energy, safety and healthcare. However, despite the growing number of IoT deployments, the majority of IoT applications tend to be self-contained, thereby forming application silos. A lightweight, data-centric integration and combination of these silos presents several challenges that still need to be addressed. Indeed, the ability to combine and synthesize data streams and services from diverse IoT platforms and testbeds holds the promise of increasing the potential of smart applications in terms of size, scope and targeted business context. In this article, a proof-of-concept implementation that federates two different IoT experimentation facilities by means of semantic-based technologies is described. The specification and design of the implemented system and information models are described, together with the practical details of the developments carried out and their integration with the existing IoT platforms supporting the aforementioned testbeds. Overall, the system described in this paper demonstrates that it is possible to open new horizons in the development of IoT applications and experiments at a global scale, transcending the (silo) boundaries of individual deployments, based on the semantic interconnection and interoperability of diverse IoT platforms and testbeds. PMID:27367695
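
    As a rough illustration of the semantic layer such a federation relies on, the sketch below annotates one testbed measurement with a shared observation ontology; SOSA/SSN is used here as a stand-in for the project's actual information model, and all testbed IRIs are invented.

```python
# Sketch of annotating a testbed measurement with a shared ontology so a
# federated platform can query it uniformly; SOSA/SSN is a stand-in for
# the project's actual information model. Requires: pip install rdflib
from rdflib import Graph, Literal, Namespace
from rdflib.namespace import RDF, XSD

SOSA = Namespace("http://www.w3.org/ns/sosa/")
EX = Namespace("http://example.org/testbed-a/")  # invented testbed namespace

g = Graph()
g.bind("sosa", SOSA)
obs = EX["obs/1"]
g.add((obs, RDF.type, SOSA.Observation))
g.add((obs, SOSA.madeBySensor, EX["sensor/temp-42"]))
g.add((obs, SOSA.observedProperty, EX["property/air-temperature"]))
g.add((obs, SOSA.hasSimpleResult, Literal(21.5, datatype=XSD.double)))

# Any federated consumer that understands the shared ontology can now
# query observations from this testbed and others in the same way.
print(g.serialize(format="turtle"))
```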

  19. A Proof-of-Concept for Semantically Interoperable Federation of IoT Experimentation Facilities

    PubMed Central

    Lanza, Jorge; Sanchez, Luis; Gomez, David; Elsaleh, Tarek; Steinke, Ronald; Cirillo, Flavio

    2016-01-01

    The Internet-of-Things (IoT) is unanimously identified as one of the main pillars of future smart scenarios. The potential of IoT technologies and deployments has been already demonstrated in a number of different application areas, including transport, energy, safety and healthcare. However, despite the growing number of IoT deployments, the majority of IoT applications tend to be self-contained, thereby forming application silos. A lightweight data centric integration and combination of these silos presents several challenges that still need to be addressed. Indeed, the ability to combine and synthesize data streams and services from diverse IoT platforms and testbeds, holds the promise to increase the potentiality of smart applications in terms of size, scope and targeted business context. In this article, a proof-of-concept implementation that federates two different IoT experimentation facilities by means of semantic-based technologies will be described. The specification and design of the implemented system and information models will be described together with the practical details of the developments carried out and its integration with the existing IoT platforms supporting the aforementioned testbeds. Overall, the system described in this paper demonstrates that it is possible to open new horizons in the development of IoT applications and experiments at a global scale, that transcend the (silo) boundaries of individual deployments, based on the semantic interconnection and interoperability of diverse IoT platforms and testbeds. PMID:27367695

  1. A step-by-step methodology for enterprise interoperability projects

    NASA Astrophysics Data System (ADS)

    Chalmeta, Ricardo; Pazos, Verónica

    2015-05-01

    Enterprise interoperability is one of the key factors for enhancing enterprise competitiveness. Achieving enterprise interoperability is an extremely complex process which involves different technological, human and organisational elements. In this paper we present a framework to support enterprise interoperability. The framework has been developed taking into account the three domains of interoperability: Enterprise Modelling, Architecture and Platform, and Ontologies. The main novelty of the framework in comparison to existing ones is that it includes a step-by-step methodology that explains how to carry out an enterprise interoperability project taking into account different interoperability views, such as business, process, human resources, technology, knowledge and semantics.

  2. Achieving incremental semantic interpretation through contextual representation.

    PubMed

    Sedivy, J C; Tanenhaus, M K; Chambers, C G; Carlson, G N

    1999-06-22

    While much work has been done investigating the role of context in the incremental processing of syntactic indeterminacies, relatively little is known about online semantic interpretation. The experiments in this article made use of the eye-tracking paradigm with spoken language and visual contexts in order to examine how and when listeners make use of contextually defined contrast in interpreting simple prenominal adjectives. Experiment 1 focused on intersective adjectives. Experiment 1A provided further evidence that intersective adjectives are processed incrementally. Experiment 1B compared response times to follow instructions such as 'Pick up the blue comb' under conditions where there were two blue objects (e.g. a blue pen and a blue comb), but only one of these objects had a contrasting member in the display. Responses were faster to objects with a contrasting member, establishing that listeners initially assume a contrastive interpretation for intersective adjectives. Experiments 2 and 3 focused on vague scalar adjectives, examining the time course with which listeners establish contrast for scalar adjectives such as 'tall' using information provided by the head noun (e.g. 'glass') and information provided by the visual context. Use of head-based information was examined by manipulating the typicality of the target object (e.g. whether it was a good or poor example of a tall glass). Use of context-dependent contrast was examined by either having only a single glass in the display (the no-contrast condition) or a contrasting object (e.g. a smaller glass). The pattern of results indicated that listeners interpreted the scalar adjective incrementally, taking into account context-specific contrast prior to encountering the head. Moreover, the presence of a contrasting object sharply reduced, and in some conditions completely eliminated, typicality effects. The results suggest a language processing system in which semantic interpretation, as well as syntactic

  3. Semantic Interoperability for Computational Mineralogy: Experiences of the eMinerals Consortium

    NASA Astrophysics Data System (ADS)

    Walker, A. M.; White, T. O.; Dove, M. T.; Bruin, R. P.; Couch, P. A.; Tyer, R. P.

    2006-12-01

    The use of atomic scale computer simulation of minerals to obtain information for geophysics and environmental science has grown enormously over the past couple of decades. It is now routine to probe mineral behavior in the Earth's deep interior and in the surface environment by borrowing methods and simulation codes from computational chemistry and physics. It is becoming increasingly important to use methods embodied in more than one of these codes to solve any single scientific problem. However, scientific codes are rarely designed for easy interoperability and data exchange; data formats are often code-specific, poorly documented and fragile, liable to frequent change between software versions, and even compiler versions. This means that the scientist's simple desire to use the methodological approaches offered by multiple codes is frustrated, and even the sharing of data between collaborators becomes fraught with difficulties. The eMinerals consortium was formed in the early stages of the UK eScience program with the aim of developing the tools needed to apply atomic scale simulation to environmental problems in a grid-enabled world, and to harness the computational power offered by grid technologies to address some outstanding mineralogical problems. One example of the kind of problem we can tackle is the origin of the compressibility anomaly in silica glass. By passing data directly between simulation and analysis tools we were able to probe this effect in more detail than has previously been possible and have shown how the anomaly is related to the details of the amorphous structure. In order to approach this kind of problem we have constructed a mini-grid, a small scale and extensible combined compute- and data-grid that allows the execution of many calculations in parallel, and the transparent storage of semantically-rich marked-up result data. Importantly, we automatically capture multiple kinds of metadata and key results from each calculation. We

  4. An integrated framework to achieve interoperability in person-centric health management.

    PubMed

    Vergari, Fabio; Salmon Cinotti, Tullio; D'Elia, Alfredo; Roffia, Luca; Zamagni, Guido; Lamberti, Claudio

    2011-01-01

    The need for high-quality out-of-hospital healthcare is a known socioeconomic problem. Ad-hoc telemedicine solutions that exploit the evolution of ICT have been proposed in the past. Integrating such ad-hoc solutions in order to cost-effectively support the entire healthcare cycle is still a research challenge. In order to handle the heterogeneity of relevant information and to overcome the fragmentation of out-of-hospital instrumentation in person-centric healthcare systems, a shared and open-source interoperability component can be adopted, which is ontology-driven and based on the semantic web data model. The feasibility and the advantages of the proposed approach are demonstrated by presenting the use case of real-time monitoring of patients' health and their environmental context.

  5. A Service Oriented Architecture Approach to Achieve Interoperability between Immunization Information Systems in Iran.

    PubMed

    Hosseini, Masoud; Ahmadi, Maryam; Dixon, Brian E

    2014-01-01

    Clinical decision support (CDS) systems can support vaccine forecasting and immunization reminders; however, immunization decision-making requires data from fragmented, independent systems. Interoperability and accurate data exchange between immunization information systems (IIS) are essential to making use of immunization CDS systems. Service-oriented architecture (SOA) and Health Level 7 (HL7) are dominant standards for web-based exchange of clinical information. We implemented a system based on SOA and HL7 v3 to support immunization CDS in Iran. We evaluated system performance by exchanging 1500 immunization records for roughly 400 infants between two IISs. System turnaround time was less than a minute for synchronous operation calls, and the retrieved immunization histories of infants were always identical across systems. CDS-generated reports accorded with immunization guidelines, and the calculations of next visit times were accurate. Interoperability is rare or nonexistent between IIS. Since inter-state data exchange is rare in the United States, this approach could be a good prototype for achieving interoperability of immunization information.
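
    As a rough illustration of the exchange pattern described above (a sketch, not the authors' actual service contract), the snippet below posts an abbreviated HL7 v3 payload to a hypothetical SOAP endpoint of a partner IIS:

        # Hedged sketch: the endpoint URL and the envelope body are
        # placeholders; a real HL7 v3 interaction carries a full message.
        import requests

        SOAP_ENVELOPE = """<?xml version="1.0" encoding="UTF-8"?>
        <soap:Envelope xmlns:soap="http://schemas.xmlsoap.org/soap/envelope/">
          <soap:Body>
            <!-- HL7 v3 immunization record or query goes here -->
          </soap:Body>
        </soap:Envelope>"""

        response = requests.post(
            "https://iis.example.org/services/immunization",  # placeholder
            data=SOAP_ENVELOPE.encode("utf-8"),
            headers={"Content-Type": "text/xml; charset=utf-8"},
        )
        print(response.status_code)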

  6. A Service Oriented Architecture Approach to Achieve Interoperability between Immunization Information Systems in Iran

    PubMed Central

    Hosseini, Masoud; Ahmadi, Maryam; Dixon, Brian E.

    2014-01-01

    Clinical decision support (CDS) systems can support vaccine forecasting and immunization reminders; however, immunization decision-making requires data from fragmented, independent systems. Interoperability and accurate data exchange between immunization information systems (IIS) are essential to making use of immunization CDS systems. Service-oriented architecture (SOA) and Health Level 7 (HL7) are dominant standards for web-based exchange of clinical information. We implemented a system based on SOA and HL7 v3 to support immunization CDS in Iran. We evaluated system performance by exchanging 1500 immunization records for roughly 400 infants between two IISs. System turnaround time was less than a minute for synchronous operation calls, and the retrieved immunization histories of infants were always identical across systems. CDS-generated reports accorded with immunization guidelines, and the calculations of next visit times were accurate. Interoperability is rare or nonexistent between IIS. Since inter-state data exchange is rare in the United States, this approach could be a good prototype for achieving interoperability of immunization information. PMID:25954452

  7. Community-Driven Initiatives to Achieve Interoperability for Ecological and Environmental Data

    NASA Astrophysics Data System (ADS)

    Madin, J.; Bowers, S.; Jones, M.; Schildhauer, M.

    2007-12-01

    interoperability by describing the semantics of data at the level of observation and measurement (rather than the traditional focus at the level of the data set) and will define the necessary specifications and technologies to facilitate semantic interpretation and integration of observational data for the environmental sciences. As such, this initiative will focus on unifying the various existing approaches for representing and describing observation data (e.g., SEEK's Observation Ontology, CUAHSI's Observation Data Model, NatureServe's Observation Data Standard, to name a few). Products of this initiative will be compatible with existing standards and build upon recent advances in knowledge representation (e.g., W3C's recommended Web Ontology Language, OWL) that have demonstrated practical utility in enhancing scientific communication and data interoperability in other communities (e.g., the genomics community). A community-sanctioned, extensible, and unified model for observational data will support metadata standards such as EML while reducing the "babel" of scientific dialects that currently impede effective data integration, which will in turn provide a strong foundation for enabling cross-disciplinary synthetic research in the ecological and environmental sciences.
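
    To make the observation-level idea concrete, here is a minimal Python sketch using rdflib; the namespace and property names are invented for illustration and are not taken from the ontologies listed above:

        # Annotate a single measurement at the level of the observation,
        # not the data set. All IRIs below are hypothetical.
        from rdflib import Graph, Literal, Namespace
        from rdflib.namespace import RDF, XSD

        OBS = Namespace("http://example.org/observation#")  # assumed vocabulary
        g = Graph()
        g.bind("obs", OBS)

        o = OBS["obs-001"]
        g.add((o, RDF.type, OBS.Observation))
        g.add((o, OBS.ofEntity, Literal("Quercus alba")))           # observed entity
        g.add((o, OBS.hasCharacteristic, Literal("stem height")))   # measured property
        g.add((o, OBS.hasValue, Literal(12.3, datatype=XSD.double)))
        g.add((o, OBS.hasUnit, Literal("meter")))

        print(g.serialize(format="turtle"))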

  8. The Semantic Management of Environmental Resources within the Interoperable Context of the EuroGEOSS: Alignment of GEMET and the GEOSS SBAs

    NASA Astrophysics Data System (ADS)

    Cialone, Claudia; Stock, Kristin

    2010-05-01

    EuroGEOSS is a European Commission funded project. It aims at improving scientific understanding of the complex mechanisms which drive changes affecting our planet, and at identifying and establishing interoperable arrangements between environmental information systems. These systems would be sustained and operated by organizations with a clear mandate and resources, and rendered available following the specifications of existing frameworks such as GEOSS (the Global Earth Observation System of Systems) and INSPIRE (the Infrastructure for Spatial Information in the European Community). The EuroGEOSS project's infrastructure focuses on three thematic areas: forestry, drought and biodiversity. One of the important activities in the project is the retrieval, parsing and harmonization of the large amount of heterogeneous environmental data available at local, regional and global levels between these strategic areas. The challenge is to render it semantically and technically interoperable in a simple way. An initial step in achieving this semantic and technical interoperability involves the selection of appropriate classification schemes (for example, thesauri, ontologies and controlled vocabularies) to describe the resources in the EuroGEOSS framework. These classifications become a crucial part of the interoperable framework scaffolding because they allow data providers to describe their resources and thus support resource discovery, execution and orchestration of varying levels of complexity. However, at present, given the diverse range of environmental thesauri, controlled vocabularies and ontologies and the large number of resources provided by project participants, the selection of appropriate classification schemes involves a number of considerations. First of all, there is the semantic difficulty of selecting classification schemes that contain concepts that are relevant to each thematic area. Secondly, EuroGEOSS is intended to accommodate a number of

  9. Using architectures for semantic interoperability to create journal clubs for emergency response

    SciTech Connect

    Powell, James E; Collins, Linn M; Martinez, Mark L B

    2009-01-01

    In certain types of 'slow burn' emergencies, careful accumulation and evaluation of information can offer a crucial advantage. The SARS outbreak in the first decade of the 21st century was such an event, and ad hoc journal clubs played a critical role in assisting scientific and technical responders in identifying and developing various strategies for halting what could have become a dangerous pandemic. This research-in-progress paper describes a process for leveraging emerging semantic web and digital library architectures and standards to (1) create a focused collection of bibliographic metadata, (2) extract semantic information, (3) convert it to the Resource Description Framework/Extensible Markup Language (RDF/XML), and (4) integrate it so that scientific and technical responders can share and explore critical information in the collections.
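
    Steps (2) and (3) can be illustrated with a small rdflib sketch; the record URI, field values, and the choice of Dublin Core are illustrative assumptions:

        # Represent one extracted bibliographic record as RDF and emit RDF/XML.
        from rdflib import Graph, Literal, URIRef
        from rdflib.namespace import DC

        g = Graph()
        rec = URIRef("http://example.org/records/sars-0001")  # hypothetical ID
        g.add((rec, DC.title, Literal("An example SARS-related article title")))
        g.add((rec, DC.creator, Literal("Example, A. N.")))
        g.add((rec, DC.date, Literal("2003")))
        g.add((rec, DC.subject, Literal("SARS")))

        print(g.serialize(format="xml"))  # RDF/XML, as named in the abstract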

  10. An HL7-CDA wrapper for facilitating semantic interoperability to rule-based Clinical Decision Support Systems.

    PubMed

    Sáez, Carlos; Bresó, Adrián; Vicente, Javier; Robles, Montserrat; García-Gómez, Juan Miguel

    2013-03-01

    The success of Clinical Decision Support Systems (CDSS) greatly depends on their capability of being integrated in Health Information Systems (HIS). Several proposals have been published to date to permit CDSS to gather patient data from HIS. Some base the CDSS data input on the HL7 reference model; however, they are tailored to specific CDSS or clinical guideline technologies, or do not focus on standardizing the resultant CDSS knowledge. We propose a solution for facilitating semantic interoperability for rule-based CDSS, focusing on standardized input and output documents conforming to an HL7-CDA wrapper. We define the HL7-CDA restrictions in an HL7-CDA implementation guide. Patient data and rule inference results are mapped to and from the CDSS, respectively, by means of a binding method based on an XML binding file. As an independent clinical document, the results of a CDSS can have clinical and legal validity. The proposed solution is being applied in a CDSS that provides patient-specific recommendations for the care management of outpatients with diabetes mellitus. PMID:23199936
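
    The binding idea can be sketched as follows; the observation codes, variable names, and document structure are assumptions for illustration, not the paper's actual implementation guide:

        # Map CDA observation codes to CDSS input variables via a binding table.
        import xml.etree.ElementTree as ET

        NS = {"hl7": "urn:hl7-org:v3"}
        BINDING = {"2345-7": "glucose_mg_dl", "8480-6": "systolic_bp"}  # assumed codes

        def extract_inputs(cda_path):
            """Pull bound observation values out of a CDA document."""
            tree = ET.parse(cda_path)
            inputs = {}
            for obs in tree.iter("{urn:hl7-org:v3}observation"):
                code = obs.find("hl7:code", NS)
                value = obs.find("hl7:value", NS)
                if code is not None and value is not None:
                    var = BINDING.get(code.get("code"))
                    if var:
                        inputs[var] = value.get("value")
            return inputs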

  11. Achieving control and interoperability through unified model-based systems and software engineering

    NASA Technical Reports Server (NTRS)

    Rasmussen, Robert; Ingham, Michel; Dvorak, Daniel

    2005-01-01

    Control and interoperation of complex systems is one of the most difficult challenges facing NASA's Exploration Systems Mission Directorate. An integrated but diverse array of vehicles, habitats, and supporting facilities, evolving over the long course of the enterprise, must perform ever more complex tasks while moving steadily away from the sphere of ground support and intervention.

  12. Achieving Interoperability Through Base Registries for Governmental Services and Document Management

    NASA Astrophysics Data System (ADS)

    Charalabidis, Yannis; Lampathaki, Fenareti; Askounis, Dimitris

    As digital infrastructures increase their presence worldwide, following the efforts of governments to provide citizens and businesses with high-quality one-stop services, there is a growing need for the systematic management of those newly defined and constantly transforming processes and electronic documents. E-government Interoperability Frameworks usually cater to the technical standards of e-government systems interconnection, but do not address service composition and use by citizens, businesses, or other administrations.

  13. Commonality based interoperability

    NASA Astrophysics Data System (ADS)

    Moulton, Christine L.; Hepp, Jared J.; Harrell, John

    2016-05-01

    What interoperability is and why the Army wants it between systems is easily understood. Enabling multiple systems to work together and share data across boundaries in a co-operative manner will benefit the warfighter by allowing for easy access to previously hard-to-reach capabilities. How to achieve interoperability is not as easy to understand due to the numerous different approaches that accomplish the goal. Commonality Based Interoperability (CBI) helps establish how to achieve the goal by extending the existing interoperability definition. CBI is not an implementation, nor is it an architecture; it is a definition of interoperability with a foundation of establishing commonality between systems.

  14. Achieving mask order processing automation, interoperability and standardization based on P10

    NASA Astrophysics Data System (ADS)

    Rodriguez, B.; Filies, O.; Sadran, D.; Tissier, Michel; Albin, D.; Stavroulakis, S.; Voyiatzis, E.

    2007-02-01

    Last year the MUSCLE (Masks through User's Supply Chain: Leadership by Excellence) project was presented; here we report on its progress. A key process in mask supply chain management is the exchange of technical information for ordering masks. This process is large, complex, company-specific and error-prone, and leads to longer cycle times and higher costs due to missing or wrong inputs. Its automation and standardization could produce significant benefits. We need to agree on the standard for mandatory and optional parameters, and also on a common way to describe parameters when ordering. A system was created to improve performance in terms of Key Performance Indicators (KPIs) such as cycle time and cost of production. This tool allows us to evaluate and measure the effect of individual factors, as well as the effect of implementing the improvements of the complete project. Next, a benchmark study and a gap analysis were performed. These studies show the feasibility of standardization, as there is a large overlap in requirements. We see that the SEMI P10 standard needs enhancements. A format supporting the standard is required, and XML offers the ability to describe P10 in a flexible way. Beyond using XML for P10, the semantics of the mask order should also be addressed. A system design and requirements for a reference implementation of a P10-based management system are presented, covering a mechanism for evolution and version management and a design for P10 editing and data validation.
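
    A toy sketch of the direction described, expressing mask-order parameters in XML and checking mandatory ones; element and parameter names are invented and do not reflect the actual SEMI P10 parameter set:

        # Validate that a mask order supplies every mandatory parameter.
        import xml.etree.ElementTree as ET

        MANDATORY = {"maskset_id", "grade", "substrate_size"}  # assumed subset

        ORDER_XML = """
        <maskOrder>
          <param name="maskset_id">MS-4711</param>
          <param name="grade">binary</param>
          <param name="substrate_size">6025</param>
        </maskOrder>
        """

        root = ET.fromstring(ORDER_XML)
        supplied = {p.get("name") for p in root.iter("param")}
        missing = MANDATORY - supplied
        print("order complete" if not missing else f"missing: {sorted(missing)}")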

  16. A cloud-based approach for interoperable electronic health records (EHRs).

    PubMed

    Bahga, Arshdeep; Madisetti, Vijay K

    2013-09-01

    We present a cloud-based approach for the design of interoperable electronic health record (EHR) systems. Cloud computing environments provide several benefits to all the stakeholders in the healthcare ecosystem (patients, providers, payers, etc.). Lack of data interoperability standards and solutions has been a major obstacle in the exchange of healthcare data between different stakeholders. We propose an EHR system, the cloud health information systems technology architecture (CHISTAR), that achieves semantic interoperability through the use of a generic design methodology, which uses a reference model that defines a general-purpose set of data structures and an archetype model that defines the clinical data attributes. CHISTAR application components are designed using the cloud component model approach, which comprises loosely coupled components that communicate asynchronously. In this paper, we describe the high-level design of CHISTAR and the approaches for semantic interoperability, data integration, and security.
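
    The reference-model/archetype split can be sketched in a few lines; the class and attribute names here are illustrative assumptions, not CHISTAR's actual design:

        # Generic reference-model node plus an archetype that constrains it.
        from dataclasses import dataclass, field

        @dataclass
        class Node:                    # reference model: general-purpose structure
            name: str
            attributes: dict = field(default_factory=dict)

        @dataclass
        class Archetype:               # archetype model: clinical constraints
            concept: str
            allowed: set

            def validate(self, node: Node) -> bool:
                return set(node.attributes) <= self.allowed

        bp = Archetype("blood_pressure", {"systolic", "diastolic", "position"})
        reading = Node("observation", {"systolic": 120, "diastolic": 80})
        assert bp.validate(reading)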

  17. Towards a Common Platform to Support Business Processes, Services and Semantics

    NASA Astrophysics Data System (ADS)

    Piprani, Baba

    The search for the Holy Grail of achieving interoperability of business processes, services and semantics continues with every new hype or search for the Silver Bullet. Most approaches to interoperability focus narrowly on a simplistic notion of technology, supporting cowboy-style development without much regard for metadata or semantics. At the same time, the distortions of semantics created by many current modeling paradigms and approaches - including the disharmony created by a multiplicity of parallel approaches to standardization - are not helping us resolve the real issues facing knowledge and semantics management. This paper will address some of the issues facing us, like: What have we achieved? Where did we go wrong? What are we doing right? - providing an ipso-facto encapsulated candid snapshot of an approach to harmonizing our approach to interoperability, and it will propose a common platform to support Business Processes, Services and Semantics.

  18. Data interchange standards in healthcare IT--computable semantic interoperability: now possible but still difficult, do we really need a better mousetrap?

    PubMed

    Mead, Charles N

    2006-01-01

    The following article on HL7 Version 3 will give readers a glimpse into the significant differences between "what came before"--that is, HL7 Version 2.x--and "what today and the future will bring," which is the HL7 Version 3 family of data interchange specifications. The difference between V2.x and V3 is significant, and it exists because the various stakeholders in the HL7 development process believe that the increased depth, breadth, and, to some degree, complexity that characterize V3 are necessary to solve many of today's and tomorrow's increasingly wide, deep and complex healthcare information data interchange requirements. Like many healthcare or technology discussions, this discussion has its own vocabulary of somewhat obscure, but not difficult, terms. This article will define the minimum set that is necessary for readers to appreciate the relevance and capabilities of HL7 Version 3, including how it differs from HL7 Version 2. After that, there will be a brief overview of the primary motivations for HL7 Version 3 in the presence of the unequivocal success of Version 2. In this context, the article will give readers an overview of one of the prime constructs of Version 3, the Reference Information Model (RIM). There are "four pillars that are necessary but not sufficient to obtain computable semantic interoperability." These four pillars--a cross-domain information model; a robust data type specification; a methodology for separating domain-specific terms from, as well as binding them to, the common model; and a top-down interchange specification methodology and tools for using 1, 2, and 3 and defining Version 3 specifications--collectively comprise the "HL7 Version 3 Toolkit." Further, this article will present a list of questions and answers to help readers assess the scope and complexity of the problems facing healthcare IT today, and which will further enlighten readers on the "reality" of HL7 Version 3. The article will conclude with a "pseudo

  20. Semantic Registration and Discovery System of Subsystems and Services within an Interoperable Coordination Platform in Smart Cities.

    PubMed

    Rubio, Gregorio; Martínez, José Fernán; Gómez, David; Li, Xin

    2016-06-24

    Smart subsystems like traffic, Smart Homes, the Smart Grid, outdoor lighting, etc. are built in many urban areas, each with a set of services that are offered to citizens. These subsystems are managed by self-contained embedded systems. However, coordination and cooperation between them are scarce. An integration of these systems which truly represents a "system of systems" could introduce more benefits, such as allowing the development of new applications and collective optimization. The integration should allow maximum reusability of available services provided by entities (e.g., sensors or Wireless Sensor Networks). Thus, it is of major importance to facilitate the discovery and registration of available services and subsystems in an integrated way. Therefore, an ontology-based and automatic system for subsystem and service registration and discovery is presented. Using this proposed system, heterogeneous subsystems and services could be registered and discovered in a dynamic manner with additional semantic annotations. In this way, users are able to build customized applications across different subsystems by using available services. The proposed system has been fully implemented and a case study is presented to show the usefulness of the proposed method.
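
    The register-then-discover pattern can be sketched with rdflib and SPARQL; the smart-city vocabulary below is an invented stand-in for the paper's ontology:

        # Register services as RDF with semantic annotations, then discover
        # them with a SPARQL query. All IRIs are hypothetical.
        from rdflib import Graph, Literal, Namespace
        from rdflib.namespace import RDF

        SVC = Namespace("http://example.org/smartcity#")
        g = Graph()
        g.add((SVC["lamp-17"], RDF.type, SVC.Service))
        g.add((SVC["lamp-17"], SVC.subsystem, Literal("outdoor-lighting")))
        g.add((SVC["lamp-17"], SVC.measures, Literal("luminosity")))
        g.add((SVC["bus-3"], RDF.type, SVC.Service))
        g.add((SVC["bus-3"], SVC.subsystem, Literal("traffic")))

        QUERY = """
        PREFIX svc: <http://example.org/smartcity#>
        SELECT ?s WHERE { ?s a svc:Service ; svc:measures "luminosity" . }
        """
        for row in g.query(QUERY):
            print(row.s)  # -> http://example.org/smartcity#lamp-17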

  1. Semantic Registration and Discovery System of Subsystems and Services within an Interoperable Coordination Platform in Smart Cities

    PubMed Central

    Rubio, Gregorio; Martínez, José Fernán; Gómez, David; Li, Xin

    2016-01-01

    Smart subsystems like traffic, Smart Homes, the Smart Grid, outdoor lighting, etc. are built in many urban areas, each with a set of services that are offered to citizens. These subsystems are managed by self-contained embedded systems. However, coordination and cooperation between them are scarce. An integration of these systems which truly represents a “system of systems” could introduce more benefits, such as allowing the development of new applications and collective optimization. The integration should allow maximum reusability of available services provided by entities (e.g., sensors or Wireless Sensor Networks). Thus, it is of major importance to facilitate the discovery and registration of available services and subsystems in an integrated way. Therefore, an ontology-based and automatic system for subsystem and service registration and discovery is presented. Using this proposed system, heterogeneous subsystems and services could be registered and discovered in a dynamic manner with additional semantic annotations. In this way, users are able to build customized applications across different subsystems by using available services. The proposed system has been fully implemented and a case study is presented to show the usefulness of the proposed method. PMID:27347965

  3. Improving Z39.50 Interoperability: Z39.50 Profiles and Testbeds for Library Applications.

    ERIC Educational Resources Information Center

    Moen, William E.

    This paper discusses two efforts that offer solution paths to interoperability of the Z39.50 standard protocol for information retrieval. The first section provides background on interoperability and Z39.50, identifying the categories of syntactic protocol interoperability, functional protocol interoperability, and semantic interoperability. The…

  4. Interoperability and information discovery

    USGS Publications Warehouse

    Christian, E.

    2001-01-01

    In the context of information systems, there is interoperability when the distinctions between separate information systems are not a barrier to accomplishing a task that spans those systems. Interoperability so defined implies that there are commonalities among the systems involved and that one can exploit such commonalities to achieve interoperability. The challenge of a particular interoperability task is to identify relevant commonalities among the systems involved and to devise mechanisms that exploit those commonalities. The present paper focuses on the particular interoperability task of information discovery. The Global Information Locator Service (GILS) is described as a policy, standards, and technology framework for addressing interoperable information discovery on a global and long-term basis. While there are many mechanisms for people to discover and use all manner of data and information resources, GILS initiatives exploit certain key commonalities that seem to be sufficient to realize useful information discovery interoperability at a global, long-term scale. This paper describes ten of the specific commonalities that are key to GILS initiatives. It presents some of the practical implications for organizations in various roles: content provider, system engineer, intermediary, and searcher. The paper also provides examples of interoperable information discovery as deployed using GILS in four types of information communities: bibliographic, geographic, environmental, and government.

  5. GENESIS SciFlo: Choreographing Interoperable Web Services on the Grid using a Semantically-Enabled Dataflow Execution Environment

    NASA Astrophysics Data System (ADS)

    Wilson, B. D.; Manipon, G.; Xing, Z.

    2007-12-01

    Access Protocol (OpenDAP) servers. SciFlo also publishes its own SOAP services for space/time query and subsetting of Earth Science datasets, and automated access to large datasets via lists of (FTP, HTTP, or DAP) URLs which point to on-line HDF or netCDF files. Typical distributed workflows obtain datasets by calling standard WMS/WCS servers or discovering and fetching data granules from ftp sites; invoke remote analysis operators available as SOAP services (interface described by a WSDL document); and merge results into binary containers (netCDF or HDF files) for further analysis using local executable operators. Naming conventions (HDFEOS and CF-1.0 for netCDF) are exploited to automatically understand and read on-line datasets. More interoperable conventions, and broader adoption of existing conventions, are vital if we are to "scale up" automated choreography of Web Services beyond toy applications. Recently, the ESIP Federation sponsored a collaborative activity in which several ESIP members developed some collaborative science scenarios for atmospheric and aerosol science, and then choreographed services from multiple groups into demonstration workflows using the SciFlo engine and a Business Process Execution Language (BPEL) workflow engine. We will discuss the lessons learned from this activity, the need for standardized interfaces (like WMS/WCS), the difficulty in agreeing on even simple XML formats and interfaces, the benefits of doing collaborative science analysis at the "touch of a button" once services are connected, and further collaborations that are being pursued.
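
    The role of CF-style naming conventions in automated reading can be sketched with the netCDF4 package; the file name is a placeholder and the attributes are assumed present:

        # List the CF-described variables of a granule so a workflow engine
        # can match them without code-specific logic.
        from netCDF4 import Dataset

        ds = Dataset("granule.nc")  # could equally be an OpenDAP URL
        for name, var in ds.variables.items():
            std = getattr(var, "standard_name", None)   # CF identity
            units = getattr(var, "units", "?")          # CF units
            if std:
                print(f"{name}: {std} [{units}]")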

  6. Large scale healthcare data integration and analysis using the semantic web.

    PubMed

    Timm, John; Renly, Sondra; Farkash, Ariel

    2011-01-01

    Healthcare data interoperability can only be achieved when the semantics of the content is well defined and consistently implemented across heterogeneous data sources. Achieving these objectives of interoperability requires the collaboration of experts from several domains. This paper describes tooling that integrates Semantic Web technologies with common tools to facilitate cross-domain collaborative development for the purposes of data interoperability. Our approach is divided into stages of data harmonization and representation, model transformation, and instance generation. We applied our approach to Hypergenes, an EU-funded project, where we applied our method to the Essential Hypertension disease model using a CDA template. Our domain expert partners include clinical providers, clinical domain researchers, healthcare information technology experts, and a variety of clinical data consumers. We show that bringing Semantic Web technologies into the healthcare interoperability toolkit increases opportunities for beneficial collaboration, thus improving patient care and clinical research outcomes.

  7. Software interoperability for energy simulation

    SciTech Connect

    Hitchcock, Robert J.

    2002-07-31

    This paper provides an overview of software interoperability as it relates to the energy simulation of buildings. The paper begins with a discussion of the difficulties in using sophisticated analysis tools like energy simulation at various stages in the building life cycle, and the potential for interoperability to help overcome these difficulties. An overview of the Industry Foundation Classes (IFC), a common data model for supporting interoperability under continuing development by the International Alliance for Interoperability (IAI) is then given. The process of creating interoperable software is described next, followed by specific details for energy simulation tools. The paper closes with the current status of, and future plans for, the ongoing efforts to achieve software interoperability.

  8. Interoperation of heterogeneous CAD tools in Ptolemy II

    NASA Astrophysics Data System (ADS)

    Liu, Jie; Wu, Bicheng; Liu, Xiaojun; Lee, Edward A.

    1999-03-01

    Typical complex systems that involve microsensors and microactuators exhibit heterogeneity both at the implementation level and the problem level. For example, a system can be modeled using discrete events for digital circuits and SPICE-like analog descriptions for sensors. This heterogeneity exists not only in different implementation domains, but also at different levels of abstraction. This naturally leads to a heterogeneous approach to system design that uses domain-specific models of computation (MoC) at various levels of abstraction to define a system, and leverages multiple CAD tools to do simulation, verification and synthesis. As the size and scope of the system increase, the integration becomes too difficult and unmanageable if different tools are coordinated using simple scripts. In addition, for MEMS devices and mixed-signal circuits, it is essential to integrate tools with different MoCs to simulate the whole system. Ptolemy II, a heterogeneous system-level design tool, supports the interaction among different MoCs. This paper discusses heterogeneous CAD tool interoperability in the Ptolemy II framework. The key is to understand the semantic interface and classify the tools by their MoC and their level of abstraction. Interfaces are designed for each domain so that external tools can be easily wrapped. Then the interoperability of the tools becomes the interoperability of the semantics. Ptolemy II can act as the standard interface among different tools to achieve the overall design modeling. A micro-accelerometer with digital feedback is studied as an example.
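
    The wrapping idea can be caricatured in a few lines of Python; Ptolemy II itself is a Java framework, so this interface is purely illustrative:

        # Each external tool hides behind a small domain interface, so tool
        # interoperability reduces to interoperability of the semantics.
        from abc import ABC, abstractmethod

        class DomainActor(ABC):
            """Contract for a tool wrapped into one model of computation."""

            @abstractmethod
            def fire(self, inputs: dict) -> dict:
                """Consume input tokens, produce output tokens."""

        class SpiceWrapper(DomainActor):       # continuous-time domain
            def fire(self, inputs):
                # here: hand a netlist to SPICE and read back waveforms
                return {"sense_voltage": 0.42}

        class DigitalController(DomainActor):  # discrete-event domain
            def fire(self, inputs):
                return {"feedback_bit": inputs["sense_voltage"] > 0.5}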

  9. Lemnos Interoperable Security Program

    SciTech Connect

    Stewart, John; Halbgewachs, Ron; Chavez, Adrian; Smith, Rhett; Teumim, David

    2012-01-31

    The manner in which control systems are being designed and operated in the energy sector is undergoing some of the most significant changes in history due to the evolution of technology and the increasing number of interconnections to other systems. With these changes, however, come two significant challenges that the energy sector must face: 1) cyber security is more important than ever before, and 2) cyber security is more complicated than ever before. A key requirement in helping utilities and vendors alike meet these challenges is interoperability. While interoperability has been present in much of the discussion relating to technology utilized within the energy sector, and especially the Smart Grid, it has been absent in the context of cyber security. The Lemnos project addresses these challenges by focusing on the interoperability of devices utilized within utility control systems which support critical cyber security functions. In theory, interoperability is possible with many of the cyber security solutions available to utilities today. The reality is that the effort required to achieve cyber security interoperability is often a barrier for utilities. For example, consider IPSec, a widely-used Internet Protocol to define Virtual Private Networks, or "tunnels", to communicate securely through untrusted public and private networks. The IPSec protocol suite has a significant number of configuration options and encryption parameters to choose from, which must be agreed upon and adopted by both parties establishing the tunnel. The exercise of getting software or devices from different vendors to interoperate is labor intensive and requires a significant amount of security expertise by the end user. Scale this effort to a significant number of devices operating over a large geographical area and the challenge becomes so overwhelming that it often leads utilities to pursue solutions from a single vendor. These single vendor solutions may inadvertently lock

  10. User-centered semantic harmonization: a case study.

    PubMed

    Weng, Chunhua; Gennari, John H; Fridsma, Douglas B

    2007-06-01

    Semantic interoperability is one of the great challenges in biomedical informatics. Methods such as ontology alignment or use of metadata neither scale nor fundamentally alleviate semantic heterogeneity among information sources. In the context of the Cancer Biomedical Informatics Grid program, the Biomedical Research Integrated Domain Group (BRIDG) has been making an ambitious effort to harmonize existing information models for clinical research from a variety of sources and modeling agreed-upon semantics shared by the technical harmonization committee and the developers of these models. This paper provides some observations on this user-centered semantic harmonization effort and its inherent technical and social challenges. The authors also compare BRIDG with related efforts to achieve semantic interoperability in healthcare, including UMLS, InterMed, the Semantic Web, and the Ontology for Biomedical Investigations initiative. The BRIDG project demonstrates the feasibility of user-centered collaborative domain modeling as an approach to semantic harmonization, but also highlights a number of technology gaps in support of collaborative semantic harmonization that remain to be filled.

  11. Interoperable Documentation

    NASA Astrophysics Data System (ADS)

    Habermann, T.

    2011-12-01

    Documentation provides the context that adds understanding and knowledge to data. The ISO standards for documenting data (19115, 19115-2) and services (19119) extend the range of standard documentation considerably beyond previously available approaches. They include increased utilization of technologies like UML, XML, and linking, and content areas like data quality and processing history. These extensions can build an emerging foundation of data interoperability into an infrastructure for interoperable understanding. This process will involve active collaboration between many environmental data providers and archives all over the world that are currently in the process of adopting and understanding how to effectively use the ISO standards. I will describe ISO capabilities in the context of parallels between metadata tools and data interoperability approaches currently used by scientists and decision-makers. I will demonstrate how directories shared over the web, transport standards, and community conventions build the foundation for documentation access and data understanding. I will also demonstrate crosswalks and connections between ISO, THREDDS, and NetCDF documentation and some ideas and approaches to improving documentation across the entire spectrum of environmental data and products.
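
    For a flavor of what ISO 19115 documentation looks like to software, the sketch below pulls the citation title and abstract out of an ISO 19139 XML record; the element paths follow the common encoding, though real records vary:

        # Extract two core fields from an ISO 19115/19139 metadata record.
        import xml.etree.ElementTree as ET

        NS = {
            "gmd": "http://www.isotc211.org/2005/gmd",
            "gco": "http://www.isotc211.org/2005/gco",
        }

        tree = ET.parse("metadata.xml")  # placeholder file name
        title = tree.find(".//gmd:CI_Citation/gmd:title/gco:CharacterString", NS)
        abstract = tree.find(".//gmd:abstract/gco:CharacterString", NS)
        print(title.text if title is not None else "no title found")
        print(abstract.text if abstract is not None else "no abstract found")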

  12. Improving Interoperability in ePrescribing

    PubMed Central

    Åstrand, Bengt; Petersson, Göran

    2012-01-01

    Background The increased application of eServices in health care, in general, and ePrescribing (electronic prescribing) in particular, has brought quality and interoperability to the forefront. The application of standards has been put forward as one important factor in improving interoperability. However, less focus has been placed on other factors, such as stakeholders' involvement and the measurement of interoperability. An information system (IS) can be regarded as an instrument for technology-mediated work communication. In this study, interoperability refers to the interoperation in the ePrescribing process, involving people, systems, procedures and organizations. We have focused on the quality of the ePrescription message as one component of the interoperation in the ePrescribing process. Objective The objective was to analyze how combined efforts in improving interoperability with the introduction of the new national ePrescription format (NEF) have impacted interoperability in the ePrescribing process in Sweden, with a focus on the quality of the ePrescription message. Methods Consecutive sampling of electronic prescriptions in Sweden before and after the introduction of NEF was undertaken in April 2008 (pre-NEF) and April 2009 (post-NEF). Interoperability problems were identified and classified based on message format specifications and prescription rules. Results The introduction of NEF improved the interoperability of ePrescriptions substantially. In the pre-NEF sample, a total of 98.6% of the prescriptions had errors. In the post-NEF sample, only 0.9% of the prescriptions had errors. The mean number of errors was fewer for the erroneous prescriptions: 4.8 in pre-NEF compared to 1.0 in post-NEF. Conclusions We conclude that systematic, comprehensive work on interoperability, covering technical, semantic, professional, judicial and process aspects and involving the stakeholders, resulted in an improved interoperability of ePrescriptions.
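
    The classification step can be imagined roughly as follows; the field names and format rules are invented stand-ins for the actual NEF specification:

        # Classify an ePrescription message against per-field format rules.
        import re

        RULES = {
            "patient_id": re.compile(r"^\d{12}$"),              # assumed format
            "atc_code":   re.compile(r"^[A-Z]\d{2}[A-Z]{2}\d{2}$"),
            "dosage":     re.compile(r".+"),                    # must be non-empty
        }

        def classify_errors(message: dict) -> list:
            """Return the names of fields that violate their rule."""
            return [f for f, rule in RULES.items()
                    if not rule.match(message.get(f, ""))]

        msg = {"patient_id": "191212121212", "atc_code": "C07AB02", "dosage": "1x2"}
        print(classify_errors(msg))  # -> [] for a well-formed message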

  13. Toward the interoperability of HL7 v3 and SNOMED CT: a case study modeling mobile clinical treatment.

    PubMed

    Ryan, Amanda; Eklund, Peter; Esler, Brett

    2007-01-01

    Semantic interoperability in healthcare can be achieved by a tighter coupling of terminology and HL7 message models. In this paper, we highlight the difficulty of achieving this goal, but show how it can become attainable by basing HL7 message models on SNOMED CT concepts and relationships. We then demonstrate how this methodology has been applied to a set of clinical observations for use in the ePOC project, and discuss our findings.

  14. HeartDrive: A Broader Concept of Interoperability to Implement Care Processes for Heart Failure.

    PubMed

    Lettere, M; Guerri, D; La Manna, S; Groccia, M C; Lofaro, D; Conforti, D

    2016-01-01

    This paper originates from the HeartDrive project, a platform of services for a more effective, efficient and integrated management of heart failure and comorbidities. HeartDrive establishes a cooperative approach based on the concepts of continuity of care and extreme, patient-oriented customization of diagnostic, therapeutic and follow-up procedures. Definition and development of evidence-based processes, migration from parceled and episode-based healthcare provisioning to a workflow-oriented model, and increased awareness and responsibility of citizens towards their own health and wellness are key objectives of HeartDrive. In two scenarios, rehabilitation and home monitoring, we show how the results are achieved by providing a solution that highlights a broader concept of cooperation that goes beyond technical interoperability towards semantic interoperability, explicitly sharing process definitions, decision support strategies and information semantics. PMID:27225572

  15. A Semantic Web Service and Simulation Framework to Intelligent Distributed Manufacturing

    SciTech Connect

    Son, Young Jun; Kulvatunyou, Boonserm; Cho, Hyunbo; Feng, Shaw

    2005-11-01

    To cope with today's fluctuating markets, a virtual enterprise (VE) concept can be employed to achieve cooperation among independently operating enterprises. The success of a VE depends on reliable interoperation among trading partners. This paper proposes a framework based on a semantic web of manufacturing and simulation services to enable business and engineering collaborations between VE partners, particularly a design house and manufacturing suppliers.

  16. CCR+: Metadata Based Extended Personal Health Record Data Model Interoperable with the ASTM CCR Standard

    PubMed Central

    Park, Yu Rang; Yoon, Young Jo; Jang, Tae Hun; Seo, Hwa Jeong

    2014-01-01

    Objectives Extension of a standard model while retaining compliance with it is a challenging issue because there is currently no method for semantically or syntactically verifying an extended data model. A metadata-based extended model, named CCR+, was designed and implemented to achieve interoperability between standard and extended models. Methods Furthermore, a multilayered validation method was devised to validate the standard and extended models. The American Society for Testing and Materials (ASTM) Continuity of Care Record (CCR) standard was selected to evaluate the CCR+ model; two CCR and one CCR+ XML files were evaluated. Results In total, 188 metadata were extracted from the ASTM CCR standard; these metadata are semantically interconnected and registered in the metadata registry. An extended-data-model-specific validation file was generated from these metadata. This file can be used in a smartphone application (Health Avatar CCR+) as a part of a multilayered validation. The new CCR+ model was successfully evaluated via a patient-centric exchange scenario involving multiple hospitals, with the results supporting both syntactic and semantic interoperability between the standard CCR and the extended CCR+ model. Conclusions A feasible method for delivering an extended model that complies with the standard model is presented herein. There is a great need to extend static standard models such as the ASTM CCR in various domains; the methods presented here represent an important reference for achieving interoperability between standard and extended models. PMID:24627817
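
    The registry-backed layer of such a multilayered validation might look roughly like this; the registry entries and element names are invented, not the 188 metadata actually extracted:

        # Accept extended elements only if they appear in the metadata registry.
        import xml.etree.ElementTree as ET

        METADATA_REGISTRY = {"BloodType", "Genotype"}  # stand-in registry

        def unregistered_extensions(ccr_plus_xml: str) -> list:
            """Return extended elements that lack a registry entry."""
            root = ET.fromstring(ccr_plus_xml)
            return [el.tag for el in root.iter()
                    if el is not root and el.tag not in METADATA_REGISTRY]

        doc = "<CCRPlus><BloodType>A+</BloodType><ShoeSize>42</ShoeSize></CCRPlus>"
        print(unregistered_extensions(doc))  # -> ['ShoeSize']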

  17. Turning Interoperability Operational with GST

    NASA Astrophysics Data System (ADS)

    Schaeben, Helmut; Gabriel, Paul; Gietzel, Jan; Le, Hai Ha

    2013-04-01

    GST - Geosciences in space and time is being developed and implemented as a hub to facilitate the exchange of spatially and temporally indexed multi-dimensional geoscience data and corresponding geomodels amongst partners. It originates from TUBAF's contribution to the EU project "ProMine", and its prospective extensions are TUBAF's contribution to the current EU project "GeoMol". As of today, it provides the basic components of a geodata infrastructure as required to establish interoperability with respect to geosciences. Generally, interoperability means the facilitation of cross-border and cross-sector information exchange, taking into account legal, organisational, semantic and technical aspects, cf. Interoperability Solutions for European Public Administrations (ISA), cf. http://ec.europa.eu/isa/. Practical interoperability for partners of a joint geoscience project, say European Geological Surveys acting in a border region, means in particular the provision of IT technology to exchange spatially and perhaps also temporally indexed multi-dimensional geoscience data and corresponding models, i.e. the objects composing geomodels capturing the geometry, topology, and various geoscience contents. Geodata infrastructure (GDI) and interoperability are objectives of several initiatives, e.g. INSPIRE, OneGeology-Europe, and most recently EGDI-SCOPE, to name just the most prominent ones. Then there are quite a few markup languages (ML) related to geographical or geological information, like GeoSciML, EarthResourceML, BoreholeML, and ResqML for reservoir characterization, earth and reservoir models, and many others featuring geoscience information. Several web services are focused on geographical or geoscience information. The Open Geospatial Consortium (OGC) promotes specifications of a Web Feature Service (WFS), a Web Map Service (WMS), a Web Coverage Service (WCS), a Web 3D Service (W3DS), and many more. It will be clarified how GST is related to these initiatives, especially
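
    As a flavor of the OGC services mentioned, the sketch below builds a plain WMS 1.1.1 GetMap request by hand; the endpoint and layer name are placeholders:

        # Fetch a map image from a (hypothetical) WMS endpoint.
        import requests

        params = {
            "SERVICE": "WMS", "VERSION": "1.1.1", "REQUEST": "GetMap",
            "LAYERS": "geology:bedrock", "STYLES": "",   # assumed layer
            "SRS": "EPSG:4326", "BBOX": "12.0,50.0,14.0,51.5",
            "WIDTH": 800, "HEIGHT": 600, "FORMAT": "image/png",
        }
        resp = requests.get("https://example.org/gst/wms", params=params)
        with open("bedrock.png", "wb") as f:
            f.write(resp.content)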

  18. Effects of Semantic Web Based Learning on Pre-Service Teachers' ICT Learning Achievement and Satisfaction

    ERIC Educational Resources Information Center

    Karalar, Halit; Korucu, Agah Tugrul

    2016-01-01

    Although the Semantic Web offers many opportunities for learners, its effects in the classroom are not well known. Therefore, this study explains how learning objects, defined using the terminology of a developed ontology and kept in an object repository, should be presented to learners with the aim of…

  19. Clinical data interoperability based on archetype transformation.

    PubMed

    Costa, Catalina Martínez; Menárguez-Tortosa, Marcos; Fernández-Breis, Jesualdo Tomás

    2011-10-01

    The semantic interoperability between health information systems is a major challenge to improve the quality of clinical practice and patient safety. In recent years many projects have faced this problem and provided solutions based on specific standards and technologies in order to satisfy the needs of a particular scenario. Most of such solutions cannot be easily adapted to new scenarios, thus more global solutions are needed. In this work, we have focused on the semantic interoperability of electronic healthcare records standards based on the dual model architecture and we have developed a solution that has been applied to ISO 13606 and openEHR. The technological infrastructure combines reference models, archetypes and ontologies, with the support of Model-driven Engineering techniques. For this purpose, the interoperability infrastructure developed in previous work by our group has been reused and extended to cover the requirements of data transformation.
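
    As a rough illustration of the kind of instance-level transformation such an infrastructure automates, the sketch below maps values between two dual-model representations using a path-correspondence table. The paths and mapping are invented for illustration; in the paper, the correspondences come from reference models, archetypes and ontologies via Model-driven Engineering, not from a hand-written table.

    ```python
    # Minimal sketch of archetype-driven instance transformation between two
    # dual-model standards. The paths and the mapping table are invented.
    MAPPING = {
        # ISO 13606-style path                      openEHR-style path
        "/ENTRY/items[bp_systolic]/value":  "/OBSERVATION/data/events/items[systolic]/value",
        "/ENTRY/items[bp_diastolic]/value": "/OBSERVATION/data/events/items[diastolic]/value",
    }

    def transform(source: dict) -> dict:
        """Copy each value from its source path to the aligned target path."""
        return {MAPPING[p]: v for p, v in source.items() if p in MAPPING}

    record_13606 = {"/ENTRY/items[bp_systolic]/value": 120,
                    "/ENTRY/items[bp_diastolic]/value": 80}
    print(transform(record_13606))
    ```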

  20. The GEOSS solution for enabling data interoperability and integrative research.

    PubMed

    Nativi, Stefano; Mazzetti, Paolo; Craglia, Max; Pirrone, Nicola

    2014-03-01

    Global sustainability research requires an integrative research effort underpinned by digital infrastructures (systems) able to harness data and heterogeneous information across disciplines. Digital data and information sharing across systems and applications is achieved by implementing interoperability: a property of a product or system to work with other products or systems, present or future. There are at least three main interoperability challenges a digital infrastructure must address: technological, semantic, and organizational. In recent years, important international programs and initiatives are focusing on such an ambitious objective. This manuscript presents and combines the studies and the experiences carried out by three relevant projects, focusing on the heavy metal domain: Global Mercury Observation System, Global Earth Observation System of Systems (GEOSS), and INSPIRE. This research work recognized a valuable interoperability service bus (i.e., a set of standards models, interfaces, and good practices) proposed to characterize the integrative research cyber-infrastructure of the heavy metal research community. In the paper, the GEOSS common infrastructure is discussed implementing a multidisciplinary and participatory research infrastructure, introducing a possible roadmap for the heavy metal pollution research community to join GEOSS as a new Group on Earth Observation community of practice and develop a research infrastructure for carrying out integrative research in its specific domain. PMID:24243262

  1. ARTEMIS: towards a secure interoperability infrastructure for healthcare information systems.

    PubMed

    Boniface, Mike; Wilken, Paul

    2005-01-01

    The ARTEMIS project is developing a semantic web service based P2P interoperability infrastructure for healthcare information systems. The strict legislative framework in which these systems are deployed means that the interoperability of security and privacy mechanisms is an important requirement in supporting communication of electronic healthcare records across organisation boundaries. In ARTEMIS, healthcare providers define semantically annotated security and privacy policies for web services based on organisational requirements. The ARTEMIS mediator uses these semantic web service descriptions to broker between organisational policies by reasoning over security and clinical concept ontologies.

  2. Combining Archetypes with Fast Health Interoperability Resources in Future-proof Health Information Systems.

    PubMed

    Bosca, Diego; Moner, David; Maldonado, Jose Alberto; Robles, Montserrat

    2015-01-01

    Messaging standards, and specifically HL7 v2, are heavily used for the communication and interoperability of Health Information Systems. HL7 FHIR was created as an evolution of the messaging standards to achieve semantic interoperability. FHIR is somehow similar to other approaches like the dual model methodology as both are based on the precise modeling of clinical information. In this paper, we demonstrate how we can apply the dual model methodology to standards like FHIR. We show the usefulness of this approach for data transformation between FHIR and other specifications such as HL7 CDA, EN ISO 13606, and openEHR. We also discuss the advantages and disadvantages of defining archetypes over FHIR, and the consequences and outcomes of this approach. Finally, we exemplify this approach by creating a testing data server that supports both FHIR resources and archetypes. PMID:25991126
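
    The idea of defining archetypes over FHIR can be pictured as constraining FHIR resource instances with archetype-like rules. The sketch below checks a FHIR Observation (represented as a plain dict) against a small, invented constraint set; it is not the authors' testing data server, and the constraint format is hypothetical (LOINC 8867-4 denotes heart rate).

    ```python
    # Illustrative sketch of "archetyping" a FHIR resource: checking an
    # Observation against archetype-like constraints. Constraint format invented.
    CONSTRAINTS = {
        "resourceType":  lambda v: v == "Observation",
        "code":          lambda v: (v.get("coding") or [{}])[0].get("code") == "8867-4",
        "valueQuantity": lambda v: 20 <= v.get("value", -1) <= 300
                                   and v.get("unit") == "beats/minute",
    }

    def conforms(resource: dict) -> bool:
        """True if every constrained element exists and satisfies its rule."""
        return all(key in resource and check(resource[key])
                   for key, check in CONSTRAINTS.items())

    obs = {"resourceType": "Observation",
           "code": {"coding": [{"system": "http://loinc.org", "code": "8867-4"}]},
           "valueQuantity": {"value": 72, "unit": "beats/minute"}}
    print(conforms(obs))  # True
    ```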

  3. Standards-based data interoperability in the climate sciences

    NASA Astrophysics Data System (ADS)

    Woolf, Andrew; Cramer, Ray; Gutierrez, Marta; Kleese van Dam, Kerstin; Kondapalli, Siva; Latham, Susan; Lawrence, Bryan; Lowry, Roy; O'Neill, Kevin

    2005-03-01

    Emerging developments in geographic information systems and distributed computing offer a roadmap towards an unprecedented spatial data infrastructure in the climate sciences. Key to this are the standards developments for digital geographic information being led by the International Organisation for Standardisation (ISO) technical committee on geographic information/geomatics (TC211) and the Open Geospatial Consortium (OGC). These, coupled with the evolution of standardised web services for applications on the internet by the World Wide Web Consortium (W3C), mean that opportunities for both new applications and increased interoperability exist. These are exemplified by the ability to construct ISO-compliant data models that expose legacy data sources through OGC web services. This paper concentrates on the applicability of these standards to climate data by introducing some examples and outlining the challenges ahead. An abstract data model is developed, based on ISO standards, and applied to a range of climate data both observational and modelled. An OGC Web Map Server interface is constructed for numerical weather prediction (NWP) data stored in legacy data files. A W3C web service for remotely accessing gridded climate data is illustrated. Challenges identified include the following: first, both the ISO and OGC specifications require extensions to support climate data. Secondly, OGC services need to fully comply with W3C web services, and support complex access control. Finally, to achieve real interoperability, broadly accepted community-based semantic data models are required across the range of climate data types. These challenges are being actively pursued, and broad data interoperability for the climate sciences appears within reach.
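
    To make the service layer concrete, the snippet below builds an OGC WMS 1.3.0 GetMap request for a hypothetical NWP layer; the server URL and layer name are invented, while the query parameters follow the WMS specification.

    ```python
    # Sketch of an OGC WMS 1.3.0 GetMap request for a gridded NWP field.
    # Note the lat/lon axis order that EPSG:4326 requires in WMS 1.3.0.
    from urllib.parse import urlencode

    params = {
        "SERVICE": "WMS",
        "VERSION": "1.3.0",
        "REQUEST": "GetMap",
        "LAYERS": "nwp_surface_temperature",   # hypothetical layer name
        "CRS": "EPSG:4326",
        "BBOX": "-90,-180,90,180",
        "WIDTH": "720",
        "HEIGHT": "360",
        "FORMAT": "image/png",
    }
    print("https://example.org/wms?" + urlencode(params))
    ```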

  4. Groundwater data network interoperability

    USGS Publications Warehouse

    Brodaric, Boyan; Booth, Nathaniel; Boisvert, Eric; Lucido, Jessica M.

    2016-01-01

    Water data networks are increasingly being integrated to answer complex scientific questions that often span large geographical areas and cross political borders. Data heterogeneity is a major obstacle that impedes interoperability within and between such networks. It is resolved here for groundwater data at five levels of interoperability, within a Spatial Data Infrastructure architecture. The result is a pair of distinct national groundwater data networks for the United States and Canada, and a combined data network in which they are interoperable. This combined data network enables, for the first time, transparent public access to harmonized groundwater data from both sides of the shared international border.

  5. Towards an interoperable International Lattice Datagrid

    SciTech Connect

    G. Beckett; P. Coddington; N. Ishii; B. Joo; D. Melkumyan; R. Ostrowski; D. Pleiter; M. Sato; J. Simone; C. Watson; S. Zhang

    2007-11-01

    The International Lattice Datagrid (ILDG) is a federation of several regional grids. Since most of these grids have reached production level, an increasing number of lattice scientists start to benefit from this new research infrastructure. The ILDG Middleware Working Group has the task of specifying the ILDG middleware such that interoperability among the different grids is achieved. In this paper we will present the architecture of the ILDG middleware and describe what has actually been achieved in recent years. Particular focus is given to interoperability and security issues. We will conclude with a short overview on issues which we plan to address in the near future.

  6. The Semantic SPASE

    NASA Astrophysics Data System (ADS)

    Hughes, S.; Crichton, D.; Thieman, J.; Ramirez, P.; King, T.; Weiss, M.

    2005-12-01

    The Semantic SPASE (Space Physics Archive Search and Extract) prototype demonstrates the use of semantic web technologies to capture, document, and manage the SPASE data model, support facet- and text-based search, and provide flexible and intuitive user interfaces. The SPASE data model, under development since late 2003 by a consortium of space physics domain experts, is intended to serve as the basis for interoperability between independent data systems. To develop the Semantic SPASE prototype, the data model was first analyzed to determine the inherent object classes and their attributes. These were entered into Stanford Medical Informatics' Protege ontology tool and annotated using definitions from the SPASE documentation. Further analysis of the data model resulted in the addition of class relationships. Finally, attributes and relationships that support broad-scope interoperability were added from research associated with the Object-Oriented Data Technology task. To validate the ontology and produce a knowledge base, example data products were ingested. The capture of the data model as an ontology results in a more formal specification of the model. The Protege software is also a powerful management tool and supports plug-ins that produce several graphical notations as output. The stated purpose of the semantic web is to support machine understanding of web-based information. Protege provides an export capability to RDF/XML and RDFS/XML for this purpose. Several research efforts use RDF/XML knowledge bases to provide semantic search. MIT's Simile/Longwell project provides both facet- and text-based search using a suite of metadata browsers and the text-based search engine Lucene. Using the Protege-generated RDF knowledge base, a semantic search application was easily built and deployed to run as a web application. Configuration files specify the object attributes and values to be designated as facets (i.e. search) constraints. Semantic web technologies provide
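
    The facet-based search described above can be sketched with rdflib: a facet is simply a constrained property in a SPARQL pattern. The triples and the ex: vocabulary below are invented stand-ins for SPASE product descriptions, not the actual SPASE ontology.

    ```python
    # Sketch of facet-style search over an RDF knowledge base with rdflib.
    from rdflib import Graph

    TTL = """
    @prefix ex: <http://example.org/spase#> .
    ex:prod1 ex:measurementType "Magnetometer" ; ex:region "Heliosphere" .
    ex:prod2 ex:measurementType "EnergeticParticles" ; ex:region "Magnetosphere" .
    """

    g = Graph()
    g.parse(data=TTL, format="turtle")

    # One facet constraint (measurementType) narrows the product list.
    QUERY = """
    PREFIX ex: <http://example.org/spase#>
    SELECT ?product ?region WHERE {
      ?product ex:measurementType "Magnetometer" ; ex:region ?region .
    }
    """
    for row in g.query(QUERY):
        print(row.product, row.region)
    ```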

  7. Buildings Interoperability Landscape

    SciTech Connect

    Hardin, Dave; Stephan, Eric G.; Wang, Weimin; Corbin, Charles D.; Widergren, Steven E.

    2015-12-31

    Through its Building Technologies Office (BTO), the United States Department of Energy’s Office of Energy Efficiency and Renewable Energy (DOE-EERE) is sponsoring an effort to advance interoperability for the integration of intelligent buildings equipment and automation systems, understanding the importance of integration frameworks and product ecosystems to this cause. This is important to BTO’s mission to enhance energy efficiency and save energy for economic and environmental purposes. For connected-buildings ecosystems of products and services from various manufacturers to flourish, the ICT aspects of the equipment need to integrate and operate simply and reliably. Within the concepts of interoperability lie the specification, development, and certification of equipment with standards-based interfaces that connect and work. Beyond this, a healthy community of stakeholders that contribute to and use interoperability work products must be developed. On May 1, 2014, the DOE convened a technical meeting to take stock of the current state of interoperability of connected equipment and systems in buildings. Several insights from that meeting helped facilitate a draft description of the landscape of interoperability for connected buildings, which focuses mainly on small and medium commercial buildings. This document revises the February 2015 landscape document to address reviewer comments, incorporate important insights from the Buildings Interoperability Vision technical meeting, and capture thoughts from that meeting about the topics to be addressed in a buildings interoperability vision. In particular, greater attention is paid to the state of information modeling in buildings and the great potential for near-term benefits in this area from progress and community alignment.

  8. The Relationship Between Responses to Science Concepts on a Semantic Differential Instrument and Achievement in Freshman Physics and Chemistry.

    ERIC Educational Resources Information Center

    Rothman, Arthur Israel

    Students taking freshman physics and freshman chemistry at The State University of New York at Buffalo (SUNYAB) were administered a science-related semantic differential instrument. This same test was administered to physics and chemistry graduate students from SUNYAB and the University of Rochester. A scoring procedure was developed which…

  9. 23 CFR 950.7 - Interoperability requirements.

    Code of Federal Regulations, 2012 CFR

    2012-04-01

    ... facility; and (3) Identify the noncash electronic technology likely to be in use within the next five years... FHWA that the selected toll collection system and technology achieves the highest reasonable degree of interoperability both with technology currently in use at other existing toll facilities and with technology...

  10. Architecture for interoperable software in biology.

    PubMed

    Bare, James Christopher; Baliga, Nitin S

    2014-07-01

    Understanding biological complexity demands a combination of high-throughput data and interdisciplinary skills. One way to bring to bear the necessary combination of data types and expertise is by encapsulating domain knowledge in software and composing that software to create a customized data analysis environment. To this end, simple flexible strategies are needed for interconnecting heterogeneous software tools and enabling data exchange between them. Drawing on our own work and that of others, we present several strategies for interoperability and their consequences, in particular, a set of simple data structures--list, matrix, network, table and tuple--that have proven sufficient to achieve a high degree of interoperability. We provide a few guidelines for the development of future software that will function as part of an interoperable community of software tools for biological data analysis and visualization.
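
    The paper's central claim is that a handful of simple shared structures suffice for tool interoperability. A minimal sketch of that idea follows, with invented data: a "table" as a list of records and a "network" as an edge list, each consumable by any tool that understands the shape.

    ```python
    # Sketch of the "simple structures" idea: a table as a list of records and
    # a network as an edge list. The data here is invented.
    table = [
        {"gene": "dnaA", "expression": 2.4},
        {"gene": "recA", "expression": 0.7},
    ]

    network = [("dnaA", "recA"), ("recA", "lexA")]   # (source, target) edges

    def high_expression(rows, threshold):
        """Filter step usable by any consumer of list-of-record tables."""
        return [r for r in rows if r["expression"] >= threshold]

    def neighbors(edges, node):
        """Network step: nodes adjacent to `node` in an edge-list network."""
        return [b for a, b in edges if a == node] + [a for a, b in edges if b == node]

    print(high_expression(table, 1.0))
    print(neighbors(network, "recA"))
    ```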


  12. MENTOR: an enabler for interoperable intelligent systems

    NASA Astrophysics Data System (ADS)

    Sarraipa, João; Jardim-Goncalves, Ricardo; Steiger-Garcao, Adolfo

    2010-07-01

    A community with knowledge organisation based on ontologies will enable an increase in the computational intelligence of its information systems. However, due to the worldwide diversity of communities, a high number of knowledge representation elements, which are not semantically coincident, have appeared to represent the same segment of reality, becoming a barrier to business communications. Even if a domain community uses the same kinds of technologies in its information systems, such as ontologies, this does not resolve its semantic differences. In order to solve this interoperability problem, a solution is to use a reference ontology as an intermediary in the communications between the community enterprises and the outside, while allowing the enterprises to keep their own ontology and semantics unchanged internally. This work proposes MENTOR, a methodology to support the development of a common reference ontology for a group of organisations sharing the same business domain. This methodology is based on the mediator ontology (MO) concept, which assists the semantic transformations among each enterprise's ontology and the referential one. The MO enables each organisation to keep its own terminology, glossary and ontological structures, while providing seamless communication and interaction with the others.
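
    The mediator-ontology idea reduces, at its core, to translating through a shared reference term rather than pairwise between enterprises. A minimal sketch with invented vocabularies follows; MENTOR itself is a methodology, not this code.

    ```python
    # Minimal sketch of the mediator ontology (MO) idea: each enterprise keeps
    # its own terminology, and translation goes through the reference term.
    MEDIATOR = {
        "enterpriseA": {"screw_m4": "ref:FastenerM4"},
        "enterpriseB": {"M4-bolt":  "ref:FastenerM4"},
    }

    def translate(term, source, target):
        """Map a source-enterprise term to the target enterprise's term via
        the reference ontology concept both map to."""
        ref = MEDIATOR[source][term]
        inverse = {v: k for k, v in MEDIATOR[target].items()}
        return inverse[ref]

    print(translate("screw_m4", "enterpriseA", "enterpriseB"))  # M4-bolt
    ```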

  13. IMPI: Making MPI Interoperable

    PubMed Central

    George, William L.; Hagedorn, John G.; Devaney, Judith E.

    2000-01-01

    The Message Passing Interface (MPI) is the de facto standard for writing parallel scientific applications in the message passing programming paradigm. Implementations of MPI were not designed to interoperate, thereby limiting the environments in which parallel jobs could be run. We briefly describe a set of protocols, designed by a steering committee of current implementors of MPI, that enable two or more implementations of MPI to interoperate within a single application. Specifically, we introduce the set of protocols collectively called Interoperable MPI (IMPI). These protocols make use of novel techniques to handle difficult requirements such as maintaining interoperability among all IMPI implementations while also allowing for the independent evolution of the collective communication algorithms used in IMPI. Our contribution to this effort has been as a facilitator for meetings, editor of the IMPI Specification document, and as an early testbed for implementations of IMPI. This testbed is in the form of an IMPI conformance tester, a system that can verify the correct operation of an IMPI-enabled version of MPI. PMID:27551614

  14. Empowering open systems through cross-platform interoperability

    NASA Astrophysics Data System (ADS)

    Lyke, James C.

    2014-06-01

    Most of the motivations for open systems lie in the expectation of interoperability, sometimes referred to as "plug-and-play". Nothing in the notion of "open-ness", however, guarantees this outcome, which makes the increased interest in open architecture more perplexing. In this paper, we explore certain themes of open architecture. We introduce the concept of "windows of interoperability", which can be used to align disparate portions of architecture. Such "windows of interoperability", which concentrate on a reduced set of protocol and interface features, might achieve many of the broader purposes assigned as benefits in open architecture. Since it is possible to engineer proprietary systems that interoperate effectively, this nuanced definition of interoperability may in fact be a more important concept to understand and nurture for effective systems engineering and maintenance.

  15. Model and Interoperability using Meta Data Annotations

    NASA Astrophysics Data System (ADS)

    David, O.

    2011-12-01

    Software frameworks and architectures are in need of metadata to efficiently support model integration. Modelers have to know the context of a model, often stepping into modeling semantics and auxiliary information usually not provided in a concise structure and universal format, consumable by a range of (modeling) tools. XML often seems the obvious solution for capturing meta data, but its wide adoption to facilitate model interoperability is limited by XML schema fragmentation, complexity, and verbosity outside of a data-automation process. Ontologies seem to overcome those shortcomings, however the practical significance of their use remains to be demonstrated. OMS version 3 took a different approach for meta data representation. The fundamental building block of a modular model in OMS is a software component representing a single physical process, calibration method, or data access approach. Here, programming language features known as Annotations or Attributes were adopted. Within other (non-modeling) frameworks it has been observed that annotations lead to cleaner and leaner application code. Framework-supported model integration, traditionally accomplished using Application Programming Interfaces (API) calls is now achieved using descriptive code annotations. Fully annotated components for various hydrological and Ag-system models now provide information directly for (i) model assembly and building, (ii) data flow analysis for implicit multi-threading or visualization, (iii) automated and comprehensive model documentation of component dependencies, physical data properties, (iv) automated model and component testing, calibration, and optimization, and (v) automated audit-traceability to account for all model resources leading to a particular simulation result. Such a non-invasive methodology leads to models and modeling components with only minimal dependencies on the modeling framework but a strong reference to its originating code. Since models and
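
    OMS3 annotates Java components; the Python decorator sketch below only mimics the idea that declarative metadata attached to a component, rather than framework API calls inside it, can drive assembly, documentation and testing. All names here are invented for illustration.

    ```python
    # Decorator-based component metadata, analogous to code annotations.
    def component(description, inputs, outputs):
        def wrap(cls):
            cls.__meta__ = {"description": description,
                            "inputs": inputs, "outputs": outputs}
            return cls
        return wrap

    @component(description="Degree-day snowmelt process",
               inputs={"temp_c": "degC", "snowpack_mm": "mm"},
               outputs={"melt_mm": "mm"})
    class SnowMelt:
        def run(self, temp_c, snowpack_mm, factor=2.0):
            return min(snowpack_mm, max(temp_c, 0.0) * factor)

    # A framework can read SnowMelt.__meta__ to wire data flow or generate
    # documentation without the component depending on framework APIs.
    print(SnowMelt.__meta__["inputs"])
    ```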

  16. Maturity model for enterprise interoperability

    NASA Astrophysics Data System (ADS)

    Guédria, Wided; Naudet, Yannick; Chen, David

    2015-01-01

    Historically, progress occurs when entities communicate, share information and together create something that no one individually could do alone. Moving beyond people to machines and systems, interoperability is becoming a key factor of success in all domains. In particular, interoperability has become a challenge for enterprises, to exploit market opportunities, to meet their own objectives of cooperation or simply to survive in a growing competitive world where the networked enterprise is becoming a standard. Within this context, many research works have been conducted over the past few years and enterprise interoperability has become an important area of research, ensuring the competitiveness and growth of European enterprises. Among others, enterprises have to control their interoperability strategy and enhance their ability to interoperate. This is the purpose of the interoperability assessment. Assessing interoperability maturity allows a company to know its strengths and weaknesses in terms of interoperability with its current and potential partners, and to prioritise actions for improvement. The objective of this paper is to define a maturity model for enterprise interoperability that takes into account existing maturity models while extending the coverage of the interoperability domain. The assessment methodology is also presented. Both are demonstrated with a real case study.

  17. National Flood Interoperability Experiment

    NASA Astrophysics Data System (ADS)

    Maidment, D. R.

    2014-12-01

    The National Flood Interoperability Experiment is led by the academic community in collaboration with the National Weather Service through the new National Water Center recently opened on the Tuscaloosa campus of the University of Alabama. The experiment will also involve the partners in IWRSS (Integrated Water Resources Science and Services), which include the USGS, the Corps of Engineers and FEMA. The experiment will address the following questions: (1) How can near-real-time hydrologic forecasting at high spatial resolution, covering the nation, be carried out using the NHDPlus or next-generation geofabric (e.g. hillslope, watershed scales)? (2) How can this lead to improved emergency response and community resilience? (3) How can an improved interoperability framework support the first two goals and lead to sustained innovation in the research-to-operations process? The experiment will run from September 2014 through August 2015, in two phases. The mobilization phase from September 2014 until May 2015 will assemble the components of the interoperability framework. A Summer Institute to integrate the components will be held from June to August 2015 at the National Water Center, involving faculty and students from the University of Alabama and other institutions, coordinated by CUAHSI. It is intended that the insight that arises from this experiment will help lay the foundation for a new national-scale, high-spatial-resolution, near-real-time hydrologic simulation system for the United States.

  18. The semantic web in translational medicine: current applications and future directions

    PubMed Central

    Machado, Catia M.; Rebholz-Schuhmann, Dietrich; Freitas, Ana T.; Couto, Francisco M.

    2015-01-01

    Semantic web technologies offer an approach to data integration and sharing, even for resources developed independently or broadly distributed across the web. This approach is particularly suitable for scientific domains that profit from large amounts of data that reside in the public domain and that have to be exploited in combination. Translational medicine is such a domain, which in addition has to integrate private data from the clinical domain with proprietary data from the pharmaceutical domain. In this survey, we present the results of our analysis of translational medicine solutions that follow a semantic web approach. We assessed these solutions in terms of their target medical use case; the resources covered to achieve their objectives; and their use of existing semantic web resources for the purposes of data sharing, data interoperability and knowledge discovery. The semantic web technologies seem to fulfill their role in facilitating the integration and exploration of data from disparate sources, but it is also clear that simply using them is not enough. It is fundamental to reuse resources, to define mappings between resources, to share data and knowledge. All these aspects allow the instantiation of translational medicine at the semantic web-scale, thus resulting in a network of solutions that can share resources for a faster transfer of new scientific results into the clinical practice. The envisioned network of translational medicine solutions is on its way, but it still requires resolving the challenges of sharing protected data and of integrating semantic-driven technologies into the clinical practice. PMID:24197933


  20. Best Practices for Preparing Interoperable Geospatial Data

    NASA Astrophysics Data System (ADS)

    Wei, Y.; Santhana Vannan, S.; Cook, R. B.; Wilson, B. E.; Beaty, T. W.

    2010-12-01

    mechanisms to advertise, visualize, and distribute the standardized geospatial data. In this presentation, we summarize the experiences learned and the best practices for geospatial data standardization. The presentation will describe how diverse and historical data archived in the ORNL DAAC were converted into standard and non-proprietary formats; what tools were used to make the conversion; how the spatial and temporal information are properly captured in a consistent manner; how to name a data file or a variable to make it both human-friendly and semantically interoperable; how the NetCDF file format and CF convention can promote data usage in the ecosystem modeling user community; how those standardized geospatial data can be fed into OGC Web Services to support on-demand data visualization and access; and how the metadata should be collected and organized so that they can be discovered through standard catalog services.
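
    A small sketch of the NetCDF/CF practice mentioned above, written with xarray: the variable and values are invented, while the attribute names ("standard_name", "units", the global "Conventions") follow the CF convention.

    ```python
    # Sketch of writing a small CF-convention NetCDF file with xarray.
    import numpy as np
    import xarray as xr

    ds = xr.Dataset(
        {"tas": (("time", "lat", "lon"),
                 np.random.rand(2, 3, 4).astype("float32"))},
        coords={"time": [0, 1],
                "lat": [-30.0, 0.0, 30.0],
                "lon": [0.0, 90.0, 180.0, 270.0]},
    )
    ds["tas"].attrs = {"standard_name": "air_temperature", "units": "K"}
    ds["lat"].attrs = {"standard_name": "latitude", "units": "degrees_north"}
    ds["lon"].attrs = {"standard_name": "longitude", "units": "degrees_east"}
    ds["time"].attrs = {"units": "days since 2000-01-01"}
    ds.attrs["Conventions"] = "CF-1.8"
    ds.to_netcdf("tas_example.nc")
    ```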

  1. Interoperability of heterogeneous distributed systems

    NASA Astrophysics Data System (ADS)

    Zaschke, C.; Essendorfer, B.; Kerth, C.

    2016-05-01

    To achieve knowledge superiority in today's operations, interoperability is key. Budget restrictions as well as the complexity and multiplicity of threats, combined with the fact that not single nations but whole areas are subject to attacks, force nations to collaborate and share information as appropriate. Multiple data and information sources produce different kinds of data, real time and non-real time, in different formats that are disseminated to the respective command and control level for further distribution. The data is most of the time highly sensitive and restricted in terms of sharing. The question is how to make this data available to the right people at the right time with the right granularity. The Coalition Shared Data concept aims to provide a solution to these questions. It has been developed within several multinational projects and evolved over time. A continuous improvement process was established and resulted in the adaptation of the architecture as well as the technical solution and the processes it supports. Starting from the idea of making use of existing standards, basing the concept on sharing of data through standardized interfaces and formats, and enabling metadata-based queries, the concept merged with a more sophisticated service-based approach. The paper addresses concepts for information sharing to facilitate interoperability between heterogeneous distributed systems. It introduces the methods that were used and the challenges that had to be overcome. Furthermore, the paper gives a perspective on how the concept could be used in the future and what measures have to be taken to successfully bring it into operations.

  2. Leveraging the Semantic Web for Adaptive Education

    ERIC Educational Resources Information Center

    Kravcik, Milos; Gasevic, Dragan

    2007-01-01

    In the area of technology-enhanced learning, reusability and interoperability issues essentially influence the productivity and efficiency of learning and authoring solutions. There are two basic approaches to overcoming these problems--one attempts to do so via standards and the other by means of the Semantic Web. In practice, these approaches…

  3. Developing Interoperable Air Quality Community Portals

    NASA Astrophysics Data System (ADS)

    Falke, S. R.; Husar, R. B.; Yang, C. P.; Robinson, E. M.; Fialkowski, W. E.

    2009-04-01

    Web portals are intended to provide consolidated discovery, filtering and aggregation of content from multiple, distributed web sources targeted at particular user communities. This paper presents a standards-based information architectural approach to developing portals aimed at air quality community collaboration in data access and analysis. An important characteristic of the approach is to advance beyond the present stand-alone design of most portals to achieve interoperability with other portals and information sources. We show how using metadata standards, web services, RSS feeds and other Web 2.0 technologies, such as Yahoo! Pipes and del.icio.us, helps increase interoperability among portals. The approach is illustrated within the context of the GEOSS Architecture Implementation Pilot where an air quality community portal is being developed to provide a user interface between the portals and clearinghouse of the GEOSS Common Infrastructure and the air quality community catalog of metadata and data services.
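
    A small sketch of the feed-aggregation pattern mentioned above, using the feedparser library with invented feed URLs; the real portal's sources and merge logic are not described in the abstract, so this only illustrates the general mechanism.

    ```python
    # Sketch of cross-portal RSS aggregation with feedparser.
    import time
    import feedparser

    FEEDS = [
        "https://example.org/airquality/alerts.rss",
        "https://example.net/aq-portal/updates.rss",
    ]

    entries = []
    for url in FEEDS:
        entries.extend(feedparser.parse(url).entries)

    # Merge newest-first; entries from different portals share a common schema.
    entries.sort(key=lambda e: e.get("published_parsed") or time.gmtime(0),
                 reverse=True)
    for e in entries[:5]:
        print(e.get("title", "(untitled)"))
    ```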

  4. An Approach towards Enterprise Interoperability Assessment

    NASA Astrophysics Data System (ADS)

    Razavi, Mahsa; Aliee, Fereidoon Shams

    Enterprise Architecture (EA), as a discipline with numerous and enterprise-wide models, can support decision making on enterprise-wide issues. In order to provide such support, EA models should be amenable to analysis of various utilities and quality attributes. This paper provides a method for EA interoperability analysis. This approach is based on the Analytical Hierarchy Process (AHP) and considers the situation of the enterprise in giving weight to the different criteria and sub-criteria of each utility. It proposes a quantitative method of assessing the interoperability achievement of different scenarios using AHP, based on the knowledge and experience of EA experts and domain experts, and helps in deciding between them. The applicability of the proposed approach is demonstrated using a practical case study.
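
    The core AHP step in such an assessment is deriving priority weights from a pairwise comparison matrix via its principal eigenvector. The sketch below uses invented criteria and judgements, not the paper's case-study data.

    ```python
    # AHP priority weights from a pairwise comparison matrix (illustrative).
    import numpy as np

    # A[i][j]: how much more important criterion i is than criterion j,
    # e.g. for data, process and organisational interoperability.
    A = np.array([[1.0, 3.0, 5.0],
                  [1/3, 1.0, 2.0],
                  [1/5, 1/2, 1.0]])

    eigvals, eigvecs = np.linalg.eig(A)
    principal = eigvecs[:, np.argmax(eigvals.real)].real
    weights = principal / principal.sum()
    print(weights.round(3))   # approx. [0.648 0.230 0.122]
    ```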

  5. An Interoperability Testing Study: Automotive Inventory Visibility and Interoperability

    SciTech Connect

    Ivezic, Nenad; Kulvatunyou, Boonserm; Frechette, Simon; Jones, Albert

    2004-01-01

    This paper describes a collaborative effort between the NIST and Korean Business-to-Business Interoperability Test Beds to support a global, automotive-industry interoperability project. The purpose of the collaboration is to develop a methodology for validation of interoperable data-content standards implemented across inventory visibility tools within an internationally adopted testing framework. In this paper we describe methods (1) to help the vendors consistently implement prescribed message standards and (2) to assess compliance of those implementations with respect to the prescribed data content standards. We also illustrate these methods in support of an initial proof of concept for an international IV&I scenario.

  6. Extending the GI Brokering Suite to Support New Interoperability Specifications

    NASA Astrophysics Data System (ADS)

    Boldrini, E.; Papeschi, F.; Santoro, M.; Nativi, S.

    2014-12-01

    The GI brokering suite provides the discovery, access, and semantic Brokers (i.e. GI-cat, GI-axe, GI-sem) that empower a Brokering framework for multi-disciplinary and multi-organizational interoperability. The GI suite has been successfully deployed in the framework of several programmes and initiatives, such as European Union funded projects, NSF BCube, and the intergovernmental coordinated effort Global Earth Observation System of Systems (GEOSS). Each GI suite Broker facilitates interoperability for a particular functionality (i.e. discovery, access, semantic extension) among a set of brokered resources published by autonomous providers (e.g. data repositories, web services, semantic assets) and a set of heterogeneous consumers (e.g. client applications, portals, apps). A wide set of data models, encoding formats, and service protocols are already supported by the GI suite, such as the ones defined by international standardizing organizations like OGC and ISO (e.g. WxS, CSW, SWE, GML, netCDF) and by Community specifications (e.g. THREDDS, OpenSearch, OPeNDAP, ESRI APIs). Using the GI suite, resources published by a particular Community or organization through their specific technology (e.g. OPeNDAP/netCDF) can be transparently discovered, accessed, and used by different Communities utilizing their preferred tools (e.g. a GIS visualizing WMS layers). Since Information Technology is a moving target, new standards and technologies continuously emerge and are adopted in the Earth Science context too. Therefore, the GI Brokering suite was conceived to be flexible and to accommodate new interoperability protocols and data models. For example, the GI suite has recently added support for well-used specifications introduced to implement Linked Data, the Semantic Web, and specific community needs. Among others, these include: DCAT, an RDF vocabulary designed to facilitate interoperability between Web data catalogs; and CKAN, a data management system for data distribution, particularly used by

  7. A semantically rich and standardised approach enhancing discovery of sensor data and metadata

    NASA Astrophysics Data System (ADS)

    Kokkinaki, Alexandra; Buck, Justin; Darroch, Louise

    2016-04-01

    The marine environment plays an essential role in the earth's climate. To enhance the ability to monitor the health of this important system, innovative sensors are being produced and combined with state-of-the-art sensor technology. As the number of deployed sensors continually increases, it is a challenge for data users to find the data that meet their specific needs. Furthermore, users need to integrate diverse ocean datasets originating from the same or even different systems. Standards provide a solution to the above-mentioned challenges. The Open Geospatial Consortium (OGC) has created Sensor Web Enablement (SWE) standards that enable different sensor networks to establish syntactic interoperability. When combined with widely accepted controlled vocabularies, they become semantically rich and semantic interoperability is achievable. In addition, Linked Data is the recommended best practice for exposing, sharing and connecting information on the Semantic Web using Uniform Resource Identifiers (URIs), the Resource Description Framework (RDF) and the RDF Query Language (SPARQL). As part of the EU-funded SenseOCEAN project, the British Oceanographic Data Centre (BODC) is working on the standardisation of sensor metadata enabling 'plug and play' sensor integration. Our approach combines standards, controlled vocabularies and persistent URIs to publish sensor descriptions, their data and associated metadata as 5-star Linked Data and the OGC SWE (SensorML, Observations & Measurements) standards. Thus sensors become readily discoverable, accessible and usable via the web. Content- and context-based searching is also enabled since sensor descriptions are understood by machines. Additionally, sensor data can be combined with other sensor or Linked Data datasets to form knowledge. This presentation will describe the work done in BODC to achieve syntactic and semantic interoperability in the sensor domain. It will illustrate the reuse and extension of the Semantic Sensor
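
    A sketch of how a data user might query such Linked Data sensor descriptions over SPARQL follows. The endpoint URL and the data behind it are invented; the SOSA/SSN vocabulary (sosa:Sensor, sosa:observes) is a real W3C recommendation.

    ```python
    # Querying a SPARQL endpoint for sensor descriptions (illustrative).
    from SPARQLWrapper import SPARQLWrapper, JSON

    sparql = SPARQLWrapper("https://example.org/sparql")   # hypothetical endpoint
    sparql.setQuery("""
    PREFIX sosa: <http://www.w3.org/ns/sosa/>
    SELECT ?sensor ?property WHERE {
      ?sensor a sosa:Sensor ;
              sosa:observes ?property .
    } LIMIT 10
    """)
    sparql.setReturnFormat(JSON)
    results = sparql.query().convert()
    for b in results["results"]["bindings"]:
        print(b["sensor"]["value"], b["property"]["value"])
    ```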

  8. Enhanced semantic interpretability by healthcare standards profiling.

    PubMed

    Lopez, Diego M; Blobel, Bernd G M E

    2008-01-01

    Several current healthcare standards support semantic interoperability. However, these standards are far from being completely adopted in health information system development. The objective of this paper is to provide a method and the necessary tooling for reusing healthcare standards by exploiting the extensibility mechanisms of UML, thereby supporting the development of semantically interoperable systems and components. The method first identifies the models and tasks in the software development process in which health care standards can be reused. Then, the selected standard is formalized as a UML profile. Finally, that profile is applied to system models, annotating them with the standard's semantics. The supporting tools are Eclipse-based UML modeling tools. The method is integrated into a comprehensive framework for health information systems development. The feasibility of the approach is exemplified by a scenario reusing HL7 RIM and DIMs specifications. The approach presented is also applicable for harmonizing different standard specifications.

  9. A study on heterogeneous distributed spatial information platform based on semantic Web services

    NASA Astrophysics Data System (ADS)

    Peng, Shuang-yun; Yang, Kun; Xu, Quan-li; Huang, Bang-mei

    2008-10-01

    With the development of Semantic Web technology, ontology-based spatial information services are an effective way of sharing and interoperating heterogeneous information resources in a distributed network environment. This paper discusses spatial information sharing and interoperability in the Semantic Web Services architecture. Using ontologies to record spatial information in a shared knowledge system allows implicit and hidden semantic information to be expressed explicitly and formally, which is a prerequisite for spatial information sharing and interoperability. Semantic Web Services technology parses the ontologies and builds intelligent services in the network environment, forming a network of services. In order to realize practical applications of spatial information sharing and interoperation in different branches of the CDC system, a prototype system for HIV/AIDS information sharing based on geo-ontology has also been developed using the methods described above.

  10. Interoperability among Space Station Freedom data systems - A field test of standards

    NASA Technical Reports Server (NTRS)

    Whitelaw, Virginia A.; Marker, Walter S.

    1990-01-01

    A development status evaluation is presented for NASA efforts toward the achievement of interoperability among Space Station Freedom (SSF) data systems, despite the complexity created by the number and wide distribution of such systems, the intensive international participation, and the use of time-phased development cycles. Four areas have been identified as essential: communications interoperability, data interoperability, crew interface interoperability, and payload interface interoperability. Accelerated efforts are needed to define and refine detailed interface designs before developments by each of the SSF's international partners come to take fundamentally incompatible courses.

  11. Political, policy and social barriers to health system interoperability: emerging opportunities of Web 2.0 and 3.0.

    PubMed

    Juzwishin, Donald W M

    2009-01-01

    Achieving effective health informatics interoperability in a fragmented and uncoordinated health system is by definition not possible. Interoperability requires the simultaneous integration of health care processes and information across different types and levels of care (systems thinking). The fundamental argument of this paper is that information system interoperability will remain an unfulfilled hope until health reforms effectively address the governance (accountability), structural and process barriers to interoperability of health care delivery. The ascendency of Web 2.0 and 3.0, although still unproven, signals the opportunity to accelerate patients' access to health information and their health record. Policy suggestions for simultaneously advancing health system delivery and information system interoperability are posited.

  12. Generative Semantics.

    ERIC Educational Resources Information Center

    King, Margaret

    The first section of this paper deals with the attempts within the framework of transformational grammar to make semantics a systematic part of linguistic description, and outlines the characteristics of the generative semantics position. The second section takes a critical look at generative semantics in its later manifestations, and makes a case…

  13. Interoperability of clinical decision-support systems and electronic health records using archetypes: a case study in clinical trial eligibility.

    PubMed

    Marcos, Mar; Maldonado, Jose A; Martínez-Salvador, Begoña; Boscá, Diego; Robles, Montserrat

    2013-08-01

    patient recruitment in the framework of a clinical trial for colorectal cancer screening. The utilisation of archetypes not only has proved satisfactory to achieve interoperability between CDSSs and EHRs but also offers various advantages, in particular from a data model perspective. First, the VHR/data models we work with are of a high level of abstraction and can incorporate semantic descriptions. Second, archetypes can potentially deal with different EHR architectures, due to their deliberate independence of the reference model. Third, the archetype instances we obtain are valid instances of the underlying reference model, which would enable e.g. feeding back the EHR with data derived by abstraction mechanisms. Lastly, the medical and technical validity of archetype models would be assured, since in principle clinicians should be the main actors in their development. PMID:23707417

  14. Smart Grid Interoperability Maturity Model

    SciTech Connect

    Widergren, Steven E.; Levinson, Alex; Mater, J.; Drummond, R.

    2010-04-28

    The integration of automation associated with electricity resources (including transmission and distribution automation and demand-side resources operated by end-users) is key to supporting greater efficiencies and incorporating variable renewable resources and electric vehicles into the power system. The integration problems faced by this community are analogous to those faced in the health industry, emergency services, and other complex communities with many stakeholders. To highlight this issue and encourage communication and the development of a smart grid interoperability community, the GridWise Architecture Council (GWAC) created an Interoperability Context-Setting Framework. This "conceptual model" has been helpful to explain the importance of organizational alignment in addition to technical and informational interface specifications for "smart grid" devices and systems. As a next step to building a community sensitive to interoperability, the GWAC is investigating an interoperability maturity model (IMM) based on work done by others to address similar circumstances. The objective is to create a tool or set of tools that encourages a culture of interoperability in this emerging community. The tools would measure status and progress, analyze gaps, and prioritize efforts to improve the situation.

  15. Advancing Smart Grid Interoperability and Implementing NIST's Interoperability Roadmap

    SciTech Connect

    Basso,T.; DeBlasio, R.

    2010-04-01

    The IEEE American National Standards project P2030TM addressing smart grid interoperability and the IEEE 1547 series of standards addressing distributed resources interconnection with the grid have been identified in priority action plans in the Report to NIST on the Smart Grid Interoperability Standards Roadmap. This paper presents the status of the IEEE P2030 development, the IEEE 1547 series of standards publications and drafts, and provides insight on systems integration and grid infrastructure. The P2030 and 1547 series of standards are sponsored by IEEE Standards Coordinating Committee 21.

  16. Community-oriented Implementation of Interoperability Standards (Invited)

    NASA Astrophysics Data System (ADS)

    Falke, S. R.

    2010-12-01

    Standards are necessary for interoperability but alone they are insufficient for attaining interoperability among information systems. An important characteristic, and key challenge, of interoperability is the implementation of standards. Standards are interpreted differently by different organizations and a result is a lack of interoperability despite each organization being able to rightfully claim they support standards. In this talk, the focus is on data access standards, particularly the spatial-temporal filtering and subsetting through the Open Geospatial Consortium (OGC) Web Coverage Service (WCS), Web Map Service (WMS), and Sensor Observation Service (SOS) standards. The talk will highlight a technology infusion strategy of collaboratively working within a domain community in order to achieve standards implementation conventions - commonly accepted methods of implementing standards across a particular community. The approach of community-oriented conventions for standards implementation allows multiple groups to bring their individual approaches to the table, share experiences, identify particular aspects of the standards where they must reconcile differences, and develop sets of best practices for others in the community to follow for creating networked and interoperable web services. The primary example used to highlight the approach is the ongoing interoperability efforts of the Committee on Earth Observation Satellites (CEOS) Atmospheric Composition Portal that is collaboratively developing best practices for implementing standards in publishing and using remotely sensed atmospheric composition data. Secondary examples are provided from the Global Earth Observation System of Systems (GEOSS) Architecture Implementation Pilot (AIP) Air Quality & Health Working Group, Geo-interface for Air, Land, Earth, Oceans NetCDF Interoperability Experiment (GALEON), and Federation of Earth Science Information Partners (ESIP) Air Quality Working Group.

  17. D-ATM, a working example of health care interoperability: From dirt path to gravel road.

    PubMed

    DeClaris, John-William

    2009-01-01

    For many years, there have been calls for interoperability within health care systems. The technology currently exists and is being used in business areas like banking and commerce, to name a few. Yet the question remains, why has interoperability not been achieved in health care? This paper examines issues encountered and success achieved with interoperability during the development of the Digital Access To Medication (D-ATM) project, sponsored by the Substance Abuse and Mental Health Services Administration (SAMHSA). D-ATM is the first government funded interoperable patient management system. The goal of this paper is to provide lessons learned and propose one possible road map for health care interoperability within private industry and how government can help. PMID:19963614

  18. Internet-Based Solutions for Manufacturing Enterprise Systems Interoperability - A Standards Perspective

    SciTech Connect

    Ivezic, Nenad; Kulvatunyou, Boonserm; Jones, Albert

    2004-10-01

    This chapter reviews efforts of selected standards consortia to develop Internet-based approaches for interoperable manufacturing enterprise information systems. The focus of the chapter is on the efforts to capture common meaning of data exchanged among interoperable information systems inside and outside a manufacturing enterprise. We start this chapter by giving a general overview of the key concepts in standards approaches to enable interoperable manufacturing enterprise systems. These approaches are compared on the basis of several characteristics found in standards frameworks such as horizontal or vertical focus of the standard, the standard message content definitions, the standard process definitions, and dependence on specific standard messaging solutions. After this initial overview, we establish one basis for reasoning about interoperable information systems by recognizing key manufacturing enterprise objects managed and exchanged both inside and outside the enterprise. Such conceptual objects are coarse in granularity and are meant to drive semantic definitions of data interchanges by providing a shared context for data dictionaries detailing the semantics of these objects and interactions or processes involved in data exchange. In the case of intra-enterprise interoperability, we recognize enterprise information processing activities, responsibilities, and those high-level conceptual objects exchanged in interactions among systems to fulfill the assigned responsibilities. Here, we show a mapping of one content standard onto the identified conceptual objects. In the case of inter-enterprise interoperability, we recognize key business processes areas and enumerate high-level conceptual objects that need to be exchanged among supply chain or trading partners. Here, we also show example mappings of representative content standards onto the identified conceptual objects. We complete this chapter by providing an account of some advanced work to enhance

  19. Exploring NASA GES DISC Data with Interoperable Services

    NASA Technical Reports Server (NTRS)

    Zhao, Peisheng; Yang, Wenli; Hegde, Mahabal; Wei, Jennifer C.; Kempler, Steven; Pham, Long; Teng, William; Savtchenko, Andrey

    2015-01-01

    Overview of NASA GES DISC (NASA Goddard Earth Science Data and Information Services Center) data with interoperable services. Open-standard, interoperable services improve data discoverability, accessibility, and usability with metadata, catalogue, and portal standards, and achieve data, information, and knowledge sharing across applications with standardized interfaces and protocols. Relevant Open Geospatial Consortium (OGC) data services and specifications include: the Web Coverage Service (WCS), serving data; the Web Map Service (WMS), serving pictures of data; the Web Map Tile Service (WMTS), serving pictures of data tiles; and Styled Layer Descriptors (SLD), for rendered styles.

  20. CCP interoperability and system stability

    NASA Astrophysics Data System (ADS)

    Feng, Xiaobing; Hu, Haibo

    2016-09-01

    To control counterparty risk, financial regulations such as the Dodd-Frank Act are increasingly requiring standardized derivatives trades to be cleared by central counterparties (CCPs). It is anticipated that in the near term future, CCPs across the world will be linked through interoperability agreements that facilitate risk sharing but also serve as a conduit for transmitting shocks. This paper theoretically studies a networked network with CCPs that are linked through interoperability arrangements. The major finding is that the different configurations of networked network CCPs contribute to the different properties of the cascading failures.

  1. Building a logical EHR architecture based on ISO 13606 standard and semantic web technologies.

    PubMed

    Santos, Marcelo R; Bax, Marcello P; Kalra, Dipak

    2010-01-01

    Among the existing patterns of EHR interoperability, the ISO 13606 standard is an important consideration. It is believed that the use of this norm, in conjunction with semantic technologies, may aid in the construction of a robust architecture, keeping in mind the challenges of semantic interoperability. The objective of this paper is to present a proposal for an EHR architecture, based on ISO 13606 and on the utilization of semantic technologies, for a real EHR scenario. In order to accomplish that, a real EHR scenario is described, as well as its main interoperability requirements, and a candidate architecture is proposed to solve the presented challenges of interoperability. The ability of the ISO 13606 EHR reference model to accommodate the scenario was highlighted, together with the support provided by the use of the ontology specification languages--RDF and OWL--with respect to the maintenance of a controlled vocabulary.

  2. Dynamic Business Networks: A Headache for Sustainable Systems Interoperability

    NASA Astrophysics Data System (ADS)

    Agostinho, Carlos; Jardim-Goncalves, Ricardo

    Collaborative networked environments emerged with the spread of the internet, contributing to overcoming past communication barriers and identifying interoperability as an essential property. When interoperability is achieved seamlessly, efficiency increases across the entire product life cycle. Nowadays, most organizations try to attain interoperability by establishing peer-to-peer mappings with their different partners or, in optimized networks, by using international standard models as the core for information exchange. In current industrial practice, mappings are defined only once, and the morphisms that represent them are hardcoded in the enterprise systems. This solution has been effective for static environments, where enterprise and product models are valid for decades. However, with an increasingly complex and dynamic global market, models change frequently to answer new customer requirements. This paper draws concepts from complex systems science and proposes a framework for sustainable systems interoperability in dynamic networks, enabling different organizations to evolve at their own rate.

  3. An ontological system for interoperable spatial generalisation in biodiversity monitoring

    NASA Astrophysics Data System (ADS)

    Nieland, Simon; Moran, Niklas; Kleinschmit, Birgit; Förster, Michael

    2015-11-01

    Semantic heterogeneity remains a barrier to data comparability and standardisation of results in different fields of spatial research. Because of its thematic complexity, differing acquisition methods and national nomenclatures, interoperability of biodiversity monitoring information is especially difficult. Since data collection methods and interpretation manuals vary broadly, there is a need for automatised, objective methodologies for the generation of comparable data-sets. Ontology-based applications offer vast opportunities in data management and standardisation. This study examines two data-sets of protected heathlands in Germany and Belgium which are based on remote sensing image classification and semantically formalised in an OWL2 ontology. The proposed methodology uses semantic relations of the two data-sets, which are (semi-)automatically derived from remote sensing imagery, to generate objective and comparable information about the status of protected areas by utilising kernel-based spatial reclassification. This automatised method suggests a generalisation approach, which is able to generate delineations of Special Areas of Conservation (SAC) of the European Natura 2000 biodiversity network. Furthermore, it is able to transfer generalisation rules between areas surveyed with varying acquisition methods in different countries by taking into account automated inference of the underlying semantics. The generalisation results were compared with the manual delineation of terrestrial monitoring. For the different habitats in the two sites, an accuracy above 70% was achieved. However, it has to be highlighted that the delineation of the ground-truth data carries a high degree of uncertainty, which is discussed in this study.
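
    As a generic illustration of kernel-based spatial reclassification (a plain majority filter; the study's actual rules are derived from ontological inference, which is not reproduced here), consider this small Python/NumPy sketch.

      # 3x3 moving-kernel majority vote: a simple spatial generalisation
      # of a fine-grained class raster (illustrative class values only).
      import numpy as np
      from scipy.ndimage import generic_filter

      def majority(window):
          # window arrives as a flat float array of neighbourhood values
          return np.bincount(window.astype(int)).argmax()

      classes = np.array([[1, 1, 2, 2],
                          [1, 2, 2, 3],
                          [1, 1, 3, 3],
                          [1, 1, 1, 3]])

      generalised = generic_filter(classes, majority, size=3, mode="nearest")
      print(generalised)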

  4. A nursing information model process for interoperability.

    PubMed

    Chow, Marilyn; Beene, Murielle; O'Brien, Ann; Greim, Patricia; Cromwell, Tim; DuLong, Donna; Bedecarré, Diane

    2015-05-01

    The ability to share nursing data across organizations and electronic health records is a key component of improving care coordination and quality outcomes. Currently, substantial organizational and technical barriers limit the ability to share and compare essential patient data that inform nursing care. Nursing leaders at Kaiser Permanente and the U.S. Department of Veterans Affairs collaborated on the development of an evidence-based information model driven by nursing practice to enable data capture, re-use, and sharing between organizations and disparate electronic health records. This article describes a framework with repeatable steps and processes to enable the semantic interoperability of relevant and contextual nursing data. Hospital-acquired pressure ulcer prevention was selected as the prototype nurse-sensitive quality measure to develop and test the model. In a Health 2.0 Developer Challenge program from the Office of the National Coordinator for Health Information Technology, mobile applications implemented the model to help nurses assess the risk of hospital-acquired pressure ulcers and reduce their severity. The common information model can be applied to other nurse-sensitive measures to enable data standardization supporting patient transitions between care settings, quality reporting, and research.

  5. Improving Groundwater Data Interoperability: Results of the Second OGC Groundwater Interoperability Experiment

    NASA Astrophysics Data System (ADS)

    Lucido, J. M.; Booth, N.

    2014-12-01

    Interoperable sharing of groundwater data across international borders is essential for the proper management of global water resources. However, storage and management of groundwater data are often distributed across many agencies or organizations. Furthermore, these data may be represented in disparate proprietary formats, posing a significant challenge for integration. For this reason, standard data models are required to achieve interoperability across geographical and political boundaries. The GroundWater Markup Language 1.0 (GWML1) was developed in 2010 as an extension of the Geography Markup Language (GML) in order to support groundwater data exchange within Spatial Data Infrastructures (SDI). In 2013, development of GWML2 was initiated under the sponsorship of the Open Geospatial Consortium (OGC) for intended adoption by the international community as the authoritative standard for the transfer of groundwater feature data, including data about water wells, aquifers, and related entities. GWML2 harmonizes GWML1 and the EU's INSPIRE models related to geology and hydrogeology. Additionally, an interoperability experiment was initiated to test the model for commercial, technical, scientific, and policy use cases. The scientific use case focuses on the delivery of data required as input to computational flow modeling software used to determine the flow of groundwater within a particular aquifer system. It involves the delivery of properties associated with hydrogeologic units, observations related to those units, and information about the related aquifers. To test this use case, web services are being implemented using GWML2 and WaterML2, the authoritative standard for water time series observations, in order to serve USGS water well and hydrogeologic data via standard OGC protocols. Furthermore, integration of these data into a computational groundwater flow model will be tested. This submission will present the GWML2 information model and results of the interoperability experiment.
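
    For a feel of how such feature data might be requested over standard OGC protocols, here is a hedged Python sketch of a WFS 2.0 GetFeature URL; the endpoint is hypothetical and the GWML2 namespace prefix is assumed.

      # Build a WFS 2.0 GetFeature request for GWML2 water-well features.
      from urllib.parse import urlencode

      BASE = "https://example.usgs.gov/wfs"  # hypothetical endpoint
      params = {
          "service": "WFS",
          "version": "2.0.0",
          "request": "GetFeature",
          "typeNames": "gwml2:GW_Well",  # GWML2 well feature type (prefix assumed)
          "count": 10,                   # limit the response to ten features
          "bbox": "-91.0,43.0,-89.0,44.0,urn:ogc:def:crs:EPSG::4326",
      }
      print(BASE + "?" + urlencode(params))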

  6. Interoperability of Neuroscience Modeling Software

    PubMed Central

    Cannon, Robert C.; Gewaltig, Marc-Oliver; Gleeson, Padraig; Bhalla, Upinder S.; Cornelis, Hugo; Hines, Michael L.; Howell, Fredrick W.; Muller, Eilif; Stiles, Joel R.; Wils, Stefan; De Schutter, Erik

    2009-01-01

    Neuroscience increasingly uses computational models to assist in the exploration and interpretation of complex phenomena. As a result, considerable effort is invested in the development of software tools and technologies for numerical simulations and for the creation and publication of models. The diversity of related tools leads to the duplication of effort and hinders model reuse. Development practices and technologies that support interoperability between software systems therefore play an important role in making the modeling process more efficient and in ensuring that published models can be reliably and easily reused. Various forms of interoperability are possible, including the development of portable model description standards, the adoption of common simulation languages, or the use of standardized middleware. Each of these approaches finds applications within the broad range of current modeling activity. However, more effort is required in many areas to enable new scientific questions to be addressed. Here we present the conclusions of the “Neuro-IT Interoperability of Simulators” workshop, held at the 11th computational neuroscience meeting in Edinburgh (July 19-20 2006; http://www.cnsorg.org). We assess the current state of interoperability of neural simulation software and explore the future directions that will enable the field to advance. PMID:17873374

  7. Semantic Web for Manufacturing Web Services

    SciTech Connect

    Kulvatunyou, Boonserm; Ivezic, Nenad

    2002-06-01

    As markets become unexpectedly turbulent with a shortened product life cycle and a power shift towards buyers, the need for methods to rapidly and cost-effectively develop products, production facilities and supporting software is becoming urgent. The use of a virtual enterprise plays a vital role in surviving turbulent markets. However, its success requires reliable and large-scale interoperation among trading partners via a semantic web of trading partners' services whose properties, capabilities, and interfaces are encoded in an unambiguous as well as computer-understandable form. This paper demonstrates a promising approach to integration and interoperation between a design house and a manufacturer by developing semantic web services for business and engineering transactions. To this end, detailed activity and information flow diagrams are developed, in which the two trading partners exchange messages and documents. The properties and capabilities of the manufacturer sites are defined using the DARPA Agent Markup Language (DAML) ontology definition language. The prototype development of the semantic web services shows that enterprises can interoperate widely in an unambiguous and autonomous manner; hence, a virtual enterprise is realizable at low cost.

  8. The interoperability force in the ERP field

    NASA Astrophysics Data System (ADS)

    Boza, Andrés; Cuenca, Llanos; Poler, Raúl; Michaelides, Zenon

    2015-04-01

    Enterprise resource planning (ERP) systems participate in interoperability projects, and this participation sometimes leads to new proposals for the ERP field. The aim of this paper is to identify the role that interoperability plays in the evolution of ERP systems. To this end, ERP systems have first been identified within interoperability frameworks. Second, the initiatives in the ERP field driven by interoperability requirements have been identified from two perspectives: technological and business. The ERP field is evolving from classical ERP as information system integrators to a new generation of fully interoperable ERP. Interoperability is changing the way of running business, and ERP systems are changing to adapt to the current stream of interoperability.

  9. A Review of Ontologies with the Semantic Web in View.

    ERIC Educational Resources Information Center

    Ding, Ying

    2001-01-01

    Discusses the movement of the World Wide Web from the first generation to the second, called the Semantic Web. Provides an overview of ontology, a philosophical theory about the nature of existence being applied to artificial intelligence that will have a crucial role in enabling content-based access, interoperability, and communication across the…

  10. Extravehicular activity space suit interoperability

    NASA Astrophysics Data System (ADS)

    Skoog, A. Ingemar; McBarron, James W.; Severin, Guy I.

    1995-10-01

    The European Space Agency (ESA) and the Russian Space Agency (RKA) are jointly developing a new space suit system for improved extravehicular activity (EVA) capabilities in support of the MIR Space Station Programme, the EVA Suit 2000. Recent national policy agreements between the U.S. and Russia on planned cooperation in manned space also include joint extravehicular activity (EVA). With an increased number of space suit systems and a higher operational frequency towards the end of this century, an improved interoperability for both routine and emergency operations is of eminent importance. It is thus timely to report the current status of ongoing work on international EVA interoperability being conducted by the Committee on EVA Protocols and Operations of the International Academy of Astronautics, initiated in 1991. This paper summarises the current EVA interoperability issues to be harmonised and presents quantified vehicle interface requirements for the current U.S. Shuttle EMU and Russian MIR Orlan DMA and the new European/Russian EVA Suit 2000 extravehicular systems. Major critical/incompatible interfaces for suits/mothercraft of different combinations are discussed, and recommendations for standardisations are given.

  11. Extravehicular activity space suit interoperability.

    PubMed

    Skoog, A I; McBarron JW 2nd; Severin, G I

    1995-10-01

    The European Space Agency (ESA) and the Russian Space Agency (RKA) are jointly developing a new space suit system for improved extravehicular activity (EVA) capabilities in support of the MIR Space Station Programme, the EVA Suit 2000. Recent national policy agreements between the U.S. and Russia on planned cooperation in manned space also include joint extravehicular activity (EVA). With an increased number of space suit systems and a higher operational frequency towards the end of this century, an improved interoperability for both routine and emergency operations is of eminent importance. It is thus timely to report the current status of ongoing work on international EVA interoperability being conducted by the Committee on EVA Protocols and Operations of the International Academy of Astronautics, initiated in 1991. This paper summarises the current EVA interoperability issues to be harmonised and presents quantified vehicle interface requirements for the current U.S. Shuttle EMU and Russian MIR Orlan DMA and the new European/Russian EVA Suit 2000 extravehicular systems. Major critical/incompatible interfaces for suits/mother-craft of different combinations are discussed, and recommendations for standardisations are given.

  12. Semantic Desktop

    NASA Astrophysics Data System (ADS)

    Sauermann, Leo; Kiesel, Malte; Schumacher, Kinga; Bernardi, Ansgar

    This contribution shows what the workplace of the future could look like and where the Semantic Web opens up new possibilities. To this end, approaches from the areas of the Semantic Web, knowledge representation, desktop applications and visualization are presented that make it possible to reinterpret and reuse a user's existing data. The combination of the Semantic Web and desktop computers brings particular advantages - a paradigm known under the title Semantic Desktop. The described possibilities for application integration are not limited to the desktop, however, but can equally be used in web applications.

  13. Auto-Generated Semantic Processing Services

    NASA Technical Reports Server (NTRS)

    Davis, Rodney; Hupf, Greg

    2009-01-01

    Auto-Generated Semantic Processing (AGSP) Services is a suite of software tools for automated generation of other computer programs, denoted cross-platform semantic adapters, that support interoperability of computer-based communication systems that utilize a variety of both new and legacy communication software running in a variety of operating-system/computer-hardware combinations. AGSP has numerous potential uses in military, space-exploration, and other government applications as well as in commercial telecommunications. The cross-platform semantic adapters take advantage of common features of computer-based communication systems to enforce semantics, messaging protocols, and standards of processing of streams of binary data to ensure integrity of data and consistency of meaning among interoperating systems. The auto-generation aspect of AGSP Services reduces development time and effort by emphasizing specification and minimizing implementation: In effect, the design, building, and debugging of software for effecting conversions among complex communication protocols, custom device mappings, and unique data-manipulation algorithms is replaced with metadata specifications that map to an abstract platform-independent communications model. AGSP Services is modular and has been shown to be easily integrable into new and legacy NASA flight and ground communication systems.
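
    The specification-driven idea can be sketched in a few lines of Python: a declarative metadata description of a binary record layout is turned into a decoder function instead of hand-writing the parsing code. Field names and the layout are invented for illustration; AGSP's actual metadata format is not described in this record.

      # Generate a binary-message decoder from a declarative field spec.
      import struct

      SPEC = [                # (field name, struct format code) - hypothetical
          ("msg_id", "H"),    # 16-bit unsigned message identifier
          ("status", "B"),    # 8-bit status flag
          ("value",  "f"),    # 32-bit float payload
      ]

      def make_decoder(spec, byte_order=">"):
          """Build a decoder for one record layout from its metadata spec."""
          fmt = byte_order + "".join(code for _, code in spec)
          names = [name for name, _ in spec]
          size = struct.calcsize(fmt)
          def decode(buf):
              return dict(zip(names, struct.unpack(fmt, buf[:size])))
          return decode

      decode = make_decoder(SPEC)
      print(decode(struct.pack(">HBf", 7, 1, 3.5)))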

  14. The MMI Semantic Framework: Rosetta Stones for Earth Sciences

    NASA Astrophysics Data System (ADS)

    Rueda, C.; Bermudez, L. E.; Graybeal, J.; Alexander, P.

    2009-12-01

    Semantic interoperability—the exchange of meaning among computer systems—is needed to successfully share data in Ocean Science and across all Earth sciences. The best approach toward semantic interoperability requires a designed framework, and operationally tested tools and infrastructure within that framework. Currently available technologies make a scientific semantic framework feasible, but its development requires sustainable architectural vision and development processes. This presentation outlines the MMI Semantic Framework, including recent progress on it and its client applications. The MMI Semantic Framework consists of tools, infrastructure, and operational and community procedures and best practices, to meet short-term and long-term semantic interoperability goals. The design and prioritization of the semantic framework capabilities are based on real-world scenarios in Earth observation systems. We describe some key use cases, as well as the associated requirements for building the overall infrastructure, which is realized through the MMI Ontology Registry and Repository. This system includes support for community creation and sharing of semantic content, ontology registration, version management, and seamless integration of user-friendly tools and application programming interfaces. The presentation describes the architectural components for semantic mediation, registry and repository for vocabularies, ontology, and term mappings. We show how the technologies and approaches in the framework can address community needs for managing and exchanging semantic information. We will demonstrate how different types of users and client applications exploit the tools and services for data aggregation, visualization, archiving, and integration. Specific examples from OOSTethys (http://www.oostethys.org) and the Ocean Observatories Initiative Cyberinfrastructure (http://www.oceanobservatories.org) will be cited. Finally, we show how semantic augmentation of web

  15. Vocabulary services to support scientific data interoperability

    NASA Astrophysics Data System (ADS)

    Cox, Simon; Mills, Katie; Tan, Florence

    2013-04-01

    Shared vocabularies are a core element in interoperable systems. Vocabularies need to be available at run-time, and where the vocabularies are shared by a distributed community this implies the use of web technology to provide vocabulary services. Given the ubiquity of vocabularies or classifiers in systems, vocabulary services are effectively the base of the interoperability stack. In contemporary knowledge organization systems, a vocabulary item is considered a concept, with the "terms" denoting it appearing as labels. The Simple Knowledge Organization System (SKOS) formalizes this as an RDF Schema (RDFS) application, with a bridge to formal logic in the Web Ontology Language (OWL). For maximum utility, a vocabulary should be made available through the following interfaces:
    * the vocabulary as a whole - at an ontology URI corresponding to a vocabulary document
    * each item in the vocabulary - at the item URI
    * summaries, subsets, and resources derived by transformation
    * the standard RDF web API - i.e., a SPARQL endpoint
    * a query form for human users.
    However, the vocabulary data model may be leveraged directly in a standard vocabulary API that uses the semantics provided by SKOS. SISSvoc3 [1] accomplishes this as a standard set of URI templates for a vocabulary. Any URI conforming to the template selects a vocabulary subset based on the SKOS properties, including labels (skos:prefLabel, skos:altLabel, rdfs:label) and a subset of the semantic relations (skos:broader, skos:narrower, etc.). SISSvoc3 thus provides a RESTful SKOS API to query a vocabulary while hiding the complexity of SPARQL. It has been implemented using the Linked Data API (LDA) [2], which connects to a SPARQL endpoint. By using LDA, we also get content negotiation, alternative views, paging, metadata and other functionality provided in a standard way. A number of vocabularies have been formalized in SKOS and deployed by CSIRO, the Australian Bureau of Meteorology (BOM) and their
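
    A minimal sketch of the underlying pattern, assuming the rdflib Python library and an invented two-concept vocabulary (not an actual CSIRO or BOM vocabulary): SKOS labels and semantic relations are queried with SPARQL, which is what a service like SISSvoc hides behind URI templates.

      # Query a tiny SKOS vocabulary with SPARQL via rdflib.
      from rdflib import Graph

      TTL = """
      @prefix skos: <http://www.w3.org/2004/02/skos/core#> .
      @prefix ex:   <http://example.org/voc/> .
      ex:rock      a skos:Concept ; skos:prefLabel "rock" .
      ex:sandstone a skos:Concept ; skos:prefLabel "sandstone" ;
                   skos:broader ex:rock .
      """

      g = Graph().parse(data=TTL, format="turtle")
      QUERY = """
      PREFIX skos: <http://www.w3.org/2004/02/skos/core#>
      SELECT ?narrow ?label WHERE {
          ?narrow skos:broader ?broad ;
                  skos:prefLabel ?label .
      }
      """
      for row in g.query(QUERY):
          print(row.narrow, row.label)  # each narrower concept and its label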

  16. Opportunities for the Mashup of Heterogeneous Data Servers via Semantic Web Technology

    NASA Astrophysics Data System (ADS)

    Ritschel, Bernd; Seelus, Christoph; Neher, Günther; Iyemori, Toshihiko; Koyama, Yukinobu; Yatagai, Akiyo; Murayama, Yasuhiro; King, Todd; Hughes, John; Fung, Shing; Galkin, Ivan; Hapgood, Michael; Belehaki, Anna

    2015-04-01

    The European Union ESPAS, Japanese IUGONET and GFZ ISDC data servers have been developed for the ingestion, archiving and distribution of geoscience and space science domain data. The main parts of the data managed by these servers are related to near-Earth space and geomagnetic field observations. A smart mashup of the data servers would allow seamless browsing of, and access to, data and related context information. However, achieving a high level of interoperability is a challenge because the servers are based on different data models and software frameworks. This paper focuses on the latest experiments and results for the mashup of the data servers using the semantic web approach. Besides the mashup of domain and terminological ontologies, the options to connect data managed by relational databases using D2R server and SPARQL technology will be addressed in particular. A successful realization of the data server mashup will not only have a positive impact on the data users of the specific scientific domain but also on related projects, such as the development of a new interoperable version of NASA's Planetary Data System (PDS) or ICSU's World Data System alliance. ESPAS data server: https://www.espas-fp7.eu/portal/ IUGONET data server: http://search.iugonet.org/iugonet/ GFZ ISDC data server (semantic web based prototype): http://rz-vm30.gfz-potsdam.de/drupal-7.9/ NASA PDS: http://pds.nasa.gov ICSU-WDS: https://www.icsu-wds.org

  17. National electronic health record interoperability chronology.

    PubMed

    Hufnagel, Stephen P

    2009-05-01

    The federal initiative for electronic health record (EHR) interoperability began in 2000 and set the stage for the establishment of the 2004 Executive Order for EHR interoperability by 2014. This article discusses the chronology from the 2001 e-Government Consolidated Health Informatics (CHI) initiative through the current congressional mandates for an aligned, interoperable, and agile DoD AHLTA and VA VistA.

  18. Interoperability for Space Mission Monitor and Control: Applying Technologies from Manufacturing Automation and Process Control Industries

    NASA Technical Reports Server (NTRS)

    Jones, Michael K.

    1998-01-01

    Various issues associated with interoperability for space mission monitor and control are presented in viewgraph form. Specific topics include: 1) Space Project Mission Operations Control Architecture (SuperMOCA) goals and methods for achieving them; 2) Specifics on the architecture: open standards and layering, enhancing interoperability, and promoting commercialization; 3) An advertisement; 4) Status of the task - government/industry cooperation and architecture and technology demonstrations; and 5) Key features of messaging services and virtual devices.

  19. A semantically enriched clinical guideline model enabling deployment in heterogeneous healthcare environments.

    PubMed

    Laleci, Gokce B; Dogac, Asuman

    2009-03-01

    Clinical guidelines are developed to assist healthcare practitioners in making decisions on patients' medical problems, and as such they communicate with external applications to retrieve patient data, to initiate medical actions through clinical workflows, and to transmit information to alert/reminder systems. The interoperability problems in the healthcare information technology domain prevent wider deployment of clinical guidelines because each deployment requires a tedious custom adaptation phase. In this paper, we provide machine-processable mechanisms that express the semantics of clinical guideline interfaces so that automated processes can be used to access the clinical resources for guideline deployment and execution. To be able to deploy the semantically extended guidelines to healthcare settings semiautomatically, the underlying application's semantics must also be available. We describe how this can be achieved based on two prominent implementation technologies in use in the eHealth domain: the Integrating the Healthcare Enterprise cross-enterprise document sharing integration profile for discovering and exchanging electronic healthcare records, and Web service technology for interacting with the clinical workflows and wireless medical sensor devices. The system described in this paper is realized within the scope of the SAPHIRE project. PMID:19171525

  20. Semantic SenseLab: implementing the vision of the Semantic Web in neuroscience

    PubMed Central

    Samwald, Matthias; Chen, Huajun; Ruttenberg, Alan; Lim, Ernest; Marenco, Luis; Miller, Perry; Shepherd, Gordon; Cheung, Kei-Hoi

    2011-01-01

    Objective: Integrative neuroscience research needs a scalable informatics framework that enables semantic integration of diverse types of neuroscience data. This paper describes the use of the Web Ontology Language (OWL) and other Semantic Web technologies for the representation and integration of molecular-level data provided by several of the SenseLab suite of neuroscience databases. Methods: Based on the original database structure, we semi-automatically translated the databases into OWL ontologies with manual addition of semantic enrichment. The SenseLab ontologies are extensively linked to other biomedical Semantic Web resources, including the Subcellular Anatomy Ontology, the Brain Architecture Management System, the Gene Ontology, BIRNLex and UniProt. The SenseLab ontologies have also been mapped to the Basic Formal Ontology and the Relation Ontology, which helps ease interoperability with many other existing and future biomedical ontologies for the Semantic Web. In addition, approaches to representing contradictory research statements are described. The SenseLab ontologies are designed for use on the Semantic Web, which enables their integration into a growing collection of biomedical information resources. Conclusion: We demonstrate that our approach can yield significant potential benefits and that the Semantic Web is rapidly becoming mature enough to realize its anticipated promises. The ontologies are available online at http://neuroweb.med.yale.edu/senselab/ PMID:20006477

  1. Using ontological inference and hierarchical matchmaking to overcome semantic heterogeneity in remote sensing-based biodiversity monitoring

    NASA Astrophysics Data System (ADS)

    Nieland, Simon; Kleinschmit, Birgit; Förster, Michael

    2015-05-01

    Ontology-based applications hold promise in improving spatial data interoperability. In this work we use remote sensing-based biodiversity information and apply semantic formalisation and ontological inference to show improvements in data interoperability/comparability. The proposed methodology includes an observation-based, "bottom-up" engineering approach for remote sensing applications and gives a practical example of semantic mediation of geospatial products. We apply the methodology to three different nomenclatures used for remote sensing-based classification of two heathland nature conservation areas in Belgium and Germany. We analysed sensor nomenclatures with respect to their semantic formalisation and their bio-geographical differences. The results indicate that a hierarchical and transparent nomenclature is far more important for transferability than the sensor or study area. The inclusion of additional information, not necessarily belonging to a vegetation class description, is a key factor for the future success of using semantics for interoperability in remote sensing.

  2. 23 CFR 950.7 - Interoperability requirements.

    Code of Federal Regulations, 2013 CFR

    2013-04-01

    Title 23 (Highways), Federal Highway Administration, Department of Transportation; Intelligent Transportation Systems; Electronic Toll Collection; § 950.7 Interoperability requirements. (a) For any toll facility operating pursuant to authority under a 1604 toll...

  3. Interoperation Modeling for Intelligent Domotic Environments

    NASA Astrophysics Data System (ADS)

    Bonino, Dario; Corno, Fulvio

    This paper introduces an ontology-based model for domotic device interoperation. Starting from a previously published ontology (DogOnt), a refactoring and extension is described that allows device capabilities, states and commands to be explicitly represented, and that supports abstract modeling of device interoperation.

  4. Knowledge-oriented semantics modelling towards uncertainty reasoning.

    PubMed

    Mohammed, Abdul-Wahid; Xu, Yang; Liu, Ming

    2016-01-01

    Distributed reasoning in M2M leverages the expressive power of ontology to enable semantic interoperability between heterogeneous systems of connected devices. Ontology, however, lacks the built-in, principled support to effectively handle the uncertainty inherent in M2M application domains. Thus, efficient reasoning can be achieved by integrating the inferential reasoning power of probabilistic representations with the first-order expressiveness of ontology. But there remains a gap with current probabilistic ontologies, since the state of the art provides no compatible representation for the simultaneous handling of discrete and continuous quantities in ontology. This requirement is paramount, especially in smart homes, where continuous quantities cannot be avoided, and simply mapping continuous information to discrete states through quantization can cause a great deal of information loss. In this paper, we propose a hybrid probabilistic ontology that can simultaneously handle distributions over discrete and continuous quantities in ontology. We call this new framework HyProb-Ontology, and it specifies distributions over properties of classes, which serve as templates that instances of the classes inherit and may partially overwrite. Since the dependency topology of the models that HyProb-Ontology can induce across different domains cannot be restricted, we can achieve a unified Ground Hybrid Probabilistic Model by conditional Gaussian fuzzification of the distributions of the continuous variables in the ontology. From the results of our experiments, this unified model can achieve exact inference with better performance than classical Bayesian networks.
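
    The flavour of the fuzzification step can be conveyed with a small Python sketch: a continuous reading is mapped to soft memberships over discrete states using Gaussian membership functions rather than lossy hard quantization. The states, centres and widths here are invented; this is not the paper's exact formulation.

      # Gaussian "fuzzification": soft evidence over discrete states.
      import math

      # state -> (centre, sigma); illustrative values only
      STATES = {"cold": (15.0, 3.0), "warm": (22.0, 3.0), "hot": (29.0, 3.0)}

      def fuzzify(x):
          w = {s: math.exp(-(x - mu) ** 2 / (2 * sig ** 2))
               for s, (mu, sig) in STATES.items()}
          total = sum(w.values())
          return {s: v / total for s, v in w.items()}  # normalised memberships

      print(fuzzify(24.0))  # mostly "warm", partly "hot": no information cliff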

  5. Knowledge-oriented semantics modelling towards uncertainty reasoning.

    PubMed

    Mohammed, Abdul-Wahid; Xu, Yang; Liu, Ming

    2016-01-01

    Distributed reasoning in M2M leverages the expressive power of ontology to enable semantic interoperability between heterogeneous systems of connected devices. Ontology, however, lacks the built-in, principled support to effectively handle the uncertainty inherent in M2M application domains. Thus, efficient reasoning can be achieved by integrating the inferential reasoning power of probabilistic representations with the first-order expressiveness of ontology. But there remains a gap with current probabilistic ontologies, since the state of the art provides no compatible representation for the simultaneous handling of discrete and continuous quantities in ontology. This requirement is paramount, especially in smart homes, where continuous quantities cannot be avoided, and simply mapping continuous information to discrete states through quantization can cause a great deal of information loss. In this paper, we propose a hybrid probabilistic ontology that can simultaneously handle distributions over discrete and continuous quantities in ontology. We call this new framework HyProb-Ontology, and it specifies distributions over properties of classes, which serve as templates that instances of the classes inherit and may partially overwrite. Since the dependency topology of the models that HyProb-Ontology can induce across different domains cannot be restricted, we can achieve a unified Ground Hybrid Probabilistic Model by conditional Gaussian fuzzification of the distributions of the continuous variables in the ontology. From the results of our experiments, this unified model can achieve exact inference with better performance than classical Bayesian networks. PMID:27350935

  6. HTML5 microdata as a semantic container for medical information exchange.

    PubMed

    Kimura, Eizen; Kobayashi, Shinji; Ishihara, Ken

    2014-01-01

    Achieving interoperability between clinical electronic medical records (EMR) systems and cloud computing systems is challenging because of the lack of a universal reference method as a standard for information exchange with a secure connection. Here we describe an information exchange scheme using HTML5 microdata, where the standard semantic container is an HTML document. We embed HL7 messages describing laboratory test results in the microdata. We also annotate items in the clinical research report with the microdata. We mapped the laboratory test result data into the clinical research report using an HL7 selector specified in the microdata. This scheme can provide secure cooperation between the cloud-based service and the EMR system.
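
    A minimal Python sketch of such a semantic container, with an invented itemtype and property names (the paper's actual vocabulary and the embedded HL7 payload are not reproduced here):

      # Emit an HTML5 microdata container for one laboratory result.
      from html import escape

      def lab_result_item(code, value, unit):
          # itemtype URL and itemprop names are hypothetical
          return (
              '<div itemscope itemtype="http://example.org/LabResult">\n'
              f'  <span itemprop="code">{escape(code)}</span>\n'
              f'  <span itemprop="value">{escape(value)}</span>\n'
              f'  <span itemprop="unit">{escape(unit)}</span>\n'
              '</div>'
          )

      print(lab_result_item("GLU", "5.4", "mmol/L"))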

  7. HTML5 microdata as a semantic container for medical information exchange.

    PubMed

    Kimura, Eizen; Kobayashi, Shinji; Ishihara, Ken

    2014-01-01

    Achieving interoperability between clinical electronic medical records (EMR) systems and cloud computing systems is challenging because of the lack of a universal reference method as a standard for information exchange with a secure connection. Here we describe an information exchange scheme using HTML5 microdata, where the standard semantic container is an HTML document. We embed HL7 messages describing laboratory test results in the microdata. We also annotate items in the clinical research report with the microdata. We mapped the laboratory test result data into the clinical research report using an HL7 selector specified in the microdata. This scheme can provide secure cooperation between the cloud-based service and the EMR system. PMID:25160218

  8. Generative Semantics

    ERIC Educational Resources Information Center

    Bagha, Karim Nazari

    2011-01-01

    Generative semantics is (or perhaps was) a research program within linguistics, initiated by the work of George Lakoff, John R. Ross, Paul Postal and later McCawley. The approach developed out of transformational generative grammar in the mid 1960s, but stood largely in opposition to work by Noam Chomsky and his students. The nature and genesis of…

  9. Political, policy and social barriers to health system interoperability: emerging opportunities of Web 2.0 and 3.0.

    PubMed

    Juzwishin, Donald W M

    2009-01-01

    Achieving effective health informatics interoperability in a fragmented and uncoordinated health system is by definition not possible. Interoperability requires the simultaneous integration of health care processes and information across different types and levels of care (systems thinking). The fundamental argument of this paper is that information system interoperability will remain an unfulfilled hope until health reforms effectively address the governance (accountability), structural and process barriers to interoperability of health care delivery. The ascendancy of Web 2.0 and 3.0, although still unproven, signals the opportunity to accelerate patients' access to health information and their health record. Policy suggestions for simultaneously advancing health system delivery and information system interoperability are posited. PMID:20166516

  10. Interoperable Solar Data and Metadata via LISIRD 3

    NASA Astrophysics Data System (ADS)

    Wilson, A.; Lindholm, D. M.; Pankratz, C. K.; Snow, M. A.; Woods, T. N.

    2015-12-01

    LISIRD 3 is a major upgrade of the LASP Interactive Solar Irradiance Data Center (LISIRD), which serves several dozen space-based solar irradiance and related data products to the public. Through interactive plots, LISIRD 3 provides data browsing supported by data subsetting and aggregation. Because LISIRD 3 incorporates a semantically enabled metadata repository, users see current, vetted, consistent information about the datasets offered. Users can now also search for datasets based on metadata fields such as dataset type and/or spectral or temporal range. This semantic database enables metadata browsing, so users can discover the relationships between datasets, instruments, spacecraft, missions and PIs. The database also enables creation and publication of metadata records in a variety of formats, such as SPASE or ISO, making these datasets more discoverable. The database also enables the possibility of a public SPARQL endpoint, making the metadata browsable in an automated fashion. LISIRD 3's data access middleware, LaTiS, provides dynamic, on-demand reformatting of data and timestamps, subsetting and aggregation, and other server-side functionality via a RESTful OPeNDAP-compliant API, enabling interoperability between LASP datasets and many common tools. LISIRD 3's templated front-end design, coupled with the uniform data interface offered by LaTiS, allows easy integration of new datasets. Consequently, the number and variety of datasets offered by LISIRD has grown to encompass several dozen, with many more to come. This poster will discuss the design and implementation of LISIRD 3, including tools used, capabilities enabled, and issues encountered.
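
    The server-side subsetting style can be illustrated with a short Python sketch of an OPeNDAP-style constraint expression against a LaTiS-like endpoint; the base URL, dataset and parameter names here are assumptions, not actual LISIRD identifiers.

      # Build an OPeNDAP-style subsetting URL for a LaTiS-like service.
      from urllib.parse import quote

      BASE = "https://example.edu/latis/dap"  # hypothetical LaTiS endpoint
      DATASET = "example_irradiance.csv"      # hypothetical dataset, CSV output
      SELECTION = "time,irradiance&time>=2010-01-01&time<2010-02-01"

      url = f"{BASE}/{DATASET}?{quote(SELECTION, safe='=&><,')}"
      print(url)  # only the requested slice is computed and returned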

  11. Before you make the data interoperable you have to make the people interoperable

    NASA Astrophysics Data System (ADS)

    Jackson, I.

    2008-12-01

    In February 2006 a deceptively simple concept was put forward. Could we use the International Year of Planet Earth 2008 as a stimulus to begin the creation of a digital geological map of the planet at a target scale of 1:1 million? Could we design and initiate a project that uniquely mobilises geological surveys around the world to act as the drivers and sustainable data providers of this global dataset? Further, could we synergistically use this geoscientist-friendly vehicle of creating a tangible geological map to accelerate progress of an emerging global geoscience data model and interchange standard? Finally, could we use the project to transfer know-how to developing countries and reduce the length and expense of their learning curve, while at the same time producing geoscience maps and data that could attract interest and investment? These aspirations, plus the chance to generate a global digital geological dataset to assist in the understanding of global environmental problems and the opportunity to raise the profile of geoscience as part of IYPE, seemed more than enough reasons to take the proposal to the next stage. In March 2007, in Brighton, UK, 81 delegates from 43 countries gathered together to consider the creation of this global interoperable geological map dataset. The participants unanimously agreed the Brighton "Accord" and kicked off "OneGeology", an initiative that now has the support of more than 85 nations. Brighton was never designed to be a scientific or technical meeting: it was overtly about people and their interaction - would these delegates, with their diverse cultural and technical backgrounds, be prepared to work together to achieve something which, while technically challenging, was not complex in the context of leading-edge geoscience informatics? Could we scale up what is a simple informatics model at national level, to deliver global coverage and access? The major challenges for OneGeology (and the deployment of interoperability

  12. Report on the Second Catalog Interoperability Workshop

    NASA Technical Reports Server (NTRS)

    Thieman, James R.; James, Mary E.

    1988-01-01

    The events, resolutions, and recommendations of the Second Catalog Interoperability Workshop, held at JPL in January 1988, are discussed. This workshop dealt with the issues of standardization and communication among directories, catalogs, and inventories in the earth and space science data management environment. The Directory Interchange Format, being constructed as a standard for the exchange of directory information among participating data systems, is discussed. Involvement in the interoperability effort by NASA, NOAA, USGS, and NSF is described, and plans for future interoperability are considered. The NASA Master Directory prototype is presented and critiqued, and options for additional capabilities are debated.

  13. Provenance in Data Interoperability for Multi-Sensor Intercomparison

    NASA Technical Reports Server (NTRS)

    Lynnes, Chris; Leptoukh, Greg; Berrick, Steve; Shen, Suhung; Prados, Ana; Fox, Peter; Yang, Wenli; Min, Min; Holloway, Dan; Enloe, Yonsook

    2008-01-01

    As our inventory of Earth science data sets grows, the ability to compare, merge and fuse multiple datasets grows in importance. This requires a deeper data interoperability than we have now. Efforts such as the Open Geospatial Consortium and OPeNDAP (Open-source Project for a Network Data Access Protocol) have broken down format barriers to interoperability; the next challenge is the semantic aspects of the data. Consider the issues when satellite data are merged, cross-calibrated, validated, inter-compared and fused. We must match up data sets that are related, yet different in significant ways: the phenomenon being measured, measurement technique, location in space-time or quality of the measurements. If subtle distinctions between similar measurements are not clear to the user, results can be meaningless or lead to an incorrect interpretation of the data. Most of these distinctions trace to how the data came to be: sensors, processing and quality assessment. For example, monthly averages of satellite-based aerosol measurements often show significant discrepancies, which might be due to differences in spatio-temporal aggregation, sampling issues, sensor biases, algorithm differences or calibration issues. Provenance information must be captured in a semantic framework that allows data inter-use tools to incorporate it and aid in the interpretation of comparison or merged products. Semantic web technology allows us to encode our knowledge of measurement characteristics, phenomena measured, space-time representation, and data quality attributes in a well-structured, machine-readable ontology and rulesets. An analysis tool can use this knowledge to show users the provenance-related distinctions between two variables, advising on options for further data processing and analysis. An additional problem for workflows distributed across heterogeneous systems is retrieval and transport of provenance. Provenance may be either embedded within the data payload, or transmitted

  14. Provenance in Data Interoperability for Multi-Sensor Intercomparison

    NASA Astrophysics Data System (ADS)

    Lynnes, C.; Leptoukh, G.; Berrick, S.; Shen, S.; Prados, A.; Fox, P.; Yang, W.; Min, M.; Holloway, D.; Enloe, Y.

    2008-12-01

    As our inventory of Earth science data sets grows, the ability to compare, merge and fuse multiple datasets grows in importance. This implies a need for deeper data interoperability than we have now. Many efforts (e.g. OPeNDAP, Open Geospatial Consortium) have broken down format barriers to interoperability; the next challenge is the semantic aspects of the data. Consider the issues when satellite data are merged, cross-calibrated, validated, inter-compared and fused. We must determine how to match up data sets that are related, yet different in significant ways: the exact nature of the phenomenon being measured, measurement technique, exact location in space-time, or the quality of the measurements. If subtle distinctions between similar measurements are not clear to the user, the results can be meaningless or even lead to an incorrect interpretation of the data. Most of these distinctions trace back to how the data came to be: sensors, processing, and quality assessment. For example, monthly averages of satellite-based aerosol measurements often show significant discrepancies, which might be due to differences in spatio-temporal aggregation, sampling issues, sensor biases, algorithm differences and/or calibration issues. This provenance information must therefore be captured in a semantic framework that allows sophisticated data inter-use tools to incorporate it, and eventually aid in the interpretation of comparison or merged products. Semantic web technology allows us to encode our knowledge of measurement characteristics, phenomena measured, space-time representations, and data quality representation in a well-structured, machine-readable ontology and rulesets. An analysis tool can use this knowledge to show users the provenance-related distinctions between two variables, advising on options for further data processing and analysis. An additional problem for workflows distributed across heterogeneous systems is retrieval and transport of provenance

  15. Integrated semantics service platform for the Internet of Things: a case study of a smart office.

    PubMed

    Ryu, Minwoo; Kim, Jaeho; Yun, Jaeseok

    2015-01-19

    The Internet of Things (IoT) allows machines and devices in the world to connect with each other and generate a huge amount of data, which has a great potential to provide useful knowledge across service domains. Combining the context of IoT with semantic technologies, we can build integrated semantic systems to support semantic interoperability. In this paper, we propose an integrated semantic service platform (ISSP) to support ontological models in various IoT-based service domains of a smart city. In particular, we address three main problems for providing integrated semantic services together with IoT systems: semantic discovery, dynamic semantic representation, and semantic data repository for IoT resources. To show the feasibility of the ISSP, we develop a prototype service for a smart office using the ISSP, which can provide a preset, personalized office environment by interpreting user text input via a smartphone. We also discuss a scenario to show how the ISSP-based method would help build a smart city, where services in each service domain can discover and exploit IoT resources that are wanted across domains. We expect that our method could eventually contribute to providing people in a smart city with more integrated, comprehensive services based on semantic interoperability.
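
    As an illustrative sketch of semantic discovery over an RDF repository of IoT resources, assuming the rdflib Python library and an invented namespace and device descriptions (this is not the ISSP implementation):

      # Register IoT resources as RDF and discover them by capability.
      from rdflib import Graph, Literal, Namespace, RDF

      EX = Namespace("http://example.org/iot#")  # hypothetical namespace
      g = Graph()
      g.add((EX.lamp1,   RDF.type, EX.Actuator))
      g.add((EX.lamp1,   EX.controls, Literal("lighting")))
      g.add((EX.sensor1, RDF.type, EX.Sensor))
      g.add((EX.sensor1, EX.observes, Literal("temperature")))

      # Semantic discovery: every actuator that controls lighting.
      for device in g.subjects(RDF.type, EX.Actuator):
          if (device, EX.controls, Literal("lighting")) in g:
              print("discovered:", device)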

  16. Integrated semantics service platform for the Internet of Things: a case study of a smart office.

    PubMed

    Ryu, Minwoo; Kim, Jaeho; Yun, Jaeseok

    2015-01-01

    The Internet of Things (IoT) allows machines and devices in the world to connect with each other and generate a huge amount of data, which has a great potential to provide useful knowledge across service domains. Combining the context of IoT with semantic technologies, we can build integrated semantic systems to support semantic interoperability. In this paper, we propose an integrated semantic service platform (ISSP) to support ontological models in various IoT-based service domains of a smart city. In particular, we address three main problems for providing integrated semantic services together with IoT systems: semantic discovery, dynamic semantic representation, and semantic data repository for IoT resources. To show the feasibility of the ISSP, we develop a prototype service for a smart office using the ISSP, which can provide a preset, personalized office environment by interpreting user text input via a smartphone. We also discuss a scenario to show how the ISSP-based method would help build a smart city, where services in each service domain can discover and exploit IoT resources that are wanted across domains. We expect that our method could eventually contribute to providing people in a smart city with more integrated, comprehensive services based on semantic interoperability. PMID:25608216

  17. The DebugIT core ontology: semantic integration of antibiotics resistance patterns.

    PubMed

    Schober, Daniel; Boeker, Martin; Bullenkamp, Jessica; Huszka, Csaba; Depraetere, Kristof; Teodoro, Douglas; Nadah, Nadia; Choquet, Remy; Daniel, Christel; Schulz, Stefan

    2010-01-01

    Antibiotics resistance development poses a significant problem in today's hospital care. Massive amounts of clinical data are being collected and stored in proprietary and unconnected systems in heterogeneous formats. The DebugIT EU project promises to make this data geographically and semantically interoperable for case-based knowledge analysis approaches aiming at the discovery of patterns that help to align antibiotics treatment schemes. The semantic glue for this endeavor is DCO, an application ontology that enables data miners to query distributed clinical information systems in a semantically rich and content-driven manner. DCO will hence serve as the core component of the interoperability platform for the DebugIT project. Here we present DCO and an approach that uses the semantic web query language SPARQL to bind and ontologically query hospital database content using DCO and information model mediators. We provide a query example that indicates that ontological querying over heterogeneous information models is feasible via SPARQL construct and resource-mapping queries.
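
    The mediation pattern of a SPARQL construct query can be sketched as follows, assuming the rdflib Python library and invented namespaces (DCO's real vocabulary is not reproduced here): local hospital triples are rewritten into a shared core-ontology view.

      # Map a local schema onto a shared core ontology with CONSTRUCT.
      from rdflib import Graph

      DATA = """
      @prefix his: <http://example.org/hospital#> .
      his:isolate42 his:organismName "E. coli" ;
                    his:resistantTo  "ciprofloxacin" .
      """

      MAPPING = """
      PREFIX his:  <http://example.org/hospital#>
      PREFIX core: <http://example.org/core#>
      CONSTRUCT {
          ?iso core:identifiedOrganism ?org ;
               core:hasResistance      ?ab .
      } WHERE {
          ?iso his:organismName ?org ;
               his:resistantTo  ?ab .
      }
      """

      local = Graph().parse(data=DATA, format="turtle")
      core_view = local.query(MAPPING).graph  # mediated ontology-level view
      print(core_view.serialize(format="turtle"))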

  18. Application-Level Interoperability Across Grids and Clouds

    NASA Astrophysics Data System (ADS)

    Jha, Shantenu; Luckow, Andre; Merzky, Andre; Erdely, Miklos; Sehgal, Saurabh

    Application-level interoperability is defined as the ability of an application to utilize multiple distributed heterogeneous resources. Such interoperability is becoming increasingly important with increasing volumes of data, multiple sources of data as well as resource types. The primary aim of this chapter is to understand different ways in which application-level interoperability can be provided across distributed infrastructure. We achieve this by (i) using the canonical wordcount application, based on an enhanced version of MapReduce that scales out across clusters, clouds, and HPC resources, (ii) establishing how SAGA enables the execution of the wordcount application using MapReduce and other programming models such as Sphere concurrently, and (iii) demonstrating the scale-out of ensemble-based biomolecular simulations across multiple resources. We show user-level control of the relative placement of compute and data and also provide simple performance measures and analysis of SAGA-MapReduce when using multiple, different, heterogeneous infrastructures concurrently for the same problem instance. Finally, we discuss Azure and some of the system-level abstractions that it provides and show how it is used to support ensemble-based biomolecular simulations.
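
    The wordcount pattern itself is compact enough to sketch in plain Python (local execution only; the SAGA-based distribution across clusters, clouds and HPC resources described in the chapter is out of scope here):

      # Canonical wordcount as explicit map and reduce phases.
      from collections import defaultdict
      from itertools import chain

      def map_phase(chunk):
          # emit one (word, 1) pair per token in this chunk
          return [(word, 1) for word in chunk.split()]

      def reduce_phase(pairs):
          # sum the counts for each distinct word
          counts = defaultdict(int)
          for word, n in pairs:
              counts[word] += n
          return dict(counts)

      chunks = ["to be or not to be", "be all you can be"]  # one per worker
      pairs = chain.from_iterable(map_phase(c) for c in chunks)
      print(reduce_phase(pairs))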

  19. Live Social Semantics

    NASA Astrophysics Data System (ADS)

    Alani, Harith; Szomszor, Martin; Cattuto, Ciro; van den Broeck, Wouter; Correndo, Gianluca; Barrat, Alain

    Social interactions are one of the key factors to the success of conferences and similar community gatherings. This paper describes a novel application that integrates data from the semantic web, online social networks, and a real-world contact sensing platform. This application was successfully deployed at ESWC09, and actively used by 139 people. Personal profiles of the participants were automatically generated using several Web 2.0 systems and semantic academic data sources, and integrated in real-time with face-to-face contact networks derived from wearable sensors. Integration of all these heterogeneous data layers made it possible to offer various services to conference attendees to enhance their social experience such as visualisation of contact data, and a site to explore and connect with other participants. This paper describes the architecture of the application, the services we provided, and the results we achieved in this deployment.

  20. River Basin Standards Interoperability Pilot

    NASA Astrophysics Data System (ADS)

    Pesquer, Lluís; Masó, Joan; Stasch, Christoph

    2016-04-01

    There are a lot of water information and tools in Europe applicable to river basin management, but fragmentation and a lack of coordination between countries still exist. The European Commission and the member states have financed several research and innovation projects in support of the Water Framework Directive. Only a few of them are using the recently emerging hydrological standards, such as OGC WaterML 2.0. WaterInnEU is a Horizon 2020 project focused on creating a marketplace to enhance the exploitation of EU-funded ICT models, tools, protocols and policy briefs related to water, and to establish suitable conditions for new market opportunities based on these offerings. One of WaterInnEU's main goals is to assess the level of standardization and interoperability of these outcomes as a mechanism to integrate ICT-based tools, incorporate open data platforms and generate a palette of interchangeable components that are able to use the water data emerging from the recently proposed open data sharing processes and data models stimulated by initiatives such as the INSPIRE directive. As part of the standardization and interoperability activities in the project, the authors are designing an experiment (RIBASE, the present work) to demonstrate how current ICT-based tools and water data can work in combination with geospatial web services in the Scheldt river basin. The main structure of this experiment, which is the core of the present work, is composed of the following steps:
    - Extraction of information from river gauge data in OGC WaterML 2.0 format using SOS services (preferably compliant to the OGC SOS 2.0 Hydrology Profile Best Practice); a request sketch follows below.
    - Modelling of floods using a WPS 2.0, with WaterML 2.0 data and weather forecast models as input.
    - Evaluation of the applicability of Sensor Notification Services in water emergencies.
    - Open distribution of the input and output data as OGC web services (WaterML / WCS / WFS) and with visualization utilities (WMS).
    The architecture
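
    To illustrate the first step in the list above, the following Python sketch builds an SOS 2.0 KVP GetObservation request asking for WaterML 2.0 output; the endpoint, offering and property identifiers are placeholders, not actual Scheldt basin services.

      # Build an SOS 2.0 GetObservation request for WaterML 2.0 output.
      from urllib.parse import urlencode

      BASE = "https://example.eu/sos"  # hypothetical SOS endpoint
      params = {
          "service": "SOS",
          "version": "2.0.0",
          "request": "GetObservation",
          "offering": "gauge_station_1",     # assumed offering id
          "observedProperty": "waterlevel",  # assumed property id
          "temporalFilter": "om:phenomenonTime,2016-01-01/2016-01-08",
          "responseFormat": "http://www.opengis.net/waterml/2.0",
      }
      print(BASE + "?" + urlencode(params))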

  1. Towards Model Driven Tool Interoperability: Bridging Eclipse and Microsoft Modeling Tools

    NASA Astrophysics Data System (ADS)

    Brunelière, Hugo; Cabot, Jordi; Clasen, Cauê; Jouault, Frédéric; Bézivin, Jean

    Successful application of model-driven engineering approaches requires interchanging a lot of relevant data among the tool ecosystem employed by an engineering team (e.g., requirements elicitation tools, several kinds of modeling tools, reverse engineering tools, development platforms and so on). Unfortunately, this is not a trivial task. Poor tool interoperability makes data interchange a challenge even among tools with a similar scope. This paper presents a model-based solution to overcome such interoperability issues. With our approach, the internal schema(s) (i.e., metamodel(s)) of each tool are made explicit and used as the basis for resolving syntactic and semantic differences between the tools. Once the corresponding metamodels are aligned, model-to-model transformations are (semi)automatically derived and executed to perform the actual data interchange. We illustrate our approach by bridging the Eclipse and Microsoft (DSL Tools and SQL Server Modeling) modeling tools.
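
    A toy Python sketch of the idea, with invented metamodel elements (this is not the paper's Eclipse/Microsoft bridge): an explicit alignment between two metamodels drives a mechanical model-to-model transformation.

      # Metamodel alignment table driving a model-to-model transformation.
      ALIGNMENT = {             # tool A (concept, field) -> tool B (concept, field)
          ("Entity", "name"):   ("Class", "identifier"),
          ("Entity", "fields"): ("Class", "attributes"),
      }

      def transform(element, kind):
          """Rewrite one tool A model element as its tool B equivalent."""
          out_kind, out = None, {}
          for (k_a, f_a), (k_b, f_b) in ALIGNMENT.items():
              if k_a == kind and f_a in element:
                  out[f_b] = element[f_a]
                  out_kind = k_b
          return out_kind, out

      print(transform({"name": "Invoice", "fields": ["id", "total"]}, "Entity"))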

  2. Diabetes Device Interoperability for Improved Diabetes Management

    PubMed Central

    Silk, Alain D.

    2015-01-01

    Scientific and technological advancements have led to the increasing availability and use of sophisticated devices for diabetes management, with corresponding improvements in public health. These devices are often capable of sharing data with a few other specific devices but are generally not broadly interoperable; they cannot work together with a wide variety of other devices. As a result of limited interoperability, benefits of modern diabetes devices and potential for development of innovative new diabetes technologies are not being fully realized. Here we discuss diabetes device interoperability in general, then focus on 4 examples that show how diabetes management could benefit from enhanced interoperability: remote monitoring and data sharing, integrating data from multiple devices to better inform diabetes management strategies, device consolidation, and artificial pancreas development. PMID:26178738

  3. Reminiscing about 15 years of interoperability efforts

    DOE PAGES

    Van de Sompel, Herbert; Nelson, Michael L.

    2015-11-01

    Over the past fifteen years, our perspective on tackling information interoperability problems for web-based scholarship has evolved significantly. In this opinion piece, we look back at three efforts that we have been involved in that aptly illustrate this evolution: OAI-PMH, OAI-ORE, and Memento. Understanding that no interoperability specification is neutral, we attempt to characterize the perspectives and technical toolkits that provided the basis for these endeavors. In that regard, we consider repository-centric and web-centric interoperability perspectives, and the use of a Linked Data or a REST/HATEOAS technology stack, respectively. In addition, we lament the lack of interoperability across nodes that play a role in web-based scholarship, but end on a constructive note with some ideas regarding a possible path forward.

  4. Reminiscing about 15 years of interoperability efforts

    SciTech Connect

    Van de Sompel, Herbert; Nelson, Michael L.

    2015-11-01

    Over the past fifteen years, our perspective on tackling information interoperability problems for web-based scholarship has evolved significantly. In this opinion piece, we look back at three efforts that we have been involved in that aptly illustrate this evolution: OAI-PMH, OAI-ORE, and Memento. Understanding that no interoperability specification is neutral, we attempt to characterize the perspectives and technical toolkits that provided the basis for these endeavors. In that regard, we consider repository-centric and web-centric interoperability perspectives, and the use of a Linked Data or a REST/HATEOAS technology stack, respectively. In addition, we lament the lack of interoperability across nodes that play a role in web-based scholarship, but end on a constructive note with some ideas regarding a possible path forward.

  5. A Programme for Semantics; Semantics and Its Critics; Semantics Shamantics.

    ERIC Educational Resources Information Center

    Goldstein, Laurence; Harris, Roy

    1990-01-01

    In a statement-response-reply format, a proposition concerning the study of semantics is made and debated in three papers by two authors. In the first paper, it is proposed that semantics is not the study of the concept of meaning, but rather a neurolinguistic issue, despite the fact that semantics is linked to context. It is argued that semantic…

  6. Scalability and interoperability within glideinWMS

    NASA Astrophysics Data System (ADS)

    Bradley, D.; Sfiligoi, I.; Padhi, S.; Frey, J.; Tannenbaum, T.

    2010-04-01

    Physicists have access to thousands of CPUs in grid federations such as OSG and EGEE. With the start-up of the LHC, it is essential for individuals or groups of users to wrap together available resources from multiple sites across multiple grids under a higher user-controlled layer in order to provide a homogeneous pool of available resources. One such system is glideinWMS, which is based on the Condor batch system. A general discussion of glideinWMS can be found elsewhere. Here, we focus on recent advances in extending its reach: scalability and integration of heterogeneous compute elements. We demonstrate that the new developments exceed the design goal of over 10,000 simultaneous running jobs under a single Condor schedd, using strong security protocols across global networks, and sustaining a steady-state job completion rate of a few Hz. We also show interoperability across heterogeneous computing elements achieved using client-side methods. We discuss this technique and the challenges in direct access to NorduGrid and CREAM compute elements, in addition to Globus based systems.
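    The late-binding pattern described above (many user jobs queued on one schedd, matched to glidein pilots that join the pool from grid sites) can be sketched from the user side with the HTCondor Python bindings; the job attributes and the glidein ClassAd attribute below are illustrative assumptions, and pool/site setup is taken as given:

```python
# User-side sketch of the glideinWMS late-binding idea: jobs are queued
# to a single Condor schedd and pulled by glidein pilots started at grid
# sites. Pool/site setup is assumed; attribute names are illustrative.
import htcondor

schedd = htcondor.Schedd()  # the single schedd whose limits the paper tests

job = htcondor.Submit({
    "executable": "analyze.sh",           # hypothetical physics job
    "arguments": "run_$(ProcId)",
    "output": "out.$(ProcId)",
    "error": "err.$(ProcId)",
    "request_memory": "2GB",
    # Match only slots brought into the pool by glidein pilots (illustrative):
    "requirements": "IS_GLIDEIN =?= True",
})

result = schedd.submit(job, count=1000)   # many jobs, one schedd
print("Submitted cluster", result.cluster())
```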

  7. Scalability and interoperability within glideinWMS

    SciTech Connect

    Bradley, D.; Sfiligoi, I.; Padhi, S.; Frey, J.; Tannenbaum, T.; /Wisconsin U., Madison

    2010-01-01

    Physicists have access to thousands of CPUs in grid federations such as OSG and EGEE. With the start-up of the LHC, it is essential for individuals or groups of users to wrap together available resources from multiple sites across multiple grids under a higher user-controlled layer in order to provide a homogeneous pool of available resources. One such system is glideinWMS, which is based on the Condor batch system. A general discussion of glideinWMS can be found elsewhere. Here, we focus on recent advances in extending its reach: scalability and integration of heterogeneous compute elements. We demonstrate that the new developments exceed the design goal of over 10,000 simultaneous running jobs under a single Condor schedd, using strong security protocols across global networks, and sustaining a steady-state job completion rate of a few Hz. We also show interoperability across heterogeneous computing elements achieved using client-side methods. We discuss this technique and the challenges in direct access to NorduGrid and CREAM compute elements, in addition to Globus based systems.

  8. Forcing Interoperability: An Intentionally Fractured Approach

    NASA Astrophysics Data System (ADS)

    Gallaher, D. W.; Brodzik, M.; Scambos, T.; Stroeve, J.

    2008-12-01

    The NSIDC is attempting to rebuild a significant portion of its public-facing cyberinfrastructure to better meet the needs expressed by the cryospheric community. The project initially addresses a specific science need - understanding Greenland's contribution to global sea level rise through comparison and analysis of variables such as temperature, albedo, melt, ice velocity and surface elevation. This project will ultimately be expanded to cover most of NSIDC's cryospheric data. Like many organizations, we need to provide users with data discovery interfaces, collaboration tools and mapping services. Complicating this effort is the need to reduce the volume of raw data delivered to the user. Data growth, especially with time-series data, will overwhelm our software, processors and network like never before. We need to provide users the ability to perform first-level analysis directly on our site. In order to accomplish this, users should be free to modify the behavior of these tools as well as incorporate their own tools and analysis to meet their needs. Rather than building one monolithic system, we have chosen to build three semi-independent systems. One team is building a data discovery and web-based distribution system, the second is building an advanced analysis and workflow system, and the third is building a customized web mapping service. These systems will use the same underlying data structures and services but will employ different technologies and teams, each with its own objectives, schedules and user interfaces. Obviously, we are adding complexity and risk to the overall project; however, this may be the best method to achieve interoperability, because the development teams will be required to build off each other's work. The teams will be forced to design with other users in mind, as opposed to building interoperability in as an afterthought, which is a tendency in monolithic systems. All three teams will take advantage of preexisting

  9. GEOSS interoperability for Weather, Ocean and Water

    NASA Astrophysics Data System (ADS)

    Richardson, David; Nyenhuis, Michael; Zsoter, Ervin; Pappenberger, Florian

    2013-04-01

    "Understanding the Earth system — its weather, climate, oceans, atmosphere, water, land, geodynamics, natural resources, ecosystems, and natural and human-induced hazards — is crucial to enhancing human health, safety and welfare, alleviating human suffering including poverty, protecting the global environment, reducing disaster losses, and achieving sustainable development. Observations of the Earth system constitute critical input for advancing this understanding." With this in mind, the Group on Earth Observations (GEO) started implementing the Global Earth Observation System of Systems (GEOSS). GEOWOW, short for "GEOSS interoperability for Weather, Ocean and Water", is supporting this objective. GEOWOW's main challenge is to improve Earth observation data discovery, accessibility and exploitability, and to evolve GEOSS in terms of interoperability, standardization and functionality. One of the main goals behind the GEOWOW project is to demonstrate the value of the TIGGE archive in interdisciplinary applications, providing a vast amount of useful and easily accessible information to the users through the GEO Common Infrastructure (GCI). GEOWOW aims at developing funcionalities that will allow easy discovery, access and use of TIGGE archive data and of in-situ observations, e.g. from the Global Runoff Data Centre (GRDC), to support applications such as river discharge forecasting.TIGGE (THORPEX Interactive Grand Global Ensemble) is a key component of THORPEX: a World Weather Research Programme to accelerate the improvements in the accuracy of 1-day to 2 week high-impact weather forecasts for the benefit of humanity. The TIGGE archive consists of ensemble weather forecast data from ten global NWP centres, starting from October 2006, which has been made available for scientific research. The TIGGE archive has been used to analyse hydro-meteorological forecasts of flooding in Europe as well as in China. In general the analysis has been favourable in terms of

  10. Semantic Clustering of Search Engine Results.

    PubMed

    Soliman, Sara Saad; El-Sayed, Maged F; Hassan, Yasser F

    2015-01-01

    This paper presents a novel approach for search engine results clustering that relies on the semantics of the retrieved documents rather than the terms in those documents. The proposed approach takes into consideration both lexical and semantic similarities among documents and applies a spreading activation technique in order to generate semantically meaningful clusters. This approach allows documents that are semantically similar to be clustered together, rather than clustering documents based on similar terms. A prototype is implemented and several experiments are conducted to test the proposed solution. The results of these experiments confirm that the proposed solution achieves remarkable results in terms of precision.
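    A toy sketch of the spreading activation step, assuming a semantic graph has already been built from the retrieved documents (the graph, weights and parameters below are invented for illustration):

```python
# Toy spreading activation: energy seeded at query concepts propagates
# over a semantic graph; documents attached to strongly activated concepts
# fall into the same cluster. Graph and weights are illustrative only.

GRAPH = {  # concept -> [(neighbour, edge weight)]
    "jaguar": [("cat", 0.9), ("car", 0.8)],
    "cat": [("animal", 0.9)],
    "car": [("vehicle", 0.9)],
    "animal": [], "vehicle": [],
}

def spread(seeds, decay=0.7, threshold=0.1):
    activation = dict(seeds)          # concept -> accumulated activation
    frontier = list(seeds)
    while frontier:
        node, energy = frontier.pop()
        for neigh, weight in GRAPH.get(node, []):
            pulse = energy * weight * decay
            if pulse > threshold:
                activation[neigh] = activation.get(neigh, 0.0) + pulse
                frontier.append((neigh, pulse))
    return activation

print(spread([("jaguar", 1.0)]))
# 'cat'/'animal' and 'car'/'vehicle' receive separate activation paths,
# so animal-sense and vehicle-sense documents can be clustered apart.
```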

  11. Semantic Clustering of Search Engine Results

    PubMed Central

    Soliman, Sara Saad; El-Sayed, Maged F.; Hassan, Yasser F.

    2015-01-01

    This paper presents a novel approach for search engine results clustering that relies on the semantics of the retrieved documents rather than the terms in those documents. The proposed approach takes into consideration both lexical and semantic similarities among documents and applies a spreading activation technique in order to generate semantically meaningful clusters. This approach allows documents that are semantically similar to be clustered together, rather than clustering documents based on similar terms. A prototype is implemented and several experiments are conducted to test the proposed solution. The results of these experiments confirm that the proposed solution achieves remarkable results in terms of precision. PMID:26933673

  12. Maturity Model for Advancing Smart Grid Interoperability

    SciTech Connect

    Knight, Mark; Widergren, Steven E.; Mater, J.; Montgomery, Austin

    2013-10-28

    Interoperability is about the properties of devices and systems to connect and work properly. Advancing interoperability eases integration and maintenance of the resulting interconnection. This leads to faster integration, lower labor and component costs, predictability of projects and the resulting performance, and evolutionary paths for upgrade. When specifications are shared and standardized, competition and novel solutions can bring new value streams to the community of stakeholders involved. Advancing interoperability involves reaching agreement on how things join at their interfaces. The quality of the agreements and the alignment of parties involved in the agreement present challenges that are best met with process improvement techniques. The GridWise® Architecture Council (GWAC), sponsored by the United States Department of Energy, is supporting an effort to use concepts from capability maturity models used in the software industry to advance interoperability of smart grid technology. An interoperability maturity model has been drafted and experience is being gained through trials on various types of projects and community efforts. This paper describes the value and objectives of maturity models, the nature of the interoperability maturity model and how it compares with other maturity models, and experiences gained with its use.

  13. Semantic Web Service Composition in Social Environments

    NASA Astrophysics Data System (ADS)

    Kuter, Ugur; Golbeck, Jennifer

    This paper describes how to generate compositions of semantic Web services using social trust information from user ratings of the services. We present a taxonomy of features, such as interoperability, availability, privacy, security, and others. We describe a way to compute social trust in OWL-S style semantic Web services. Our formalism exploits the users' ratings of the services and execution characteristics of those services. We describe our service-composition algorithm, called Trusty, that is based on this formalism. We discuss the formal properties of Trusty and our implementation of the algorithm. We present our experiments in which we compared Trusty with SHOP2, a well-known AI planning algorithm that has been successfully used for OWL-S style service composition. Our results demonstrate that Trusty generates more trustworthy compositions than SHOP2.
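    The rating-based flavour of the approach can be conveyed with a much-simplified sketch; the aggregation rule below (mean rating per service, weakest link per composition) is an illustrative stand-in for the paper's formalism, and all data are invented:

```python
# Simplified sketch of trust-aware composition selection: each candidate
# service carries user ratings, a composition inherits the trust of its
# weakest member, and the most trustworthy chain is selected. This is an
# illustrative stand-in for Trusty's actual formalism.
from statistics import mean

RATINGS = {  # service -> user ratings in [0, 1], hypothetical data
    "geocode_A": [0.9, 0.8, 1.0],
    "geocode_B": [0.4, 0.6],
    "route_X": [0.7, 0.9],
}

def service_trust(name):
    return mean(RATINGS[name])

def composition_trust(services):
    # Weakest link: a chain is only as trustworthy as its least trusted step.
    return min(service_trust(s) for s in services)

candidates = [["geocode_A", "route_X"], ["geocode_B", "route_X"]]
best = max(candidates, key=composition_trust)
print(best, round(composition_trust(best), 2))  # ['geocode_A', 'route_X'] 0.8
```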

  14. Grid-enabled Web Services for Geospatial Interoperability

    NASA Astrophysics Data System (ADS)

    Chen, A.; di, L.; Bai, Y.; Wei, Y.

    2006-05-01

    of geospatial data, and more than 20 mandatory elements needed for supporting various methods of geospatial service chaining defined in ISO 19119. The metadata model for data types and service types are proposed and integrated with the CSW model to facilitate service interoperability and chaining. Secondly, an intelligent Grid Service Mediator (iGSM) was developed to extend the other geospatial Web services (WCS, WMS etc.) to the Grid environment for geospatial service interoperability. Any user request for geospatial services is completed through the request and response operations of those services. Therefore, Grid Security Infrastructure (GSI) is combined with geospatial services to verify the validity of the requester. When the requester is valid, the service request is sent to iGSM for fulfilment. iGSM then securely queries the Grid-enabled CSW service to determine where the required service and data are located and whether or not it is necessary for the service to interact with other services, and then invokes the service. If the required data and service are at a remote location, iGSM will manage the data and result traffic between machines. With the implementation of GCSW, iGSM, and a number of Grid-enabled OGC geospatial services, we have successfully extended the application of Grid technology to the EO community for geospatial interoperability. Our experience with the project indicates that Grid technology has great potential for applications in the geospatial discipline. By using Grid technology as the foundation for geospatial data infrastructure, we achieved secure sharing of geospatial data and services while providing common OGC interfaces to users.
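    The catalogue step, querying a Grid-enabled CSW to locate data and services before invocation, can be sketched with OWSLib, assuming a hypothetical endpoint and omitting the GSI security handshake described above:

```python
# Sketch of the discovery step: query a CSW catalogue for candidate data
# and services before invoking them. Uses OWSLib; the endpoint URL is
# hypothetical and the Grid Security Infrastructure handshake is omitted.
from owslib.csw import CatalogueServiceWeb
from owslib.fes import PropertyIsLike

csw = CatalogueServiceWeb("https://example.org/csw")  # hypothetical GCSW

query = PropertyIsLike("csw:AnyText", "%land surface temperature%")
csw.getrecords2(constraints=[query], maxrecords=5)

for rec_id, rec in csw.records.items():
    print(rec_id, "-", rec.title)
```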

  15. Data interoperability software solution for emergency reaction in the Europe Union

    NASA Astrophysics Data System (ADS)

    Casado, R.; Rubiera, E.; Sacristan, M.; Schütte, F.; Peters, R.

    2015-07-01

    Emergency management becomes more challenging in international crisis episodes because of cultural, semantic and linguistic differences between all stakeholders, especially first responders. Misunderstandings between first responders make decision making slower and more difficult. However, the spread and development of networks and IT-based emergency management systems (EMSs) have improved emergency responses, which have become more coordinated. Despite improvements made in recent years, EMSs have still not solved the problems related to cultural, semantic and linguistic differences which are the real cause of slower decision making. In addition, from a technical perspective, the consolidation of current EMSs and the different formats used to exchange information pose another problem to be solved in any solution proposed for information interoperability between heterogeneous EMSs in different contexts. To overcome these problems, we present a software solution based on semantic and mediation technologies. EMERGency ELements (EMERGEL) (Fundacion CTIC and AntwortING Ingenieurbüro PartG, 2013), a common and modular ontology shared by all the stakeholders, has been defined. It offers the best solution to gather all stakeholders' knowledge in a unique and flexible data model, taking into account different countries' cultural and linguistic issues. To deal with the diversity of data protocols and formats, we have designed a service-oriented architecture for data interoperability (named DISASTER: Data Interoperability Solution At STakeholders Emergency Reaction) providing a flexible, extensible solution to solve the mediation issues. Web services have been adopted as the specific technology to implement this paradigm, as it has the most significant academic and industrial visibility and attraction. Contributions of this work have been validated through the design and development of a cross-border realistic prototype scenario, actively involving both emergency managers and emergency
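    The mediation idea (lifting local terms into shared EMERGEL concepts and re-grounding them for the receiver) can be reduced to a toy sketch; all concept identifiers and term tables below are invented:

```python
# Toy sketch of ontology-based mediation between first responders:
# local terms are lifted to a shared concept (EMERGEL-style) and then
# re-expressed in the receiver's vocabulary. All identifiers are invented.

TO_SHARED = {  # (language, local term) -> shared ontology concept
    ("de", "Löschfahrzeug"): "emergel:FireEngine",
    ("en", "fire engine"): "emergel:FireEngine",
    ("nl", "brandweerwagen"): "emergel:FireEngine",
}
FROM_SHARED = {  # (shared concept, target language) -> local term
    ("emergel:FireEngine", "en"): "fire engine",
    ("emergel:FireEngine", "nl"): "brandweerwagen",
}

def mediate(term, src_lang, dst_lang):
    concept = TO_SHARED[(src_lang, term)]      # lift to the shared concept
    return FROM_SHARED[(concept, dst_lang)]    # ground in the target language

print(mediate("Löschfahrzeug", "de", "nl"))   # -> brandweerwagen
```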

  16. Semantic Web Service Framework to Intelligent Distributed Manufacturing

    SciTech Connect

    Kulvatunyou, Boonserm

    2005-12-01

    As markets become unexpectedly turbulent with a shortened product life cycle and a power shift towards buyers, the need for methods to develop products, production facilities, and supporting software rapidly and cost-effectively is becoming urgent. The use of a loosely integrated virtual enterprise based framework holds the potential of surviving changing market needs. However, its success requires reliable and large-scale interoperation among trading partners via a semantic web of trading partners services whose properties, capabilities, and interfaces are encoded in an unambiguous as well as computer-understandable form. This paper demonstrates a promising approach to integration and interoperation between a design house and a manufacturer that may or may not have prior relationship by developing semantic web services for business and engineering transactions. To this end, detailed activity and information flow diagrams are developed, in which the two trading partners exchange messages and documents. The properties and capabilities of the manufacturer sites are defined using DARPA Agent Markup Language (DAML) ontology definition language. The prototype development of semantic webs shows that enterprises can interoperate widely in an unambiguous and autonomous manner. This contributes towards the realization of virtual enterprises at a low cost.

  17. European Interoperability Assets Register and Quality Framework Implementation.

    PubMed

    Moreno-Conde, Alberto; Thienpont, Geert; Lamote, Inge; Coorevits, Pascal; Parra, Carlos; Kalra, Dipak

    2016-01-01

    Interoperability assets is the term applied to refer to any resource that can support the design, implementation and successful adoption of eHealth services that can exchange data meaningfully. Some examples may include functional requirements, specifications, standards, clinical models and term lists, guidance on how standards may be used concurrently, implementation guides, educational resources, and other resources. Unfortunately, these are largely accessible in ad hoc ways and result in scattered fragments of a solution space that urgently need to be brought together. At present, it is well known that new initiatives and projects will reinvent assets of which they were unaware, while those assets which were potentially of great value are forgotten, not maintained and eventually fall into disuse. This research has defined a quality in use model and assessed the suitability of this quality framework based on the feedback and opinion of a representative sample of potential end users. This quality framework covers the following domains of asset development and adoption: (i) Development process, (ii) Maturity level, (iii) Trustworthiness, (iv) Support & skills, (v) Sustainability, (vi) Semantic interoperability, (vii) Cost & effort of adoption, (viii) Maintenance. When participants were requested to evaluate the overall quality in use framework, 70% said they would recommend using the register to their colleagues, 70% felt that it could provide relevant benefits for discovering new assets, and 50% responded that it would support their decision making about the recommended asset to adopt or implement in their organisation. Several European projects have expressed interest in using the register, which will now be sustained and promoted by the European Institute for Innovation through Health Data. PMID:27577473

  18. Provenance-Based Approaches to Semantic Web Service Discovery and Usage

    ERIC Educational Resources Information Center

    Narock, Thomas William

    2012-01-01

    The World Wide Web Consortium defines a Web Service as "a software system designed to support interoperable machine-to-machine interaction over a network." Web Services have become increasingly important both within and across organizational boundaries. With the recent advent of the Semantic Web, web services have evolved into semantic…

  19. Latest developments for the IAGOS database: Interoperability and metadata

    NASA Astrophysics Data System (ADS)

    Boulanger, Damien; Gautron, Benoit; Thouret, Valérie; Schultz, Martin; van Velthoven, Peter; Broetz, Bjoern; Rauthe-Schöch, Armin; Brissebrat, Guillaume

    2014-05-01

    In-service Aircraft for a Global Observing System (IAGOS, http://www.iagos.org) aims at the provision of long-term, frequent, regular, accurate, and spatially resolved in situ observations of the atmospheric composition. IAGOS observation systems are deployed on a fleet of commercial aircraft. The IAGOS database is an essential part of the global atmospheric monitoring network. Data access is handled by an open access policy based on the submission of research requests, which are reviewed by the PIs. Users can access the data through the following web sites: http://www.iagos.fr or http://www.pole-ether.fr, as the IAGOS database is part of the French atmospheric chemistry data centre ETHER (CNES and CNRS). The database is in continuous development and improvement. In the framework of the IGAS project (IAGOS for GMES/COPERNICUS Atmospheric Service), major achievements will be reached, such as metadata and format standardisation in order to interoperate with international portals and other databases, QA/QC procedures and traceability, CARIBIC (Civil Aircraft for the Regular Investigation of the Atmosphere Based on an Instrument Container) data integration within the central database, and real-time data transmission. IGAS work package 2 aims at providing the IAGOS data to users in a standardized format including the necessary metadata and information on data processing, data quality and uncertainties. We are currently redefining and standardizing the IAGOS metadata for interoperable use within GMES/Copernicus. The metadata are compliant with the ISO 19115, INSPIRE and NetCDF-CF conventions. IAGOS data will be provided to users in NetCDF or NASA Ames format. We are also implementing interoperability between all the involved IAGOS data services, including the central IAGOS database, the former MOZAIC and CARIBIC databases, the Aircraft Research DLR database and the Jülich WCS web application JOIN (Jülich OWS Interface), which combines model outputs with in situ data for
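    The format side of this standardisation can be sketched with the netCDF4 library; the variable names and attribute values below are illustrative CF-style choices, not the actual IAGOS metadata schema:

```python
# Minimal sketch of standardised delivery: an observation series written
# to NetCDF with CF-style metadata. Variable/attribute choices are
# illustrative, not the actual IAGOS schema.
import numpy as np
from netCDF4 import Dataset

with Dataset("iagos_sample.nc", "w") as nc:
    nc.Conventions = "CF-1.6"
    nc.title = "Aircraft in-situ ozone observations (illustrative)"

    nc.createDimension("time", 3)
    t = nc.createVariable("time", "f8", ("time",))
    t.units = "seconds since 2014-01-01 00:00:00"
    t.standard_name = "time"
    t[:] = [0, 60, 120]

    o3 = nc.createVariable("ozone", "f4", ("time",))
    o3.units = "1e-9"                    # mole fraction in ppb, CF-style
    o3.long_name = "ozone mole fraction"
    o3[:] = np.array([42.1, 43.0, 41.7])
```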

  20. Putting semantics into the semantic web: how well can it capture biology?

    PubMed

    Kazic, Toni

    2006-01-01

    Could the Semantic Web work for computations of biological interest in the way it's intended to work for movie reviews and commercial transactions? It would be wonderful if it could, so it's worth looking to see if its infrastructure is adequate to the job. The technologies of the Semantic Web make several crucial assumptions. I examine those assumptions; argue that they create significant problems; and suggest some alternative ways of achieving the Semantic Web's goals for biology.

  1. Code lists for interoperability - Principles and best practices in INSPIRE

    NASA Astrophysics Data System (ADS)

    Lutz, M.; Portele, C.; Cox, S.; Murray, K.

    2012-04-01

    external vocabulary. In the former case, for each value, an external identifier, one or more labels (possibly in different languages), a definition and other metadata should be specified. In the latter case, the external vocabulary should be characterised, e.g. by specifying the version to be used, the format(s) in which the vocabulary is available, possible constraints (e.g. if only a specific part of the external list is to be used), rules for using values in the encoding of instance data, and the maintenance rules applied to the external vocabulary. This information is crucial for enabling implementation and interoperability in distributed systems (such as SDIs) and should be made available through a code list registry. Thus, while the information on allowed code list values is usually managed outside the UML application schema, we recommend including «codeList»-stereotyped classes in the model for semantic clarity. Information on the obligation, extensibility and a reference to the specified values should be provided through tagged values. Acknowledgements: The authors would like to thank the INSPIRE Thematic Working Groups, the Data Specifications Drafting Team and the JRC Contact Points for their contributions to the discussions on code lists in INSPIRE and to this abstract.
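    The registry information described above can be pictured as a machine-readable record against which instance data are validated; the structure and values below are invented for illustration and do not reproduce the actual INSPIRE registry format:

```python
# Sketch of a code-list registry record carrying the metadata the text
# calls for (identifier, multilingual labels, definitions, extensibility),
# used to validate instance data. Structure and values are hypothetical.

CODE_LIST = {
    "id": "http://example.org/codelist/WaterBodyType",
    "extensibility": "none",   # a closed list: values outside it are invalid
    "values": {
        "river": {"label": {"en": "River", "fr": "Rivière"},
                  "definition": "A natural flowing watercourse."},
        "lake": {"label": {"en": "Lake", "fr": "Lac"},
                 "definition": "A standing body of water."},
    },
}

def validate(value):
    if value in CODE_LIST["values"]:
        return True
    return CODE_LIST["extensibility"] != "none"   # open lists may be extended

print(validate("river"), validate("canal"))   # True False
```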

  2. A web services choreography scenario for interoperating bioinformatics applications

    PubMed Central

    de Knikker, Remko; Guo, Youjun; Li, Jin-long; Kwan, Albert KH; Yip, Kevin Y; Cheung, David W; Cheung, Kei-Hoi

    2004-01-01

    Background Very often genome-wide data analysis requires the interoperation of multiple databases and analytic tools. A large number of genome databases and bioinformatics applications are available through the web, but it is difficult to automate interoperation because: 1) the platforms on which the applications run are heterogeneous, 2) their web interface is not machine-friendly, 3) they use a non-standard format for data input and output, 4) they do not exploit standards to define application interface and message exchange, and 5) existing protocols for remote messaging are often not firewall-friendly. To overcome these issues, web services have emerged as a standard XML-based model for message exchange between heterogeneous applications. Web services engines have been developed to manage the configuration and execution of a web services workflow. Results To demonstrate the benefit of using web services over traditional web interfaces, we compare the two implementations of HAPI, a gene expression analysis utility developed by the University of California San Diego (UCSD) that allows visual characterization of groups or clusters of genes based on the biomedical literature. This utility takes a set of microarray spot IDs as input and outputs a hierarchy of MeSH Keywords that correlates to the input and is grouped by Medical Subject Heading (MeSH) category. While the HTML output is easy for humans to visualize, it is difficult for computer applications to interpret semantically. To facilitate machine processing, we have created a workflow of three web services that replicates the HAPI functionality. These web services use document-style messages, which means that messages are encoded in an XML-based format. We compared three approaches to the implementation of an XML-based workflow: a hard-coded Java application, Collaxa BPEL Server and Taverna Workbench. The Java program functions as a web services engine and interoperates with these web
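    The hard-coded engine variant of the workflow can be conveyed with a short sketch (the original is Java; this Python analogue, with hypothetical endpoints and payloads, only illustrates the chaining of the three document-style services):

```python
# Illustrative analogue of the three-service HAPI workflow: each service
# consumes and produces an XML document, and the engine chains them.
# Endpoints and payload shapes are hypothetical; the paper's engine is
# Java and the services are document-style SOAP.
import requests

def call(endpoint, xml_doc):
    resp = requests.post(endpoint, data=xml_doc,
                         headers={"Content-Type": "text/xml"}, timeout=60)
    resp.raise_for_status()
    return resp.text

spot_ids = "<spotIDs><id>AA123</id><id>BB456</id></spotIDs>"

genes = call("https://example.org/ws/spots2genes", spot_ids)    # step 1
papers = call("https://example.org/ws/genes2pubmed", genes)     # step 2
mesh = call("https://example.org/ws/pubmed2mesh", papers)       # step 3
print(mesh[:300])  # hierarchy of MeSH keywords grouped by category
```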

  3. Building biomedical web communities using a semantically aware content management system.

    PubMed

    Das, Sudeshna; Girard, Lisa; Green, Tom; Weitzman, Louis; Lewis-Bowen, Alister; Clark, Tim

    2009-03-01

    Web-based biomedical communities are becoming an increasingly popular vehicle for sharing information amongst researchers and are fast gaining an online presence. However, information organization and exchange in such communities is usually unstructured, rendering interoperability between communities difficult. Furthermore, specialized software to create such communities at low cost, targeted at the specific common information requirements of biomedical researchers, has been largely lacking. At the same time, a growing number of biological knowledge bases and biomedical resources are being structured for the Semantic Web. Several groups are creating reference ontologies for the biomedical domain, actively publishing controlled vocabularies and making data available in the Resource Description Framework (RDF) language. We have developed the Science Collaboration Framework (SCF) as a reusable platform for advanced structured online collaboration in biomedical research that leverages these ontologies and RDF resources. SCF supports structured 'Web 2.0' style community discourse amongst researchers, makes heterogeneous data resources available to the collaborating scientist, captures the semantics of the relationship among the resources and structures discourse around the resources. The first instance of the SCF framework is being used to create an open-access online community for stem cell research - StemBook (http://www.stembook.org). We believe that such a framework is required to achieve optimal productivity and leveraging of resources in interdisciplinary scientific research. We expect it to be particularly beneficial in highly interdisciplinary areas, such as neurodegenerative disease and neurorepair research, as well as having broad utility across the natural sciences.
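    The kind of machine-interpretable community content SCF enables can be sketched with rdflib; the namespace and properties below are simplified, hypothetical choices, not SCF's actual schema:

```python
# Sketch of semantically structured community content: a resource is
# published as RDF so other communities can interpret it. Uses rdflib;
# the namespace and properties are simplified, hypothetical choices.
from rdflib import Graph, Literal, Namespace
from rdflib.namespace import DC, RDF

EX = Namespace("http://example.org/stembook/")

g = Graph()
chapter = EX["chapter/hematopoietic-stem-cells"]
g.add((chapter, RDF.type, EX.Chapter))
g.add((chapter, DC.title, Literal("Hematopoietic stem cells")))
g.add((chapter, DC.creator, Literal("A. Researcher")))
g.add((chapter, EX.discusses, EX["topic/HSC"]))

print(g.serialize(format="turtle"))
```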

  4. SOLE: Applying Semantics and Social Web to Support Technology Enhanced Learning in Software Engineering

    NASA Astrophysics Data System (ADS)

    Colomo-Palacios, Ricardo; Jiménez-López, Diego; García-Crespo, Ángel; Blanco-Iglesias, Borja

    eLearning educative processes are a challenge for educative institutions and education professionals. In an environment in which learning resources are being produced, catalogued and stored in innovative ways, SOLE provides a platform in which exam questions can be produced with the support of Web 2.0 tools, catalogued and labeled via the semantic web, and stored and distributed using eLearning standards. This paper presents SOLE, a social network for sharing exam questions, particularized for the Software Engineering domain, based on semantics and built using semantic web and eLearning standards, such as the IMS Question and Test Interoperability specification 2.1.

  5. Scientific Digital Libraries, Interoperability, and Ontologies

    NASA Technical Reports Server (NTRS)

    Hughes, J. Steven; Crichton, Daniel J.; Mattmann, Chris A.

    2009-01-01

    Scientific digital libraries serve complex and evolving research communities. Justifications for the development of scientific digital libraries include the desire to preserve science data and the promises of information interconnectedness, correlative science, and system interoperability. Shared ontologies are fundamental to fulfilling these promises. We present a tool framework, some informal principles, and several case studies where shared ontologies are used to guide the implementation of scientific digital libraries. The tool framework, based on an ontology modeling tool, was configured to develop, manage, and keep shared ontologies relevant within changing domains and to promote the interoperability, interconnectedness, and correlation desired by scientists.

  6. 47 CFR 64.621 - Interoperability and portability.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... technologies and their video communication service platforms are interoperable with the VRS Access Technology... use involving their VRS access technologies or video communication service platforms that are not... VRS access technologies and their video communication service platforms are interoperable with...

  7. A Semantic Web Blackboard System

    NASA Astrophysics Data System (ADS)

    McKenzie, Craig; Preece, Alun; Gray, Peter

    In this paper, we propose a Blackboard Architecture as a means for coordinating hybrid reasoning over the Semantic Web. We describe the components of traditional blackboard systems (Knowledge Sources, Blackboard, Controller) and then explain how we have enhanced these by incorporating some of the principles of the Semantic Web to produce our Semantic Web Blackboard. Much of the framework is already in place to facilitate our research: the communication protocol (HTTP); the data representation medium (RDF); a rich expressive description language (OWL); and a method of writing rules (SWRL). We further enhance this by adding our own constraint based formalism (CIF/SWRL) into the mix. We provide an example walk-through of our test-bed system, the AKTive Workgroup Builder and Blackboard (AWB+B), illustrating the interaction and cooperation of the Knowledge Sources and providing some context as to how the solution is achieved. We conclude with the strengths and weaknesses of the architecture.
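    The control loop of such a blackboard can be reduced to a toy sketch: a shared RDF graph, knowledge sources that contribute triples, and a controller cycling until quiescence. The knowledge sources and vocabulary below are invented; the actual system also involves OWL reasoning and CIF/SWRL constraints:

```python
# Toy blackboard loop: a shared RDF graph is the blackboard; each
# knowledge source inspects it and contributes new triples until no
# source can add anything. Vocabulary and sources are invented.
from rdflib import Graph, Namespace

EX = Namespace("http://example.org/awb/")

def ks_expertise(g):   # KS 1: infer expertise from authored papers
    new = [(s, EX.hasExpertise, EX.SemanticWeb)
           for s in g.subjects(EX.authored, EX.PaperOnSW)]
    return [t for t in new if t not in g]

def ks_workgroup(g):   # KS 2: recruit experts into the workgroup
    new = [(EX.Workgroup1, EX.hasMember, s)
           for s in g.subjects(EX.hasExpertise, EX.SemanticWeb)]
    return [t for t in new if t not in g]

blackboard = Graph()
blackboard.add((EX.Alice, EX.authored, EX.PaperOnSW))

changed = True
while changed:                      # the controller: cycle to quiescence
    changed = False
    for ks in (ks_expertise, ks_workgroup):
        for triple in ks(blackboard):
            blackboard.add(triple)
            changed = True

print(list(blackboard.triples((EX.Workgroup1, None, None))))
```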

  8. The MED-SUV Multidisciplinary Interoperability Infrastructure

    NASA Astrophysics Data System (ADS)

    Mazzetti, Paolo; D'Auria, Luca; Reitano, Danilo; Papeschi, Fabrizio; Roncella, Roberto; Puglisi, Giuseppe; Nativi, Stefano

    2016-04-01

    In accordance with the international Supersite initiative concept, the MED-SUV (MEDiterranean SUpersite Volcanoes) European project (http://med-suv.eu/) aims to enable long-term monitoring experiments in two relevant geologically active regions of Europe prone to natural hazards: Mt. Vesuvio/Campi Flegrei and Mt. Etna. This objective requires the integration of existing components, such as monitoring systems and databases, and novel sensors for the measurement of volcanic parameters. Moreover, MED-SUV is a direct contribution to the Global Earth Observation System of Systems (GEOSS) as one of the volcano Supersites recognized by the Group on Earth Observations (GEO). To achieve its goal, MED-SUV set up an advanced e-infrastructure allowing the discovery of and access to heterogeneous data for multidisciplinary applications, and the integration with external systems like GEOSS. The MED-SUV overall infrastructure is conceived as a three-layer architecture, with the lower layer (Data level) including the identified relevant data sources, the mid-tier (Supersite level) including components for mediation and harmonization, and the upper tier (Global level) composed of the systems that MED-SUV must serve, such as GEOSS and possibly other global/community systems. The Data level is mostly composed of existing data sources, such as space agencies' satellite data archives, the UNAVCO system, and the INGV-Rome data service. They share data according to different specifications for metadata, data and service interfaces, and cannot be changed. Thus, the only relevant MED-SUV activity at this level was the creation of a MED-SUV local repository based on Web Accessible Folder (WAF) technology, deployed at the INGV site in Catania, and hosting in-situ data and products collected and generated during the project. The Supersite level is at the core of the MED-SUV architecture, since it must mediate between the disparate data sources in the layer below, and provide a harmonized view to

  9. Semantic Context Detection Using Audio Event Fusion

    NASA Astrophysics Data System (ADS)

    Chu, Wei-Ta; Cheng, Wen-Huang; Wu, Ja-Ling

    2006-12-01

    Semantic-level content analysis is a crucial issue in achieving efficient content retrieval and management. We propose a hierarchical approach that models audio events over a time series in order to accomplish semantic context detection. Two levels of modeling, audio event and semantic context modeling, are devised to bridge the gap between physical audio features and semantic concepts. In this work, hidden Markov models (HMMs) are used to model four representative audio events, that is, gunshot, explosion, engine, and car braking, in action movies. At the semantic context level, generative (ergodic hidden Markov model) and discriminative (support vector machine (SVM)) approaches are investigated to fuse the characteristics and correlations among audio events, which provide cues for detecting gunplay and car-chasing scenes. The experimental results demonstrate the effectiveness of the proposed approaches and provide a preliminary framework for information mining by using audio characteristics.

  10. An Interoperability Platform Enabling Reuse of Electronic Health Records for Signal Verification Studies

    PubMed Central

    Yuksel, Mustafa; Gonul, Suat; Laleci Erturkmen, Gokce Banu; Sinaci, Ali Anil; Invernizzi, Paolo; Facchinetti, Sara; Migliavacca, Andrea; Bergvall, Tomas; Depraetere, Kristof; De Roo, Jos

    2016-01-01

    Depending mostly on voluntarily sent spontaneous reports, pharmacovigilance studies are hampered by low quantity and quality of patient data. Our objective is to improve postmarket safety studies by enabling safety analysts to seamlessly access a wide range of EHR sources for collecting deidentified medical data sets of selected patient populations and tracing the reported incidents back to original EHRs. We have developed an ontological framework where EHR sources and target clinical research systems can continue using their own local data models, interfaces, and terminology systems, while structural interoperability and Semantic Interoperability are handled through rule-based reasoning on formal representations of different models and terminology systems maintained in the SALUS Semantic Resource Set. SALUS Common Information Model at the core of this set acts as the common mediator. We demonstrate the capabilities of our framework through one of the SALUS safety analysis tools, namely, the Case Series Characterization Tool, which has been deployed on top of the regional EHR Data Warehouse of the Lombardy Region, containing about 1 billion records from 16 million patients, and validated by several pharmacovigilance researchers with real-life cases. The results confirm significant improvements in signal detection and evaluation compared to traditional methods lacking such background information. PMID:27123451

  11. An Interoperability Platform Enabling Reuse of Electronic Health Records for Signal Verification Studies.

    PubMed

    Yuksel, Mustafa; Gonul, Suat; Laleci Erturkmen, Gokce Banu; Sinaci, Ali Anil; Invernizzi, Paolo; Facchinetti, Sara; Migliavacca, Andrea; Bergvall, Tomas; Depraetere, Kristof; De Roo, Jos

    2016-01-01

    Depending mostly on voluntarily sent spontaneous reports, pharmacovigilance studies are hampered by low quantity and quality of patient data. Our objective is to improve postmarket safety studies by enabling safety analysts to seamlessly access a wide range of EHR sources for collecting deidentified medical data sets of selected patient populations and tracing the reported incidents back to original EHRs. We have developed an ontological framework where EHR sources and target clinical research systems can continue using their own local data models, interfaces, and terminology systems, while structural interoperability and Semantic Interoperability are handled through rule-based reasoning on formal representations of different models and terminology systems maintained in the SALUS Semantic Resource Set. SALUS Common Information Model at the core of this set acts as the common mediator. We demonstrate the capabilities of our framework through one of the SALUS safety analysis tools, namely, the Case Series Characterization Tool, which has been deployed on top of the regional EHR Data Warehouse of the Lombardy Region, containing about 1 billion records from 16 million patients, and validated by several pharmacovigilance researchers with real-life cases. The results confirm significant improvements in signal detection and evaluation compared to traditional methods lacking such background information. PMID:27123451

  12. Challenges of interoperability using HL7 v3 in Czech healthcare.

    PubMed

    Nagy, Miroslav; Preckova, Petra; Seidl, Libor; Zvarova, Jana

    2010-01-01

    The paper describes several classification systems that could improve patient safety through semantic interoperability among contemporary electronic health record systems (EHR-Ss) with support of the HL7 v3 standard. We describe a proposal and a pilot implementation of a semantic interoperability platform (SIP) interconnecting current EHR-Ss by using HL7 v3 messages and concept mappings on the most widely used classification systems. The increasing number of classification systems and nomenclatures requires the design of various conversion tools for transfer between the main classification systems. We present the so-called LIM filler module and the HL7 broker, which are parts of the SIP, playing the role of such conversion tools. The analysis of the suitability and usability of individual terminological thesauri has been started by mapping the clinical contents of the Minimal Data Model for Cardiology (MDMC) to various terminological classification systems. A nation-wide implementation of the SIP would include adopting and translating international coding systems and nomenclatures, and developing implementation guidelines facilitating the migration from national standards to international ones. Our research showed that creation of such a platform is feasible; however, it will require a huge effort to adapt fully the Czech healthcare system to the European environment. PMID:20543319
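    The conversion-tool role played by the LIM filler module and the HL7 broker can be pictured as code-system mapping when assembling an HL7 v3 coded element; the mapping table below is invented, whereas real mappings would come from curated terminology resources:

```python
# Sketch of terminology conversion when building an HL7 v3 message:
# a local (e.g. MDMC-style) code is translated to a target coding system.
# The mapping table is hypothetical; real mappings are curated.

MAPPINGS = {  # (source system, code) -> (target system, code, display)
    ("MDMC", "HT01"): ("SNOMED CT", "38341003", "Hypertensive disorder"),
    ("MDMC", "MI02"): ("SNOMED CT", "22298006", "Myocardial infarction"),
}

def to_hl7_coded_element(system, code):
    tgt_system, tgt_code, display = MAPPINGS[(system, code)]
    # Shape loosely follows an HL7 v3 coded element (code/codeSystem/display).
    return {"code": tgt_code, "codeSystemName": tgt_system,
            "displayName": display}

print(to_hl7_coded_element("MDMC", "HT01"))
```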

  13. WS/PIDS: standard interoperable PIDS in web services environments.

    PubMed

    Vasilescu, E; Dorobanţu, M; Govoni, S; Padh, S; Mun, S K

    2008-01-01

    An electronic health record depends on the consistent handling of people's identities within and outside healthcare organizations. Currently, the Person Identification Service (PIDS), a CORBA specification, is the only well-researched standard that meets these needs. In this paper, we introduce WS/PIDS, a PIDS specification for Web Services (WS) that closely matches the original PIDS and improves on it by providing explicit support for medical multimedia attributes. WS/PIDS is currently supported by a test implementation, layered on top of a PIDS back-end, with Java- and .NET-based clients, as well as Web clients. WS/PIDS is interoperable among platforms; it preserves PIDS semantics to a large extent, and it is intended to be fully compliant with established and emerging WS standards. The specification is open source and immediately usable in dynamic clinical systems participating in grid environments. WS/PIDS has been tested successfully with a comprehensive set of use cases, and it is being used in a clinical research setting.

  14. Gazetteer Brokering through Semantic Mediation

    NASA Astrophysics Data System (ADS)

    Hobona, G.; Bermudez, L. E.; Brackin, R.

    2013-12-01

    A gazetteer is a geographical directory containing some information regarding places. It provides names, location and other attributes for places, which may include points of interest (e.g. buildings, oilfields and boreholes) and other features. These features can be published via web services conforming to the Gazetteer Application Profile of the Web Feature Service (WFS) standard of the Open Geospatial Consortium (OGC). Against the backdrop of advances in geophysical surveys, there has been a significant increase in the amount of data referenced to locations. Gazetteer services have played a significant role in facilitating access to such data, including through the provision of specialized queries such as text, spatial and fuzzy search. Recent developments in the OGC have led to advances in gazetteers such as support for multilingualism, diacritics, and querying via advanced spatial constraints (e.g. search by radial search and nearest neighbor). A remaining challenge, however, is that gazetteers produced by different organizations have typically been modeled differently. Inconsistencies between gazetteers produced by different organizations may include naming the same feature in a different way, naming the attributes differently, locating the feature in a different location, and providing fewer or more attributes than the other services. The Gazetteer Application Profile of the WFS is a starting point to address such inconsistencies by providing a standardized interface based on rules specified in ISO 19112, the international standard for spatial referencing by geographic identifiers. The profile, however, does not provide rules to deal with semantic inconsistencies. The USGS and NGA commissioned research into the potential for a Single Point of Entry Global Gazetteer (SPEGG). The research was conducted by the Cross Community Interoperability thread of the OGC testbed, referenced as OWS-9. The testbed prototyped approaches for brokering gazetteers through use of semantic
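    A name query against a WFS gazetteer, of the kind the profile standardises, can be sketched with the WFS 2.0 KVP binding and an OGC filter; the endpoint and feature type below are hypothetical, and schema differences between real gazetteers are exactly the inconsistency the brokering work addresses:

```python
# Sketch of a gazetteer query: WFS 2.0 GetFeature with an FES filter
# matching feature names. Endpoint and feature type are hypothetical.
import requests

WFS_URL = "https://example.org/gazetteer/wfs"   # hypothetical endpoint

ogc_filter = (
    '<fes:Filter xmlns:fes="http://www.opengis.net/fes/2.0">'
    '<fes:PropertyIsLike wildCard="*" singleChar="." escapeChar="!">'
    "<fes:ValueReference>gn:name</fes:ValueReference>"
    "<fes:Literal>Spring*</fes:Literal>"
    "</fes:PropertyIsLike></fes:Filter>"
)

params = {
    "service": "WFS", "version": "2.0.0", "request": "GetFeature",
    "typeNames": "gn:NamedPlace",    # hypothetical gazetteer feature type
    "filter": ogc_filter,
}
resp = requests.get(WFS_URL, params=params, timeout=60)
print(resp.status_code, resp.text[:300])
```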

  15. Parallel mesh management using interoperable tools.

    SciTech Connect

    Tautges, Timothy James; Devine, Karen Dragon

    2010-10-01

    This presentation included a discussion of challenges arising in parallel mesh management, as well as demonstrated solutions. They also described the broad range of software for mesh management and modification developed by the Interoperable Technologies for Advanced Petascale Simulations (ITAPS) team, and highlighted applications successfully using the ITAPS tool suite.

  16. Smart Grid Interoperability Maturity Model Beta Version

    SciTech Connect

    Widergren, Steven E.; Drummond, R.; Giroti, Tony; Houseman, Doug; Knight, Mark; Levinson, Alex; longcore, Wayne; Lowe, Randy; Mater, J.; Oliver, Terry V.; Slack, Phil; Tolk, Andreas; Montgomery, Austin

    2011-12-02

    The GridWise Architecture Council was formed by the U.S. Department of Energy to promote and enable interoperability among the many entities that interact with the electric power system. This balanced team of industry representatives proposes principles for the development of interoperability concepts and standards. The Council provides industry guidance and tools that make it an available resource for smart grid implementations. In the spirit of advancing interoperability of an ecosystem of smart grid devices and systems, this document presents a model for evaluating the maturity of the artifacts and processes that specify the agreement of parties to collaborate across an information exchange interface. You are expected to have a solid understanding of large, complex system integration concepts and experience in dealing with software component interoperation. Those without this technical background should read the Executive Summary for a description of the purpose and contents of the document. Other documents, such as checklists, guides, and whitepapers, exist for targeted purposes and audiences. Please see the www.gridwiseac.org website for more products of the Council that may be of interest to you.

  17. Specific interoperability problems of security infrastructure services.

    PubMed

    Pharow, Peter; Blobel, Bernd

    2006-01-01

    Communication and co-operation in healthcare and welfare require a well-defined set of security services based on a standards-based interoperable security infrastructure and provided by a Trusted Third Party. Generally, the services describe status and relation of communicating principals, corresponding keys and attributes, and the access rights to both applications and data. Legal, social, behavioral and ethical requirements demand securely stored patient information and well-established access tools and tokens. Electronic signatures as means for securing integrity of messages and files, certified time stamps and time signatures are important for accessing and storing data in Electronic Health Record Systems. The key for all these services is a secure and reliable procedure for authentication (identification and verification). While mentioning technical problems (e.g. lifetime of the storage devices, migration of retrieval and presentation software), this paper aims at identifying harmonization and interoperability requirements of securing data items, files, messages, sets of archived items or documents, and life-long Electronic Health Records based on a secure certificate-based identification. It's commonly known that just relying on existing and emerging security standards does not necessarily guarantee interoperability of different security infrastructure approaches. So certificate separation can be a key to modern interoperable security infrastructure services.

  18. EVA safety: Space suit system interoperability

    NASA Technical Reports Server (NTRS)

    Skoog, A. I.; McBarron, J. W.; Abramov, L. P.; Zvezda, A. O.

    1995-01-01

    The results and the recommendations of the International Academy of Astronautics extravehicular activities (IAA EVA) Committee work are presented. The IAA EVA protocols and operations were analyzed for harmonization procedures and for the standardization of safety-critical and operationally important interfaces. The key role of EVA and ways to improve the situation, based on the identified EVA space suit system interoperability deficiencies, were considered.

  19. Interoperability Outlook in the Big Data Future

    NASA Astrophysics Data System (ADS)

    Kuo, K. S.; Ramachandran, R.

    2015-12-01

    The establishment of distributed active archive centers (DAACs) as data warehouses and the standardization of file format by NASA's Earth Observing System Data Information System (EOSDIS) doubtlessly propelled interoperability of NASA Earth science data to unprecedented heights in the 1990s. However, we obviously still find it wanting two decades later. We believe the inadequate interoperability we experience is a result of the current practice in which data are first packaged into files before distribution, and only the metadata of these files are cataloged into databases and become searchable. Data therefore cannot be efficiently filtered. Any extensive study thus requires downloading large volumes of data files to a local system for processing and analysis. The need to download data not only creates duplication and inefficiency but also further impedes interoperability, because the analysis has to be performed locally by individual researchers in individual institutions. Each institution or researcher often has its/his/her own preference in the choice of data management practice as well as programming languages. Analysis results (derived data) so produced are thus subject to the differences of these practices, which later form formidable barriers to interoperability. A number of Big Data technologies are currently being examined and tested to address Big Earth Data issues. These technologies share one common characteristic: exploiting compute and storage affinity to more efficiently analyze large volumes and great varieties of data. Distributed active "archive" centers are likely to evolve into distributed active "analysis" centers, which not only archive data but also provide analysis service right where the data reside. "Analysis" will become the more visible function of these centers. It is thus reasonable to expect interoperability to improve because analysis, in addition to data, becomes more centralized. Within a "distributed active analysis center

  20. Enhancing Data Interoperability with Web Services

    NASA Astrophysics Data System (ADS)

    Shrestha, S. R.; Zimble, D. A.; Wang, W.; Herring, D.; Halpert, M.

    2014-12-01

    In an effort to improve data access and interoperability of climate and weather data, the National Oceanic and Atmospheric Administration's (NOAA) Climate.gov and Climate Prediction Center (CPC) are exploring various platform solutions to enhance a user's ability to locate, preview, and acquire the data. The Climate.gov and CPC data team faces multiple challenges including the various kinds of data and formats, inconsistency of metadata records, variety of data service implementations, very large volumes of data and geographically distributed locations. We have created the Data Access and Interoperability project to design a web-based platform, where interoperability between systems can be leveraged to allow greater data discovery, access, visualization and delivery. In the interoperable data platform, systems can integrate with each other to support the synthesis of climate and weather data. Interoperability is the ability for users to discover the available climate and weather data, preview and interact with the data, and acquire the data in common digital formats through a simple web-based interface. The goal of the interoperable data platform is to leverage existing web services, implement the established standards and integrate with existing solutions across the earth sciences domain instead of creating new technologies. Towards this effort to improve the interoperability of the platform, we are collaborating with ESRI Inc. to provide climate and weather data via web services. In this presentation, we will discuss and demonstrate how to use ArcGIS to author RESTful based scientific web services using open standards. These web services are able to encapsulate the logic required to handle and describe scientific data through a variety of service types including, image, map, feature, geoprocessing, and their respective service methods. Combining these types of services and leveraging well-documented APIs, including the ArcGIS JavaScript API, we can afford to
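    Consuming one of these services can be sketched against the ArcGIS REST API's export operation; the service URL and layer below are hypothetical placeholders, not actual Climate.gov endpoints:

```python
# Sketch of consuming an ArcGIS REST map service of the kind described:
# an export request renders the data to an image for preview. The service
# URL is a hypothetical placeholder; 'export' and its parameters follow
# documented ArcGIS REST API conventions.
import requests

SERVICE = "https://example.org/arcgis/rest/services/cpc_outlook/MapServer"

params = {
    "f": "json",                # machine-readable response
    "bbox": "-125,24,-66,50",   # conterminous US, lon/lat
    "bboxSR": "4326",
    "size": "800,600",
    "format": "png",
}
resp = requests.get(SERVICE + "/export", params=params, timeout=60)
resp.raise_for_status()
print(resp.json().get("href"))  # URL of the rendered map image
```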

  1. Social Semantics for an Effective Enterprise

    NASA Technical Reports Server (NTRS)

    Berndt, Sarah; Doane, Mike

    2012-01-01

    An evolution of the Semantic Web, the Social Semantic Web (s2w) facilitates knowledge sharing with "useful information based on human contributions, which gets better as more people participate." The s2w reaches beyond the search box to move us from a collection of hyperlinked facts to meaningful, real-time context. When focused through the lens of Enterprise Search, the Social Semantic Web facilitates the fluid transition of meaningful business information from the source to the user. It is the confluence of human thought and computer processing, structured with the iterative application of taxonomies, folksonomies, ontologies, and metadata schemas. The importance and nuances of human interaction are often deemphasized when focusing on automatic generation of semantic markup, which results in dissatisfied users and unrealized return on investment. Users consistently qualify the value of information sets through the act of selection, making them the de facto stakeholders of the Social Semantic Web. Employers are the ultimate beneficiaries of s2w utilization with a better informed, more decisive workforce; one not achieved with an IT miracle technology, but by improved human-computer interactions. Johnson Space Center Taxonomist Sarah Berndt and Mike Doane, principal owner of Term Management, LLC, discuss the planning, development, and maintenance stages for components of a semantic system while emphasizing the necessity of a Social Semantic Web for the Enterprise. Risks and variables associated with implementing a semantic system are also identified and modeled.

  2. THE Interoperability Challenge for the Geosciences: Stepping up from Interoperability between Disciplinary Siloes to Creating Transdisciplinary Data Platforms.

    NASA Astrophysics Data System (ADS)

    Wyborn, L. A.; Evans, B. J. K.; Trenham, C.; Druken, K. A.; Wang, J.

    2015-12-01

    The National Computational Infrastructure (NCI) at the Australian National University (ANU) has collocated over 10 PB of national and international data assets within a HPC facility to create the National Environmental Research Data Interoperability Platform (NERDIP). The data span a wide range of fields from the earth systems and environment (climate, coasts, oceans, and geophysics) through to astronomy, bioinformatics, and the social sciences. These diverse data collections are collocated on a major data storage node that is linked to a Petascale HPC and Cloud facility. Users can search across all of the collections and either log in and access the data directly, or access the data via standards-based web services. These collocated petascale data collections are theoretically a massive resource for interdisciplinary science at scales and resolutions never hitherto possible. But once collocated, multiple barriers became apparent that make cross-domain data integration very difficult and often so time consuming that either less ambitious research goals are attempted or the project is abandoned. Incompatible content is only one half of the problem: other showstoppers are differing access models, licences and issues of ownership of derived products. Brokers can enable interdisciplinary research, but in reality are we just delaying the inevitable? A call to action is required to adopt a transdisciplinary approach at the conception of development of new multi-disciplinary systems, whereby those across all the scientific domains, the humanities, social sciences and beyond work together to create a unity of informatics platforms that interoperate horizontally across the multiple discipline boundaries, and also operate vertically to enable a diversity of people, from high-end researchers to undergraduates, school students and the general public, to access data. Once we master such a transdisciplinary approach to our vast global information assets, we will then achieve

  3. Processing biological literature with customizable Web services supporting interoperable formats.

    PubMed

    Rak, Rafal; Batista-Navarro, Riza Theresa; Carter, Jacob; Rowley, Andrew; Ananiadou, Sophia

    2014-01-01

    Web services have become a popular means of interconnecting solutions for processing a body of scientific literature. This has fuelled research on high-level data exchange formats suitable for a given domain and ensuring the interoperability of Web services. In this article, we focus on the biological domain and consider four interoperability formats, BioC, BioNLP, XMI and RDF, that represent domain-specific and generic representations and include well-established as well as emerging specifications. We use the formats in the context of customizable Web services created in our Web-based, text-mining workbench Argo that features an ever-growing library of elementary analytics and capabilities to build and deploy Web services straight from a convenient graphical user interface. We demonstrate a 2-fold customization of Web services: by building task-specific processing pipelines from a repository of available analytics, and by configuring services to accept and produce a combination of input and output data interchange formats. We provide qualitative evaluation of the formats as well as quantitative evaluation of automatic analytics. The latter was carried out as part of our participation in the fourth edition of the BioCreative challenge. Our analytics built into Web services for recognizing biochemical concepts in BioC collections achieved the highest combined scores out of 10 participating teams. Database URL: http://argo.nactem.ac.uk.

  4. Semantics via Machine Translation

    ERIC Educational Resources Information Center

    Culhane, P. T.

    1977-01-01

    Recent experiments in machine translation have given the semantic elements of collocation in Russian more objective criteria. Soviet linguists in search of semantic relationships have attempted to devise a semantic synthesis for construction of a basic language for machine translation. One such effort is summarized. (CHK)

  5. SEMANTICS AND CRITICAL READING.

    ERIC Educational Resources Information Center

    FLANIGAN, MICHAEL C.

    PROFICIENCY IN CRITICAL READING CAN BE ACCELERATED BY MAKING STUDENTS AWARE OF VARIOUS SEMANTIC DEVICES THAT HELP CLARIFY MEANINGS AND PURPOSES. EXCERPTS FROM THE ARTICLE "TEEN-AGE CORRUPTION" FROM THE NINTH-GRADE SEMANTICS UNIT WRITTEN BY THE PROJECT ENGLISH DEMONSTRATION CENTER AT EUCLID, OHIO, ARE USED TO ILLUSTRATE HOW SEMANTICS RELATE TO…

  6. Data interoperability software solution for emergency reaction in the Europe Union

    NASA Astrophysics Data System (ADS)

    Casado, R.; Rubiera, E.; Sacristan, M.; Schütte, F.; Peters, R.

    2014-09-01

    Emergency management becomes more challenging in international crisis episodes because of cultural, semantic and linguistic differences between all stakeholders, especially first responders. Misunderstandings between first responders make decision-making slower and more difficult. However, the spread and development of networks and IT-based Emergency Management Systems (EMS) has made emergency responses more coordinated. Despite improvements made in recent years, EMS still have not solved the problems related to cultural, semantic and linguistic differences, which are the real cause of slower decision-making. In addition, from a technical perspective, the consolidation of current EMS and the different formats used to exchange information pose another problem for any solution proposed for information interoperability between heterogeneous EMS operating in different contexts. To overcome these problems we present a software solution based on semantic and mediation technologies. EMERGency ELements (EMERGEL) (Fundacion CTIC and AntwortING Ingenieurbüro PartG 2013), a common and modular ontology shared by all the stakeholders, has been defined. It gathers all stakeholders' knowledge in a unique and flexible data model, taking into account the cultural and linguistic issues of different countries. To deal with the diversity of data protocols and formats, we have designed a Service Oriented Architecture for Data Interoperability (named DISASTER) that provides a flexible, extensible solution to the mediation issues. Web Services have been adopted as the technology to implement this paradigm, as they have the most significant academic and industrial visibility and traction. The contributions of this work have been validated through the design and development of a realistic cross-border prototype scenario, actively involving both emergency managers and emergency first responders: a fire at the Netherlands-Germany border.

  7. Project Integration Architecture: Formulation of Semantic Parameters

    NASA Technical Reports Server (NTRS)

    Jones, William Henry

    2005-01-01

    One of several key elements of the Project Integration Architecture (PIA) is the intention to formulate parameter objects which convey meaningful semantic information. In so doing, it is expected that a level of automation can be achieved in the consumption of information content by PIA-consuming clients outside the programmatic boundary of a presenting PIA-wrapped application. This paper discusses the steps that have been recently taken in formulating such semantically-meaningful parameters.

  8. Stuart Sutton, Associate Professor, University of Washington iSchool: From Discourse Communities to the Semantic Web.

    ERIC Educational Resources Information Center

    Forsythe, Kathleen

    2002-01-01

    In this interview Professor Stuart Sutton discusses proliferation of metadata schemas as an outgrowth of various discourse communities as they find their niche on the semantic Web. Highlights include interoperability; cataloging tools, including GEMCat; and the role of librarians and information science education in the development of Internet…

  9. BioC interoperability track overview

    PubMed Central

    Comeau, Donald C.; Batista-Navarro, Riza Theresa; Dai, Hong-Jie; Islamaj Doğan, Rezarta; Jimeno Yepes, Antonio; Khare, Ritu; Lu, Zhiyong; Marques, Hernani; Mattingly, Carolyn J.; Neves, Mariana; Peng, Yifan; Rak, Rafal; Rinaldi, Fabio; Tsai, Richard Tzong-Han; Verspoor, Karin; Wiegers, Thomas C.; Wu, Cathy H.; Wilbur, W. John

    2014-01-01

    BioC is a new simple XML format for sharing biomedical text and annotations and libraries to read and write that format. This promotes the development of interoperable tools for natural language processing (NLP) of biomedical text. The interoperability track at the BioCreative IV workshop featured contributions using or highlighting the BioC format. These contributions included additional implementations of BioC, many new corpora in the format, biomedical NLP tools consuming and producing the format and online services using the format. The ease of use, broad support and rapidly growing number of tools demonstrate the need for and value of the BioC format. Database URL: http://bioc.sourceforge.net/ PMID:24980129
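
    For readers unfamiliar with the format, the sketch below assembles a minimal BioC collection (collection/document/passage/annotation) with Python's standard library; the sample text, identifiers and annotation are invented for illustration.

      # Build a tiny BioC collection in memory and serialize it.
      # Element structure follows the BioC DTD; content is invented.
      import xml.etree.ElementTree as ET

      collection = ET.Element("collection")
      ET.SubElement(collection, "source").text = "PubMed"
      ET.SubElement(collection, "date").text = "20140101"
      ET.SubElement(collection, "key").text = "example.key"

      doc = ET.SubElement(collection, "document")
      ET.SubElement(doc, "id").text = "12345"

      passage = ET.SubElement(doc, "passage")
      ET.SubElement(passage, "offset").text = "0"
      ET.SubElement(passage, "text").text = "Aspirin inhibits platelet aggregation."

      ann = ET.SubElement(passage, "annotation", id="T1")
      infon = ET.SubElement(ann, "infon", key="type")
      infon.text = "chemical"
      ET.SubElement(ann, "location", offset="0", length="7")
      ET.SubElement(ann, "text").text = "Aspirin"

      print(ET.tostring(collection, encoding="unicode"))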

  10. Network effects, cascades and CCP interoperability

    NASA Astrophysics Data System (ADS)

    Feng, Xiaobing; Hu, Haibo; Pritsker, Matthew

    2014-03-01

    To control counterparty risk, financial regulations such as the Dodd Frank Act are increasingly requiring standardized derivatives trades to be cleared by central counterparties (CCPs). It is anticipated that in the near-term future, CCPs across the world will be linked through interoperability agreements that facilitate risk-sharing but also serve as a conduit for transmitting shocks. This paper theoretically studies a network with CCPs that are linked through interoperability arrangements, and studies the properties of the network that contribute to cascading failures. The magnitude of the cascading is theoretically related to the strength of network linkages, the size of the network, the logistic mapping coefficient, a stochastic effect and CCP's defense lines. Simulations indicate that larger network effects increase systemic risk from cascading failures. The size of the network N raises the threshold value of shock sizes that are required to generate cascades. Hence, the larger the network, the more robust it will be.

  11. Semantic networks of English.

    PubMed

    Miller, G A; Fellbaum, C

    1991-12-01

    Principles of lexical semantics developed in the course of building an on-line lexical database are discussed. The approach is relational rather than componential. The fundamental semantic relation is synonymy, which is required in order to define the lexicalized concepts that words can be used to express. Other semantic relations between these concepts are then described. No single set of semantic relations or organizational structure is adequate for the entire lexicon: nouns, adjectives, and verbs each have their own semantic relations and their own organization determined by the role they must play in the construction of linguistic messages.
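
    The on-line lexical database this work describes grew into WordNet, which can be explored today through NLTK; the short session below illustrates the relational approach (synsets and the relations between them), assuming NLTK and its wordnet corpus are installed.

      # Explore WordNet's relational structure via NLTK.
      # Requires: pip install nltk; then nltk.download("wordnet").
      from nltk.corpus import wordnet as wn

      # Synonymy groups words into synsets (lexicalized concepts).
      for s in wn.synsets("car")[:2]:
          print(s.name(), "->", s.lemma_names())

      # Nouns are organized by relations such as hypernymy...
      print(wn.synset("car.n.01").hypernyms())

      # ...while verbs have their own relations, e.g. entailment.
      print(wn.synset("snore.v.01").entailments())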

  12. Testbed for Satellite and Terrestrial Interoperability (TSTI)

    NASA Technical Reports Server (NTRS)

    Gary, J. Patrick

    1998-01-01

    Various issues associated with the "Testbed for Satellite and Terrestrial Interoperability (TSTI)" are presented in viewgraph form. Specific topics include: 1) General and specific scientific technical objectives; 2) ACTS experiment No. 118: 622 Mbps network tests between ATDNet and MAGIC via ACTS; 3) ATDNet SONET/ATM gigabit network; 4) Testbed infrastructure, collaborations and end sites in TSTI based evaluations; 5) the Trans-Pacific digital library experiment; and 6) ESDCD on-going network projects.

  13. Future Interoperability of Camp Protection Systems (FICAPS)

    NASA Astrophysics Data System (ADS)

    Caron, Sylvie; Gündisch, Rainer; Marchand, Alain; Stahl, Karl-Hermann

    2013-05-01

    The FICAPS Project has been established as a Project of the European Defence Agency based on an initiative of Germany and France. The goal of this Project was to derive Guidelines which, by proper implementation in future developments, improve Camp Protection Systems (CPS) by enabling and improving interoperability between the Camp Protection Systems and equipment of the different Nations involved in multinational missions. These Guidelines shall allow for: • Real-time information exchange between equipment and systems of different suppliers and nations (even via SatCom), • Quick and easy replacement of equipment (even of different Nations) at run-time in the field by means of plug-and-play capability, thus lowering the operational and logistic costs and making the system highly available, • Enhancement of system capabilities (open and modular systems) by adding new equipment with new capabilities (just plug in, with automatic adjustment of the HMI, Human Machine Interface) without costly and time-consuming validation and test at system level (validation and test can be done at Equipment level). Four scenarios have been identified to summarize the interoperability requirements from an operational viewpoint. To prove the definitions given in the Guideline Document, a French and a German Demonstration System, based on existing national assets, were realized. Demonstrations, showing the capabilities given by the defined interoperability requirements with respect to the operational scenarios, were performed. Demonstrations included remote control of a CPS by another CPS, remote sensor control (Electro-Optic/InfraRed, EO/IR) and remote effector control; this capability can be applied to extend the protection area or to protect distant infrastructural assets. The required interoperability functionality was shown successfully. Although the focus of the FICAPS project was on camp protection, the solution found is also appropriate for other

  14. Designing Interoperable Data Products with Community Conventions

    NASA Astrophysics Data System (ADS)

    Habermann, T.; Jelenak, A.; Lee, H.

    2015-12-01

    The HDF Product Designer (HPD) is a cloud-based client-server collaboration tool that can bring existing netCDF-3/4/CF, HDF4/5, and HDF-EOS2/5 products together to create new interoperable data products that serve the needs of the Earth Science community. The tool is designed to reduce the burden of creating and storing data in standards-compliant, interoperable HDF5 files and lower the technical and programming skill threshold needed to design such products by providing a user interface that combines the netCDF-4/HDF5 interoperable feature set with applicable metadata conventions. Users can collaborate quickly to devise new HDF5 products while at the same time seamlessly incorporating the latest best practices and conventions in their community by importing existing data products. The tool also incorporates some expert system features through CLIPS, allowing custom approaches in the file design, as well as easy transfer of preferred conventions as they are being developed. The current state of the tool and the plans for future development will be presented. Constructive input from any interested parties is always welcome.
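
    The kind of standards-compliant file HPD helps design can be approximated by hand; the sketch below writes an HDF5 file with h5py using CF-style attribute conventions. The file and dataset names are illustrative, and full netCDF-4 compatibility would additionally require dimension scales, omitted here for brevity.

      # Write a small HDF5 file with CF-style metadata attributes.
      # Requires: pip install h5py numpy. Names and values are invented.
      import h5py
      import numpy as np

      with h5py.File("sst_example.h5", "w") as f:
          f.attrs["Conventions"] = "CF-1.6"
          dset = f.create_dataset(
              "sea_surface_temperature",
              data=np.random.rand(180, 360).astype("f4"))
          dset.attrs["units"] = "K"
          dset.attrs["standard_name"] = "sea_surface_temperature"
          dset.attrs["_FillValue"] = np.float32(-999.0)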

  15. Biomedical semantics in the Semantic Web

    PubMed Central

    2011-01-01

    The Semantic Web offers an ideal platform for representing and linking biomedical information, which is a prerequisite for the development and application of analytical tools to address problems in data-intensive areas such as systems biology and translational medicine. As for any new paradigm, the adoption of the Semantic Web offers opportunities and poses questions and challenges to the life sciences scientific community: which technologies in the Semantic Web stack will be more beneficial for the life sciences? Is biomedical information too complex to benefit from simple interlinked representations? What are the implications of adopting a new paradigm for knowledge representation? What are the incentives for the adoption of the Semantic Web, and who are the facilitators? Is there going to be a Semantic Web revolution in the life sciences? We report here a few reflections on these questions, following discussions at the SWAT4LS (Semantic Web Applications and Tools for Life Sciences) workshop series, of which this Journal of Biomedical Semantics special issue presents selected papers from the 2009 edition, held in Amsterdam on November 20th. PMID:21388570

  16. Biomedical semantics in the Semantic Web.

    PubMed

    Splendiani, Andrea; Burger, Albert; Paschke, Adrian; Romano, Paolo; Marshall, M Scott

    2011-03-07

    The Semantic Web offers an ideal platform for representing and linking biomedical information, which is a prerequisite for the development and application of analytical tools to address problems in data-intensive areas such as systems biology and translational medicine. As for any new paradigm, the adoption of the Semantic Web offers opportunities and poses questions and challenges to the life sciences scientific community: which technologies in the Semantic Web stack will be more beneficial for the life sciences? Is biomedical information too complex to benefit from simple interlinked representations? What are the implications of adopting a new paradigm for knowledge representation? What are the incentives for the adoption of the Semantic Web, and who are the facilitators? Is there going to be a Semantic Web revolution in the life sciences? We report here a few reflections on these questions, following discussions at the SWAT4LS (Semantic Web Applications and Tools for Life Sciences) workshop series, of which this Journal of Biomedical Semantics special issue presents selected papers from the 2009 edition, held in Amsterdam on November 20th.

  17. The geographical ontology, LDAP, and the space information semantic grid

    NASA Astrophysics Data System (ADS)

    Cui, Wei; Li, Deren

    2005-10-01

    The research purpose is to discuss the development trend and theory of semantic integration and interoperability of Geographic Information Systems in the network age, and to point out that geographic ontology is the natural outcome of the development of semantics-based integration and interoperability of Geographic Information Systems. After analyzing the effects of various new technologies, the paper proposes a new scheme for a family of ontology classes based on the GIS knowledge built here: the basic ontology, the domain ontology and the application ontology, which are very useful for sharing and transferring semantic information between complicated distributed systems and for object abstraction. The main contributions of the paper are as follows: 1) for the first time, using ontology and LDAP (Lightweight Directory Access Protocol) to create and optimize the architecture of the Spatial Information Grid, accelerating the fusion of Geographic Information Systems with information systems from other domains; 2) for the first time, introducing a hybrid method to build geographic ontology, which combines the strengths of independent domain experts and data mining, improves the efficiency of the domain-expert method, and builds the ontology semi-automatically; 3) for the first time, implementing the many-to-many relationships of an integrated ontology system through LDAP referrals, and creating an ontology-based virtual organization that can provide transparent services to users.

  18. Interoperability of satellite-based augmentation systems for aircraft navigation

    NASA Astrophysics Data System (ADS)

    Dai, Donghai

    The Federal Aviation Administration (FAA) is pioneering a transformation of the national airspace system from its present ground based navigation and landing systems to a satellite based system using the Global Positioning System (GPS). To meet the critical safety-of-life aviation positioning requirements, a Satellite-Based Augmentation System (SBAS), the Wide Area Augmentation System (WAAS), is being implemented to support navigation for all phases of flight, including Category I precision approach. The system is designed to be used as a primary means of navigation, capable of meeting the Required Navigation Performance (RNP), and therefore must satisfy the accuracy, integrity, continuity and availability requirements. In recent years there has been international acceptance of Global Navigation Satellite Systems (GNSS), spurring widespread growth in the independent development of SBASs. Besides the FAA's WAAS, the European Geostationary Navigation Overlay Service System (EGNOS) and the Japan Civil Aviation Bureau's MTSAT-Satellite Augmentation System (MSAS) are also being actively developed. Although all of these SBASs can operate as stand-alone, regional systems, there is increasing interest in linking these SBASs together to reduce costs while improving service coverage. This research investigated the coverage and availability improvements due to cooperative efforts among regional SBAS networks. The primary goal was to identify the optimal interoperation strategies in terms of performance, complexity and practicality. The core algorithms associated with the most promising concepts were developed and demonstrated. Experimental verification of the most promising concepts was conducted using data collected from a joint international test between the National Satellite Test Bed (NSTB) and the EGNOS System Test Bed (ESTB). This research clearly shows that a simple switch between SBASs made by the airborne equipment is the most effective choice for achieving the

  19. Trust Model to Enhance Security and Interoperability of Cloud Environment

    NASA Astrophysics Data System (ADS)

    Li, Wenjuan; Ping, Lingdi

    Trust is one of the most important means to improve security and enable interoperability of current heterogeneous, independent cloud platforms. This paper first analyses several trust models used in large distributed environments and then introduces a novel cloud trust model to solve security issues in cross-cloud environments, in which cloud customers can choose different providers' services and resources in heterogeneous domains can cooperate. The model is domain-based: it groups one cloud provider's resource nodes into a domain and assigns a trust agent to it. It distinguishes two different roles, cloud customer and cloud server, and designs different strategies for them. In our model, trust recommendation is treated as one type of cloud service, just like computation or storage. The model achieves both identity authentication and behavior authentication. The results of emulation experiments show that the proposed model can efficiently and safely construct trust relationships in cross-cloud environments.
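
    The following toy computation (not the paper's actual algorithm) illustrates the domain-based idea: a domain's trust agent blends its own direct experience with recommendations obtained as a service from other domains. The weighting scheme and values are invented.

      # Toy illustration of blending direct trust with recommended trust.
      # The alpha weighting and all values are invented for illustration.
      def domain_trust(direct: float, recommendations: list[float],
                       alpha: float = 0.7) -> float:
          """Blend direct trust with the mean of recommended trust values."""
          if recommendations:
              rec = sum(recommendations) / len(recommendations)
          else:
              rec = direct
          return alpha * direct + (1 - alpha) * rec

      # A customer's agent evaluates a provider domain before cross-cloud use.
      print(domain_trust(direct=0.9, recommendations=[0.8, 0.6, 0.95]))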

  20. An implementation of geospatial semantic catalogue service

    NASA Astrophysics Data System (ADS)

    Chen, Xu; Zhu, Xinyan; Du, Daosheng; Liu, Tingting

    2008-12-01

    Along with the development of earth observation technology, large amounts of geospatial information have become accessible, and many geospatial data sets and services are shared on the Internet. However, they vary in format and are stored at various organizations, leading to problems of data discovery, data interoperability and usability. The Open Geospatial Consortium (OGC) has developed a standard service, the catalogue, in order to overcome this problem. The goal of a geospatial catalogue is to support a wide range of users in discovering relevant geographic data and services from heterogeneous and distributed repositories. But in most geospatial catalogue services the search functionality is limited to direct matching of keywords against metadata, so OGC catalogues may not return useful results when the keywords used do not match the meta-information stored in the catalogues. In this paper, we propose a geospatial semantic catalogue service that aims at overcoming this limitation.

  1. IHE cross-enterprise document sharing for imaging: interoperability testing software

    PubMed Central

    2010-01-01

    Background With the deployment of Electronic Health Records (EHRs), interoperability testing in healthcare is becoming crucial. The EHR enables access to prior diagnostic information in order to assist in health decisions. It is a virtual system that results from the cooperation of several heterogeneous distributed systems. Interoperability between peers is therefore essential. Achieving interoperability requires various types of testing. Implementations need to be tested using software that simulates communication partners, and that provides test data and test plans. Results In this paper we describe software that is used to test systems involved in sharing medical images within the EHR. Our software is used as part of the Integrating the Healthcare Enterprise (IHE) testing process to test the Cross-Enterprise Document Sharing for Imaging (XDS-I) integration profile. We describe its architecture and functionalities; we also expose the challenges encountered and discuss the design solutions elected. Conclusions The EHR is being deployed in several countries. The EHR infrastructure will be continuously evolving to embrace advances in the information technology domain. Our software is built on a web framework to allow for easy evolution with web technology. The testing software is publicly available; it can be used by system implementers to test their implementations. It can also be used by site integrators to verify and test the interoperability of systems, by developers to understand specification ambiguities, or to resolve implementation difficulties. PMID:20858241

  2. Developing enterprise collaboration: a methodology to implement and improve interoperability

    NASA Astrophysics Data System (ADS)

    Daclin, Nicolas; Chen, David; Vallespir, Bruno

    2016-06-01

    The aim of this article is to present a methodology for guiding enterprises to implement and improve interoperability. This methodology is based on three components: a framework of interoperability which structures specific solutions of interoperability and is composed of abstraction levels, views and approaches dimensions; a method to measure interoperability including interoperability before (maturity) and during (operational performances) a partnership; and a structured approach defining the steps of the methodology, from the expression of an enterprise's needs to implementation of solutions. The relationship which consistently relates these components forms the methodology and enables developing interoperability in a step-by-step manner. Each component of the methodology and the way it operates is presented. The overall approach is illustrated in a case study example on the basis of a process between a given company and its dealers. Conclusions and future perspectives are given at the end of the article.

  3. Common Patterns with End-to-end Interoperability for Data Access

    NASA Astrophysics Data System (ADS)

    Gallagher, J.; Potter, N.; Jones, M. B.

    2010-12-01

    file transfers. These options affect seamlessness in that they represent tradeoffs in new development (required for the first option) with cumbersome extra user actions (required by the last option). While the middle option, adding new functionality to an existing library (netCDF), is very appealing because practice has shown that it can be very effective over a wide range of clients, it's very hard to build these libraries because correctly writing a new implementation of an existing API that preserves the original's exact semantics can be a daunting task. In the example discussed here, we developed a new module for Kepler using OPeNDAP's Java API. This provided a way to leverage internal optimizations for data organization in Kepler and we felt that outweighed the additional cost of new development and the need for users to learn how to use a new Kepler module. While common storage formats and open standards play an important role in data access, our work with the Kepler workflow system reinforces the experience that matching the data models of the data server (source) and user client (sink) and choosing the most appropriate integration strategy are critical to achieving interoperability.
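
    For comparison with the Java API used here, the sketch below shows the same server-side subsetting idea through pydap, a Python OPeNDAP client; the URL points at a public OPeNDAP test dataset, and only the requested slab crosses the network.

      # Access a remote dataset via OPeNDAP with pydap (pip install pydap).
      from pydap.client import open_url

      ds = open_url("http://test.opendap.org/dap/data/nc/coads_climatology.nc")
      sst = ds["SST"]               # remote variable; no data moved yet
      print(sst.shape)              # dimensions known from the metadata alone

      subset = sst[0, 0:10, 0:10]   # constraint expression; only this slab is fetched
      print(subset.array.shape)     # the subset's data are now available locally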

  4. The Semantic eScience Framework

    NASA Astrophysics Data System (ADS)

    Fox, P. A.; McGuinness, D. L.

    2009-12-01

    The goal of this effort is to design and implement a configurable and extensible semantic eScience framework (SESF). Configuration requires research into accommodating different levels of semantic expressivity and user requirements from use cases. Extensibility is being achieved in a modular approach to the semantic encodings (i.e. ontologies) performed in community settings, i.e. an ontology framework into which specific applications all the way up to communities can extend the semantics for their needs. We report on how we are accommodating the rapid advances in semantic technologies and tools and the sustainable software path for the future (certain) technical advances. In addition to a generalization of the current data science interface, we will present plans for an upper-level interface suitable for use by clearinghouses, and/or educational portals, digital libraries, and other disciplines. SESF builds upon previous work in the Virtual Solar-Terrestrial Observatory. The VSTO utilizes leading edge knowledge representation, query and reasoning techniques to support knowledge-enhanced search, data access, integration, and manipulation. It encodes term meanings and their inter-relationships in ontologies and uses these ontologies and associated inference engines to semantically enable the data services. The Semantically-Enabled Science Data Integration (SESDI) project implemented data integration capabilities among three sub-disciplines (solar radiation, volcanic outgassing and atmospheric structure) using extensions to existing modular ontologies, and used the VSTO data framework while adding smart faceted search and semantic data registration tools. The Semantic Provenance Capture in Data Ingest Systems (SPCDIS) project has added explanation provenance capabilities to an observational data ingest pipeline for images of the Sun, providing a set of tools to answer diverse end user questions such as "Why does this image look bad?".
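
    The sketch below (illustrative, not the VSTO codebase) shows the basic mechanics of ontology-backed search: load an ontology with rdflib and answer a SPARQL query over its classes. The ontology file and class IRI are placeholders.

      # Load a domain ontology and run a SPARQL query over it.
      # Requires: pip install rdflib. File and IRIs are hypothetical.
      from rdflib import Graph

      g = Graph()
      g.parse("vsto_example.owl", format="xml")  # hypothetical local ontology

      query = """
      PREFIX rdfs: <http://www.w3.org/2000/01/rdf-schema#>
      SELECT ?instrument ?label WHERE {
          ?instrument a <http://example.org/vsto#Photometer> ;
                      rdfs:label ?label .
      }
      """
      for row in g.query(query):
          print(row.instrument, row.label)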

  5. The Semantic eScience Framework

    NASA Astrophysics Data System (ADS)

    McGuinness, Deborah; Fox, Peter; Hendler, James

    2010-05-01

    The goal of this effort is to design and implement a configurable and extensible semantic eScience framework (SESF). Configuration requires research into accommodating different levels of semantic expressivity and user requirements from use cases. Extensibility is being achieved in a modular approach to the semantic encodings (i.e. ontologies) performed in community settings, i.e. an ontology framework into which specific applications all the way up to communities can extend the semantics for their needs. We report on how we are accommodating the rapid advances in semantic technologies and tools and the sustainable software path for the future (certain) technical advances. In addition to a generalization of the current data science interface, we will present plans for an upper-level interface suitable for use by clearinghouses, and/or educational portals, digital libraries, and other disciplines. SESF builds upon previous work in the Virtual Solar-Terrestrial Observatory. The VSTO utilizes leading edge knowledge representation, query and reasoning techniques to support knowledge-enhanced search, data access, integration, and manipulation. It encodes term meanings and their inter-relationships in ontologies and uses these ontologies and associated inference engines to semantically enable the data services. The Semantically-Enabled Science Data Integration (SESDI) project implemented data integration capabilities among three sub-disciplines (solar radiation, volcanic outgassing and atmospheric structure) using extensions to existing modular ontologies, and used the VSTO data framework while adding smart faceted search and semantic data registration tools. The Semantic Provenance Capture in Data Ingest Systems (SPCDIS) project has added explanation provenance capabilities to an observational data ingest pipeline for images of the Sun, providing a set of tools to answer diverse end user questions such as "Why does this image look bad?". http://tw.rpi.edu/portal/SESF

  6. Telemedicine system interoperability architecture: concept description and architecture overview.

    SciTech Connect

    Craft, Richard Layne, II

    2004-05-01

    In order for telemedicine to realize the vision of anywhere, anytime access to care, it must address the question of how to create a fully interoperable infrastructure. This paper describes the reasons for pursuing interoperability, outlines operational requirements that any interoperability approach needs to consider, proposes an abstract architecture for meeting these needs, identifies candidate technologies that might be used for rendering this architecture, and suggests a path forward that the telemedicine community might follow.

  7. Semantic Networks and Social Networks

    ERIC Educational Resources Information Center

    Downes, Stephen

    2005-01-01

    Purpose: To illustrate the need for social network metadata within semantic metadata. Design/methodology/approach: Surveys properties of social networks and the semantic web, suggests that social network analysis applies to semantic content, argues that semantic content is more searchable if social network metadata is merged with semantic web…

  8. Interoperability challenges in river discharge modelling

    NASA Astrophysics Data System (ADS)

    Santoro, Mattia; Schlummer, Manuela; Andres, Volker; Jirka, Simon; Looser, Ulrich; Mladek, Richard; Pappenberger, Florian; Strauch, Adrian; Utech, Michael; Zsoter, Ervin

    2014-05-01

    River discharge is a critical water cycle variable, as it integrates all the processes (e.g. runoff and evapotranspiration) occurring within a river basin and provides a hydrological output variable that can be readily measured. Its prediction is of invaluable help for many water-related areas such as water resources assessment and management, as well as flood protection and disaster mitigation. Observations of river discharge are very important for the calibration and validation of hydrological or coupled land, atmosphere and ocean models. This requires the use of data from different scientific domains (Water, Weather, etc.). Typically, such data are provided using different technological solutions and formats. This complicates the integration of new hydrological data sources into application systems, so a considerable effort is often spent on data access issues instead of the actual scientific question. In the context of the FP7-funded project GEOWOW (GEOSS Interoperability for Weather, Ocean and Water), the "River Discharge" use scenario was developed in order to combine river discharge observation data from the Global Runoff Data Center (GRDC) database with model outputs produced by the European Centre for Medium-Range Weather Forecasts (ECMWF), which predict river discharge based on weather forecast information. In this presentation we describe the interoperability solutions adopted in order to address the technological challenges of the "River Discharge" use scenario: 1) development of a Hydrology Profile for the OGC SOS 2.0 standard; 2) enhancement of the GEO DAB (Discovery and Access Broker) to support the use scenario, namely 2.1) developing new interoperability arrangements for GRDC and ECMWF capacities and 2.2) selecting multiple time series for comparison. The development of the above functionalities and tools aims to respond to the need of Water and Weather scientists to assess river discharge forecasting models.
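
    The standards-based access this scenario relies on can be sketched as follows: an OGC SOS 2.0 GetObservation request in KVP encoding, issued over HTTP. The endpoint and identifiers are placeholders rather than real GRDC service values.

      # Issue an SOS 2.0 GetObservation request (KVP binding) over HTTP.
      # ENDPOINT, offering and property identifiers are illustrative.
      import requests

      ENDPOINT = "https://example.org/sos"   # hypothetical SOS 2.0 service

      params = {
          "service": "SOS",
          "version": "2.0.0",
          "request": "GetObservation",
          "offering": "GRDC_station_6435060",   # illustrative offering id
          "observedProperty": "RiverDischarge",
          "temporalFilter": "om:phenomenonTime,2013-01-01/2013-12-31",
      }
      resp = requests.get(ENDPOINT, params=params, timeout=60)
      resp.raise_for_status()
      print(resp.text[:500])   # O&M XML carrying the discharge time series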

  9. Chandrayaan-1 Data Interoperability using PDAP

    NASA Astrophysics Data System (ADS)

    Thakkar, Navita; Crichton, Daniel; Heather, David; Gopala Krishna, Barla; Srinivasan, T. P.; Prashar, Ajay

    Indian Space Science Data Center (ISSDC) at Bangalore is the custodian of all the data sets of the current and future science missions of ISRO. Chandrayaan-1 is the first among the planetary missions launched by ISRO. The data collected from all the instruments during the lifetime of Chandrayaan-1 are peer-reviewed and archived as a Long Term Archive (LTA) using the Planetary Data System standards (PDS 3) at the ISSDC. In order to increase the use of the archived data, they need to be made accessible to the scientific community and academia in a seamless manner across the globe. The IPDA (International Planetary Data Alliance) aims, among its objectives, to enable the interoperability and interchange of planetary scientific data within the planetary community. It has recommended PDAP (Planetary Data Access Protocol) v1.0 for implementation as an interoperability protocol for accessing planetary data archives. PDAP is a simple protocol for retrieving planetary data from repositories through a uniform interface. PDAP compliance requires an access web service to be maintained with the characteristics of the Metadata Query web method and the Data Retrieval web method. The PDAP interface will provide the metadata services for Chandrayaan-1 datasets and return a list of candidate hits formatted as a VOTable. For each candidate hit, an access reference URL is used to retrieve the real data. This will be integrated with the IPDA Registry and Search Services. This paper presents the prototype of interoperable systems for Chandrayaan-1 planetary datasets using PDAP.
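
    A client-side sketch of the protocol flow is shown below: issue a PDAP-style metadata query over HTTP and parse the VOTable response with astropy. The endpoint is hypothetical and the parameter names are illustrative; they should be checked against the PDAP v1.0 specification.

      # PDAP-style metadata query parsed as a VOTable.
      # Requires: pip install requests astropy. Endpoint/params are illustrative.
      import io
      import requests
      from astropy.io.votable import parse_single_table

      ENDPOINT = "https://example.isro.gov.in/pdap/metadata"   # hypothetical

      params = {"RETURN_TYPE": "VOTABLE", "TARGET_NAME": "MOON",
                "MISSION_NAME": "CHANDRAYAAN-1"}
      resp = requests.get(ENDPOINT, params=params, timeout=60)
      table = parse_single_table(io.BytesIO(resp.content)).to_table()

      # Each row is a candidate hit; an access-reference column holds the
      # URL the Data Retrieval web method uses to fetch the actual product.
      print(table.colnames)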

  10. Space network interoperability panel (SNIP) study

    NASA Technical Reports Server (NTRS)

    Fahnestock, Dale; Yamada, Shigeo; Hara, Hideo; Lenhart, Klaus; Ryan, Thomas

    1992-01-01

    The history and status of the SNIP study conducted by NASA, ESA, and NASDA are reviewed. Particular attention is given to data relay systems development plans; agency load situations; cross support; the top managers' agreement on implementing S-band interoperability and accelerating Ka-band high-data-rate exploration; testing of actual systems; NASA's interim architecture for an S-band era system to make NASA spacecraft and TDRSS/TDRS-II compatible with ESA and NASDA systems; Tropical Rainfall Measuring Mission support; S-band cross support; and Ka-band status.

  11. OTF CCSDS SM and C Interoperability Prototype

    NASA Technical Reports Server (NTRS)

    Reynolds, Walter F.; Lucord, Steven A.; Stevens, John E.

    2008-01-01

    A presentation is provided to demonstrate the interoperability between two space flight Mission Operation Centers (MOCs) and to emulate telemetry, action, and alert flows between the two centers. One framework uses a COTS C3I system that uses CORBA to interface to the local OTF data network. The second framework relies on current Houston MCC frameworks and ad hoc clients. Messaging relies on SM and C MAL, Core and Common Service formats, while the transport layer uses AMS. A centralized SM and C Registry uses HTTP/XML for transport/encoding. The project's status and progress are reviewed.

  12. Interoperable PKI Data Distribution in Computational Grids

    SciTech Connect

    Pala, Massimiliano; Cholia, Shreyas; Rea, Scott A.; Smith, Sean W.

    2008-07-25

    One of the most successful working examples of virtual organizations, computational grids need authentication mechanisms that inter-operate across domain boundaries. Public Key Infrastructures (PKIs) provide sufficient flexibility to allow resource managers to securely grant access to their systems in such distributed environments. However, as PKIs grow and services are added to enhance both security and usability, users and applications must struggle to discover available resources, particularly when the Certification Authority (CA) is alien to the relying party. This article presents how to overcome these limitations of the current grid authentication model by integrating the PKI Resource Query Protocol (PRQP) into the Grid Security Infrastructure (GSI).

  13. Setting core standards: privacy, identity & interoperability.

    PubMed

    Manning, B; Benton, S

    2010-01-01

    This position paper focuses on strategic developments and underlying concepts emerging out of the standards and associated domains. It addresses the issue of personal privacy in the wider context of interoperability across an ever-growing range of e-health and social care support systems and processes. These will increasingly be driven by major growth in the elderly segment of national populations where unambiguous identification of both patients and care staff both in hospitals and the community will become significant issues. This is particularly so where remote patient monitoring and access control to personal data is concerned, and is further complicated where racial, cultural and linguistic barriers are prevalent. PMID:20543336

  14. Interoperability Standards for Medical Simulation Systems

    NASA Technical Reports Server (NTRS)

    Tolk, Andreas; Diallo, Saikou Y.; Padilla, Jose J.

    2012-01-01

    The Modeling and Simulation Community successfully developed and applied interoperability standards like the Distributed Interactive Simulation (DIS) protocol (IEEE 1278) and the High Level Architecture (HLA) (IEEE 1516). These standards were applied for world-wide distributed simulation events for several years. However, this paper shows that some of the assumptions and constraints underlying the philosophy of these current standards are not valid for Medical Simulation Systems. This paper describes the standards, the philosophy and the limits for medical applications and recommends necessary extensions of the standards to support medical simulation.

  15. A taxonomy of geospatial services for global service discovery and interoperability

    NASA Astrophysics Data System (ADS)

    Bai, Yuqi; Di, Liping; Wei, Yaxing

    2009-04-01

    Geospatial service taxonomies represent the knowledge about the characteristics of geospatial services from the enterprise, computational, information, engineering, infrastructure, or technology viewpoints. This paper presents a lightweight taxonomy of geospatial services with the aim of promoting the global sharing of and interoperability among geospatial service instances. This taxonomy focuses on the knowledge connected with service interoperability. As a hierarchical taxonomy, it consists of six layers: service category, service type, version, profile, binding and uniform resource name (URN), from the root down to the leaves. Each layer is composed of classification nodes, with each node identifying one classification concept. Each concept, with a concrete semantic meaning, can be used to classify service instances. The application of this classification scheme to the Global Earth Observation System of Systems (GEOSS) Component and Service registry is also introduced. The results of this study may lead to the further development of service taxonomy to thoroughly capture the knowledge about geospatial services. The lessons learned may be useful to others representing and manipulating geoscientific knowledge.
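
    The six-layer hierarchy can be pictured as a classification path from root to leaf; the sketch below renders one such path as a Python dataclass, with example values rather than actual registry content.

      # One classification path through the six-layer taxonomy.
      # Field values are examples, not the GEOSS registry's content.
      from dataclasses import dataclass

      @dataclass(frozen=True)
      class ServiceClassification:
          category: str   # layer 1: service category
          type: str       # layer 2: service type
          version: str    # layer 3: version
          profile: str    # layer 4: profile
          binding: str    # layer 5: binding
          urn: str        # layer 6: uniform resource name (leaf)

      wms_node = ServiceClassification(
          category="Data Access Services",
          type="Web Map Service",
          version="1.3.0",
          profile="EO Profile",
          binding="HTTP GET",
          urn="urn:ogc:serviceType:WebMapService:1.3.0:HTTP",
      )
      print(wms_node)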

  16. Semantically linking in silico cancer models.

    PubMed

    Johnson, David; Connor, Anthony J; McKeever, Steve; Wang, Zhihui; Deisboeck, Thomas S; Quaiser, Tom; Shochat, Eliezer

    2014-01-01

    Multiscale models are commonplace in cancer modeling, where individual models acting on different biological scales are combined within a single, cohesive modeling framework. However, model composition gives rise to challenges in understanding interfaces and interactions between them. Based on specific domain expertise, typically these computational models are developed by separate research groups using different methodologies, programming languages, and parameters. This paper introduces a graph-based model for semantically linking computational cancer models via domain graphs that can help us better understand and explore combinations of models spanning multiple biological scales. We take the data model encoded by TumorML, an XML-based markup language for storing cancer models in online repositories, and transpose its model description elements into a graph-based representation. By taking such an approach, we can link domain models, such as controlled vocabularies, taxonomic schemes, and ontologies, with cancer model descriptions to better understand and explore relationships between models. The union of these graphs creates a connected property graph that links cancer models by categorizations, by computational compatibility, and by semantic interoperability, yielding a framework in which opportunities for exploration and discovery of combinations of models become possible. PMID:25520553
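
    The sketch below builds a toy property graph in this spirit with networkx: model descriptions are linked to shared vocabulary terms and to each other by compatibility relations, enabling simple discovery queries. All model names and terms are invented for illustration.

      # Toy property graph linking cancer model descriptions to domain terms.
      # Requires: pip install networkx. All names are invented.
      import networkx as nx

      g = nx.MultiDiGraph()

      # Nodes carry properties of the kind a TumorML description provides.
      g.add_node("modelA", kind="model", scale="cellular", language="C++")
      g.add_node("modelB", kind="model", scale="tissue", language="Python")
      g.add_node("angiogenesis", kind="term", vocabulary="example-ontology")

      # Edges categorize models and record computational compatibility.
      g.add_edge("modelA", "angiogenesis", rel="annotatedWith")
      g.add_edge("modelB", "angiogenesis", rel="annotatedWith")
      g.add_edge("modelA", "modelB", rel="outputFeedsInput")

      # Discovery: find model pairs that can be chained into a composition.
      chains = [(u, v) for u, v, d in g.edges(data=True)
                if d["rel"] == "outputFeedsInput"]
      print(chains)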

  17. Semantically Linking In Silico Cancer Models

    PubMed Central

    Johnson, David; Connor, Anthony J; McKeever, Steve; Wang, Zhihui; Deisboeck, Thomas S; Quaiser, Tom; Shochat, Eliezer

    2014-01-01

    Multiscale models are commonplace in cancer modeling, where individual models acting on different biological scales are combined within a single, cohesive modeling framework. However, model composition gives rise to challenges in understanding interfaces and interactions between them. Based on specific domain expertise, typically these computational models are developed by separate research groups using different methodologies, programming languages, and parameters. This paper introduces a graph-based model for semantically linking computational cancer models via domain graphs that can help us better understand and explore combinations of models spanning multiple biological scales. We take the data model encoded by TumorML, an XML-based markup language for storing cancer models in online repositories, and transpose its model description elements into a graph-based representation. By taking such an approach, we can link domain models, such as controlled vocabularies, taxonomic schemes, and ontologies, with cancer model descriptions to better understand and explore relationships between models. The union of these graphs creates a connected property graph that links cancer models by categorizations, by computational compatibility, and by semantic interoperability, yielding a framework in which opportunities for exploration and discovery of combinations of models become possible. PMID:25520553

  18. Toward an E-Government Semantic Platform

    NASA Astrophysics Data System (ADS)

    Sbodio, Marco Luca; Moulin, Claude; Benamou, Norbert; Barthès, Jean-Paul

    This chapter describes the major aspects of an e-government platform in which semantics underpins more traditional technologies in order to enable new capabilities and to overcome technical and cultural challenges. The design and development of this e-government Semantic Platform has been conducted with the financial support of the European Commission through the Terregov research project: "Impact of e-government on Territorial Government Services" (Terregov 2008). The goal of this platform is to let local governments and government agencies offer online access to their services in an interoperable way, and to allow them to participate in orchestrated processes involving services provided by multiple agencies. Implementing a business process through an electronic procedure is indeed a core goal in any networked organization. However, the field of e-government brings specific constraints to the operations allowed in procedures, especially concerning the flow of private citizens' data: for legal reasons, in most countries such data are allowed to circulate only from agency to agency directly. In order to promote transparency and responsibility in e-government while respecting these constraints on data flows, Terregov supports the creation of centrally controlled orchestrated processes: the orchestration is centrally managed, while data flow directly from agency to agency.

  19. An Interoperable GridWorkflow Management System

    NASA Astrophysics Data System (ADS)

    Mirto, Maria; Passante, Marco; Epicoco, Italo; Aloisio, Giovanni

    A WorkFlow Management System (WFMS) is a fundamental component enabling the integration of data, applications and a wide set of project resources. Although a number of scientific WFMSs support this task, many analysis pipelines require large-scale Grid computing infrastructures to cope with their high compute and storage requirements. Such scientific workflows complicate the management of resources, especially in cases where they are offered by several resource providers managed by different Grid middleware, since resource access must be synchronised in advance to allow reliable workflow execution. Different types of Grid middleware such as gLite, Unicore and Globus are used around the world and may cause interoperability issues if applications involve two or more of them. In this paper we describe the ProGenGrid Workflow Management System, whose main goal is to provide interoperability among these different Grid middleware systems when executing workflows. It allows the composition of batch, parameter-sweep and MPI-based jobs. The ProGenGrid engine implements the logic to execute such jobs by using an OGF-compliant standard language, JSDL, which has been extended for this purpose. Currently, we are testing our system on some bioinformatics case studies in the International Laboratory of Bioinformatics (LIBI) Project (www.libi.it).
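
    The kind of OGF JSDL document such an engine consumes can be sketched as below: a minimal POSIX job description assembled with Python's standard library. The job name and executable are invented; the namespaces are those of the JSDL 1.0 specification.

      # Assemble a minimal JSDL job description document.
      # Job content is invented; namespaces follow JSDL 1.0.
      import xml.etree.ElementTree as ET

      JSDL = "http://schemas.ggf.org/jsdl/2005/11/jsdl"
      POSIX = "http://schemas.ggf.org/jsdl/2005/11/jsdl-posix"
      ET.register_namespace("jsdl", JSDL)
      ET.register_namespace("jsdl-posix", POSIX)

      job = ET.Element(f"{{{JSDL}}}JobDefinition")
      desc = ET.SubElement(job, f"{{{JSDL}}}JobDescription")
      ident = ET.SubElement(desc, f"{{{JSDL}}}JobIdentification")
      ET.SubElement(ident, f"{{{JSDL}}}JobName").text = "blast-sweep-001"
      app = ET.SubElement(desc, f"{{{JSDL}}}Application")
      posix = ET.SubElement(app, f"{{{POSIX}}}POSIXApplication")
      ET.SubElement(posix, f"{{{POSIX}}}Executable").text = "/usr/local/bin/blastp"
      ET.SubElement(posix, f"{{{POSIX}}}Argument").text = "-query input.fasta"

      print(ET.tostring(job, encoding="unicode"))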

  20. Food product tracing technology capabilities and interoperability.

    PubMed

    Bhatt, Tejas; Zhang, Jianrong Janet

    2013-12-01

    Despite the best efforts of food safety and food defense professionals, contaminated food continues to enter the food supply. It is imperative that contaminated food be removed from the supply chain as quickly as possible to protect public health and stabilize markets. To solve this problem, scores of technology companies purport to have the most effective, economical product tracing system. This study sought to compare and contrast the effectiveness of these systems at analyzing product tracing information to identify the contaminated ingredient and likely source, as well as distribution of the product. It also determined if these systems can work together to better secure the food supply (their interoperability). The Institute of Food Technologists (IFT) hypothesized that when technology providers are given a full set of supply-chain data, even for a multi-ingredient product, their systems will generally be able to trace a contaminated product forward and backward through the supply chain. However, when provided with only a portion of supply-chain data, even for a product with a straightforward supply chain, it was expected that interoperability of the systems would be lacking and that there would be difficulty collaborating to identify sources and/or recipients of potentially contaminated product. IFT provided supply-chain data for one complex product to 9 product tracing technology providers, and then compared and contrasted their effectiveness at analyzing product tracing information to identify the contaminated ingredient and likely source, as well as distribution of the product. A vertically integrated foodservice restaurant agreed to work with IFT to secure data from its supply chain for both a multi-ingredient and a simpler product. Potential multi-ingredient products considered included canned tuna, supreme pizza, and beef tacos. IFT ensured that all supply-chain data collected did not include any proprietary information or information that would otherwise

  1. Towards E-Society Policy Interoperability

    NASA Astrophysics Data System (ADS)

    Iannella, Renato

    The move towards the Policy-Oriented Web is destined to provide support for policy expression and management in the core web layers. One of the most promising areas that can drive this new technology adoption is e-Society communities. With so much user-generated content being shared by these social networks, there is the real danger that the implicit sharing rules that communities have developed over time will be lost in translation in the new digital communities. This will lead to a corresponding loss in confidence in e-Society sites. The Policy-Oriented Web attempts to turn the implicit into the explicit with a common framework for policy language interoperability and awareness. This paper reports on the policy driving factors from the Social Networks experiences using real-world use cases and scenarios. In particular, the key functions of policy-awareness - for privacy, rights, and identity - will be the driving force that enables the e-Society to appreciate new interoperable policy regimes.

  2. Semantics-informed cartography: the case of Piemonte Geological Map

    NASA Astrophysics Data System (ADS)

    Piana, Fabrizio; Lombardo, Vincenzo; Mimmo, Dario; Giardino, Marco; Fubelli, Giandomenico

    2016-04-01

    In modern digital geological maps, namely those supported by a large geo-database and devoted to dynamic, interactive representation through WMS-WebGIS services, there is the need to make explicit the geological assumptions used in the design and compilation of the map database, and to define and/or adopt semantic representations and taxonomies in order to achieve a formal and interoperable representation of the geologic knowledge. These approaches are fundamental for the integration and harmonisation of geological information and services across cultural (e.g. different scientific disciplines) and/or physical barriers (e.g. administrative boundaries). Initiatives such as the GeoScience Markup Language (latest version GeoSciML 4.0, 2015, http://www.geosciml.org) and the INSPIRE "Data Specification on Geology" http://inspire.jrc.ec.europa.eu/documents/Data_Specifications/INSPIRE_DataSpecification_GE_v3.0rc3.pdf (an operative simplification of GeoSciML, latest version 3.0 rc3, 2013), as well as the recent terminological shepherding of the Geoscience Terminology Working Group (GTWG), have been promoting the exchange of geologic knowledge. Grounded on these standard vocabularies, schemas and data models, we provide a shared semantic classification of geological data, referring to the study case of the synthetic digital geological map of the Piemonte region (NW Italy), named "GEOPiemonteMap", developed by the CNR Institute of Geosciences and Earth Resources, Torino (CNR IGG TO) and hosted as a dynamic interactive map on the geoportal of the ARPA Piemonte Environmental Agency. The Piemonte Geological Map is grounded on a regional-scale geo-database consisting of some hundreds of GeologicUnits, whose thousands of instances (MappedFeatures, polygon geometry) widely occur in the Piemonte region, each one bounded by GeologicStructures (MappedFeatures, line geometry). GeologicUnits and GeologicStructures have been spatially

  4. 47 CFR 27.75 - Basic interoperability requirement.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... 47 Telecommunication 2 2014-10-01 2014-10-01 false Basic interoperability requirement. 27.75 Section 27.75 Telecommunication FEDERAL COMMUNICATIONS COMMISSION (CONTINUED) COMMON CARRIER SERVICES MISCELLANEOUS WIRELESS COMMUNICATIONS SERVICES Technical Standards § 27.75 Basic interoperability...

  5. Interoperability of Demand Response Resources Demonstration in NY

    SciTech Connect

    Wellington, Andre

    2014-03-31

    The Interoperability of Demand Response Resources Demonstration in NY (Interoperability Project) was awarded to Con Edison in 2009. The objective of the project was to develop and demonstrate methodologies to enhance the ability of customer sited Demand Response resources to integrate more effectively with electric delivery companies and regional transmission organizations.

  6. NASA's Geospatial Interoperability Office(GIO)Program

    NASA Technical Reports Server (NTRS)

    Weir, Patricia

    2004-01-01

    NASA produces vast amounts of information about the Earth from satellites, supercomputer models, and other sources. These data are most useful when made easily accessible to NASA researchers and scientists, to NASA's partner Federal Agencies, and to society as a whole. A NASA goal is to apply its data for knowledge gain, decision support and understanding of Earth and other planetary systems. The NASA Earth Science Enterprise (ESE) Geospatial Interoperability Office (GIO) Program leads the development, promotion and implementation of information technology standards that accelerate and expand the delivery of NASA's Earth system science research through integrated systems solutions. Our overarching goal is to make it easy for decision-makers, scientists and citizens to use NASA's science information. NASA's Federal partners currently participate with NASA and one another in the development and implementation of geospatial standards to ensure the most efficient and effective access to one another's data. Through the GIO, NASA participates with its Federal partners in implementing interoperability standards in support of E-Gov and the associated President's Management Agenda initiatives by collaborating on standards development. Through partnerships with government, private industry, education and communities, the GIO works towards enhancing the ESE Applications Division in the area of National Applications and decision support systems. The GIO provides geospatial standards leadership within NASA, represents NASA on the Federal Geographic Data Committee (FGDC) Coordination Working Group, chairs the FGDC's Geospatial Applications and Interoperability Working Group (GAI), and supports development and implementation efforts such as Earth Science Gateway (ESG), Space Time Tool Kit and Web Map Services (WMS) Global Mosaic. The GIO supports NASA in the collection and dissemination of geospatial interoperability standards needs and progress throughout the agency including

  7. The Semantic Learning Organization

    ERIC Educational Resources Information Center

    Sicilia, Miguel-Angel; Lytras, Miltiadis D.

    2005-01-01

    Purpose: The aim of this paper is to introduce the concept of a "semantic learning organization" (SLO) as an extension of the concept of "learning organization" in the technological domain. Design/methodology/approach: The paper takes existing definitions and conceptualizations of both learning organizations and Semantic Web technology to develop…

  8. Communication: General Semantics Perspectives.

    ERIC Educational Resources Information Center

    Thayer, Lee, Ed.

    This book contains the edited papers from the eleventh International Conference on General Semantics, titled "A Search for Relevance." The conference questioned, as a central theme, the relevance of general semantics in a world of wars and human misery. Reacting to a fundamental Korzybski-ian principle that man's view of reality is distorted by…

  9. Enhancing medical database semantics.

    PubMed Central

    Leão, B. de F.; Pavan, A.

    1995-01-01

    Medical databases deal with dynamic, heterogeneous and fuzzy data. Modeling such a complex domain demands powerful semantic data modeling methodologies. This paper describes GSM-Explorer, a CASE tool that allows for the creation of relational databases using semantic data modeling techniques. GSM-Explorer fully incorporates the Generic Semantic Data Model (GSM), enabling knowledge engineers to model the application domain with the abstraction mechanisms of generalization/specialization, association and aggregation. The tool generates a structure that implements persistent database objects through the automatic generation of customized ANSI SQL scripts that sustain the semantics defined at the higher level. This paper emphasizes the system architecture and the mapping of the semantic model into relational tables. The present status of the project and its further developments are discussed in the Conclusions. PMID:8563288
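
    As an illustration of the kind of semantic-to-relational mapping the abstract describes, the sketch below generates DDL from a toy generalization hierarchy. The model and the one-table-per-entity mapping rule are invented for illustration; they are not GSM-Explorer's actual algorithm.

        # Toy mapping of a generalization hierarchy onto relational tables.
        # Hypothetical model; not GSM-Explorer's internals.
        entities = {
            "patient":   {"parent": None,
                          "attrs": {"id": "INTEGER PRIMARY KEY",
                                    "name": "VARCHAR(80)"}},
            "inpatient": {"parent": "patient",
                          "attrs": {"ward": "VARCHAR(40)"}},
        }

        def ddl(name, spec):
            cols = [f"{col} {typ}" for col, typ in spec["attrs"].items()]
            if spec["parent"]:
                # Specialization shares the parent's key (one table per entity).
                cols.insert(0, "id INTEGER PRIMARY KEY "
                               f"REFERENCES {spec['parent']}(id)")
            return f"CREATE TABLE {name} (\n  " + ",\n  ".join(cols) + "\n);"

        for name, spec in entities.items():
            print(ddl(name, spec))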

  10. Order Theoretical Semantic Recommendation

    SciTech Connect

    Joslyn, Cliff A.; Hogan, Emilie A.; Paulson, Patrick R.; Peterson, Elena S.; Stephan, Eric G.; Thomas, Dennis G.

    2013-07-23

    Mathematical concepts of order and ordering relations play multiple roles in semantic technologies. Discrete totally ordered data characterize both input streams and top-k rank-ordered recommendations and query output, while temporal attributes establish numerical total orders, either over time points or, in the more complex case, over start-end temporal intervals. Also of note are the fully partially ordered data, including both lattices and non-lattices, which actually dominate the semantic structure of ontological systems. Scalar semantic similarities over partially ordered semantic data are traditionally used to return rank-ordered recommendations, but these require complementation with true metrics available over partially ordered sets. In this paper we report on our work in the foundations of partial order measurement in ontologies, with application to top-k semantic recommendation in workflows.
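
    The abstract does not reproduce the paper's measures; as a stand-in, the sketch below builds one standard true metric on a finite poset, the size of the symmetric difference of principal down-sets, and uses it for top-k recommendation. The concept hierarchy and the choice of metric are illustrative assumptions.

        # One standard metric on a finite partial order: distance between two
        # elements as the symmetric difference of their principal down-sets.
        covers = {"animal": set(), "mammal": {"animal"}, "cat": {"mammal"},
                  "dog": {"mammal"}, "reptile": {"animal"}}

        def down_set(x):
            seen, stack = set(), [x]
            while stack:
                node = stack.pop()
                if node not in seen:
                    seen.add(node)
                    stack.extend(covers[node])
            return seen

        def distance(a, b):
            return len(down_set(a) ^ down_set(b))

        def top_k(query, k=2):
            others = [n for n in covers if n != query]
            return sorted(others, key=lambda n: distance(query, n))[:k]

        print(top_k("cat"))   # nearest concepts to "cat" under this metric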

  11. Intelligent Discovery for Learning Objects Using Semantic Web Technologies

    ERIC Educational Resources Information Center

    Hsu, I-Ching

    2012-01-01

    The concept of learning objects has been applied in the e-learning field to promote the accessibility, reusability, and interoperability of learning content. Learning Object Metadata (LOM) was developed to achieve these goals by describing learning objects in order to provide meaningful metadata. Unfortunately, the conventional LOM lacks the…

  12. Development of a Ground Water Data Portal for Interoperable Data Exchange within the U.S. National Ground Water Monitoring Network and Beyond

    NASA Astrophysics Data System (ADS)

    Booth, N. L.; Brodaric, B.; Lucido, J. M.; Kuo, I.; Boisvert, E.; Cunningham, W. L.

    2011-12-01

    using the OGC Sensor Observation Service (SOS) standard. Ground Water Markup Language (GWML) encodes well log, lithology and construction information and is exchanged using the OGC Web Feature Service (WFS) standard. Within the NGWMN Data Portal, data exchange between distributed data provider repositories is achieved through the use of these web services and a central mediation hub, which performs both format (syntactic) and nomenclature (semantic) mediation, conforming heterogeneous inputs into common standards-based outputs. Through these common standards, interoperability between the U.S. NGWMN and Canada's Groundwater Information Network (GIN) is achieved, advancing a ground water virtual observatory across North America.
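
    As a concrete flavor of the service interfaces involved, the sketch below issues an SOS 2.0 KVP GetObservation request with Python's requests library. The endpoint URL, offering, and feature identifier are placeholders, not the NGWMN's actual values.

        # Sketch of an OGC SOS 2.0 KVP GetObservation request.
        # Endpoint and identifiers below are invented placeholders.
        import requests

        params = {
            "service": "SOS",
            "version": "2.0.0",
            "request": "GetObservation",
            "offering": "GW_LEVEL",                       # hypothetical offering
            "featureOfInterest": "well-430406089232901",  # hypothetical well id
            "responseFormat": "text/xml",
        }
        resp = requests.get("https://example.org/ngwmn/sos", params=params,
                            timeout=30)
        resp.raise_for_status()
        print(resp.text[:500])   # a WaterML 2.0 time series would be parsed here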

  13. Semantics, Pragmatics, and the Nature of Semantic Theories

    ERIC Educational Resources Information Center

    Spewak, David Charles, Jr.

    2013-01-01

    The primary concern of this dissertation is determining the distinction between semantics and pragmatics and how context sensitivity should be accommodated within a semantic theory. I approach the question over how to distinguish semantics from pragmatics from a new angle by investigating what the objects of a semantic theory are, namely…

  14. Secure Interoperable Open Smart Grid Demonstration Project

    SciTech Connect

    Magee, Thoman

    2014-12-31

    The Consolidated Edison, Inc., of New York (Con Edison) Secure Interoperable Open Smart Grid Demonstration Project (SGDP), sponsored by the United States (US) Department of Energy (DOE), demonstrated that the reliability, efficiency, and flexibility of the grid can be improved through a combination of enhanced monitoring and control capabilities using systems and resources that interoperate within a secure services framework. The project demonstrated the capability to shift, balance, and reduce load where and when needed in response to system contingencies or emergencies by leveraging controllable field assets. The range of field assets includes curtailable customer loads, distributed generation (DG), battery storage, electric vehicle (EV) charging stations, building management systems (BMS), home area networks (HANs), high-voltage monitoring, and advanced metering infrastructure (AMI). The SGDP enables the seamless integration and control of these field assets through a common, cyber-secure, interoperable control platform, which integrates a number of existing legacy control and data systems, as well as new smart grid (SG) systems and applications. By integrating advanced technologies for monitoring and control, the SGDP helps target and reduce peak load growth, improves the reliability and efficiency of Con Edison’s grid, and increases the ability to accommodate the growing use of distributed resources. Con Edison is dedicated to lowering costs, improving reliability and customer service, and reducing its impact on the environment for its customers. These objectives also align with the policy objectives of New York State as a whole. To help meet these objectives, Con Edison’s long-term vision for the distribution grid relies on the successful integration and control of a growing penetration of distributed resources, including demand response (DR) resources, battery storage units, and DG. For example, Con Edison is expecting significant long-term growth of DG

  15. Advancing translational research with the Semantic Web

    PubMed Central

    Ruttenberg, Alan; Clark, Tim; Bug, William; Samwald, Matthias; Bodenreider, Olivier; Chen, Helen; Doherty, Donald; Forsberg, Kerstin; Gao, Yong; Kashyap, Vipul; Kinoshita, June; Luciano, Joanne; Marshall, M Scott; Ogbuji, Chimezie; Rees, Jonathan; Stephens, Susie; Wong, Gwendolyn T; Wu, Elizabeth; Zaccagnini, Davide; Hongsermeier, Tonya; Neumann, Eric; Herman, Ivan; Cheung, Kei-Hoi

    2007-01-01

    Background A fundamental goal of the U.S. National Institutes of Health (NIH) "Roadmap" is to strengthen Translational Research, defined as the movement of discoveries in basic research to application at the clinical level. A significant barrier to translational research is the lack of uniformly structured data across related biomedical domains. The Semantic Web is an extension of the current Web that enables navigation and meaningful use of digital resources by automatic processes. It is based on common formats that support aggregation and integration of data drawn from diverse sources. A variety of technologies have been built on this foundation that, together, support identifying, representing, and reasoning across a wide range of biomedical data. The Semantic Web Health Care and Life Sciences Interest Group (HCLSIG), set up within the framework of the World Wide Web Consortium, was launched to explore the application of these technologies in a variety of areas. Subgroups focus on making biomedical data available in RDF, working with biomedical ontologies, prototyping clinical decision support systems, working on drug safety and efficacy communication, and supporting disease researchers navigating and annotating the large amount of potentially relevant literature. Results We present a scenario that shows the value of the information environment the Semantic Web can support for aiding neuroscience researchers. We then report on several projects by members of the HCLSIG, in the process illustrating the range of Semantic Web technologies that have applications in areas of biomedicine. Conclusion Semantic Web technologies present both promise and challenges. Current tools and standards are already adequate to implement components of the bench-to-bedside vision. On the other hand, these technologies are young. Gaps in standards and implementations still exist and adoption is limited by typical problems with early technology, such as the need for a critical mass of
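
    The abstract's point that common formats support aggregation of data drawn from diverse sources is easy to make concrete: in RDF, aggregating two sources is just graph union. A minimal rdflib sketch with invented URIs:

        # Aggregating two RDF sources is graph union (rdflib); URIs invented.
        from rdflib import Graph, Namespace, RDF

        EX = Namespace("http://example.org/")
        g1, g2 = Graph(), Graph()
        g1.add((EX.gene42, RDF.type, EX.Gene))
        g1.add((EX.gene42, EX.expressedIn, EX.hippocampus))
        g2.add((EX.gene42, EX.associatedWith, EX.alzheimers))

        merged = g1 + g2   # union of the two graphs
        for s, p, o in merged.triples((EX.gene42, None, None)):
            print(s, p, o)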

  16. NASA and Industry Benefits of ACTS High Speed Network Interoperability Experiments

    NASA Technical Reports Server (NTRS)

    Zernic, M. J.; Beering, D. R.; Brooks, D. E.

    2000-01-01

    This paper provides synopses of the design, implementation, and results of key high data rate communications experiments utilizing the technologies of NASA's Advanced Communications Technology Satellite (ACTS). Specifically, the network protocol and interoperability performance aspects will be highlighted. The objectives of these key experiments will be discussed in their relevant context to NASA missions, as well as to the broader communications industry. Discussion of the experiment implementation will highlight the technical aspects of hybrid network connectivity, a variety of high-speed interoperability architectures, a variety of network node platforms, protocol layers, internet-based applications, and new work focused on distinguishing between link errors and congestion. In addition, this paper describes the impact of leveraging government-industry partnerships to achieve technical progress and forge synergistic relationships. These relationships will be the key to success as NASA seeks to combine commercially available technology with its own internal technology developments to realize more robust and cost-effective communications for space operations.

  17. A health analytics semantic ETL service for obesity surveillance.

    PubMed

    Poulymenopoulou, M; Papakonstantinou, D; Malamateniou, F; Vassilacopoulos, G

    2015-01-01

    The increasingly large amount of data produced in healthcare (e.g. collected through health information systems such as electronic medical records - EMRs, or through novel data sources such as personal health records - PHRs, social media, and web resources) enables the creation of detailed records about people's health, sentiments and activities (e.g. physical activity, diet, sleep quality) that can be used in the public health area, among others. However, despite the transformative potential of big data in public health surveillance, there are several challenges in integrating big data. In this paper, the interoperability challenge is tackled and a semantic Extract Transform Load (ETL) service is proposed that semantically annotates big data to turn it into valuable data for analysis. This service is considered part of a health analytics engine on the cloud that interacts with existing healthcare information exchange networks, like the Integrating the Healthcare Enterprise (IHE), PHRs, sensors, mobile applications, and other web resources to retrieve patient health, behavioral and daily activity data. The semantic ETL service aims at semantically integrating big data for use by analytic mechanisms. An illustrative implementation of the service on big data potentially relevant to human obesity enables the use of appropriate analytic techniques (e.g. machine learning, text mining) that are expected to assist in identifying patterns and contributing factors (e.g. genetic background, social, environmental) for this social phenomenon and, hence, to drive health policy changes and promote healthy behaviors where residents live, work, learn, shop and play.
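
    A minimal sketch of the semantic annotation step such an ETL service might perform, with an invented terminology map standing in for a real vocabulary service:

        # Transform step of a toy semantic ETL pipeline: raw activity records
        # are annotated with ontology URIs via a terminology lookup.
        # Vocabulary and URIs are invented for illustration.
        TERMS = {
            "walking": "http://example.org/onto/PhysicalActivity",
            "soda":    "http://example.org/onto/SugaryDrink",
        }

        def extract():
            yield {"user": "u1", "text": "30 min walking, one soda"}

        def transform(record):
            tags = [uri for word, uri in TERMS.items() if word in record["text"]]
            return {**record, "annotations": tags}

        store = []
        for rec in extract():
            store.append(transform(rec))   # load step, kept trivial here
        print(store)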

  18. Are Meaningful Use Stage 2 certified EHRs ready for interoperability? Findings from the SMART C-CDA Collaborative

    PubMed Central

    D'Amore, John D; Mandel, Joshua C; Kreda, David A; Swain, Ashley; Koromia, George A; Sundareswaran, Sumesh; Alschuler, Liora; Dolin, Robert H; Mandl, Kenneth D; Kohane, Isaac S; Ramoni, Rachel B

    2014-01-01

    Background and objective Upgrades to electronic health record (EHR) systems scheduled to be introduced in the USA in 2014 will advance document interoperability between care providers. Specifically, the second stage of the federal incentive program for EHR adoption, known as Meaningful Use, requires use of the Consolidated Clinical Document Architecture (C-CDA) for document exchange. In an effort to examine and improve C-CDA based exchange, the SMART (Substitutable Medical Applications and Reusable Technology) C-CDA Collaborative brought together a group of certified EHR and other health information technology vendors. Materials and methods We examined the machine-readable content of collected samples for semantic correctness and consistency. This included parsing with the open-source BlueButton.js tool, testing with a validator used in EHR certification, scoring with an automated open-source tool, and manual inspection. We also conducted group and individual review sessions with participating vendors to understand their interpretation of C-CDA specifications and requirements. Results We contacted 107 health information technology organizations and collected 91 C-CDA sample documents from 21 distinct technologies. Manual and automated document inspection led to 615 observations of errors and data expression variation across represented technologies. Based upon our analysis and vendor discussions, we identified 11 specific areas that represent relevant barriers to the interoperability of C-CDA documents. Conclusions We identified errors and permissible heterogeneity in C-CDA documents that will limit semantic interoperability. Our findings also point to several practical opportunities to improve C-CDA document quality and exchange in the coming years. PMID:24970839
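
    As a flavor of the kind of automated inspection the study describes (its actual tooling included BlueButton.js, a certification validator, and an open-source scoring tool), here is one plausible consistency check written with lxml: flag coded C-CDA entries that lack a codeSystem attribute. The file name is hypothetical.

        # One plausible machine-readable check on a C-CDA document:
        # every <code> element carrying a code should also carry a codeSystem.
        from lxml import etree

        NS = {"hl7": "urn:hl7-org:v3"}   # standard CDA namespace

        def missing_code_systems(path):
            tree = etree.parse(path)
            problems = []
            for el in tree.xpath("//hl7:code", namespaces=NS):
                if el.get("code") and not el.get("codeSystem"):
                    problems.append(etree.tostring(el).decode())
            return problems

        for issue in missing_code_systems("sample_ccda.xml"):   # hypothetical file
            print(issue)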

  19. Investigating the capabilities of semantic enrichment of 3D CityEngine data

    NASA Astrophysics Data System (ADS)

    Solou, Dimitra; Dimopoulou, Efi

    2016-08-01

    In recent years, the development of technology and the lifting of several technical limitations have brought the third dimension to the fore. The complexity of urban environments and the strong need for land administration intensify the need for a three-dimensional cadastral system. Despite the progress in the field of geographic information systems and 3D modeling techniques, there is no fully digital 3D cadastre. The existing geographic information systems and the different methods of three-dimensional modeling allow for better management, visualization and dissemination of information. Nevertheless, these opportunities cannot be fully exploited because of deficiencies in standardization and interoperability of these systems. Within this context, CityGML was developed as an international standard of the Open Geospatial Consortium (OGC) for the representation and exchange of 3D city models. CityGML defines geometry and topology for city modeling, also focusing on semantic aspects of 3D city information. The scope of CityGML is to reach common terminology, also addressing the imperative need for interoperability and data integration, taking into account the number of available geographic information systems and modeling techniques. The aim of this paper is to develop an application for managing the semantic information of a model generated by procedural modeling. The model was initially implemented in ESRI's CityEngine software and then imported into the ArcGIS environment. The final goal was the semantic enrichment of the original model and its conversion to CityGML format. Semantic information management and interoperability appeared feasible through the use of the ESRI 3DCities Project tools, since the project's database structure supports adding semantic information to the CityEngine model and automatically converting it to CityGML for advanced analysis and visualization in different application areas.

  20. A Defense of Semantic Minimalism

    ERIC Educational Resources Information Center

    Kim, Su

    2012-01-01

    Semantic Minimalism is a position about the semantic content of declarative sentences, i.e., the content that is determined entirely by syntax. It is defined by the following two points: "Point 1": The semantic content is a complete/truth-conditional proposition. "Point 2": The semantic content is useful to a theory of…

  1. A Semantic Graph Query Language

    SciTech Connect

    Kaplan, I L

    2006-10-16

    Semantic graphs can be used to organize large amounts of information from a number of sources into one unified structure. A semantic query language provides a foundation for extracting information from the semantic graph. The graph query language described here provides a simple, powerful method for querying semantic graphs.
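
    The report's own query language is not excerpted in this record; as a stand-in for the kind of pattern query such a language supports, here is an equivalent path query in SPARQL via rdflib, over a two-triple toy graph:

        # A simple path query over a semantic graph, expressed in SPARQL
        # (rdflib); the data and URIs are invented.
        from rdflib import Graph

        g = Graph()
        g.parse(data="""
            @prefix ex: <http://example.org/> .
            ex:alice ex:knows ex:bob .
            ex:bob   ex:knows ex:carol .
        """, format="turtle")

        q = """
            PREFIX ex: <http://example.org/>
            SELECT ?who WHERE { ex:alice ex:knows ?x . ?x ex:knows ?who . }
        """
        for row in g.query(q):
            print(row.who)   # -> http://example.org/carol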

  2. Semantic Theory: A Linguistic Perspective.

    ERIC Educational Resources Information Center

    Nilsen, Don L. F.; Nilsen, Alleen Pace

    This book attempts to bring linguists and language teachers up to date on the latest developments in semantics. A survey of the role of semantics in linguistics and other academic areas is followed by a historical perspective of semantics in American linguistics. Various semantic models are discussed. Anomaly, ambiguity, and discourse are…

  3. Wrapping and interoperating bioinformatics resources using CORBA.

    PubMed

    Stevens, R; Miller, C

    2000-02-01

    Bioinformaticians seeking to provide services to working biologists are faced with the twin problems of distribution and diversity of resources. Bioinformatics databases are distributed around the world and exist in many kinds of storage forms, platforms and access paradigms. To provide adequate services to biologists, these distributed and diverse resources have to interoperate seamlessly within single applications. The Common Object Request Broker Architecture (CORBA) offers one technical solution to these problems. The key component of CORBA is its use of object orientation as an intermediate form to translate between different representations. This paper concentrates on an explanation of object orientation and how it can be used to overcome the problems of distribution and diversity by describing the interfaces between objects.

  4. Flexible solution for interoperable cloud healthcare systems.

    PubMed

    Vida, Mihaela Marcella; Lupşe, Oana Sorina; Stoicu-Tivadar, Lăcrămioara; Bernad, Elena

    2012-01-01

    It is extremely important for the healthcare domain to have standardized communication, because it will improve the quality of information and, in the end, the resulting benefits will improve the quality of patients' lives. The standards proposed to be used are HL7 CDA and CCD. For better access to the medical data, a solution based on cloud computing (CC) is investigated. CC is a technology that supports flexibility, seamless care, and reduced costs of care. To ensure interoperability between healthcare information systems, a solution creating a Web Custom Control is presented. The control shows the database tables and fields used to configure the two standards. This control will facilitate the work of the medical staff and hospital administrators, because they can configure the local system easily and prepare it for communication with other systems. The resulting information will have a higher quality and will provide knowledge that will support better patient management and diagnosis. PMID:22874196

  5. Khoros software specification format and interoperability

    NASA Technical Reports Server (NTRS)

    Rots, A. H.

    1992-01-01

    Khoros defines formats for User Interface Specification (UIS) and Program Specification (PS) files. From such files, its code generator, Ghostwriter, creates source files and documentation. The great advantage of the system is that the code fragments that make up part of the PS file are purely generic. All Khoros-related code is created by the code generator; this includes all user interface code. As a matter of fact, both specification files are very generic in nature. Thus, one could imagine using it as the basis for other software systems. A case is made that writing a code generator that would create IRAF-compatible code from the Khoros UIS and PS files is fairly trivial. Another aspect of the Khoros system conventions concerns the way execution commands are generated by the user interface and the actual syntax of those commands. The protocols are such that interoperability at the level of executable modules is readily possible.

  6. Managing interoperability and complexity in health systems.

    PubMed

    Bouamrane, M-M; Tao, C; Sarkar, I N

    2015-01-01

    In recent years, we have witnessed substantial progress in the use of clinical informatics systems to support clinicians during episodes of care, manage specialised domain knowledge, perform complex clinical data analysis and improve the management of health organisations' resources. However, the vision of fully integrated health information eco-systems, which provide relevant information and useful knowledge at the point-of-care, remains elusive. This journal Focus Theme reviews some of the enduring challenges of interoperability and complexity in clinical informatics systems. Furthermore, a range of approaches are proposed in order to address, harness and resolve some of the many remaining issues towards a greater integration of health information systems and extraction of useful or new knowledge from heterogeneous electronic data repositories.

  7. SHARP/PRONGHORN Interoperability: Mesh Generation

    SciTech Connect

    Avery Bingham; Javier Ortensi

    2012-09-01

    Progress toward collaboration between the SHARP and MOOSE computational frameworks has been demonstrated through sharing of mesh generation and ensuring mesh compatibility of both tools with MeshKit. MeshKit was used to build a three-dimensional, full-core very high temperature reactor (VHTR) reactor geometry with 120-degree symmetry, which was used to solve a neutron diffusion critical eigenvalue problem in PRONGHORN. PRONGHORN is an application of MOOSE that is capable of solving coupled neutron diffusion, heat conduction, and homogenized flow problems. The results were compared to a solution found on a 120-degree, reflected, three-dimensional VHTR mesh geometry generated by PRONGHORN. The ability to exchange compatible mesh geometries between the two codes is instrumental for future collaboration and interoperability. The results were found to be in good agreement between the two meshes, thus demonstrating the compatibility of the SHARP and MOOSE frameworks. This outcome makes future collaboration possible.

  8. Interoperability in encoded quantum repeater networks

    NASA Astrophysics Data System (ADS)

    Nagayama, Shota; Choi, Byung-Soo; Devitt, Simon; Suzuki, Shigeya; Van Meter, Rodney

    2016-04-01

    The future of quantum repeater networking will require interoperability between various error-correcting codes. A few specific code conversions and even a generalized method are known; however, no detailed analysis of these techniques in the context of quantum networking has been performed. In this paper we analyze a generalized procedure to create Bell pairs encoded heterogeneously between two separate codes used often in error-corrected quantum repeater network designs. We begin with a physical Bell pair and then encode each qubit in a different error-correcting code, using entanglement purification to increase the fidelity. We investigate three separate protocols for preparing the purified encoded Bell pair. We calculate the error probability of those schemes between the Steane [[7,1,3

  9. Towards Automatic Semantic Labelling of 3D City Models

    NASA Astrophysics Data System (ADS)

    Rook, M.; Biljecki, F.; Diakité, A. A.

    2016-10-01

    The lack of semantic information in many 3D city models is a considerable limiting factor in their use, as a lot of applications rely on semantics. Such information is not always available, since it is not collected at all times, it might be lost due to data transformation, or its lack may be caused by non-interoperability in data integration from other sources. This research is a first step in creating an automatic workflow that labels a plain 3D city model, represented by a soup of polygons, with semantic and thematic information, as defined in the CityGML standard. The first step involves the reconstruction of the topology, which is used in a region growing algorithm that clusters upward-facing adjacent triangles. Heuristic rules, embedded in a decision tree, are used to compute a likelihood score for regions that represent either the ground (terrain) or a RoofSurface. Regions with a high likelihood score for one of the two classes are used to create a decision space, which is used in a support vector machine (SVM). Next, topological relations are utilised to select seeds that function as a start in a region growing algorithm, to create regions of triangles of other semantic classes. The topological relationships of the regions are used in the aggregation of the thematic building features. Finally, the level of detail is detected to generate the correct output in CityGML. The results show an accuracy between 85 % and 99 % in the automatic semantic labelling on four different test datasets. The paper concludes by indicating problems and difficulties that imply the next steps in the research.
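
    A simplified stand-in for the first stage of the workflow described above: select upward-facing triangles by their normals and grow regions across shared edges. Thresholds and data are invented.

        # Pick upward-facing triangles (normal close to +Z) and grow regions
        # across shared edges; a simplified stand-in for the paper's workflow.
        import numpy as np

        def unit_normal(tri):
            a, b, c = map(np.asarray, tri)
            n = np.cross(b - a, c - a)
            return n / np.linalg.norm(n)

        def upward(tri, cos_tol=0.9):
            return unit_normal(tri)[2] > cos_tol

        def edge_keys(tri):
            for i, j in ((0, 1), (1, 2), (2, 0)):
                yield tuple(sorted((tuple(tri[i]), tuple(tri[j]))))

        def grow_regions(triangles):
            edges = {}                      # shared-edge adjacency
            for idx, tri in enumerate(triangles):
                for key in edge_keys(tri):
                    edges.setdefault(key, []).append(idx)
            regions, seen = [], set()
            for seed in range(len(triangles)):
                if seed in seen or not upward(triangles[seed]):
                    continue
                region, stack = [], [seed]
                while stack:
                    t = stack.pop()
                    if t in seen or not upward(triangles[t]):
                        continue
                    seen.add(t)
                    region.append(t)
                    for key in edge_keys(triangles[t]):
                        stack.extend(edges[key])
                regions.append(region)
            return regions

        tris = [((0, 0, 0), (1, 0, 0), (0, 1, 0)),   # flat, upward-facing
                ((1, 0, 0), (1, 1, 0), (0, 1, 0))]   # flat, shares an edge
        print(grow_regions(tris))   # -> [[0, 1]]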

  10. A framework for semantic reconciliation of disparate earth observation thematic data

    NASA Astrophysics Data System (ADS)

    Durbha, S. S.; King, R. L.; Shah, V. P.; Younan, N. H.

    2009-04-01

    There is a growing demand for digital databases of topographic and thematic information for a multitude of applications in environmental management, and also in data integration and efficient updating of other spatially oriented data. These thematic data sets are highly heterogeneous in syntax, structure and semantics as they are produced and provided by a variety of agencies having different definitions, standards and applications of the data. In this paper, we focus on the semantic heterogeneity in thematic information sources, as it has been widely recognized that the semantic conflicts are responsible for the most serious data heterogeneity problems hindering the efficient interoperability between heterogeneous information sources. In particular, we focus on the semantic heterogeneities present in the land cover classification schemes corresponding to the global land cover characterization data. We propose a framework (semantics enabled thematic data Integration (SETI)) that describes in depth the methodology involved in the reconciliation of such semantic conflicts by adopting the emerging semantic web technologies. Ontologies were developed for the classification schemes, and a shared-ontology approach for integrating the application-level ontologies is described. We employ description logics (DL)-based reasoning on the terminological knowledge base developed for the land cover characterization, which enables querying and retrieval that go beyond keyword-based searches.

  11. Biodiversity information platforms: From standards to interoperability

    PubMed Central

    Berendsohn, W. G.; Güntsch, A.; Hoffmann, N.; Kohlbecker, A.; Luther, K.; Müller, A.

    2011-01-01

    One of the most serious bottlenecks in the scientific workflows of biodiversity sciences is the need to integrate data from different sources, software applications, and services for analysis, visualisation and publication. For more than a quarter of a century the TDWG Biodiversity Information Standards organisation has played a central role in defining and promoting data standards and protocols supporting interoperability between disparate and locally distributed systems. Although often not sufficiently recognized, TDWG standards are the foundation of many popular Biodiversity Informatics applications and infrastructures ranging from small desktop software solutions to large scale international data networks. However, individual scientists and groups of collaborating scientists have difficulties in fully exploiting the potential of standards that are often notoriously complex, lack non-technical documentation, and use different representations and underlying technologies. In the last few years, a series of initiatives such as Scratchpads, the EDIT Platform for Cybertaxonomy, and biowikifarm have started to implement and set up virtual work platforms for biodiversity sciences which shield their users from the complexity of the underlying standards. Apart from being practical work-horses for numerous working processes related to biodiversity sciences, they can be seen as information brokers mediating information between multiple data standards and protocols. The ViBRANT project will further strengthen the flexibility and power of virtual biodiversity working platforms by building software interfaces between them, thus facilitating essential information flows needed for comprehensive data exchange, data indexing, web-publication, and versioning. This work will make an important contribution to the shaping of an international, interoperable, and user-oriented biodiversity information infrastructure. PMID:22207807

  12. Enabling interoperability in Geoscience with GI-suite

    NASA Astrophysics Data System (ADS)

    Boldrini, Enrico; Papeschi, Fabrizio; Santoro, Mattia; Nativi, Stefano

    2015-04-01

    GI-suite is a brokering framework targeting interoperability of heterogeneous systems in the Geoscience domain. The framework is composed of different brokers, each one focusing on a specific functionality: discovery, access and semantics (i.e. GI-cat, GI-axe, GI-sem). The brokering takes place between a set of heterogeneous publishing services and a set of heterogeneous consumer applications: the brokering target is represented by resources (e.g. coverages, features, or metadata information) required to seamlessly flow from the providers to the consumers. Different international and community standards are now supported by GI-suite, making possible the successful deployment of GI-suite in many international projects and initiatives (such as GEOSS, NSF BCube and several EU funded projects). As for the publisher side, more than 40 standards and implementations are supported (e.g. Dublin Core, OAI-PMH, OGC W*S, Geonetwork, THREDDS Data Server, Hyrax Server, etc.); the support for each individual standard is provided by means of specific GI-suite components, called accessors. As for the consumer applications side, more than 15 standards and implementations are supported (e.g. ESRI ArcGIS, Openlayers, OGC W*S, OAI-PMH clients, etc.); the support for each individual standard is provided by means of specific profiler components. The GI-suite can be used in different scenarios by different actors:
    - A data provider having a pre-existing data repository can deploy and configure GI-suite to broker it, thus making its data resources available through different protocols to many different users (e.g. for data discovery and/or data access).
    - A data consumer can use GI-suite to discover and/or access resources from a variety of publishing services that are already publishing data according to well-known standards.
    - A community can deploy and configure GI-suite to build a community (or project-specific) broker: GI-suite can broker a set of community related repositories and

  13. Trusting Crowdsourced Geospatial Semantics

    NASA Astrophysics Data System (ADS)

    Goodhue, P.; McNair, H.; Reitsma, F.

    2015-08-01

    The degree of trust one can place in information is one of the foremost limitations of crowdsourced geospatial information. As with the development of web technologies, the increased prevalence of semantics associated with geospatial information has increased accessibility and functionality. Semantics also provides an opportunity to extend indicators of trust for crowdsourced geospatial information, which have largely focused on spatio-temporal and social aspects of that information. Comparing a feature's intrinsic and extrinsic properties to associated ontologies provides a means of semantically assessing the trustworthiness of crowdsourced geospatial information. The application of this approach to unconstrained semantic submissions then allows for a detailed assessment of the trust of these features whilst maintaining the descriptive thoroughness this mode of information submission affords. The resulting trust rating then becomes an attribute of the feature, providing not only an indication of the trustworthiness of a specific feature but also a measure that can be aggregated across multiple features to illustrate the overall trustworthiness of a dataset.
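
    A toy version of the semantic trust check described above: compare a feature's attributes with what an ontology expects for its class, then aggregate per-feature scores into a dataset-level rating. The schema, weights, and aggregation rule are all invented for illustration.

        # Toy semantic trust scoring for crowdsourced features.
        # Expected/excluded attributes and the penalty weight are invented.
        ONTOLOGY = {"drinking_fountain": {"expects": {"potable", "location"},
                                          "excludes": {"depth_m"}}}

        def trust(feature):
            spec = ONTOLOGY[feature["class"]]
            attrs = set(feature["attributes"])
            hits = len(attrs & spec["expects"]) / len(spec["expects"])
            penalty = 0.5 * len(attrs & spec["excludes"])
            return max(0.0, hits - penalty)

        features = [
            {"class": "drinking_fountain", "attributes": {"potable", "location"}},
            {"class": "drinking_fountain", "attributes": {"depth_m", "location"}},
        ]
        scores = [trust(f) for f in features]
        print(scores, "dataset trust:", sum(scores) / len(scores))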

  14. Algebraic Semantics for Narrative

    ERIC Educational Resources Information Center

    Kahn, E.

    1974-01-01

    This paper uses discussion of Edmund Spenser's "The Faerie Queene" to present a theoretical framework for explaining the semantics of narrative discourse. The algebraic theory of finite automata is used. (CK)

  15. "Pre-semantic" cognition revisited: critical differences between semantic aphasia and semantic dementia.

    PubMed

    Jefferies, Elizabeth; Rogers, Timothy T; Hopper, Samantha; Ralph, Matthew A Lambon

    2010-01-01

    Patients with semantic dementia show a specific pattern of impairment on both verbal and non-verbal "pre-semantic" tasks, e.g., reading aloud, past tense generation, spelling to dictation, lexical decision, object decision, colour decision and delayed picture copying. All seven tasks are characterised by poorer performance for items that are atypical of the domain and "regularization errors" (irregular/atypical items are produced as if they were domain-typical). The emergence of this pattern across diverse tasks in the same patients indicates that semantic memory plays a key role in all of these types of "pre-semantic" processing. However, this claim remains controversial because semantically impaired patients sometimes fail to show an influence of regularity. This study demonstrates that (a) the location of brain damage and (b) the underlying nature of the semantic deficit affect the likelihood of observing the expected relationship between poor comprehension and regularity effects. We compared the effect of multimodal semantic impairment in the context of semantic dementia and stroke aphasia on the seven "pre-semantic" tasks listed above. In all of these tasks, the semantic aphasia patients were less sensitive to typicality than the semantic dementia patients, even though the two groups obtained comparable scores on semantic tests. The semantic aphasia group also made fewer regularization errors and many more unrelated and perseverative responses. We propose that these group differences reflect the different locus for the semantic impairment in the two conditions: patients with semantic dementia have degraded semantic representations, whereas semantic aphasia patients show deregulated semantic cognition with concomitant executive deficits. These findings suggest a reinterpretation of single-case studies of comprehension-impaired aphasic patients who fail to show the expected effect of regularity on "pre-semantic" tasks. Consequently, such cases do not demonstrate

  16. Towards Data Repository Interoperability: The Data Conservancy Data Packaging Specification

    NASA Astrophysics Data System (ADS)

    DiLauro, T.; Duerr, R.; Thessen, A. E.; Rippin, M.; Pralle, B.; Choudhury, G. S.

    2013-12-01

    description, the DCS instance will be able to provide default mappings for the directories and files within the package payload and enable support for deposited content at a lower level of service. Internally, the DCS will map these hybrid package serializations to its own internal business objects and their properties. Thus, this approach is highly extensible, as other packaging formats could be mapped in a similar manner. In addition, this scheme supports establishing the fixity of the payload while still supporting update of the semantic overlay data. This allows a data producer with scarce resources or an archivist who acquires a researcher's data to package the data for deposit with the intention of augmenting the resource description in the future. The Data Conservancy is partnering with the Sustainable Environment Actionable Data[4] project to test the interoperability of this new packaging mechanism. [1] Data Conservancy: http://dataconservancy.org/ [2] BagIt: https://datatracker.ietf.org/doc/draft-kunze-bagit/ [3] OAI-ORE: http://www.openarchives.org/ore/1.0/ [4] SEAD: http://sead-data.net/
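
    The package serializations described here build on BagIt [2]; the sketch below uses the Python bagit library to show the relevant mechanics, payload fixity via checksum manifests plus tag metadata that can be augmented later. It is illustrative only and does not reproduce the Data Conservancy packaging profile or its semantic overlay.

        # Minimal BagIt packaging with the "bagit" library; the directory name
        # and metadata are invented. The Data Conservancy profile layers a
        # semantic overlay (e.g. OAI-ORE) on top, which is not shown here.
        import bagit

        bag = bagit.make_bag("my_dataset",   # existing directory becomes payload
                             {"Contact-Name": "Jane Researcher"})
        # Payload fixity: every payload file is checksummed in a manifest
        # (recent bagit versions write sha256/sha512 manifests by default).
        print(open("my_dataset/manifest-sha256.txt").read())
        # Tag metadata can be revised later without touching the payload:
        bag.info["External-Description"] = "Augmented resource description"
        bag.save()                           # rewrites tag files and manifests
        print(bag.is_valid())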

  17. Enabling Interoperable Space Robots With the Joint Technical Architecture for Robotic Systems (JTARS)

    NASA Technical Reports Server (NTRS)

    Bradley, Arthur; Dubowsky, Steven; Quinn, Roger; Marzwell, Neville

    2005-01-01

    Robots that operate independently of one another will not be adequate to accomplish the future exploration tasks of long-distance autonomous navigation, habitat construction, resource discovery, and material handling. Such activities will require that systems widely share information, plan and divide complex tasks, share common resources, and physically cooperate to manipulate objects. Recognizing the need for interoperable robots to accomplish the new exploration initiative, NASA's Office of Exploration Systems Research & Technology recently funded the development of the Joint Technical Architecture for Robotic Systems (JTARS). JTARS' charter is to identify the interface standards necessary to achieve interoperability among space robots. A JTARS working group (JTARS-WG) has been established comprising recognized leaders in the field of space robotics including representatives from seven NASA centers along with academia and private industry. The working group's early accomplishments include addressing key issues required for interoperability, defining which systems are within the project's scope, and framing the JTARS manuals around classes of robotic systems.

  18. Using a single content model for eHealth interoperability and secondary use.

    PubMed

    Atalag, Koray

    2013-01-01

    This chapter describes a middle-out approach to eHealth interoperability, with strong oversight on public health and health research, enabled by a uniform and shared content model to which all health information exchange conforms. As described in New Zealand's Interoperability Reference Architecture, the content model borrows its top level organization from the Continuity of Care Record (CCR) standard and is underpinned by the openEHR formalism. This provides a canonical model for representing a variety of clinical information, and serves as reference when determining payload in health information exchange. The main premise of this approach is that since all exchanged data conforms to the same model, interoperability of clinical information can readily be achieved. Use of Archetypes ensures preservation of clinical context which is critical for secondary use. The content model is envisaged to grow incrementally by adding new or specialised archetypes as finer details are needed in real projects. The consistency and long term viability of this approach critically depends on effective governance which requires new models of collaboration, decision making and appropriate tooling to support the process. PMID:24018523

  19. Semantic Interoperable Electronic Patient Records: The Unfolding of Consensus based Archetypes.

    PubMed

    Pedersen, Rune; Wynn, Rolf; Ellingsen, Gunnar

    2015-01-01

    This paper is a status report from a large-scale openEHR-based EPR project from the North Norway Regional Health Authority, encouraged by the unfolding of a national repository for openEHR archetypes. Clinicians need to engage in, and be responsible for, the production of archetypes. The consensus processes have so far been challenged by a low number of active clinicians, a lack of critical specialties to reach consensus, and a cumbersome review process (3 or 4 review rounds) for each archetype. The goal is to have several clinicians from each specialty as a backup if one is unable to participate. Archetypes and their importance for structured data and sharing of information have to become more visible to clinicians through sharper information practice. PMID:25991124

  20. CCSDS SM and C Mission Operations Interoperability Prototype

    NASA Technical Reports Server (NTRS)

    Lucord, Steven A.

    2010-01-01

    This slide presentation reviews the prototype of the Spacecraft Monitor and Control (SM&C) Operations for interoperability among other space agencies. This particular prototype involves the German Aerospace Center (DLR) in testing the ideas for interagency coordination.

  1. Interoperability of Repositories: The Simple Query Interface in ARIADNE

    ERIC Educational Resources Information Center

    Ternier, Stefaan; Duval, Erik

    2006-01-01

    This article reports on our experiences in providing interoperability between the ARIADNE knowledge pool system (KPS) (Duval, Forte, Cardinaels, Verhoeven, Van Durm, Hendrickx et al., 2001) and several other heterogeneous learning object repositories and referatories.

  2. An Ontological Solution to Support Interoperability in the Textile Industry

    NASA Astrophysics Data System (ADS)

    Duque, Arantxa; Campos, Cristina; Jiménez-Ruiz, Ernesto; Chalmeta, Ricardo

    Significant developments in information and communication technologies and challenging market conditions have forced enterprises to adapt their way of doing business. In this context, providing mechanisms to guarantee interoperability among heterogeneous organisations has become a critical issue. Even though prolific research has already been conducted in the area of enterprise interoperability, we have found that enterprises still struggle to introduce fully interoperable solutions, especially, in terms of the development and application of ontologies. Thus, the aim of this paper is to introduce basic ontology concepts in a simple manner and to explain the advantages of the use of ontologies to improve interoperability. We will also present a case study showing the implementation of an application ontology for an enterprise in the textile/clothing sector.

  3. Reuse and Interoperability of Avionics for Space Systems

    NASA Technical Reports Server (NTRS)

    Hodson, Robert F.

    2007-01-01

    The space environment presents unique challenges for avionics. Launch survivability, thermal management, radiation protection, and other factors are important for successful space designs. Many existing avionics designs use custom hardware and software to meet the requirements of space systems. Although some space vendors have moved more towards a standard product line approach to avionics, the space industry still lacks similar standards and common practices for avionics development. This lack of commonality manifests itself in limited reuse and a lack of interoperability. To address NASA's need for interoperable avionics that facilitate reuse, several hardware and software approaches are discussed. Experiences with existing space boards and the application of terrestrial standards are outlined. Enhancements and extensions to these standards are considered. A modular stack-based approach to space avionics is presented. Software and reconfigurable logic cores are considered for extending interoperability and reuse. Finally, some of the issues associated with the design of reusable interoperable avionics are discussed.

  4. 78 FR 14793 - Advancing Interoperability and Health Information Exchange

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-03-07

    ... of the National Coordinator (ONC) for Health IT (HIT) Certification Program are increasing standards..., laboratories, nursing homes, home health agencies, hospices, rural health clinics, ambulatory surgical centers... Interoperability and Health Information Exchange AGENCY: Office of the National Coordinator for Health...

  5. A Proposed Information Architecture for Telehealth System Interoperability

    SciTech Connect

    Craft, R.L.; Funkhouser, D.R.; Gallagher, L.K.; Garcia, R.J.; Parks, R.C.; Warren, S.

    1999-04-20

    We propose an object-oriented information architecture for telemedicine systems that promotes secure 'plug-and-play' interaction between system components through standardized interfaces, communication protocols, messaging formats, and data definitions. In this architecture, each component functions as a black box, and components plug together in a 'lego-like' fashion to achieve the desired device or system functionality. Telemedicine systems today rely increasingly on distributed, collaborative information technology during the care delivery process. While these leading-edge systems are bellwethers for highly advanced telemedicine, most are custom-designed and do not interoperate with other commercial offerings. Users are limited to a set of functionality that a single vendor provides and must often pay high prices to obtain this functionality, since vendors in this marketplace must deliver entire systems in order to compete. Besides increasing corporate research and development costs, this inhibits the ability of the user to make intelligent purchasing decisions regarding best-of-breed technologies. This paper proposes a reference architecture for plug-and-play telemedicine systems that addresses these issues.
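
    The black-box component idea can be sketched as an abstract interface with interchangeable vendor implementations; everything below is hypothetical, since the paper proposes an architecture rather than code.

        # "Black box behind a standardized interface", sketched in Python.
        # Interface and vendor components are invented for illustration.
        from abc import ABC, abstractmethod

        class VitalSignsSource(ABC):
            """Standardized interface a vendor component must implement."""
            @abstractmethod
            def read(self) -> dict: ...

        class VendorAMonitor(VitalSignsSource):
            def read(self):
                return {"hr": 72, "spo2": 98}

        class VendorBMonitor(VitalSignsSource):
            def read(self):
                return {"hr": 70, "spo2": 97}

        def charting_station(source: VitalSignsSource):
            # The consumer sees only the interface, so vendor components can
            # be swapped "lego-like" without changing this code.
            print(source.read())

        for component in (VendorAMonitor(), VendorBMonitor()):
            charting_station(component)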

  6. A Proposed Information Architecture for Telehealth System Interoperability

    SciTech Connect

    Warren, S.; Craft, R.L.; Parks, R.C.; Gallagher, L.K.; Garcia, R.J.; Funkhouser, D.R.

    1999-04-07

    Telemedicine technology is rapidly evolving. Whereas early telemedicine consultations relied primarily on video conferencing, consultations today may utilize video conferencing, medical peripherals, store-and-forward capabilities, electronic patient record management software, and/or a host of other emerging technologies. These remote care systems rely increasingly on distributed, collaborative information technology during the care delivery process, in its many forms. While these leading-edge systems are bellwethers for highly advanced telemedicine, the remote care market today is still immature. Most telemedicine systems are custom-designed and do not interoperate with other commercial offerings. Users are limited to a set of functionality that a single vendor provides and must often pay high prices to obtain this functionality, since vendors in this marketplace must deliver entire systems in order to compete. Besides increasing corporate research and development costs, this inhibits the ability of the user to make intelligent purchasing decisions regarding best-of-breed technologies. We propose a secure, object-oriented information architecture for telemedicine systems that promotes plug-and-play interaction between system components through standardized interfaces, communication protocols, messaging formats, and data definitions. In this architecture, each component functions as a black box, and components plug together in a lego-like fashion to achieve the desired device or system functionality. The architecture will support various ongoing standards work in the medical device arena.

  7. An incremental database access method for autonomous interoperable databases

    NASA Technical Reports Server (NTRS)

    Roussopoulos, Nicholas; Sellis, Timos

    1994-01-01

    We investigated a number of design and performance issues of interoperable database management systems (DBMS's). The major results of our investigation were obtained in the areas of client-server database architectures for heterogeneous DBMS's, incremental computation models, buffer management techniques, and query optimization. We finished a prototype of an advanced client-server workstation-based DBMS which allows access to multiple heterogeneous commercial DBMS's. Experiments and simulations were then run to compare its performance with the standard client-server architectures. The focus of this research was on adaptive optimization methods of heterogeneous database systems. Adaptive buffer management accounts for the random and object-oriented access methods for which no known characterization of the access patterns exists. Adaptive query optimization means that value distributions and selectivities, which play the most significant role in query plan evaluation, are continuously refined to reflect the actual values as opposed to static ones that are computed off-line. Query feedback is a concept that was first introduced to the literature by our group. We employed query feedback for both adaptive buffer management and for computing value distributions and selectivities. For adaptive buffer management, we use the page faults of prior executions to achieve more 'informed' management decisions. For the estimation of the distributions of the selectivities, we use curve-fitting techniques, such as least squares and splines, for regressing on these values.
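
    A minimal sketch of query feedback applied to selectivity estimation, in the spirit of the curve-fitting approach described: each executed query contributes an observed (value, selectivity) point, and a least-squares polynomial is re-fit so later estimates track actual data. The polynomial model and fallback value are illustrative assumptions.

        # Query feedback for selectivity estimation via least-squares fitting.
        import numpy as np

        feedback = []   # (predicate value, observed selectivity) pairs

        def record(value, rows_returned, table_rows):
            feedback.append((value, rows_returned / table_rows))

        def estimate(value, degree=2):
            if len(feedback) <= degree:
                return 0.1                        # fallback before enough feedback
            xs, ys = zip(*feedback)
            coeffs = np.polyfit(xs, ys, degree)   # least-squares regression
            return float(np.clip(np.polyval(coeffs, value), 0.0, 1.0))

        for v, rows in [(10, 50), (20, 120), (30, 300), (40, 650)]:
            record(v, rows, 1000)                 # feedback from executed queries
        print(estimate(35))   # refined estimate rather than a static statistic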

  8. ISAIA: Interoperable Systems for Archival Information Access

    NASA Technical Reports Server (NTRS)

    Hanisch, Robert J.

    2002-01-01

    The ISAIA project was originally proposed in 1999 as a successor to the informal AstroBrowse project. AstroBrowse, which provided a data location service for astronomical archives and catalogs, was a first step toward data system integration and interoperability. The goals of ISAIA were ambitious: '...To develop an interdisciplinary data location and integration service for space science. Building upon existing data services and communications protocols, this service will allow users to transparently query hundreds or thousands of WWW-based resources (catalogs, data, computational resources, bibliographic references, etc.) from a single interface. The service will collect responses from various resources and integrate them in a seamless fashion for display and manipulation by the user.' Funding was approved only for a one-year pilot study, a decision that in retrospect was wise given the rapid changes in information technology in the past few years and the emergence of the Virtual Observatory initiatives in the US and worldwide. Indeed, the ISAIA pilot study was influential in shaping the science goals, system design, metadata standards, and technology choices for the virtual observatory. The ISAIA pilot project also helped to cement working relationships among the NASA data centers, US ground-based observatories, and international data centers. The ISAIA project was formed as a collaborative effort between thirteen institutions that provided data to astronomers, space physicists, and planetary scientists. Among the fruits we ultimately hoped would come from this project would be a central site on the Web that any space scientist could use to efficiently locate existing data relevant to a particular scientific question. Furthermore, we hoped that the needed technology would be general enough that smaller, more focused communities within space science could use the same technologies and standards to provide more specialized services. A major challenge to searching

  9. Frame semantics-based study of verbs across medical genres.

    PubMed

    Wandji Tchami, Ornella; L'Homme, Marie-Claude; Grabar, Natalia

    2014-01-01

    The field of medicine gathers actors with different levels of expertise. These actors must interact, although their mutual understanding is not always completely successful. We propose to study corpora (with high and low levels of expertise) in order to observe their specificities. More specifically, we perform a contrastive analysis of verbs, and of the syntactic and semantic features of their participants, based on the Frame Semantics framework and the methodology implemented in FrameNet. In order to achieve this, we use an existing medical terminology to automatically annotate the semantic classes of the participants of verbs, which we assume are indicative of semantic roles. Our results indicate that verbs show similar or very close semantics in some contexts, while in other contexts they behave differently. These results are important for studying the understanding of medical information by patients and for improving the communication between patients and medical doctors.

  10. Semantics in NETMAR (open service NETwork for MARine environmental data)

    NASA Astrophysics Data System (ADS)

    Leadbetter, Adam; Lowry, Roy; Clements, Oliver

    2010-05-01

    Over recent years, there has been a proliferation of environmental data portals utilising a wide range of systems and services, many of which cannot interoperate. The European Union Framework 7 project NETMAR (that commenced February 2010) aims to provide a toolkit for building such portals in a coherent manner through the use of chained Open Geospatial Consortium Web Services (WxS), OPeNDAP file access and W3C standards controlled by a Business Process Execution Language workflow. As such, the end product will be configurable by user communities interested in developing a portal for marine environmental data, and will offer search, download and integration tools for a range of satellite, model and observed data from open ocean and coastal areas. Further processing of these data will also be available in order to provide statistics and derived products suitable for decision making in the chosen environmental domain. In order to make the resulting portals truly interoperable, the NETMAR programme requires a detailed definition of the semantics of the services being called and the data which are being requested. A key goal of the NETMAR programme is, therefore, to develop a multi-domain and multilingual ontology of marine data and services. This will allow searches across both human languages and across scientific domains. The approach taken will be to analyse existing semantic resources and provide mappings between them, gluing together the definitions, semantics and workflows of the WxS services. The mappings between terms aim to be more general than the standard "narrower than", "broader than" type seen in the thesauri or simple ontologies implemented by previous programmes. Tools for the development and population of ontologies will also be provided by NETMAR, as there will be instances in which existing resources cannot sufficiently describe newly encountered data or services.
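
    A minimal sketch of such richer term mappings, assuming rdflib as the toolkit and invented vocabulary URIs, could use the SKOS mapping properties, which go beyond plain broader/narrower links:

        from rdflib import Graph, Namespace
        from rdflib.namespace import SKOS

        A = Namespace("http://example.org/vocabA/")  # placeholder vocabularies
        B = Namespace("http://example.org/vocabB/")

        g = Graph()
        g.add((A.seaSurfaceTemperature, SKOS.exactMatch, B.SST))
        g.add((A.chlorophyllConcentration, SKOS.closeMatch, B.chlorA))
        g.add((A.oceanColour, SKOS.relatedMatch, B.chlorA))

        # A portal can follow these links to translate a query phrased in one
        # community's terms into another community's terms.
        for s, _, o in g.triples((None, SKOS.exactMatch, None)):
            print(s, "maps exactly to", o)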

  11. Ensuring Sustainable Data Interoperability Across the Natural and Social Sciences

    NASA Astrophysics Data System (ADS)

    Downs, R. R.; Chen, R. S.

    2015-12-01

    Both the natural and social science data communities are attempting to address the long-term sustainability of their data infrastructures in rapidly changing research, technological, and policy environments. Many parts of these communities are also considering how to improve the interoperability and integration of their data and systems across natural, social, health, and other domains. However, these efforts have generally been undertaken in parallel, with little thought about how different sustainability approaches may impact long-term interoperability from scientific, legal, or economic perspectives, or vice versa, i.e., how improved interoperability could enhance—or threaten—infrastructure sustainability. Scientific progress depends substantially on the ability to learn from the legacy of previous work available for current and future scientists to study, often by integrating disparate data not previously assembled. Digital data are less likely than scientific publications to be usable in the future unless they are managed by science-oriented repositories that can support long-term data access with the documentation and services needed for future interoperability. We summarize recent discussions in the social and natural science communities on emerging approaches to sustainability and relevant interoperability activities, including efforts by the Belmont Forum E-Infrastructures project to address global change data infrastructure needs; the Group on Earth Observations to further implement data sharing and improve data management across diverse societal benefit areas; and the Research Data Alliance to develop legal interoperability principles and guidelines and to address challenges faced by domain repositories. We also examine emerging needs for data interoperability in the context of the post-2015 development agenda and the expected set of Sustainable Development Goals (SDGs), which set ambitious targets for sustainable development, poverty reduction, and

  12. Towards virtual knowledge broker services for semantic integration of life science literature and data sources.

    PubMed

    Harrow, Ian; Filsell, Wendy; Woollard, Peter; Dix, Ian; Braxenthaler, Michael; Gedye, Richard; Hoole, David; Kidd, Richard; Wilson, Jabe; Rebholz-Schuhmann, Dietrich

    2013-05-01

    Research in the life sciences requires ready access to primary data, derived information and relevant knowledge from a multitude of sources. Integration and interoperability of such resources are crucial for sharing content across research domains relevant to the life sciences. In this article we present a perspective review of data integration with emphasis on a semantics-driven approach that pushes content into a shared infrastructure, reduces data redundancy and clarifies any inconsistencies. This enables much improved access to life science data from numerous primary sources. The Semantic Enrichment of the Scientific Literature (SESL) pilot project demonstrates the feasibility of using already available open Semantic Web standards and technologies to integrate public and proprietary data resources, which span structured and unstructured content. This has been accomplished through a precompetitive consortium, which provides a cost-effective approach for numerous stakeholders to work together to solve common problems.

  13. A semantically-aided approach for online annotation and retrieval of medical images.

    PubMed

    Kyriazos, George K; Gerostathopoulos, Ilias Th; Kolias, Vassileios D; Stoitsis, John S; Nikita, Konstantina S

    2011-01-01

    The need for annotating the continuously increasing volume of medical image data is recognized by medical experts for a variety of purposes, whether in medical practice, research or education. The rich information content latent in medical images can be made explicit and formal with the use of well-defined ontologies. Evolution of the Semantic Web now offers a unique opportunity for a web-based, service-oriented approach. Remote access to FMA and ICD-10 reference ontologies provides the ontological annotation framework. The proposed system utilizes this infrastructure to provide a customizable and robust annotation procedure. It also provides an intelligent search mechanism indicating the advantages of semantic over keyword search. The common representation layer discussed facilitates interoperability between institutions and systems, while semantic content enables inference and knowledge integration.
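
    The general pattern of ontology-backed annotation and concept-based (rather than keyword-based) retrieval can be sketched as follows; all URIs and the concept identifier are placeholders, not the system's actual vocabulary:

        from rdflib import Graph, Namespace, URIRef
        from rdflib.namespace import RDF

        EX = Namespace("http://example.org/annotation/")
        FMA = Namespace("http://example.org/fma/")  # stand-in for the FMA ontology

        g = Graph()
        img = URIRef("http://example.org/images/ct_0042")
        g.add((img, RDF.type, EX.MedicalImage))
        g.add((img, EX.depicts, FMA.Liver))  # annotate the image with a concept

        # Semantic search: retrieve images by the concept they depict.
        q = """
        SELECT ?img WHERE {
            ?img <http://example.org/annotation/depicts>
                 <http://example.org/fma/Liver> .
        }
        """
        for row in g.query(q):
            print(row.img)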

  15. The HDF Product Designer - Interoperability in the First Mile

    NASA Astrophysics Data System (ADS)

    Lee, H.; Jelenak, A.; Habermann, T.

    2014-12-01

    Interoperable data have been a long-time goal in many scientific communities. The recent growth in analysis, visualization and mash-up applications that expect data stored in a standardized manner has brought the interoperability issue to the fore. On the other hand, producing interoperable data is often regarded as a sideline task in a typical research team for which resources are not readily available. The HDF Group is developing a software tool aimed at lessening the burden of creating data in standards-compliant, interoperable HDF5 files. The tool, named HDF Product Designer, lowers the threshold needed to design such files by providing a user interface that combines the rich HDF5 feature set with applicable metadata conventions. Users can quickly devise new HDF5 files while at the same time seamlessly incorporating the latest best practices and conventions from their community. That is what the term interoperability in the first mile means: enabling generation of interoperable data in HDF5 files from the onset of their production. The tool also incorporates collaborative features, allowing a team approach to the file design, as well as easy transfer of best practices as they are being developed. The current state of the tool and the plans for future development will be presented. Constructive input from interested parties is always welcome.
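
    As a rough sketch of the kind of file such a design tool targets (the attribute names follow CF-style conventions and are assumptions here, not the tool's actual output), a convention-following HDF5 file can be written with h5py:

        import h5py
        import numpy as np

        with h5py.File("product.h5", "w") as f:
            # File-level metadata following a community convention.
            f.attrs["Conventions"] = "CF-1.6"
            f.attrs["title"] = "Example gridded product"
            dset = f.create_dataset("sea_surface_temperature",
                                    data=np.zeros((180, 360), dtype="f4"))
            dset.attrs["units"] = "kelvin"
            dset.attrs["long_name"] = "sea surface temperature"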

  16. The advanced microgrid. Integration and interoperability

    SciTech Connect

    Bower, Ward Isaac; Ton, Dan T.; Guttromson, Ross; Glover, Steven F; Stamp, Jason Edwin; Bhatnagar, Dhruv; Reilly, Jim

    2014-02-01

    This white paper focuses on "advanced microgrids," but sections do, out of necessity, reference today's commercially available systems and installations in order to clearly distinguish the differences and advances. Advanced microgrids have been identified as a necessary part of the modern electrical grid through two DOE microgrid workshops, the National Institute of Standards and Technology Smart Grid Interoperability Panel, and other related sources. With their grid-interconnectivity advantages, advanced microgrids will improve system energy efficiency and reliability and provide enabling technologies for grid-independence to end-user sites. One popular definition that has evolved and is used in multiple references is that a microgrid is a group of interconnected loads and distributed-energy resources within clearly defined electrical boundaries that acts as a single controllable entity with respect to the grid. A microgrid can connect to and disconnect from the grid, enabling it to operate in either grid-connected or island mode. Further, an advanced microgrid can then be loosely defined as a dynamic microgrid.

  17. Interoperable Data Sharing for Diverse Scientific Disciplines

    NASA Astrophysics Data System (ADS)

    Hughes, John S.; Crichton, Daniel; Martinez, Santa; Law, Emily; Hardman, Sean

    2016-04-01

    For diverse scientific disciplines to interoperate, they must be able to exchange information based on a shared understanding. To capture this shared understanding, we have developed a knowledge representation framework using ontologies and ISO level archive and metadata registry reference models. This framework provides multi-level governance, evolves independently of implementation technologies, and promotes agile development, namely adaptive planning, evolutionary development, early delivery, continuous improvement, and rapid and flexible response to change. The knowledge representation framework is populated through knowledge acquisition from discipline experts. It is also extended to meet specific discipline requirements. The result is a formalized and rigorous knowledge base that addresses data representation, integrity, provenance, context, quantity, and their relationships within the community. The contents of the knowledge base are translated and written to files in appropriate formats to configure system software and services, provide user documentation, validate ingested data, and support data analytics. This presentation will provide an overview of the framework, present the Planetary Data System's PDS4 as a use case that has been adopted by the international planetary science community, describe how the framework is being applied to other disciplines, and share some important lessons learned.

  18. Key pillars of data interoperability in Earth Sciences - INSPIRE and beyond

    NASA Astrophysics Data System (ADS)

    Tomas, Robert; Lutz, Michael

    2013-04-01

    The well-known heterogeneity and fragmentation of data models, formats and controlled vocabularies of environmental data prevent potential data users from utilising the wealth of environmental information available today across Europe. The main aim of INSPIRE is to improve this situation and give users the possibility to access, use and correctly interpret environmental data. Over the past years, a number of INSPIRE technical guidelines (TG) and implementing rules (IR) for interoperability have been developed, involving hundreds of domain experts from across Europe. The data interoperability specifications, which have been developed for all 34 INSPIRE spatial data themes, are the central component of the TG and IR. Several of these themes are related to the earth sciences, e.g. geology (including hydrogeology, geophysics and geomorphology), mineral and energy resources, soil science, natural hazards, meteorology, oceanography, hydrology and land cover. The following main pillars for data interoperability and harmonisation have been identified during the development of the specifications: Conceptual data models describe the spatial objects and their properties and relationships for the different spatial data themes. To achieve cross-domain harmonization, the data models for all themes are based on a common modelling framework (the INSPIRE Generic Conceptual Model) and managed in a common UML repository. Harmonised vocabularies (or code lists) are to be used in data exchange in order to overcome interoperability issues caused by heterogeneous free-text and/or multi-lingual content. Since a mapping to a harmonized vocabulary could be difficult, the INSPIRE data models typically allow the provision of more specific terms from local vocabularies in addition to the harmonized terms - utilizing either the extensibility options or additional terminological attributes. Encoding. Currently, specific XML profiles of the Geography Markup Language (GML) are promoted as the standard
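
    The extensibility mechanism for code lists can be illustrated with a simplified, non-normative encoding; the element names, namespaces, and code-list URL below are invented for the example:

        import xml.etree.ElementTree as ET

        NS = "http://example.org/inspire-like"
        XLINK = "http://www.w3.org/1999/xlink"
        ET.register_namespace("ex", NS)
        ET.register_namespace("xlink", XLINK)

        unit = ET.Element(f"{{{NS}}}GeologicUnit")
        # Harmonised code-list value, given by reference.
        lith = ET.SubElement(unit, f"{{{NS}}}lithology")
        lith.set(f"{{{XLINK}}}href",
                 "http://example.org/codelist/LithologyValue/sandstone")
        # More specific term from a local vocabulary, carried alongside.
        local = ET.SubElement(unit, f"{{{NS}}}localLithology")
        local.text = "glauconitic sandstone"

        print(ET.tostring(unit, encoding="unicode"))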

  19. A Relation Routing Scheme for Distributed Semantic Media Query

    PubMed Central

    Liao, Zhuhua; Zhang, Guoqiang; Yi, Aiping; Zhang, Guoqing; Liang, Wei

    2013-01-01

    Performing complex semantic queries over large-scale distributed media contents is a challenging task for rich media applications. The dynamics and openness of data sources make it difficult to realize a query scheme that simultaneously achieves precision, scalability, and reliability. In this paper, a novel relation routing scheme (RRS) is proposed by renovating the routing model of Content Centric Network (CCN) for directly querying large-scale semantic media content. By using a proper query model and routing mechanism, semantic queries with complex relation constraints from users can be guided towards potential media sources through semantic guider nodes. The scattered and fragmented query results can be integrated on their way back for semantic needs or to avoid duplication. Several new techniques, such as semantic-based naming, incomplete response avoidance, timeout checking, and semantic integration, are developed in this paper to improve the accuracy, efficiency, and practicality of the proposed approach. Both analytical and experimental results show that the proposed scheme is a promising and effective solution for complex semantic queries and integration over large-scale networks.

  20. Supervised learning of semantic classes for image annotation and retrieval.

    PubMed

    Carneiro, Gustavo; Chan, Antoni B; Moreno, Pedro J; Vasconcelos, Nuno

    2007-03-01

    A probabilistic formulation for semantic image annotation and retrieval is proposed. Annotation and retrieval are posed as classification problems where each class is defined as the group of database images labeled with a common semantic label. It is shown that, by establishing this one-to-one correspondence between semantic labels and semantic classes, a minimum probability of error annotation and retrieval are feasible with algorithms that are 1) conceptually simple, 2) computationally efficient, and 3) do not require prior semantic segmentation of training images. In particular, images are represented as bags of localized feature vectors, a mixture density estimated for each image, and the mixtures associated with all images annotated with a common semantic label pooled into a density estimate for the corresponding semantic class. This pooling is justified by a multiple instance learning argument and performed efficiently with a hierarchical extension of expectation-maximization. The benefits of the supervised formulation over the more complex, and currently popular, joint modeling of semantic label and visual feature distributions are illustrated through theoretical arguments and extensive experiments. The supervised formulation is shown to achieve higher accuracy than various previously published methods at a fraction of their computational cost. Finally, the proposed method is shown to be fairly robust to parameter tuning.
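
    The pooled, class-conditional density estimation at the heart of this formulation can be approximated compactly; the sketch below fits an ordinary scikit-learn GMM on synthetic features and so stands in for, rather than reproduces, the paper's hierarchical EM:

        import numpy as np
        from sklearn.mixture import GaussianMixture

        rng = np.random.default_rng(0)
        # Pool the localized feature vectors of all images sharing a label.
        features_by_label = {
            "sky":   rng.normal(0.0, 1.0, size=(200, 8)),
            "grass": rng.normal(3.0, 1.0, size=(200, 8)),
        }
        class_models = {
            label: GaussianMixture(n_components=4, random_state=0).fit(feats)
            for label, feats in features_by_label.items()
        }

        def annotate(image_feats):
            """Label a bag of feature vectors with the max-likelihood class."""
            return max(class_models,
                       key=lambda lb: class_models[lb].score(image_feats))

        print(annotate(rng.normal(2.8, 1.0, size=(50, 8))))  # likely "grass"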

  1. Semantic home video categorization

    NASA Astrophysics Data System (ADS)

    Min, Hyun-Seok; Lee, Young Bok; De Neve, Wesley; Ro, Yong Man

    2009-02-01

    Nowadays, a strong need exists for the efficient organization of an increasing amount of home video content. To create an efficient system for the management of home video content, it is required to categorize home video content in a semantic way. So far, a significant amount of research has already been dedicated to semantic video categorization. However, conventional categorization approaches often rely on unnecessary concepts and complicated algorithms that are not suited to the context of home video categorization. To overcome the aforementioned problem, this paper proposes a novel home video categorization method that adopts semantic home photo categorization. To use home photo categorization in the context of home video, we segment video content into shots and extract key frames that represent each shot. To extract the semantics from key frames, we divide each key frame into ten local regions and extract low-level features. Based on the low-level features extracted for each local region, we can predict the semantics of a particular key frame. To verify the usefulness of the proposed home video categorization method, experiments were performed with 70 home video sequences labeled with concepts from the MPEG-7 VCE2 dataset. For the home video sequences used, the proposed system produced a recall of 77% and an accuracy of 78%.

  2. Semantic Parameters of Split Intransitivity.

    ERIC Educational Resources Information Center

    Van Valin, Jr., Robert D.

    1990-01-01

    This paper argues that split-intransitive phenomena are better explained in semantic terms. A semantic analysis is carried out in Role and Reference Grammar, which assumes the theory of verb classification proposed in Dowty 1979. (49 references) (JL)

  3. A semantically-aided architecture for a web-based monitoring system for carotid atherosclerosis.

    PubMed

    Kolias, Vassileios D; Stamou, Giorgos; Golemati, Spyretta; Stoitsis, Giannis; Gkekas, Christos D; Liapis, Christos D; Nikita, Konstantina S

    2015-08-01

    Carotid atherosclerosis is a multifactorial disease and its clinical diagnosis depends on the evaluation of heterogeneous clinical data, such as imaging exams, biochemical tests and the patient's clinical history. The lack of interoperability between Health Information Systems (HIS) does not allow the physicians to acquire all the necessary data for the diagnostic process. In this paper, a semantically-aided architecture is proposed for a web-based monitoring system for carotid atherosclerosis that is able to gather and unify heterogeneous data with the use of an ontology and to create a common interface for data access enhancing the interoperability of HIS. The architecture is based on an application ontology of carotid atherosclerosis that is used to (a) integrate heterogeneous data sources on the basis of semantic representation and ontological reasoning and (b) access the critical information using SPARQL query rewriting and ontology-based data access services. The architecture was tested over a carotid atherosclerosis dataset consisting of the imaging exams and the clinical profile of 233 patients, using a set of complex queries, constructed by the physicians. The proposed architecture was evaluated with respect to the complexity of the queries that the physicians could make and the retrieval speed. The proposed architecture gave promising results in terms of interoperability, ontological integration of heterogeneous data sources, and expanded query and retrieval capabilities in HIS.
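
    The kind of criterion physicians posed through the system can be illustrated with a small, self-contained graph; the URIs, property names, and values below are placeholders rather than the project's actual ontology:

        from rdflib import Graph, Literal, Namespace, URIRef
        from rdflib.namespace import RDF, XSD

        EX = Namespace("http://example.org/carotid/")
        g = Graph()
        p = URIRef("http://example.org/patients/42")
        g.add((p, RDF.type, EX.Patient))
        g.add((p, EX.stenosisDegree, Literal(72, datatype=XSD.integer)))

        # "Patients with stenosis above 70%" as a SPARQL query over the graph.
        q = """
        PREFIX ex: <http://example.org/carotid/>
        SELECT ?patient WHERE {
            ?patient a ex:Patient ;
                     ex:stenosisDegree ?d .
            FILTER (?d > 70)
        }
        """
        for row in g.query(q):
            print(row.patient)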

  5. e-Science and biological pathway semantics

    PubMed Central

    Luciano, Joanne S; Stevens, Robert D

    2007-01-01

    Background The development of e-Science presents a major set of opportunities and challenges for the future progress of biological and life scientific research. Major new tools are required and corresponding demands are placed on the high-throughput data generated and used in these processes. Nowhere is the demand greater than in the semantic integration of these data. Semantic Web tools and technologies afford the chance to achieve this semantic integration. Since pathway knowledge is central to much of today's scientific research, it is a good test-bed for semantic integration. Within the context of biological pathways, the BioPAX initiative, part of a broader movement towards the standardization and integration of life science databases, forms a necessary prerequisite for the successful application of e-Science in health care and life science research. This paper examines whether BioPAX, an effort to overcome the barrier of disparate and heterogeneous pathway data sources, addresses the needs of e-Science. Results We demonstrate how BioPAX pathway data can be used to ask and answer some useful biological questions. We find that BioPAX comes close to meeting a broad range of e-Science needs, but certain semantic weaknesses mean that these goals are missed. We make a series of recommendations for re-modeling some aspects of BioPAX to better meet these needs. Conclusion Once these semantic weaknesses are addressed, it will be possible to integrate pathway information in a manner that would be useful in e-Science.

  6. The semantic priming project.

    PubMed

    Hutchison, Keith A; Balota, David A; Neely, James H; Cortese, Michael J; Cohen-Shikora, Emily R; Tse, Chi-Shing; Yap, Melvin J; Bengson, Jesse J; Niemeyer, Dale; Buchanan, Erin

    2013-12-01

    Speeded naming and lexical decision data for 1,661 target words following related and unrelated primes were collected from 768 subjects across four different universities. These behavioral measures have been integrated with demographic information for each subject and descriptive characteristics for every item. Subjects also completed portions of the Woodcock-Johnson reading battery, three attentional control tasks, and a circadian rhythm measure. These data are available at a user-friendly Internet-based repository (http://spp.montana.edu). This Web site includes a search engine designed to generate lists of prime-target pairs with specific characteristics (e.g., length, frequency, associative strength, latent semantic similarity, priming effect in standardized and raw reaction times). We illustrate the types of questions that can be addressed via the Semantic Priming Project. These data represent the largest behavioral database on semantic priming and are available to researchers to aid in selecting stimuli, testing theories, and reducing potential confounds in their studies.
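
    The sort of stimulus selection the repository's search engine supports can be mimicked on a local table; the miniature data frame below is invented, not drawn from the actual database:

        import pandas as pd

        items = pd.DataFrame({
            "prime":  ["doctor", "bread", "cat"],
            "target": ["nurse", "butter", "dog"],
            "target_length": [5, 6, 3],
            "assoc_strength": [0.62, 0.54, 0.58],
        })

        # Select strongly associated pairs with short targets, as a researcher
        # might when assembling a stimulus list.
        subset = items[(items.assoc_strength > 0.55) & (items.target_length <= 5)]
        print(subset[["prime", "target"]])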

  7. Academic Research Library as Broker in Addressing Interoperability Challenges for the Geosciences

    NASA Astrophysics Data System (ADS)

    Smith, P., II

    2015-12-01

    Data capture is an important process in the research lifecycle. Complete descriptive and representative information of the data or database is necessary during data collection, whether in the field or in the research lab. The National Science Foundation's (NSF) Public Access Plan (2015) mandates the need for federally funded projects to make their research data more openly available. Developing, implementing, and integrating metadata workflows into the research process of the data lifecycle facilitates improved data access while also addressing interoperability challenges for the geosciences, such as data description and representation. Lack of metadata or data curation can contribute to (1) semantic, (2) ontology, and (3) data integration issues within and across disciplinary domains and projects. Some researchers on EarthCube-funded projects have identified these issues as gaps. These gaps can contribute to interoperability issues in data access, discovery, and integration between domain-specific and general data repositories. Academic Research Libraries have expertise in providing long-term discovery and access through the use of metadata standards and provision of access to research data, datasets, and publications via institutional repositories. Metadata crosswalks, open archival information systems (OAIS), trusted repositories, data seal of approval, persistent URLs, linking data, objects, resources, and publications in institutional repositories and digital content management systems are common components in the library discipline. These components contribute to a library perspective on data access and discovery that can benefit the geosciences. The USGS Community for Data Integration (CDI) has developed the Science Support Framework (SSF) for data management and integration within its community of practice for contribution to improved understanding of the Earth's physical and biological systems. The USGS CDI SSF can be used as a reference model to map to Earth

  8. Hera: Engineering Web Applications Using Semantic Web-based Models

    NASA Astrophysics Data System (ADS)

    van der Sluijs, Kees; Houben, Geert-Jan; Leonardi, Erwin; Hidders, Jan

    In this chapter, we consider the contribution of models and model-driven approaches based on the Semantic Web for the development of Web applications. The model-driven web engineering approach, which separates concerns at different abstraction levels in the application design process, allows for more robust and structured design of web applications. This is illustrated by the use of Hera, an approach from the class of Web engineering methods that relies on models expressed using RDF(S) and an RDF(S) query language. It illustrates how models, in particular models that fit with the ideas and concepts of the Semantic Web, support the design and engineering of modern, open and heterogeneous Web-based systems. In the presented approach, adaptation and personalization are a main aspect, and it is illustrated how they are expressed using semantic data models and languages. Specific features of Hera are also discussed, such as interoperability between applications in user modeling, aspect orientation in Web design, and graphical tool support for Web application design.

  9. GEO Standard and Interoperability Forum (SIF) European Team

    NASA Astrophysics Data System (ADS)

    Nativi, Stefano

    2010-05-01

    The European GEO SIF has been initiated by the GIGAS project in an effort to better coordinate European requirements for GEO and GEOSS related activities, and is recognised by GEO as a regional SIF. To help advance the interoperability goals of the Global Earth Observing System of Systems (GEOSS), the Group on Earth Observations (GEO) Architecture and Data Committee (ADC) has established a Standards and Interoperability Forum (SIF) to support GEO organizations offering components and services to GEOSS. The SIF will help GEOSS contributors understand how to work with the GEOSS interoperability guidelines and how to enter their "interoperability arrangements" (standards or other ad hoc arrangements for interoperability) into the GEOSS registries. This will greatly facilitate the utility of GEOSS and encourage a significant increase in participation. To carry out its work most effectively, the SIF promotes the formation of Regional Teams. These will help to organize and optimize the support coming from different parts of the world and to reach out to regional and multi-disciplinary scientific communities. This will allow true global representation in supporting GEOSS interoperability. A SIF European Team is foreseen. The main role of the SIF is facilitating interoperability and working with members and participating organizations as they offer data and information services to the users of GEOSS. In this framework, the purpose of having a European Regional Team is to increase efficiency in carrying out the work of the SIF. Experts can join the SIF European Team by registering at the SIF European Team wiki site: http://www.thegigasforum.eu/sif/

  10. Temporal Representation in Semantic Graphs

    SciTech Connect

    Levandoski, J J; Abdulla, G M

    2007-08-07

    A wide range of knowledge discovery and analysis applications, ranging from business to biological, make use of semantic graphs when modeling relationships and concepts. Most of the semantic graphs used in these applications are assumed to be static pieces of information, meaning the temporal evolution of concepts and relationships is not taken into account. Guided by the need for more advanced semantic graph queries involving temporal concepts, this paper surveys the existing work involving temporal representations in semantic graphs.
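
    One common temporal representation in this line of work attaches a validity interval to each relationship; a minimal sketch (entities and years invented) using networkx:

        import networkx as nx

        g = nx.MultiDiGraph()
        g.add_edge("alice", "acme", relation="employedBy", start=2001, end=2005)
        g.add_edge("alice", "globex", relation="employedBy", start=2005, end=2010)

        def relations_at(graph, year):
            """Return the edges whose validity interval contains the year."""
            return [(u, v, d["relation"]) for u, v, d in graph.edges(data=True)
                    if d["start"] <= year < d["end"]]

        print(relations_at(g, 2003))  # [('alice', 'acme', 'employedBy')]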

  11. Optimal Care Mother-Baby and Outcomes through Community-wide Data Sharing, Interoperability and Connectivity.

    PubMed

    Shaha, Steven H; Gilbert-Bradley, Diane

    2015-01-01

    The power of interoperable systems with data/information integration, central to achieving the goals of Telehealth, is illustrated through mutually beneficial sharing between Labor & Delivery (L&D) and Obstetrics (OBs) Clinics. Data shared between L&D and OB brought improved practice patterns and outcomes, and increased satisfaction at both. Staffing and skillsets were significantly improved by advance knowledge of arriving complications and anticipated volumes. OBs increased clinic efficiencies and improved patient-direct care time with improved clinical and cost outcomes.

  12. Advances in Multi-disciplinary Interoperability

    NASA Astrophysics Data System (ADS)

    Pearlman, J.; Nativi, S.; Craglia, M.; Huerta, J.; Rubio-Iglesias, J. M.; Serrano, J. J.

    2012-04-01

    The challenge for addressing issues such as climate change, food security or ecosystem sustainability is that they require multi-disciplinary collaboration and the ability to integrate information across scientific domains. Multidisciplinary collaborations are difficult because each discipline has its own "language", protocols and formats for communicating within its community and handling data and information. EuroGEOSS demonstrates the added value to the scientific community and to society of making existing systems and applications interoperable and useful within the GEOSS and INSPIRE frameworks. In 2010, the project built an initial operating capacity of a multi-disciplinary Information System addressing three areas: drought, forestry and biodiversity. It is now furthering this development into an advanced operating capacity (http://www.eurogeoss.eu). The key to this capability is the creation of a broker that supports access to multiple resources through a common user interface and the automation of data search and access using state of the art information technology. EuroGEOSS hosted a conference on information systems and multi-disciplinary applications of science and technology. "EuroGEOSS: advancing the vision of GEOSS" provided a forum for developers, users and decision-makers working with advanced multi-disciplinary information systems to improve science and decisions for complex societal issues. In particular, the Conference addressed: Information systems for supporting multi-disciplinary research; Information systems and modeling for biodiversity, drought, forestry and related societal benefit areas; and Case studies of multi-disciplinary applications and outcomes. This paper will discuss the major finding of the conference and the directions for future development.

  13. Interoperable atlases of the human brain.

    PubMed

    Amunts, K; Hawrylycz, M J; Van Essen, D C; Van Horn, J D; Harel, N; Poline, J-B; De Martino, F; Bjaalie, J G; Dehaene-Lambertz, G; Dehaene, S; Valdes-Sosa, P; Thirion, B; Zilles, K; Hill, S L; Abrams, M B; Tass, P A; Vanduffel, W; Evans, A C; Eickhoff, S B

    2014-10-01

    The last two decades have seen an unprecedented development of human brain mapping approaches at various spatial and temporal scales. Together, these have provided a large fundus of information on many different aspects of the human brain including micro- and macrostructural segregation, regional specialization of function, connectivity, and temporal dynamics. Atlases are central in order to integrate such diverse information in a topographically meaningful way. It is noteworthy that the brain mapping field has developed along several major lines such as structure vs. function, postmortem vs. in vivo, individual features of the brain vs. population-based aspects, or slow vs. fast dynamics. In order to understand human brain organization, however, it seems inevitable that these different lines are integrated and combined into a multimodal human brain model. To this aim, we held a workshop to determine the constraints of a multi-modal human brain model that are needed to enable (i) an integration of different spatial and temporal scales and data modalities into a common reference system, and (ii) efficient data exchange and analysis. As detailed in this report, arriving at fully interoperable atlases of the human brain will still require much work at the frontiers of data acquisition, analysis, and representation. Among them, the latter may provide the most challenging task, in particular when it comes to representing features of vastly different scales of space, time and abstraction. The potential benefits of such an endeavor, however, clearly outweigh the problems, as only such a multi-modal human brain atlas may provide a starting point from which the complex relationships between structure, function, and connectivity may be explored.

  14. An Open Source Tool to Test Interoperability

    NASA Astrophysics Data System (ADS)

    Bermudez, L. E.

    2012-12-01

    Scientists interact with information at various levels, from gathering raw observed data to accessing portrayed, quality-controlled processed data. Geoinformatics tools help scientists with the acquisition, storage, processing, dissemination and presentation of geospatial information. Most of the interactions occur in a distributed environment between software components that take the role of either client or server. The communication between components includes protocols, encodings of messages and managing of errors. Testing of these communication components is important to guarantee proper implementation of standards. The communication between clients and servers can be ad hoc or follow standards. Following standards increases interoperability between components while reducing the time needed to develop new software. The Open Geospatial Consortium (OGC) not only coordinates the development of standards but also, within the Compliance Testing Program (CITE), provides a testing infrastructure to test clients and servers. The OGC Web-based Test Engine Facility, based on TEAM Engine, allows developers to test Web services and clients for correct implementation of OGC standards. TEAM Engine is a Java open source facility, available at SourceForge, that can be run via the command line, deployed in a web servlet container, or integrated in a developer's environment via Maven. The TEAM Engine uses the Compliance Test Language (CTL) and TestNG to test HTTP requests, SOAP services and XML instances against Schemas and Schematron-based assertions of any type of web service, not only OGC services. For example, the OGC Web Feature Service (WFS) 1.0.0 test has more than 400 test assertions. Some of these assertions include conformance of HTTP responses, conformance of GML-encoded data, proper values for elements and attributes in the XML, and correct error responses. This presentation will provide an overview of TEAM Engine, an introduction to how to test via the OGC Testing web site and
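
    The flavor of such an assertion-based test can be conveyed without TEAM Engine itself; the sketch below (the endpoint URL is a placeholder) issues a GetCapabilities request and checks two basic assertions on the response:

        import requests
        import xml.etree.ElementTree as ET

        url = "http://example.org/wfs"  # placeholder service endpoint
        resp = requests.get(url, params={"service": "WFS",
                                         "version": "1.0.0",
                                         "request": "GetCapabilities"},
                            timeout=30)

        # Assertion 1: the HTTP response must succeed.
        assert resp.status_code == 200, "server must answer GetCapabilities"
        # Assertion 2: the body must be well-formed XML with the expected root.
        root = ET.fromstring(resp.content)
        assert root.tag.endswith("WFS_Capabilities"), "unexpected root element"
        print("basic capability assertions passed")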

  15. Causal premise semantics.

    PubMed

    Kaufmann, Stefan

    2013-08-01

    The rise of causality and the attendant graph-theoretic modeling tools in the study of counterfactual reasoning has had resounding effects in many areas of cognitive science, but it has thus far not permeated the mainstream in linguistic theory to a comparable degree. In this study I show that a version of the predominant framework for the formal semantic analysis of conditionals, Kratzer-style premise semantics, allows for a straightforward implementation of the crucial ideas and insights of Pearl-style causal networks. I spell out the details of such an implementation, focusing especially on the notions of intervention on a network and backtracking interpretations of counterfactuals.

  16. Semantic Webs and Study Skills.

    ERIC Educational Resources Information Center

    Hoover, John J.; Rabideau, Debra K.

    1995-01-01

    Principles for ensuring effective use of semantic webbing in meeting study skill needs of students with learning problems are noted. Important study skills are listed, along with suggested semantic web topics for which subordinate ideas may be developed. Two semantic webs are presented, illustrating the study skills of multiple choice test-taking…

  17. Semantic Search of Web Services

    ERIC Educational Resources Information Center

    Hao, Ke

    2013-01-01

    This dissertation addresses semantic search of Web services using natural language processing. We first survey various existing approaches, focusing on the fact that the expensive costs of current semantic annotation frameworks result in limited use of semantic search for large scale applications. We then propose a vector space model based service…
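
    A vector space model over service descriptions can be sketched in a few lines; the descriptions and query below are invented, and TF-IDF with cosine ranking stands in for the dissertation's full approach:

        from sklearn.feature_extraction.text import TfidfVectorizer
        from sklearn.metrics.pairwise import cosine_similarity

        services = [
            "returns current weather conditions for a given city",
            "converts an amount between two currencies",
            "geocodes a postal address into latitude and longitude",
        ]

        vec = TfidfVectorizer(stop_words="english")
        index = vec.fit_transform(services)

        # Rank services against a natural-language query.
        query = vec.transform(["current weather for a city"])
        scores = cosine_similarity(query, index).ravel()
        print(services[scores.argmax()])  # the weather service ranks first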

  18. A Conceptual Framework to Enhance the Interoperability of Observatories among Countries, Continents and the World

    NASA Astrophysics Data System (ADS)

    Loescher, H.; Fundamental Instrument Unit

    2013-05-01

    , GEO-BON, NutNet, etc.) and domestically (e.g., NSF-CZO, USDA-LTAR, DOE-NGEE, Soil Carbon Network, etc.), there is a strong and mutual desire to assure interoperability of data. The degree of interoperability between observatories (entities) is defined by linking i) science requirements with science questions, ii) the traceability of measurements to nationally and internationally accepted standards, iii) the way data products are derived, i.e., algorithms, procedures, and methods, and iv) the bioinformatics, which broadly includes data formats, metadata, controlled vocabularies, and semantics. Here, we explore the rationale and focus areas for interoperability, the governance and work structures, example projects (NSF-NEON, EU-ICOS, and AU-TERN), and the emergent roles of scientists in these endeavors.

  19. A core observational data model for enhancing the interoperability of ontologically annotated environmental data

    NASA Astrophysics Data System (ADS)

    Schildhauer, M.; Bermudez, L. E.; Bowers, S.; Dibner, P. C.; Gries, C.; Jones, M. B.; McGuinness, D. L.; Cao, H.; Cox, S. J.; Kelling, S.; Lagoze, C.; Lapp, H.; Madin, J.

    2010-12-01

    Research in the environmental sciences often requires accessing diverse data, collected by numerous data providers over varying spatiotemporal scales, incorporating specialized measurements from a range of instruments. These measurements are typically documented using idiosyncratic, discipline-specific terms, and stored in management systems ranging from desktop spreadsheets to the Cloud, where the information is often further decomposed or stylized in unpredictable ways. This situation creates major informatics challenges for broadly discovering, interpreting, and merging the data necessary for integrative earth science research. A number of scientific disciplines have recognized these issues and have been developing semantically enhanced data storage frameworks, typically based on ontologies, to enable communities to better circumscribe and clarify the content of data objects within their domain of practice. There is concern, however, that cross-domain compatibility of these semantic solutions could become problematic. We describe here our efforts to address this issue by developing a core, unified Observational Data Model, which should greatly facilitate interoperability among the semantic solutions growing organically within diverse scientific domains. Observational Data Models have emerged independently from several distinct scientific communities, including the biodiversity sciences, ecology, evolution, geospatial sciences, and hydrology, to name a few. Informatics projects striving for data integration within each of these domains have converged on identifying "observations" and "measurements" as fundamental abstractions that provide useful "templates" through which scientific data can be linked, at the structural, composite, or even cell-value levels, to domain terms stored in ontologies or other forms of controlled vocabularies. The Scientific Observations Network, SONet (http://sonet.ecoinformatics.org) brings together a number of these observational
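
    The observation/measurement "template" abstraction can be sketched with two small record types whose fields point into domain ontologies; all URIs here are placeholders:

        from dataclasses import dataclass, field

        @dataclass
        class Measurement:
            characteristic: str  # e.g. an ontology URI for "height"
            value: float
            unit: str            # e.g. an ontology URI for "meter"

        @dataclass
        class Observation:
            entity: str          # e.g. an ontology URI for the observed taxon
            measurements: list = field(default_factory=list)

        obs = Observation(
            entity="http://example.org/taxon/quercus_rubra",
            measurements=[Measurement("http://example.org/char/height",
                                      17.3, "http://example.org/unit/meter")],
        )
        print(obs.entity, obs.measurements[0].value)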

  20. Semantator: semantic annotator for converting biomedical text to linked data.

    PubMed

    Tao, Cui; Song, Dezhao; Sharma, Deepak; Chute, Christopher G

    2013-10-01

    More than 80% of biomedical data is embedded in plain text. The unstructured nature of these text-based documents makes it challenging to easily browse and query the data of interest in them. One approach to facilitate browsing and querying biomedical text is to convert the plain text to a linked web of data, i.e., converting data originally in free text to structured formats with defined meta-level semantics. In this paper, we introduce Semantator (Semantic Annotator), a semantic-web-based environment for annotating data of interest in biomedical documents, browsing and querying the annotated data, and interactively refining annotation results if needed. Through Semantator, information of interest can be either annotated manually or semi-automatically using plug-in information extraction tools. The annotated results will be stored in RDF and can be queried using the SPARQL query language. In addition, semantic reasoners can be directly applied to the annotated data for consistency checking and knowledge inference. Semantator has been released online and was used by the biomedical ontology community, which provided positive feedback. Our evaluation results indicated that (1) Semantator can perform the annotation functionalities as designed; (2) Semantator can be adopted in real applications in clinical and translational research; and (3) the annotated results using Semantator can be easily used in Semantic-web-based reasoning tools for further inference.

  1. Semantator: annotating clinical narratives with semantic web ontologies.

    PubMed

    Song, Dezhao; Chute, Christopher G; Tao, Cui

    2012-01-01

    To facilitate clinical research, clinical data needs to be stored in a machine-processable and understandable way. Manually annotating clinical data is time-consuming. Automatic approaches (e.g., Natural Language Processing systems) have been adopted to convert such data into structured formats; however, the quality of such automatically extracted data may not always be satisfactory. In this paper, we propose Semantator, a semi-automatic tool for document annotation with Semantic Web ontologies. With a loaded free text document and an ontology, Semantator supports the creation/deletion of ontology instances for any document fragment, linking/disconnecting instances with the properties in the ontology, and also enables automatic annotation by connecting to the NCBO annotator and cTAKES. By representing annotations in Semantic Web standards, Semantator supports reasoning based upon the underlying semantics of the owl:disjointWith and owl:equivalentClass predicates. We present discussions based on user experiences of using Semantator.

  2. Semantic Web meets Integrative Biology: a survey.

    PubMed

    Chen, Huajun; Yu, Tong; Chen, Jake Y

    2013-01-01

    Integrative Biology (IB) uses experimental or computational quantitative technologies to characterize biological systems at the molecular, cellular, tissue and population levels. IB typically involves the integration of the data, knowledge and capabilities across disciplinary boundaries in order to solve complex problems. We identify a series of bioinformatics problems posed by interdisciplinary integration: (i) data integration that interconnects structured data across related biomedical domains; (ii) ontology integration that brings jargon, terminologies and taxonomies from various disciplines into a unified network of ontologies; (iii) knowledge integration that integrates disparate knowledge elements from multiple sources; (iv) service integration that builds applications out of services provided by different vendors. We argue that IB can benefit significantly from the integration solutions enabled by Semantic Web (SW) technologies. The SW enables scientists to share content beyond the boundaries of applications and websites, resulting in a web of data that is meaningful and understandable to any computer. In this review, we provide insight into how SW technologies can be used to build open, standardized and interoperable solutions for interdisciplinary integration on a global basis. We present a rich set of case studies in systems biology, integrative neuroscience, bio-pharmaceutics and translational medicine, to highlight the technical features and benefits of SW applications in IB.

  3. Environmental Attitudes Semantic Differential.

    ERIC Educational Resources Information Center

    Mehne, Paul R.; Goulard, Cary J.

    This booklet is an evaluation instrument which utilizes semantic differential data to assess environmental attitudes. Twelve concepts are included: regulated access to beaches, urban planning, dune vegetation, wetlands, future cities, reclaiming wetlands for building development, city parks, commercial development of beaches, existing cities,…

  4. Assertiveness through Semantics.

    ERIC Educational Resources Information Center

    Zuercher, Nancy T.

    1983-01-01

    Suggests that connotations of assertiveness do not convey all of its meanings, particularly the components of positive feelings, communication, and cooperation. The application of semantics can help restore the balance. Presents a model for differentiating assertive behavior and clarifying the definition. (JAC)

  5. Latent Semantic Analysis.

    ERIC Educational Resources Information Center

    Dumais, Susan T.

    2004-01-01

    Presents a literature review that covers the following topics related to Latent Semantic Analysis (LSA): (1) LSA overview; (2) applications of LSA, including information retrieval (IR), information filtering, cross-language retrieval, and other IR-related LSA applications; (3) modeling human memory, including the relationship of LSA to other…

  6. Are Terminologies Semantically Uninteresting?

    ERIC Educational Resources Information Center

    Jacobson, Sven

    Some semanticists have argued that technical vocabulary or terminology is extralinguistic and therefore semantically uninteresting. However, no boundary exists in linguistic reality between terminology and ordinary vocabulary. Rather, terminologies and ordinary language exist on a continuum, and terminology is therefore a legitimate field for…

  7. Semantic Space Analyst

    2004-04-15

    The Semantic Space Analyst (SSA) is software for analyzing a text corpus, discovering relationships among terms, and allowing the user to explore that information in different ways. It includes features for displaying and laying out terms and relationships visually, for generating such maps from manual queries, and for discovering differences between corpora. Data can also be exported to Microsoft Excel.

  8. Semantic physical science

    PubMed Central

    2012-01-01

    The articles in this special issue arise from a workshop and symposium held in January 2012 ('Semantic Physical Science'). We invited people who shared our vision for the potential of the web to support chemical and related subjects. Other than the initial invitations, we have not exercised any control over the content of the contributed articles.

  9. Universal Semantics in Translation

    ERIC Educational Resources Information Center

    Wang, Zhenying

    2009-01-01

    What and how we translate are questions often argued about. No matter what kind of answers one may give, priority in translation should be granted to meaning, especially those meanings that exist in all concerned languages. In this paper the author defines them as universal sememes, and the study of them as universal semantics, of which…

  10. Semantic labeling of digital photos by classification

    NASA Astrophysics Data System (ADS)

    Ciocca, Gianluigi; Cusano, Claudio; Schettini, Raimondo; Brambilla, Carla

    2003-01-01

    The paper addresses the problem of annotating photographs with broad semantic labels. To cope with the great variety of photos available on the Web, we have designed a hierarchical classification strategy which first classifies images as pornographic or not-pornographic. Not-pornographic images are then classified as indoor, outdoor, or close-up. On a database of over 9000 images, mostly downloaded from the web, our method achieves an average accuracy close to 90%.
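
    The two-stage decision rule is simple to express; in the sketch below the stage classifiers are stubbed with hypothetical callables, whereas the paper trains real classifiers for each stage:

        def classify_photo(features, is_porn, scene_classifier):
            """First screen pornographic content, then assign a scene label."""
            if is_porn(features):
                return "pornographic"
            return scene_classifier(features)  # 'indoor' | 'outdoor' | 'close-up'

        # Usage with trivial stand-in classifiers:
        label = classify_photo({"skin_ratio": 0.1},
                               is_porn=lambda f: f["skin_ratio"] > 0.6,
                               scene_classifier=lambda f: "outdoor")
        print(label)  # outdoor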

  11. Semantic Shot Classification in Sports Video

    NASA Astrophysics Data System (ADS)

    Duan, Ling-Yu; Xu, Min; Tian, Qi

    2003-01-01

    In this paper, we present a unified framework for semantic shot classification in sports videos. Unlike previous approaches, which focus on clustering by aggregating shots with similar low-level features, the proposed scheme makes use of domain knowledge of a specific sport to perform a top-down video shot classification, including identification of video shot classes for each sport, and supervised learning and classification of the given sports video with low-level and middle-level features extracted from the sports video. It is observed that for each sport we can predefine a small number of semantic shot classes, about 5~10, which covers 90~95% of sports broadcasting video. With the supervised learning method, we can map the low-level features to middle-level semantic video shot attributes such as dominant object motion (a player), camera motion patterns, and court shape, etc. On the basis of the appropriate fusion of those middle-level shot classes, we classify video shots into the predefined video shot classes, each of which has a clear semantic meaning. The proposed method has been tested over 4 types of sports videos: tennis, basketball, volleyball and soccer. Good classification accuracy of 85~95% has been achieved. With correctly classified sports video shots, further structural and temporal analysis, such as event detection, video skimming, table of content, etc, will be greatly facilitated.

  12. Restructuring an EHR system and the Medical Markup Language (MML) standard to improve interoperability by archetype technology.

    PubMed

    Kobayashi, Shinji; Kume, Naoto; Yoshihara, Hiroyuki

    2015-01-01

    In 2001, we developed an EHR system for regional healthcare information inter-exchange and to provide individual patient data to patients. This system was adopted in three regions in Japan. We also developed a Medical Markup Language (MML) standard for inter- and intra-hospital communications. The system was built on a legacy platform, however, and had not been appropriately maintained or updated to meet clinical requirements. To reduce future maintenance costs, we reconstructed the EHR system using archetype technology on the Ruby on Rails platform, and generated MML-equivalent forms from archetypes. The system was deployed as a cloud-based system for preliminary use as a regional EHR. The system now has the capability to catch up with new requirements, maintaining semantic interoperability with archetype technology. It is also more flexible than the legacy EHR system.

  14. Catalog Federation and Interoperability for Geoinformatics

    NASA Astrophysics Data System (ADS)

    Memon, A.; Lin, K.; Baru, C.

    2008-12-01

    With the increasing proliferation of online resources in the geosciences, including data, tools, and software services, there is also a proliferation of catalogs containing metadata that describe these resources. To realize the vision articulated in the NSF Workshop on Building a National Geoinformatics System, March 2007, where a user can sit at a terminal and easily search, discover, integrate and use distributed geoscience resources, it will be essential that a search request be able to traverse these multiple metadata catalogs. In this paper, we describe our effort at prototyping catalog interoperability across multiple metadata catalogs. An example of a metadata catalog is the one employed in the GEON Project (www.geongrid.org). The central GEON catalog can be searched using spatial, temporal, and other metadata-based search criteria. The search can be invoked as a Web service and, therefore, can be embedded in any software application. There has been a requirement from some of the GEON collaborators (for example, at the University of Hyderabad, India and the Navajo Technical College, New Mexico) to deploy their own catalogs, to store information about their resources locally, while they publish some of this information for broader access and use. Thus, a search must now be able to span multiple, independent GEON catalogs. Next, some of our collaborators, e.g. GEO Grid (Global Earth Observations Grid) in Japan, are implementing the Catalog Services for the Web (CS-W) standard for their catalog, thereby requiring the search to span across catalogs implemented using the CS-W standard as well. Finally, we have recently deployed a search service to access all EarthScope data products, which are distributed across organizations in Seattle, WA (IRIS), Boulder, CO (UNAVCO), and Potsdam, Germany (ICDP/GFZ). This service essentially implements a virtual catalog (the actual catalogs and data are stored at the remote locations). So, there is the need to incorporate such 3rd
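
    The record above describes a virtual catalog spanning independent catalogs. A minimal sketch of that fan-out pattern follows; the endpoint URLs, query parameters and result schema are hypothetical placeholders, not the actual GEON or CS-W interfaces.

```python
# A sketch of federated catalog search: one request fanned out to several
# independent catalogs and the results merged with provenance attached.
import requests

ENDPOINTS = {
    "geon_local": "https://catalog.example.org/geon/search",  # hypothetical
    "csw_remote": "https://catalog.example.org/csw",          # hypothetical
}

def federated_search(bbox, start, end):
    """Send the same spatial/temporal query to every registered catalog."""
    hits = []
    for name, url in ENDPOINTS.items():
        try:
            r = requests.get(url, params={"bbox": ",".join(map(str, bbox)),
                                          "start": start, "end": end}, timeout=10)
            r.raise_for_status()
            for record in r.json().get("records", []):
                record["source_catalog"] = name  # remember where the hit came from
                hits.append(record)
        except requests.RequestException:
            continue  # one failing catalog should not break the federated search
    return hits

results = federated_search(bbox=(-110, 30, -100, 40), start="2008-01-01", end="2008-12-31")
```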

  15. Metaworkflows and Workflow Interoperability for Heliophysics

    NASA Astrophysics Data System (ADS)

    Pierantoni, Gabriele; Carley, Eoin P.

    2014-06-01

    Heliophysics is a relatively new branch of physics that investigates the relationship between the Sun and the other bodies of the solar system. To investigate such relationships, heliophysicists can rely on various tools developed by the community. Some of these tools are on-line catalogues that list events (such as Coronal Mass Ejections, CMEs) and their characteristics as they were observed on the surface of the Sun or on the other bodies of the Solar System. Other tools offer on-line data analysis and access to images and data catalogues. During their research, heliophysicists often perform investigations that need to coordinate several of these services and to repeat these complex operations until the phenomena under investigation are fully analyzed. Heliophysicists combine the results of these services; this kind of service orchestration is best suited to workflows. This approach has been investigated in the HELIO project, which developed an infrastructure for a Virtual Observatory for Heliophysics and implemented service orchestration using TAVERNA workflows. HELIO developed a set of workflows that proved to be useful but lacked flexibility and re-usability. The TAVERNA workflows also needed to be executed directly in the TAVERNA workbench, and this forced all users to learn how to use the workbench. Within the SCI-BUS and ER-FLOW projects, we have started an effort to re-think and re-design the heliophysics workflows with the aim of fostering re-usability and ease of use. We base our approach on two key concepts: that of meta-workflows and that of workflow interoperability. We have divided the produced workflows into three different layers. The first layer is Basic Workflows, developed in both the TAVERNA and WS-PGRADE languages. They are building blocks that users compose to address their scientific challenges. They implement well-defined Use Cases that usually involve only one service. The second layer is Science Workflows, usually developed in TAVERNA. They

  16. Moving Controlled Vocabularies into the Semantic Web

    NASA Astrophysics Data System (ADS)

    Thomas, R.; Lowry, R. K.; Kokkinaki, A.

    2015-12-01

    Having placed Linked Data tooling over a single SPARQL end point, the obvious future development for this system is to support semantic interoperability outside NVS by the incorporation of federated SPARQL end points in the USA and Australia during the ODIP II project. [1] https://vocab.nerc.ac.uk/sparql [2] https://www.bodc.ac.uk/data/codes_and_formats/vocabulary_search/
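
    Since the record names a public SPARQL end point, a small sketch of querying it follows. It assumes the NVS vocabularies are exposed as SKOS concepts; the exact graph layout on the server may differ.

```python
# A minimal sketch of querying the NVS SPARQL end point listed above,
# assuming a SKOS-style vocabulary; adjust the query to the real graph.
from SPARQLWrapper import SPARQLWrapper, JSON

sparql = SPARQLWrapper("https://vocab.nerc.ac.uk/sparql")
sparql.setQuery("""
    PREFIX skos: <http://www.w3.org/2004/02/skos/core#>
    SELECT ?concept ?label WHERE {
        ?concept skos:prefLabel ?label .
        FILTER(CONTAINS(LCASE(STR(?label)), "temperature"))
    } LIMIT 10
""")
sparql.setReturnFormat(JSON)
for row in sparql.query().convert()["results"]["bindings"]:
    print(row["concept"]["value"], row["label"]["value"])
```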

  17. OGC and Grid Interoperability in enviroGRIDS Project

    NASA Astrophysics Data System (ADS)

    Gorgan, Dorian; Rodila, Denisa; Bacu, Victor; Giuliani, Gregory; Ray, Nicolas

    2010-05-01

    EnviroGRIDS (Black Sea Catchment Observation and Assessment System supporting Sustainable Development) [1] is a 4-year FP7 project aiming to address the subjects of ecologically unsustainable development and inadequate resource management. The project develops a Spatial Data Infrastructure of the Black Sea Catchment region. The geospatial technologies offer very specialized functionality for Earth Science oriented applications, while the Grid oriented technology is able to support distributed and parallel processing. One challenge of the enviroGRIDS project is the interoperability between geospatial and Grid infrastructures, providing both the basic and the extended features of the two technologies. The geospatial interoperability technology has been promoted as a way of dealing with large volumes of geospatial data in distributed environments through the development of interoperable Web service specifications proposed by the Open Geospatial Consortium (OGC), with applications spread across multiple fields but especially in Earth observation research. Due to the huge volumes of data available in the geospatial domain and the additional issues they introduce (data management, secure data transfer, data distribution and data computation), the need for an infrastructure capable of managing all those problems becomes an important aspect. The Grid promotes and facilitates the secure interoperation of geospatial heterogeneous distributed data within a distributed environment, the creation and management of large distributed computational jobs, and assures a security level for communication and transfer of messages based on certificates. This presentation analyses and discusses the most significant use cases for enabling OGC Web services interoperability with the Grid environment and focuses on the description and implementation of the most promising one. In these use cases we give special attention to issues such as: the relations between computational grid and

  18. Operational Interoperability Challenges on the Example of GEOSS and WIS

    NASA Astrophysics Data System (ADS)

    Heene, M.; Buesselberg, T.; Schroeder, D.; Brotzer, A.; Nativi, S.

    2015-12-01

    The following poster highlights the operational interoperability challenges using the example of the Global Earth Observation System of Systems (GEOSS) and the World Meteorological Organization Information System (WIS). At the heart of both systems is a catalogue of earth observation data, products and services, but with different metadata management concepts. While in WIS a strong governance with its own metadata profile exists for the hundreds of thousands of metadata records, GEOSS adopted a more open approach for its ten million records. Furthermore, the development of WIS, as an operational system, follows a roadmap with committed downwards compatibility, while the GEOSS development process is more agile. The poster discusses how interoperability can be reached for the different metadata management concepts and how a proxy concept helps to couple two systems which follow different development methodologies. Furthermore, the poster highlights the importance of monitoring and backup concepts as a verification method for operational interoperability.

  19. Data Access, Discovery and Interoperability in the European Context

    NASA Astrophysics Data System (ADS)

    Genova, Francoise

    2015-12-01

    European Virtual Observatory (VO) activities have been coordinated by a series of projects funded by the European Commission. Three pillars were identified: support to the data providers for implementation of their data in the VO framework; support to the astronomical community for their usage of VO-enabled data and tools; and technological work for updating the VO framework of interoperability standards and tools. A new phase is beginning with the ASTERICS cluster project. The ASTERICS Work Package "Data Access, Discovery and Interoperability" aims at making the data from the ESFRI projects and their pathfinders available for discovery and usage, interoperable in the VO framework and accessible with VO-enabled common tools. VO teams and representatives of ESFRI and pathfinder projects and of EGO/VIRGO are engaged together in the Work Package. ESO is associated with the project, which is also working closely with ESA. The three pillars identified for coordinating European VO activities are tackled.

  20. Plugfest 2009: Global Interoperability in Telerobotics and Telemedicine

    PubMed Central

    King, H. Hawkeye; Hannaford, Blake; Kwok, Ka-Wai; Yang, Guang-Zhong; Griffiths, Paul; Okamura, Allison; Farkhatdinov, Ildar; Ryu, Jee-Hwan; Sankaranarayanan, Ganesh; Arikatla, Venkata; Tadano, Kotaro; Kawashima, Kenji; Peer, Angelika; Schauß, Thomas; Buss, Martin; Miller, Levi; Glozman, Daniel; Rosen, Jacob; Low, Thomas

    2014-01-01

    Despite the great diversity of teleoperator designs and applications, their underlying control systems have many similarities. These similarities can be exploited to enable interoperability between heterogeneous systems. We have developed a network data specification, the Interoperable Telerobotics Protocol, that can be used for Internet-based control of a wide range of teleoperators. In this work we test interoperable telerobotics on the global Internet, focusing on the telesurgery application domain. Fourteen globally dispersed telerobotic master and slave systems were connected in thirty trials in one twenty-four-hour period. Users performed common manipulation tasks to demonstrate effective master-slave operation. With twenty-eight (93%) successful, unique connections, the results show a high potential for standardizing telerobotic operation. Furthermore, new paradigms for telesurgical operation and training are presented, including a networked surgery trainer and upper-limb exoskeleton control of micro-manipulators. PMID:24748993

  1. RESTFul based heterogeneous Geoprocessing workflow interoperation for Sensor Web Service

    NASA Astrophysics Data System (ADS)

    Yang, Chao; Chen, Nengcheng; Di, Liping

    2012-10-01

    Advanced sensors on board satellites offer detailed Earth observations. A workflow is one approach for designing, implementing and constructing a flexible and live link between these sensors' resources and users. It can coordinate, organize and aggregate the distributed sensor Web services to meet the requirements of a complex Earth observation scenario. A RESTful-based workflow interoperation method is proposed to integrate heterogeneous workflows into an interoperable unit. The Atom protocols are applied to describe and manage workflow resources. The XML Process Definition Language (XPDL) and Business Process Execution Language (BPEL) workflow standards are applied to structure a workflow that accesses sensor information and one that processes it, separately. Then, a scenario for nitrogen dioxide (NO2) from a volcanic eruption is used to investigate the feasibility of the proposed method. The RESTful-based workflow interoperation system can describe, publish, discover, access and coordinate heterogeneous Geoprocessing workflows.

  2. Interoperable and standard e-Health solution over Bluetooth.

    PubMed

    Martinez, I; Del Valle, P; Munoz, P; Trigo, J D; Escayola, J; Martínez-Espronceda, M; Muñoz, A; Serrano, L; Garcia, J

    2010-01-01

    The new paradigm of e-Health demands open sensors and middleware components that permit transparent integration and end-to-end interoperability of new personal health devices. The use of standards seems to be the internationally adopted way to solve these problems. This paper presents the implementation of an end-to-end standards-based e-Health solution. This includes the ISO/IEEE 11073 standard for the interoperability of the medical devices in the patient environment and the EN13606 standard for the interoperable exchange of the Electronic Healthcare Record. The design strictly fulfills all the technical features of the most recent versions of both standards. The implemented prototype has been tested in a laboratory environment to demonstrate its feasibility for further transfer to the healthcare system.

  3. Metadata behind the Interoperability of Wireless Sensor Networks.

    PubMed

    Ballari, Daniela; Wachowicz, Monica; Callejo, Miguel Angel Manso

    2009-01-01

    Wireless Sensor Networks (WSNs) produce changes of status that are frequent, dynamic and unpredictable, and cannot be represented using a linear cause-effect approach. Consequently, a new approach is needed to handle these changes in order to support dynamic interoperability. Our approach is to introduce the notion of context as an explicit representation of changes of WSN status inferred from metadata elements, which, in turn, leads towards a decision-making process about how to maintain dynamic interoperability. This paper describes the developed context model to represent and reason over different WSN statuses based on four types of contexts, which have been identified as sensing, node, network and organisational contexts. The reasoning has been addressed by developing contextualising and bridge rules. As a result, we were able to demonstrate how contextualising rules have been used to reason on changes of WSN status as a first step towards maintaining dynamic interoperability.
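
    As a rough sketch of the contextualising and bridge rules described above, the following maps hypothetical metadata elements onto the four context types and derives an action. The fields, thresholds and actions are invented for illustration and are not the paper's model.

```python
# A sketch, assuming metadata arrives as a flat dict; the field names and
# thresholds below are hypothetical stand-ins for real WSN metadata elements.

def contextualise(metadata):
    """Derive sensing/node/network/organisational context from raw metadata."""
    return {
        "sensing": "degraded" if metadata["sampling_gap_s"] > 60 else "nominal",
        "node": "low_power" if metadata["battery_pct"] < 20 else "ok",
        "network": "partitioned" if metadata["lost_links"] > 2 else "connected",
        "organisational": metadata.get("deployment_owner", "unknown"),
    }

def bridge_rules(context):
    """Bridge rules: decide how to keep the WSN interoperable in this context."""
    if context["node"] == "low_power":
        return "reduce reporting rate and notify gateway"
    if context["network"] == "partitioned":
        return "buffer observations locally until links recover"
    return "no action needed"

meta = {"sampling_gap_s": 90, "battery_pct": 15, "lost_links": 0}
print(bridge_rules(contextualise(meta)))
```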

  5. Context-Aware Adaptive Hybrid Semantic Relatedness in Biomedical Science

    NASA Astrophysics Data System (ADS)

    Emadzadeh, Ehsan

    Text mining of biomedical literature and clinical notes is a very active field of research in biomedical science. Semantic analysis is one of the core modules for different Natural Language Processing (NLP) solutions. Methods for calculating the semantic relatedness of two concepts can be very useful in solutions to problems such as relationship extraction, ontology creation and question answering [1-6]. Several techniques exist for calculating the semantic relatedness of two concepts. These techniques utilize different knowledge sources and corpora. So far, researchers have attempted to find the best hybrid method for each domain by combining semantic relatedness techniques and data sources manually. In this work, we attempt to eliminate the need for manually combining semantic relatedness methods for each new context or resource by proposing an automated method that finds the combination of semantic relatedness techniques and resources achieving the best semantic relatedness score in every context. This may help the research community find the best hybrid method for each context, considering the available algorithms and resources.
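
    The automated-combination idea can be sketched as a small search over mixing weights scored against a gold standard for the target context. The measures, word pairs and human scores below are hypothetical stand-ins for real relatedness techniques and resources.

```python
# A sketch, not the dissertation's method: grid-search a mixing weight for
# two stand-in relatedness measures against hypothetical human judgments.
from scipy.stats import spearmanr

pairs = [("heart", "blood"), ("heart", "lung"), ("heart", "car")]
path_scores   = {pairs[0]: 0.8, pairs[1]: 0.7, pairs[2]: 0.1}   # e.g. ontology-based
corpus_scores = {pairs[0]: 0.6, pairs[1]: 0.5, pairs[2]: 0.2}   # e.g. corpus-based
gold          = {pairs[0]: 0.9, pairs[1]: 0.7, pairs[2]: 0.05}  # human ratings

best_w, best_rho = None, -2.0
for w in [i / 10 for i in range(11)]:  # try mixing weights 0.0 .. 1.0
    combined = [w * path_scores[p] + (1 - w) * corpus_scores[p] for p in pairs]
    rho, _ = spearmanr(combined, [gold[p] for p in pairs])
    if rho > best_rho:
        best_w, best_rho = w, rho
print(f"best weight for this context: {best_w} (Spearman rho = {best_rho:.2f})")
```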

  6. Semantic similarity measure in biomedical domain leverage web search engine.

    PubMed

    Chen, Chi-Huang; Hsieh, Sheau-Ling; Weng, Yung-Ching; Chang, Wen-Yung; Lai, Feipei

    2010-01-01

    Semantic similarity measures play an essential role in Information Retrieval and Natural Language Processing. In this paper we propose a page-count-based semantic similarity measure and apply it in biomedical domains. Previous research on semantic-web-related applications has deployed various semantic similarity measures. Despite the usefulness of the measurements in those applications, measuring the semantic similarity between two terms remains a challenging task. The proposed method exploits page counts returned by the Web search engine. We define various similarity scores for two given terms P and Q, using the page counts for querying P, Q and P AND Q. Moreover, we propose a novel approach to compute semantic similarity using lexico-syntactic patterns with page counts. These different similarity scores are integrated by adapting support vector machines, to leverage the robustness of semantic similarity measures. Experimental results on two datasets achieve correlation coefficients of 0.798 on the dataset provided by A. Hliaoutakis, 0.705 on the dataset provided by T. Pedersen with physician scores and 0.496 on the dataset provided by T. Pedersen et al. with expert scores.
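
    Page-count scores of the kind this record describes can be illustrated with two common definitions (WebJaccard and a PMI variant); the paper's exact formulas may differ, and hits() below is a placeholder for the real search-engine call.

```python
# A sketch of page-count-based similarity from counts for P, Q and "P AND Q".
import math

N = 1e10  # assumed number of documents indexed by the search engine

def hits(query):
    # Placeholder: a real system would call the search engine's API here.
    fake = {"P": 3_000_000, "Q": 1_500_000, "P AND Q": 300_000}
    return fake[query]

def web_jaccard(p, q, pq):
    return 0.0 if pq == 0 else pq / (p + q - pq)

def web_pmi(p, q, pq):
    return 0.0 if pq == 0 else math.log2((pq / N) / ((p / N) * (q / N)))

p, q, pq = hits("P"), hits("Q"), hits("P AND Q")
print("WebJaccard:", web_jaccard(p, q, pq), "WebPMI:", web_pmi(p, q, pq))
```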

  7. Mashup of Tools through Interoperability Standards RSS, RDF, KML and XSL

    NASA Astrophysics Data System (ADS)

    Robinson, E. M.; Kieffer, M.; Kovacs, S.; Falke, S. R.; Husar, R. B.

    2007-12-01

    Considerable effort is being devoted to the development and testing of interoperability standards for data access, such as the OGC Web Services WMS/WCS/WFS. The federated data system, DataFed, developed as a broad partnership, utilizes these data access standards to deliver a rich array of quantitative air quality and geospatial data to any client application. Recent advances in standardization also allow the linking of distributed applications through the exchange of additional data types: (1) the Really Simple Syndication (RSS) standard facilitates the exchange of simple text records (e.g. annotated bookmarks); (2) the Resource Description Framework (RDF) standard allows the formal description of semantically rich data structures; (3) the Google Keyhole Markup Language (KML) is the standard way to encode/render geospatial data and metadata; (4) the EMBED standard facilitates the incorporation of web content from one server into a web page hosted on another server. Use of these standards now permits the easy creation of user-defined application "mashups" that transform/integrate various data/metadata streams. Wikis, originally used to collaboratively write documents, are now also used as a host for integrating mashups of this type. In this paper, we illustrate these application mashups. We show an example where the wiki host receives RSS feeds from Del.icio.us, blogs, etc. A semantically enhanced wiki is used to create and manage structured metadata, which can then be shared through the RDF standard feed. Content from Google Maps, videos from YouTube, and PPT slides from SlideShare are also integrated into wiki pages through the EMBED standard. Mashups between the DataFed data access system, the wiki and GoogleEarth using KML, XSL and RDF are also demonstrated.

  8. An Interoperable Framework to Access In-Situ OPeNDAP Data

    NASA Astrophysics Data System (ADS)

    Li, W.; Yang, C.; Li, Z.; Li, J.; Zhu, H.; Xie, J.

    2008-12-01

    A huge amount of in-situ ocean observation and hydrology related data are made available to scientists through a uniform access interface, the OPeNDAP interface. However, there are few interoperable clients that support the interface, and existing clients only provide data access to a specific OPeNDAP server rather than employing flexible data access mechanisms. Moreover, current data visualization is limited to 2-D, which is not very intuitive for end users. To overcome these shortcomings, we developed a linkage and a client to provide a compatible and interactive data access and visualization interface for both gridded and sequence data from multiple remote OPeNDAP servers providing NetCDF, HDF5 and other data formats. The system works as follows: 1) to fully understand the data structures, attributes and knowledge of data from different OPeNDAP servers, a semantic technique is employed, and a semantic mapping table defining the usage conventions helps parse the given metadata description files; 2) after the variable, time interval and spatial extent are selected, the request constructor is started to organize the constraint expression for subsetting the datasets; 3) the multi-threading-enabled downloading mechanism downloads the subset datasets in the intermediate format (DODS) simultaneously. Once all the datasets are downloaded, an applet-based Java plug-in is able to support 3-D visualization by rendering the data with an extended NASA World Wind. If the data are in a time sequence, an animation is automatically generated and displayed within World Wind. Meanwhile, a KML file is generated automatically for users to visualize the data in Google Earth.
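
    The request constructor described above builds an OPeNDAP constraint expression. The sketch below assembles one using the standard hyperslab syntax; the dataset URL, variable name and index ranges are hypothetical, and real clients resolve indices from coordinate values.

```python
# A sketch of building an OPeNDAP subsetting URL with hyperslab constraints.
BASE = "http://example.org/opendap/sst.nc"  # hypothetical OPeNDAP dataset

def constraint(var, t, lat, lon):
    """OPeNDAP hyperslab syntax: var[start:stop] per dimension."""
    return f"{BASE}.dods?{var}[{t[0]}:{t[1]}][{lat[0]}:{lat[1]}][{lon[0]}:{lon[1]}]"

url = constraint("sea_surface_temperature", t=(0, 11), lat=(100, 200), lon=(300, 400))
print(url)  # fetch with any HTTP client, or hand the base URL to pydap/netCDF4
```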

  9. Interoperable Archetypes With a Three Folded Terminology Governance.

    PubMed

    Pederson, Rune; Ellingsen, Gunnar

    2015-01-01

    The use of openEHR archetypes increases the interoperability of clinical terminology, and in doing so improves the availability of clinical terminology for both primary and secondary purposes. Where clinical terminology is employed in the EPR system, research reports conflicting results for the use of structuring and standardization as measurements of success. In order to elucidate this concept, this paper focuses on the effort to establish a national repository for openEHR-based archetypes in Norway, where clinical terminology could be included with a three-folded benefit for interoperability. PMID:26262236

  10. 75 FR 63462 - Smart Grid Interoperability Standards; Notice of Docket Designation for Smart Grid...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-10-15

    ... From the Federal Register Online via the Government Publishing Office DEPARTMENT OF ENERGY Federal Energy Regulatory Commission Smart Grid Interoperability Standards; Notice of Docket Designation for Smart Grid Interoperability Standards October 7, 2010. 1. The Energy Independence and Security Act...

  11. 78 FR 10169 - Federal Advisory Committee Act; Communications Security, Reliability, and Interoperability Council

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-02-13

    ... to ensure the security, reliability, and interoperability of communications systems. On March 19... From the Federal Register Online via the Government Publishing Office FEDERAL COMMUNICATIONS COMMISSION Federal Advisory Committee Act; Communications Security, Reliability, and Interoperability...

  12. 78 FR 15722 - Federal Advisory Committee Act; Communications Security, Reliability, and Interoperability Council

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-03-12

    ... security, reliability, and interoperability of communications systems. On March 19, 2011, the FCC, pursuant... From the Federal Register Online via the Government Publishing Office FEDERAL COMMUNICATIONS COMMISSION Federal Advisory Committee Act; Communications Security, Reliability, and Interoperability...

  13. 78 FR 46582 - Federal Advisory Committee Act; Communications Security, Reliability, and Interoperability Council

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-08-01

    ... to ensure the security, reliability, and interoperability of communications systems. On March 19... From the Federal Register Online via the Government Publishing Office FEDERAL COMMUNICATIONS COMMISSION Federal Advisory Committee Act; Communications Security, Reliability, and Interoperability...

  14. 77 FR 67815 - Federal Advisory Committee Act; Communications Security, Reliability, and Interoperability Council

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-11-14

    ..., reliability, and interoperability of communications systems. On March 19, 2011, the FCC, pursuant to the... From the Federal Register Online via the Government Publishing Office FEDERAL COMMUNICATIONS COMMISSION Federal Advisory Committee Act; Communications Security, Reliability, and Interoperability...

  15. 76 FR 4102 - Smart Grid Interoperability Standards; Supplemental Notice of Technical Conference

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-01-24

    ... From the Federal Register Online via the Government Publishing Office DEPARTMENT OF ENERGY Federal Energy Regulatory Commission Smart Grid Interoperability Standards; Supplemental Notice of Technical... Technical Conference on Smart Grid Interoperability Standards will be held on Monday, January 31,...

  16. 75 FR 417 - National Protection and Programs Directorate; Statewide Communication Interoperability Plan...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-01-05

    ... SECURITY National Protection and Programs Directorate; Statewide Communication Interoperability Plan...: Statewide Communication Interoperability Plan Implementation Report. Form: Not Applicable. OMB Number: 1670... Emergency Communications Grant Program (IECGP) (6 U.S.C. 579) comply with the Statewide...

  17. 75 FR 21011 - National Protection and Programs Directorate; Statewide Communication Interoperability Plan...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-04-22

    ... SECURITY National Protection and Programs Directorate; Statewide Communication Interoperability Plan... concerning New Information Collection Request, Statewide Communication Interoperability Plan Implementation... January 5, 2010, at 75 FR 417, for a 60-day public comment period. DHS received no comments. The...

  18. Development of high performance scientific components for interoperability of computing packages

    SciTech Connect

    Gulabani, Teena Pratap

    2008-01-01

    Three major high performance quantum chemistry computational packages, NWChem, GAMESS and MPQC, have been developed by different research efforts following different design patterns. The goal is to achieve interoperability among these packages by overcoming the challenges caused by the different communication patterns and software design of each of these packages. Chemistry algorithms are hard and time-consuming to develop; integration of large quantum chemistry packages will allow resource sharing and thus avoid reinvention of the wheel. Creating connections between these incompatible packages is the major motivation of the proposed work. This interoperability is achieved by bringing the benefits of Component Based Software Engineering through a plug-and-play component framework called the Common Component Architecture (CCA). In this thesis, I present a strategy and process used for interfacing two widely used and important computational chemistry methodologies: Quantum Mechanics and Molecular Mechanics. To show the feasibility of the proposed approach, the Tuning and Analysis Utility (TAU) has been coupled with the NWChem code and its CCA components. Results show that the overhead is negligible when compared to the ease and potential of organizing and coping with large-scale software applications.

  19. EXACT2: the semantics of biomedical protocols

    PubMed Central

    2014-01-01

    Background The reliability and reproducibility of experimental procedures is a cornerstone of scientific practice. There is a pressing technological need for the better representation of biomedical protocols to enable other agents (human or machine) to better reproduce results. A framework that ensures that all information required for the replication of experimental protocols is recorded is essential to achieving reproducibility. Methods We have developed the ontology EXACT2 (EXperimental ACTions), which is designed to capture the full semantics of biomedical protocols required for their reproducibility. To construct EXACT2 we manually inspected hundreds of published and commercial biomedical protocols from several areas of biomedicine. After establishing a clear pattern for extracting the required information, we utilized text-mining tools to translate the protocols into a machine-amenable format. We have verified the utility of EXACT2 through the successful processing of previously 'unseen' (not used for the construction of EXACT2) protocols. Results The paper reports on a fundamentally new version of EXACT2 that supports the semantically-defined representation of biomedical protocols. The ability of EXACT2 to capture the semantics of biomedical procedures was verified through a text mining use case, in which EXACT2 is used as a reference model for text mining tools to identify terms pertinent to experimental actions, and their properties, in biomedical protocols expressed in natural language. An EXACT2-based framework for the translation of biomedical protocols to a machine-amenable format is proposed. Conclusions The EXACT2 ontology is sufficient to record, in a machine-processable form, the essential information about biomedical protocols. EXACT2 defines explicit semantics of experimental actions, and can be used by various computer applications. It can serve as a reference model for the translation of biomedical protocols in natural language into a semantically

  20. Semantic interpretation of nominalizations

    SciTech Connect

    Hull, R.D.; Gomez, F.

    1996-12-31

    A computational approach to the semantic interpretation of nominalizations is described. Interpretation of nominalizations involves three tasks: deciding whether the nominalization is being used in a verbal or non-verbal sense; disambiguating the nominalized verb when a verbal sense is used; and determining the fillers of the thematic roles of the verbal concept or predicate of the nominalization. A verbal sense can be recognized by the presence of modifiers that represent the arguments of the verbal concept. It is these same modifiers which provide the semantic clues to disambiguate the nominalized verb. In the absence of explicit modifiers, heuristics are used to discriminate between verbal and non-verbal senses. A correspondence between verbs and their nominalizations is exploited so that only a small amount of additional knowledge is needed to handle the nominal form. These methods are tested in the domain of encyclopedic texts and the results are shown.

  1. Living With Semantic Dementia

    PubMed Central

    Sage, Karen; Wilkinson, Ray; Keady, John

    2014-01-01

    Semantic dementia is a variant of frontotemporal dementia and is a recently recognized diagnostic condition. There has been some research quantitatively examining care partner stress and burden in frontotemporal dementia. There are, however, few studies exploring the subjective experiences of family members caring for those with frontotemporal dementia. Increased knowledge of such experiences would allow service providers to tailor intervention, support, and information better. We used a case study design, with thematic narrative analysis applied to interview data, to describe the experiences of a wife and son caring for a husband/father with semantic dementia. Using this approach, we identified four themes: (a) living with routines, (b) policing and protecting, (c) making connections, and (d) being adaptive and flexible. Each of these themes was shared and extended, with the importance of routines in everyday life highlighted. The implications for policy, practice, and research are discussed. PMID:24532121

  2. Practical Semantic Astronomy

    NASA Astrophysics Data System (ADS)

    Graham, Matthew; Gray, N.; Burke, D.

    2010-01-01

    Many activities in the era of data-intensive astronomy are predicated upon some transference of domain knowledge and expertise from human to machine. The semantic infrastructure required to support this is no longer a pipe dream of computer science but a set of practical engineering challenges, more concerned with deployment and performance details than AI abstractions. The application of such ideas promises to help in such areas as contextual data access, exploiting distributed annotation and heterogeneous sources, and intelligent data dissemination and discovery. In this talk, we will review the status and use of semantic technologies in astronomy, particularly to address current problems in astroinformatics, with such projects as SKUA and AstroCollation.

  3. Postmarketing Safety Study Tool: A Web Based, Dynamic, and Interoperable System for Postmarketing Drug Surveillance Studies.

    PubMed

    Sinaci, A Anil; Laleci Erturkmen, Gokce B; Gonul, Suat; Yuksel, Mustafa; Invernizzi, Paolo; Thakrar, Bharat; Pacaci, Anil; Cinar, H Alper; Cicekli, Nihan Kesim

    2015-01-01

    Postmarketing drug surveillance is a crucial aspect of the clinical research activities in pharmacovigilance and pharmacoepidemiology. Successful utilization of available Electronic Health Record (EHR) data can complement and strengthen postmarketing safety studies. In terms of the secondary use of EHRs, access and analysis of patient data across different domains are critical factors; we address this data interoperability problem between EHR systems and clinical research systems in this paper. We demonstrate that this problem can be solved at an upper level with the use of common data elements in a standardized fashion so that clinical researchers can work with different EHR systems independently of the underlying information model. The Postmarketing Safety Study Tool lets clinical researchers extract data from different EHR systems by designing data collection set schemas through common data elements. The tool interacts with a semantic metadata registry through the IHE data element exchange profile. The Postmarketing Safety Study Tool and its supporting components have been implemented and deployed on the central data warehouse of the Lombardy region, Italy, which contains anonymized records of about 16 million patients with over 10 years of longitudinal data on average. Clinical researchers at Roche validated the tool with real-life use cases. PMID:26543873

  5. Complex Semantic Networks

    NASA Astrophysics Data System (ADS)

    Teixeira, G. M.; Aguiar, M. S. F.; Carvalho, C. F.; Dantas, D. R.; Cunha, M. V.; Morais, J. H. M.; Pereira, H. B. B.; Miranda, J. G. V.

    Verbal language is a dynamic mental process. Ideas emerge by means of the selection of words from subjective and individual characteristics throughout the oral discourse. The goal of this work is to characterize the complex network of word associations that emerges from an oral discourse on a discourse topic. To that end, the concepts of associative incidence and fidelity have been elaborated; they represent the probability of occurrence of pairs of words in the same sentence across the whole oral discourse. Semantic networks of word associations were constructed, where the words are represented as nodes and an edge is created when the incidence-fidelity index between a pair of words exceeds a numerical limit (0.001). Twelve oral discourses were studied. The networks generated from these oral discourses present the typical behavior of complex networks; their indices were calculated and their topologies characterized. The indices of these networks obtained for each incidence-fidelity limit exhibit a critical value at which the semantic network has maximum conceptual information and minimum residual associations. Semantic networks generated by this incidence-fidelity limit depict a pattern of hierarchical classes that represent the different contexts used in the oral discourse.
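
    A compact sketch of the incidence-fidelity construction follows: estimate per-sentence co-occurrence probabilities for word pairs and keep edges above the 0.001 limit quoted in the record. The toy sentences and naive tokenisation are for illustration only.

```python
# A sketch: co-occurrence probability of word pairs within sentences,
# thresholded to build a semantic network; tokenisation is deliberately naive.
from itertools import combinations
from collections import Counter
import networkx as nx

sentences = [
    "ideas emerge from words in discourse",
    "words in discourse form networks",
    "networks of words carry meaning",
]

pair_counts, total = Counter(), len(sentences)
for s in sentences:
    words = set(s.split())
    pair_counts.update(frozenset(p) for p in combinations(sorted(words), 2))

G = nx.Graph()
for pair, n in pair_counts.items():
    if n / total > 0.001:  # incidence-fidelity threshold from the record
        a, b = tuple(pair)
        G.add_edge(a, b, weight=n / total)

print(G.number_of_nodes(), "nodes,", G.number_of_edges(), "edges")
```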

  6. 47 CFR 0.192 - Emergency Response Interoperability Center.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... Public Safety and Homeland Security Bureau to develop, recommend, and administer policy goals, objectives... Organization Public Safety and Homeland Security Bureau § 0.192 Emergency Response Interoperability Center. (a... extent permitted by applicable law, the Chief of the Public Safety and Homeland Security Bureau...

  7. 47 CFR 0.192 - Emergency Response Interoperability Center.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... Public Safety and Homeland Security Bureau to develop, recommend, and administer policy goals, objectives... Organization Public Safety and Homeland Security Bureau § 0.192 Emergency Response Interoperability Center. (a... extent permitted by applicable law, the Chief of the Public Safety and Homeland Security Bureau...

  8. The role of markup for enabling interoperability in health informatics

    PubMed Central

    McKeever, Steve; Johnson, David

    2015-01-01

    Interoperability is the faculty of making information systems work together. In this paper we will distinguish a number of different forms that interoperability can take and show how they are realized on a variety of physiological and health care use cases. The last 15 years have seen the rise of very cheap digital storage, both on and off site. With the advent of the Internet of Things, people's expectations are for greater interconnectivity and seamless interoperability. The potential impact these technologies have on healthcare is dramatic: from improved diagnoses through immediate access to a patient's electronic health record, to in silico modeling of organs and early-stage drug trials, to predictive medicine based on top-down modeling of disease progression and treatment. We will begin by looking at the underlying technology, classify the various kinds of interoperability that exist in the field, and discuss how they are realized. We conclude with a discussion on future possibilities that big data and further standardizations will enable. PMID:26042043

  9. Global Interoperability of Broadband Networks (GIBN): Project Overview

    NASA Technical Reports Server (NTRS)

    DePaula, Ramon P.

    1998-01-01

    Various issues associated with the Global Interoperability of Broadband Networks (GIBN) are presented in viewgraph form. Specific topics include GIBN principles, objectives and goals, and background. GIBN/NASA status, the Transpacific High Definition Video experiment, GIBN experiment selection criteria, satellite industry involvement, and current experiments associated with GIBN are also discussed.

  10. Exploring Interoperability as a Multidimensional Challenge for Effective Emergency Response

    ERIC Educational Resources Information Center

    Santisteban, Hiram

    2010-01-01

    Purpose. The purpose of this research was to further an understanding of how the federal government is addressing the challenges of interoperability for emergency response or crisis management (FEMA, 2009) by informing the development of standards through the review of current congressional law, commissions, studies, executive orders, and…

  11. Toward an Open and Interoperable e-Learning Portal: OEPortal

    ERIC Educational Resources Information Center

    Hsu, Kevin Chihcheng; Yang, Fang-Chuan Ou

    2008-01-01

    With the rapid advance of stand-alone e-learning systems, we believe a sharable and interoperable portal platform capable of integrating various existing learning systems is critical for the future development of e-learning systems. We highlight two problems as the root causes for current ineffective sharing of learning resources: learning object…

  12. Documentation and Reporting of Nutrition - Interoperability, Standards, Practice and Procedures.

    PubMed

    Rotegård, Ann Kristin

    2016-01-01

    Interoperability, fragmentation, standardization and data integrity are key challenges in efforts to improve documentation, streamline reporting and ensure quality of care. This workshop aims at demonstrating and discussing health politics and solutions aimed at improving nutritional status in the elderly. PMID:27332331

  13. Interoperability Gap Challenges for Learning Object Repositories & Learning Management Systems

    ERIC Educational Resources Information Center

    Mason, Robert T.

    2011-01-01

    An interoperability gap exists between Learning Management Systems (LMSs) and Learning Object Repositories (LORs). Learning Objects (LOs) and the associated Learning Object Metadata (LOM) that is stored within LORs adhere to a variety of LOM standards. A common LOM standard found in LORs is the Sharable Content Object Reference Model (SCORM)…

  15. Putting the School Interoperability Framework to the Test

    ERIC Educational Resources Information Center

    Mercurius, Neil; Burton, Glenn; Hopkins, Bill; Larsen, Hans

    2004-01-01

    The Jurupa Unified School District in Southern California recently partnered with Microsoft, Dell and the Zone Integration Group for the implementation of a School Interoperability Framework (SIF) database repository model throughout the district (Magner 2002). A two-week project--the Integrated District Education Applications System, better known…

  16. Sensor Web Interoperability Testbed Results Incorporating Earth Observation Satellites

    NASA Technical Reports Server (NTRS)

    Frye, Stuart; Mandl, Daniel J.; Alameh, Nadine; Bambacus, Myra; Cappelaere, Pat; Falke, Stefan; Derezinski, Linda; Zhao, Piesheng

    2007-01-01

    This paper describes an Earth Observation Sensor Web scenario based on the Open Geospatial Consortium's Sensor Web Enablement and Web Services interoperability standards. The scenario demonstrates the application of standards in describing, discovering, accessing and tasking satellites and ground-based sensor installations in a sequence of analysis activities that deliver information required by decision makers in response to national, regional or local emergencies.

  17. The Next Generation of Interoperability Agents in Healthcare

    PubMed Central

    Cardoso, Luciana; Marins, Fernando; Portela, Filipe; Santos, Manuel; Abelha, António; Machado, José

    2014-01-01

    Interoperability in health information systems is increasingly a requirement rather than an option. Standards and technologies, such as multi-agent systems, have proven to be powerful tools in interoperability issues. In the last few years, the authors have worked on developing the Agency for Integration, Diffusion and Archive of Medical Information (AIDA), which is an intelligent, agent-based platform to ensure interoperability in healthcare units. It is increasingly important to ensure the high availability and reliability of systems. The functions provided by the systems that treat interoperability cannot fail. This paper shows the importance of monitoring and controlling intelligent agents as a tool to anticipate problems in health information systems. The interaction between humans and agents through an interface that allows the user to create new agents easily and to monitor their activities in real time is also an important feature, as health systems evolve by adopting more features and solving new problems. A module was installed in Centro Hospitalar do Porto, increasing the functionality and the overall usability of AIDA. PMID:24840351

  18. Attention trees and semantic paths

    NASA Astrophysics Data System (ADS)

    Giusti, Christian; Pieroni, Goffredo G.; Pieroni, Laura

    2007-02-01

    In the last few decades several techniques for image content extraction, often based on segmentation, have been proposed. It has been suggested that under the assumption of very general image content, segmentation becomes unstable and classification becomes unreliable. According to recent psychological theories, certain image regions attract the attention of human observers more than others and, generally, the image main meaning appears concentrated in those regions. Initially, regions attracting our attention are perceived as a whole and hypotheses on their content are formulated; successively the components of those regions are carefully analyzed and a more precise interpretation is reached. It is interesting to observe that an image decomposition process performed according to these psychological visual attention theories might present advantages with respect to a traditional segmentation approach. In this paper we propose an automatic procedure generating image decomposition based on the detection of visual attention regions. A new clustering algorithm taking advantage of the Delaunay-Voronoi diagrams for achieving the decomposition target is proposed. By applying that algorithm recursively, starting from the whole image, a transformation of the image into a tree of related meaningful regions is obtained (Attention Tree). Successively, a semantic interpretation of the leaf nodes is carried out by using a structure of Neural Networks (Neural Tree) assisted by a knowledge base (Ontology Net). Starting from leaf nodes, paths toward the root node across the Attention Tree are attempted. The task of the path consists in relating the semantics of each child-parent node pair and, consequently, in merging the corresponding image regions. The relationship detected in this way between two tree nodes generates, as a result, the extension of the interpreted image area through each step of the path. The construction of several Attention Trees has been performed and partial

  19. Requirements Development for Interoperability Simulation Capability for Law Enforcement

    SciTech Connect

    Holter, Gregory M.

    2004-05-19

    The National Counterdrug Center (NCC) was initially authorized by Congress in FY 1999 appropriations to create a simulation-based counterdrug interoperability training capability. As the lead organization for Research and Analysis to support the NCC, the Pacific Northwest National Laboratory (PNNL) was responsible for developing the requirements for this interoperability simulation capability. These requirements were structured to address the hardware and software components of the system, as well as the deployment and use of the system. The original set of requirements was developed through a process of conducting a user-based survey of requirements for the simulation capability, coupled with an analysis of similar development efforts. The user-based approach ensured that existing concerns with respect to interoperability within the law enforcement community would be addressed. Law enforcement agencies within the designated pilot area of Cochise County, Arizona, were surveyed using interviews and ride-alongs during actual operations. The results of this survey were then accumulated, organized, and validated with the agencies to ensure the accuracy of the results. These requirements were then supplemented by adapting operational requirements from existing systems to ensure system reliability and operability. The NCC adopted a development approach providing incremental capability through the fielding of a phased series of progressively more capable versions of the system. This allowed for feedback from system users to be incorporated into subsequent revisions of the system requirements, and also allowed the addition of new elements as needed to adapt the system to broader geographic and geopolitical areas, including areas along the southwest and northwest U.S. borders. This paper addresses the processes used to develop and refine requirements for the NCC interoperability simulation capability, as well as the response of the law enforcement community to the use of

  20. The Semantic Automated Discovery and Integration (SADI) Web service Design-Pattern, API and Reference Implementation

    PubMed Central

    2011-01-01

    Background The complexity and inter-related nature of biological data poses a difficult challenge for data and tool integration. There has been a proliferation of interoperability standards and projects over the past decade, none of which has been widely adopted by the bioinformatics community. Recent attempts have focused on the use of semantics to assist integration, and Semantic Web technologies are being welcomed by this community. Description SADI - Semantic Automated Discovery and Integration - is a lightweight set of fully standards-compliant Semantic Web service design patterns that simplify the publication of services of the type commonly found in bioinformatics and other scientific domains. Using Semantic Web technologies at every level of the Web services "stack", SADI services consume and produce instances of OWL Classes following a small number of very straightforward best-practices. In addition, we provide codebases that support these best-practices, and plug-in tools to popular developer and client software that dramatically simplify deployment of services by providers, and the discovery and utilization of those services by their consumers. Conclusions SADI Services are fully compliant with, and utilize only foundational Web standards; are simple to create and maintain for service providers; and can be discovered and utilized in a very intuitive way by biologist end-users. In addition, the SADI design patterns significantly improve the ability of software to automatically discover appropriate services based on user-needs, and automatically chain these into complex analytical workflows. We show that, when resources are exposed through SADI, data compliant with a given ontological model can be automatically gathered, or generated, from these distributed, non-coordinating resources - a behaviour we have not observed in any other Semantic system. Finally, we show that, using SADI, data dynamically generated from Web services can be explored in a manner

  1. Clever generation of rich SPARQL queries from annotated relational schema: application to Semantic Web Service creation for biological databases

    PubMed Central

    2013-01-01

    Background In recent years, a large amount of “-omics” data have been produced. However, these data are stored in many different species-specific databases that are managed by different institutes and laboratories. Biologists often need to find and assemble data from disparate sources to perform certain analyses. Searching for these data and assembling them is a time-consuming task. The Semantic Web helps to facilitate interoperability across databases. A common approach involves the development of wrapper systems that map a relational database schema onto existing domain ontologies. However, few attempts have been made to automate the creation of such wrappers. Results We developed a framework, named BioSemantic, for the creation of Semantic Web Services that are applicable to relational biological databases. This framework makes use of both Semantic Web and Web Services technologies and can be divided into two main parts: (i) the generation and semi-automatic annotation of an RDF view; and (ii) the automatic generation of SPARQL queries and their integration into Semantic Web Services backbones. We have used our framework to integrate genomic data from different plant databases. Conclusions BioSemantic is a framework that was designed to speed integration of relational databases. We present how it can be used to speed the development of Semantic Web Services for existing relational biological databases. Currently, it creates and annotates RDF views that enable the automatic generation of SPARQL queries. Web Services are also created and deployed automatically, and the semantic annotations of our Web Services are added automatically using SAWSDL attributes. BioSemantic is downloadable at http://southgreen.cirad.fr/?q=content/Biosemantic. PMID:23586394
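
    The query-generation step can be sketched as follows: columns of an annotated relational view are tied to ontology concepts, and a SPARQL SELECT is produced from the mapping. The mapping table, predicate IRIs and view layout are hypothetical placeholders, not BioSemantic's actual output.

```python
# A sketch of generating SPARQL from column -> ontology-concept annotations.
annotations = {
    "gene.symbol": "http://purl.example.org/onto#GeneSymbol",        # hypothetical
    "gene.chromosome": "http://purl.example.org/onto#Chromosome",    # hypothetical
}

def generate_sparql(wanted):
    """Build a SELECT over the RDF view from the annotation mapping."""
    vars_, patterns = [], []
    for i, col in enumerate(wanted):
        var = f"?v{i}"
        vars_.append(var)
        patterns.append(f"  ?record <{annotations[col]}> {var} .")
    return "SELECT {} WHERE {{\n{}\n}}".format(" ".join(vars_), "\n".join(patterns))

print(generate_sparql(["gene.symbol", "gene.chromosome"]))
```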

  2. Flexible procedural interoperability across security and coalition boundaries using rapidly reconfigurable boundary protection definitions

    NASA Astrophysics Data System (ADS)

    Peach, Nicholas

    2013-05-01

    Existing configuration of boundary protection devices, which validate the content and context of information crossing between security domains, uses a set of accreditor-agreed steps individually agreed for every situation. This has traditionally been a slow and exacting process of negotiation between integrators and accreditors. The Decentralized Operation Procedure (DOP) technique allows interoperability definitions of system interactions to be created as XML files and deployed across the battlefield environment. By extending the security information definitions within the DOP technique, it is intended to provide sufficient incorporated information to allow boundary protection devices to also immediately load and utilize a DOP XML file and then apply established standards of security. This allows boundary devices to be updated with the same dynamism as the deployment of new DOPs, and DOP interoperability definitions to exploit coalition capabilities once they have crossed security boundaries. The proposal describes an open and published boundary definition to support the aims of the MOD 23-13 Generic Base Architecture Defense Standard when working with coalition partners. The research aims are: a) to identify each element within a DOP that requires security characteristics to be described; b) to create a means to define security characteristics using XML; c) to determine whether external validation of an approved DOP requires additional authentication; d) to determine the actions that end users will have to perform on boundary protection devices in support of these aims. The paper will present the XML security extensions and the results of a practical implementation achieved through the modification of an existing accredited barrier device.

  3. Semantic enrichment of medical forms - semi-automated coding of ODM-elements via web services.

    PubMed

    Breil, Bernhard; Watermann, Andreas; Haas, Peter; Dziuballe, Philipp; Dugas, Martin

    2012-01-01

    Semantic interoperability is an unsolved problem that arises when working with medical forms from different information systems or institutions. Standards like ODM or CDA assure structural homogenization, but in order to compare elements from different data models it is necessary to use semantic concepts and codes at the item level of those structures. We developed and implemented a web-based tool that enables a domain expert to perform semi-automated coding of ODM files. For each item it is possible to query web services that return unique concept codes without leaving the context of the document. Although totally automated coding was not feasible, we implemented a dialog-based method to perform efficient coding of all data elements in the context of the whole document. The proportion of codable items was comparable to results from previous studies.
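    As a rough sketch of the semi-automated, dialog-based coding step: the paper's actual web services and response formats are not given in the abstract, so the endpoint URL and JSON shape below are assumptions.

```python
import requests

TERMINOLOGY_URL = "https://terminology.example.org/search"  # hypothetical endpoint

def candidate_codes(item_label: str) -> list[dict]:
    """Query a terminology web service for concept codes matching an ODM item label."""
    resp = requests.get(TERMINOLOGY_URL, params={"q": item_label}, timeout=10)
    resp.raise_for_status()
    # Assumed response shape: {"results": [{"code": ..., "name": ...}, ...]}
    return resp.json()["results"]

def code_item(item_label: str) -> dict:
    """Semi-automated coding: show candidates, let the domain expert choose one."""
    candidates = candidate_codes(item_label)
    for i, c in enumerate(candidates):
        print(f"[{i}] {c['code']}  {c['name']}")
    choice = int(input(f"Select code for '{item_label}': "))
    return candidates[choice]
```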

  4. Personal semantics: at the crossroads of semantic and episodic memory.

    PubMed

    Renoult, Louis; Davidson, Patrick S R; Palombo, Daniela J; Moscovitch, Morris; Levine, Brian

    2012-11-01

    Declarative memory is usually described as consisting of two systems: semantic and episodic memory. Between these two poles, however, may lie a third entity: personal semantics (PS). PS concerns knowledge of one's past. Although typically assumed to be an aspect of semantic memory, it is essentially absent from existing models of knowledge. Furthermore, like episodic memory (EM), PS is idiosyncratically personal (i.e., not culturally-shared). We show that, depending on how it is operationalized, the neural correlates of PS can look more similar to semantic memory, more similar to EM, or dissimilar to both. We consider three different perspectives to better integrate PS into existing models of declarative memory and suggest experimental strategies for disentangling PS from semantic and episodic memory.

  5. Semantic Roles and Grammatical Relations.

    ERIC Educational Resources Information Center

    Van Valin, Robert D., Jr.

    The nature of semantic roles and grammatical relations are explored from the perspective of Role and Reference Grammar (RRG). It is proposed that unraveling the relational aspects of grammar involves the recognition that semantic roles fall into two types, thematic relations and macroroles, and that grammatical relations are not universal and are…

  6. Indexing by Latent Semantic Analysis.

    ERIC Educational Resources Information Center

    Deerwester, Scott; And Others

    1990-01-01

    Describes a new method for automatic indexing and retrieval called latent semantic indexing (LSI). Problems with matching query words with document words in term-based information retrieval systems are discussed, semantic structure is examined, singular value decomposition (SVD) is explained, and the mathematics underlying the SVD model is…
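    As a minimal worked illustration of the idea (using a toy term-document matrix, not the paper's data), LSI can be sketched in a few lines of Python: factor the matrix with SVD, keep the top k singular values, and compare documents in the reduced latent space.

```python
import numpy as np

# Rows = terms, columns = documents (e.g., raw term counts).
A = np.array([[2, 0, 1, 0],
              [1, 1, 0, 0],
              [0, 2, 0, 1],
              [0, 0, 1, 2]], dtype=float)

U, s, Vt = np.linalg.svd(A, full_matrices=False)
k = 2                                        # latent dimensions to keep
doc_vectors = (np.diag(s[:k]) @ Vt[:k]).T    # documents in the latent space

def cosine(a, b):
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

# Similarity of documents 0 and 2 in latent space rather than raw term space.
print(cosine(doc_vectors[0], doc_vectors[2]))
```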

  7. Semantic Tools in Information Retrieval.

    ERIC Educational Resources Information Center

    Rubinoff, Morris; Stone, Don C.

    This report discusses the problem of the meanings of words used in information retrieval systems, and shows how semantic tools can aid in the communication which takes place between indexers and searchers via index terms. After treating the differing use of semantic tools in different types of systems, two tools (classification tables and…

  8. Semantic Focus and Sentence Comprehension.

    ERIC Educational Resources Information Center

    Cutler, Anne; Fodor, Jerry A.

    1979-01-01

    Reaction time to detect a phoneme target in a sentence was faster when the target-containing word formed part of the semantic focus of the sentence. Sentence understanding was facilitated by rapid identification of focused information. Active search for accented words can be interpreted as a search for semantic focus. (Author/RD)

  9. Semantic Feature Distinctiveness and Frequency

    ERIC Educational Resources Information Center

    Lamb, Katherine M.

    2012-01-01

    Lexical access is the process in which the basic components of meaning in language, the lexical entries (words), are activated. This activation is based on the organization and representational structure of the lexical entries. Semantic features of words, which are the prominent semantic characteristics of a word concept, provide important information…

  10. The semantic planetary data system

    NASA Technical Reports Server (NTRS)

    Hughes, J. Steven; Crichton, Daniel; Kelly, Sean; Mattmann, Chris

    2005-01-01

    This paper will provide a brief overview of the PDS data model and the PDS catalog. It will then describe the implementation of the Semantic PDS, including the development of the formal ontology, the generation of RDFS/XML and RDF/XML data sets, and the building of the semantic search application.

  11. Semantic Analysis in Machine Translation.

    ERIC Educational Resources Information Center

    Skorokhodko, E. F.

    1970-01-01

    In many cases machine translation does not produce satisfactory results within the framework of purely formal (morphological and syntactic) analysis, particularly in the case of syntactic and lexical homonymy. An algorithm for syntactic-semantic analysis is proposed, and its principles of operation are described. The syntactico-semantic structure is…

  12. Semantic Processing of Mathematical Gestures

    ERIC Educational Resources Information Center

    Lim, Vanessa K.; Wilson, Anna J.; Hamm, Jeff P.; Phillips, Nicola; Iwabuchi, Sarina J.; Corballis, Michael C.; Arzarello, Ferdinando; Thomas, Michael O. J.

    2009-01-01

    Objective: To examine whether or not university mathematics students semantically process gestures depicting mathematical functions (mathematical gestures) similarly to the way they process action gestures and sentences. Semantic processing was indexed by the N400 effect. Results: The N400 effect elicited by words primed with mathematical gestures…

  13. Hierarchical abstract semantic model for image classification

    NASA Astrophysics Data System (ADS)

    Ye, Zhipeng; Liu, Peng; Zhao, Wei; Tang, Xianglong

    2015-09-01

    The semantic gap limits the performance of bag-of-visual-words models. To deal with this problem, a hierarchical abstract semantics method that builds abstract semantic layers, generates semantic visual vocabularies, measures the semantic gap, and constructs classifiers using the AdaBoost strategy is proposed. First, abstract semantic layers are proposed to narrow the semantic gap between visual features and their interpretation. Then semantic visual words are extracted as features to train semantic classifiers. One popular form of measurement is used to quantify the semantic gap. The AdaBoost training strategy is used to combine weak classifiers into strong ones to further improve performance. For a testing image, the category is estimated layer-by-layer. Corresponding abstract hierarchical structures for popular datasets, including Caltech-101 and MSRC, are proposed for evaluation. The experimental results show that the proposed method is capable of narrowing semantic gaps effectively and performs better than other categorization methods.
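    The following Python sketch illustrates only the boosting step named above: weak classifiers over semantic-visual-word histograms are combined into a strong classifier with AdaBoost. The feature data is a randomly generated stand-in, not the paper's pipeline or the Caltech-101/MSRC datasets.

```python
import numpy as np
from sklearn.ensemble import AdaBoostClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.random((400, 50))                      # 400 images x 50 semantic visual words
y = (X[:, :5].sum(axis=1) > 2.5).astype(int)   # toy two-class labels

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
# Default weak learners are depth-1 decision trees (stumps).
clf = AdaBoostClassifier(n_estimators=100, random_state=0)
clf.fit(X_tr, y_tr)
print("accuracy:", clf.score(X_te, y_te))
```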

  14. Semantic Annotation for Biological Information Retrieval System

    PubMed Central

    Oshaiba, Mohamed Marouf Z.; El Houby, Enas M. F.; Salah, Akram

    2015-01-01

    Online literature is increasing at a tremendous rate, and the biological domain is one of the fastest growing. Biological researchers face the problem of finding what they are searching for effectively and efficiently. The aim of this research is to find documents that contain any combination of biological process and/or molecular function and/or cellular component. This research proposes a framework that helps researchers retrieve meaningful documents related to their asserted terms based on the Gene Ontology (GO). The system utilizes GO by semantically decomposing it into three subontologies (cellular component, biological process, and molecular function). The researcher has the flexibility to choose search terms from any combination of the three subontologies. Document annotation is performed in this research to create an index of biological terms in documents to speed the searching process. Query expansion is used to infer semantically related terms from asserted terms; it increases meaningful search results using term synonyms and term relationships. The system uses a ranking method to order the retrieved documents based on ranking weights. The proposed system achieves researchers' needs to find documents that fit the asserted terms semantically. PMID:25737720
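    A minimal sketch of the query-expansion and ranking idea follows; the synonym table is a toy stand-in, whereas the real system derives synonyms and relationships from GO.

```python
# Toy synonym table; the actual system would populate this from GO.
SYNONYMS = {
    "apoptosis": ["programmed cell death", "apoptotic process"],
    "kinase activity": ["phosphotransferase activity"],
}

def expand_query(terms: list[str]) -> set[str]:
    """Expand asserted terms with their (toy) synonyms."""
    expanded = set(terms)
    for t in terms:
        expanded.update(SYNONYMS.get(t, []))
    return expanded

def score(document_terms: set[str], query: set[str]) -> int:
    """Rank documents by the number of (expanded) query terms they contain."""
    return len(document_terms & query)

q = expand_query(["apoptosis"])
doc = {"apoptotic process", "cell cycle"}
print(score(doc, q))   # matches via the synonym, not the literal query term
```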

  15. Component Models for Semantic Web Languages

    NASA Astrophysics Data System (ADS)

    Henriksson, Jakob; Aßmann, Uwe

    Intelligent applications and agents on the Semantic Web typically need to be specified with, or interact with specifications written in, many different kinds of formal languages. Such languages include ontology languages, data and metadata query languages, as well as transformation languages. As learnt from years of experience in development of complex software systems, languages need to support some form of component-based development. Components enable higher software quality, better understanding and reusability of already developed artifacts. Any component approach contains an underlying component model, a description detailing what valid components are and how components can interact. With the multitude of languages developed for the Semantic Web, what are their underlying component models? Do we need to develop one for each language, or is a more general and reusable approach achievable? We present a language-driven component model specification approach. This means that a component model can be (automatically) generated from a given base language (actually, its specification, e.g. its grammar). As a consequence, we can provide components for different languages and simplify the development of software artifacts used on the Semantic Web.

  16. Semantic annotation for biological information retrieval system.

    PubMed

    Oshaiba, Mohamed Marouf Z; El Houby, Enas M F; Salah, Akram

    2015-01-01

    Online literature is increasing at a tremendous rate, and the biological domain is one of the fastest growing. Biological researchers face the problem of finding what they are searching for effectively and efficiently. The aim of this research is to find documents that contain any combination of biological process and/or molecular function and/or cellular component. This research proposes a framework that helps researchers retrieve meaningful documents related to their asserted terms based on the Gene Ontology (GO). The system utilizes GO by semantically decomposing it into three subontologies (cellular component, biological process, and molecular function). The researcher has the flexibility to choose search terms from any combination of the three subontologies. Document annotation is performed in this research to create an index of biological terms in documents to speed the searching process. Query expansion is used to infer semantically related terms from asserted terms; it increases meaningful search results using term synonyms and term relationships. The system uses a ranking method to order the retrieved documents based on ranking weights. The proposed system achieves researchers' needs to find documents that fit the asserted terms semantically.

  17. Semantic Mediation via Access Broker: the OWS-9 experiment

    NASA Astrophysics Data System (ADS)

    Santoro, Mattia; Papeschi, Fabrizio; Craglia, Massimo; Nativi, Stefano

    2013-04-01

    Even with the use of common data model standards to publish and share geospatial data, users may still face semantic inconsistencies when they use Spatial Data Infrastructures - especially in multidisciplinary contexts. Several semantic mediation solutions exist to address this issue; they span from simple XSLT documents that transform from one data model schema to another, to more complex services based on the use of ontologies. This work presents the activity done in the context of the OGC Web Services Phase 9 (OWS-9) Cross Community Interoperability to develop a semantic mediation solution by enhancing the GEOSS Discovery and Access Broker (DAB). This is a middleware component that provides harmonized access to geospatial datasets according to client applications' preferred service interface (Nativi et al. 2012, Vaccari et al. 2012). Given a set of remote feature data encoded in different feature schemas, the objective of the activity was to use the DAB to enable client applications to transparently access the feature data according to one single schema. Due to the flexible architecture of the Access Broker, it was possible to introduce a new transformation type in the configured chain of transformations. In fact, the Access Broker already provided the following transformations: Coordinate Reference System (CRS), spatial resolution, spatial extent (e.g., a subset of a data set), and data encoding format. A new software module was developed to invoke the needed external semantic mediation service and harmonize the accessed features. In OWS-9 the Access Broker invokes a SPARQL WPS to retrieve mapping rules for the OWS-9 schemas: the USGS and NGA schemas. The solution implemented to address this problem shows the flexibility and extensibility of the brokering framework underpinning the GEO DAB: new services can be added to augment the number of supported schemas without the need to modify other components and/or software modules. Moreover, all other transformations (CRS

  18. Community-Based Services that Facilitate Interoperability and Intercomparison of Precipitation Datasets from Multiple Sources

    NASA Technical Reports Server (NTRS)

    Liu, Zhong; Kempler, Steven; Teng, William; Leptoukh, Gregory; Ostrenga, Dana

    2010-01-01

    perform the complicated data access and match-up processes. In addition, PDISC tool and service capabilities being adapted for GPM data will be described, including the Google-like Mirador data search and access engine; semantic technology to help manage large amounts of multi-sensor data and their relationships; data access through various Web services (e.g., OPeNDAP, GDS, WMS, WCS); conversion to various formats (e.g., netCDF, HDF, KML (for Google Earth)); visualization and analysis of Level 2 data profiles and maps; parameter and spatial subsetting; time and temporal aggregation; regridding; data version control and provenance; continuous archive verification; and expertise in data-related standards and interoperability. The goal of providing these services is to further the progress towards a common framework by which data analysis/validation can be more easily accomplished.

  19. The 2nd DBCLS BioHackathon: interoperable bioinformatics Web services for integrated applications

    PubMed Central

    2011-01-01

    Background The interaction between biological researchers and the bioinformatics tools they use is still hampered by incomplete interoperability between such tools. To ensure interoperability initiatives are effectively deployed, end-user applications need to be aware of, and support, best practices and standards. Here, we report on an initiative in which software developers and genome biologists came together to explore and raise awareness of these issues: BioHackathon 2009. Results Developers in attendance came from diverse backgrounds, with experts in Web services, workflow tools, text mining and visualization. Genome biologists provided expertise and exemplar data from the domains of sequence and pathway analysis and glyco-informatics. One goal of the meeting was to evaluate the ability to address real world use cases in these domains using the tools that the developers represented. This resulted in i) a workflow to annotate 100,000 sequences from an invertebrate species; ii) an integrated system for analysis of the transcription factor binding sites (TFBSs) enriched based on differential gene expression data obtained from a microarray experiment; iii) a workflow to enumerate putative physical protein interactions among enzymes in a metabolic pathway using protein structure data; iv) a workflow to analyze glyco-gene-related diseases by searching for human homologs of glyco-genes in other species, such as fruit flies, and retrieving their phenotype-annotated SNPs. Conclusions Beyond deriving prototype solutions for each use-case, a second major purpose of the BioHackathon was to highlight areas of insufficiency. We discuss the issues raised by our exploration of the problem/solution space, concluding that there are still problems with the way Web services are modeled and annotated, including: i) the absence of several useful data or analysis functions in the Web service "space"; ii) the lack of documentation of methods; iii) lack of compliance with the SOAP

  20. Latent semantic analysis.

    PubMed

    Evangelopoulos, Nicholas E

    2013-11-01

    This article reviews latent semantic analysis (LSA), a theory of meaning as well as a method for extracting that meaning from passages of text, based on statistical computations over a collection of documents. LSA as a theory of meaning defines a latent semantic space where documents and individual words are represented as vectors. LSA as a computational technique uses linear algebra to extract dimensions that represent that space. This representation enables the computation of similarity among terms and documents, categorization of terms and documents, and summarization of large collections of documents using automated procedures that mimic the way humans perform similar cognitive tasks. We present some technical details, various illustrative examples, and discuss a number of applications from linguistics, psychology, cognitive science, education, information science, and analysis of textual data in general. WIREs Cogn Sci 2013, 4:683-692. doi: 10.1002/wcs.1254 CONFLICT OF INTEREST: The author has declared no conflicts of interest for this article. For further resources related to this article, please visit the WIREs website. PMID:26304272

  2. Smartfiles: An OO approach to data file interoperability

    NASA Technical Reports Server (NTRS)

    Haines, Matthew; Mehrotra, Piyush; Vanrosendale, John

    1995-01-01

    Data files for scientific and engineering codes typically consist of a series of raw data values whose descriptions are buried in the programs that interact with these files. In this situation, making even minor changes in the file structure or sharing files between programs (interoperability) can only be done after careful examination of the data file and the I/O statements of the programs interacting with this file. In short, scientific data files lack self-description, and other self-describing data techniques are not always appropriate or useful for scientific data files. By applying an object-oriented methodology to data files, we can add the intelligence required to improve data interoperability and provide an elegant mechanism for supporting complex, evolving, or multidisciplinary applications, while still supporting legacy codes. As a result, scientists and engineers should be able to share datasets with far greater ease, simplifying multidisciplinary applications and greatly facilitating remote collaboration between scientists.
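    As an illustration of what self-description buys, here is a minimal Python sketch in which a data file carries a machine-readable description of its own record layout, so a reader program needs no hard-coded knowledge of the format. The header layout is an assumption for illustration, not the Smartfiles design.

```python
import json, struct

def write_smartfile(path, fields, records):
    """Write a length-prefixed JSON header describing the layout, then the data."""
    header = {"fields": fields, "format": "<" + "d" * len(fields)}
    with open(path, "wb") as f:
        hdr = json.dumps(header).encode()
        f.write(struct.pack("<I", len(hdr)) + hdr)
        for rec in records:
            f.write(struct.pack(header["format"], *rec))

def read_smartfile(path):
    """Recover the layout from the file itself, then decode the records."""
    with open(path, "rb") as f:
        (hdr_len,) = struct.unpack("<I", f.read(4))
        header = json.loads(f.read(hdr_len))
        size = struct.calcsize(header["format"])
        while chunk := f.read(size):
            yield dict(zip(header["fields"], struct.unpack(header["format"], chunk)))

write_smartfile("run1.dat", ["time", "pressure"], [(0.0, 101.3), (1.0, 101.1)])
print(list(read_smartfile("run1.dat")))  # reader learned the layout from the file
```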

  3. Reconfigurable point-of-care systems designed with interoperability standards.

    PubMed

    Warren, Steve; Yao, Jianchu; Schmitz, Ryan; Lebak, Jeff

    2004-01-01

    Interoperability standards, if properly applied to medical system design, have the potential to decrease the cost of point-of-care monitoring systems while better matching systems to patient needs. This paper presents a brief editorial overview of future monitoring environments, followed by a short listing of smart-home and wearable-device efforts. This is followed by a summary of recent efforts in the Medical Component Design Laboratory at Kansas State University to address interoperability issues in point-of-care systems by incorporating the Bluetooth Host Controller Interface, the IEEE 1073 Medical Information Bus, and Health Level 7 (HL7) into a monitoring system that hosts wearable or nearby wireless devices. This wireless demonstration system includes a wearable electrocardiogram, wearable pulse oximeter, wearable data logger, weight scale, and LabVIEW base station. Data are exchanged between local and remote MySQL databases using the HL7 standard for medical information exchange. PMID:17270979
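    For a concrete flavor of the HL7 exchange mentioned above, the sketch below assembles a minimal HL7 v2.3 ORU^R01 observation message as pipe-delimited segments. The application names and field values are invented for illustration, and the segments are pared down to the bare minimum rather than reflecting this system's actual messages.

```python
SEP = "\r"  # HL7 v2 segments are separated by carriage returns

def oru_message(patient_id: str, obs_code: str, value: str, units: str) -> str:
    """Build a bare-bones ORU^R01 result message (illustrative values only)."""
    segments = [
        "MSH|^~\\&|POC_MONITOR|LAB|BASE_STATION|LAB|20040101120000||ORU^R01|0001|P|2.3",
        f"PID|1||{patient_id}",
        f"OBX|1|NM|{obs_code}||{value}|{units}|||||F",
    ]
    return SEP.join(segments)

msg = oru_message("12345", "SPO2", "97", "%")
print(msg.replace(SEP, "\n"))
```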

  4. Interoperability between Publications, Reference Data and Visualisation Tools

    NASA Astrophysics Data System (ADS)

    Allen, Mark G.; Ocvirk, Pierre; Genova, Francoise

    2015-08-01

    Astronomy research is becoming more and more inter-connected, and there is a high expectation for our publications, reference data and tools to be interoperable. Publications are the hard-earned final results of scientific endeavour, and technology allows us to enable publications as usable resources, going beyond their traditional role as a readable document. There is strong demand for simple access to the data associated with publications, and for links and references in publications to be strongly connected to online resources and usable in visualisation tools. We highlight the capabilities of the CDS reference services for interoperability between the reference data obtained from publications, the connections between Journal and literature services, and the combination of these data and information in Aladin and other CDS services. (In support of the abstract submitted by P. Ocvirk)

  5. JAUS Operator Control Unit (OCU) interoperability experiment: preparation and results

    NASA Astrophysics Data System (ADS)

    Gray, Sarah A.; Harrison, Joseph F.; Smith, Brian G.

    2004-09-01

    The Joint Architecture for Unmanned Systems (JAUS) Operator Control Units and Payloads Committee (OPC) will be conducting a series of experiments to expedite the production of cost-effective interoperable unmanned systems, user control interfaces, payloads, et cetera. The objective of the initial experiment will be to demonstrate teleoperation of heterogeneous unmanned systems. The experiment will test Level 1 compliance between multiple JAUS subsystems and will include unmanned air, ground, and surface vehicles developed by vendors in the government and commercial sectors. Insight gained from participants' initial planning, development, and integration phases will help identify areas of the JAUS standard which can be improved to better facilitate interoperability between Operator Control Units (OCU) and unmanned systems. The process of preparing Mobius, an OCU developed by Autonomous Solutions, Inc., for JAUS Level 1 compliance is discussed.

  6. "Pre-Semantic" Cognition Revisited: Critical Differences between Semantic Aphasia and Semantic Dementia

    ERIC Educational Resources Information Center

    Jefferies, Elizabeth; Rogers, Timothy T.; Hopper, Samantha; Lambon Ralph, Matthew A.

    2010-01-01

    Patients with semantic dementia show a specific pattern of impairment on both verbal and non-verbal "pre-semantic" tasks, e.g., reading aloud, past tense generation, spelling to dictation, lexical decision, object decision, colour decision and delayed picture copying. All seven tasks are characterised by poorer performance for items that are…

  7. Technical Data Interoperability (TDI) Pathfinder Via Emerging Standards

    NASA Technical Reports Server (NTRS)

    Conroy, Mike; Gill, Paul; Hill, Bradley; Ibach, Brandon; Jones, Corey; Ungar, David; Barch, Jeffrey; Ingalls, John; Jacoby, Joseph; Manning, Josh; Bengtsson, Kjell; Falls, Mark; Kent, Peter; Heath, Shaun; Kennedy, Steven

    2014-01-01

    The Technical Data Interoperability (TDI) project investigates trending technical data standards for applicability to NASA vehicles, space stations, payloads, facilities, and equipment. TDI tested COTS software compatible with a suite of related industry standards for both individual capabilities and interoperability. These standards not only enable Information Technology (IT) efficiencies, but also address efficient structures and standard content for business processes. We used source data from generic industry samples as well as NASA and European Space Agency (ESA) data from space systems.

  8. Secure and interoperable communication infrastructures for PPDR organisations

    NASA Astrophysics Data System (ADS)

    Müller, Wilmuth; Marques, Hugo; Pereira, Luis; Rodriguez, Jonathan; Brouwer, Frank; Bouwers, Bert; Politis, Ilias; Lykourgiotis, Asimakis; Ladas, Alexandros; Adigun, Olayinka; Jelenc, David

    2016-05-01

    The growing number of events affecting public safety and security (PS&S) on a regional scale, with the potential to escalate into large-scale cross-border disasters, puts increased pressure on the agencies and organisations responsible for PS&S. In order to respond timely and in an adequate manner to such events, Public Protection and Disaster Relief (PPDR) organisations need to cooperate, align their procedures and activities, share the needed information and be interoperable. Existing PPDR/PMR technologies, such as TETRA, TETRAPOL or P25, do not currently provide broadband capability, nor are such technologies expected to be upgraded in the future. This presents a major limitation in supporting new services and information flows. Furthermore, there is no known standard that addresses interoperability of these technologies. In this contribution the design of a next generation communication infrastructure for PPDR organisations, which fulfills the requirements of secure and seamless end-to-end communication and interoperable information exchange within the deployed communication networks, is presented. Based on the Enterprise Architecture of PPDR organisations, a next generation PPDR network that is backward compatible with legacy communication technologies is designed and implemented, capable of providing security, privacy, seamless mobility, QoS and reliability support for mission-critical Private Mobile Radio (PMR) voice and broadband data services. The designed solution provides a robust, reliable, and secure mobile broadband communications system for a wide variety of PMR applications and services on PPDR broadband networks, including the ability of inter-system, interagency and cross-border operations with emphasis on interoperability between users in PMR and LTE.

  9. TRIGA: Telecommunications Protocol Processing Subsystem Using Reconfigurable Interoperable Gate Arrays

    NASA Technical Reports Server (NTRS)

    Pang, Jackson; Pingree, Paula J.; Torgerson, J. Leigh

    2006-01-01

    We present the Telecommunications protocol processing subsystem using Reconfigurable Interoperable Gate Arrays (TRIGA), a novel approach that unifies fault tolerance, error correction coding and interplanetary communication protocol off-loading to implement CCSDS File Delivery Protocol and Datalink layers. The new reconfigurable architecture offers more than one order of magnitude throughput increase while reducing footprint requirements in memory, command and data handling processor utilization, communication system interconnects and power consumption.

  10. On the Feasibility of Interoperable Schemes in Hand Biometrics

    PubMed Central

    Morales, Aythami; González, Ester; Ferrer, Miguel A.

    2012-01-01

    Personal recognition through hand-based biometrics has attracted the interest of many researchers in the last twenty years. A significant number of proposals based on different procedures and acquisition devices have been published in the literature. However, comparisons between devices and their interoperability have not been thoroughly studied. This paper tries to fill this gap by proposing procedures to improve the interoperability among different hand biometric schemes. The experiments were conducted on a database made up of 8,320 hand images acquired from six different hand biometric schemes, including a flat scanner, webcams at different wavelengths, high quality cameras, and contactless devices. Acquisitions on both sides of the hand were included. Our experiment includes four feature extraction methods which determine the best performance among the different scenarios for two of the most popular hand biometrics: hand shape and palm print. We propose smoothing techniques at the image and feature levels to reduce interdevice variability. Results suggest that comparative hand shape offers better performance in terms of interoperability than palm prints, but palm prints can be more effective when using similar sensors. PMID:22438714

  11. On the feasibility of interoperable schemes in hand biometrics.

    PubMed

    Morales, Aythami; González, Ester; Ferrer, Miguel A

    2012-01-01

    Personal recognition through hand-based biometrics has attracted the interest of many researchers in the last twenty years. A significant number of proposals based on different procedures and acquisition devices have been published in the literature. However, comparisons between devices and their interoperability have not been thoroughly studied. This paper tries to fill this gap by proposing procedures to improve the interoperability among different hand biometric schemes. The experiments were conducted on a database made up of 8,320 hand images acquired from six different hand biometric schemes, including a flat scanner, webcams at different wavelengths, high quality cameras, and contactless devices. Acquisitions on both sides of the hand were included. Our experiment includes four feature extraction methods which determine the best performance among the different scenarios for two of the most popular hand biometrics: hand shape and palm print. We propose smoothing techniques at the image and feature levels to reduce interdevice variability. Results suggest that comparative hand shape offers better performance in terms of interoperability than palm prints, but palm prints can be more effective when using similar sensors.

  12. The 3rd DBCLS BioHackathon: improving life science data integration with Semantic Web technologies

    PubMed Central

    2013-01-01

    Background BioHackathon 2010 was the third in a series of meetings hosted by the Database Center for Life Sciences (DBCLS) in Tokyo, Japan. The overall goal of the BioHackathon series is to improve the quality and accessibility of life science research data on the Web by bringing together representatives from public databases, analytical tool providers, and cyber-infrastructure researchers to jointly tackle important challenges in the area of in silico biological research. Results The theme of BioHackathon 2010 was the 'Semantic Web', and all attendees gathered with the shared goal of producing Semantic Web data from their respective resources, and/or consuming or interacting with those data using their tools and interfaces. We discussed topics including guidelines for designing semantic data and interoperability of resources. We consequently developed tools and clients for analysis and visualization. Conclusion We provide a meeting report from BioHackathon 2010, in which we describe the discussions, decisions, and breakthroughs made as we moved towards compliance with Semantic Web technologies - from source provider, through middleware, to the end-consumer. PMID:23398680

  13. The Semantic Distance Model of Relevance Assessment.

    ERIC Educational Resources Information Center

    Brooks, Terrence A.

    1998-01-01

    Presents the Semantic Distance Model (SDM) of Relevance Assessment, a cognitive model of the relationship between semantic distance and relevance assessment. Discusses premises of the model such as the subjective nature of information and the metaphor of semantic distance. Empirical results illustrate the effects of semantic distance and semantic…

  14. Mapping the Structure of Semantic Memory

    ERIC Educational Resources Information Center

    Morais, Ana Sofia; Olsson, Henrik; Schooler, Lael J.

    2013-01-01

    Aggregating snippets from the semantic memories of many individuals may not yield a good map of an individual's semantic memory. The authors analyze the structure of semantic networks that they sampled from individuals through a new snowball sampling paradigm during approximately 6 weeks of 1-hr daily sessions. The semantic networks of individuals…

  15. Semantic Modeling of Requirements: Leveraging Ontologies in Systems Engineering

    ERIC Educational Resources Information Center

    Mir, Masood Saleem

    2012-01-01

    The interdisciplinary nature of "Systems Engineering" (SE), having "stakeholders" from diverse domains with orthogonal facets, and need to consider all stages of "lifecycle" of system during conception, can benefit tremendously by employing "Knowledge Engineering" (KE) to achieve semantic agreement among all…

  16. Progress Toward Standards for the Seamless Interoperability of Broadband Satellite Communication Networks

    NASA Technical Reports Server (NTRS)

    Ivancic, William D.; Glover, Daniel R.; vonDeak, Thomas C.; Bhasin, Kul B.

    1998-01-01

    The realization of the full potential of the National Information Infrastructure (NII) and Global Information Infrastructure (GII) requires seamless interoperability of emerging satellite networks with terrestrial networks. This requires a cooperative effort between industry, academia and government agencies to develop and advocate new, satellite-friendly communication protocols and modifications to existing communication protocol standards. These groups have recently come together to actively participate in a number of standards-making bodies, including the Internet Engineering Task Force (IETF), the Asynchronous Transfer Mode (ATM) Forum, the International Telecommunication Union (ITU) and the Telecommunications Industry Association (TIA), to ensure that issues regarding efficient use of these protocols over satellite links are not overlooked. This paper will summarize the progress made toward standards development to achieve seamless integration and accelerate the deployment of multimedia applications.

  17. Direct2Experts: a pilot national network to demonstrate interoperability among research-networking platforms

    PubMed Central

    Barnett, William; Conlon, Mike; Eichmann, David; Kibbe, Warren; Falk-Krzesinski, Holly; Halaas, Michael; Johnson, Layne; Meeks, Eric; Mitchell, Donald; Schleyer, Titus; Stallings, Sarah; Warden, Michael; Kahlon, Maninder

    2011-01-01

    Research-networking tools use data-mining and social networking to enable expertise discovery, matchmaking and collaboration, which are important facets of team science and translational research. Several commercial and academic platforms have been built, and many institutions have deployed these products to help their investigators find local collaborators. Recent studies, though, have shown the growing importance of multiuniversity teams in science. Unfortunately, the lack of a standard data-exchange model and resistance of universities to share information about their faculty have presented barriers to forming an institutionally supported national network. This case report describes an initiative, which, in only 6 months, achieved interoperability among seven major research-networking products at 28 universities by taking an approach that focused on addressing institutional concerns and encouraging their participation. With this necessary groundwork in place, the second phase of this effort can begin, which will expand the network's functionality and focus on the end users. PMID:22037890

  18. Advances in Using Opensearch for Earth Science Data Discovery and Interoperability

    NASA Astrophysics Data System (ADS)

    Newman, D. J.; Mitchell, A. E.

    2014-12-01

    As per www.opensearch.org: OpenSearch is a collection of simple formats for the sharing of search results. A number of organizations (NASA, ESA, CEOS) have begun to adopt this standard as a means of allowing both the discovery of earth science data and the aggregation of results from disparate data archives. OpenSearch has proven to be simpler and more effective at achieving these goals than previous efforts (Catalog Service for the Web, for example). This talk will outline: the basic ideas behind OpenSearch; the ways in which we have extended the basic specification to accommodate the Earth Science use case (two-step searching, relevancy ranking, facets); a case study of the above in action (CWICSmart + IDN OpenSearch + CWIC OpenSearch); the potential for interoperability this simple standard affords; and a discussion of where we can go in the future.
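    Part of OpenSearch's simplicity is that a description document advertises a URL template which clients simply fill in. The endpoint below is hypothetical, but {searchTerms} and {startPage?} are standard template parameters from the OpenSearch specification; a minimal Python client:

```python
from urllib.parse import quote

# Hypothetical endpoint; the template parameter names come from the spec.
TEMPLATE = "https://cwic.example.org/opensearch?q={searchTerms}&page={startPage?}&format=atom"

def build_query(template: str, **params: str) -> str:
    """Substitute OpenSearch template parameters (required and optional forms)."""
    url = template
    for name, value in params.items():
        url = url.replace("{" + name + "}", quote(value))
        url = url.replace("{" + name + "?}", quote(value))
    return url

print(build_query(TEMPLATE, searchTerms="sea surface temperature", startPage="1"))
```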

  19. Direct2Experts: a pilot national network to demonstrate interoperability among research-networking platforms.

    PubMed

    Weber, Griffin M; Barnett, William; Conlon, Mike; Eichmann, David; Kibbe, Warren; Falk-Krzesinski, Holly; Halaas, Michael; Johnson, Layne; Meeks, Eric; Mitchell, Donald; Schleyer, Titus; Stallings, Sarah; Warden, Michael; Kahlon, Maninder

    2011-12-01

    Research-networking tools use data-mining and social networking to enable expertise discovery, matchmaking and collaboration, which are important facets of team science and translational research. Several commercial and academic platforms have been built, and many institutions have deployed these products to help their investigators find local collaborators. Recent studies, though, have shown the growing importance of multiuniversity teams in science. Unfortunately, the lack of a standard data-exchange model and resistance of universities to share information about their faculty have presented barriers to forming an institutionally supported national network. This case report describes an initiative, which, in only 6 months, achieved interoperability among seven major research-networking products at 28 universities by taking an approach that focused on addressing institutional concerns and encouraging their participation. With this necessary groundwork in place, the second phase of this effort can begin, which will expand the network's functionality and focus on the end users. PMID:22037890

  1. How to ensure sustainable interoperability in heterogeneous distributed systems through architectural approach.

    PubMed

    Pape-Haugaard, Louise; Frank, Lars

    2011-01-01

    A major obstacle to ensuring ubiquitous information is the utilization of heterogeneous systems in eHealth. The objective of this paper is to illustrate how an architecture for distributed eHealth databases can be designed without sacrificing the characteristic features of traditional sustainable databases. The approach is firstly to explain traditional architecture in central and homogeneous distributed database computing, followed by a possible approach to use an architectural framework to obtain sustainability across disparate systems, i.e., heterogeneous databases, concluded with a discussion. It is seen that, through a method of using relaxed ACID properties on a service-oriented architecture, it is possible to achieve the data consistency which is essential when ensuring sustainable interoperability.
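    On the assumption that relaxed ACID here is compensation-based (undoing completed steps rather than holding distributed locks), the idea can be sketched in a few lines of Python; the database objects are tiny in-memory stand-ins, not the paper's architecture.

```python
class Db:
    """Tiny in-memory stand-in for one heterogeneous database service."""
    def __init__(self): self.rows, self.transferred = set(), set()
    def insert(self, r): self.rows.add(r)
    def delete(self, r): self.rows.discard(r)
    def mark_transferred(self, r): self.transferred.add(r)
    def unmark(self, r): self.transferred.discard(r)

def transfer_record(source_db, target_db, record):
    """Saga-style transfer: each completed step registers a compensation."""
    compensations = []
    try:
        source_db.mark_transferred(record)                      # step 1
        compensations.append(lambda: source_db.unmark(record))
        target_db.insert(record)                                # step 2 (may fail)
        compensations.append(lambda: target_db.delete(record))
    except Exception:
        # Relaxed atomicity: undo completed steps instead of holding locks.
        for undo in reversed(compensations):
            undo()
        raise

src, dst = Db(), Db()
transfer_record(src, dst, "patient-42")
print(dst.rows)  # {'patient-42'}
```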

  2. Use of Annotations for Component and Framework Interoperability

    NASA Astrophysics Data System (ADS)

    David, O.; Lloyd, W.; Carlson, J.; Leavesley, G. H.; Geter, F.

    2009-12-01

    The popular programming languages Java and C# provide annotations, a form of meta-data construct. Software frameworks for web integration, web services, database access, and unit testing now take advantage of annotations to reduce the complexity of APIs and the quantity of integration code between the application and framework infrastructure. Adopting annotation features in frameworks has been observed to lead to cleaner and leaner application code. The USDA Object Modeling System (OMS) version 3.0 fully embraces the annotation approach and additionally defines a meta-data standard for components and models. In version 3.0, framework/model integration previously accomplished using API calls is now achieved using descriptive annotations. This enables the framework to provide additional functionality non-invasively, such as implicit multithreading and auto-documenting capabilities, while achieving a significant reduction in the size of the model source code. Using a non-invasive methodology leads to models and modeling components with only minimal dependencies on the modeling framework. Since models and modeling components are not directly bound to the framework by the use of specific APIs and/or data types, they can more easily be reused both within the framework and outside of it. To study the effectiveness of an annotation-based framework approach relative to other modeling frameworks, a framework-invasiveness study was conducted to evaluate the effects of framework design on model code quality. A monthly water balance model was implemented across several modeling frameworks and several software metrics were collected. The metrics selected were measures of non-invasive design methods for modeling frameworks from a software engineering perspective. It appears that the use of annotations positively impacts several software quality measures. In a next step, the PRMS model was implemented in OMS 3.0 and is currently being implemented for water supply forecasting in the
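    OMS 3.0's annotations are Java constructs; the following Python-decorator analogue (all names hypothetical) sketches the same non-invasive idea: a model component declares its inputs and outputs as metadata, and the framework discovers and drives it by introspection, with no framework API calls inside the model code.

```python
def component(inputs=(), outputs=()):
    """Attach declarative I/O metadata to a model class (analogue of annotations)."""
    def mark(cls):
        cls._inputs, cls._outputs = tuple(inputs), tuple(outputs)
        return cls
    return mark

@component(inputs=("precip", "temp"), outputs=("runoff",))
class WaterBalance:
    def execute(self, precip, temp):
        return {"runoff": max(0.0, precip - 0.1 * temp)}  # toy physics

def run(comp_cls, data):
    """A 'framework' wires the component purely from its declared metadata."""
    comp = comp_cls()
    args = {name: data[name] for name in comp_cls._inputs}
    data.update(comp.execute(**args))
    return data

print(run(WaterBalance, {"precip": 5.0, "temp": 20.0}))
```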

  3. Exploiting Recurring Structure in a Semantic Network

    NASA Technical Reports Server (NTRS)

    Wolfe, Shawn R.; Keller, Richard M.

    2004-01-01

    With the growing popularity of the Semantic Web, an increasing amount of information is becoming available in machine interpretable, semantically structured networks. Within these semantic networks are recurring structures that could be mined by existing or novel knowledge discovery methods. The mining of these semantic structures represents an interesting area that focuses on mining both for and from the Semantic Web, with surprising applicability to problems confronting the developers of Semantic Web applications. In this paper, we present representative examples of recurring structures and show how these structures could be used to increase the utility of a semantic repository deployed at NASA.

  4. The impact of electronic health record (EHR) interoperability on immunization information system (IIS) data quality

    PubMed Central

    Woinarowicz, Mary; Howell, Molly

    2016-01-01

    Objectives: To evaluate the impact of electronic health record (EHR) interoperability on the quality of immunization data in the North Dakota Immunization Information System (NDIIS). Methods: NDIIS doses administered data was evaluated for completeness of the patient and dose-level core data elements for records that belong to interoperable and non-interoperable providers. Data were compared from three months prior to electronic health record (EHR) interoperability enhancement to three, six, nine and twelve months post-enhancement following the interoperability go-live date. Doses administered per month and by age group, timeliness of vaccine entry and the number of duplicate clients added to the NDIIS were also compared, in addition to immunization rates for children 19-35 months of age and adolescents 11-18 years of age. Results: Doses administered by both interoperable and non-interoperable providers remained fairly consistent from pre-enhancement through twelve months post-enhancement. Comparing immunization rates for infants and adolescents, interoperable providers had higher rates both pre- and post-enhancement than non-interoperable providers for all vaccines and vaccine series assessed. The overall percentage of doses entered into the NDIIS within one month of administration varied slightly between interoperable and non-interoperable providers; however, there were significant changes between the percentage of doses entered within one day and within one week, with the percentage entered within one day increasing and within one week decreasing with interoperability. The number of duplicate client records created by interoperable providers increased from 94 duplicates pre-enhancement to 10,552 at twelve months post-enhancement, while the duplicates from non-interoperable providers only increased from 300 to 637 over the same period. Of the 40 core data elements in the NDIIS, there was some difference in completeness between the interoperable versus

  5. The impact of electronic health record (EHR) interoperability on immunization information system (IIS) data quality.

    PubMed

    Woinarowicz, Mary; Howell, Molly

    2016-01-01

    Objectives: To evaluate the impact of electronic health record (EHR) interoperability on the quality of immunization data in the North Dakota Immunization Information System (NDIIS). Methods: NDIIS doses administered data was evaluated for completeness of the patient and dose-level core data elements for records that belong to interoperable and non-interoperable providers. Data were compared from three months prior to electronic health record (EHR) interoperability enhancement to three, six, nine and twelve months post-enhancement following the interoperability go-live date. Doses administered per month and by age group, timeliness of vaccine entry and the number of duplicate clients added to the NDIIS were also compared, in addition to immunization rates for children 19-35 months of age and adolescents 11-18 years of age. Results: Doses administered by both interoperable and non-interoperable providers remained fairly consistent from pre-enhancement through twelve months post-enhancement. Comparing immunization rates for infants and adolescents, interoperable providers had higher rates both pre- and post-enhancement than non-interoperable providers for all vaccines and vaccine series assessed. The overall percentage of doses entered into the NDIIS within one month of administration varied slightly between interoperable and non-interoperable providers; however, there were significant changes between the percentage of doses entered within one day and within one week, with the percentage entered within one day increasing and within one week decreasing with interoperability. The number of duplicate client records created by interoperable providers increased from 94 duplicates pre-enhancement to 10,552 at twelve months post-enhancement, while the duplicates from non-interoperable providers only increased from 300 to 637 over the same period. Of the 40 core data elements in the NDIIS, there was some difference in completeness between the interoperable versus non-interoperable

  6. Semantic perception for ground robotics

    NASA Astrophysics Data System (ADS)

    Hebert, M.; Bagnell, J. A.; Bajracharya, M.; Daniilidis, K.; Matthies, L. H.; Mianzo, L.; Navarro-Serment, L.; Shi, J.; Wellfare, M.

    2012-06-01

    Semantic perception involves naming objects and features in the scene, understanding the relations between them, and understanding the behaviors of agents, e.g., people, and their intent from sensor data. Semantic perception is a central component of future UGVs to provide representations which 1) can be used for higher-level reasoning and tactical behaviors, beyond the immediate needs of autonomous mobility, and 2) provide an intuitive description of the robot's environment in terms of semantic elements that can shared effectively with a human operator. In this paper, we summarize the main approaches that we are investigating in the RCTA as initial steps toward the development of perception systems for UGVs.

  7. Workspaces in the Semantic Web

    NASA Technical Reports Server (NTRS)

    Wolfe, Shawn R.; Keller, RIchard M.

    2005-01-01

    Due to the recency and relatively limited adoption of Semantic Web technologies, practical issues related to technology scaling have received less attention than foundational issues. Nonetheless, these issues must be addressed if the Semantic Web is to realize its full potential. In particular, we concentrate on the lack of scoping methods that reduce the size of semantic information spaces so they are more efficient to work with and more relevant to an agent's needs. We provide some intuition to motivate the need for such reduced information spaces, called workspaces, give a formal definition, and suggest possible methods of deriving them.

  8. High Performance Descriptive Semantic Analysis of Semantic Graph Databases

    SciTech Connect

    Joslyn, Cliff A.; Adolf, Robert D.; al-Saffar, Sinan; Feo, John T.; Haglin, David J.; Mackey, Greg E.; Mizell, David W.

    2011-06-02

    As semantic graph database technology grows to address components ranging from extant large triple stores to SPARQL endpoints over SQL-structured relational databases, it will become increasingly important to be able to understand their inherent semantic structure, whether codified in explicit ontologies or not. Our group is researching novel methods for what we call descriptive semantic analysis of RDF triplestores, to serve purposes of analysis, interpretation, visualization, and optimization. But data size and computational complexity make it increasingly necessary to bring high performance computational resources to bear on this task. Our research group built a novel high performance hybrid system comprising computational capability for semantic graph database processing utilizing the large multi-threaded architecture of the Cray XMT platform, conventional servers, and large data stores. In this paper we describe that architecture and our methods, and present the results of our analyses of basic properties, connected components, namespace interaction, and typed paths for the Billion Triple Challenge 2010 dataset.
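    As a small illustration of one analysis named above, connected components can be computed by treating RDF triples as an undirected graph. The toy triples below stand in for data that, in the paper, runs to a billion triples on specialized hardware; the sketch uses the networkx library.

```python
import networkx as nx

# Toy stand-in triples (subject, predicate, object).
triples = [
    ("ex:a", "ex:knows", "ex:b"),
    ("ex:b", "ex:knows", "ex:c"),
    ("ex:x", "ex:cites", "ex:y"),
]

G = nx.Graph()
for s, p, o in triples:
    G.add_edge(s, o, predicate=p)   # ignore edge direction for components

for comp in nx.connected_components(G):
    print(sorted(comp))             # two components: {a,b,c} and {x,y}
```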

  9. A Personal Health Record System for Diabetes Care Conforming to the ISO 16527 Interoperability Requirements.

    PubMed

    Cerón, Jesús D; Gómez, Guillermo A; López, Diego M; González, Carolina; Blobel, Bernd

    2014-01-01

    A Personal Health Record (PHR) is a health information repository controlled and managed directly by a patient or his/her custodian, or a person interested in his/her own health. A PHR system's adoption of and compliance with international standards is of foremost importance because it can help to meet international, national, regional or institutional interoperability and portability policies. In this paper, an interoperable PHR system for supporting the control of type 2 diabetes mellitus is proposed, which meets the mandatory interoperability requirements proposed in the Personal Health Record System Functional Model standard (ISO 16527). After performing a detailed analysis of different applications and platforms for the implementation of electronic Personal Health Records, the adaptation of the Indivo Health open source platform was completed. Interoperability functions were added to this platform by integrating the Mirth Connect platform. The assessment of the platform's interoperability capabilities was carried out by a group of experts, who verified the interoperability requirements proposed in the ISO 16527 standard.

  10. A Public Health Response to Data Interoperability to Prevent Child Maltreatment

    PubMed Central

    2014-01-01

    The sharing of data, particularly health data, has been an important tool for the public health community, especially in terms of data sharing across systems (i.e., interoperability). Child maltreatment is a serious public health issue that could be better mitigated if there were interoperability. There are challenges to addressing child maltreatment interoperability that include the current lack of data sharing among systems, the lack of laws that promote interoperability to address child maltreatment, and the lack of data sharing at the individual level. There are waivers in federal law that allow for interoperability to prevent communicable diseases at the individual level. Child maltreatment has a greater long-term impact than a number of communicable diseases combined, and interoperability should be leveraged to maximize public health strategies to prevent child maltreatment. PMID:25211715

  11. Policy Issues in Accessibility and Interoperability of Scientific Data: Experiences from the Carbon Modeling Field

    NASA Astrophysics Data System (ADS)

    Kishor, P.; Peckham, S. D.; Gower, S. T.; Batzli, S.

    2010-12-01

    Large-scale terrestrial ecosystem modeling is highly parameterized and requires lots of historical data. Routine model runs can easily utilize hundreds of gigabytes, even terabytes, of data on tens, perhaps hundreds, of parameters. It is a given that no one modeler can or does collect all the required data. All modelers depend upon other scientists, and governmental and research agencies, for their data needs. This is where data accessibility and interoperability become crucial for the success of the project. Having well-documented, quality data available in a timely fashion can greatly assist a project's progress, while the converse can bring the project to a standstill, leading to a large amount of wasted staff time and resources. Data accessibility is a complex issue -- at best, it is an unscientific composite of a variety of factors: technological, legal, cultural, semantic, and economic. In reality, it is a concept that most scientists only worry about when they need some data, and mostly never after their project is complete. The exigencies of the vetting, review and publishing processes overtake the long-term view of making one's own data available to others with the same ease and openness that was desired when seeking data from others. This presentation describes our experience with acquiring data for our carbon modeling efforts, dealing with federal, state and local agencies, a variety of data formats, some published, some not so easy to find, and documentation that ranges from excellent to non-existent. A set of indicators is proposed to place and determine the accessibility of scientific data -- those we are seeking and those we are producing -- in order to bring some transparency and clarity that can make data acquisition and sharing easier. The paper concludes with a proposal to utilize free, open and well-recognized data marks such as CC0 (CC-Zero), the Public Domain Dedication License, and CC-BY created by Creative Commons that would advertise the

  12. Constructing a semantic predication gold standard from the biomedical literature

    PubMed Central

    2011-01-01

    Background Semantic relations increasingly underpin biomedical text mining and knowledge discovery applications. The success of such practical applications crucially depends on the quality of extracted relations, which can be assessed against a gold standard reference. Most such references in biomedical text mining focus on narrow subdomains and adopt different semantic representations, rendering them difficult to use for benchmarking independently developed relation extraction systems. In this article, we present a multi-phase gold standard annotation study, in which we annotated 500 sentences randomly selected from MEDLINE abstracts on a wide range of biomedical topics with 1371 semantic predications. The UMLS Metathesaurus served as the main source for conceptual information and the UMLS Semantic Network for relational information. We measured interannotator agreement and analyzed the annotations closely to identify some of the challenges in annotating biomedical text with relations based on an ontology or a terminology. Results We obtain fair to moderate interannotator agreement in the practice phase (0.378-0.475). With improved guidelines and additional semantic equivalence criteria, the agreement increases by 12% (0.415 to 0.536) in the main annotation phase. In addition, we find that agreement increases to 0.688 when the agreement calculation is limited to those predications that are based only on the explicitly provided UMLS concepts and relations. Conclusions While interannotator agreement in the practice phase confirms that conceptual annotation is a challenging task, the increasing agreement in the main annotation phase points out that an acceptable level of agreement can be achieved in multiple iterations, by setting stricter guidelines and establishing semantic equivalence criteria. Mapping text to ontological concepts emerges as the main challenge in conceptual annotation. Annotating predications involving biomolecular entities and processes is
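
    Agreement figures like those above are normally chance-corrected. As a hedged illustration (the abstract does not state which statistic was computed), the following Python snippet implements Cohen's kappa for two hypothetical annotators labeling predications:

```python
from collections import Counter

def cohens_kappa(ann1, ann2):
    """Chance-corrected agreement between two annotators over paired labels."""
    n = len(ann1)
    observed = sum(a == b for a, b in zip(ann1, ann2)) / n
    c1, c2 = Counter(ann1), Counter(ann2)
    expected = sum(c1[k] * c2[k] for k in set(c1) | set(c2)) / n ** 2
    return (observed - expected) / (1 - expected)

# Invented predication labels (UMLS-style relation types) from two annotators.
a = ["TREATS", "CAUSES", "TREATS", "LOCATION_OF", "CAUSES"]
b = ["TREATS", "CAUSES", "CAUSES", "LOCATION_OF", "TREATS"]
print(round(cohens_kappa(a, b), 3))   # 0.375 for this toy sample
```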

  13. Schoolbook Texts: Behavioral Achievement Priming in Math and Language

    PubMed Central

    Engeser, Stefan; Baumann, Nicola; Baum, Ingrid

    2016-01-01

    Prior research found reliable and considerably strong effects of semantic achievement primes on subsequent performance. In order to simulate a more natural priming condition to better understand the practical relevance of semantic achievement priming effects, running texts of schoolbook excerpts with and without achievement primes were used as priming stimuli. Additionally, we manipulated the achievement context; some subjects received no feedback about their achievement and others received feedback according to a social or individual reference norm. As expected, we found a reliable (albeit small) positive behavioral priming effect of semantic achievement primes on achievement in math (Experiment 1) and language tasks (Experiment 2). Feedback moderated the behavioral priming effect less consistently than we expected. The implication that achievement primes in schoolbooks can foster performance is discussed along with general theoretical implications. PMID:26938446

  14. Schoolbook Texts: Behavioral Achievement Priming in Math and Language.

    PubMed

    Engeser, Stefan; Baumann, Nicola; Baum, Ingrid

    2016-01-01

    Prior research found reliable and considerably strong effects of semantic achievement primes on subsequent performance. In order to simulate a more natural priming condition to better understand the practical relevance of semantic achievement priming effects, running texts of schoolbook excerpts with and without achievement primes were used as priming stimuli. Additionally, we manipulated the achievement context; some subjects received no feedback about their achievement and others received feedback according to a social or individual reference norm. As expected, we found a reliable (albeit small) positive behavioral priming effect of semantic achievement primes on achievement in math (Experiment 1) and language tasks (Experiment 2). Feedback moderated the behavioral priming effect less consistently than we expected. The implication that achievement primes in schoolbooks can foster performance is discussed along with general theoretical implications.

  15. Problem Solving with General Semantics.

    ERIC Educational Resources Information Center

    Hewson, David

    1996-01-01

    Discusses how to use general semantics formulations to improve problem solving at home or at work--methods come from the areas of artificial intelligence/computer science, engineering, operations research, and psychology. (PA)

  16. Distributed semantic networks and CLIPS

    NASA Technical Reports Server (NTRS)

    Snyder, James; Rodriguez, Tony

    1991-01-01

    Semantic networks of frames are commonly used as a method of reasoning in many problems. In most of these applications the semantic network exists as a single entity in a single process environment. Advances in workstation hardware provide support for more sophisticated applications involving multiple processes, interacting in a distributed environment. In these applications the semantic network may well be distributed over several concurrently executing tasks. This paper describes the design and implementation of a frame based, distributed semantic network in which frames are accessed both through C Language Integrated Production System (CLIPS) expert systems and procedural C++ language programs. The application area is a knowledge based, cooperative decision making model utilizing both rule based and procedural experts.
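
    As a rough, language-agnostic illustration of the frame idea (not the CLIPS/C++ implementation described in the paper), the Python sketch below stores frames as dictionaries and resolves slot values through an "is_a" inheritance chain:

```python
# Toy frame store with single inheritance; the frames are invented.
FRAMES = {
    "vehicle": {"is_a": None, "wheels": 4, "powered": True},
    "bicycle": {"is_a": "vehicle", "wheels": 2, "powered": False},
    "tandem":  {"is_a": "bicycle", "seats": 2},
}

def get_slot(frame, slot):
    """Walk the is_a chain until the slot is found."""
    while frame is not None:
        if slot in FRAMES[frame]:
            return FRAMES[frame][slot]
        frame = FRAMES[frame]["is_a"]
    raise KeyError(slot)

print(get_slot("tandem", "wheels"))   # 2, inherited from bicycle
print(get_slot("tandem", "powered"))  # False, also inherited
```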

  17. Semantic priming from crowded words.

    PubMed

    Yeh, Su-Ling; He, Sheng; Cavanagh, Patrick

    2012-06-01

    Vision in a cluttered scene is extremely inefficient. This damaging effect of clutter, known as crowding, affects many aspects of visual processing (e.g., reading speed). We examined observers' processing of crowded targets in a lexical decision task, using single-character Chinese words that are compact but carry semantic meaning. Despite being unrecognizable and indistinguishable from matched nonwords, crowded prime words still generated robust semantic-priming effects on lexical decisions for test words presented in isolation. Indeed, the semantic-priming effect of crowded primes was similar to that of uncrowded primes. These findings show that the meanings of words survive crowding even when the identities of the words do not, suggesting that crowding does not prevent semantic activation, a process that may have evolved in the context of a cluttered visual environment.

  18. NASA and The Semantic Web

    NASA Technical Reports Server (NTRS)

    Ashish, Naveen

    2005-01-01

    We provide an overview of several ongoing NASA endeavors based on concepts, systems, and technology from the Semantic Web arena. Indeed NASA has been one of the early adopters of Semantic Web Technology and we describe ongoing and completed R&D efforts for several applications ranging from collaborative systems to airspace information management to enterprise search to scientific information gathering and discovery systems at NASA.

  19. Semantic preview benefit during reading.

    PubMed

    Hohenstein, Sven; Kliegl, Reinhold

    2014-01-01

    Word features in parafoveal vision influence eye movements during reading. The question of whether readers extract semantic information from parafoveal words was studied in 3 experiments by using a gaze-contingent display change technique. Subjects read German sentences containing 1 of several preview words that were replaced by a target word during the saccade to the preview (boundary paradigm). In the 1st experiment the preview word was semantically related or unrelated to the target. Fixation durations on the target were shorter for semantically related than unrelated previews, consistent with a semantic preview benefit. In the 2nd experiment, half the sentences were presented following the rules of German spelling (i.e., previews and targets were printed with an initial capital letter), and the other half were presented completely in lowercase. A semantic preview benefit was obtained under both conditions. In the 3rd experiment, we introduced 2 further preview conditions, an identical word and a pronounceable nonword, while also manipulating the text contrast. Whereas the contrast had negligible effects, fixation durations on the target were reliably different for all 4 types of preview. Semantic preview benefits were greater for pretarget fixations closer to the boundary (large preview space) and, although not as consistently, for long pretarget fixation durations (long preview time). The results constrain theoretical proposals about eye movement control in reading.

  20. Data Access Services interoperability in the Geosciences by means of the GI-axe Brokering Framework

    NASA Astrophysics Data System (ADS)

    Boldrini, Enrico; Santoro, Mattia; Papeschi, Fabrizio; Nativi, Stefano

    2013-04-01

    Many software tools are in use in the different Geosciences domains with the aim of publishing, accessing, evaluating and using available datasets in a service-based environment. These tools/services are often domain-specific and usually support a small, disciplinary set of protocols and data models. On the other hand, multidisciplinary applications need to access many of these tools/services belonging to different domains in order to retrieve heterogeneous datasets (e.g. satellite-acquired gridded coverages and in-situ sensor time series), then "uniformly process them" and achieve a deeper insight. Moreover, to be easily processed, datasets should be available according to a given Common Grid Environment (CGE): i.e. a geospatial environment characterized by a common spatio-temporal CRS (Coordinate Reference System), resolution and extension, and by a common format encoding. Currently, the interoperability effort needed by multidisciplinary applications is ordinarily left to data providers' servers or user clients: in both cases, this represents a high entry barrier. The GI-axe Access Broker addresses this interoperability issue by taking charge of the needed implementation effort. It acts as an intermediation service between the User Clients and the Data Provider Services, placing itself in a third-party (Broker) layer. Indeed, the Access Broker can access datasets available through well-known access services in use by the Geosciences communities (e.g. OGC WCS, WMS, WFS, OPeNDAP, FTP, REST APIs, …) and republish them according to the application client interfaces. Moreover, GI-axe transforms datasets according to a CGE specified by users. In doing so it may resort to external processing services already in use by the community, supplementing the functionalities already supported by the data provider services. The external processing services list can be configured by users. GI-axe is also a flexible framework, composed of extensible components. This architecture
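
    As an illustration of the kind of standards-based access such a broker mediates, the Python sketch below requests a map from an OGC WMS endpoint with the OWSLib client; the service URL, layer name and grid parameters are placeholders, not GI-axe configuration:

```python
from owslib.wms import WebMapService

# Hypothetical WMS endpoint and layer; swap in a real service to run.
wms = WebMapService("https://example.org/geoserver/wms", version="1.3.0")
img = wms.getmap(
    layers=["sea_surface_temperature"],  # placeholder layer name
    srs="EPSG:4326",                     # common CRS of the target grid environment
    bbox=(-10.0, 35.0, 5.0, 45.0),       # lon/lat extent
    size=(512, 512),                     # target resolution
    format="image/png",
)
with open("sst.png", "wb") as f:
    f.write(img.read())
```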

  1. Making OGC standards work - interoperability testing between meteorological web services

    NASA Astrophysics Data System (ADS)

    Siemen, Stephan; Little, Chris; Voidrot, Marie-Françoise

    2015-04-01

    The Meteorology and Oceanography Domain Working Group (Met Ocean DWG) is a community-oriented working group of the Open Geospatial Consortium (OGC). The group does not directly revise OGC standards, but rather enables collaboration and communication between groups with meteorological and oceanographic interests. The Met Ocean DWG maintains a list of topics of interest to the meteorological and oceanographic communities for discussion, prioritises activities, defines feedback to the OGC Standards Working Groups (SWGs), and performs interoperability experiments. One of the activities of the Met Ocean DWG is the definition of Best Practices documents for common OGC standards, such as WMS and WCS. This is necessary since meteorological data has additional complexities in time, elevation and multi-model runs, including ensembles. To guarantee interoperability in practice, it is important to test each other's systems and ensure standards are implemented correctly, but also to make recommendations to the DWG on the establishment of Best Practices guides. The European Working Group on Operational meteorological Workstations (EGOWS) was founded in 1990 as an informal forum for people working on the development of operational meteorological workstations. The annual EGOWS meeting offers an excellent platform for exchanging information and furthering co-operation among the experts from NMSs, ECMWF and other institutes working with OGC standards. The presentation will give an update on the testing done during the June 2014 EGOWS meeting in Oslo and what has happened since. The presenter will also give an overview of the online resources for following the tests and how interested parties can contribute to future interoperability tests.

  2. Progress of Interoperability in Planetary Research for Geospatial Data Analysis

    NASA Astrophysics Data System (ADS)

    Hare, T. M.; Gaddis, L. R.

    2015-12-01

    For nearly a decade there has been a push in the planetary science community to support interoperable methods of accessing and working with geospatial data. Common geospatial data products for planetary research include image mosaics, digital elevation or terrain models, geologic maps, geographic location databases (e.g., craters, volcanoes) or any data that can be tied to the surface of a planetary body (including moons, comets or asteroids). Several U.S. and international cartographic research institutions have converged on mapping standards that embrace standardized image formats that retain geographic information (e.g., GeoTiff, GeoJpeg2000), digital geologic mapping conventions, planetary extensions for symbols that comply with U.S. Federal Geographic Data Committee cartographic and geospatial metadata standards, and notably on-line mapping services as defined by the Open Geospatial Consortium (OGC). The latter include defined standards such as the OGC Web Mapping Services (simple image maps), Web Feature Services (feature streaming), Web Coverage Services (rich scientific data streaming), and Catalog Services for the Web (data searching and discoverability). While these standards were developed for application to Earth-based data, they have been modified to support the planetary domain. The motivation to support common, interoperable data format and delivery standards is not only to improve access to higher-level products but also to address the increasingly distributed nature of the rapidly growing volumes of data. The strength of using an OGC approach is that it provides consistent access to data that are distributed across many facilities. While data-streaming standards are well supported by the more sophisticated tools used in the Geographic Information System (GIS) and remote sensing industries, they are also supported by many light-weight browsers, which facilitates both large and small focused science applications and public use. Here we provide an

  3. Standardized Semantic Markup for Reference Terminologies, Thesauri and Coding Systems: Benefits for distributed E-Health Applications.

    PubMed

    Hoelzer, Simon; Schweiger, Ralf K; Liu, Raymond; Rudolf, Dirk; Rieger, Joerg; Dudeck, Joachim

    2005-01-01

    With the introduction of the ICD-10 as the standard for diagnosis, the development of an electronic representation of its complete content, inherent semantics and coding rules is necessary. Our concept refers to current efforts of the CEN/TC 251 to establish a European standard for hierarchical classification systems in healthcare. We have developed an electronic representation of the ICD-10 with the Extensible Markup Language (XML) that facilitates its integration into current information systems or coding software, taking into account different languages and versions. In this context, XML offers a complete framework of related technologies and standard tools for processing that helps to develop interoperable applications.
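
    A minimal sketch of how such an electronic representation can be processed follows; the element and attribute names are invented for illustration and do not reproduce the CEN/TC 251 schema:

```python
import xml.etree.ElementTree as ET

# Toy XML rendering of a hierarchical coding system (invented markup).
ICD_XML = """
<classification system="ICD-10" version="illustrative">
  <class code="E10-E14" label="Diabetes mellitus">
    <class code="E11" label="Type 2 diabetes mellitus">
      <class code="E11.2" label="Type 2 diabetes with renal complications"/>
    </class>
  </class>
</classification>
"""

root = ET.fromstring(ICD_XML)
# Index every class element by its code for direct lookup.
by_code = {c.get("code"): c for c in root.iter("class")}
print(by_code["E11.2"].get("label"))
```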

  4. Language interoperability mechanisms for high-performance scientific applications

    SciTech Connect

    Cleary, A; Kohn, S; Smith, S G; Smolinski, B

    1998-09-18

    Language interoperability is a difficult problem facing the developers and users of large numerical software packages. Language choices often hamper the reuse and sharing of numerical libraries, especially in a scientific computing environment that uses a breadth of programming languages, including C, C++, Java, various Fortran dialects, and scripting languages such as Python. In this paper, we propose a new approach to language interoperability for high-performance scientific applications based on Interface Definition Language (IDL) techniques. We investigate the modifications necessary to adapt traditional IDL approaches for use by the scientific community, including IDL extensions for numerical computing and issues involved in mapping IDLs to Fortran 77 and Fortran 90.

  5. CCSDS Spacecraft Monitor and Control Mission Operations Interoperability Prototype

    NASA Technical Reports Server (NTRS)

    Lucord, Steve; Martinez, Lindolfo

    2009-01-01

    We are entering a new era in space exploration. Reduced operating budgets require innovative solutions that leverage existing systems to implement the capabilities of future missions. Custom solutions to fulfill mission objectives are no longer viable. Can NASA adopt international standards to reduce costs and increase interoperability with other space agencies? Can legacy systems be leveraged in a service-oriented architecture (SOA) to further reduce operations costs? The Operations Technology Facility (OTF) at the Johnson Space Center (JSC) is collaborating with the Deutsches Zentrum für Luft- und Raumfahrt (DLR) to answer these very questions. The Mission Operations and Information Management Services Area (MOIMS) Spacecraft Monitor and Control (SM&C) Working Group within the Consultative Committee for Space Data Systems (CCSDS) is developing the Mission Operations standards to address this problem space. The set of proposed standards presents a service-oriented architecture to increase the level of interoperability among space agencies. The OTF and DLR are developing independent implementations of the standards as part of an interoperability prototype. This prototype will address three key components: validation of the SM&C Mission Operations protocol, exploration of the Object Management Group (OMG) Data Distribution Service (DDS), and the incorporation of legacy systems in a SOA. The OTF will implement the service providers described in the SM&C Mission Operations standards to create a portal for interaction with a spacecraft simulator. DLR will implement the service consumers to perform the monitoring and control of the spacecraft. The specifications insulate the applications from the underlying transport layer. We will gain experience with a DDS transport layer as we delegate responsibility to the middleware and explore transport bridges to connect disparate middleware products. A SOA facilitates the reuse of software components. The prototype will leverage the

  6. Interoperability design of personal health information import service.

    PubMed

    Tuomainen, Mika; Mykkänen, Juha

    2012-01-01

    Availability of personal health information for individual use from professional patient records is an important success factor for personal health information management (PHIM) solutions such as personal health records. In this paper we focus on this crucial part of personal wellbeing information management solutions and report the interoperability design of a personal health information import service. Key requirements as well as design factors for interfaces between PHRs and EPRs are discussed. Open standards, a low implementation threshold, and the acknowledgement of local markets and conventions are emphasized in the design.

  7. Leveraging Python Interoperability Tools to Improve Sapphire's Usability

    SciTech Connect

    Gezahegne, A; Love, N S

    2007-12-10

    The Sapphire project at the Center for Applied Scientific Computing (CASC) develops and applies an extensive set of data mining algorithms for the analysis of large data sets. Sapphire's algorithms are currently available as a set of C++ libraries. However many users prefer higher level scripting languages such as Python for their ease of use and flexibility. In this report, we evaluate four interoperability tools for the purpose of wrapping Sapphire's core functionality with Python. Exposing Sapphire's functionality through a Python interface would increase its usability and connect its algorithms to existing Python tools.
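
    One common wrapping route, shown here only as a sketch and not necessarily among the four tools the report evaluated, is ctypes from the Python standard library; it assumes a hypothetical C-compatible shim compiled around the C++ library:

```python
import ctypes

# "libsapphire_shim.so" and cluster_points are hypothetical; a real C++
# library would need an extern "C" shim exposing this flat interface.
lib = ctypes.CDLL("./libsapphire_shim.so")
lib.cluster_points.argtypes = [ctypes.POINTER(ctypes.c_double), ctypes.c_int]
lib.cluster_points.restype = ctypes.c_int

data = (ctypes.c_double * 4)(1.0, 2.0, 8.0, 9.0)   # toy 1-D points
print(lib.cluster_points(data, len(data)))          # e.g., number of clusters
```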

  8. ODIP - Ocean Data Interoperability Platform - developing interoperability Pilot project 1

    NASA Astrophysics Data System (ADS)

    Schaap, D.

    2014-12-01

    Europe, the USA, Australia and IOC/IODE are making significant progress in facilitating the discovery of and access to marine data through the development, implementation, population and operation of national, regional or international distributed ocean and marine observing and data management infrastructures such as SeaDataNet, Geo-Seas, IOOS, the Australian Ocean Portal and the IODE Ocean Data Portal. All of these efforts are resulting in the development and implementation of standards for the formats of metadata, data, data products, quality control methods and flags, and common vocabularies. They are also providing services for data discovery, viewing and downloading, and software tools for editing, conversions, communication, analysis and presentation, all of which are increasingly being adopted and used by their national and regional marine communities. The Ocean Data Interoperability Platform (ODIP) project is supported by the EU FP7 Research Infrastructures programme, the National Science Foundation (USA) and the Australian government, and started on 1 October 2012. ODIP includes all the major organisations engaged in ocean data management in the EU, US and Australia. ODIP is also supported by the IOC-IODE, closely linking this activity with its Ocean Data Portal (ODP) and Ocean Data Standards (ODS) projects. The ODIP platform aims to ease interoperability between the regional marine data management infrastructures. Therefore it facilitates an organised dialogue between the key infrastructure representatives by means of publishing best practice, organising international workshops and fostering the development of common standards and interoperability solutions. These are evaluated and tested by means of prototype projects. The ODIP Prototype project 1 aims at establishing interoperability between the regional EU, USA and Australia data discovery and access services (SeaDataNet CDI, US NODC, and IMOS MCP) and contributing to the global GEOSS and IODE-ODP Portals. Use is

  9. Exploiting and developing interoperability between multidisciplinary environmental research infrastructures in Europe - step toward international collaboration

    NASA Astrophysics Data System (ADS)

    Sorvari, S.; Asmi, A.; Konijn, J.; Pursula, A.; Los, W.; Laj, P.; Kutsch, W. L.

    2014-12-01

    Environmental research infrastructures are long-term facilities, resources, and related services that are used by research communities to conduct environmental research in their respective fields. The focus of the European environmental research infrastructures is on in-situ or short-range remote-sensing infrastructures. Each environmental research infrastructure (RI) has its own particular set of science questions and foci that it must address to achieve its objectives; however, every RI also provides its data and services to wider user communities and thus contributes to broader trans- and interdisciplinary science questions and grand environmental challenges. Thus, there are many issues that most of the RIs share, e.g. data collection, preservation, quality control, integration and availability, as well as providing computational capability to researchers. ENVRI - Common Operation of European Research Infrastructures - was a collaborative project of the major European environmental RIs working towards increased cooperation and interoperability between the infrastructures (www.envri.eu). From the technological point of view, one of the major results is the common Environmental RIs Reference Model, a tool to effectively enhance interoperability among RIs. In addition to common technical solutions, cultural and human-related topics need to be tackled in parallel with the technical work: open access, data policy issues (licenses, citation agreements, IPR agreements), technologies for machine-machine interaction, workflows, metadata, data annotations, and the training of the data scientists and research generalists needed to make it all work and be implemented. These three interdependent resource capitals (technological, including the ENVRI Reference Model; cultural; and human) will be discussed in the presentation.

  10. Semantic photo synthesis

    NASA Astrophysics Data System (ADS)

    Johnson, Matthew; Brostow, G. J.; Shotton, J.; Kwatra, V.; Cipolla, R.

    2007-02-01

    Composite images are synthesized from existing photographs by artists who make concept art, e.g. storyboards for movies or architectural planning. Current techniques allow an artist to fabricate such an image by digitally splicing parts of stock photographs. While these images serve mainly to "quickly" convey how a scene should look, their production is laborious. We propose a technique that allows a person to design a new photograph with substantially less effort. This paper presents a method that generates a composite image when a user types in nouns, such as "boat" and "sand." The artist can optionally design an intended image by specifying other constraints. Our algorithm formulates the constraints as queries to search an automatically annotated image database. The desired photograph, not a collage, is then synthesized using graph-cut optimization, optionally allowing for further user interaction to edit or choose among alternative generated photos. Our results demonstrate our contributions of (1) a method of creating specific images with minimal human effort, and (2) a combined algorithm for automatically building an image library with semantic annotations from any photo collection.

  11. Web Image Re-Ranking Using Query-Specific Semantic Signatures.

    PubMed

    Wang, Xiaogang; Qiu, Shi; Liu, Ke; Tang, Xiaoou

    2014-04-01

    Image re-ranking, as an effective way to improve the results of web-based image search, has been adopted by current commercial search engines such as Bing and Google. Given a query keyword, a pool of images is first retrieved based on textual information. By asking the user to select a query image from the pool, the remaining images are re-ranked based on their visual similarities with the query image. A major challenge is that the similarities of visual features do not correlate well with images' semantic meanings, which capture users' search intention. Recent work proposed matching images in a semantic space whose basis consists of attributes or reference classes closely related to the semantic meanings of images. However, learning a universal visual semantic space to characterize highly diverse images from the web is difficult and inefficient. In this paper, we propose a novel image re-ranking framework, which automatically learns, offline, different semantic spaces for different query keywords. The visual features of images are projected into their related semantic spaces to get semantic signatures. At the online stage, images are re-ranked by comparing their semantic signatures obtained from the semantic space specified by the query keyword. The proposed query-specific semantic signatures significantly improve both the accuracy and efficiency of image re-ranking. The original visual features of thousands of dimensions can be projected to semantic signatures as short as 25 dimensions. Experimental results show that a 25-40 percent relative improvement in re-ranking precision has been achieved compared with the state-of-the-art methods.
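
    The core idea can be pictured with a toy Python sketch: each image carries a short signature over reference classes, and candidates are ordered by the similarity of their signatures to the query image's. The vectors below are fabricated for illustration only:

```python
import numpy as np

def cosine(u, v):
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

# Invented 3-dimensional signatures, e.g. over classes [dog, grass, car].
query_sig = np.array([0.7, 0.2, 0.1])
candidates = {
    "img_a": np.array([0.6, 0.3, 0.1]),
    "img_b": np.array([0.1, 0.1, 0.8]),
    "img_c": np.array([0.8, 0.1, 0.1]),
}
ranked = sorted(candidates, key=lambda k: cosine(candidates[k], query_sig),
                reverse=True)
print(ranked)   # img_c and img_a outrank the semantically distant img_b
```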

  12. Building Interoperable Learning Objects Using Reduced Learning Object Metadata

    ERIC Educational Resources Information Center

    Saleh, Mostafa S.

    2005-01-01

    The new e-learning generation depends on Semantic Web technology to produce learning objects. As the production of these components is very costly, they should be produced and registered once, and reused and adapted in the same context or in other contexts as often as possible. To produce those components, developers should use learning standards…

  13. The agent-based spatial information semantic grid

    NASA Astrophysics Data System (ADS)

    Cui, Wei; Zhu, YaQiong; Zhou, Yong; Li, Deren

    2006-10-01

    management layer establishes a virtual environment that integrates all GIS nodes seamlessly. 2) When the resource management system searches data on different spatial information systems, it exchanges the meanings of different Local Ontology Agents rather than accessing data directly, so search and query operate at the semantic level. 3) The data access procedure is transparent to guests; that is, they can access information from a remote site as if it were a local disk, because the General Ontology Agent automatically links data through the Data Agents that connect ontology concepts to GIS data. 4) The capability of processing massive spatial data: storing, accessing and managing massive spatial data from TB to PB; efficiently analyzing and processing spatial data to produce models, information and knowledge; and providing 3D and multimedia visualization services. 5) The capability of high-performance computing and processing of spatial information: solving spatial problems with high precision, high quality, and on a large scale, and processing spatial information in real time or on time, with high speed and high efficiency. 6) The capability of sharing spatial resources: the distributed heterogeneous spatial information resources are shared, integrated and made interoperable at the semantic level, so as to make the best use of spatial information resources such as computing resources, storage devices, spatial data (integrated from GIS, RS and GPS), spatial applications and services, and GIS platforms. 7) The capability of integrating legacy GIS systems: an ASISG can not only be used to construct new advanced spatial application systems, but can also integrate legacy GIS systems, so as to preserve extensibility and inheritance and protect users' investment. 8) The capability of collaboration: large-scale spatial information applications and services always involve different departments in different geographic places, so remote and uniform services are needed. 9) The

  14. Challenges in Microbial Database Interoperability Interagency Microbe Project Working Group

    SciTech Connect

    Critchlow, T

    2001-11-21

    Currently, data of interest to microbial researchers are spread across hundreds of web-accessible data sources, each with a unique interface and data format. Researchers interact with a few of these sites when they analyze their data, but are not able to utilize the majority of them on a regular basis. There are two significant challenges that must be overcome to integrate this environment and allow researchers to efficiently perform data analysis across the entire set of relevant data, or at least a significant portion of it. The first is to provide consistent access to the large numbers of distributed, heterogeneous data sets that are currently spread over the web. The second is to define the semantics of the data provided by the individual sites in such a way that semantic conflicts can be identified and, ideally, resolved. The first step in establishing any integrated environment, from a data warehouse to a multi-database system, is to provide consistent access to all of the relevant sources. While the type of access required will vary based on the integration strategy chosen--for example, federated systems use query-based access while warehouses may prefer access to the underlying database--the essence of this challenge remains the same. Thus, without sacrificing generality, the remainder of this discussion focuses on query-based access. Each data source independently determines the queries that it supports, how it will answer them, and the interface that it exposes for making them. Even when the same query capability is provided by different sources, the details of the interface are usually different. For example, while many sequence data sources support BLAST searches, they differ in parameter names, available options, script locations, etc. These differences are not restricted solely to input parameters; the query results returned by different sources also vary dramatically, with some sources returning XML, others preformatted text, and still others a
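
    A minimal sketch of such a consistent-access layer follows; the source names and parameter mappings are invented for illustration:

```python
# Map canonical query parameters onto each source's own interface.
CANONICAL_TO_SOURCE = {
    "source_a": {"sequence": "seq", "cutoff": "expect"},
    "source_b": {"sequence": "query_seq", "cutoff": "e_value"},
}

def build_query(source: str, **canonical) -> dict:
    """Translate canonical parameter names into a source-specific request."""
    mapping = CANONICAL_TO_SOURCE[source]
    return {mapping[name]: value for name, value in canonical.items()}

print(build_query("source_a", sequence="ACGT", cutoff=1e-5))
print(build_query("source_b", sequence="ACGT", cutoff=1e-5))
```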

  15. ACTS 118x: High Speed TCP Interoperability Testing

    NASA Technical Reports Server (NTRS)

    Brooks, David E.; Buffinton, Craig; Beering, Dave R.; Welch, Arun; Ivancic, William D.; Zernic, Mike; Hoder, Douglas J.

    1999-01-01

    With the recent explosion of the Internet and the enormous business opportunities available to communication system providers, great interest has developed in improving the efficiency of data transfer over satellite links using the Transmission Control Protocol (TCP) of the Internet Protocol (IP) suite. NASA's ACTS experiments program initiated a series of TCP experiments to demonstrate the scalability of TCP/IP and determine to what extent the protocol can be optimized over a 622 Mbps satellite link. Through partnerships with government technology-oriented labs and the computer, telecommunication, and satellite industries, NASA Glenn was able to: (1) promote the development of interoperable, high-performance TCP/IP implementations across multiple computing/operating platforms; (2) work with the satellite industry to answer outstanding questions regarding the use of standard protocols (TCP/IP and ATM) for the delivery of advanced data services, and for use in spacecraft architectures; and (3) conduct a series of TCP/IP interoperability tests over OC12 ATM over a satellite network in a multi-vendor environment using ACTS. The experiments' various network configurations and the results are presented.

  16. Designing for Change: Interoperability in a scaling and adapting environment

    NASA Astrophysics Data System (ADS)

    Yarmey, L.

    2015-12-01

    The Earth Science cyberinfrastructure landscape is constantly changing. Technologies advance and technical implementations are refined or replaced. Data types, volumes, packaging, and use cases evolve. Scientific requirements emerge and mature. Standards shift while systems scale and adapt. In this complex and dynamic environment, interoperability remains a critical component of successful cyberinfrastructure. Through the resource- and priority-driven iterations on systems, interfaces, and content, questions fundamental to stable and useful Earth Science cyberinfrastructure arise. For instance, how are sociotechnical changes planned, tracked, and communicated? How should operational stability balance against 'new and shiny'? How can ongoing maintenance and mitigation of technical debt be managed in an often short-term resource environment? The Arctic Data Explorer is a metadata brokering application developed to enable discovery of international, interdisciplinary Arctic data across distributed repositories. Completely dependent on interoperable third party systems, the Arctic Data Explorer publicly launched in 2013 with an original 3000+ data records from four Arctic repositories. Since then the search has scaled to 25,000+ data records from thirteen repositories at the time of writing. In the final months of original project funding, priorities shift to lean operations with a strategic eye on the future. Here we present lessons learned from four years of Arctic Data Explorer design, development, communication, and maintenance work along with remaining questions and potential directions.

  17. Benefits of Linked Data for Interoperability during Crisis Management

    NASA Astrophysics Data System (ADS)

    Roller, R.; Roes, J.; Verbree, E.

    2015-08-01

    Floods represent a permanent risk to the Netherlands in general and to its power supply in particular. Data sharing is essential within this crisis scenario, as a power cut affects a great variety of interdependent sectors. Currently used data sharing systems have been shown to hamper interoperability between stakeholders, since they lack flexibility and there is no consensus on term definitions and interpretations. The study presented in this paper addresses these challenges by proposing a new data sharing solution based on Linked Data, a method of interlinking data points in a structured way on the web. A conceptual model for two data sharing parties in a flood-caused power cut crisis management scenario was developed, to which relevant data were linked. The analysis revealed that the presented data sharing solution burdens its users with extra costs in the short run, but saves resources in the long run by overcoming the interoperability problems of the legacy systems. The more stakeholders adopt Linked Data, the stronger its benefits for data sharing will become.
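
    As a hedged illustration of the approach (with invented URIs and properties, not the study's vocabulary), the Python sketch below uses rdflib to link a power-grid asset and a flood observation through a shared location:

```python
from rdflib import Graph, Namespace
from rdflib.namespace import RDF

EX = Namespace("http://example.org/crisis#")   # invented vocabulary
g = Graph()
g.bind("ex", EX)

g.add((EX.substation_12, RDF.type, EX.PowerSubstation))
g.add((EX.substation_12, EX.locatedIn, EX.district_7))
g.add((EX.flood_obs_88, RDF.type, EX.FloodObservation))
g.add((EX.flood_obs_88, EX.affects, EX.district_7))

# Both stakeholders can now traverse the shared district_7 node.
print(g.serialize(format="turtle"))
```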

  19. Interoperability portcullises and technology battering rams for distributed simulation

    NASA Astrophysics Data System (ADS)

    Dunkelberger, Kirk A.

    1997-06-01

    The construction, execution, and analysis of application-oriented simulations is difficult; the integration, coordinated execution, and after-action review of heterogeneous distributed simulations can be overwhelming. Economy, risk mitigation, and just plain common sense compel us to utilize legacy simulations, but discrepancies in controllability, fidelity, implementation paradigm, algorithms, representations, time management, construction, etc. tend to negate any potential gain. While several generations of interoperability approaches and associated standards have emerged and matured, even they have been limited in their ability to accommodate disparate classes of simulations. Within the permitted scope of this paper, a taxonomy for the most common interoperability issues (portcullises) for distributed simulation is developed. Part of this identification process will consist of establishing contexts and/or prerequisites for the issues, e.g., under what conditions the issues are actually issues at all. As a result, the prioritization will become application dependent. Methods for resolving the issues (battering rams), couched in the form of case studies, are subsequently presented to close the circle. Sources will include industry and government state-of-the-practice, academic state-of-the-art, and our own broad experience. Specific topics to be discussed include application philosophy, the integration of live entities, investigative versus analytical simulation, implications of human-in-the-loop, mixed and/or variable fidelity, heterogeneous time management schemes, current and emerging distributed simulation standards, simulation/exercise management, and control and data distribution. Discussion will focus heavily on examples and experience.

  1. An interoperability test framework for HL7-based systems.

    PubMed

    Namli, Tuncay; Aluc, Gunes; Dogac, Asuman

    2009-05-01

    Health Level Seven (HL7) is a prominent messaging standard in the eHealth domain, and with HL7 v2, it addresses only the messaging layer. However, HL7 implementations also deal with the other layers of interoperability, namely the business process layer and the communication layer. This need is addressed in HL7 v3 by providing a number of normative transport specification profiles. Furthermore, there are storyboards describing HL7 v3 message choreographies between specific roles in specific events. Having alternative transport protocols and descriptive message choreographies introduces great flexibility in implementing HL7 standards, yet, this brings in the need for test frameworks that can accommodate different protocols and permit the dynamic definition of test scenarios. In this paper, we describe a complete test execution framework for HL7-based systems that provides high-level constructs allowing dynamic set up of test scenarios involving all the layers in the interoperability stack. The computer-interpretable test description language developed offers a configurable system with pluggable adaptors. The Web-based GUIs make it possible to test systems over the Web anytime, anywhere, and with any party willing to do so. PMID:19304492

  2. PyMOOSE: Interoperable Scripting in Python for MOOSE.

    PubMed

    Ray, Subhasis; Bhalla, Upinder S

    2008-01-01

    Python is emerging as a common scripting language for simulators. This opens up many possibilities for interoperability in the form of analysis, interfaces, and communications between simulators. We report the integration of Python scripting with the Multi-scale Object Oriented Simulation Environment (MOOSE). MOOSE is a general-purpose simulation system for compartmental neuronal models and for models of signaling pathways based on chemical kinetics. We show how the Python-scripting version of MOOSE, PyMOOSE, combines the power of a compiled simulator with the versatility and ease of use of Python. We illustrate this by using Python numerical libraries to analyze MOOSE output online, and by developing a GUI in Python/Qt for a MOOSE simulation. Finally, we build and run a composite neuronal/signaling model that uses both the NEURON and MOOSE numerical engines, and Python as a bridge between the two. Thus PyMOOSE has a high degree of interoperability with analysis routines, with graphical toolkits, and with other simulators. PMID:19129924

  3. Leveraging electronic healthcare record standards and semantic web technologies for the identification of patient cohorts

    PubMed Central

    Fernández-Breis, Jesualdo Tomás; Maldonado, José Alberto; Marcos, Mar; Legaz-García, María del Carmen; Moner, David; Torres-Sospedra, Joaquín; Esteban-Gil, Angel; Martínez-Salvador, Begoña; Robles, Montserrat

    2013-01-01

    Background The secondary use of electronic healthcare records (EHRs) often requires the identification of patient cohorts. In this context, an important problem is the heterogeneity of clinical data sources, which can be overcome with the combined use of standardized information models, virtual health records, and semantic technologies, since each of them contributes to solving aspects related to the semantic interoperability of EHR data. Objective To develop methods allowing for a direct use of EHR data for the identification of patient cohorts leveraging current EHR standards and semantic web technologies. Materials and methods We propose to take advantage of the best features of working with EHR standards and ontologies. Our proposal is based on our previous results and experience working with both technological infrastructures. Our main principle is to perform each activity at the abstraction level with the most appropriate technology available. This means that part of the processing will be performed using archetypes (ie, data level) and the rest using ontologies (ie, knowledge level). Our approach will start working with EHR data in proprietary format, which will be first normalized and elaborated using EHR standards and then transformed into a semantic representation, which will be exploited by automated reasoning. Results We have applied our approach to protocols for colorectal cancer screening. The results comprise the archetypes, ontologies, and datasets developed for the standardization and semantic analysis of EHR data. Anonymized real data have been used and the patients have been successfully classified by the risk of developing colorectal cancer. Conclusions This work provides new insights in how archetypes and ontologies can be effectively combined for EHR-driven phenotyping. The methodological approach can be applied to other problems provided that suitable archetypes, ontologies, and classification rules can be designed. PMID:23934950
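
    At its simplest, exploiting a semantic representation for cohort identification can be pictured as a SPARQL query over RDF-encoded EHR extracts. The Python sketch below uses rdflib with an invented vocabulary and an invented risk rule; it is not the study's archetype-to-ontology pipeline:

```python
from rdflib import Graph, Literal, Namespace
from rdflib.namespace import RDF, XSD

EX = Namespace("http://example.org/ehr#")   # invented vocabulary
g = Graph()
g.add((EX.patient_1, RDF.type, EX.Patient))
g.add((EX.patient_1, EX.age, Literal(62, datatype=XSD.integer)))
g.add((EX.patient_1, EX.hasFinding, EX.AdenomatousPolyp))
g.add((EX.patient_2, RDF.type, EX.Patient))
g.add((EX.patient_2, EX.age, Literal(35, datatype=XSD.integer)))

COHORT_QUERY = """
PREFIX ex: <http://example.org/ehr#>
SELECT ?p WHERE {
  ?p a ex:Patient ; ex:age ?age ; ex:hasFinding ex:AdenomatousPolyp .
  FILTER (?age >= 50)
}
"""
for row in g.query(COHORT_QUERY):
    print(row.p)   # only patient_1 matches this toy screening rule
```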

  4. Accelerating Cancer Systems Biology Research through Semantic Web Technology

    PubMed Central

    Wang, Zhihui; Sagotsky, Jonathan; Taylor, Thomas; Shironoshita, Patrick; Deisboeck, Thomas S.

    2012-01-01

    Cancer systems biology is an interdisciplinary, rapidly expanding research field in which collaborations are a critical means to advance the field. Yet the prevalent database technologies often isolate data rather than making it easily accessible. The Semantic Web has the potential to help facilitate web-based collaborative cancer research by presenting data in a manner that is self-descriptive, human and machine readable, and easily sharable. We have created a semantically linked online Digital Model Repository (DMR) for storing, managing, executing, annotating, and sharing computational cancer models. Within the DMR, distributed, multidisciplinary, and inter-organizational teams can collaborate on projects, without forfeiting intellectual property. This is achieved by the introduction of a new stakeholder to the collaboration workflow, the institutional licensing officer, part of the Technology Transfer Office. Furthermore, the DMR has achieved silver level compatibility with the National Cancer Institute’s caBIG®, so users can not only interact with the DMR through a web browser but also through a semantically annotated and secure web service. We also discuss the technology behind the DMR leveraging the Semantic Web, ontologies, and grid computing to provide secure inter-institutional collaboration on cancer modeling projects, online grid-based execution of shared models, and the collaboration workflow protecting researchers’ intellectual property. PMID:23188758

  5. Accelerating cancer systems biology research through Semantic Web technology.

    PubMed

    Wang, Zhihui; Sagotsky, Jonathan; Taylor, Thomas; Shironoshita, Patrick; Deisboeck, Thomas S

    2013-01-01

    Cancer systems biology is an interdisciplinary, rapidly expanding research field in which collaborations are a critical means to advance the field. Yet the prevalent database technologies often isolate data rather than making it easily accessible. The Semantic Web has the potential to help facilitate web-based collaborative cancer research by presenting data in a manner that is self-descriptive, human and machine readable, and easily sharable. We have created a semantically linked online Digital Model Repository (DMR) for storing, managing, executing, annotating, and sharing computational cancer models. Within the DMR, distributed, multidisciplinary, and inter-organizational teams can collaborate on projects, without forfeiting intellectual property. This is achieved by the introduction of a new stakeholder to the collaboration workflow, the institutional licensing officer, part of the Technology Transfer Office. Furthermore, the DMR has achieved silver level compatibility with the National Cancer Institute's caBIG, so users can interact with the DMR not only through a web browser but also through a semantically annotated and secure web service. We also discuss the technology behind the DMR leveraging the Semantic Web, ontologies, and grid computing to provide secure inter-institutional collaboration on cancer modeling projects, online grid-based execution of shared models, and the collaboration workflow protecting researchers' intellectual property.

  6. CINERGI: Community Inventory of EarthCube Resources for Geoscience Interoperability

    NASA Astrophysics Data System (ADS)

    Zaslavsky, Ilya; Bermudez, Luis; Grethe, Jeffrey; Gupta, Amarnath; Hsu, Leslie; Lehnert, Kerstin; Malik, Tanu; Richard, Stephen; Valentine, David; Whitenack, Thomas

    2014-05-01

    Organizing geoscience data resources to support cross-disciplinary data discovery, interpretation, analysis and integration is challenging because of different information models, semantic frameworks, metadata profiles, catalogs, and services used in different geoscience domains, not to mention different research paradigms and methodologies. The central goal of CINERGI, a new project supported by the US National Science Foundation through its EarthCube Building Blocks program, is to create a methodology and assemble a large inventory of high-quality information resources capable of supporting data discovery needs of researchers in a wide range of geoscience domains. The key characteristics of the inventory are: 1) collaboration with and integration of metadata resources from a number of large data facilities; 2) reliance on international metadata and catalog service standards; 3) assessment of resource "interoperability-readiness"; 4) ability to cross-link and navigate data resources, projects, models, researcher directories, publications, usage information, etc.; 5) efficient inclusion of "long-tail" data, which are not appearing in existing domain repositories; 6) data registration at feature level where appropriate, in addition to common dataset-level registration, and 7) integration with parallel EarthCube efforts, in particular focused on EarthCube governance, information brokering, service-oriented architecture design and management of semantic information. We discuss challenges associated with accomplishing CINERGI goals, including defining the inventory scope; managing different granularity levels of resource registration; interaction with search systems of domain repositories; explicating domain semantics; metadata brokering, harvesting and pruning; managing provenance of the harvested metadata; and cross-linking resources based on the linked open data (LOD) approaches. At the higher level of the inventory, we register domain-wide resources such as domain

  7. Understanding Software Interoperability in a Technology-Supported System of Education.

    ERIC Educational Resources Information Center

    Rowley, Kurt

    1995-01-01

    As technical compatibility standards have become critical in business and industrial computing, educational software interoperability is rapidly becoming an issue for users and developers of educational information systems. New interoperability initiatives are under way in library automation, higher education information services, and K-12…

  8. Examining the Relationship between Electronic Health Record Interoperability and Quality Management

    ERIC Educational Resources Information Center

    Purcell, Bernice M.

    2013-01-01

    A lack of interoperability impairs data quality among health care providers' electronic health record (EHR) systems. The problem is whether the International Organization for Standardization (ISO) 9000 principles relate to the problem of interoperability in implementation of EHR systems. The purpose of the nonexperimental quantitative…

  9. Evaluation of Interoperability Protocols in Repositories of Electronic Theses and Dissertations

    ERIC Educational Resources Information Center

    Hakimjavadi, Hesamedin; Masrek, Mohamad Noorman

    2013-01-01

    Purpose: The purpose of this study is to evaluate the status of eight interoperability protocols within repositories of electronic theses and dissertations (ETDs) as an introduction to further studies on feasibility of deploying these protocols in upcoming areas of interoperability. Design/methodology/approach: Three surveys of 266 ETD…

  10. IT Labs Proof-of-Concept Project: Technical Data Interoperability (TDI) Pathfinder Via Emerging Standards

    NASA Technical Reports Server (NTRS)

    Conroy, Mike; Gill, Paul; Ingalls, John; Bengtsson, Kjell

    2014-01-01

    No known system is in place to allow NASA technical data interoperability throughout the whole life cycle. Life Cycle Cost (LCC) will be higher on many developing programs if action isn't taken soon to join disparate systems efficiently. Disparate technical data also increases safety risks from poorly integrated elements. NASA requires interoperability and industry standards, but breaking legacy ways is a challenge.

  11. An open platform for promoting interoperability in solar system sciences

    NASA Astrophysics Data System (ADS)

    Csillaghy, André; Aboudarham, Jean; Berghmans, David; Jacquey, Christian

    2013-04-01

    The European coordination project CASSIS is promoting the creation of an integrated data space that will facilitate science across community boundaries in solar system sciences. Many disciplines may need to use the same data set to support scientific research, although the way it is used may depend on the project and on the particular piece of science. Often, access is hindered by differences in the way the different communities describe and store their data, as well as in how they make them accessible. Working towards this goal, we have set up an open collaboration platform, www.explorespace.eu, that can serve as a hub for discovering and developing interoperability resources in the communities involved. The platform is independent of the project and will be maintained well after the end of the funding. As a first step, we have captured the description of services already provided by the community. The openness of the collaboration platform should allow all stakeholders to discuss ways to make key types of metadata and derived products more complete and coherent, and thus more usable across domain boundaries. Furthermore, software resources and discussions should help facilitate the development of interoperable services. The platform, along with the database of services, addresses the following questions, which we consider crucial for promoting interoperability: • Current extent of the data space coverage: What part of the common data space is already covered by the existing interoperable services in terms of data access? In other words, what data, from catalogues as well as from raw data, can be reached by an application through standard protocols today? • Needed extension of the data space coverage: What would be needed to extend the data space coverage? In other words, how can the currently accessible data space be extended by adding services? • Missing services: What applications / services are still missing and need to be developed? This is

  12. Convergence of Health Level Seven Version 2 Messages to Semantic Web Technologies for Software-Intensive Systems in Telemedicine Trauma Care

    PubMed Central

    Cook, Timothy Wayne; Cavalini, Luciana Tricai

    2016-01-01

    Objectives To present the technical background and the development of a procedure that enriches the semantics of Health Level Seven version 2 (HL7v2) messages for software-intensive systems in telemedicine trauma care. Methods This study followed a multilevel model-driven approach for the development of semantically interoperable health information systems. The Pre-Hospital Trauma Life Support (PHTLS) ABCDE protocol was adopted as the use case. A prototype application embedded the semantics into an HL7v2 message as an eXtensible Markup Language (XML) file, which was validated against an XML schema that defines constraints on a common reference model. This message was exchanged with a second prototype application, developed on the Mirth middleware, which was also used to parse and validate both the original and the hybrid messages. Results Both versions of the data instance (one pure XML, one embedded in the HL7v2 message) were equally validated, and the RDF-based semantics were recovered by the receiving side of the prototype from the shared XML schema. Conclusions This study demonstrated the semantic enrichment of HL7v2 messages for software-intensive telemedicine systems for trauma care, by validating components of extracts generated in various computing environments. The method proposed in this study preserves compliance with the HL7v2 standard while adopting Semantic Web technologies. PMID:26893947
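
    The round trip described above (validate an XML instance against the reference-model schema, then carry it inside an HL7v2 message) can be sketched as follows. This is an illustrative reconstruction, not the authors' code; the schema path, message fields and segment layout are assumptions.

```python
from lxml import etree

def embed_xml_in_hl7v2(xml_path: str, xsd_path: str) -> str:
    # Validate the data instance against the common-reference-model schema.
    schema = etree.XMLSchema(etree.parse(xsd_path))
    doc = etree.parse(xml_path)
    schema.assertValid(doc)  # raises DocumentInvalid if constraints are violated

    payload = etree.tostring(doc, encoding="unicode")
    # HL7v2 reserves '|' as the field separator; escape it inside the payload.
    payload = payload.replace("|", "\\F\\")

    # Hypothetical message skeleton: one OBX segment carries the XML extract.
    msh = "MSH|^~\\&|SENDER|SITE|RECEIVER|SITE|202401010000||ORU^R01|MSG0001|P|2.5"
    obx = f"OBX|1|ED|PHTLS^ABCDE-assessment||{payload}||||||F"
    return "\r".join([msh, obx])
```

    On the receiving side, the same shared schema can re-validate the extracted payload before the RDF-based semantics are recovered.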

  13. Data federation in the Biomedical Informatics Research Network: tools for semantic annotation and query of distributed multiscale brain data.

    PubMed

    Bug, William; Astahkov, Vadim; Boline, Jyl; Fennema-Notestine, Christine; Grethe, Jeffrey S; Gupta, Amarnath; Kennedy, David N; Rubin, Daniel L; Sanders, Brian; Turner, Jessica A; Martone, Maryann E

    2008-01-01

    The broadly defined mission of the Biomedical Informatics Research Network (BIRN, www.nbirn.net) is to better understand the causes of human disease and the specific ways in which animal models inform that understanding. To construct the community-wide infrastructure for gathering, organizing and managing this knowledge, BIRN is developing a federated architecture for linking multiple databases across sites contributing data and knowledge. Navigating across these distributed data sources requires a shared semantic scheme and supporting software framework to actively link the disparate repositories. At the core of this knowledge organization is BIRNLex, a formally-represented ontology facilitating data exchange. Source curators enable database interoperability by mapping their schema and data to BIRNLex semantic classes, thereby providing a means to cast BIRNLex-based queries against specific data sources in the federation. We will illustrate use of the source registration, term mapping, and query tools.
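
    The term-mapping step described here can be illustrated with a toy rdflib sketch: a curator maps a local schema column to an ontology class, after which an ontology-level query reaches the source. The URIs and property names are placeholders, not actual BIRNLex identifiers.

```python
from rdflib import Graph, Namespace, RDF

BIRNLEX = Namespace("http://example.org/birnlex#")  # placeholder, not the real namespace
SRC = Namespace("http://example.org/source-db#")

g = Graph()
# Curator's mapping: local column 'hippocampus_vol' denotes the class Hippocampus.
g.add((SRC.hippocampus_vol, BIRNLEX.denotes, BIRNLEX.Hippocampus))
g.add((BIRNLEX.Hippocampus, RDF.type, BIRNLEX.BrainRegion))

# An ontology-level question ("which source columns denote brain regions?")
# answered through the mapping:
query = """
SELECT ?column WHERE {
  ?column <http://example.org/birnlex#denotes> ?cls .
  ?cls a <http://example.org/birnlex#BrainRegion> .
}
"""
for row in g.query(query):
    print(row.column)  # -> http://example.org/source-db#hippocampus_vol
```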

  14. Semantic graphs and associative memories.

    PubMed

    Pomi, Andrés; Mizraji, Eduardo

    2004-12-01

    Graphs have been increasingly utilized in the characterization of complex networks from diverse origins, including different kinds of semantic networks. Human memories are associative and are known to support complex semantic nets; these nets are represented by graphs. However, it is not known how the brain can sustain these semantic graphs. The vision of cognitive brain activities, shown by modern functional imaging techniques, assigns renewed value to classical distributed associative memory models. Here we show that these neural network models, also known as correlation matrix memories, naturally support a graph representation of the stored semantic structure. We demonstrate that the adjacency matrix of this graph of associations is just the memory coded with the standard basis of the concept vector space, and that the spectrum of the graph is a code invariant of the memory. As long as the assumptions of the model remain valid, this result provides a practical method to predict and modify the evolution of the cognitive dynamics. Also, it could provide us with a way to comprehend how individual brains that map the external reality, almost surely with different particular vector representations, are nevertheless able to communicate and share a common knowledge of the world. We finish by presenting adaptive association graphs, an extension of the model that makes use of the tensor product, which provides a solution to the known problem of branching in semantic nets.
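
    The central claim (the memory coded with the standard basis is the adjacency matrix, and the graph spectrum is an invariant of the code) can be checked numerically. A minimal sketch, with an arbitrary toy graph:

```python
import numpy as np

n = 4
E = np.eye(n)                                # standard basis: concept i -> e_i
edges = [(0, 1), (1, 2), (2, 0), (2, 3)]     # directed associations i -> j

# Correlation matrix memory: sum of outer products pairing input e_i with output e_j.
M = sum(np.outer(E[j], E[i]) for i, j in edges)

# Adjacency matrix of the same graph (column = source, row = target).
A = np.zeros((n, n))
for i, j in edges:
    A[j, i] = 1.0
assert np.array_equal(M, A)                  # the memory *is* the adjacency matrix

print(np.linalg.eigvals(M))                  # graph spectrum = invariant of the code
```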

  15. Semantic graphs and associative memories

    NASA Astrophysics Data System (ADS)

    Pomi, Andrés; Mizraji, Eduardo

    2004-12-01

    Graphs have been increasingly utilized in the characterization of complex networks from diverse origins, including different kinds of semantic networks. Human memories are associative and are known to support complex semantic nets; these nets are represented by graphs. However, it is not known how the brain can sustain these semantic graphs. The vision of cognitive brain activities, shown by modern functional imaging techniques, assigns renewed value to classical distributed associative memory models. Here we show that these neural network models, also known as correlation matrix memories, naturally support a graph representation of the stored semantic structure. We demonstrate that the adjacency matrix of this graph of associations is just the memory coded with the standard basis of the concept vector space, and that the spectrum of the graph is a code invariant of the memory. As long as the assumptions of the model remain valid, this result provides a practical method to predict and modify the evolution of the cognitive dynamics. Also, it could provide us with a way to comprehend how individual brains that map the external reality, almost surely with different particular vector representations, are nevertheless able to communicate and share a common knowledge of the world. We finish by presenting adaptive association graphs, an extension of the model that makes use of the tensor product, which provides a solution to the known problem of branching in semantic nets.

  16. A Semantic Web Management Model for Integrative Biomedical Informatics

    PubMed Central

    Deus, Helena F.; Stanislaus, Romesh; Veiga, Diogo F.; Behrens, Carmen; Wistuba, Ignacio I.; Minna, John D.; Garner, Harold R.; Swisher, Stephen G.; Roth, Jack A.; Correa, Arlene M.; Broom, Bradley; Coombes, Kevin; Chang, Allen; Vogel, Lynn H.; Almeida, Jonas S.

    2008-01-01

    Background Data, data everywhere. The diversity and magnitude of the data generated in the Life Sciences defy automated articulation among complementary efforts. The additional need in this field for managing property and access permissions compounds the difficulty very significantly. This is particularly the case when the integration involves multiple domains and disciplines, even more so when it includes clinical and high-throughput molecular data. Methodology/Principal Findings The emergence of Semantic Web technologies brings the promise of meaningful interoperation between data and analysis resources. In this report we identify a core model for biomedical Knowledge Engineering applications and demonstrate how this new technology can be used to weave a management model where multiple intertwined data structures can be hosted and managed by multiple authorities in a distributed management infrastructure. Specifically, the demonstration is performed by linking data sources associated with the Lung Cancer SPORE awarded to The University of Texas MD Anderson Cancer Center at Houston and the Southwestern Medical Center at Dallas. A software prototype, available as open source at www.s3db.org, was developed, and its proposed design has been made publicly available as an open source instrument for shared, distributed data management. Conclusions/Significance Semantic Web technologies have the potential to address the need for distributed and evolvable representations that are critical for systems biology and translational biomedical research. As this technology is incorporated into application development, we can expect that both general-purpose productivity software and domain-specific software installed on our personal computers will become increasingly integrated with the relevant remote resources. In this scenario, the acquisition of a new dataset should automatically trigger the delegation of its analysis. PMID:18698353

  17. Telemonitoring systems interoperability challenge: an updated review of the applicability of ISO/IEEE 11073 standards for interoperability in telemonitoring.

    PubMed

    Galarraga, M; Serrano, L; Martinez, I; de Toledo, P; Reynolds, Melvin

    2007-01-01

    Advances in Information and Communication Technologies (ICT) are bringing new opportunities and use cases in the field of systems and Personal Health Devices used for the telemonitoring of citizens in Home or Mobile scenarios. At a time of such challenges, this review arises from the need to identify robust technical telemonitoring solutions that are both open and interoperable. These systems demand standardized solutions to be cost effective and to take advantage of standardized operation and interoperability. Thus, the fundamental challenge is to design plug-&-play devices that, either as individual elements or as components, can be incorporated in a simple way into different Telecare systems, perhaps configuring a personal user network. Moreover, there is increasing market pressure from companies not traditionally involved in medical markets asking for a standard for Personal Health Devices, which foresee a vast demand for telemonitoring, wellness, Ambient Assisted Living (AAL) and e-health applications. However, the newly emerging situations imply very strict requirements for the protocols involved in the communication. The ISO/IEEE 11073 family of standards is adapting and moving in order to face this challenge and appears to be the best-positioned set of international standards to reach this goal. This work presents an updated survey of these standards, tracking the changes that are under way, and aims to serve as a starting point for those who want to familiarize themselves with them. PMID:18003427

  18. Neurocybernetic basis of semantic processes.

    PubMed

    Restian, A

    1984-11-01

    Although semantics cannot be reduced to neurophysiology, it must nevertheless have a certain neurophysiologic basis, and this paper deals with that basis, which is, in fact, neurocybernetic. The paper first approaches the relations between information and signification and their role in the workings of the nervous system. It then analyses the semantic function, uncovering neurocybernetic mechanisms that apply not only to conventional signs but also to objects and phenomena that can, in turn, play the sign's part. Finally, the semantic levels of the nervous system are described, from the most elementary level of units, such as letters, up to the level of the highest ideas and concepts the brain works with.

  19. Action semantics modulate action prediction.

    PubMed

    Springer, Anne; Prinz, Wolfgang

    2010-11-01

    Previous studies have demonstrated that action prediction involves an internal action simulation that runs time-locked to the real action. The present study replicates and extends these findings by indicating a real-time simulation process (Graf et al., 2007) that can be differentiated from a similarity-based evaluation of internal action representations. Moreover, the results showed that action semantics modulate action prediction accuracy. The semantic effect was specified by the processing of action verbs and concrete nouns (Experiment 1) and, more specifically, by the dynamics described by action verbs (Experiment 2) and the speed described by the verbs (e.g., "to catch" vs. "to grasp" vs. "to stretch"; Experiment 3). These results suggest a linkage between action simulation and action semantics, two hitherto unrelated domains, a view that coincides with the recent notion of a close link between motor processes and the understanding of action language.

  20. The semantics of biological forms.

    PubMed

    Albertazzi, Liliana; Canal, Luisa; Dadam, James; Micciolo, Rocco

    2014-01-01

    This study analyses how certain qualitative perceptual appearances of biological forms correlate with expressions of natural language. Making use of the Osgood semantic differential, we presented subjects with 32 drawings of biological forms and a list of 10 pairs of connotative adjectives to be correlated with them solely through subjective judgments. A principal components analysis made it possible to group the semantics of forms along two distinct axes of variability: harmony and dynamicity. Specifically, the nonspiculed, nonholed, and flat forms were perceived as harmonic and static; the rounded ones as harmonic and dynamic. The elongated forms were somewhat disharmonious and somewhat static. The results suggest the existence in the general population of a correspondence between perceptual and semantic processes, and of a nonsymbolic relation between visual forms and their adjectival expressions in natural language.
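
    The analysis pipeline (semantic-differential ratings reduced by principal components) can be sketched schematically; the data below are random stand-ins for the study's 32 forms x 10 adjective scales.

```python
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
ratings = rng.integers(1, 8, size=(32, 10)).astype(float)  # 32 forms x 10 scales

pca = PCA(n_components=2)
scores = pca.fit_transform(ratings)           # PCA centres the data itself
print(pca.explained_variance_ratio_)          # variance carried by the two axes
print(scores[:3])                             # first three forms in the 2-D plane
```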

  1. Ontology Matching with Semantic Verification

    PubMed Central

    Jean-Mary, Yves R.; Shironoshita, E. Patrick; Kabuka, Mansur R.

    2009-01-01

    ASMOV (Automated Semantic Matching of Ontologies with Verification) is a novel algorithm that uses lexical and structural characteristics of two ontologies to iteratively calculate a similarity measure between them, derives an alignment, and then verifies it to ensure that it does not contain semantic inconsistencies. In this paper, we describe the ASMOV algorithm, and then present experimental results that measure its accuracy using the OAEI 2008 tests, and that evaluate its use with two different thesauri: WordNet, and the Unified Medical Language System (UMLS). These results show the increased accuracy obtained by combining lexical, structural and extensional matchers with semantic verification, and demonstrate the advantage of using a domain-specific thesaurus for the alignment of specialized ontologies. PMID:20186256
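
    The iterate-then-verify loop that ASMOV describes can be caricatured in a few lines; the scoring functions below are simplified stand-ins for the lexical, structural and extensional matchers, and the "verification" is a crude parent-consistency check rather than the paper's semantic inconsistency rules.

```python
from difflib import SequenceMatcher

def lexical_sim(a: str, b: str) -> float:
    return SequenceMatcher(None, a.lower(), b.lower()).ratio()

def structural_sim(a, b, sim, parents_a, parents_b):
    pa, pb = parents_a.get(a), parents_b.get(b)
    return sim.get((pa, pb), 0.0) if pa and pb else 0.0

def asmov_like(concepts_a, concepts_b, parents_a, parents_b, iters=10):
    sim = {(a, b): lexical_sim(a, b) for a in concepts_a for b in concepts_b}
    for _ in range(iters):  # iteratively mix lexical and structural evidence
        sim = {(a, b): 0.7 * lexical_sim(a, b)
                       + 0.3 * structural_sim(a, b, sim, parents_a, parents_b)
               for a in concepts_a for b in concepts_b}
    # Greedy alignment, then "verification": reject pairs whose parents are not
    # themselves aligned with each other.
    alignment = {a: max(concepts_b, key=lambda b: sim[(a, b)]) for a in concepts_a}
    return {a: b for a, b in alignment.items()
            if not (parents_a.get(a) and parents_b.get(b))
            or alignment.get(parents_a[a]) == parents_b[b]}

print(asmov_like(["person", "staff member"], ["human", "employee"],
                 {"staff member": "person"}, {"employee": "human"}))
```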

  2. Ontology Matching with Semantic Verification.

    PubMed

    Jean-Mary, Yves R; Shironoshita, E Patrick; Kabuka, Mansur R

    2009-09-01

    ASMOV (Automated Semantic Matching of Ontologies with Verification) is a novel algorithm that uses lexical and structural characteristics of two ontologies to iteratively calculate a similarity measure between them, derives an alignment, and then verifies it to ensure that it does not contain semantic inconsistencies. In this paper, we describe the ASMOV algorithm, and then present experimental results that measure its accuracy using the OAEI 2008 tests, and that evaluate its use with two different thesauri: WordNet, and the Unified Medical Language System (UMLS). These results show the increased accuracy obtained by combining lexical, structural and extensional matchers with semantic verification, and demonstrate the advantage of using a domain-specific thesaurus for the alignment of specialized ontologies.

  3. 75 FR 59290 - In the Matter of Certain Liquid Crystal Display Devices and Products Interoperable With the Same...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-09-27

    ... COMMISSION In the Matter of Certain Liquid Crystal Display Devices and Products Interoperable With the Same... States after importation of certain liquid crystal display devices and products interoperable with the... after importation of certain liquid crystal display devices and products interoperable with the...

  4. Supporting interoperability of collaborative networks through engineering of a service-based Mediation Information System (MISE 2.0)

    NASA Astrophysics Data System (ADS)

    Benaben, Frederick; Mu, Wenxin; Boissel-Dallier, Nicolas; Barthe-Delanoe, Anne-Marie; Zribi, Sarah; Pingaud, Herve

    2015-08-01

    The Mediation Information System Engineering project is currently finishing its second iteration (MISE 2.0). The main objective of this scientific project is to provide any emerging collaborative situation with methods and tools to deploy a Mediation Information System (MIS). MISE 2.0 aims at defining and designing a service-based platform, dedicated to initiating and supporting the interoperability of collaborative situations among potential partners. This MISE 2.0 platform implements a model-driven engineering approach to the design of a service-oriented MIS dedicated to supporting the collaborative situation. The approach is structured in three layers, each with its own key innovations: (i) the gathering of individual and collaborative knowledge to provide appropriate collaborative business behaviour (key point: knowledge management, including semantics, exploitation and capitalisation); (ii) the deployment of a mediation information system able to computerise the previously deduced collaborative processes (key point: the automatic generation of collaborative workflows, including connection with existing devices or services); and (iii) the management of the agility of the obtained collaborative network of organisations (key point: supervision of collaborative situations and relevant exploitation of the gathered data). MISE covers business issues (through BPM), technical issues (through an SOA) and agility issues of collaborative situations (through EDA).

  5. Abstraction and natural language semantics.

    PubMed Central

    Kayser, Daniel

    2003-01-01

    According to the traditional view, a word prototypically denotes a class of objects sharing similar features, i.e. it results from an abstraction based on the detection of common properties in perceived entities. I explore here another idea: words result from the abstraction of common premises in the rules governing our actions. I first argue that taking 'inference', instead of 'reference', as the basic issue in semantics does matter. I then discuss two phenomena that are, in my opinion, particularly difficult to analyse within the scope of traditional semantic theories: systematic polysemy and plurals. I conclude with a discussion of my approach and a summary of its main features. PMID:12903662

  6. Bootstrapping to a Semantic Grid

    SciTech Connect

    Schwidder, Jens; Talbott, Tara; Myers, James D.

    2005-02-28

    The Scientific Annotation Middleware (SAM) is a set of components and services that enable researchers, applications, problem solving environments (PSE) and software agents to create metadata and annotations about data objects and document the semantic relationships between them. Developed starting in 2001, SAM allows applications to encode metadata within files or to manage metadata at the level of individual relationships as desired. SAM then provides mechanisms to expose metadata and relationships encoded either way as WebDAV properties. In this paper, we report on work to further map this metadata into RDF and discuss the role of middleware such as SAM in bridging between traditional and semantic grid applications.
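
    The mapping of per-object properties into RDF mentioned here might look roughly like the following rdflib sketch; the property names and the SAM namespace URI are invented for illustration.

```python
from rdflib import Graph, Literal, Namespace, URIRef

SAM = Namespace("http://example.org/sam#")     # invented namespace for the sketch

def webdav_props_to_rdf(resource_url: str, props: dict) -> Graph:
    g = Graph()
    subject = URIRef(resource_url)
    for name, value in props.items():
        g.add((subject, SAM[name], Literal(value)))  # one triple per property
    return g

g = webdav_props_to_rdf(
    "http://example.org/data/run42.dat",
    {"creator": "jsmith", "instrument": "NMR-600", "derivedFrom": "run41.dat"},
)
print(g.serialize(format="turtle"))
```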

  7. Semantic processing in information retrieval.

    PubMed Central

    Rindflesch, T. C.; Aronson, A. R.

    1993-01-01

    Intuition suggests that one way to enhance the information retrieval process would be the use of phrases to characterize the contents of text. A number of researchers, however, have noted that phrases alone do not improve retrieval effectiveness. In this paper we briefly review the use of phrases in information retrieval and then suggest extensions to this paradigm using semantic information. We claim that semantic processing, which can be viewed as expressing relations between the concepts represented by phrases, will in fact enhance retrieval effectiveness. The availability of the UMLS domain model, which we exploit extensively, significantly contributes to the feasibility of this processing. PMID:8130547

  8. Order effects in dynamic semantics.

    PubMed

    Graben, Peter Beim

    2014-01-01

    In their target article, Wang and Busemeyer (2013) discuss question order effects in terms of incompatible projectors on a Hilbert space. In a similar vein, Blutner recently presented an orthoalgebraic query language essentially relying on dynamic update semantics. Here, I shall comment on some interesting analogies between the different variants of dynamic semantics and generalized quantum theory to illustrate other kinds of order effects in human cognition, such as belief revision, the resolution of anaphors, and default reasoning that result from the crucial non-commutativity of mental operations upon the belief state of a cognitive agent.
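
    The underlying non-commutativity is easy to exhibit numerically: two incompatible projectors applied in different orders yield different answer probabilities. A small illustration (mine, not from the commentary):

```python
import numpy as np

PA = np.array([[1.0, 0.0], [0.0, 0.0]])       # projector: "yes" to question A
v = np.array([np.cos(np.pi / 8), np.sin(np.pi / 8)])
PB = np.outer(v, v)                            # projector: "yes" to question B

state = np.array([1.0, 1.0]) / np.sqrt(2)      # initial belief state

p_ab = np.linalg.norm(PB @ PA @ state) ** 2    # P("yes" to A, then "yes" to B)
p_ba = np.linalg.norm(PA @ PB @ state) ** 2    # P("yes" to B, then "yes" to A)
print(p_ab, p_ba)                              # differ because PA @ PB != PB @ PA
```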

  9. Metasemantics: On the Limits of Semantic Theory

    ERIC Educational Resources Information Center

    Parent, T.

    2009-01-01

    METASEMANTICS is a wake-up call for semantic theory: It reveals that some semantic questions have no adequate answer. (This is meant to be the "epistemic" point that certain semantic questions cannot be "settled"--not a metaphysical point about whether there is a fact-of-the-matter.) METASEMANTICS thus checks our default "optimism" that any…

  10. Chinese Character Decoding: A Semantic Bias?

    ERIC Educational Resources Information Center

    Williams, Clay; Bever, Thomas

    2010-01-01

    The effects of semantic and phonetic radicals on Chinese character decoding were examined. Our results suggest that semantic and phonetic radicals are each available for access when a corresponding task emphasizes one or the other kind of radical. But in a more neutral lexical recognition task, the semantic radical is more informative. Semantic…

  11. Semantic Weight and Verb Retrieval in Aphasia

    ERIC Educational Resources Information Center

    Barde, Laura H. F.; Schwartz, Myrna F.; Boronat, Consuelo B.

    2006-01-01

    Individuals with agrammatic aphasia may have difficulty with verb production in comparison to nouns. Additionally, they may have greater difficulty producing verbs that have fewer semantic components (i.e., are semantically "light") compared to verbs that have greater semantic weight. A connectionist verb-production model proposed by Gordon and…

  12. Semantic Relatedness for Evaluation of Course Equivalencies

    ERIC Educational Resources Information Center

    Yang, Beibei

    2012-01-01

    Semantic relatedness, or its inverse, semantic distance, measures the degree of closeness between two pieces of text determined by their meaning. Related work typically measures semantics based on a sparse knowledge base such as WordNet or Cyc that requires intensive manual efforts to build and maintain. Other work is based on a corpus such as the…

  13. Test Protocols for Advanced Inverter Interoperability Functions - Appendices

    SciTech Connect

    Johnson, Jay Dean; Gonzalez, Sigifredo; Ralph, Mark E.; Ellis, Abraham; Broderick, Robert Joseph

    2013-11-01

    Distributed energy resources (DER) such as photovoltaic (PV) systems, when deployed at a large scale, are capable of significantly influencing the operation of power systems. Looking to the future, stakeholders are working on standards to make it possible to manage the potentially complex interactions between DER and the power system. In 2009, the Electric Power Research Institute (EPRI), Sandia National Laboratories (SNL) with the U.S. Department of Energy (DOE), and the Solar Electric Power Association (SEPA) initiated a large industry collaborative to identify and standardize definitions for a set of DER grid support functions. While the initial effort concentrated on grid-tied PV inverters and energy storage systems, the concepts have applicability to all DER. A partial product of this on-going effort is a reference definitions document (IEC TR 61850-90-7, Object models for power converters in distributed energy resources (DER) systems) that has become a basis for expansion of related International Electrotechnical Commission (IEC) standards, and is supported by the US National Institute of Standards and Technology (NIST) Smart Grid Interoperability Panel (SGIP). Some industry-led organizations advancing communications protocols have also embraced this work. As standards continue to evolve, it is necessary to develop test protocols to independently verify that the inverters are properly executing the advanced functions. Interoperability is assured by establishing common definitions for the functions and a method to test compliance with operational requirements. This document describes test protocols developed by SNL to evaluate the electrical performance and operational capabilities of PV inverters and energy storage, as described in IEC TR 61850-90-7. While many of these functions are not currently required by existing grid codes or may not be widely available commercially, the industry is rapidly moving in that direction. Interoperability issues are already apparent as

  14. Test Protocols for Advanced Inverter Interoperability Functions – Main Document

    SciTech Connect

    Johnson, Jay Dean; Gonzalez, Sigifredo; Ralph, Mark E.; Ellis, Abraham; Broderick, Robert Joseph

    2013-11-01

    Distributed energy resources (DER) such as photovoltaic (PV) systems, when deployed at a large scale, are capable of significantly influencing the operation of power systems. Looking to the future, stakeholders are working on standards to make it possible to manage the potentially complex interactions between DER and the power system. In 2009, the Electric Power Research Institute (EPRI), Sandia National Laboratories (SNL) with the U.S. Department of Energy (DOE), and the Solar Electric Power Association (SEPA) initiated a large industry collaborative to identify and standardize definitions for a set of DER grid support functions. While the initial effort concentrated on grid-tied PV inverters and energy storage systems, the concepts have applicability to all DER. A partial product of this on-going effort is a reference definitions document (IEC TR 61850-90-7, Object models for power converters in distributed energy resources (DER) systems) that has become a basis for expansion of related International Electrotechnical Commission (IEC) standards, and is supported by the US National Institute of Standards and Technology (NIST) Smart Grid Interoperability Panel (SGIP). Some industry-led organizations advancing communications protocols have also embraced this work. As standards continue to evolve, it is necessary to develop test protocols to independently verify that the inverters are properly executing the advanced functions. Interoperability is assured by establishing common definitions for the functions and a method to test compliance with operational requirements. This document describes test protocols developed by SNL to evaluate the electrical performance and operational capabilities of PV inverters and energy storage, as described in IEC TR 61850-90-7. While many of these functions are not currently required by existing grid codes or may not be widely available commercially, the industry is rapidly moving in that direction. Interoperability issues are already

  15. Extracting semantic lexicons from discharge summaries using machine learning and the C-Value method.

    PubMed

    Jiang, Min; Denny, Josh C; Tang, Buzhou; Cao, Hongxin; Xu, Hua

    2012-01-01

    Semantic lexicons that link words and phrases to specific semantic types such as diseases are valuable assets for clinical natural language processing (NLP) systems. Although terminological terms with predefined semantic types can be generated easily from existing knowledge bases such as the Unified Medical Language System (UMLS), they are often limited and do not have good coverage for narrative clinical text. In this study, we developed a method for building semantic lexicons from a clinical corpus. It extracts candidate semantic terms using a conditional random field (CRF) classifier and then selects terms using the C-Value algorithm. We applied the method to a corpus containing 10 years of discharge summaries from Vanderbilt University Hospital (VUH) and extracted 44,957 new terms for three semantic groups: Problem, Treatment, and Test. A manual analysis of 200 randomly selected terms not found in the UMLS demonstrated that 59% of them were meaningful new clinical concepts and 25% were lexical variants of existing concepts in the UMLS. Furthermore, we compared the effectiveness of corpus-derived and UMLS-derived semantic lexicons in the concept extraction task of the 2010 i2b2 clinical NLP challenge. Our results showed that the classifier with corpus-derived semantic lexicons as features achieved a better performance (F-score 82.52%) than that with UMLS-derived semantic lexicons as features (F-score 82.04%). We conclude that such corpus-based methods are effective for generating semantic lexicons, which may improve named entity recognition tasks and may aid in augmenting synonymy within existing terminologies.
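
    The C-Value selection step can be sketched compactly. The formula below is the standard one (length-weighted frequency, discounted for nested terms); candidate extraction by the CRF classifier is assumed to have already produced the frequencies, and the substring-based nesting test is a simplification.

```python
import math
from collections import defaultdict

def c_values(freq):
    """freq: candidate term -> corpus frequency."""
    nested_in = defaultdict(list)   # term -> longer candidates containing it
    for a in freq:
        for b in freq:
            if a != b and a in b:   # simplification: plain substring test
                nested_in[a].append(b)

    scores = {}
    for a, f in freq.items():
        weight = math.log2(max(len(a.split()), 2))   # length weight (floor at 2)
        containers = nested_in[a]
        if not containers:
            scores[a] = weight * f
        else:                        # discount occurrences inside longer terms
            scores[a] = weight * (f - sum(freq[b] for b in containers) / len(containers))
    return scores

print(c_values({"basal cell carcinoma": 5, "cell carcinoma": 9, "carcinoma": 20}))
```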

  16. Extracting Semantic Lexicons from Discharge Summaries using Machine Learning and the C-Value Method

    PubMed Central

    Jiang, Min; Denny, Josh C.; Tang, Buzhou; Cao, Hongxin; Xu, Hua

    2012-01-01

    Semantic lexicons that link words and phrases to specific semantic types such as diseases are valuable assets for clinical natural language processing (NLP) systems. Although terminological terms with predefined semantic types can be generated easily from existing knowledge bases such as the Unified Medical Language System (UMLS), they are often limited and do not have good coverage for narrative clinical text. In this study, we developed a method for building semantic lexicons from a clinical corpus. It extracts candidate semantic terms using a conditional random field (CRF) classifier and then selects terms using the C-Value algorithm. We applied the method to a corpus containing 10 years of discharge summaries from Vanderbilt University Hospital (VUH) and extracted 44,957 new terms for three semantic groups: Problem, Treatment, and Test. A manual analysis of 200 randomly selected terms not found in the UMLS demonstrated that 59% of them were meaningful new clinical concepts and 25% were lexical variants of existing concepts in the UMLS. Furthermore, we compared the effectiveness of corpus-derived and UMLS-derived semantic lexicons in the concept extraction task of the 2010 i2b2 clinical NLP challenge. Our results showed that the classifier with corpus-derived semantic lexicons as features achieved a better performance (F-score 82.52%) than that with UMLS-derived semantic lexicons as features (F-score 82.04%). We conclude that such corpus-based methods are effective for generating semantic lexicons, which may improve named entity recognition tasks and may aid in augmenting synonymy within existing terminologies. PMID:23304311

  17. Architecture and tools for open, interoperable and portable EHRs.

    PubMed

    Blobel, Bernd

    2003-01-01

    Electronic Health Record (EHR) systems provide the kernel application of health information systems and health networks, which should be independent of complexity, localisation constraints, platforms, protocols, etc. Based on shared care information systems' requirements for high-level interoperability, a generic component architecture has been introduced. For implementing, running and maintaining acceptable and usable health information system components, all views of the ISO Reference Model--Open Distributed Processing have to be considered. Following the Model Driven Architecture paradigm, a reference model as well as concept-representing domain models, both independent of platforms, must be specified, which are combined and harmonised as well as automatically transferred into platform-specific models using appropriate tools. PMID:15537227

  18. CAD Services: an Industry Standard Interface for Mechanical CAD Interoperability

    NASA Technical Reports Server (NTRS)

    Claus, Russell; Weitzer, Ilan

    2002-01-01

    Most organizations seek to design and develop new products in increasingly shorter time periods. At the same time, increased performance demands require a team-based multidisciplinary design process that may span several organizations. One approach to meeting these demands is to use 'Geometry Centric' design. In this approach, design engineers team their efforts through one unified representation of the design, usually captured in a CAD system. Standards-based interfaces are critical to provide uniform, simple, distributed services that enable the 'Geometry Centric' design approach. This paper describes an industry-wide effort, under the Object Management Group's (OMG) Manufacturing Domain Task Force, to define interfaces that enable the interoperability of CAD, Computer Aided Manufacturing (CAM), and Computer Aided Engineering (CAE) tools. This critical link enabling 'Geometry Centric' design is called CAD Services V1.0. This paper discusses the features of this standard and its proposed application.

  19. Web services for distributed and interoperable hydro-information systems

    NASA Astrophysics Data System (ADS)

    Horak, J.; Orlik, A.; Stromsky, J.

    2007-06-01

    Web services support the integration and interoperability of Web-based applications and enable machine-to-machine interaction. The concepts of web services and open distributed architecture were applied to the development of T-DSS, the prototype customised for web based hydro-information systems. T-DSS provides mapping services, database related services and access to remote components, with special emphasis placed on output flexibility (e.g. multilingualism), where SOAP web services are mainly used for communication. The remote components are represented above all by distant data and mapping services (e.g. meteorological predictions), modelling and analytical systems (currently HEC-HMS, Modflow and additional utilities), which support decision making in water management.
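
    Client-side access to such SOAP services might look like the following sketch; the WSDL URL and operation name are placeholders, not the actual T-DSS interface.

```python
from zeep import Client  # generic SOAP client

# Hypothetical endpoint and operation; the real T-DSS WSDL is not given here.
client = Client("http://example.org/t-dss/services?wsdl")
result = client.service.GetForecast(station="CZ-001", horizonHours=24)
print(result)
```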

  20. Web services for distributed and interoperable hydro-information systems

    NASA Astrophysics Data System (ADS)

    Horak, J.; Orlik, A.; Stromsky, J.

    2008-03-01

    Web services support the integration and interoperability of Web-based applications and enable machine-to-machine interaction. The concepts of web services and open distributed architecture were applied to the development of T-DSS, the prototype customised for web based hydro-information systems. T-DSS provides mapping services, database related services and access to remote components, with special emphasis placed on the output flexibility (e.g. multilingualism), where SOAP web services are mainly used for communication. The remote components are represented above all by remote data and mapping services (e.g. meteorological predictions), modelling and analytical systems (currently HEC-HMS, MODFLOW and additional utilities), which support decision making in water management.