Sample records for ontological representation based

  1. Effects of an ontology display with history representation on organizational memory information systems.

    PubMed

    Hwang, Wonil; Salvendy, Gavriel

    2005-06-10

    Ontologies, as a possible element of organizational memory information systems, appear to support organizational learning. Ontology tools can be used to share knowledge among the members of an organization. However, current ontology-viewing user interfaces of ontology tools do not fully support organizational learning, because most of them lack proper history representation in their display. In this study, a conceptual model was developed that emphasized the role of ontology in the organizational learning cycle and explored the integration of history representation in the ontology display. Based on the experimental results from a split-plot design with 30 participants, two conclusions were derived: first, appropriately selected history representations in the ontology display help users to identify changes in the ontologies; and second, compatibility between types of ontology display and history representation is more important than ontology display and history representation in themselves.

  2. Towards improving phenotype representation in OWL

    PubMed Central

    2012-01-01

    Background: Phenotype ontologies are used in species-specific databases for the annotation of mutagenesis experiments and to characterize human diseases. The Entity-Quality (EQ) formalism is a means to describe complex phenotypes based on one or more affected entities and a quality. EQ-based definitions have been developed for many phenotype ontologies, including the Human and Mammalian Phenotype ontologies. Methods: We analyze formalizations of complex phenotype descriptions in the Web Ontology Language (OWL) that are based on the EQ model, identify several representational challenges and analyze potential solutions to address these challenges. Results: In particular, we suggest a novel, role-based approach to represent relational qualities such as concentration of iron in spleen, discuss its ontological foundation in the General Formal Ontology (GFO) and evaluate its representation in OWL and the benefits it can bring to the representation of phenotype annotations. Conclusion: Our analysis of OWL-based representations of phenotypes can contribute to improving consistency and expressiveness of formal phenotype descriptions. PMID:23046625
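
    The relational quality in the running example ("concentration of iron in spleen") can be sketched in code. Below is a minimal, illustrative EQ record rendered as a Manchester-syntax-like string; it is not the paper's GFO role-based axiomatization, and the property names `inheres_in` and `towards`, plus the choice of bearer, are simplifying assumptions.

```python
# Illustrative EQ (Entity-Quality) phenotype sketch; term names and the
# inheres_in/towards rendering are simplifying assumptions, not real PATO IDs.
from dataclasses import dataclass
from typing import Optional

@dataclass
class EQPhenotype:
    entity: str                    # bearer of the quality (choice is illustrative)
    quality: str                   # e.g. "concentration"
    towards: Optional[str] = None  # second entity for relational qualities

    def to_owl_manchester(self) -> str:
        # Quality and (inheres_in some Entity) [and (towards some Entity2)]
        expr = f"'{self.quality}' and (inheres_in some '{self.entity}')"
        if self.towards:
            expr += f" and (towards some '{self.towards}')"
        return expr

# "concentration of iron in spleen" (the bearer assignment is illustrative;
# deciding which entity bears a relational quality is exactly what the
# paper's role-based approach addresses)
p = EQPhenotype(entity="iron", quality="concentration", towards="spleen")
print(p.to_owl_manchester())
```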

  3. Formal ontologies in biomedical knowledge representation.

    PubMed

    Schulz, S; Jansen, L

    2013-01-01

    Medical decision support and other intelligent applications in the life sciences depend on increasing amounts of digital information. Knowledge bases as well as formal ontologies are being used to organize biomedical knowledge and data. However, these two kinds of artefacts are not always clearly distinguished. Whereas the popular RDF(S) standard provides an intuitive triple-based representation, it is semantically weak. Description logics based ontology languages like OWL-DL carry a clear-cut semantics, but they are computationally expensive, and they are often misinterpreted to encode all kinds of statements, including those which are not ontological. We distinguish four kinds of statements needed to comprehensively represent domain knowledge: universal statements, terminological statements, statements about particulars and contingent statements. We argue that the task of formal ontologies is solely to represent universal statements, while the non-ontological kinds of statements can nevertheless be connected with ontological representations. To illustrate these four types of representations, we use a running example from parasitology. We finally formulate recommendations for semantically adequate ontologies that can efficiently be used as a stable framework for more context-dependent biomedical knowledge representation and reasoning applications like clinical decision support systems.

  4. Ontology-supported research on vaccine efficacy, safety and integrative biological networks.

    PubMed

    He, Yongqun

    2014-07-01

    While vaccine efficacy and safety research has dramatically progressed with the methods of in silico prediction and data mining, many challenges still exist. A formal ontology is a human- and computer-interpretable set of terms and relations that represent entities in a specific domain and how these terms relate to each other. Several community-based ontologies (including Vaccine Ontology, Ontology of Adverse Events and Ontology of Vaccine Adverse Events) have been developed to support vaccine and adverse event representation, classification, data integration, literature mining of host-vaccine interaction networks, and analysis of vaccine adverse events. The author further proposes minimal vaccine information standards and their ontology representations, ontology-based linked open vaccine data and meta-analysis, an integrative One Network ('OneNet') Theory of Life, and ontology-based approaches to study and apply the OneNet theory. In the Big Data era, these proposed strategies provide a novel framework for advanced data integration and analysis of fundamental biological networks including vaccine immune mechanisms.

  5. Ontology-supported Research on Vaccine Efficacy, Safety, and Integrative Biological Networks

    PubMed Central

    He, Yongqun

    2016-01-01

    Summary: While vaccine efficacy and safety research has dramatically progressed with the methods of in silico prediction and data mining, many challenges still exist. A formal ontology is a human- and computer-interpretable set of terms and relations that represent entities in a specific domain and how these terms relate to each other. Several community-based ontologies (including the Vaccine Ontology, Ontology of Adverse Events, and Ontology of Vaccine Adverse Events) have been developed to support vaccine and adverse event representation, classification, data integration, literature mining of host-vaccine interaction networks, and analysis of vaccine adverse events. The author further proposes minimal vaccine information standards and their ontology representations, ontology-based linked open vaccine data and meta-analysis, an integrative One Network (“OneNet”) Theory of Life, and ontology-based approaches to study and apply the OneNet theory. In the Big Data era, these proposed strategies provide a novel framework for advanced data integration and analysis of fundamental biological networks including vaccine immune mechanisms. PMID:24909153

  6. OntoStudyEdit: a new approach for ontology-based representation and management of metadata in clinical and epidemiological research.

    PubMed

    Uciteli, Alexandr; Herre, Heinrich

    2015-01-01

    The specification of metadata in clinical and epidemiological study projects incurs significant expense. The validity and quality of the collected data depend heavily on the precise and semantically correct representation of their metadata. In the various research organizations that plan and coordinate studies, the required metadata are specified differently, depending on many conditions, e.g., on the study management software used. The latter does not always meet the needs of a particular research organization, e.g., with respect to the relevant metadata attributes and structuring possibilities. The objective of the research set forth in this paper is the development of a new approach for ontology-based representation and management of metadata. The basic features of this approach are demonstrated by the software tool OntoStudyEdit (OSE). The OSE is designed and developed according to the three-ontology method. This method for developing software is based on the interactions of three different kinds of ontologies: a task ontology, a domain ontology and a top-level ontology. The OSE can be easily adapted to different requirements, and it supports an ontologically founded representation and efficient management of metadata. The metadata specifications can be imported from various sources; they can be edited with the OSE, and they can be exported to several formats, which are used, e.g., by different study management software. Advantages of this approach are the adaptability of the OSE by integrating suitable domain ontologies, the ontological specification of mappings between the import/export formats and the domain ontology, the specification of the study metadata in a uniform manner and its reuse in different research projects, and an intuitive data entry for non-expert users.

  7. Computational neuroanatomy: ontology-based representation of neural components and connectivity.

    PubMed

    Rubin, Daniel L; Talos, Ion-Florin; Halle, Michael; Musen, Mark A; Kikinis, Ron

    2009-02-05

    A critical challenge in neuroscience is organizing, managing, and accessing the explosion in neuroscientific knowledge, particularly anatomic knowledge. We believe that explicit knowledge-based approaches to make neuroscientific knowledge computationally accessible will be helpful in tackling this challenge and will enable a variety of applications exploiting this knowledge, such as surgical planning. We developed ontology-based models of neuroanatomy to enable symbolic lookup, logical inference and mathematical modeling of neural systems. We built a prototype model of the motor system that integrates descriptive anatomic and qualitative functional neuroanatomical knowledge. In addition to modeling normal neuroanatomy, our approach provides an explicit representation of abnormal neural connectivity in disease states, such as common movement disorders. The ontology-based representation encodes both structural and functional aspects of neuroanatomy. The ontology-based models can be evaluated computationally, enabling development of automated computer reasoning applications. Neuroanatomical knowledge can be represented in machine-accessible format using ontologies. Computational neuroanatomical approaches such as described in this work could become a key tool in translational informatics, leading to decision support applications that inform and guide surgical planning and personalized care for neurological disease in the future.

  8. Ion Channel ElectroPhysiology Ontology (ICEPO) - a case study of text mining assisted ontology development.

    PubMed

    Elayavilli, Ravikumar Komandur; Liu, Hongfang

    2016-01-01

    Computational modeling of biological cascades is of great interest to quantitative biologists. Biomedical text has been a rich source of quantitative information. Gathering quantitative parameters and values from biomedical text is a significant challenge in the early steps of computational modeling, as it involves huge manual effort. While automatically extracting such quantitative information from biomedical text may offer some relief, the lack of an ontological representation for a subdomain is an impediment to normalizing textual extractions to a standard representation. This may render textual extractions less meaningful to domain experts. In this work, we propose a rule-based approach to automatically extract relations involving quantitative data from biomedical text describing ion channel electrophysiology. We further translated the quantitative assertions extracted through text mining into a formal representation that may help in constructing an ontology for ion channel events. We have developed the Ion Channel ElectroPhysiology Ontology (ICEPO) by integrating the information represented in closely related ontologies, such as the Cell Physiology Ontology (CPO) and the Cardiac Electro Physiology Ontology (CPEO), with the knowledge provided by domain experts. The rule-based system achieved an overall F-measure of 68.93% in extracting quantitative data assertions on an independently annotated blind data set. We further made an initial attempt at formalizing the quantitative data assertions extracted from the biomedical text into a formal representation that offers the potential to facilitate the integration of text mining into ontological workflows, a novel aspect of this study. This work is a case study in which we created a platform that provides formal interaction between ontology development and text mining. We have achieved partial success in extracting quantitative assertions from the biomedical text and formalizing them in an ontological framework. The ICEPO ontology is available for download at http://openbionlp.org/mutd/supplementarydata/ICEPO/ICEPO.owl.

  9. An ontological case base engineering methodology for diabetes management.

    PubMed

    El-Sappagh, Shaker H; El-Masri, Samir; Elmogy, Mohammed; Riad, A M; Saddik, Basema

    2014-08-01

    Ontology engineering covers issues related to ontology development and use. In a Case-Based Reasoning (CBR) system, ontology plays two main roles: the first as the case base and the second as the domain ontology. However, the ontology engineering literature does not provide adequate guidance on how to build, evaluate, and maintain ontologies. This paper proposes an ontology engineering methodology to generate case bases in the medical domain. It mainly focuses on case representation in the form of an ontology to support semantic case retrieval and enhance all knowledge-intensive CBR processes. A case study on a diabetes diagnosis case base is provided to evaluate the proposed methodology.
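
    As a hedged illustration of case retrieval in a CBR cycle (not the paper's ontology-based method), the sketch below scores stored cases against a query by weighted attribute overlap; all attribute names, values, and weights are invented for the example.

```python
# Toy CBR case retrieval: weighted attribute-overlap similarity.
# Attribute names (fbs, bmi, polyuria) and weights are illustrative only.
def case_similarity(query, case, weights):
    score = total = 0.0
    for attr, w in weights.items():
        total += w
        if query.get(attr) == case.get(attr):
            score += w
    return score / total if total else 0.0

case_base = [
    {"id": "c1", "fbs": "high", "bmi": "obese", "polyuria": "yes"},
    {"id": "c2", "fbs": "normal", "bmi": "normal", "polyuria": "no"},
]
weights = {"fbs": 3.0, "bmi": 2.0, "polyuria": 1.0}
query = {"fbs": "high", "bmi": "obese", "polyuria": "no"}

# Retrieve the most similar stored case for the query.
best = max(case_base, key=lambda c: case_similarity(query, c, weights))
print(best["id"])  # c1: matches on the two most heavily weighted attributes
```

    A semantic (ontology-based) variant would replace the exact-match test with a term-similarity lookup in the domain ontology, which is the enhancement the paper targets.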

  10. Ontology Research and Development. Part 1-A Review of Ontology Generation.

    ERIC Educational Resources Information Center

    Ding, Ying; Foo, Schubert

    2002-01-01

    Discusses the role of ontology in knowledge representation, including enabling content-based access, interoperability, communications, and new levels of service on the Semantic Web; reviews current ontology generation studies and projects as well as problems facing such research; and discusses ontology mapping, information extraction, natural…

  11. Computational neuroanatomy: ontology-based representation of neural components and connectivity

    PubMed Central

    Rubin, Daniel L; Talos, Ion-Florin; Halle, Michael; Musen, Mark A; Kikinis, Ron

    2009-01-01

    Background: A critical challenge in neuroscience is organizing, managing, and accessing the explosion in neuroscientific knowledge, particularly anatomic knowledge. We believe that explicit knowledge-based approaches to make neuroscientific knowledge computationally accessible will be helpful in tackling this challenge and will enable a variety of applications exploiting this knowledge, such as surgical planning. Results: We developed ontology-based models of neuroanatomy to enable symbolic lookup, logical inference and mathematical modeling of neural systems. We built a prototype model of the motor system that integrates descriptive anatomic and qualitative functional neuroanatomical knowledge. In addition to modeling normal neuroanatomy, our approach provides an explicit representation of abnormal neural connectivity in disease states, such as common movement disorders. The ontology-based representation encodes both structural and functional aspects of neuroanatomy. The ontology-based models can be evaluated computationally, enabling development of automated computer reasoning applications. Conclusion: Neuroanatomical knowledge can be represented in machine-accessible format using ontologies. Computational neuroanatomical approaches such as described in this work could become a key tool in translational informatics, leading to decision support applications that inform and guide surgical planning and personalized care for neurological disease in the future. PMID:19208191

  12. Using Ontologies to Formalize Services Specifications in Multi-Agent Systems

    NASA Technical Reports Server (NTRS)

    Breitman, Karin Koogan; Filho, Aluizio Haendchen; Haeusler, Edward Hermann

    2004-01-01

    One key issue in multi-agent systems (MAS) is their ability to interact and exchange information autonomously across applications. To secure agent interoperability, designers must rely on a communication protocol that allows software agents to exchange meaningful information. In this paper we propose using ontologies as such a communication protocol. Ontologies capture the semantics of the operations and services provided by agents, allowing interoperability and information exchange in a MAS. Ontologies are a formal, machine-processable representation that makes it possible to capture the semantics of a domain and to derive meaningful information by way of logical inference. In our proposal we use a formal knowledge representation language (OWL) that translates into Description Logics (a subset of first-order logic), thus eliminating ambiguities and providing a solid base for machine-based inference. The main contribution of this approach is to make the requirements explicit and centralize the specification in a single document (the ontology itself), while providing a formal, unambiguous representation that can be processed by automated inference machines.

  13. The eXtensible ontology development (XOD) principles and tool implementation to support ontology interoperability.

    PubMed

    He, Yongqun; Xiang, Zuoshuang; Zheng, Jie; Lin, Yu; Overton, James A; Ong, Edison

    2018-01-12

    Ontologies are critical to data/metadata and knowledge standardization, sharing, and analysis. With hundreds of biological and biomedical ontologies developed, it has become critical to ensure ontology interoperability and the usage of interoperable ontologies for standardized data representation and integration. The suite of web-based Ontoanimal tools (e.g., Ontofox, Ontorat, and Ontobee) support different aspects of extensible ontology development. By summarizing the common features of Ontoanimal and other similar tools, we identified and proposed an "eXtensible Ontology Development" (XOD) strategy and its associated four principles. These XOD principles reuse existing terms and semantic relations from reliable ontologies, develop and apply well-established ontology design patterns (ODPs), and involve community efforts to support new ontology development, promoting standardized and interoperable data and knowledge representation and integration. The adoption of the XOD strategy, together with robust XOD tool development, will greatly support ontology interoperability and robust ontology applications to support data to be Findable, Accessible, Interoperable and Reusable (i.e., FAIR).

  14. TNM-O: ontology support for staging of malignant tumours.

    PubMed

    Boeker, Martin; França, Fábio; Bronsert, Peter; Schulz, Stefan

    2016-11-14

    Objectives of this work are to (1) present an ontological framework for the TNM classification system, (2) exemplify this framework by an ontology for colon and rectum tumours, and (3) evaluate this ontology by assigning TNM classes to real world pathology data. The TNM ontology uses the Foundational Model of Anatomy for anatomical entities and BioTopLite 2 as a domain top-level ontology. General rules for the TNM classification system and the specific TNM classification for colorectal tumours were axiomatised in description logic. Case-based information was collected from tumour documentation practice in the Comprehensive Cancer Centre of a large university hospital. Based on the ontology, a module was developed that classifies pathology data. TNM was represented as an information artefact, which consists of single representational units. Corresponding to every representational unit, tumours and tumour aggregates were defined. Tumour aggregates consist of the primary tumour and, if existing, of infiltrated regional lymph nodes and distant metastases. TNM codes depend on the location and certain qualities of the primary tumour (T), the infiltrated regional lymph nodes (N) and the existence of distant metastases (M). Tumour data from clinical and pathological documentation were successfully classified with the ontology. A first version of the TNM Ontology represents the TNM system for the description of the anatomical extent of malignant tumours. The present work demonstrates its representational power and completeness as well as its applicability for classification of instance data.
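
    As a rough illustration of the kind of classification the ontology performs, here is a deliberately simplified, AJCC-style stage grouping for colorectal tumours in procedural form. The paper itself encodes such rules as description logic axioms over BioTopLite 2 and the Foundational Model of Anatomy, not as code, and real staging includes substages and special cases omitted here.

```python
# Greatly simplified TNM stage grouping for colorectal tumours
# (illustrative only; real staging has substages and special cases).
def stage_group(t: int, n: int, m: int) -> str:
    # t: primary tumour invasion depth (1-4)
    # n: regional lymph node involvement (0-2)
    # m: distant metastasis (0 or 1)
    if m == 1:
        return "IV"   # any distant metastasis
    if n >= 1:
        return "III"  # regional lymph nodes involved, no metastasis
    if t >= 3:
        return "II"   # locally advanced primary tumour, N0 M0
    return "I"        # T1-T2 N0 M0

print(stage_group(2, 0, 0))  # T2 N0 M0 -> I
print(stage_group(3, 1, 0))  # T3 N1 M0 -> III
```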

  15. A knowledge-based system for prototypical reasoning

    NASA Astrophysics Data System (ADS)

    Lieto, Antonio; Minieri, Andrea; Piana, Alberto; Radicioni, Daniele P.

    2015-04-01

    In this work we present a knowledge-based system equipped with a hybrid, cognitively inspired architecture for the representation of conceptual information. The proposed system aims at extending the classical representational and reasoning capabilities of ontology-based frameworks towards the realm of prototype theory. It is based on a hybrid knowledge base, composed of a classical symbolic component (grounded in a formal ontology) and a typicality-based one (grounded in the conceptual spaces framework). The resulting system attempts to reconcile the heterogeneous approach to concepts in Cognitive Science with the dual process theories of reasoning and rationality. The system has been experimentally assessed in a conceptual categorisation task in which common-sense linguistic descriptions were given as input and the corresponding target concepts had to be identified. The results show that the proposed solution substantially extends the representational and reasoning 'conceptual' capabilities of standard ontology-based systems.

  16. Web Ontologies to Categorialy Structure Reality: Representations of Human Emotional, Cognitive, and Motivational Processes

    PubMed Central

    López-Gil, Juan-Miguel; Gil, Rosa; García, Roberto

    2016-01-01

    This work presents a Web ontology for modeling and representation of the emotional, cognitive and motivational state of online learners, interacting with university systems for distance or blended education. The ontology is understood as a way to provide the required mechanisms to model reality and associate it to emotional responses, but without committing to a particular way of organizing these emotional responses. Knowledge representation for the contributed ontology is performed by using Web Ontology Language (OWL), a semantic web language designed to represent rich and complex knowledge about things, groups of things, and relations between things. OWL is a computational logic-based language such that computer programs can exploit knowledge expressed in OWL and also facilitates sharing and reusing knowledge using the global infrastructure of the Web. The proposed ontology has been tested in the field of Massive Open Online Courses (MOOCs) to check if it is capable of representing emotions and motivation of the students in this context of use. PMID:27199796

  17. Ontologies for Effective Use of Context in E-Learning Settings

    ERIC Educational Resources Information Center

    Jovanovic, Jelena; Gasevic, Dragan; Knight, Colin; Richards, Griff

    2007-01-01

    This paper presents an ontology-based framework aimed at explicit representation of context-specific metadata derived from the actual usage of learning objects and learning designs. The core part of the proposed framework is a learning object context ontology, that leverages a range of other kinds of learning ontologies (e.g., user modeling…

  18. Towards Ontology as Knowledge Representation for Intellectual Capital Measurement

    NASA Astrophysics Data System (ADS)

    Zadjabbari, B.; Wongthongtham, P.; Dillon, T. S.

    For many years, physical asset indicators were the main evidence of an organization’s successful performance. However, the situation has changed since the information technology revolution in the knowledge-based economy. Since the 1980s, business performance has no longer been limited to physical assets; instead, intellectual capital is increasingly playing a major role in business performance. In this paper, we utilize ontology as a tool for knowledge representation in the domain of intellectual capital measurement. The ontology classifies ways of measuring intangible capital.

  19. Ontology-based classification of remote sensing images using spectral rules

    NASA Astrophysics Data System (ADS)

    Andrés, Samuel; Arvor, Damien; Mougenot, Isabelle; Libourel, Thérèse; Durieux, Laurent

    2017-05-01

    Earth Observation data is of great interest for a wide spectrum of scientific domain applications. Enhanced access to remote sensing images for domain experts thus represents a great advance, since it allows users to interpret remote sensing images based on their domain expertise. However, such an advantage can also turn into a major limitation if this knowledge is not formalized and is thus difficult to share with, and to be understood by, other users. In this context, knowledge representation techniques such as ontologies should play a major role in the future of remote sensing applications. We implemented an ontology-based prototype to automatically classify Landsat images based on explicit spectral rules. The ontology is designed in a very modular way in order to achieve a generic and versatile representation of concepts we consider of utmost importance in remote sensing. The prototype was tested on four subsets of Landsat images, and the results confirmed the potential of ontologies to formalize expert knowledge and classify remote sensing images.

  20. A web-based system architecture for ontology-based data integration in the domain of IT benchmarking

    NASA Astrophysics Data System (ADS)

    Pfaff, Matthias; Krcmar, Helmut

    2018-03-01

    In the domain of IT benchmarking (ITBM), a variety of data and information are collected. Although these data serve as the basis for business analyses, no unified semantic representation of such data yet exists. Consequently, data analysis across different distributed data sets and different benchmarks is almost impossible. This paper presents a system architecture and prototypical implementation for an integrated data management of distributed databases based on a domain-specific ontology. To preserve the semantic meaning of the data, the ITBM ontology is linked to data sources and functions as the central concept for database access. Thus, additional databases can be integrated by linking them to this domain-specific ontology and are directly available for further business analyses. Moreover, the web-based system supports the process of mapping ontology concepts to external databases by introducing a semi-automatic mapping recommender and by visualizing possible mapping candidates. The system also provides a natural language interface to easily query linked databases. The expected result of this ontology-based approach of knowledge representation and data access is an increase in knowledge and data sharing in this domain, which will enhance existing business analysis methods.

  1. Ontological representation, integration, and analysis of LINCS cell line cells and their cellular responses.

    PubMed

    Ong, Edison; Xie, Jiangan; Ni, Zhaohui; Liu, Qingping; Sarntivijai, Sirarat; Lin, Yu; Cooper, Daniel; Terryn, Raymond; Stathias, Vasileios; Chung, Caty; Schürer, Stephan; He, Yongqun

    2017-12-21

    Aiming to understand cellular responses to different perturbations, the NIH Common Fund Library of Integrated Network-based Cellular Signatures (LINCS) program involves many institutes and laboratories working on over a thousand cell lines. The community-based Cell Line Ontology (CLO) is selected as the default ontology for LINCS cell line representation and integration. CLO has consistently represented all 1097 LINCS cell lines and included information extracted from the LINCS Data Portal and ChEMBL. Using MCF 10A cell line cells as an example, we demonstrated how to ontologically model LINCS cellular signatures such as their non-tumorigenic epithelial cell type, three-dimensional growth, latrunculin-A-induced actin depolymerization and apoptosis, and cell line transfection. A CLO subset view of LINCS cell lines, named LINCS-CLOview, was generated to support systematic LINCS cell line analysis and queries. In summary, LINCS cell lines are currently associated with 43 cell types, 131 tissues and organs, and 121 cancer types. The LINCS-CLOview information can be queried using SPARQL scripts. CLO was used to support ontological representation, integration, and analysis of over a thousand LINCS cell line cells and their cellular responses.
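
    The SPARQL-style querying mentioned above can be illustrated with a toy triple-pattern matcher. The cell line names, predicates, and values below are hypothetical placeholders, not real CLO identifiers.

```python
# Toy in-memory triple set and a SPARQL-like pattern matcher.
# All identifiers are invented placeholders (not actual CLO/CL terms).
triples = {
    ("ex:CellLineA", "rdf:type", "clo:cell_line"),
    ("ex:CellLineA", "ex:derives_from", "cl:epithelial_cell"),
    ("ex:CellLineB", "rdf:type", "clo:cell_line"),
    ("ex:CellLineB", "ex:derives_from", "cl:fibroblast"),
}

def match(pattern):
    # A term starting with '?' is a variable; anything else must match exactly.
    s, p, o = pattern
    return [t for t in triples
            if (s.startswith("?") or t[0] == s)
            and (p.startswith("?") or t[1] == p)
            and (o.startswith("?") or t[2] == o)]

# Analogue of: SELECT ?x WHERE { ?x ex:derives_from cl:epithelial_cell }
hits = match(("?x", "ex:derives_from", "cl:epithelial_cell"))
print(sorted(t[0] for t in hits))
```

    Real queries against CLO would run the equivalent basic graph pattern on a SPARQL endpoint rather than an in-memory set.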

  2. A Semi-Automatic Approach to Construct Vietnamese Ontology from Online Text

    ERIC Educational Resources Information Center

    Nguyen, Bao-An; Yang, Don-Lin

    2012-01-01

    An ontology is an effective formal representation of knowledge used commonly in artificial intelligence, semantic web, software engineering, and information retrieval. In open and distance learning, ontologies are used as knowledge bases for e-learning supplements, educational recommenders, and question answering systems that support students with…

  3. Ontology Design of Influential People Identification Using Centrality

    NASA Astrophysics Data System (ADS)

    Maulana Awangga, Rolly; Yusril, Muhammad; Setyawan, Helmi

    2018-04-01

    Influential people are commonly identified by treating each person as a node in a graph and applying social network analysis: users are nodes and relations are edges, forming a friendship graph. This research assigns distinct meanings to the different node relations in a social network. Ontology is well suited to describing social network data conceptually and at the domain level, capturing richer relationships than a plain graph, and has been proposed by the World Wide Web Consortium as a standard for knowledge representation on the Semantic Web. The formal data representation uses the Resource Description Framework (RDF) and the Web Ontology Language (OWL), which are strategic for open, knowledge-based website data. Because the ontology provides a semantic description of the relationships in the social network, it remains open to extension: semantics-based relationship ontologies can be developed by adding and modifying various relationships so that influential people can be identified. This research proposes an OWL- and RDF-based model for identifying influential people in a social network, validated using degree centrality, betweenness centrality, and closeness centrality measurements. In conclusion, influential people on Facebook can be identified with the proposed ontology model using Group, Photos, Photo Tag, Friends, Events and Works data.
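
    The centrality measures used for validation can be sketched over a toy undirected friendship graph. This is illustrative pure Python, not the study's tooling; betweenness centrality, which requires shortest-path counting, is omitted for brevity.

```python
# Degree and closeness centrality over a toy undirected friendship graph.
from collections import deque

graph = {"a": {"b", "c"}, "b": {"a", "c", "d"}, "c": {"a", "b"}, "d": {"b"}}

def degree_centrality(g, v):
    # Fraction of other nodes directly connected to v.
    return len(g[v]) / (len(g) - 1)

def bfs_dist(g, src):
    # Shortest-path (hop) distances from src via breadth-first search.
    dist = {src: 0}
    q = deque([src])
    while q:
        u = q.popleft()
        for w in g[u]:
            if w not in dist:
                dist[w] = dist[u] + 1
                q.append(w)
    return dist

def closeness_centrality(g, v):
    # (n - 1) divided by the sum of shortest-path distances from v.
    return (len(g) - 1) / sum(bfs_dist(g, v).values())

print(degree_centrality(graph, "b"))               # 1.0: b links to all others
print(round(closeness_centrality(graph, "d"), 2))  # 0.6: d is peripheral
```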

  4. The Digital electronic Guideline Library (DeGeL): a hybrid framework for representation and use of clinical guidelines.

    PubMed

    Shahar, Yuval; Young, Ohad; Shalom, Erez; Mayaffit, Alon; Moskovitch, Robert; Hessing, Alon; Galperin, Maya

    2004-01-01

    We propose to present a poster (and potentially also a demonstration of the implemented system) summarizing the current state of our work on a hybrid, multiple-format representation of clinical guidelines that facilitates conversion of guidelines from free text to a formal representation. We describe a distributed Web-based architecture (DeGeL) and a set of tools using the hybrid representation. The tools enable performing tasks such as guideline specification, semantic markup, search, retrieval, visualization, eligibility determination, runtime application and retrospective quality assessment. The representation includes four parallel formats: Free text (one or more original sources); semistructured text (labeled by the target guideline-ontology semantic labels); semiformal text (which includes some control specification); and a formal, machine-executable representation. The specification, indexing, search, retrieval, and browsing tools are essentially independent of the ontology chosen for guideline representation, but editing the semi-formal and formal formats requires ontology-specific tools, which we have developed in the case of the Asbru guideline-specification language. The four formats support increasingly sophisticated computational tasks. The hybrid guidelines are stored in a Web-based library. All tools, such as for runtime guideline application or retrospective quality assessment, are designed to operate on all representations. We demonstrate the hybrid framework by providing examples from the semantic markup and search tools.

  5. Ontological modelling of knowledge management for human-machine integrated design of ultra-precision grinding machine

    NASA Astrophysics Data System (ADS)

    Hong, Haibo; Yin, Yuehong; Chen, Xing

    2016-11-01

    Despite the rapid development of computer science and information technology, an efficient human-machine integrated enterprise information system for designing complex mechatronic products is still not fully accomplished, partly because of the inharmonious communication among collaborators. Therefore, one challenge in human-machine integration is how to establish an appropriate knowledge management (KM) model to support integration and sharing of heterogeneous product knowledge. Aiming at the diversity of design knowledge, this article proposes an ontology-based model to reach an unambiguous and normative representation of knowledge. First, an ontology-based human-machine integrated design framework is described, then corresponding ontologies and sub-ontologies are established according to different purposes and scopes. Second, a similarity calculation-based ontology integration method composed of ontology mapping and ontology merging is introduced. The ontology searching-based knowledge sharing method is then developed. Finally, a case of human-machine integrated design of a large ultra-precision grinding machine is used to demonstrate the effectiveness of the method.

  6. Ontology Sparse Vector Learning Algorithm for Ontology Similarity Measuring and Ontology Mapping via ADAL Technology

    NASA Astrophysics Data System (ADS)

    Gao, Wei; Zhu, Linli; Wang, Kaiyun

    2015-12-01

    Ontology, a model of knowledge representation and storage, has had extensive applications in pharmaceutics, social science, chemistry and biology. In the age of “big data”, constructed concepts are often represented by scholars as higher-dimensional data, and thus sparse learning techniques have been introduced into ontology algorithms. In this paper, based on the alternating direction augmented Lagrangian method, we present an ontology optimization algorithm for ontological sparse vector learning, together with a fast version of the technique. The optimal sparse vector is obtained by an iterative procedure, and the ontology function is then obtained from the sparse vector. Four simulation experiments show that our ontological sparse vector learning model has a higher precision ratio on plant ontology, humanoid robotics ontology, biology ontology and physics education ontology data for similarity measuring and ontology mapping applications.
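
    The paper's ADAL-based solver is not reproduced here; as a rough, hypothetical illustration of how an l1-penalized objective yields a sparse vector through an iterative procedure, the sketch below runs plain iterative soft-thresholding (ISTA) on a made-up two-dimensional problem.

```python
def soft_threshold(x, t):
    # Proximal operator of the l1 norm: shrink each coordinate toward zero.
    return [max(abs(v) - t, 0.0) * (1.0 if v > 0 else -1.0) for v in x]

def ista(A, b, lam, step, iters=2000):
    # Minimize 0.5*||A v - b||^2 + lam*||v||_1 by proximal gradient steps.
    m, n = len(A), len(A[0])
    v = [0.0] * n
    for _ in range(iters):
        residual = [sum(A[i][j] * v[j] for j in range(n)) - b[i] for i in range(m)]
        grad = [sum(A[i][j] * residual[i] for i in range(m)) for j in range(n)]
        v = soft_threshold([v[j] - step * grad[j] for j in range(n)], step * lam)
    return v

# Made-up problem: the second feature barely explains b, so the l1
# penalty drives its coefficient exactly to zero (sparsity).
A = [[1.0, 0.0], [0.0, 0.1]]
b = [2.0, 0.0]
v = ista(A, b, lam=0.1, step=0.5)
```

    The first coefficient converges to 2 minus the shrinkage (1.9) while the second is exactly zero; in the ontology setting, each coordinate would correspond to a concept feature and the surviving coordinates define the ontology function.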

  7. DOLCE ROCKS: Integrating Foundational and Geoscience Ontologies--Preliminary Results for the Integration of Concepts from DOLCE, GeoSciML, and SWEET

    NASA Astrophysics Data System (ADS)

    Brodaric, B.; Probst, F.

    2007-12-01

    Ontologies are being developed bottom-up in many geoscience domains to aid semantic-enabled computing. The contents of these ontologies are typically partitioned along domain boundaries, such as geology, geophysics, hydrology, or are developed for specific data sets or processing needs. At the same time, very general foundational ontologies are being independently developed top-down to help facilitate integration of knowledge across such domains, and to provide homogeneity to the organization of knowledge within the domains. In this work we investigate the suitability of integrating the DOLCE foundational ontology with concepts from two prominent geoscience knowledge representations, GeoSciML and SWEET, to investigate the alignment of the concepts found within the foundational and domain representations. The geoscience concepts are partially mapped to each other and to those in the foundational ontology, via the subclass and other relations, resulting in an integrated OWL-based ontology called DOLCE ROCKS. These preliminary results demonstrate variable alignment between the foundational and domain concepts, and also between the domain concepts. Further work is required to ascertain the impact of this integrated ontology approach on broader geoscience ontology design, on the unification of domain ontologies, as well as their use within semantic-enabled geoscience applications.

  8. Ontology-Driven Business Modelling: Improving the Conceptual Representation of the REA Ontology

    NASA Astrophysics Data System (ADS)

    Gailly, Frederik; Poels, Geert

    Business modelling research is increasingly interested in exploring how domain ontologies can be used as reference models for business models. The Resource Event Agent (REA) ontology is a primary candidate for ontology-driven modelling of business processes because the REA point of view on business reality is close to the conceptual modelling perspective on business models. In this paper, Ontology Engineering principles are employed to reengineer REA in order to make it more suitable for ontology-driven business modelling. The new conceptual representation of REA that we propose uses a single representation formalism, includes a more complete domain axiomatization (containing definitions of concepts, concept relations and ontological axioms), and is proposed as a generic model that can be instantiated to create valid business models. The effects of these proposed improvements on REA-driven business modelling are demonstrated using a business modelling example.

  9. Dynamic knowledge representation using agent-based modeling: ontology instantiation and verification of conceptual models.

    PubMed

    An, Gary

    2009-01-01

    The sheer volume of biomedical research threatens to overwhelm the capacity of individuals to effectively process this information. Adding to this challenge is the multiscale nature of both biological systems and the research community as a whole. Given this volume and rate of generation of biomedical information, the research community must develop methods for robust representation of knowledge in order for individuals, and the community as a whole, to "know what they know." Despite increasing emphasis on "data-driven" research, the fact remains that researchers guide their research using intuitively constructed conceptual models derived from knowledge extracted from publications, knowledge that is generally qualitatively expressed using natural language. Agent-based modeling (ABM) is a computational modeling method that is suited to translating the knowledge expressed in biomedical texts into dynamic representations of the conceptual models generated by researchers. The hierarchical object-class orientation of ABM maps well to biomedical ontological structures, facilitating the translation of ontologies into instantiated models. Furthermore, ABM is suited to producing the nonintuitive behaviors that often "break" conceptual models. Verification in this context is focused at determining the plausibility of a particular conceptual model, and qualitative knowledge representation is often sufficient for this goal. Thus, utilized in this fashion, ABM can provide a powerful adjunct to other computational methods within the research process, as well as providing a metamodeling framework to enhance the evolution of biomedical ontologies.
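
    As a hedged sketch of the idea that ontology classes map onto agent classes in ABM, the toy model below invents a two-level cell hierarchy and one qualitative activation rule; everything here is illustrative and not the author's framework.

```python
# Minimal sketch: an ontology class hierarchy mirrored as agent classes.
class Cell:
    """Agent base class standing in for an ontology class 'cell'."""
    def __init__(self):
        self.alive = True
    def step(self, signal):
        pass  # base behaviour: inert

class Macrophage(Cell):
    """Subclassing mirrors the ontology's is-a relation."""
    def __init__(self):
        super().__init__()
        self.activated = False
    def step(self, signal):
        # Qualitative rule from a hypothetical conceptual model:
        # activate when the stimulus exceeds a threshold.
        if signal > 0.5:
            self.activated = True

# Instantiate the ontology class as a population of agents and run one step.
agents = [Macrophage() for _ in range(10)]
for a in agents:
    a.step(signal=0.8)
activated = sum(a.activated for a in agents)
```

    Even this trivial model shows the workflow the abstract describes: qualitative knowledge becomes executable rules, and running the population checks whether the conceptual model behaves plausibly.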

  10. Probabilistic Ontology Architecture for a Terrorist Identification Decision Support System

    DTIC Science & Technology

    2014-06-01

    in real-world problems requires probabilistic ontologies, which integrate the inferential reasoning power of probabilistic representations with the first-order expressivity of ontologies. The Reference Architecture for... Keywords: ontology, terrorism, inferential reasoning, architecture.

  11. Knowledge Representation and Management. From Ontology to Annotation. Findings from the Yearbook 2015 Section on Knowledge Representation and Management.

    PubMed

    Charlet, J; Darmoni, S J

    2015-08-13

    To summarize the best papers in the field of Knowledge Representation and Management (KRM). A comprehensive review of the medical informatics literature was performed to select some of the most interesting KRM papers published in 2014. Four articles were selected: two focused on annotation and information retrieval using an ontology; the other two focused mainly on ontologies themselves, one dealing with the use of a temporal ontology to analyze the content of narrative documents, the other describing a methodology for building multilingual ontologies. Semantic models have begun to show their efficiency when coupled with annotation tools.

  12. Alignment of ICNP® 2.0 ontology and a proposed ICNP® Brazilian ontology.

    PubMed

    Carvalho, Carina Maris Gaspar; Cubas, Marcia Regina; Malucelli, Andreia; Nóbrega, Maria Miriam Lima da

    2014-01-01

    To align the International Classification for Nursing Practice (ICNP®) Version 2.0 ontology and a proposed ICNP® Brazilian Ontology. A document-based, exploratory and descriptive study, whose empirical basis was provided by the ICNP® 2.0 Ontology and the ICNP® Brazilian Ontology. The ontology alignment was performed using a computer tool with algorithms that identify correspondences between concepts, which were organized and analyzed according to their presence or absence, their names, and their sibling, parent, and child classes. There were 2,682 concepts present in the ICNP® 2.0 Ontology that were missing from the Brazilian Ontology; 717 concepts present in the Brazilian Ontology were missing from the ICNP® 2.0 Ontology; and there were 215 pairs of matching concepts. It is believed that the correspondences identified in this study may contribute to interoperability between the representations of nursing practice elements in ICNP®, thus allowing the standardization of nursing records based on this classification system.
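
    The presence/absence/matching bookkeeping described in this entry can be sketched with simple set operations; the concept labels below are invented stand-ins, and the real study aligned full class hierarchies with a dedicated alignment tool.

```python
# Hypothetical concept labels standing in for the two ontologies' classes.
icnp_2_0 = {"Pain", "Acute Pain", "Wound", "Pressure Ulcer", "Fatigue"}
brazilian = {"Pain", "Wound", "Pressure Ulcer", "Anxiety"}

matching = icnp_2_0 & brazilian        # pairs of matching concepts
only_icnp = icnp_2_0 - brazilian       # present in ICNP 2.0, missing in Brazilian
only_brazilian = brazilian - icnp_2_0  # present in Brazilian, missing in ICNP 2.0
```

    A real aligner also compares sibling, parent, and child classes before declaring a match, rather than relying on exact label equality as this sketch does.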

  13. Coupling ontology driven semantic representation with multilingual natural language generation for tuning international terminologies.

    PubMed

    Rassinoux, Anne-Marie; Baud, Robert H; Rodrigues, Jean-Marie; Lovis, Christian; Geissbühler, Antoine

    2007-01-01

    The importance of clinical communication between providers, consumers and others, as well as the requisite for computer interoperability, strengthens the need for sharing common accepted terminologies. Under the directives of the World Health Organization (WHO), an approach is currently being conducted in Australia to adopt a standardized terminology for medical procedures that is intended to become an international reference. In order to achieve such a standard, a collaborative approach is adopted, in line with the successful experiment conducted for the development of the new French coding system CCAM. Different coding centres are involved in setting up a semantic representation of each term using a formal ontological structure expressed through a logic-based representation language. From this language-independent representation, multilingual natural language generation (NLG) is performed to produce noun phrases in various languages that are further compared for consistency with the original terms. Outcomes are presented for the assessment of the International Classification of Health Interventions (ICHI) and its translation into Portuguese. The initial results clearly emphasize the feasibility and cost-effectiveness of the proposed method for handling both a different classification and an additional language. NLG tools, based on ontology driven semantic representation, facilitate the discovery of ambiguous and inconsistent terms, and, as such, should be promoted for establishing coherent international terminologies.

  14. My Corporis Fabrica Embryo: An ontology-based 3D spatio-temporal modeling of human embryo development.

    PubMed

    Rabattu, Pierre-Yves; Massé, Benoit; Ulliana, Federico; Rousset, Marie-Christine; Rohmer, Damien; Léon, Jean-Claude; Palombi, Olivier

    2015-01-01

    Embryology is a complex morphologic discipline involving a set of entangled mechanisms that are sometimes difficult to understand and to visualize. Recent computer-based techniques, ranging from geometrical to physically based modeling, are used to assist the visualization and simulation of virtual humans in numerous domains such as surgical simulation and learning. On the other hand, the ontology-based approach to knowledge representation is increasingly and successfully adopted in the life-science domains to formalize biological entities and phenomena, thanks to a declarative approach for expressing and reasoning over symbolic information. 3D models and ontologies are two complementary ways to describe biological entities that remain largely separated. Indeed, while many ontologies providing a unified formalization of anatomy and embryology exist, they remain only descriptive and make access to the anatomical content of complex 3D embryology models and simulations difficult. In this work, we present a novel ontology describing human embryo development with deforming 3D models. Beyond describing how organs and structures are composed, our ontology integrates a procedural description of their 3D representations, their temporal deformation, and the relations between them with respect to their development. We also created inference rules to express complex connections between entities. The result is a unified description of both the knowledge of organ deformation and its 3D representations, enabling dynamic visualization of embryo deformation during the Carnegie stages. Through a simplified ontology, containing representative entities linked to spatial position and temporal process information, we illustrate the added value of such a declarative approach for interactive simulation and visualization of 3D embryos. Combining ontologies and 3D models enables a declarative description of different embryological models that captures the complexity of human developmental anatomy. Visualizing embryos with 3D geometric models and their animated deformations paves the way towards hypothesis-driven applications and can also assist the learning of this complex knowledge. http://www.mycorporisfabrica.org/

  15. BiNChE: a web tool and library for chemical enrichment analysis based on the ChEBI ontology.

    PubMed

    Moreno, Pablo; Beisken, Stephan; Harsha, Bhavana; Muthukrishnan, Venkatesh; Tudose, Ilinca; Dekker, Adriano; Dornfeldt, Stefanie; Taruttis, Franziska; Grosse, Ivo; Hastings, Janna; Neumann, Steffen; Steinbeck, Christoph

    2015-02-21

    Ontology-based enrichment analysis aids in the interpretation and understanding of large-scale biological data. Ontologies are hierarchies of biologically relevant groupings. Using ontology annotations, which link ontology classes to biological entities, enrichment analysis methods assess whether there is a significant over- or under-representation of entities for ontology classes. While many tools exist that run enrichment analysis for protein sets annotated with the Gene Ontology, only a few can be used for small-molecule enrichment analysis. We describe BiNChE, an enrichment analysis tool for small molecules based on the ChEBI Ontology. BiNChE displays an interactive graph that can be exported as a high-resolution image or in network formats. The tool provides plain, weighted and fragment analysis based on either the ChEBI Role Ontology or the ChEBI Structural Ontology. BiNChE aids in the exploration of large sets of small molecules produced within metabolomics or other systems biology research contexts. The open-source tool provides easy and highly interactive web access to enrichment analysis with the ChEBI ontology and is additionally available as a standalone library.
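
    BiNChE's exact statistics are not described in this record; the sketch below shows a generic hypergeometric over-representation test of the kind such enrichment tools typically apply, with made-up counts.

```python
from math import comb

def hypergeom_over_pvalue(N, K, n, k):
    """P(X >= k) when drawing n entities from a background of N,
    of which K are annotated with the ontology class: the usual
    over-representation tail probability."""
    total = comb(N, n)
    return sum(comb(K, i) * comb(N - K, n - i)
               for i in range(k, min(K, n) + 1)) / total

# Made-up numbers: 5 of 50 background metabolites carry a ChEBI role;
# an analyzed set of 10 metabolites contains 3 of them.
p = hypergeom_over_pvalue(N=50, K=5, n=10, k=3)
```

    A small p suggests the role is over-represented in the analyzed set; real tools then correct such p-values for testing many ontology classes at once.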

  16. Modeling biochemical pathways in the gene ontology

    DOE PAGES

    Hill, David P.; D’Eustachio, Peter; Berardini, Tanya Z.; ...

    2016-09-01

    The concept of a biological pathway, an ordered sequence of molecular transformations, is used to collect and represent molecular knowledge for a broad span of organismal biology. Representations of biomedical pathways typically are rich but idiosyncratic presentations of organized knowledge about individual pathways. Meanwhile, biomedical ontologies and associated annotation files are powerful tools that organize molecular information in a logically rigorous form to support computational analysis. The Gene Ontology (GO), representing Molecular Functions, Biological Processes and Cellular Components, incorporates many aspects of biological pathways within its ontological representations. Here we present a methodology for extending and refining the classes in the GO for more comprehensive, consistent and integrated representation of pathways, leveraging knowledge embedded in current pathway representations such as those in the Reactome Knowledgebase and MetaCyc. With carbohydrate metabolic pathways as a use case, we discuss how our representation supports the integration of variant pathway classes into a unified ontological structure that can be used for data comparison and analysis.

  17. Interoperability between biomedical ontologies through relation expansion, upper-level ontologies and automatic reasoning.

    PubMed

    Hoehndorf, Robert; Dumontier, Michel; Oellrich, Anika; Rebholz-Schuhmann, Dietrich; Schofield, Paul N; Gkoutos, Georgios V

    2011-01-01

    Researchers design ontologies as a means to accurately annotate and integrate experimental data across heterogeneous and disparate data- and knowledge bases. Formal ontologies make the semantics of terms and relations explicit such that automated reasoning can be used to verify the consistency of knowledge. However, many biomedical ontologies do not sufficiently formalize the semantics of their relations and are therefore limited with respect to automated reasoning for large scale data integration and knowledge discovery. We describe a method to improve automated reasoning over biomedical ontologies and identify several thousand contradictory class definitions. Our approach aligns terms in biomedical ontologies with foundational classes in a top-level ontology and formalizes composite relations as class expressions. We describe the semi-automated repair of contradictions and demonstrate expressive queries over interoperable ontologies. Our work forms an important cornerstone for data integration, automatic inference and knowledge discovery based on formal representations of knowledge. Our results and analysis software are available at http://bioonto.de/pmwiki.php/Main/ReasonableOntologies.

  18. Terminology representation guidelines for biomedical ontologies in the semantic web notations.

    PubMed

    Tao, Cui; Pathak, Jyotishman; Solbrig, Harold R; Wei, Wei-Qi; Chute, Christopher G

    2013-02-01

    Terminologies and ontologies are increasingly prevalent in healthcare and biomedicine. However, they suffer from inconsistent renderings, distribution formats, and syntax that make applications built on common terminology services challenging. To address the problem, one could posit a shared representation syntax, associated schema, and tags. We identified a set of commonly used elements in biomedical ontologies and terminologies based on our experience with the Common Terminology Services 2 (CTS2) Specification as well as the Lexical Grid (LexGrid) project. We propose guidelines for precisely such a shared terminology model, and recommend tags assembled from SKOS, OWL, Dublin Core, RDF Schema, and DCMI meta-terms. We divide these guidelines into lexical information (e.g. synonyms and definitions) and semantic information (e.g. hierarchies); the latter we distinguish for use by informal terminologies vs. formal ontologies. We then evaluate the guidelines with a spectrum of widely used terminologies and ontologies to examine how the lexical guidelines are implemented, and whether our proposed guidelines would enhance interoperability. Copyright © 2012 Elsevier Inc. All rights reserved.
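
    As a hedged illustration of the lexical vs. semantic split these guidelines propose, the Turtle fragment below annotates a hypothetical concept with SKOS labels and a Dublin Core source; the URIs and the concept are invented, not taken from the paper.

```turtle
@prefix skos: <http://www.w3.org/2004/02/skos/core#> .
@prefix dct:  <http://purl.org/dc/terms/> .
@prefix ex:   <http://example.org/terminology/> .

# Lexical information: preferred/alternative labels and a definition.
ex:MyocardialInfarction a skos:Concept ;
    skos:prefLabel "Myocardial infarction"@en ;
    skos:altLabel "Heart attack"@en ;
    skos:definition "Necrosis of the myocardium caused by ischemia."@en ;
    # Semantic information: hierarchy, rendered here with skos:broader as an
    # informal terminology would; a formal ontology would use rdfs:subClassOf.
    skos:broader ex:IschemicHeartDisease ;
    dct:source ex:SourceTerminology .
```

    The same record could be re-serialized in RDF/XML or JSON-LD without changing meaning, which is the interoperability point the guidelines aim at.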

  19. Semantic enrichment of clinical models towards semantic interoperability. The heart failure summary use case.

    PubMed

    Martínez-Costa, Catalina; Cornet, Ronald; Karlsson, Daniel; Schulz, Stefan; Kalra, Dipak

    2015-05-01

    To improve semantic interoperability of electronic health records (EHRs) by ontology-based mediation across syntactically heterogeneous representations of the same or similar clinical information. Our approach is based on a semantic layer that consists of: (1) a set of ontologies supported by (2) a set of semantic patterns. The first aspect of the semantic layer helps standardize the clinical information modeling task and the second shields modelers from the complexity of ontology modeling. We applied this approach to heterogeneous representations of an excerpt of a heart failure summary. Using a set of finite top-level patterns to derive semantic patterns, we demonstrate that those patterns, or compositions thereof, can be used to represent information from clinical models. Homogeneous querying of the same or similar information, when represented according to heterogeneous clinical models, is feasible. Our approach focuses on the meaning embedded in EHRs, regardless of their structure. This complex task requires a clear ontological commitment (ie, agreement to consistently use the shared vocabulary within some context), together with formalization rules. These requirements are supported by semantic patterns. Other potential uses of this approach, such as clinical models validation, require further investigation. We show how an ontology-based representation of a clinical summary, guided by semantic patterns, allows homogeneous querying of heterogeneous information structures. Whether there are a finite number of top-level patterns is an open question. © The Author 2015. Published by Oxford University Press on behalf of the American Medical Informatics Association. All rights reserved. For Permissions, please email: journals.permissions@oup.com.

  20. [Analysis of health terminologies for use as ontologies in healthcare information systems].

    PubMed

    Romá-Ferri, Maria Teresa; Palomar, Manuel

    2008-01-01

    Ontologies are a resource that allows the meaning of concepts to be represented computationally, thus avoiding the limitations imposed by standardized terms. The objective of this study was to establish the extent to which terminologies could be used for the design of ontologies, which could serve as an aid to resolving problems such as semantic interoperability and knowledge reusability in healthcare information systems. To determine the extent to which terminologies could be used as ontologies, six of the most important terminologies in the clinical, epidemiologic, documentation and administrative-economic contexts were analyzed. The following characteristics were verified: conceptual coverage, hierarchical structure, conceptual granularity of the categories, conceptual relations, and the language used for conceptual representation. MeSH, DeCS and UMLS ontologies were considered lightweight. The main differences among these ontologies concern conceptual specification, the types of relation and the restrictions among the associated concepts. SNOMED and GALEN ontologies have declarative formalism, based on logical descriptions. These ontologies include explicit qualities and show greater restrictions among associated concepts and rule combinations and were consequently considered heavyweight. Analysis of the declared representation of the terminologies shows the extent to which they could be reused as ontologies. Their degree of usability depends on whether the aim is for healthcare information systems to solve problems of semantic interoperability (lightweight ontologies) or to reuse the systems' knowledge as an aid to decision making (heavyweight ontologies) and for non-structured information retrieval, extraction, and classification.

  1. Semantic technologies in a decision support system

    NASA Astrophysics Data System (ADS)

    Wasielewska, K.; Ganzha, M.; Paprzycki, M.; Bǎdicǎ, C.; Ivanovic, M.; Lirkov, I.

    2015-10-01

    The aim of our work is to design a decision support system based on ontological representation of domain(s) and semantic technologies. Specifically, we consider the case when Grid / Cloud user describes his/her requirements regarding a "resource" as a class expression from an ontology, while the instances of (the same) ontology represent available resources. The goal is to help the user to find the best option with respect to his/her requirements, while remembering that user's knowledge may be "limited." In this context, we discuss multiple approaches based on semantic data processing, which involve different "forms" of user interaction with the system. Specifically, we consider: (a) ontological matchmaking based on SPARQL queries and class expression, (b) graph-based semantic closeness of instances representing user requirements (constructed from the class expression) and available resources, and (c) multicriterial analysis based on the AHP method, which utilizes expert domain knowledge (also ontologically represented).
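
    Approach (b), graph-based semantic closeness between a requirement and resource instances, can be caricatured as property-set overlap; the resource descriptions below are invented, and the actual system operates on ontology instances, class expressions and SPARQL rather than flat property sets.

```python
# Hypothetical property sets for a user requirement and two Grid/Cloud
# resources; closeness is Jaccard overlap of the property assertions.
requirement = {("cpu", "x86_64"), ("ram_gb", 16), ("gpu", "cuda")}
resources = {
    "node-a": {("cpu", "x86_64"), ("ram_gb", 16), ("gpu", "cuda"), ("disk_gb", 500)},
    "node-b": {("cpu", "arm64"), ("ram_gb", 8)},
}

def closeness(req, res):
    # Jaccard similarity: shared properties over all properties mentioned.
    return len(req & res) / len(req | res)

# Rank resources by how closely they satisfy the requirement.
ranked = sorted(resources, key=lambda r: closeness(requirement, resources[r]),
                reverse=True)
```

    A semantic version would also count near-matches (e.g. 32 GB of RAM satisfying a 16 GB requirement) by walking the ontology graph, which is exactly what the closeness-based approach adds over literal matchmaking.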

  2. Towards symbiosis in knowledge representation and natural language processing for structuring clinical practice guidelines.

    PubMed

    Weng, Chunhua; Payne, Philip R O; Velez, Mark; Johnson, Stephen B; Bakken, Suzanne

    2014-01-01

    The successful adoption by clinicians of evidence-based clinical practice guidelines (CPGs) contained in clinical information systems requires efficient translation of free-text guidelines into computable formats. Natural language processing (NLP) has the potential to improve the efficiency of such translation. However, it is laborious to develop NLP to structure free-text CPGs using existing formal knowledge representations (KR). In response to this challenge, this vision paper discusses the value and feasibility of supporting symbiosis in text-based knowledge acquisition (KA) and KR. We compare two ontologies: (1) an ontology manually created by domain experts for CPG eligibility criteria and (2) an upper-level ontology derived from a semantic pattern-based approach for automatic KA from CPG eligibility criteria text. Then we discuss the strengths and limitations of interweaving KA and NLP for KR purposes and important considerations for achieving the symbiosis of KR and NLP for structuring CPGs to achieve evidence-based clinical practice.

  3. LapOntoSPM: an ontology for laparoscopic surgeries and its application to surgical phase recognition.

    PubMed

    Katić, Darko; Julliard, Chantal; Wekerle, Anna-Laura; Kenngott, Hannes; Müller-Stich, Beat Peter; Dillmann, Rüdiger; Speidel, Stefanie; Jannin, Pierre; Gibaud, Bernard

    2015-09-01

    The rise of intraoperative information threatens to outpace our abilities to process it. Context-aware systems, filtering information to adapt automatically to the current needs of the surgeon, are necessary to profit fully from computerized surgery. To attain context awareness, representation of medical knowledge is crucial. However, most existing systems do not represent knowledge in a reusable way, which also hinders reuse of data. Our purpose is therefore to make our computational models of medical knowledge sharable, extensible and interoperable with established knowledge representations, in the form of the LapOntoSPM ontology. To show its usefulness, we apply it to situation interpretation, i.e., the recognition of surgical phases based on surgical activities. Considering best practices in ontology engineering and building on our ontology for laparoscopy, we formalized the workflow of laparoscopic adrenalectomies, cholecystectomies and pancreatic resections in the framework of OntoSPM, a new standard for surgical process models. Furthermore, we provide a rule-based situation interpretation algorithm based on SQWRL to recognize surgical phases using the ontology. The system was evaluated on ground-truth data from 19 manually annotated surgeries. The aim was to show that its phase recognition capabilities are equal to those of a specialized solution. The recognition rates of the new system were equal to the specialized one. However, the time needed to interpret a situation rose from 0.5 to 1.8 s on average, which is still viable for practical application. We successfully integrated medical knowledge for laparoscopic surgeries into OntoSPM, facilitating knowledge and data sharing. This is especially important for reproducibility of results and unbiased comparison of recognition algorithms. The associated recognition algorithm was adapted to the new representation without any loss of classification power. The work is an important step toward standardized knowledge and data representation in the field of context awareness and thus toward unified benchmark data sets.

  5. Representation of Time-Relevant Common Data Elements in the Cancer Data Standards Repository: Statistical Evaluation of an Ontological Approach

    PubMed Central

    Chen, Henry W; Du, Jingcheng; Song, Hsing-Yi; Liu, Xiangyu; Jiang, Guoqian

    2018-01-01

    Background Today, there is an increasing need to centralize and standardize electronic health data within clinical research as the volume of data continues to balloon. Domain-specific common data elements (CDEs) are emerging as a standard approach to clinical research data capturing and reporting. Recent efforts to standardize clinical study CDEs have been of great benefit in facilitating data integration and data sharing. The importance of the temporal dimension of clinical research studies has been well recognized; however, very few studies have focused on the formal representation of temporal constraints and temporal relationships within clinical research data in the biomedical research community. In particular, temporal information can be extremely powerful to enable high-quality cancer research. Objective The objective of the study was to develop and evaluate an ontological approach to represent the temporal aspects of cancer study CDEs. Methods We used CDEs recorded in the National Cancer Institute (NCI) Cancer Data Standards Repository (caDSR) and created a CDE parser to extract time-relevant CDEs from the caDSR. Using the Web Ontology Language (OWL)–based Time Event Ontology (TEO), we manually derived representative patterns to semantically model the temporal components of the CDEs using an observing set of randomly selected time-related CDEs (n=600) to create a set of TEO ontological representation patterns. In evaluating TEO’s ability to represent the temporal components of the CDEs, this set of representation patterns was tested against two test sets of randomly selected time-related CDEs (n=425). Results It was found that 94.2% (801/850) of the CDEs in the test sets could be represented by the TEO representation patterns. Conclusions In conclusion, TEO is a good ontological model for representing the temporal components of the CDEs recorded in caDSR. 
Our representation model can harness Semantic Web reasoning and inferencing functionalities and presents a means for temporal CDEs to be machine-readable, streamlining meaningful searches. PMID:29472179
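    A rough sense of what pattern-based handling of time-related CDEs involves can be conveyed with a few lines of Python. The category names and keyword rules below are illustrative assumptions of ours, not TEO's actual representation patterns:

```python
import re

# Hypothetical sketch: classify time-related CDE names into coarse temporal
# categories, loosely inspired by the idea of deriving representation
# patterns for temporal CDE components. Labels and rules are illustrative.
TEMPORAL_PATTERNS = [
    ("duration",   re.compile(r"\b(duration|elapsed time)\b", re.I)),
    ("start_time", re.compile(r"\b(start|begin|onset|initiation) date\b", re.I)),
    ("end_time",   re.compile(r"\b(end|stop|completion) date\b", re.I)),
    ("time_point", re.compile(r"\bdate\b", re.I)),
]

def classify_cde(name: str) -> str:
    """Return the first matching temporal pattern label, or 'unclassified'."""
    for label, pattern in TEMPORAL_PATTERNS:
        if pattern.search(name):
            return label
    return "unclassified"
```

    Rule order matters here: the more specific patterns are tried before the generic `time_point` fallback, mirroring how a pattern library is usually applied from most to least specific.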

  6. OpenTox predictive toxicology framework: toxicological ontology and semantic media wiki-based OpenToxipedia.

    PubMed

    Tcheremenskaia, Olga; Benigni, Romualdo; Nikolova, Ivelina; Jeliazkova, Nina; Escher, Sylvia E; Batke, Monika; Baier, Thomas; Poroikov, Vladimir; Lagunin, Alexey; Rautenberg, Micha; Hardy, Barry

    2012-04-24

    The OpenTox Framework, developed by the partners in the OpenTox project (http://www.opentox.org), aims to provide unified access to toxicity data, predictive models and validation procedures. Interoperability of resources is achieved using a common information model, based on the OpenTox ontologies, describing predictive algorithms, models and toxicity data. As toxicological data may come from different, heterogeneous sources, a deployed ontology, unifying the terminology and the resources, is critical for the rational and reliable organization of the data, and its automatic processing. The following related ontologies have been developed for OpenTox: a) Toxicological ontology - listing the toxicological endpoints; b) Organs system and Effects ontology - addressing organs, targets/examinations and effects observed in in vivo studies; c) ToxML ontology - representing semi-automatic conversion of the ToxML schema; d) OpenTox ontology - representation of OpenTox framework components: chemical compounds, datasets, types of algorithms, models and validation web services; e) ToxLink-ToxCast assays ontology and f) OpenToxipedia community knowledge resource on toxicology terminology. OpenTox components are made available through standardized REST web services, where every compound, data set, and predictive method has a unique resolvable address (URI), used to retrieve its Resource Description Framework (RDF) representation, or to initiate the associated calculations and generate new RDF-based resources. The services support the integration of toxicity and chemical data from various sources, the generation and validation of computer models for toxic effects, seamless integration of new algorithms and scientifically sound validation routines, and provide a flexible framework which allows building an arbitrary number of applications tailored to solving different problems by end users (e.g. toxicologists). 
The OpenTox toxicological ontology projects may be accessed via the OpenTox ontology development page http://www.opentox.org/dev/ontology; the OpenTox ontology is available as OWL at http://opentox.org/api/1.1/opentox.owl; the ToxML-OWL conversion utility is an open-source resource available at http://ambit.svn.sourceforge.net/viewvc/ambit/branches/toxml-utils/

  7. Ontology-based representation and analysis of host-Brucella interactions.

    PubMed

    Lin, Yu; Xiang, Zuoshuang; He, Yongqun

    2015-01-01

    Biomedical ontologies are representations of classes of entities in the biomedical domain and how these classes are related in computer- and human-interpretable formats. Ontologies support data standardization and exchange and provide a basis for computer-assisted automated reasoning. IDOBRU is an ontology in the domain of Brucella and brucellosis. Brucella is a Gram-negative intracellular bacterium that causes brucellosis, the most common zoonotic disease in the world. In this study, IDOBRU is used as a platform to model and analyze how the hosts, especially host macrophages, interact with virulent Brucella strains or live attenuated Brucella vaccine strains. Such a study allows us to better integrate and understand intricate Brucella pathogenesis and host immunity mechanisms. Different levels of host-Brucella interactions based on different host cell types and Brucella strains were first defined ontologically. Three important processes of virulent Brucella interacting with host macrophages were represented: Brucella entry into macrophage, intracellular trafficking, and intracellular replication. Two Brucella pathogenesis mechanisms were ontologically represented: Brucella Type IV secretion system that supports intracellular trafficking and replication, and Brucella erythritol metabolism that participates in Brucella intracellular survival and pathogenesis. The host cell death pathway is critical to the outcome of host-Brucella interactions. For better survival and replication, virulent Brucella prevents macrophage cell death. However, live attenuated B. abortus vaccine strain RB51 induces caspase-2-mediated proinflammatory cell death. Brucella-associated cell death processes are represented in IDOBRU. The gene and protein information of 432 manually annotated Brucella virulence factors were represented using the Ontology of Genes and Genomes (OGG) and Protein Ontology (PRO), respectively. 
Seven inference rules were defined to capture the knowledge of host-Brucella interactions and implemented in IDOBRU. Current IDOBRU includes 3611 ontology terms. SPARQL queries identified many results that are critical to the host-Brucella interactions. For example, out of 269 protein virulence factors related to macrophage-Brucella interactions, 81 are critical to Brucella intracellular replication inside macrophages. A SPARQL query also identified 11 biological processes important for Brucella virulence. To systematically represent and analyze fundamental host-pathogen interaction mechanisms, we provided for the first time comprehensive ontological modeling of host-pathogen interactions using Brucella as the pathogen model. The methods and ontology representations used in our study are generic and can be broadened to study the interactions between hosts and other pathogens.
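    The rule-based inference described above can be illustrated with a minimal forward-chaining sketch in Python. The facts, predicates, and rules here are hypothetical simplifications of ours, not IDOBRU's actual OWL axioms or SPARQL machinery:

```python
# Illustrative sketch of rule-based inference over host-pathogen facts,
# in the spirit of the inference rules described for IDOBRU. Strain names,
# predicates, and rules are hypothetical simplifications.
facts = {
    ("B_abortus_2308", "has_phenotype", "virulent"),
    ("B_abortus_RB51", "has_phenotype", "attenuated"),
}

RULES = [
    # toy rule: virulent strains are inferred to prevent macrophage cell death
    (lambda s, p, o: p == "has_phenotype" and o == "virulent",
     lambda s: (s, "prevents", "macrophage_cell_death")),
    # toy rule: attenuated strains are inferred to induce proinflammatory cell death
    (lambda s, p, o: p == "has_phenotype" and o == "attenuated",
     lambda s: (s, "induces", "proinflammatory_cell_death")),
]

def apply_rules(facts):
    """Forward-chain the rules until no new triples can be derived."""
    derived = set(facts)
    changed = True
    while changed:
        changed = False
        for cond, conclude in RULES:
            for s, p, o in list(derived):
                if cond(s, p, o) and conclude(s) not in derived:
                    derived.add(conclude(s))
                    changed = True
    return derived
```

    An OWL reasoner or SPARQL CONSTRUCT query plays the same role over the real ontology; the fixed-point loop above is the essence of forward chaining.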

  8. Ontology-Based Multiple Choice Question Generation

    PubMed Central

    Al-Yahya, Maha

    2014-01-01

    With recent advancements in Semantic Web technologies, a new trend in MCQ item generation has emerged through the use of ontologies. Ontologies are knowledge representation structures that formally describe entities in a domain and their relationships, thus enabling automated inference and reasoning. Ontology-based MCQ item generation is still in its infancy, but substantial research efforts are being made in the field. However, the applicability of these models for use in an educational setting has not been thoroughly evaluated. In this paper, we present an experimental evaluation of an ontology-based MCQ item generation system known as OntoQue. The evaluation was conducted using two different domain ontologies. The findings of this study show that ontology-based MCQ generation systems produce satisfactory MCQ items to a certain extent. However, the evaluation also revealed a number of shortcomings with current ontology-based MCQ item generation systems with regard to the educational significance of an automatically constructed MCQ item, the knowledge level it addresses, and its language structure. Furthermore, for the task to be successful in producing high-quality MCQ items for learning assessments, this study suggests a novel, holistic view that incorporates learning content, learning objectives, lexical knowledge, and scenarios into a single cohesive framework. PMID:24982937
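    A common strategy in ontology-based MCQ generation, deriving the key from a class assertion and the distractors from contrasting classes, can be sketched as follows. This is a toy illustration under our own assumptions; OntoQue's actual algorithm is richer:

```python
# Hypothetical sketch of ontology-based MCQ item generation: the stem is
# built from a subclass assertion, the key is the concept's superclass, and
# the distractors are the other superclasses in the (toy) ontology.
subclass_of = {
    "Lion": "Mammal", "Tiger": "Mammal", "Whale": "Mammal",
    "Eagle": "Bird", "Penguin": "Bird",
}

def generate_mcq(concept):
    """Build one MCQ item dict for the given concept."""
    key = subclass_of[concept]
    distractors = sorted({c for c in subclass_of.values() if c != key})
    return {
        "stem": f"Which class does {concept} belong to?",
        "key": key,
        "options": sorted(distractors + [key]),
    }
```

    The shortcomings noted in the abstract are visible even here: the item tests only simple recall of a taxonomic fact, which is exactly the limited knowledge level the evaluation criticizes.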

  9. Preface to MOST-ONISW 2009

    NASA Astrophysics Data System (ADS)

    Doerr, Martin; Freitas, Fred; Guizzardi, Giancarlo; Han, Hyoil

    Ontology is a cross-disciplinary field concerned with the study of concepts and theories that can be used for representing shared conceptualizations of specific domains. Ontological Engineering is a discipline in computer and information science concerned with the development of techniques, methods, languages and tools for the systematic construction of concrete artifacts capturing these representations, i.e., models (e.g., domain ontologies) and metamodels (e.g., upper-level ontologies). In recent years, there has been a growing interest in the application of formal ontology and ontological engineering to solve modeling problems in diverse areas in computer science such as software and data engineering, knowledge representation, natural language processing, information science, among many others.

  10. Ontology to relational database transformation for web application development and maintenance

    NASA Astrophysics Data System (ADS)

    Mahmudi, Kamal; Inggriani Liem, M. M.; Akbar, Saiful

    2018-03-01

    In a KMS (Knowledge Management System), an ontology serves as the knowledge representation while a database records the facts. In most applications, data are managed in a database system, updated through the application, and transformed into knowledge as needed. Once a domain conceptualizer defines the knowledge in the ontology, both the application and the database can be generated from it. Most existing frameworks generate the application from its database; in this research, the ontology is used to generate the application. As data are updated through the application, a mechanism triggers an update to the ontology so that the application can be rebuilt from the newest ontology. With this approach, a knowledge engineer has full flexibility to regenerate the application from the latest ontology without depending on a software developer, which matters because in many cases the conceptual model needs to change when the data change. The framework was built and tested in a Spring Java environment, and a case study was conducted to prove the concept.
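    One plausible core of such an ontology-to-relational transformation, mapping each class to a table and each datatype property to a column, can be sketched with Python's standard sqlite3 module. The schema mapping, class names, and properties below are illustrative assumptions of ours, not the paper's actual framework:

```python
import sqlite3

# Minimal, hypothetical ontology: class name -> datatype properties.
# Real frameworks also handle object properties, inheritance, and constraints.
ontology = {
    "Employee": ["name", "hire_date"],
    "Department": ["title", "location"],
}

def ontology_to_ddl(ontology):
    """Emit one CREATE TABLE statement per ontology class."""
    statements = []
    for cls, props in ontology.items():
        cols = ", ".join(["id INTEGER PRIMARY KEY"] + [f"{p} TEXT" for p in props])
        statements.append(f"CREATE TABLE {cls} ({cols})")
    return statements

# Materialize the generated schema in an in-memory database.
conn = sqlite3.connect(":memory:")
for ddl in ontology_to_ddl(ontology):
    conn.execute(ddl)
```

    Regenerating the application after an ontology change then amounts to re-running this transformation against the updated class definitions, which is the flexibility the abstract describes.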

  11. The Cell Ontology 2016: enhanced content, modularization, and ontology interoperability.

    PubMed

    Diehl, Alexander D; Meehan, Terrence F; Bradford, Yvonne M; Brush, Matthew H; Dahdul, Wasila M; Dougall, David S; He, Yongqun; Osumi-Sutherland, David; Ruttenberg, Alan; Sarntivijai, Sirarat; Van Slyke, Ceri E; Vasilevsky, Nicole A; Haendel, Melissa A; Blake, Judith A; Mungall, Christopher J

    2016-07-04

    The Cell Ontology (CL) is an OBO Foundry candidate ontology covering the domain of canonical, natural biological cell types. Since its inception in 2005, the CL has undergone multiple rounds of revision and expansion, most notably in its representation of hematopoietic cells. For in vivo cells, the CL focuses on vertebrates but provides general classes that can be used for other metazoans, which can be subtyped in species-specific ontologies. Recent work on the CL has focused on extending the representation of various cell types, and developing new modules in the CL itself, and in related ontologies in coordination with the CL. For example, the Kidney and Urinary Pathway Ontology was used as a template to populate the CL with additional cell types. In addition, subtypes of the class 'cell in vitro' have received improved definitions and labels to provide for modularity with the representation of cells in the Cell Line Ontology and Reagent Ontology. Recent changes in the ontology development methodology for CL include a switch from OBO to OWL for the primary encoding of the ontology, and an increasing reliance on logical definitions for improved reasoning. The CL is now mandated as a metadata standard for large functional genomics and transcriptomics projects, and is used extensively for annotation, querying, and analyses of cell type specific data in sequencing consortia such as FANTOM5 and ENCODE, as well as for the NIAID ImmPort database and the Cell Image Library. The CL is also a vital component used in the modular construction of other biomedical ontologies-for example, the Gene Ontology and the cross-species anatomy ontology, Uberon, use CL to support the consistent representation of cell types across different levels of anatomical granularity, such as tissues and organs. 
The ongoing improvements to the CL make it a valuable resource to both the OBO Foundry community and the wider scientific community, and we continue to experience increased interest in the CL both among developers and within the user community.

  12. OpenTox predictive toxicology framework: toxicological ontology and semantic media wiki-based OpenToxipedia

    PubMed Central

    2012-01-01

    Background The OpenTox Framework, developed by the partners in the OpenTox project (http://www.opentox.org), aims at providing a unified access to toxicity data, predictive models and validation procedures. Interoperability of resources is achieved using a common information model, based on the OpenTox ontologies, describing predictive algorithms, models and toxicity data. As toxicological data may come from different, heterogeneous sources, a deployed ontology, unifying the terminology and the resources, is critical for the rational and reliable organization of the data, and its automatic processing. Results The following related ontologies have been developed for OpenTox: a) Toxicological ontology – listing the toxicological endpoints; b) Organs system and Effects ontology – addressing organs, targets/examinations and effects observed in in vivo studies; c) ToxML ontology – representing semi-automatic conversion of the ToxML schema; d) OpenTox ontology– representation of OpenTox framework components: chemical compounds, datasets, types of algorithms, models and validation web services; e) ToxLink–ToxCast assays ontology and f) OpenToxipedia community knowledge resource on toxicology terminology. OpenTox components are made available through standardized REST web services, where every compound, data set, and predictive method has a unique resolvable address (URI), used to retrieve its Resource Description Framework (RDF) representation, or to initiate the associated calculations and generate new RDF-based resources. The services support the integration of toxicity and chemical data from various sources, the generation and validation of computer models for toxic effects, seamless integration of new algorithms and scientifically sound validation routines and provide a flexible framework, which allows building arbitrary number of applications, tailored to solving different problems by end users (e.g. toxicologists). 
Availability The OpenTox toxicological ontology projects may be accessed via the OpenTox ontology development page http://www.opentox.org/dev/ontology; the OpenTox ontology is available as OWL at http://opentox.org/api/1.1/opentox.owl; the ToxML-OWL conversion utility is an open-source resource available at http://ambit.svn.sourceforge.net/viewvc/ambit/branches/toxml-utils/ PMID:22541598

  13. Ontological Representation of Light Wave Camera Data to Support Vision-Based AmI

    PubMed Central

    Serrano, Miguel Ángel; Gómez-Romero, Juan; Patricio, Miguel Ángel; García, Jesús; Molina, José Manuel

    2012-01-01

    Recent advances in technologies for capturing video data have opened a vast number of new application areas in visual sensor networks. Among them, the incorporation of light wave cameras into Ambient Intelligence (AmI) environments provides more accurate tracking capabilities for activity recognition. Although the performance of tracking algorithms has quickly improved, the symbolic models used to represent the resulting knowledge have not yet been adapted to smart environments. This representational gap prevents taking advantage of the semantic quality of the information provided by the new sensors. This paper advocates introducing a part-based representational level in cognitive systems in order to accurately represent the knowledge delivered by these novel sensors. The paper also reviews the theoretical and practical issues of part-whole relationships and proposes a specific taxonomy for computer vision approaches. General part-based patterns for the human body, together with transitive part-based representation and inference, are incorporated into a previously developed ontology-based framework to enhance scene interpretation in the area of video-based AmI. The advantages and new features of the model are demonstrated in a Social Signal Processing (SSP) application for conducting live market research.
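    The transitive part-based inference mentioned above reduces, in its simplest form, to computing the transitive closure of a part-of relation. The following self-contained sketch uses hypothetical body-part names:

```python
# Illustrative sketch of transitive part-whole inference for a body-part
# taxonomy (names hypothetical): if A is part of B and B is part of C,
# then A is inferred to be part of C.
part_of = {
    "hand": "arm",
    "arm": "upper_body",
    "upper_body": "body",
    "head": "body",
}

def all_wholes(part):
    """Return every whole the given part is transitively part of, innermost first."""
    wholes = []
    while part in part_of:
        part = part_of[part]
        wholes.append(part)
    return wholes
```

    In an OWL setting the same effect is obtained by declaring the part-of property transitive and letting the reasoner compute the closure; the loop above just makes the inference explicit.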

  14. KaBOB: ontology-based semantic integration of biomedical databases.

    PubMed

    Livingston, Kevin M; Bada, Michael; Baumgartner, William A; Hunter, Lawrence E

    2015-04-23

    The ability to query many independent biological databases using a common ontology-based semantic model would facilitate deeper integration and more effective utilization of these diverse and rapidly growing resources. Despite ongoing work moving toward shared data formats and linked identifiers, significant problems persist in semantic data integration in order to establish shared identity and shared meaning across heterogeneous biomedical data sources. We present five processes for semantic data integration that, when applied collectively, solve seven key problems. These processes include making explicit the differences between biomedical concepts and database records, aggregating sets of identifiers denoting the same biomedical concepts across data sources, and using declaratively represented forward-chaining rules to take information that is variably represented in source databases and integrating it into a consistent biomedical representation. We demonstrate these processes and solutions by presenting KaBOB (the Knowledge Base Of Biomedicine), a knowledge base of semantically integrated data from 18 prominent biomedical databases using common representations grounded in Open Biomedical Ontologies. An instance of KaBOB with data about humans and seven major model organisms can be built using on the order of 500 million RDF triples. All source code for building KaBOB is available under an open-source license. KaBOB is an integrated knowledge base of biomedical data representationally based in prominent, actively maintained Open Biomedical Ontologies, thus enabling queries of the underlying data in terms of biomedical concepts (e.g., genes and gene products, interactions and processes) rather than features of source-specific data schemas or file formats. 
KaBOB resolves many of the issues that routinely plague biomedical researchers intending to work with data from multiple data sources and provides a platform for ongoing data integration and development and for formal reasoning over a wealth of integrated biomedical data.
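    The identifier-aggregation step described above, grouping identifiers from different databases that denote the same biomedical concept, is naturally modeled with a union-find structure over cross-reference pairs. The sketch below is our illustration, not KaBOB's actual code; the example identifiers are TP53-related:

```python
# Sketch of aggregating identifiers that denote the same biomedical concept
# across databases, using union-find over cross-reference pairs.
class UnionFind:
    def __init__(self):
        self.parent = {}

    def find(self, x):
        """Return the representative of x's group, compressing paths as we go."""
        self.parent.setdefault(x, x)
        while self.parent[x] != x:
            self.parent[x] = self.parent[self.parent[x]]  # path halving
            x = self.parent[x]
        return x

    def union(self, a, b):
        """Merge the groups containing a and b."""
        self.parent[self.find(a)] = self.find(b)

# Cross-references asserting that these identifiers denote the same gene/product.
xrefs = [("UniProt:P04637", "HGNC:11998"), ("HGNC:11998", "NCBIGene:7157")]
uf = UnionFind()
for a, b in xrefs:
    uf.union(a, b)
```

    After the unions, any query phrased against one identifier can be answered with data recorded under any other identifier in the same group, which is the "shared identity" problem the abstract describes.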

  15. tOWL: Integrating Time in OWL

    NASA Astrophysics Data System (ADS)

    Frasincar, Flavius; Milea, Viorel; Kaymak, Uzay

    The Web Ontology Language (OWL) is the most expressive standard language for modeling ontologies on the Semantic Web. In this chapter, we present the temporal OWL (tOWL) language: a temporal extension of the OWL DL language. tOWL is based on three layers added on top of OWL DL. The first layer is the Concrete Domains layer, which allows the representation of restrictions using concrete domain binary predicates. The second layer is the Time Representation layer, which adds time points, intervals, and Allen's 13 interval relations. The third layer is the Change Representation layer which supports a perdurantist view on the world, and allows the representation of complex temporal axioms, such as state transitions. A Leveraged Buyout process is used to exemplify the different tOWL constructs and show the tOWL applicability in a business context.
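    Allen's 13 interval relations, which the Time Representation layer builds on, can be computed for concrete numeric intervals as follows. This is a self-contained sketch; tOWL itself expresses these relations through concrete-domain predicates rather than code, and the function and relation names are ours:

```python
# Compute the Allen relation that holds from interval a to interval b.
# Intervals are (start, end) pairs with start < end.
def allen_relation(a, b):
    (a1, a2), (b1, b2) = a, b
    if a2 < b1: return "before"
    if b2 < a1: return "after"
    if a2 == b1: return "meets"
    if b2 == a1: return "met-by"
    if a1 == b1 and a2 == b2: return "equals"
    if a1 == b1: return "starts" if a2 < b2 else "started-by"
    if a2 == b2: return "finishes" if a1 > b1 else "finished-by"
    if b1 < a1 and a2 < b2: return "during"
    if a1 < b1 and b2 < a2: return "contains"
    return "overlaps" if a1 < b1 else "overlapped-by"
```

    Exactly one of the 13 relations holds for any pair of proper intervals, which is what makes the set useful for stating temporal axioms such as state transitions.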

  16. Extraction of Graph Information Based on Image Contents and the Use of Ontology

    ERIC Educational Resources Information Center

    Kanjanawattana, Sarunya; Kimura, Masaomi

    2016-01-01

    A graph is an effective form of data representation used to summarize complex information. Explicit information such as the relationship between the X- and Y-axes can be easily extracted from a graph by applying human intelligence. However, implicit knowledge such as information obtained from other related concepts in an ontology also resides in…

  17. The Role of Floor Control and of Ontology in Argumentative Activities with Discussion-Based Tools

    ERIC Educational Resources Information Center

    Schwarz, Baruch B.; Glassner, Amnon

    2007-01-01

    Argumentative activity has been found beneficial for construction of knowledge and evaluation of information in some conditions. Many theorists in CSCL and some empiricists have suggested that graphical representations may help in this endeavor. In the present study, we examine effects of type of ontology and of synchronicity in students that…

  18. Benchmarking Ontologies: Bigger or Better?

    PubMed Central

    Yao, Lixia; Divoli, Anna; Mayzus, Ilya; Evans, James A.; Rzhetsky, Andrey

    2011-01-01

    A scientific ontology is a formal representation of knowledge within a domain, typically including central concepts, their properties, and relations. With the rise of computers and high-throughput data collection, ontologies have become essential to data mining and sharing across communities in the biomedical sciences. Powerful approaches exist for testing the internal consistency of an ontology, but not for assessing the fidelity of its domain representation. We introduce a family of metrics that describe the breadth and depth with which an ontology represents its knowledge domain. We then test these metrics using (1) four of the most common medical ontologies with respect to a corpus of medical documents and (2) seven of the most popular English thesauri with respect to three corpora that sample language from medicine, news, and novels. Here we show that our approach captures the quality of ontological representation and guides efforts to narrow the breach between ontology and collective discourse within a domain. Our results also demonstrate key features of medical ontologies, English thesauri, and discourse from different domains. Medical ontologies have a small intersection, as do English thesauri. Moreover, dialects characteristic of distinct domains vary strikingly as many of the same words are used quite differently in medicine, news, and novels. As ontologies are intended to mirror the state of knowledge, our methods to tighten the fit between ontology and domain will increase their relevance for new areas of biomedical science and improve the accuracy and power of inferences computed across them. PMID:21249231
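    A minimal breadth-style metric in this spirit, measuring what fraction of a corpus's vocabulary an ontology's labels account for, can be sketched as follows. This is an illustrative simplification of ours; the paper's metrics are more elaborate:

```python
# Hypothetical sketch of a coverage metric: the fraction of distinct corpus
# tokens that appear among an ontology's labels (case-insensitive).
def coverage(ontology_labels, corpus_tokens):
    labels = {label.lower() for label in ontology_labels}
    tokens = {token.lower() for token in corpus_tokens}
    return len(tokens & labels) / len(tokens) if tokens else 0.0
```

    Comparing such scores across corpora from medicine, news, and novels is one way to expose the domain-specific "dialects" the abstract mentions.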

  19. Ontology patterns for tabular representations of biomedical knowledge on neglected tropical diseases

    PubMed Central

    Santana, Filipe; Schober, Daniel; Medeiros, Zulma; Freitas, Fred; Schulz, Stefan

    2011-01-01

    Motivation: Ontology-like domain knowledge is frequently published in a tabular format embedded in scientific publications. We explore the re-use of such tabular content in the process of building NTDO, an ontology of neglected tropical diseases (NTDs), where the representation of the interdependencies between hosts, pathogens and vectors plays a crucial role. Results: As a proof of concept we analyzed a tabular compilation of knowledge about pathogens, vectors and geographic locations involved in the transmission of NTDs. After a thorough ontological analysis of the domain of interest, we formulated a comprehensive design pattern, rooted in the biomedical domain upper level ontology BioTop. This pattern was implemented in a VBA script which takes cell contents of an Excel spreadsheet and transforms them into OWL-DL. After minor manual post-processing, the correctness and completeness of the ontology was tested using pre-formulated competence questions as description logics (DL) queries. The expected results could be reproduced by the ontology. The proposed approach is recommended for optimizing the acquisition of ontological domain knowledge from tabular representations. Availability and implementation: Domain examples, source code and ontology are freely available on the web at http://www.cin.ufpe.br/~ntdo. Contact: fss3@cin.ufpe.br PMID:21685092
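    The spreadsheet-to-OWL transformation can be approximated in miniature by parsing tabular rows into relation triples. The column names and relation names below are hypothetical stand-ins of ours for the paper's BioTop-rooted design pattern:

```python
import csv
import io

# A tiny stand-in for the tabular compilation: pathogens, their vectors,
# and geographic locations (relation names are illustrative, not BioTop's).
TABLE = """pathogen,vector,location
Wuchereria bancrofti,Culex quinquefasciatus,Brazil
Leishmania infantum,Lutzomyia longipalpis,Brazil
"""

def table_to_triples(text):
    """Turn each table row into subject-predicate-object triples."""
    triples = []
    for row in csv.DictReader(io.StringIO(text)):
        triples.append((row["pathogen"], "transmitted_by", row["vector"]))
        triples.append((row["pathogen"], "occurs_in", row["location"]))
    return triples
```

    The paper's VBA script performs the analogous step from Excel cells to OWL-DL axioms; once the content is in triple form, competence questions can be posed as DL queries against the resulting ontology.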

  20. The Fusion Model of Intelligent Transportation Systems Based on the Urban Traffic Ontology

    NASA Astrophysics Data System (ADS)

    Yang, Wang-Dong; Wang, Tao

    To address these issues, urban transport information is represented uniformly with an urban traffic ontology, which defines rules and algebraic operations for semantic fusion at the ontology level so that urban traffic information can be fused with semantic completeness and consistency. Building on the semantic completeness of ontologies, this paper constructs an urban traffic ontology model that resolves problems such as ontology merging and equivalence verification in the semantic fusion of integrated traffic information. Adding semantic fusion to urban transport information integration reduces the amount of data to be integrated and improves the efficiency and completeness of traffic information queries. Through practical application in the intelligent traffic information integration platform of Changde city, the paper shows that ontology-based semantic fusion increases the effectiveness and efficiency of urban traffic information integration, reduces storage requirements, and improves query efficiency and information completeness.

  1. Development of description framework of pharmacodynamics ontology and its application to possible drug-drug interaction reasoning.

    PubMed

    Imai, Takeshi; Hayakawa, Masayo; Ohe, Kazuhiko

    2013-01-01

    Prediction of synergistic or antagonistic effects of drug-drug interactions (DDIs) in vivo has been of considerable interest over the years. Formal representation of pharmacological knowledge, such as an ontology, is indispensable for machine reasoning about possible DDIs. However, current pharmacology knowledge bases are not sufficient to provide formal representation of DDI information. With this background, this paper presents: (1) a description framework of pharmacodynamics ontology; and (2) a methodology to utilize the pharmacodynamics ontology to detect different types of possible DDI pairs with supporting information such as underlying pharmacodynamics mechanisms. We evaluated our methodology on drugs related to the noradrenaline signal transduction process, detecting 11 different types of possible DDI pairs. The main features of our methodology are its capability to explain the reason for a possible DDI and to distinguish different types of DDIs. These features will be useful not only for providing supporting information to prescribers, but also for large-scale monitoring of drug safety.
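    A minimal version of this kind of pharmacodynamics-based DDI detection, flagging drug pairs that act on a shared target and labeling each combination by comparing effect directions, can be sketched as follows. The drug names, target, and labeling rule are hypothetical illustrations of ours, not the paper's framework:

```python
# Hypothetical pharmacodynamics knowledge: drug -> {target: effect direction}.
drug_actions = {
    "drugA": {"beta1_adrenoceptor": "agonist"},
    "drugB": {"beta1_adrenoceptor": "antagonist"},
    "drugC": {"beta1_adrenoceptor": "agonist"},
}

def possible_ddis(actions):
    """Return (drug1, drug2, target, interaction_type) for every shared target."""
    pairs = []
    names = sorted(actions)
    for i, d1 in enumerate(names):
        for d2 in names[i + 1:]:
            for target in actions[d1].keys() & actions[d2].keys():
                kind = ("synergistic" if actions[d1][target] == actions[d2][target]
                        else "antagonistic")
                pairs.append((d1, d2, target, kind))
    return pairs
```

    The shared target recorded with each pair is the kind of supporting mechanistic information the abstract highlights, since it explains *why* the interaction is flagged rather than merely that it exists.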

  2. Managing changes in distributed biomedical ontologies using hierarchical distributed graph transformation.

    PubMed

    Shaban-Nejad, Arash; Haarslev, Volker

    2015-01-01

    The issue of ontology evolution and change management is inadequately addressed by available tools and algorithms, mostly due to the lack of suitable knowledge representation formalisms for temporal abstract notations and an overreliance on human factors. Moreover, most current approaches focus on changes within the internal structure of ontologies, while interactions with other existing ontologies have been widely neglected. In our research, after revealing and classifying some of the common alterations in a number of popular biomedical ontologies, we present a novel agent-based framework, Represent, Legitimate and Reproduce (RLR), to semi-automatically manage the evolution of bio-ontologies, with emphasis on the FungalWeb Ontology, with minimal human intervention. RLR assists and guides ontology engineers through the change management process in general and aids in tracking and representing changes, particularly through the use of category theory and hierarchical graph transformation.

  3. Ontology method for 3DGIS modeling

    NASA Astrophysics Data System (ADS)

    Sun, Min; Chen, Jun

    2006-10-01

    Data modeling is a difficult problem in 3DGIS, and no satisfactory solution has been provided to date, for reasons coming from several directions. In this paper, a new solution named the "Ontology method" is proposed. Traditional GIS modeling mainly focuses on geometrical modeling, i.e., it tries to abstract geometric primitives for object representation, an approach that proves awkward in the 3DGIS modeling process. The Ontology method instead begins modeling by establishing a set of ontologies at different levels. The essential difference of this method is that it swaps the positions of 'spatial data' and 'attribute data' from the 2DGIS modeling process when modeling in 3DGIS. The Ontology method has advantages on many fronts: among other benefits, a system based on ontology readily supports interoperation for communication and data mining for knowledge deduction.

  4. Building a developmental toxicity ontology.

    PubMed

    Baker, Nancy; Boobis, Alan; Burgoon, Lyle; Carney, Edward; Currie, Richard; Fritsche, Ellen; Knudsen, Thomas; Laffont, Madeleine; Piersma, Aldert H; Poole, Alan; Schneider, Steffen; Daston, George

    2018-04-03

    As more information is generated about modes of action for developmental toxicity and more data are generated using high-throughput and high-content technologies, it is becoming necessary to organize that information. This report discusses the need for a systematic representation of knowledge about developmental toxicity (i.e., an ontology) and proposes a method to build one based on knowledge of developmental biology and mode of action/adverse outcome pathways in developmental toxicity. The report is the result of a consensus working group developing a plan to create an ontology for developmental toxicity that spans multiple levels of biological organization. It describes some of the challenges in building a developmental toxicity ontology and outlines a proposed methodology to meet those challenges. As the ontology is built on currently available web-based resources, a review of these resources is provided. Case studies on one of the best-understood morphogens and developmental toxicants, retinoic acid, are presented as examples of how such an ontology might be developed. This report outlines an approach to construct a developmental toxicity ontology; such an ontology will facilitate computer-based prediction of substances likely to induce human developmental toxicity. © 2018 Wiley Periodicals, Inc.

  5. MENTOR: an enabler for interoperable intelligent systems

    NASA Astrophysics Data System (ADS)

    Sarraipa, João; Jardim-Goncalves, Ricardo; Steiger-Garcao, Adolfo

    2010-07-01

    A community whose knowledge organisation is based on ontologies can increase the computational intelligence of its information systems. However, due to the worldwide diversity of communities, a large number of knowledge representation elements that are not semantically coincident have appeared to represent the same segment of reality, becoming a barrier to business communications. Even if a domain community uses the same kinds of technologies in its information systems, such as ontologies, this alone does not resolve their semantic differences. To solve this interoperability problem, one solution is to use a reference ontology as an intermediary in communications between the community enterprises and the outside, while allowing the enterprises to keep their own ontologies and semantics unchanged internally. This work proposes MENTOR, a methodology to support the development of a common reference ontology for a group of organisations sharing the same business domain. The methodology is based on the mediator ontology (MO) concept, which assists the semantic transformations between each enterprise's ontology and the reference one. The MO enables each organisation to keep its own terminology, glossary and ontological structures, while providing seamless communication and interaction with the others.

  6. A Concept Hierarchy Based Ontology Mapping Approach

    NASA Astrophysics Data System (ADS)

    Wang, Ying; Liu, Weiru; Bell, David

    Ontology mapping is one of the most important tasks for ontology interoperability, and its main aim is to find semantic relationships between entities (i.e. concepts, attributes, and relations) of two ontologies. However, most current methods only consider one-to-one (1:1) mappings. In this paper we propose a new approach (CHM: Concept Hierarchy based Mapping approach) which can find simple (1:1) mappings and complex (m:1 or 1:m) mappings simultaneously. First, we propose a new method to represent the concept names of entities. This method is based on the hierarchical structure of an ontology, such that each entity's concept name in the ontology is represented by a set. The parent-child relationship in the hierarchical structure of an ontology is then extended to a set-inclusion relationship between the sets for the parent and the child. Second, we compute the similarities between entities based on this new representation. Third, after generating the mapping candidates, we select the best mapping result for each source entity, using a new selection algorithm based on the Apriori algorithm. Finally, we obtain simple (1:1) and complex (m:1 or 1:m) mappings. Our experimental results and comparisons with related work indicate that this method is a promising way to improve the overall mapping results.
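    The set-based representation described in this abstract can be sketched in a few lines: a concept's set contains its own name plus those of all its descendants, so the parent-child relation becomes set inclusion, and entity similarity can then be computed over the sets. The mini ontology and the Jaccard similarity below are illustrative assumptions, not the paper's exact measure.

```python
# Sketch of a hierarchy-to-set representation: each concept's set holds
# its own name and the names of all descendants, so parent-child edges
# become set-inclusion relationships between the corresponding sets.

def concept_sets(hierarchy, root):
    """hierarchy: dict mapping a concept name to its list of children."""
    s = {root}
    for child in hierarchy.get(root, []):
        s |= concept_sets(hierarchy, child)
    return s

def jaccard(a, b):
    """One possible set similarity for comparing entities."""
    return len(a & b) / len(a | b) if a | b else 0.0

# invented toy ontology
onto = {"Publication": ["Article", "Book"], "Article": ["JournalArticle"]}
pub = concept_sets(onto, "Publication")
art = concept_sets(onto, "Article")
# set inclusion mirrors the hierarchy: JournalArticle <= Article <= Publication
assert concept_sets(onto, "JournalArticle") <= art <= pub
print(jaccard(art, pub))
```

Because inclusion is preserved, several child sets contained in one target set naturally suggest the complex (m:1) mapping candidates the paper targets.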

  7. Functional Abstraction as a Method to Discover Knowledge in Gene Ontologies

    PubMed Central

    Ultsch, Alfred; Lötsch, Jörn

    2014-01-01

    Computational analyses of the functions of gene sets obtained in microarray analyses or by topical database searches are increasingly important in biology. To understand their functions, the sets are usually mapped to Gene Ontology knowledge bases by means of over-representation analysis (ORA). Its result represents the specific knowledge of the functionality of the gene set. However, the specific ontology typically consists of many terms and relationships, hindering the understanding of the ‘main story’. We developed a methodology to identify a comprehensibly small number of GO terms as “headlines” of the specific ontology, allowing users to understand all central aspects of the roles of the involved genes. The Functional Abstraction method finds a set of headlines that is specific enough to cover all details of a specific ontology yet abstract enough for human comprehension. The method exceeds classical approaches to ORA abstraction: by focusing on information rather than decorrelation of GO terms, it directly targets human comprehension. Functional Abstraction provides, with a maximum of certainty, information value, coverage and conciseness, a representation of the biological functions in which a gene set plays a role. This is the necessary means to interpret complex Gene Ontology results, thus strengthening the role of functional genomics in biomarker and drug discovery. PMID:24587272
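    The ORA step this abstract builds on is commonly a hypergeometric test: how likely is it that at least k genes of a set of size n carry a GO term that K of the N background genes carry? A stdlib-only sketch follows; the gene counts and the function name are illustrative assumptions, not taken from the paper.

```python
# Minimal over-representation analysis (ORA) core: the tail probability
# P(X >= k) of the hypergeometric distribution, computed with math.comb.
from math import comb

def hypergeom_pvalue(N, K, n, k):
    """P(X >= k) when drawing n genes from N, K of which carry the GO term."""
    total = comb(N, n)
    return sum(comb(K, i) * comb(N - K, n - i)
               for i in range(k, min(K, n) + 1)) / total

# invented example: background of 20 genes, 5 annotated to the term,
# gene set of 6, of which 3 are annotated
p = hypergeom_pvalue(N=20, K=5, n=6, k=3)
print(f"{p:.4f}")
```

A term is reported as over-represented when this p-value (usually after multiple-testing correction) falls below a chosen threshold; Functional Abstraction then condenses the many resulting terms into a few headlines.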

  8. Knowledge Representation and Management, It's Time to Integrate!

    PubMed

    Dhombres, F; Charlet, J

    2017-08-01

    Objectives: To select, present, and summarize the best papers published in 2016 in the field of Knowledge Representation and Management (KRM). Methods: A comprehensive and standardized review of the medical informatics literature was performed based on a PubMed query. Results: Among the 1,421 retrieved papers, the review process resulted in the selection of four best papers focused on the integration of heterogeneous data via the development and the alignment of terminological resources. In the first article, the authors provide a curated and standardized version of the publicly available US FDA Adverse Event Reporting System. Such a resource will improve the quality of the underlying data and enable standardized analyses using common vocabularies. The second article describes a project developed to facilitate heterogeneous data integration in the i2b2 framework. Its originality lies in allowing users to integrate data described in different terminologies and to build a new repository with a unique model able to represent the various data. The third paper is dedicated to modeling the association between multiple phenotypic traits described within the Human Phenotype Ontology (HPO) and the corresponding genotype in the specific context of rare diseases (rare variants). Finally, the fourth paper presents solutions to annotation-ontology mapping in genome-scale data. Of particular interest in this work is the Experimental Factor Ontology (EFO) and its generic association model, the Ontology of Biomedical AssociatioN (OBAN). Conclusion: Ontologies have started to show their effectiveness in integrating medical data for various tasks in medical informatics: electronic health records data management, clinical research, and knowledge-based systems development. Georg Thieme Verlag KG Stuttgart.

  10. Integrating reasoning and clinical archetypes using OWL ontologies and SWRL rules.

    PubMed

    Lezcano, Leonardo; Sicilia, Miguel-Angel; Rodríguez-Solano, Carlos

    2011-04-01

    Semantic interoperability is essential to facilitate computerized support for alerts, workflow management and evidence-based healthcare across heterogeneous electronic health record (EHR) systems. Clinical archetypes, which are formal definitions of specific clinical concepts defined as specializations of a generic reference (information) model, provide a mechanism to express data structures in a shared and interoperable way. However, currently available archetype languages do not provide direct support for mapping to formal ontologies and then exploiting reasoning on clinical knowledge, which are key ingredients of full semantic interoperability, as stated in the SemanticHEALTH report [1]. This paper reports on an approach to translate definitions expressed in the openEHR Archetype Definition Language (ADL) to a formal representation expressed using the Ontology Web Language (OWL). The formal representations are then integrated with rules expressed as Semantic Web Rule Language (SWRL) expressions, providing an approach to apply the SWRL rules to concrete instances of clinical data. Sharing the knowledge expressed in the form of rules is consistent with the philosophy of open sharing encouraged by archetypes. Our approach also allows the reuse of formal knowledge, expressed through ontologies, and extends reuse to propositions of declarative knowledge, such as those encoded in clinical guidelines. This paper describes the ADL-to-OWL translation approach, describes the techniques to map archetypes to formal ontologies, and demonstrates how rules can be applied to the resulting representation. We provide examples taken from a patient safety alerting system to illustrate our approach. Copyright © 2010 Elsevier Inc. All rights reserved.
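    The rule-over-instances pattern described here can be illustrated without a SWRL engine: a rule's antecedent becomes a predicate evaluated against concrete patient records, and matches raise alerts. This is a plain-Python stand-in, not actual SWRL; the field name and threshold are hypothetical.

```python
# Illustrative (not real SWRL) sketch: a rule over formally defined
# clinical concepts is applied to concrete instances of clinical data,
# as in the patient safety alerting examples the paper describes.

def swrl_like_rule(patient):
    """Roughly: Patient(?p) ^ hasCreatinine(?p, ?c) ^ greaterThan(?c, 1.3)
       -> RenalAlert(?p), encoded as a plain Python predicate."""
    return patient.get("creatinine_mg_dl", 0) > 1.3

patients = [
    {"id": "p1", "creatinine_mg_dl": 0.9},
    {"id": "p2", "creatinine_mg_dl": 1.8},
]
alerts = [p["id"] for p in patients if swrl_like_rule(p)]
print(alerts)  # -> ['p2']
```

In the paper's pipeline the archetype-derived OWL classes supply the vocabulary (`Patient`, `hasCreatinine`) that such rules range over, so the same rule can be shared across EHR systems.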

  11. The Cell Ontology 2016: enhanced content, modularization, and ontology interoperability

    DOE PAGES

    Diehl, Alexander D.; Meehan, Terrence F.; Bradford, Yvonne M.; ...

    2016-07-04

    Background: The Cell Ontology (CL) is an OBO Foundry candidate ontology covering the domain of canonical, natural biological cell types. Since its inception in 2005, the CL has undergone multiple rounds of revision and expansion, most notably in its representation of hematopoietic cells. For in vivo cells, the CL focuses on vertebrates but provides general classes that can be used for other metazoans, which can be subtyped in species-specific ontologies. Construction and content: Recent work on the CL has focused on extending the representation of various cell types, and developing new modules in the CL itself, and in related ontologies in coordination with the CL. For example, the Kidney and Urinary Pathway Ontology was used as a template to populate the CL with additional cell types. In addition, subtypes of the class 'cell in vitro' have received improved definitions and labels to provide for modularity with the representation of cells in the Cell Line Ontology and Reagent Ontology. Recent changes in the ontology development methodology for CL include a switch from OBO to OWL for the primary encoding of the ontology, and an increasing reliance on logical definitions for improved reasoning. Utility and discussion: The CL is now mandated as a metadata standard for large functional genomics and transcriptomics projects, and is used extensively for annotation, querying, and analyses of cell type specific data in sequencing consortia such as FANTOM5 and ENCODE, as well as for the NIAID ImmPort database and the Cell Image Library. The CL is also a vital component used in the modular construction of other biomedical ontologies; for example, the Gene Ontology and the cross-species anatomy ontology Uberon use CL to support the consistent representation of cell types across different levels of anatomical granularity, such as tissues and organs.
    Conclusions: The ongoing improvements to the CL make it a valuable resource to both the OBO Foundry community and the wider scientific community, and we continue to experience increased interest in the CL both among developers and within the user community.

  12. War of ontology worlds: mathematics, computer code, or Esperanto?

    PubMed

    Rzhetsky, Andrey; Evans, James A

    2011-09-01

    The use of structured knowledge representations (ontologies and terminologies) has become standard in biomedicine. Definitions of ontologies vary widely, as do the values and philosophies that underlie them. In seeking to make these views explicit, we conducted and summarized interviews with a dozen leading ontologists. Their views clustered into three broad perspectives that we summarize as mathematics, computer code, and Esperanto. Ontology as mathematics puts the ultimate premium on rigor and logic, symmetry and consistency of representation across scientific subfields, and the inclusion of only established, non-contradictory knowledge. Ontology as computer code focuses on utility and cultivates diversity, fitting ontologies to their purpose. Like the computer languages C++, Prolog, and HTML, the code perspective holds that diverse applications warrant custom-designed ontologies. Ontology as Esperanto focuses on facilitating cross-disciplinary communication, knowledge cross-referencing, and computation across datasets from diverse communities. We show how these views align with classical divides in science and suggest how a synthesis of their concerns could strengthen the next generation of biomedical ontologies.

  13. TGF-beta signaling proteins and the Protein Ontology.

    PubMed

    Arighi, Cecilia N; Liu, Hongfang; Natale, Darren A; Barker, Winona C; Drabkin, Harold; Blake, Judith A; Smith, Barry; Wu, Cathy H

    2009-05-06

    The Protein Ontology (PRO) is designed as a formal and principled Open Biomedical Ontologies (OBO) Foundry ontology for proteins. The components of PRO extend from a classification of proteins on the basis of evolutionary relationships at the homeomorphic level to the representation of the multiple protein forms of a gene, including those resulting from alternative splicing, cleavage and/or post-translational modifications. Focusing specifically on the TGF-beta signaling proteins, we describe the building, curation, usage and dissemination of PRO. PRO is manually curated on the basis of PrePRO, an automatically generated file with content derived from standard protein data sources. Manual curation ensures that the treatment of the protein classes and the internal and external relationships conform to the PRO framework. The current release of PRO is based upon experimental data from mouse and human proteins wherein equivalent protein forms are represented by single terms. In addition to the PRO ontology, the annotation of PRO terms is released as a separate PRO association file, which contains, for each given PRO term, an annotation from the experimentally characterized sub-types as well as the corresponding database identifiers and sequence coordinates. The annotations are added in the form of relationship to other ontologies. Whenever possible, equivalent forms in other species are listed to facilitate cross-species comparison. Splice and allelic variants, gene fusion products and modified protein forms are all represented as entities in the ontology. Therefore, PRO provides for the representation of protein entities and a resource for describing the associated data. This makes PRO useful both for proteomics studies where isoforms and modified forms must be differentiated, and for studies of biological pathways, where representations need to take account of the different ways in which the cascade of events may depend on specific protein modifications. 
PRO provides a framework for the formal representation of protein classes and protein forms in the OBO Foundry. It is designed to enable data retrieval and integration and machine reasoning at the molecular level of proteins, thereby facilitating cross-species comparisons, pathway analysis, disease modeling and the generation of new hypotheses.

  14. Intelligent search in Big Data

    NASA Astrophysics Data System (ADS)

    Birialtsev, E.; Bukharaev, N.; Gusenkov, A.

    2017-10-01

    An approach to data integration, aimed at ontology-based intelligent search in Big Data, is considered for the case when information objects are represented in the form of relational databases (RDBs), structurally marked by their schemas. The source of information for constructing an ontology and, later on, organizing the search is text in natural language, treated as semi-structured data. For the RDBs, these are comments on the names of tables and their attributes. A formal definition of the RDB integration model in terms of ontologies is given. Within the framework of the model, a universal RDB representation ontology, an oil-production subject domain ontology and a linguistic thesaurus of the subject domain language are built. A technique for automatic SQL query generation for subject domain specialists is proposed. On this basis, an information system for the RDBs of the TATNEFT oil-producing company was implemented. Exploitation of the system showed good relevance for the majority of queries.
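    The automatic SQL generation technique described here can be sketched as a lookup step: a linguistic thesaurus maps subject-domain terms to schema elements (derived from the comments on table and attribute names), and a SELECT statement is assembled from the mapping. The schema names and thesaurus entries below are invented for illustration.

```python
# Hedged sketch of thesaurus-driven SQL generation: domain terms are
# resolved to (table, column) pairs harvested from schema comments,
# then a query string is assembled. All names are hypothetical.

THESAURUS = {
    "well output": ("production", "daily_volume"),
    "well name":   ("wells", "name"),
}

def generate_sql(term, where=None):
    """Build a SELECT statement for a natural-language domain term."""
    table, column = THESAURUS[term]
    sql = f"SELECT {column} FROM {table}"
    if where:
        sql += f" WHERE {where}"
    return sql

print(generate_sql("well output", where="well_id = 42"))
```

A real system would also validate the WHERE clause and use parameterized queries; the point here is only the term-to-schema resolution step.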

  15. Interoperability between phenotype and anatomy ontologies.

    PubMed

    Hoehndorf, Robert; Oellrich, Anika; Rebholz-Schuhmann, Dietrich

    2010-12-15

    Phenotypic information is important for the analysis of the molecular mechanisms underlying disease. A formal ontological representation of phenotypic information can help to identify, interpret and infer phenotypic traits based on experimental findings. The methods that are currently used to represent data and information about phenotypes fail to make the semantics of the phenotypic trait explicit and do not interoperate with ontologies of anatomy and other domains. Therefore, valuable resources for the analysis of phenotype studies remain unconnected and inaccessible to automated analysis and reasoning. We provide a framework to formalize phenotypic descriptions and make their semantics explicit. Based on this formalization, we provide the means to integrate phenotypic descriptions with ontologies of other domains, in particular anatomy and physiology. We demonstrate how our framework leads to the capability to represent disease phenotypes, perform powerful queries that were not possible before and infer additional knowledge. http://bioonto.de/pmwiki.php/Main/PheneOntology.

  16. Evaluation of research in biomedical ontologies

    PubMed Central

    Dumontier, Michel; Gkoutos, Georgios V.

    2013-01-01

    Ontologies are now pervasive in biomedicine, where they serve as a means to standardize terminology, to enable access to domain knowledge, to verify data consistency and to facilitate integrative analyses over heterogeneous biomedical data. For this purpose, research on biomedical ontologies applies theories and methods from diverse disciplines such as information management, knowledge representation, cognitive science, linguistics and philosophy. Depending on the application in which an ontology is used, the evaluation of research in biomedical ontologies must follow different strategies. Here, we provide a classification of research problems in which ontologies are being applied, focusing on the use of ontologies in basic and translational research, and we demonstrate how research results in biomedical ontologies can be evaluated. The evaluation strategies depend on the desired application and measure the success of using an ontology for a particular biomedical problem. For many applications, the success can be quantified, thereby facilitating the objective evaluation and comparison of research in biomedical ontology. The objective, quantifiable comparison of research results based on scientific applications opens up the possibility of systematically improving the utility of ontologies in biomedical research. PMID:22962340

  17. A viewpoint-based case-based reasoning approach utilising an enterprise architecture ontology for experience management

    NASA Astrophysics Data System (ADS)

    Martin, Andreas; Emmenegger, Sandro; Hinkelmann, Knut; Thönssen, Barbara

    2017-04-01

    The accessibility of project knowledge obtained from experiences is an important and crucial issue in enterprises. The information needed about a project can differ from one person to another, depending on the roles he or she holds. Therefore, a new ontology-based case-based reasoning (OBCBR) approach that utilises an enterprise ontology is introduced in this article to improve the accessibility of this project knowledge. Utilising an enterprise ontology improves the case-based reasoning (CBR) system through the systematic inclusion of enterprise-specific knowledge. This enterprise-specific knowledge is captured using the overall structure given by the enterprise ontology named ArchiMEO, which is a partial ontological realisation of the enterprise architecture framework (EAF) ArchiMate. This ontological representation, containing historical cases and specific enterprise domain knowledge, is applied in a new OBCBR approach. To support the different information needs of different stakeholders, this OBCBR approach has been built in such a way that different views, viewpoints, concerns and stakeholders can be considered. This is realised using a case viewpoint model derived from the ISO/IEC/IEEE 42010 standard. The introduced approach was implemented as a demonstrator and evaluated using an application case elicited from a business partner in a Swiss research project.

  18. A Knowledge Engineering Approach to Develop Domain Ontology

    ERIC Educational Resources Information Center

    Yun, Hongyan; Xu, Jianliang; Xiong, Jing; Wei, Moji

    2011-01-01

    Ontologies are one of the most popular and widespread means of knowledge representation and reuse. A few research groups have proposed a series of methodologies for developing their own standard ontologies. However, because this ontological construction concerns special fields, there is no standard method to build domain ontology. In this paper,…

  19. Ontology-based systematic representation and analysis of traditional Chinese drugs against rheumatism.

    PubMed

    Liu, Qingping; Wang, Jiahao; Zhu, Yan; He, Yongqun

    2017-12-21

    Rheumatism represents any disease condition marked with inflammation and pain in the joints, muscles, or connective tissues. Many traditional Chinese drugs have been used for a long time to treat rheumatism. However, a comprehensive information source for these drugs is still missing, and their anti-rheumatism mechanisms remain unclear. An ontology for anti-rheumatism traditional Chinese drugs would strongly support the representation, analysis, and understanding of these drugs. In this study, we first systematically collected reported information about 26 traditional Chinese decoction pieces drugs, including their chemical ingredients and adverse events (AEs). By mostly reusing terms from existing ontologies (e.g., TCMDPO for traditional Chinese medicines, NCBITaxon for taxonomy, ChEBI for chemical elements, and OAE for adverse events) and making semantic axioms linking different entities, we developed the Ontology of Chinese Medicine for Rheumatism (OCMR) that includes over 3000 class terms. Our OCMR analysis found that these 26 traditional Chinese decoction pieces are made from anatomic entities (e.g., root and stem) of 3 Bilateria animals and 23 Mesangiospermae plants. Anti-inflammatory and antineoplastic roles are important for anti-rheumatism drugs. Using the total of 555 unique ChEBI chemical entities identified from these drugs, our ChEBI-based classification analysis identified 18 anti-inflammatory chemicals, 33 antineoplastic chemicals, and 9 chemicals (including 3 diterpenoids and 3 triterpenoids) having both anti-inflammatory and antineoplastic roles. Furthermore, our study detected 22 diterpenoids and 23 triterpenoids, including 16 pentacyclic triterpenoids that are likely bioactive against rheumatism. Six drugs were found to be associated with 184 unique AEs, including three AEs (i.e., dizziness, nausea and vomiting, and anorexia) each associated with 5 drugs.
Several chemical entities are classified as neurotoxins (e.g., diethyl phthalate) and allergens (e.g., eugenol), which may explain the formation of some TCD AEs. The OCMR could be efficiently queried for useful information using SPARQL scripts. The OCMR ontology was developed to systematically represent 26 traditional anti-rheumatism Chinese drugs and their related information. The OCMR analysis identified possible anti-rheumatism and AE mechanisms of these drugs. Our novel ontology-based approach can also be applied to systematic representation and analysis of other traditional Chinese drugs.
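    A SPARQL query of the kind mentioned in this record might take the following general shape. The prefix, property names, and class name below are hypothetical illustrations, not the actual OCMR IRIs.

```sparql
# Hypothetical sketch: retrieve drugs whose chemical ingredients
# have an anti-inflammatory role. IRIs are invented for illustration.
PREFIX ex: <http://example.org/ocmr#>

SELECT ?drug ?chemical
WHERE {
  ?drug     ex:hasIngredient ?chemical .
  ?chemical ex:hasRole       ex:anti_inflammatory_role .
}
```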

  20. Using ontologies to model human navigation behavior in information networks: A study based on Wikipedia.

    PubMed

    Lamprecht, Daniel; Strohmaier, Markus; Helic, Denis; Nyulas, Csongor; Tudorache, Tania; Noy, Natalya F; Musen, Mark A

    The need to examine the behavior of different user groups is a fundamental requirement when building information systems. In this paper, we present Ontology-based Decentralized Search (OBDS), a novel method to model the navigation behavior of users equipped with different types of background knowledge. Ontology-based Decentralized Search combines decentralized search, an established method for navigation in social networks, and ontologies to model navigation behavior in information networks. The method uses ontologies as an explicit representation of background knowledge to inform the navigation process and guide it towards navigation targets. By using different ontologies, users equipped with different types of background knowledge can be represented. We demonstrate our method using four biomedical ontologies and their associated Wikipedia articles. We compare our simulation results with baseline approaches and with results obtained from a user study. We find that our method produces click paths that have properties similar to those originating from human navigators. The results suggest that our method can be used to model human navigation behavior in systems that are based on information networks, such as Wikipedia. This paper makes the following contributions: (i) To the best of our knowledge, this is the first work to demonstrate the utility of ontologies in modeling human navigation and (ii) it yields new insights and understanding about the mechanisms of human navigation in information networks.

  1. Using ontologies to model human navigation behavior in information networks: A study based on Wikipedia

    PubMed Central

    Lamprecht, Daniel; Strohmaier, Markus; Helic, Denis; Nyulas, Csongor; Tudorache, Tania; Noy, Natalya F.; Musen, Mark A.

    2015-01-01

    The need to examine the behavior of different user groups is a fundamental requirement when building information systems. In this paper, we present Ontology-based Decentralized Search (OBDS), a novel method to model the navigation behavior of users equipped with different types of background knowledge. Ontology-based Decentralized Search combines decentralized search, an established method for navigation in social networks, and ontologies to model navigation behavior in information networks. The method uses ontologies as an explicit representation of background knowledge to inform the navigation process and guide it towards navigation targets. By using different ontologies, users equipped with different types of background knowledge can be represented. We demonstrate our method using four biomedical ontologies and their associated Wikipedia articles. We compare our simulation results with baseline approaches and with results obtained from a user study. We find that our method produces click paths that have properties similar to those originating from human navigators. The results suggest that our method can be used to model human navigation behavior in systems that are based on information networks, such as Wikipedia. This paper makes the following contributions: (i) To the best of our knowledge, this is the first work to demonstrate the utility of ontologies in modeling human navigation and (ii) it yields new insights and understanding about the mechanisms of human navigation in information networks. PMID:26568745
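    Decentralized search, the navigation model this record builds on, can be sketched as a greedy walk: at each step the simulated navigator follows the neighbor whose concept lies closest to the target under a background-knowledge distance. The network, mini ontology, and distance proxy below are all invented for illustration; they stand in for the ontologies the paper uses.

```python
# Toy sketch of ontology-guided decentralized search: greedily hop to
# the neighbor closest to the target according to a background ontology.

# invented ontology: concept -> depth in a small hierarchy
ONTOLOGY_DEPTH = {"disease": 0, "infection": 1, "influenza": 2, "measles": 2}

def ontology_distance(a, b):
    # crude proxy for ontological closeness: depth difference,
    # plus 1 whenever the concepts differ at all
    return abs(ONTOLOGY_DEPTH[a] - ONTOLOGY_DEPTH[b]) + (a != b)

# invented information network (e.g., article links)
NETWORK = {
    "disease":   ["infection"],
    "infection": ["disease", "influenza", "measles"],
    "influenza": ["infection"],
    "measles":   ["infection"],
}

def decentralized_search(start, target, max_hops=10):
    path, node = [start], start
    for _ in range(max_hops):
        if node == target:
            return path
        # greedy choice guided by the background ontology
        node = min(NETWORK[node], key=lambda n: ontology_distance(n, target))
        path.append(node)
    return path

print(decentralized_search("disease", "measles"))
```

Swapping in a different ontology (a different `ontology_distance`) models a navigator with different background knowledge, which is the core idea of OBDS.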

  2. Ontology-based Vaccine and Drug Adverse Event Representation and Theory-guided Systematic Causal Network Analysis toward Integrative Pharmacovigilance Research.

    PubMed

    He, Yongqun

    2016-06-01

    Compared with controlled terminologies (e.g., MedDRA, CTCAE, and WHO-ART), the community-based Ontology of AEs (OAE) has many advantages in adverse event (AE) classifications. The OAE-derived Ontology of Vaccine AEs (OVAE) and Ontology of Drug Neuropathy AEs (ODNAE) serve as AE knowledge bases and support data integration and analysis. The Immune Response Gene Network Theory explains molecular mechanisms of vaccine-related AEs. The OneNet Theory of Life treats the whole process of a life of an organism as a single complex and dynamic network (i.e., OneNet). A new "OneNet effectiveness" tenet is proposed here to expand the OneNet theory. Derived from the OneNet theory, the author hypothesizes that one human uses one single genotype-rooted mechanism to respond to different vaccinations and drug treatments, and experimentally identified mechanisms are manifestations of the OneNet blueprint mechanism under specific conditions. The theories and ontologies interact together as semantic frameworks to support integrative pharmacovigilance research.

  3. CELDA – an ontology for the comprehensive representation of cells in complex systems

    PubMed Central

    2013-01-01

    Background: The need for detailed description and modeling of cells drives the continuous generation of large and diverse datasets. Unfortunately, there exists no systematic and comprehensive way to organize these datasets and their information. CELDA (Cell: Expression, Localization, Development, Anatomy) is a novel ontology for the association of primary experimental data and derived knowledge to various types of cells of organisms. Results: CELDA is a structure that can help to categorize cell types based on species, anatomical localization, subcellular structures, developmental stages and origin. It targets cells in vitro as well as in vivo. Instead of developing a novel ontology from scratch, we carefully designed CELDA in such a way that existing ontologies were integrated as much as possible, and only minimal extensions were performed to cover those classes and areas not present in any existing model. Currently, ten existing ontologies and models are linked to CELDA through the top-level ontology BioTop. Together with 15,439 newly created classes, CELDA contains more than 196,000 classes and 233,670 relationship axioms. CELDA is primarily used as a representational framework for modeling, analyzing and comparing cells within and across species in CellFinder, a web-based data repository on cells (http://cellfinder.org). Conclusions: CELDA can semantically link diverse types of information about cell types. It has been integrated within the research platform CellFinder, where it exemplarily relates cell types from liver and kidney during development on the one hand and anatomical locations in humans on the other, integrating information on all spatial and temporal stages. CELDA is available from the CellFinder website: http://cellfinder.org/about/ontology. PMID:23865855

  4. Incremental Ontology-Based Extraction and Alignment in Semi-structured Documents

    NASA Astrophysics Data System (ADS)

    Thiam, Mouhamadou; Bennacer, Nacéra; Pernelle, Nathalie; Lô, Moussa

    SHIRI is an ontology-based system for the integration of semi-structured documents related to a specific domain. The system's purpose is to allow users to access relevant parts of documents as answers to their queries. SHIRI uses RDF/OWL for the representation of resources and SPARQL for querying them. It relies on an automatic, unsupervised and ontology-driven approach for the extraction, alignment and semantic annotation of tagged elements of documents. In this paper, we focus on the Extract-Align algorithm, which exploits a set of named-entity and term patterns to extract term candidates to be aligned with the ontology. It proceeds incrementally in order to populate the ontology with terms describing instances of the domain and to reduce access to external resources such as the Web. We experiment with it on an HTML corpus related to calls for papers in computer science, and the results we obtain are very promising. They show how the incremental behaviour of the Extract-Align algorithm enriches the ontology and increases the number of terms (or named entities) aligned directly with the ontology.

  5. Formalizing nursing knowledge: from theories and models to ontologies.

    PubMed

    Peace, Jane; Brennan, Patricia Flatley

    2009-01-01

    Knowledge representation in nursing is poised to address the depth of nursing knowledge about the specific phenomena of importance to nursing. Nursing theories and models may provide a starting point for making this knowledge explicit in representations. We combined knowledge building methods from nursing and ontology design methods from biomedical informatics to create a nursing representation of family health history. Our experience provides an example of how knowledge representations may be created to facilitate electronic support for nursing practice and knowledge development.

  6. Modular Knowledge Representation and Reasoning in the Semantic Web

    NASA Astrophysics Data System (ADS)

    Serafini, Luciano; Homola, Martin

    Construction of modular ontologies by combining different modules is becoming a necessity in ontology engineering in order to cope with the increasing complexity of ontologies and the domains they represent. The modular ontology approach takes inspiration from software engineering, where modularization is a widely acknowledged feature. Distributed reasoning is the other side of the coin of modular ontologies: given an ontology comprising a set of modules, it is desirable to perform reasoning by combining multiple reasoning processes performed locally on each of the modules. In the last ten years, a number of approaches for combining logics have been developed in order to formalize modular ontologies. In this chapter, we survey and compare the main formalisms for modular ontologies and distributed reasoning in the Semantic Web. We select four formalisms built on the formal logical grounds of Description Logics: Distributed Description Logics, ℰ-connections, Package-based Description Logics and Integrated Distributed Description Logics. We concentrate on the expressivity and distinctive modeling features of each framework. We also discuss the reasoning capabilities of each framework.

  7. Ontology-based knowledge representation for resolution of semantic heterogeneity in GIS

    NASA Astrophysics Data System (ADS)

    Liu, Ying; Xiao, Han; Wang, Limin; Han, Jialing

    2017-07-01

    Lack of semantic interoperability in geographical information systems has been identified as the main obstacle to data sharing and database integration. New methods are needed to overcome the problems of semantic heterogeneity, and ontologies are considered one approach to supporting geographic information sharing. This paper presents an ontology-driven integration approach to help in detecting and possibly resolving semantic conflicts. Its originality is that each data source participating in the integration process carries an ontology that defines the meaning of its own data. This approach automates the integration through a semantic integration algorithm. Finally, land classification in a field GIS is described as an example.
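The conflict-detection idea can be sketched minimally. This is a hypothetical illustration, not the paper's algorithm: each source's local ontology is reduced to a mapping from field names to shared-domain concepts, and the integrator flags fields whose concepts disagree across sources.

```python
# Minimal sketch of ontology-driven conflict detection (all names are
# invented): each data source carries a local ontology mapping its field
# names to shared-domain concepts; a field is flagged when two sources
# attach different concepts to the same name.

def detect_conflicts(source_ontologies):
    """source_ontologies: {source: {field_name: shared_concept}}"""
    seen, conflicts = {}, []
    for source, mapping in source_ontologies.items():
        for field, concept in mapping.items():
            if field in seen and seen[field][1] != concept:
                conflicts.append((field, seen[field], (source, concept)))
            else:
                seen.setdefault(field, (source, concept))
    return conflicts

sources = {
    "gis_a": {"cover": "LandCoverClass", "elev": "ElevationMetres"},
    "gis_b": {"cover": "CloudCoverPercent", "elev": "ElevationMetres"},
}
conflicts = detect_conflicts(sources)
# "cover" is flagged: the same field name denotes different concepts,
# while "elev" integrates silently because both sources agree
```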

  8. Using Ontologies for Knowledge Management: An Information Systems Perspective.

    ERIC Educational Resources Information Center

    Jurisica, Igor; Mylopoulos, John; Yu, Eric

    1999-01-01

    Surveys some of the basic concepts that have been used in computer science for the representation of knowledge and summarizes some of their advantages and drawbacks. Relates these techniques to information sciences theory and practice. Concepts are classified in four broad ontological categories: static ontology, dynamic ontology, intentional…

  9. Assessment Applications of Ontologies.

    ERIC Educational Resources Information Center

    Chung, Gregory K. W. K.; Niemi, David; Bewley, William L.

    This paper discusses the use of ontologies and their applications to assessment. An ontology provides a shared and common understanding of a domain that can be communicated among people and computational systems. The ontology captures one or more experts' conceptual representation of a domain expressed in terms of concepts and the relationships…

  10. OWL representation of the geologic timescale implementing stratigraphic best practice

    NASA Astrophysics Data System (ADS)

    Cox, S. J.

    2011-12-01

    The geologic timescale is a cornerstone of the earth sciences. Versions are available from many sources, with the following being of particular interest: (i) the official International Stratigraphic Chart (ISC) is maintained by the International Commission on Stratigraphy (ICS), following principles developed over the last 40 years; ICS provides the data underlying the chart as part of a specialized software package, and the chart itself as a PDF using the standard colours; (ii) ITC Enschede has developed a representation of the timescale as a thesaurus in SKOS, used in a Web Map Service delivery system; (iii) JPL's SWEET ontology includes a geologic timescale that takes full advantage of the capabilities of OWL. However, each of these has limitations: the ISC is hampered by its incompatibility with web technologies; SKOS, while it supports multilingual labelling, does not adequately support timescale semantics, in particular because it lacks ordering relationships; and the SWEET version (as of version 2) is not fully aligned with the model used by ICS, in particular not recognizing the role of the Global Boundary Stratotype Section and Point (GSSP), and is distributed as static documents rather than through a dynamic API using SPARQL. 
The representation presented in this paper overcomes all of these limitations as follows: - the timescale model is formulated as an OWL ontology - the ontology is directly derived from the UML representation of the ICS best practice proposed by Cox & Richard [2005], and subsequently included as the Geologic Timescale package in GeoSciML (http://www.geosciml.org); this includes links to GSSPs as per the ICS process - key properties in the ontology are also asserted to be subProperties of SKOS properties (topConcept and broader/narrower relations) in order to support SKOS-based queries; SKOS labelling is used to support multi-lingual naming and synonyms - the International Stratigraphic Chart is implemented as a set of instances of classes from the ontology, and published through a SPARQL end-point - the elements of the Stratigraphic chart are linked to the corresponding elements in SWEET (Raskin et al., 2011) and DBpedia to support traceability between different commonly accessed representations. The ontology builds on standard geospatial information models, including the Observations and Measurements model (ISO 19156), and GeoSciML. This allows the ages given in the chart to be linked to the evidence basis found in the associated GeoSciML features.

  11. Onto-clust--a methodology for combining clustering analysis and ontological methods for identifying groups of comorbidities for developmental disorders.

    PubMed

    Peleg, Mor; Asbeh, Nuaman; Kuflik, Tsvi; Schertz, Mitchell

    2009-02-01

    Children with developmental disorders usually exhibit multiple developmental problems (comorbidities). Hence, diagnosis needs to revolve around groups of developmental disorders. Our objective is to systematically identify developmental disorder groups and represent them in an ontology. We developed a methodology that combines two methods: (1) a literature-based ontology that we created, which represents developmental disorders and potential developmental disorder groups, and (2) clustering for detecting comorbid developmental disorders in patient data. The ontology is used to interpret and improve clustering results, and the clustering results are used to validate the ontology and suggest directions for its development. We evaluated our methodology by applying it to data from 1175 patients of a child development clinic. We demonstrated that the ontology improves clustering results, bringing them closer to an expert-generated gold standard. We have shown that our methodology successfully combines an ontology with a clustering method to support the systematic identification and representation of developmental disorder groups.
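The interpretation step, in which ontology-defined groups are used to make sense of data-driven clusters, can be sketched as follows. The group names and disorder labels are invented for illustration; the paper's ontology and clustering method are far richer.

```python
# Sketch of the ontology-guided interpretation step (invented groups and
# disorder labels, not the paper's ontology): a data-driven cluster of
# co-occurring disorders is labelled with the candidate comorbidity group
# that covers the largest share of its members.

ONTOLOGY_GROUPS = {
    "motor": {"DCD", "dyspraxia", "hypotonia"},
    "language": {"SLI", "dyslexia", "phonological disorder"},
}

def label_cluster(cluster):
    """Return (best_group, coverage) for a set of disorder labels."""
    best, coverage = None, 0.0
    for group, members in ONTOLOGY_GROUPS.items():
        c = len(cluster & members) / len(cluster)
        if c > coverage:
            best, coverage = group, c
    return best, coverage

group, cov = label_cluster({"DCD", "hypotonia", "dyslexia"})
# the "motor" group explains two of the three cluster members, so it is
# chosen; low overall coverage would instead suggest revising the ontology
```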

  12. Use artificial neural network to align biological ontologies.

    PubMed

    Huang, Jingshan; Dang, Jiangbo; Huhns, Michael N; Zheng, W Jim

    2008-09-16

    Being formal, declarative knowledge representation models, ontologies help to address the problem of imprecise terminologies in biological and biomedical research. However, ontologies constructed under the auspices of the Open Biomedical Ontologies (OBO) group have exhibited a great deal of variety, because different parties can design ontologies according to their own conceptual views of the world. It is therefore becoming critical to align ontologies from different parties. During automated/semi-automated alignment across biological ontologies, different semantic aspects, i.e., concept name, concept properties, and concept relationships, contribute to different degrees to alignment results. Therefore, a vector of weights must be assigned to these semantic aspects. It is not trivial to determine what those weights should be, and current methodologies depend heavily on human heuristics. In this paper, we take an artificial neural network approach to learn and adjust these weights, and thereby support a new ontology alignment algorithm, customized for biological ontologies, with the purpose of avoiding some disadvantages of both rule-based and learning-based alignment algorithms. This approach has been evaluated by aligning two real-world biological ontologies, whose features include large file sizes, very few instances, concept names in numerical strings, and others. The promising experimental results verify our proposed hypothesis, i.e., that three weights for semantic aspects learned from a subset of concepts are representative of all concepts in the same ontology. Therefore, our method represents a large step towards automating biological ontology alignment.
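The core idea, learning how much each semantic aspect should contribute, can be conveyed with a much simpler learner than the paper's neural network. The sketch below uses a plain delta rule on labelled concept pairs; the example data and function names are invented.

```python
# Hedged sketch of the aspect-weighting idea (not the paper's network):
# overall similarity of a concept pair is a weighted sum of three aspect
# similarities (name, properties, relationships), and the weights are
# nudged by a simple delta rule on labelled matched/unmatched pairs.

def train_weights(examples, epochs=200, lr=0.1):
    """examples: [((name_sim, prop_sim, rel_sim), matched_flag)]"""
    w = [1/3, 1/3, 1/3]                      # start with equal weights
    for _ in range(epochs):
        for aspects, target in examples:
            pred = sum(wi * a for wi, a in zip(w, aspects))
            err = target - pred
            w = [wi + lr * err * a for wi, a in zip(w, aspects)]
    return w

examples = [
    ((0.90, 0.50, 0.50), 1.0),   # matched pairs: name similarity is high
    ((0.80, 0.40, 0.60), 1.0),
    ((0.95, 0.60, 0.40), 1.0),
    ((0.10, 0.50, 0.50), 0.0),   # unmatched pairs: property/relationship
    ((0.20, 0.60, 0.40), 0.0),   # similarities are uninformative here
    ((0.15, 0.40, 0.60), 0.0),
]
weights = train_weights(examples)
# on this toy data the learned name-aspect weight dominates, mirroring
# the paper's point that the right weighting is data-dependent and
# should be learned rather than hand-tuned
```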

  13. Semantics-driven modelling of user preferences for information retrieval in the biomedical domain.

    PubMed

    Gladun, Anatoly; Rogushina, Julia; Valencia-García, Rafael; Béjar, Rodrigo Martínez

    2013-03-01

    Large amounts of biomedical and genomic data are currently available on the Internet. However, these data are distributed across heterogeneous biological information sources, with little or even no organization. Semantic technologies provide a consistent and reliable basis with which to confront the challenges involved in the organization, manipulation and visualization of data and knowledge. One of the knowledge representation techniques used in semantic processing is the ontology, commonly defined as a formal and explicit specification of a shared conceptualization of a domain of interest. The work presented here introduces a set of interoperable algorithms that can use domain and ontological information to improve information-retrieval processes. This work presents an ontology-based information-retrieval system for the biomedical domain. The system, with which the experiments described in this paper have been carried out, is based on the use of domain ontologies for the creation and normalization of lightweight ontologies that represent user preferences in a given domain in order to improve information-retrieval processes.
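One way such preference ontologies improve retrieval is query expansion: user terms are normalised against the domain ontology, and the query gains the ontology's synonyms and narrower terms. The sketch below is a hypothetical illustration with an invented two-concept domain, not the paper's system.

```python
# Illustrative sketch (invented domain, not the paper's algorithms): user
# preference terms are normalised to domain-ontology concepts, and the
# query is expanded with synonyms and narrower terms of any preferred
# concept it already touches.

DOMAIN = {
    "neoplasm": {"synonyms": {"tumor", "tumour"}, "narrower": {"carcinoma"}},
    "gene expression": {"synonyms": {"transcription level"}, "narrower": set()},
}

def normalise(term):
    """Map a free-text term to its domain concept, or None."""
    term = term.lower()
    for concept, info in DOMAIN.items():
        if term == concept or term in info["synonyms"]:
            return concept
    return None

def expand_query(query_terms, preferences):
    expanded = set(query_terms)
    query_concepts = {normalise(t) or t for t in query_terms}
    for pref in preferences:
        concept = normalise(pref)
        if concept and concept in query_concepts:
            expanded |= DOMAIN[concept]["synonyms"] | DOMAIN[concept]["narrower"]
    return expanded

expanded = expand_query({"tumor"}, ["Tumour"])
# the query gains "tumour" and "carcinoma" alongside the original "tumor"
```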

  14. The environment ontology in 2016: bridging domains with increased scope, semantic density, and interoperation

    DOE PAGES

    Buttigieg, Pier Luigi; Pafilis, Evangelos; Lewis, Suzanna E.; ...

    2016-09-23

    Background: The Environment Ontology (ENVO; http://www.environmentontology.org/), first described in 2013, is a resource and research target for the semantically controlled description of environmental entities. The ontology's initial aim was the representation of the biomes, environmental features, and environmental materials pertinent to genomic and microbiome-related investigations. However, the need for environmental semantics is common to a multitude of fields, and ENVO's use has steadily grown since its initial description. We have thus expanded, enhanced, and generalised the ontology to support its increasingly diverse applications. Methods: We have updated our development suite to promote expressivity, consistency, and speed: we now develop ENVO in the Web Ontology Language (OWL) and employ templating methods to accelerate class creation. We have also taken steps to better align ENVO with the Open Biological and Biomedical Ontologies (OBO) Foundry principles and interoperate with existing OBO ontologies. Further, we applied text-mining approaches to extract habitat information from the Encyclopedia of Life and automatically create experimental habitat classes within ENVO. Results: Relative to its state in 2013, ENVO's content, scope, and implementation have been enhanced and much of its existing content revised for improved semantic representation. ENVO now offers representations of habitats, environmental processes, anthropogenic environments, and entities relevant to environmental health initiatives and the global Sustainable Development Agenda for 2030. Several branches of ENVO have been used to incubate and seed new ontologies in previously unrepresented domains such as food and agronomy. The current release version of the ontology, in OWL format, is available at http://purl.obolibrary.org/obo/envo.owl. 
Conclusions: ENVO has been shaped into an ontology which bridges multiple domains including biomedicine, natural and anthropogenic ecology, 'omics, and socioeconomic development. Through continued interactions with our users and partners, particularly those performing data archiving and synthesis, we anticipate that ENVO's growth will accelerate in 2017. As always, we invite further contributions and collaboration to advance the semantic representation of the environment, ranging from geographic features and environmental materials, across habitats and ecosystems, to everyday objects in household settings.
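The templating approach mentioned in the Methods can be illustrated with a minimal sketch. The pattern fields, the wording, and the "environmental system" parent below are invented placeholders, not ENVO's actual design patterns: one pattern is filled per mined habitat term to produce a candidate class record.

```python
# Hypothetical illustration of template-driven class creation (the
# template text is invented, not an ENVO pattern): each mined habitat
# term is slotted into a fixed pattern, yielding a uniform label,
# definition, and parent for every generated candidate class.

TEMPLATE = {
    "label": "{habitat} environment",
    "definition": "An environmental system determined by a {habitat}.",
    "parent": "environmental system",
}

def instantiate(habitat):
    """Fill every field of the pattern for one mined habitat term."""
    return {field: text.format(habitat=habitat)
            for field, text in TEMPLATE.items()}

candidates = [instantiate(h) for h in ("estuary", "kelp forest")]
# each candidate class carries a templated label, definition, and parent,
# which keeps large batches of generated classes structurally consistent
```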

  15. The environment ontology in 2016: bridging domains with increased scope, semantic density, and interoperation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Buttigieg, Pier Luigi; Pafilis, Evangelos; Lewis, Suzanna E.

    Background: The Environment Ontology (ENVO; http://www.environmentontology.org/), first described in 2013, is a resource and research target for the semantically controlled description of environmental entities. The ontology's initial aim was the representation of the biomes, environmental features, and environmental materials pertinent to genomic and microbiome-related investigations. However, the need for environmental semantics is common to a multitude of fields, and ENVO's use has steadily grown since its initial description. We have thus expanded, enhanced, and generalised the ontology to support its increasingly diverse applications. Methods: We have updated our development suite to promote expressivity, consistency, and speed: we now develop ENVO in the Web Ontology Language (OWL) and employ templating methods to accelerate class creation. We have also taken steps to better align ENVO with the Open Biological and Biomedical Ontologies (OBO) Foundry principles and interoperate with existing OBO ontologies. Further, we applied text-mining approaches to extract habitat information from the Encyclopedia of Life and automatically create experimental habitat classes within ENVO. Results: Relative to its state in 2013, ENVO's content, scope, and implementation have been enhanced and much of its existing content revised for improved semantic representation. ENVO now offers representations of habitats, environmental processes, anthropogenic environments, and entities relevant to environmental health initiatives and the global Sustainable Development Agenda for 2030. Several branches of ENVO have been used to incubate and seed new ontologies in previously unrepresented domains such as food and agronomy. The current release version of the ontology, in OWL format, is available at http://purl.obolibrary.org/obo/envo.owl. 
Conclusions: ENVO has been shaped into an ontology which bridges multiple domains including biomedicine, natural and anthropogenic ecology, 'omics, and socioeconomic development. Through continued interactions with our users and partners, particularly those performing data archiving and synthesis, we anticipate that ENVO's growth will accelerate in 2017. As always, we invite further contributions and collaboration to advance the semantic representation of the environment, ranging from geographic features and environmental materials, across habitats and ecosystems, to everyday objects in household settings.

  16. The environment ontology in 2016: bridging domains with increased scope, semantic density, and interoperation.

    PubMed

    Buttigieg, Pier Luigi; Pafilis, Evangelos; Lewis, Suzanna E; Schildhauer, Mark P; Walls, Ramona L; Mungall, Christopher J

    2016-09-23

    The Environment Ontology (ENVO; http://www.environmentontology.org/ ), first described in 2013, is a resource and research target for the semantically controlled description of environmental entities. The ontology's initial aim was the representation of the biomes, environmental features, and environmental materials pertinent to genomic and microbiome-related investigations. However, the need for environmental semantics is common to a multitude of fields, and ENVO's use has steadily grown since its initial description. We have thus expanded, enhanced, and generalised the ontology to support its increasingly diverse applications. We have updated our development suite to promote expressivity, consistency, and speed: we now develop ENVO in the Web Ontology Language (OWL) and employ templating methods to accelerate class creation. We have also taken steps to better align ENVO with the Open Biological and Biomedical Ontologies (OBO) Foundry principles and interoperate with existing OBO ontologies. Further, we applied text-mining approaches to extract habitat information from the Encyclopedia of Life and automatically create experimental habitat classes within ENVO. Relative to its state in 2013, ENVO's content, scope, and implementation have been enhanced and much of its existing content revised for improved semantic representation. ENVO now offers representations of habitats, environmental processes, anthropogenic environments, and entities relevant to environmental health initiatives and the global Sustainable Development Agenda for 2030. Several branches of ENVO have been used to incubate and seed new ontologies in previously unrepresented domains such as food and agronomy. The current release version of the ontology, in OWL format, is available at http://purl.obolibrary.org/obo/envo.owl . ENVO has been shaped into an ontology which bridges multiple domains including biomedicine, natural and anthropogenic ecology, 'omics, and socioeconomic development. 
Through continued interactions with our users and partners, particularly those performing data archiving and synthesis, we anticipate that ENVO's growth will accelerate in 2017. As always, we invite further contributions and collaboration to advance the semantic representation of the environment, ranging from geographic features and environmental materials, across habitats and ecosystems, to everyday objects in household settings.

  17. The Gene Ontology of eukaryotic cilia and flagella.

    PubMed

    Roncaglia, Paola; van Dam, Teunis J P; Christie, Karen R; Nacheva, Lora; Toedt, Grischa; Huynen, Martijn A; Huntley, Rachael P; Gibson, Toby J; Lomax, Jane

    2017-01-01

    Recent research into ciliary structure and function provides important insights into inherited diseases termed ciliopathies and other cilia-related disorders. This wealth of knowledge needs to be translated into a computational representation to be fully exploitable by the research community. To this end, members of the Gene Ontology (GO) and SYSCILIA Consortia have worked together to improve representation of ciliary substructures and processes in GO. Members of the SYSCILIA and Gene Ontology Consortia suggested additions and changes to GO, to reflect new knowledge in the field. The project initially aimed to improve coverage of ciliary parts, and was then broadened to cilia-related biological processes. Discussions were documented in a public tracker. We engaged the broader cilia community via direct consultation and by referring to the literature. Ontology updates were implemented via ontology editing tools. So far, we have created or modified 127 GO terms representing parts and processes related to eukaryotic cilia/flagella or prokaryotic flagella. A growing number of biological pathways are known to involve cilia, and we continue to incorporate this knowledge in GO. The resulting expansion in GO allows more precise representation of experimentally derived knowledge, and SYSCILIA and GO biocurators have created 199 annotations to 50 human ciliary proteins. The revised ontology was also used to curate mouse proteins in a collaborative project. The revised GO and annotations, used in comparative 'before and after' analyses of representative ciliary datasets, improve enrichment results significantly. Our work has resulted in a broader and deeper coverage of ciliary composition and function. These improvements in ontology and protein annotation will benefit all users of GO enrichment analysis tools, as well as the ciliary research community, in areas ranging from microscopy image annotation to interpretation of high-throughput studies. 
We welcome feedback to further enhance the representation of cilia biology in GO.

  18. Ontology-based Vaccine and Drug Adverse Event Representation and Theory-guided Systematic Causal Network Analysis toward Integrative Pharmacovigilance Research

    PubMed Central

    He, Yongqun

    2016-01-01

    Compared with controlled terminologies (e.g., MedDRA, CTCAE, and WHO-ART), the community-based Ontology of AEs (OAE) has many advantages in adverse event (AE) classification. The OAE-derived Ontology of Vaccine AEs (OVAE) and Ontology of Drug Neuropathy AEs (ODNAE) serve as AE knowledge bases and support data integration and analysis. The Immune Response Gene Network Theory explains molecular mechanisms of vaccine-related AEs. The OneNet Theory of Life treats the whole life process of an organism as a single complex and dynamic network (i.e., OneNet). A new “OneNet effectiveness” tenet is proposed here to expand the OneNet theory. Derived from the OneNet theory, the author hypothesizes that one human uses one single genotype-rooted mechanism to respond to different vaccinations and drug treatments, and that experimentally identified mechanisms are manifestations of the OneNet blueprint mechanism under specific conditions. The theories and ontologies interact together as semantic frameworks to support integrative pharmacovigilance research. PMID:27458549

  19. ExO: An Ontology for Exposure Science

    EPA Science Inventory

    An ontology is a formal representation of knowledge within a domain and typically consists of classes, the properties of those classes, and the relationships between them. Ontologies are critically important for specifying data of interest in a consistent manner, thereby enablin...

  20. A fuzzy-ontology-oriented case-based reasoning framework for semantic diabetes diagnosis.

    PubMed

    El-Sappagh, Shaker; Elmogy, Mohammed; Riad, A M

    2015-11-01

    Case-based reasoning (CBR) is a problem-solving paradigm that uses past knowledge to interpret or solve new problems. It is suitable for experience-based problems that lack a complete theory. Building a semantically intelligent CBR that mimics expert thinking can solve many problems, especially medical ones. Knowledge-intensive CBR using formal ontologies is an evolution of this paradigm. Ontologies can be used for case representation and storage, and they can serve as background knowledge. Using standard medical ontologies, such as SNOMED CT, enhances interoperability and integration with health care systems. Moreover, utilizing vague or imprecise knowledge further improves the CBR's semantic effectiveness. This paper proposes a fuzzy-ontology-based CBR framework comprising a fuzzy case-base OWL2 ontology and a fuzzy semantic retrieval algorithm that handles many feature types. The framework is implemented and tested on the diabetes diagnosis problem, with the fuzzy ontology populated with 60 real diabetic cases. The effectiveness of the proposed approach is illustrated with a set of experiments and case studies. The resulting system can answer complex medical queries related to semantic understanding of medical concepts and handling of vague terms. The resulting fuzzy case-base ontology has 63 concepts, 54 (fuzzy) object properties, 138 (fuzzy) datatype properties, 105 fuzzy datatypes, and 2640 instances. The system achieves an accuracy of 97.67%. We compare our framework with existing CBR systems and a set of five machine-learning classifiers; our system outperforms all of them. Building an integrated CBR system can improve its performance: representing CBR knowledge using the fuzzy ontology, and building a case retrieval algorithm that treats different features differently, improves the accuracy of the resulting systems. Copyright © 2015 Elsevier B.V. All rights reserved.
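A compact sketch can convey what "a retrieval algorithm that treats different features differently" means in a fuzzy setting. This is not the paper's OWL2 implementation: the feature names, spreads, and weights below are invented, and triangular memberships stand in for the paper's fuzzy datatypes.

```python
# Sketch of fuzzy case retrieval (invented data, not the paper's OWL2
# system): numeric features are compared through a triangular fuzzy
# similarity, categorical features by equality, and cases are ranked by
# a weighted aggregate similarity to the query.

def tri_sim(a, b, spread):
    """Triangular fuzzy similarity: 1 when equal, 0 beyond `spread`."""
    return max(0.0, 1.0 - abs(a - b) / spread)

def case_similarity(query, case, spreads, weights):
    total, wsum = 0.0, 0.0
    for feat, w in weights.items():
        if feat in spreads:                    # numeric feature
            s = tri_sim(query[feat], case[feat], spreads[feat])
        else:                                  # categorical feature
            s = 1.0 if query[feat] == case[feat] else 0.0
        total += w * s
        wsum += w
    return total / wsum

cases = [
    {"id": 1, "glucose": 190, "bmi": 31.0, "sex": "f"},
    {"id": 2, "glucose": 110, "bmi": 24.0, "sex": "f"},
]
query = {"glucose": 185, "bmi": 30.0, "sex": "f"}
spreads = {"glucose": 50.0, "bmi": 10.0}
weights = {"glucose": 2.0, "bmi": 1.0, "sex": 0.5}
ranked = sorted(cases,
                key=lambda c: case_similarity(query, c, spreads, weights),
                reverse=True)
# case 1 ranks first: its glucose and BMI are fuzzily closest to the query
```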

  1. Alignment of the UMLS semantic network with BioTop: methodology and assessment.

    PubMed

    Schulz, Stefan; Beisswanger, Elena; van den Hoek, László; Bodenreider, Olivier; van Mulligen, Erik M

    2009-06-15

    For many years, the Unified Medical Language System (UMLS) semantic network (SN) has been used as an upper-level semantic framework for the categorization of terms from terminological resources in biomedicine. BioTop has recently been developed as an upper-level ontology for the biomedical domain. In contrast to the SN, it is founded upon strict ontological principles, using OWL DL as a formal representation language, which has become standard in the Semantic Web. In order to make logic-based reasoning available for the resources annotated or categorized with the SN, a mapping ontology was developed aligning the SN with BioTop. The theoretical foundations and the practical realization of the alignment are described, with a focus on the design decisions taken, the problems encountered and the adaptations of BioTop that became necessary. For evaluation purposes, UMLS concept pairs obtained from MEDLINE abstracts by a named entity recognition system were tested for possible semantic relationships. Furthermore, all semantic-type combinations that occur in the UMLS Metathesaurus were checked for satisfiability. The effort-intensive alignment process required major design changes and enhancements of BioTop and brought up several design errors that could be fixed. A comparison between a human curator and the ontology yielded only low agreement. Ontology reasoning was also used to successfully identify 133 inconsistent semantic-type combinations. BioTop, the OWL DL representation of the UMLS SN, and the mapping ontology are available at http://www.purl.org/biotop/.
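The satisfiability check described above can be illustrated in miniature. The categories and disjointness axioms below are invented stand-ins, not BioTop's actual class hierarchy: each semantic type is mapped to an upper-level class, and a type combination is flagged unsatisfiable when its mapped classes are declared disjoint.

```python
# Miniature sketch of checking semantic-type combinations for
# satisfiability (invented categories, not BioTop's axioms): a pair of
# types is unsatisfiable when their upper-level classes are disjoint.

UPPER_CLASS = {
    "Enzyme": "MaterialObject",
    "Cell": "MaterialObject",
    "Molecular Function": "Process",
    "Organism Attribute": "Quality",
}
DISJOINT = {
    frozenset({"MaterialObject", "Process"}),
    frozenset({"Process", "Quality"}),
    frozenset({"MaterialObject", "Quality"}),
}

def combination_satisfiable(type_a, type_b):
    """True unless the two types map to pairwise-disjoint upper classes."""
    pair = frozenset({UPPER_CLASS[type_a], UPPER_CLASS[type_b]})
    return pair not in DISJOINT

combination_satisfiable("Enzyme", "Cell")                 # both material
combination_satisfiable("Enzyme", "Molecular Function")   # disjoint classes
```

A DL reasoner performs the same style of check over the full axiom set, which is how the paper's 133 inconsistent combinations were found.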

  2. A knowledge-driven approach to biomedical document conceptualization.

    PubMed

    Zheng, Hai-Tao; Borchert, Charles; Jiang, Yong

    2010-06-01

    Biomedical document conceptualization is the process of clustering biomedical documents based on ontology-represented domain knowledge. The result of this process is the representation of the biomedical documents by a set of key concepts and their relationships. Most clustering methods cluster documents based on invariant domain knowledge. The objective of this work is to develop an effective method to cluster biomedical documents based on various user-specified ontologies, so that users can exploit the concept structures of documents more effectively. We develop a flexible framework to allow users to specify the knowledge bases, in the form of ontologies. Based on the user-specified ontologies, we develop a key concept induction algorithm, which uses latent semantic analysis to identify key concepts and cluster documents. A corpus-related ontology generation algorithm is developed to generate the concept structures of documents. Based on two biomedical datasets, we evaluate the proposed method and five other clustering algorithms. The clustering results of the proposed method outperform the five other algorithms in terms of key concept identification. With respect to the first biomedical dataset, our method achieves F-measure values of 0.7294 and 0.5294 based on the MeSH ontology and the Gene Ontology (GO), respectively. With respect to the second biomedical dataset, our method achieves F-measure values of 0.6751 and 0.6746 based on the MeSH ontology and GO, respectively. Both results outperform the five other algorithms in terms of F-measure. Based on the MeSH ontology and GO, the generated corpus-related ontologies show informative conceptual structures. The proposed method enables users to specify domain knowledge to exploit the conceptual structures of biomedical document collections. In addition, the proposed method is able to extract the key concepts and cluster the documents with relatively high precision. Copyright 2010 Elsevier B.V. All rights reserved.
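The first step of such a pipeline, representing each document as a vector over ontology concepts so that documents can be compared and clustered, can be sketched simply. The concept list is a hypothetical MeSH-like stand-in, and plain concept counts with cosine similarity replace the paper's latent semantic analysis.

```python
# Simplified sketch of concept-based document comparison (the paper uses
# latent semantic analysis; here plain concept-count vectors and cosine
# similarity stand in, over a hypothetical MeSH-like concept list).
import math

CONCEPTS = ["neoplasm", "apoptosis", "kinase", "membrane"]

def concept_vector(text):
    """Count occurrences of each ontology concept in the document."""
    words = text.lower().split()
    return [words.count(c) for c in CONCEPTS]

def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv) if nu and nv else 0.0

d1 = concept_vector("kinase signalling and apoptosis kinase assays")
d2 = concept_vector("kinase inhibitors block apoptosis")
d3 = concept_vector("membrane transport in the neoplasm")
# d1 is close to d2 and orthogonal to d3, so a clustering step would
# group d1 with d2; swapping in a different ontology changes the axes
# and therefore the clustering, which is the paper's central point
```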

  3. Ontology for assessment studies of human-computer-interaction in surgery.

    PubMed

    Machno, Andrej; Jannin, Pierre; Dameron, Olivier; Korb, Werner; Scheuermann, Gerik; Meixensberger, Jürgen

    2015-02-01

    New technologies improve modern medicine, but may result in unwanted consequences. Some occur due to inadequate human-computer-interactions (HCI). To assess these consequences, an investigation model was developed to facilitate the planning, implementation and documentation of studies for HCI in surgery. The investigation model was formalized in Unified Modeling Language and implemented as an ontology. Four different top-level ontologies were compared: Object-Centered High-level Reference, Basic Formal Ontology, General Formal Ontology (GFO) and Descriptive Ontology for Linguistic and Cognitive Engineering, according to the three major requirements of the investigation model: the domain-specific view, the experimental scenario and the representation of fundamental relations. Furthermore, this article emphasizes the distinction of "information model" and "model of meaning" and shows the advantages of implementing the model in an ontology rather than in a database. The results of the comparison show that GFO fits the defined requirements adequately: the domain-specific view and the fundamental relations can be implemented directly, only the representation of the experimental scenario requires minor extensions. The other candidates require wide-ranging extensions, concerning at least one of the major implementation requirements. Therefore, the GFO was selected to realize an appropriate implementation of the developed investigation model. The ensuing development considered the concrete implementation of further model aspects and entities: sub-domains, space and time, processes, properties, relations and functions. The investigation model and its ontological implementation provide a modular guideline for study planning, implementation and documentation within the area of HCI research in surgery. This guideline helps to navigate through the whole study process in the form of a kind of standard or good clinical practice, based on the involved foundational frameworks. 
Furthermore, it makes it possible to acquire a structured description of the assessment methods applied within a certain surgical domain and to use this information in one's own study design or to compare different studies. The investigation model and the corresponding ontology can be used further to create new knowledge bases of HCI assessment in surgery. Copyright © 2014 Elsevier B.V. All rights reserved.

  4. MedSynDiKATe--design considerations for an ontology-based medical text understanding system.

    PubMed Central

    Hahn, U.; Romacker, M.; Schulz, S.

    2000-01-01

    MedSynDiKATe is a natural language processor for automatically acquiring knowledge from medical finding reports. The content of these documents is transferred to formal representation structures which constitute a corresponding text knowledge base. The general system architecture we present integrates requirements from the analysis of single sentences, as well as those of referentially linked sentences forming cohesive texts. The strong demands MedSynDiKATe places on the availability of expressive knowledge sources are met by two alternative approaches to (semi-)automatic ontology engineering. PMID:11079899

  5. GlycoRDF: an ontology to standardize glycomics data in RDF

    PubMed Central

    Ranzinger, Rene; Aoki-Kinoshita, Kiyoko F.; Campbell, Matthew P.; Kawano, Shin; Lütteke, Thomas; Okuda, Shujiro; Shinmachi, Daisuke; Shikanai, Toshihide; Sawaki, Hiromichi; Toukach, Philip; Matsubara, Masaaki; Yamada, Issaku; Narimatsu, Hisashi

    2015-01-01

    Motivation: Over the last decades several glycomics-based bioinformatics resources and databases have been created and released to the public. Unfortunately, there is no common standard in the representation of the stored information or a common machine-readable interface allowing bioinformatics groups to easily extract and cross-reference the stored information. Results: An international group of bioinformatics experts in the field of glycomics have worked together to create a standard Resource Description Framework (RDF) representation for glycomics data, focused on glycan sequences and related biological source, publications and experimental data. This RDF standard is defined by the GlycoRDF ontology and will be used by database providers to generate common machine-readable exports of the data stored in their databases. Availability and implementation: The ontology, supporting documentation and source code used by database providers to generate standardized RDF are available online (http://www.glycoinfo.org/GlycoRDF/). Contact: rene@ccrc.uga.edu or kkiyoko@soka.ac.jp Supplementary information: Supplementary data are available at Bioinformatics online. PMID:25388145

  6. GlycoRDF: an ontology to standardize glycomics data in RDF.

    PubMed

    Ranzinger, Rene; Aoki-Kinoshita, Kiyoko F; Campbell, Matthew P; Kawano, Shin; Lütteke, Thomas; Okuda, Shujiro; Shinmachi, Daisuke; Shikanai, Toshihide; Sawaki, Hiromichi; Toukach, Philip; Matsubara, Masaaki; Yamada, Issaku; Narimatsu, Hisashi

    2015-03-15

    Over the last decades several glycomics-based bioinformatics resources and databases have been created and released to the public. Unfortunately, there is no common standard in the representation of the stored information or a common machine-readable interface allowing bioinformatics groups to easily extract and cross-reference the stored information. An international group of bioinformatics experts in the field of glycomics have worked together to create a standard Resource Description Framework (RDF) representation for glycomics data, focused on glycan sequences and related biological source, publications and experimental data. This RDF standard is defined by the GlycoRDF ontology and will be used by database providers to generate common machine-readable exports of the data stored in their databases. The ontology, supporting documentation and source code used by database providers to generate standardized RDF are available online (http://www.glycoinfo.org/GlycoRDF/). © The Author 2014. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.
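
    The record above describes exporting database content as standardized RDF triples. As a rough sketch of the triple-export idea in plain Python (the namespace, property names and record fields below are invented for illustration and are not the actual GlycoRDF vocabulary):

```python
GLYCO = "http://example.org/glyco#"  # placeholder namespace, not GlycoRDF

def export_record(record_id, sequence, species):
    """Return (subject, predicate, object) triples for one glycan record."""
    s = GLYCO + record_id
    return [
        (s, GLYCO + "has_sequence", sequence),
        (s, GLYCO + "from_source", species),
    ]

def to_ntriples(triples):
    """Serialize triples in an N-Triples-like line format."""
    lines = []
    for s, p, o in triples:
        obj = "<%s>" % o if o.startswith("http") else '"%s"' % o
        lines.append("<%s> <%s> %s ." % (s, p, obj))
    return "\n".join(lines)

print(to_ntriples(export_record("G00001", "Gal(b1-4)Glc", "Homo sapiens")))
```

    A real export would use an RDF library and the actual GlycoRDF terms; the point here is only the shape of the data: one subject per database record, with machine-readable properties that other groups can cross-reference.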

  7. IDEF5 Ontology Description Capture Method: Concept Paper

    NASA Technical Reports Server (NTRS)

    Menzel, Christopher P.; Mayer, Richard J.

    1990-01-01

    The results of research towards an ontology capture method referred to as IDEF5 are presented. Viewed simply as the study of what exists in a domain, ontology is an activity that can be understood to be at work across the full range of human inquiry, prompted by the persistent effort to understand the world in which we find ourselves and which we have helped to shape. In the context of information management, ontology is the task of extracting the structure of a given engineering, manufacturing, business, or logistical domain and storing it in a usable representational medium. A key to effective integration is a system ontology that can be accessed and modified across domains and which captures common features of the overall system relevant to the goals of the disparate domains. If the focus is on information integration, then the strongest motivation for ontology comes from the need to support data sharing and function interoperability. In the correct architecture, an enterprise ontology base would allow the construction of an integrated environment in which legacy systems appear to be open-architecture integrated resources. If the focus is on system/software development, then support for the rapid acquisition of reliable systems is perhaps the strongest motivation for ontology. Finally, ontological analysis was demonstrated to be an effective first step in the construction of robust knowledge-based systems.

  8. Mainstream web standards now support science data too

    NASA Astrophysics Data System (ADS)

    Richard, S. M.; Cox, S. J. D.; Janowicz, K.; Fox, P. A.

    2017-12-01

    The science community has developed many models and ontologies for representation of scientific data and knowledge. In some cases these have been built as part of coordinated frameworks. For example, the biomedical community's OBO Foundry federates ontologies covering various aspects of the life sciences, which are united through reference to a common foundational ontology (BFO). The SWEET ontology, originally developed at NASA and now governed through ESIP, is a single large unified ontology for the earth and environmental sciences. On a smaller scale, GeoSciML provides a UML and corresponding XML representation of geological mapping and observation data. Some of the key concepts related to scientific data and observations have recently been incorporated into domain-neutral mainstream ontologies developed by the World Wide Web Consortium (W3C) through its Spatial Data on the Web working group (SDWWG). OWL-Time has been enhanced to support the temporal reference systems needed for science, and has been deployed in a linked data representation of the International Chronostratigraphic Chart. The Semantic Sensor Network ontology has been extended to cover samples and sampling, including relationships between samples. Gridded and time-series data are supported by applications of the statistical data-cube ontology (QB) for earth observations (the EO-QB profile) and spatio-temporal data (QB4ST). These standard ontologies and encodings can be used directly for science data, or can provide a bridge to specialized domain ontologies. There are a number of advantages in alignment with the W3C standards. The W3C vocabularies use discipline-neutral language and thus support cross-disciplinary applications directly, without complex mappings. The W3C vocabularies are already aligned with the core ontologies that are the building blocks of the Semantic Web. The W3C vocabularies are each tightly scoped, thus encouraging good practices in the combination of complementary small ontologies.
The W3C vocabularies are hosted on well known, reliable infrastructure. The W3C SDWWG outputs are being selectively adopted by the general schema.org discovery framework.

  9. Developing a modular architecture for creation of rule-based clinical diagnostic criteria.

    PubMed

    Hong, Na; Pathak, Jyotishman; Chute, Christopher G; Jiang, Guoqian

    2016-01-01

    With recent advances in computerized patient record systems, there is an urgent need for producing computable and standards-based clinical diagnostic criteria. Notably, constructing rule-based clinical diagnostic criteria has become one of the goals of the International Classification of Diseases (ICD)-11 revision. However, few studies have been done on building a unified architecture to support the need for diagnostic criteria computerization. In this study, we present a modular architecture for enabling the creation of rule-based clinical diagnostic criteria leveraging Semantic Web technologies. The architecture consists of two modules: an authoring module that utilizes a standards-based information model and a translation module that leverages the Semantic Web Rule Language (SWRL). In a prototype implementation, we created a diagnostic criteria upper ontology (DCUO) that integrates the ICD-11 content model with the Quality Data Model (QDM). Using the DCUO, we developed a transformation tool that converts QDM-based diagnostic criteria into SWRL representation. We evaluated the domain coverage of the upper ontology model using randomly selected diagnostic criteria from broad domains (n = 20). We also tested the transformation algorithms using 6 QDM templates for ontology population and 15 QDM-based criteria for rule generation. As a result, the first draft of the DCUO contains 14 root classes, 21 subclasses, 6 object properties and 1 data property. Investigation Findings, and Signs and Symptoms are the two most commonly used element types. All 6 HQMF templates were successfully parsed and populated into their corresponding domain-specific ontologies, and 14 rules (93.3%) passed the rule validation. Our efforts in developing and prototyping a modular architecture provide useful insight into how to build a scalable solution to support diagnostic criteria representation and computerization.
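
    The translation of human-readable criteria into computable rules, as in the SWRL pipeline above, can be illustrated with a toy rule evaluator (the criterion, thresholds and field names below are invented, not drawn from ICD-11, QDM or the paper's DCUO):

```python
# Sketch of evaluating a rule-based diagnostic criterion. Each rule is a
# predicate over a patient record; the criterion holds when all rules hold.

def meets_criteria(patient, rules):
    """A patient satisfies the criteria when every rule's predicate holds."""
    return all(rule(patient) for rule in rules)

# Hypothetical criterion: fasting glucose >= 126 mg/dL on two occasions,
# restricted to adults. Values and thresholds are illustrative only.
rules = [
    lambda p: len([g for g in p["fasting_glucose"] if g >= 126]) >= 2,
    lambda p: p["age"] >= 18,
]

patient = {"fasting_glucose": [130, 128, 110], "age": 54}
print(meets_criteria(patient, rules))  # True
```

    A production system would express such rules declaratively (e.g., in SWRL) rather than as lambdas, so that they can be validated, shared and reasoned over independently of any one program.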

  10. An Ontology for Representing Geoscience Theories and Related Knowledge

    NASA Astrophysics Data System (ADS)

    Brodaric, B.

    2009-12-01

    Online scientific research, or e-science, is increasingly reliant on machine-readable representations of scientific data and knowledge. At present, much of the knowledge is represented in ontologies, which typically contain geoscience categories such as ‘water body’, ‘aquifer’, ‘granite’, ‘temperature’, ‘density’, ‘CO2’. While extremely useful for many e-science activities, such categorical representations constitute only a fragment of geoscience knowledge. Also needed are online representations of elements such as geoscience theories, to enable geoscientists to pose and evaluate hypotheses online. To address this need, the Science Knowledge Infrastructure ontology (SKIo) specializes the DOLCE foundational ontology with basic science knowledge primitives such as theory, model, observation, and prediction. SKIo is discussed, as well as its implementation in the geosciences, including case studies from marine science, environmental science, and geologic mapping. These case studies demonstrate SKIo’s ability to represent a wide spectrum of geoscience knowledge types, helping to fuel next-generation e-science.

  11. Semantic SenseLab: implementing the vision of the Semantic Web in neuroscience

    PubMed Central

    Samwald, Matthias; Chen, Huajun; Ruttenberg, Alan; Lim, Ernest; Marenco, Luis; Miller, Perry; Shepherd, Gordon; Cheung, Kei-Hoi

    2011-01-01

    Summary Objective Integrative neuroscience research needs a scalable informatics framework that enables semantic integration of diverse types of neuroscience data. This paper describes the use of the Web Ontology Language (OWL) and other Semantic Web technologies for the representation and integration of molecular-level data provided by several databases in the SenseLab suite of neuroscience databases. Methods Based on the original database structure, we semi-automatically translated the databases into OWL ontologies with manual addition of semantic enrichment. The SenseLab ontologies are extensively linked to other biomedical Semantic Web resources, including the Subcellular Anatomy Ontology, Brain Architecture Management System, the Gene Ontology, BIRNLex and UniProt. The SenseLab ontologies have also been mapped to the Basic Formal Ontology and Relation Ontology, which helps ease interoperability with many other existing and future biomedical ontologies for the Semantic Web. In addition, approaches to representing contradictory research statements are described. The SenseLab ontologies are designed for use on the Semantic Web, which enables their integration into a growing collection of biomedical information resources. Conclusion We demonstrate that our approach can yield significant potential benefits and that the Semantic Web is rapidly becoming mature enough to realize its anticipated promises. The ontologies are available online at http://neuroweb.med.yale.edu/senselab/ PMID:20006477

  12. Semantic SenseLab: Implementing the vision of the Semantic Web in neuroscience.

    PubMed

    Samwald, Matthias; Chen, Huajun; Ruttenberg, Alan; Lim, Ernest; Marenco, Luis; Miller, Perry; Shepherd, Gordon; Cheung, Kei-Hoi

    2010-01-01

    Integrative neuroscience research needs a scalable informatics framework that enables semantic integration of diverse types of neuroscience data. This paper describes the use of the Web Ontology Language (OWL) and other Semantic Web technologies for the representation and integration of molecular-level data provided by several databases in the SenseLab suite of neuroscience databases. Based on the original database structure, we semi-automatically translated the databases into OWL ontologies with manual addition of semantic enrichment. The SenseLab ontologies are extensively linked to other biomedical Semantic Web resources, including the Subcellular Anatomy Ontology, Brain Architecture Management System, the Gene Ontology, BIRNLex and UniProt. The SenseLab ontologies have also been mapped to the Basic Formal Ontology and Relation Ontology, which helps ease interoperability with many other existing and future biomedical ontologies for the Semantic Web. In addition, approaches to representing contradictory research statements are described. The SenseLab ontologies are designed for use on the Semantic Web, which enables their integration into a growing collection of biomedical information resources. We demonstrate that our approach can yield significant potential benefits and that the Semantic Web is rapidly becoming mature enough to realize its anticipated promises. The ontologies are available online at http://neuroweb.med.yale.edu/senselab/. © 2009 Elsevier B.V. All rights reserved.

  13. Complex Topographic Feature Ontology Patterns

    USGS Publications Warehouse

    Varanka, Dalia E.; Jerris, Thomas J.

    2015-01-01

    Semantic ontologies are examined as effective data models for the representation of complex topographic feature types. Complex feature types are viewed as integrated relations between basic features for a basic purpose. In the context of topographic science, such component assemblages are supported by resource systems and found on the local landscape. Ontologies are organized within six thematic modules of a domain ontology called Topography that includes within its sphere basic feature types, resource systems, and landscape types. Context is constructed not only as a spatial and temporal setting, but also as a setting based on environmental processes. Types of spatial relations that exist between components include location, generative processes, and description. An example is offered for the complex feature type ‘mine.’ The identification and extraction of complex feature types are an area for future research.

  14. A Process for the Representation of openEHR ADL Archetypes in OWL Ontologies.

    PubMed

    Porn, Alex Mateus; Peres, Leticia Mara; Didonet Del Fabro, Marcos

    2015-01-01

    ADL is a formal language for expressing archetypes, independent of standards or domain. However, its specification is not precise enough with respect to the specialization and semantics of archetypes, which leads to implementation difficulties and few available tools. Archetypes may be implemented using other languages such as XML or OWL, increasing integration with Semantic Web tools. Exchanging and transforming data can be better implemented with semantics-oriented models, for example using OWL, a language for defining and instantiating Web ontologies standardized by the W3C. OWL permits the user to define significant, detailed, precise and consistent distinctions among classes, properties and relations, ensuring greater consistency of knowledge than ADL techniques. This paper presents a process for representing openEHR ADL archetypes in OWL ontologies. The process consists of converting ADL archetypes into OWL ontologies and validating the resulting OWL ontologies using mutation testing.
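
    As a loose illustration of the archetype-to-ontology conversion step, the following sketch turns a small archetype-like definition into Manchester-style OWL class axioms (the archetype content, slot names and emitted axioms are invented; real openEHR ADL and the paper's process are far richer):

```python
# Toy archetype: a concept with typed attribute slots. This is NOT openEHR
# ADL syntax, just a dictionary standing in for a parsed archetype.
archetype = {
    "concept": "BloodPressure",
    "attributes": {"systolic": "Quantity", "diastolic": "Quantity"},
}

def to_owl(arch):
    """Emit one class declaration plus one existential restriction per slot."""
    lines = ["Class: %s" % arch["concept"]]
    for attr, rng in sorted(arch["attributes"].items()):
        lines.append("  SubClassOf: has_%s some %s" % (attr, rng))
    return "\n".join(lines)

print(to_owl(archetype))
```

    The value of such a conversion is that the resulting class axioms can be checked for consistency by an OWL reasoner, which ADL tooling alone does not provide.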

  15. Inferring ontology graph structures using OWL reasoning.

    PubMed

    Rodríguez-García, Miguel Ángel; Hoehndorf, Robert

    2018-01-05

    Ontologies are representations of a conceptualization of a domain. Traditionally, ontologies in biology were represented as directed acyclic graphs (DAGs) which represent the backbone taxonomy and additional relations between classes. These graphs are widely exploited for data analysis in the form of ontology enrichment or computation of semantic similarity. More recently, ontologies have been developed in a formal language such as the Web Ontology Language (OWL) and consist of a set of axioms through which classes are defined or constrained. While the taxonomy of an ontology can be inferred directly from its axioms as one of the standard OWL reasoning tasks, creating general graph structures from OWL ontologies that exploit the ontologies' semantic content remains a challenge. We developed a method to transform ontologies into graphs using an automated reasoner while taking into account all relations between classes. Searching for (existential) patterns in the deductive closure of ontologies, we can identify relations between classes that are implied but not asserted and generate graph structures that encode a large part of the ontologies' semantic content. We demonstrate the advantages of our method by applying it to the inference of protein-protein interactions through semantic similarity over the Gene Ontology, and demonstrate that performance is increased when graph structures are inferred using deductive inference according to our method. Our software and experiment results are available at http://github.com/bio-ontology-research-group/Onto2Graph . Onto2Graph is a method to generate graph structures from OWL ontologies using automated reasoning. The resulting graphs can be used for improved ontology visualization and ontology-based data analysis.
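
    A minimal sketch of the underlying idea, assuming a toy ontology: treat the deductive closure of subclass edges as a graph and compute an ancestor-based similarity over it. This is far simpler than Onto2Graph's reasoner-driven approach (the terms below are invented, not Gene Ontology identifiers):

```python
# Toy ontology as asserted child -> parents edges.
subclass = {
    "apoptosis": {"cell death"},
    "cell death": {"biological process"},
    "mitosis": {"cell cycle"},
    "cell cycle": {"biological process"},
}

def ancestors(term):
    """All ancestors reachable through subclass edges (deductive closure)."""
    seen, stack = set(), [term]
    while stack:
        for parent in subclass.get(stack.pop(), ()):
            if parent not in seen:
                seen.add(parent)
                stack.append(parent)
    return seen

def similarity(a, b):
    """Jaccard similarity over ancestor sets, a crude semantic similarity."""
    anc_a, anc_b = ancestors(a) | {a}, ancestors(b) | {b}
    return len(anc_a & anc_b) / len(anc_a | anc_b)

print(similarity("apoptosis", "mitosis"))  # 0.2: one shared ancestor of five
```

    The paper's contribution is that the graph edges come from reasoning over all axioms (including existential restrictions), not just from the asserted taxonomy as here.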

  16. Ontology and modeling patterns for state-based behavior representation

    NASA Technical Reports Server (NTRS)

    Castet, Jean-Francois; Rozek, Matthew L.; Ingham, Michel D.; Rouquette, Nicolas F.; Chung, Seung H.; Kerzhner, Aleksandr A.; Donahue, Kenneth M.; Jenkins, J. Steven; Wagner, David A.; Dvorak, Daniel L.

    2015-01-01

    This paper provides an approach to capture state-based behavior of elements, that is, the specification of their state evolution in time, and the interactions amongst them. Elements can be components (e.g., sensors, actuators) or environments, and are characterized by state variables that vary with time. The behaviors of these elements, as well as interactions among them are represented through constraints on state variables. This paper discusses the concepts and relationships introduced in this behavior ontology, and the modeling patterns associated with it. Two example cases are provided to illustrate their usage, as well as to demonstrate the flexibility and scalability of the behavior ontology: a simple flashlight electrical model and a more complex spacecraft model involving instruments, power and data behaviors. Finally, an implementation in a SysML profile is provided.
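
    The flashlight example can be caricatured as a state-variable constraint in code (the values are invented; the paper's ontology and SysML profile are not reproduced here):

```python
# A state is a set of state variables at a time step; the behavior is the
# constraint coupling them: a closed switch drives current through the lamp.

def step(state):
    """Advance one time step; the constraint couples switch and current."""
    new = dict(state)
    new["current"] = 0.3 if state["switch"] == "on" else 0.0  # amps, invented
    new["t"] += 1
    return new

trajectory = [{"t": 0, "switch": "off", "current": 0.0}]
trajectory.append(step({**trajectory[-1], "switch": "on"}))
print(trajectory[-1]["current"])  # 0.3
```

    In the behavior ontology, such couplings are declared as constraints over state variables rather than coded imperatively, which is what makes them reusable across elements and analyzable by tools.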

  17. A Working Framework for Enabling International Science Data System Interoperability

    NASA Astrophysics Data System (ADS)

    Hughes, J. Steven; Hardman, Sean; Crichton, Daniel J.; Martinez, Santa; Law, Emily; Gordon, Mitchell K.

    2016-07-01

    For diverse scientific disciplines to interoperate, they must be able to exchange information based on a shared understanding. To capture this shared understanding, we have developed a knowledge representation framework that leverages ISO-level reference models for metadata registries and digital archives. This framework provides multi-level governance, evolves independently of the implementation technologies, and promotes agile development, namely adaptive planning, evolutionary development, early delivery, continuous improvement, and rapid and flexible response to change. The knowledge representation is captured in an ontology through a process of knowledge acquisition. Discipline experts in the role of stewards at the common, discipline, and project levels work to design and populate the ontology model. The result is a formal and consistent knowledge base that provides requirements for data representation, integrity, provenance, context, identification, and relationship. The contents of the knowledge base are translated and written to files in suitable formats to configure system software and services, provide user documentation, validate input, and support data analytics. This presentation will provide an overview of the framework, present a use case that has been adopted by an entire science discipline at the international level, and share some important lessons learned.

  18. Representation of Biomedical Expertise in Ontologies: a Case Study about Knowledge Acquisition on HTLV viruses and their clinical manifestations.

    PubMed

    Cardoso Coelho, Kátia; Barcellos Almeida, Maurício

    2015-01-01

    In this paper, we introduce a set of methodological steps for knowledge acquisition applied to the organization of biomedical information through ontologies. Those steps are tested in a real case involving Human T Cell Lymphotropic Virus (HTLV), which causes myriad infectious diseases. We hope to contribute to providing suitable knowledge representation of scientific domains.

  19. Using Web Ontology Language to Integrate Heterogeneous Databases in the Neurosciences

    PubMed Central

    Lam, Hugo Y.K.; Marenco, Luis; Shepherd, Gordon M.; Miller, Perry L.; Cheung, Kei-Hoi

    2006-01-01

    Integrative neuroscience involves the integration and analysis of diverse types of neuroscience data involving many different experimental techniques. This data will increasingly be distributed across many heterogeneous databases that are web-accessible. Currently, these databases do not expose their schemas (database structures) and their contents to web applications/agents in a standardized, machine-friendly way. This limits database interoperation. To address this problem, we describe a pilot project that illustrates how neuroscience databases can be expressed using the Web Ontology Language, which is a semantically-rich ontological language, as a common data representation language to facilitate complex cross-database queries. In this pilot project, an existing tool called “D2RQ” was used to translate two neuroscience databases (NeuronDB and CoCoDat) into OWL, and the resulting OWL ontologies were then merged. An OWL-based reasoner (Racer) was then used to provide a sophisticated query language (nRQL) to perform integrated queries across the two databases based on the merged ontology. This pilot project is one step toward exploring the use of semantic web technologies in the neurosciences. PMID:17238384
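
    The effect of merging two schema-mapped sources can be sketched as follows (the field names and records below are invented; the pilot project used D2RQ, OWL ontologies and the Racer reasoner rather than plain Python):

```python
# Each source exposes records in its own schema.
neurondb = [{"neuron": "mitral cell", "receptor": "AMPA"}]
cocodat = [{"cell_type": "mitral cell", "current": "I_Na"}]

# Map both schemas onto shared property names (the "merged ontology" step).
merged = (
    [{"cell": r["neuron"], "receptor": r["receptor"]} for r in neurondb]
    + [{"cell": r["cell_type"], "current": r["current"]} for r in cocodat]
)

def properties_of(cell):
    """Integrated query: all known facts about a cell across both sources."""
    facts = {}
    for record in merged:
        if record["cell"] == cell:
            facts.update({k: v for k, v in record.items() if k != "cell"})
    return facts

print(properties_of("mitral cell"))
```

    The point of using OWL instead of ad hoc mappings like this is that the alignment between schemas is itself machine-readable, so a reasoner can answer cross-database queries without custom glue code per database pair.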

  20. Validating Domain Ontologies: A Methodology Exemplified for Concept Maps

    ERIC Educational Resources Information Center

    Steiner, Christina M.; Albert, Dietrich

    2017-01-01

    Ontologies play an important role as knowledge domain representations in technology-enhanced learning and instruction. Represented in form of concept maps they are commonly used as teaching and learning material and have the potential to enhance positive educational outcomes. To ensure the effective use of an ontology representing a knowledge…

  1. Motivation and Organizational Principles for Anatomical Knowledge Representation

    PubMed Central

    Rosse, Cornelius; Mejino, José L.; Modayur, Bharath R.; Jakobovits, Rex; Hinshaw, Kevin P.; Brinkley, James F.

    1998-01-01

    Abstract Objective: Conceptualization of the physical objects and spaces that constitute the human body at the macroscopic level of organization, specified as a machine-parseable ontology that, in its human-readable form, is comprehensible to both expert and novice users of anatomical information. Design: Conceived as an anatomical enhancement of the UMLS Semantic Network and Metathesaurus, the anatomical ontology was formulated by specifying defining attributes and differentia for classes and subclasses of physical anatomical entities based on their partitive and spatial relationships. The validity of the classification was assessed by instantiating the ontology for the thorax. Several transitive relationships were used for symbolically modeling aspects of the physical organization of the thorax. Results: By declaring Organ as the macroscopic organizational unit of the body, and defining the entities that constitute organs and higher level entities constituted by organs, all anatomical entities could be assigned to one of three top level classes (Anatomical structure, Anatomical spatial entity and Body substance). The ontology accommodates both the systemic and regional (topographical) views of anatomy, as well as diverse clinical naming conventions of anatomical entities. Conclusions: The ontology formulated for the thorax is extendible to microscopic and cellular levels, as well as to other body parts, in that its classes subsume essentially all anatomical entities that constitute the body. Explicit definitions of these entities and their relationships provide the first requirement for standards in anatomical concept representation. Conceived from an anatomical viewpoint, the ontology can be generalized and mapped to other biomedical domains and problem solving tasks that require anatomical knowledge. PMID:9452983

  2. Linking rare and common disease: mapping clinical disease-phenotypes to ontologies in therapeutic target validation.

    PubMed

    Sarntivijai, Sirarat; Vasant, Drashtti; Jupp, Simon; Saunders, Gary; Bento, A Patrícia; Gonzalez, Daniel; Betts, Joanna; Hasan, Samiul; Koscielny, Gautier; Dunham, Ian; Parkinson, Helen; Malone, James

    2016-01-01

    The Centre for Therapeutic Target Validation (CTTV - https://www.targetvalidation.org/) was established to generate therapeutic target evidence from genome-scale experiments and analyses. CTTV aims to support the validity of therapeutic targets by integrating existing and newly generated data. Data integration has been achieved in some resources by mapping metadata such as diseases and phenotypes to the Experimental Factor Ontology (EFO). Additionally, the relationship between ontology descriptions of rare and common diseases and their phenotypes can offer insights into shared biological mechanisms and potential drug targets. Ontologies are not ideal for representing the 'sometimes associated with' type of relationship required. This work addresses two challenges: annotation of diverse big data, and representation of complex, sometimes-associated relationships between concepts. Semantic mapping uses a combination of custom scripting, our annotation tool 'Zooma', and expert curation. Disease-phenotype associations were generated using literature mining of Europe PubMed Central abstracts and were manually verified by experts for validity. Representation of the disease-phenotype association was achieved with the Ontology of Biomedical AssociatioN (OBAN), a generic association representation model. OBAN represents associations between a subject and an object, i.e., a disease and its associated phenotypes, together with the source of evidence for that association. Indirect disease-to-disease associations are exposed through shared phenotypes. This was applied to the use case of linking rare to common diseases at the CTTV. EFO yields an average of over 80% mapping coverage in all data sources. A 42% precision is obtained from the manual verification of the text-mined disease-phenotype associations.
This results in 1452 and 2810 disease-phenotype pairs for IBD and autoimmune disease, respectively, and contributes towards 11,338 rare disease associations (merged with existing published work [Am J Hum Genet 97:111-24, 2015]). An OBAN result file is downloadable at http://sourceforge.net/p/efo/code/HEAD/tree/trunk/src/efoassociations/. Twenty common diseases are linked to 85 rare diseases by shared phenotypes. A generalizable OBAN model for association representation is presented in this study. Here we present solutions to large-scale annotation-ontology mapping in the CTTV knowledge base, a process for disease-phenotype mining, and propose a generic association model, 'OBAN', as a means to integrate diseases using shared phenotypes. EFO is released monthly and is available for download at http://www.ebi.ac.uk/efo/.
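
    The OBAN association pattern, i.e. a subject, an object and the evidence for the link, can be sketched in a few lines (the diseases, phenotypes and evidence identifiers below are invented examples, not CTTV data):

```python
# Each association reifies a subject-object link together with its evidence,
# rather than asserting a bare subject->object edge.
associations = [
    {"subject": "disease:A", "object": "phenotype:fever", "evidence": "pmid:1"},
    {"subject": "disease:B", "object": "phenotype:fever", "evidence": "pmid:2"},
    {"subject": "disease:B", "object": "phenotype:rash", "evidence": "pmid:3"},
]

def shared_phenotypes(d1, d2):
    """Indirect disease-to-disease link exposed through shared phenotypes."""
    p1 = {a["object"] for a in associations if a["subject"] == d1}
    p2 = {a["object"] for a in associations if a["subject"] == d2}
    return p1 & p2

print(shared_phenotypes("disease:A", "disease:B"))
```

    Reifying the association is what lets each link carry its own provenance, so that a weakly evidenced text-mined pair can be distinguished from a curated one.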

  3. An approach to define semantics for BPM systems interoperability

    NASA Astrophysics Data System (ADS)

    Rico, Mariela; Caliusco, María Laura; Chiotti, Omar; Rosa Galli, María

    2015-04-01

    This article proposes defining semantics for Business Process Management systems interoperability through the ontology of Electronic Business Documents (EBD) used to interchange the information required to perform cross-organizational processes. The semantic model generated allows aligning an enterprise's business processes to support cross-organizational processes by matching the business ontology of each business partner with the EBD ontology. The result is a flexible software architecture that allows dynamically defining cross-organizational business processes by reusing the EBD ontology. For developing the semantic model, a method is presented that is based on a strategy for discovering entity features whose interpretation depends on the context, and on representing them to enrich the ontology. The proposed method complements ontology learning techniques, which cannot infer semantic features that are not represented in the data sources. To improve the representation of these entity features, the method proposes using widely accepted ontologies for representing time entities and relations, physical quantities, measurement units, official country names, and currencies and funds, among others. When ontology reuse is not possible, the method proposes identifying whether the feature is simple or complex, and defines a strategy to be followed. An empirical validation of the approach has been performed through a case study.

  4. Knowledge representation and management enabling intelligent interoperability - principles and standards.

    PubMed

    Blobel, Bernd

    2013-01-01

    Based on the paradigm changes for health, health services and underlying technologies as well as the need for at best comprehensive and increasingly automated interoperability, the paper addresses the challenge of knowledge representation and management for medical decision support. After introducing related definitions, a system-theoretical, architecture-centric approach to decision support systems (DSSs) and appropriate ways for representing them using systems of ontologies is given. Finally, existing and emerging knowledge representation and management standards are presented. The paper focuses on the knowledge representation and management part of DSSs, excluding the reasoning part from consideration.

  5. Ontology of physics for biology: representing physical dependencies as a basis for biological processes.

    PubMed

    Cook, Daniel L; Neal, Maxwell L; Bookstein, Fred L; Gennari, John H

    2013-12-02

    In prior work, we presented the Ontology of Physics for Biology (OPB) as a computational ontology for use in the annotation and representation of biophysical knowledge encoded in repositories of physics-based biosimulation models. We introduced OPB:Physical entity and OPB:Physical property classes that extend available spatiotemporal representations of physical entities and processes to explicitly represent the thermodynamics and dynamics of physiological processes. Our utilitarian, long-term aim is to develop computational tools for creating and querying formalized physiological knowledge for use by multiscale "physiome" projects such as the EU's Virtual Physiological Human (VPH) and NIH's Virtual Physiological Rat (VPR). Here we describe the OPB:Physical dependency taxonomy of classes that represent the laws of classical physics, the "rules" by which physical properties of physical entities change during occurrences of physical processes. For example, the fluid analog of Ohm's law (as for electric currents) is used to describe how a blood flow rate depends on a blood pressure gradient. Hooke's law (as in elastic deformations of springs) is used to describe how an increase in vascular volume increases blood pressure. We classify such dependencies according to the flow, transformation, and storage of thermodynamic energy that occurs during processes governed by the dependencies. We have developed the OPB and annotation methods to represent the meaning (the biophysical semantics) of the mathematical statements of physiological analysis and the biophysical content of models and datasets. Here we describe and discuss our approach to an ontological representation of physical laws (as dependencies) and properties as encoded for the mathematical analysis of biophysical processes.
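    The two dependency examples above can be sketched numerically; the function names, variable names and values below are illustrative and are not taken from the OPB itself:

```python
# Fluid analog of Ohm's law: flow rate depends on the pressure gradient,
#   flow = delta_pressure / resistance   (cf. I = V / R).
def blood_flow_rate(delta_pressure_mmhg: float, resistance: float) -> float:
    """Volumetric flow through a vessel segment (Ohm's-law analog)."""
    return delta_pressure_mmhg / resistance

# Hooke's-law analog: pressure rises with vascular volume above the
# unstressed volume,  pressure = elastance * (volume - unstressed_volume).
def vascular_pressure(volume_ml: float, unstressed_ml: float,
                      elastance: float) -> float:
    """Elastic pressure in a compliant vessel (Hooke's-law analog)."""
    return elastance * (volume_ml - unstressed_ml)

print(blood_flow_rate(10.0, 2.0))            # 5.0
print(vascular_pressure(120.0, 100.0, 0.5))  # 10.0
```

    In the OPB, such dependency classes annotate the equations of a biosimulation model rather than compute values directly; the point is that each law relates physical properties of physical entities during a process.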

  6. CogSkillnet: An Ontology-Based Representation of Cognitive Skills

    ERIC Educational Resources Information Center

    Askar, Petek; Altun, Arif

    2009-01-01

    A number of studies have emphasized the need to capture learners' interaction patterns in order to personalize their learning process as they study through learning objects. In the education context, learning materials are designed based on pre-determined expectations, and learners are evaluated on the extent to which they master these expectations. Representation…

  7. Semantics-based approach for analyzing disease-target associations.

    PubMed

    Kaalia, Rama; Ghosh, Indira

    2016-08-01

    A complex disease is caused by heterogeneous biological interactions between genes and their products along with the influence of environmental factors. There have been many attempts to understand the cause of these diseases using experimental, statistical and computational methods. In the present work the objective is to address the challenge of representation and integration of information from heterogeneous biomedical aspects of a complex disease using a semantics-based approach. Semantic web technology is used to design the Disease Association Ontology (DAO-db) for representation and integration of disease-associated information, with diabetes as the case study. The functional associations of disease genes are integrated using RDF graphs of DAO-db. Three semantic web based scoring algorithms (PageRank, HITS (Hyperlink Induced Topic Search) and HITS with semantic weights) are used to score the gene nodes on the basis of their functional interactions in the graph. The Disease Association Ontology for Diabetes (DAO-db) provides a standard ontology-driven platform for describing genes, proteins and pathways involved in diabetes and for integrating functional associations from various interaction levels (gene-disease, gene-pathway, gene-function, gene-cellular component and protein-protein interactions). An automatic instance loader module is also developed in the present work that helps in adding instances to DAO-db on a large scale. Our ontology provides a framework for querying and analyzing the disease-associated information in the form of RDF graphs. The above methodology is used to predict novel potential targets involved in diabetes disease from the long list of loose (statistically associated) gene-disease associations. Copyright © 2016 Elsevier Inc. All rights reserved.
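    The node-scoring step can be illustrated with a tiny power-iteration PageRank over a toy association graph. The gene and pathway names, edges and parameters below are illustrative only; the paper's actual algorithms (including HITS and the semantically weighted variant) run over DAO-db's RDF graphs:

```python
# Toy directed association graph: nodes are genes/pathways, edges are
# functional associations (illustrative data, not DAO-db content).
edges = [
    ("TCF7L2", "WNT_pathway"), ("WNT_pathway", "INS"),
    ("PPARG", "INS"), ("INS", "INSR"), ("INSR", "PPARG"),
]
nodes = sorted({n for e in edges for n in e})
out = {n: [t for s, t in edges if s == n] for n in nodes}

def pagerank(damping=0.85, iters=50):
    """Plain power-iteration PageRank (no semantic weights)."""
    rank = {n: 1.0 / len(nodes) for n in nodes}
    for _ in range(iters):
        new = {n: (1 - damping) / len(nodes) for n in nodes}
        for n in nodes:
            targets = out[n] or nodes  # dangling nodes spread rank evenly
            share = damping * rank[n] / len(targets)
            for t in targets:
                new[t] += share
        rank = new
    return rank

scores = pagerank()
for gene, s in sorted(scores.items(), key=lambda kv: -kv[1]):
    print(f"{gene}: {s:.3f}")
```

    Ranking the scored gene nodes in this way is what surfaces candidate targets from the long list of loosely associated genes.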

  8. Basic-level and superordinate-like categorical representations in early infancy.

    PubMed

    Behl-Chadha, G

    1996-08-01

    A series of experiments using the paired-preference procedure examined 3- to 4-month-old infants' ability to form perceptually based categorical representations in the domains of natural kinds and artifacts and probed the underlying organizational structure of these representations. Experiments 1 and 2 found that infants could form categorical representations for chairs that excluded exemplars of couches, beds, and tables and also for couches that excluded exemplars of chairs, beds, and tables. Thus, the adult-like exclusivity shown by infants in the categorization of various animal pictures at the basic-level extends to the domain of artifacts as well--an ecologically significant ability given the numerous artifacts that populate the human environment. Experiments 3 and 4 examined infants' ability to form superordinate-like or global categorical representations for mammals and furniture. It was found that infants could form a global representation for mammals that included novel mammals and excluded other non-mammalian animals such as birds and fish as well as items from cross-ontological categories such as furniture. In addition, it was found that infants formed a representation for furniture that included novel categories of furniture and excluded exemplars from the cross-ontological category of mammals; however, it was less clear if infants' global representation for furniture also excluded other artifacts such as vehicles and thus the category of furniture may have been less exclusively represented. Overall, the present findings, by showing the availability of perceptually driven basic and superordinate-like representations in early infancy that closely correspond to adult conceptual categories, underscore the importance of these early representations for later conceptual representations.

  9. A common layer of interoperability for biomedical ontologies based on OWL EL.

    PubMed

    Hoehndorf, Robert; Dumontier, Michel; Oellrich, Anika; Wimalaratne, Sarala; Rebholz-Schuhmann, Dietrich; Schofield, Paul; Gkoutos, Georgios V

    2011-04-01

    Ontologies are essential in biomedical research due to their ability to semantically integrate content from different scientific databases and resources. Their application improves capabilities for querying and mining biological knowledge. An increasing number of ontologies is being developed for this purpose, and considerable effort is invested into formally defining them in order to represent their semantics explicitly. However, current biomedical ontologies do not facilitate data integration and interoperability yet, since reasoning over these ontologies is very complex and cannot be performed efficiently or is even impossible. We propose the use of less expressive subsets of ontology representation languages to enable efficient reasoning and achieve the goal of genuine interoperability between ontologies. We present and evaluate EL Vira, a framework that transforms OWL ontologies into the OWL EL subset, thereby enabling the use of tractable reasoning. We illustrate which OWL constructs and inferences are kept and lost following the conversion and demonstrate the performance gain of reasoning indicated by the significant reduction of processing time. We applied EL Vira to the open biomedical ontologies and provide a repository of ontologies resulting from this conversion. EL Vira creates a common layer of ontological interoperability that, for the first time, enables the creation of software solutions that can employ biomedical ontologies to perform inferences and answer complex queries to support scientific analyses. The EL Vira software is available from http://el-vira.googlecode.com and converted OBO ontologies and their mappings are available from http://bioonto.gen.cam.ac.uk/el-ont.
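    The kind of profile-based filtering EL Vira performs can be sketched as follows. The predicate list is only a representative subset of constructs outside OWL 2 EL, and a real converter such as EL Vira removes whole axioms (with bookkeeping of what is lost) rather than individual triples:

```python
# Representative OWL constructs outside the OWL 2 EL profile.
OWL = "http://www.w3.org/2002/07/owl#"
NON_EL_PREDICATES = {
    OWL + "unionOf",         # disjunction
    OWL + "complementOf",    # negation
    OWL + "allValuesFrom",   # universal restriction
    OWL + "inverseOf",       # inverse properties
    OWL + "maxCardinality",  # cardinality restrictions
}

def to_el_subset(triples):
    """Keep only triples whose predicate is EL-compatible (coarse sketch:
    a faithful converter must drop whole axioms, not single triples)."""
    return [t for t in triples if t[1] not in NON_EL_PREDICATES]

triples = [
    ("ex:Heart", "rdfs:subClassOf", "ex:Organ"),
    ("_:b0", OWL + "unionOf", "_:list1"),
    ("ex:partOf", OWL + "inverseOf", "ex:hasPart"),
]
print(to_el_subset(triples))  # only the subClassOf triple survives
```

    Restricting ontologies to the EL profile in this spirit is what makes reasoning tractable (polynomial-time classification) and hence enables the common interoperability layer the paper describes.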

  10. An Ontology-based Architecture for Integration of Clinical Trials Management Applications

    PubMed Central

    Shankar, Ravi D.; Martins, Susana B.; O’Connor, Martin; Parrish, David B.; Das, Amar K.

    2007-01-01

    Management of complex clinical trials involves the coordinated use of a myriad of software applications by trial personnel. The applications typically use distinct knowledge representations and generate an enormous amount of information during the course of a trial. It is therefore vital that the applications exchange trial semantics to enable efficient management of the trials and subsequent analysis of clinical trial data. Existing model-based frameworks do not address the requirements of semantic integration of heterogeneous applications. We have built an ontology-based architecture to support interoperation of clinical trial software applications. Central to our approach is a suite of clinical trial ontologies, which we call Epoch, that define the vocabulary and semantics necessary to represent information on clinical trials. We are continuing to demonstrate and validate our approach with different clinical trials management applications and a growing number of clinical trials. PMID:18693919

  11. Share Repository Framework: Component Specification and Ontology

    DTIC Science & Technology

    2008-04-23

    Palantir Technologies has created one such software application to support the DoD intelligence community by providing robust capabilities for...managing data from various sources. The Palantir tool is based on user-defined ontologies and supports multiple representation and analysis tools

  12. Supporting Multi-view User Ontology to Understand Company Value Chains

    NASA Astrophysics Data System (ADS)

    Zuo, Landong; Salvadores, Manuel; Imtiaz, Sm Hazzaz; Darlington, John; Gibbins, Nicholas; Shadbolt, Nigel R.; Dobree, James

    The objective of the Market Blended Insight (MBI) project is to develop web-based techniques to improve the performance of UK Business-to-Business (B2B) marketing activities. The analysis of company value chains is a fundamental task within MBI because it is an important model for understanding the market place and the company interactions within it. The project has aggregated rich data profiles of 3.7 million companies that form the active UK business community. The profiles are augmented by Web extractions from heterogeneous sources to provide unparalleled business insight. Advances by the Semantic Web in knowledge representation and logic reasoning allow flexible integration of data from heterogeneous sources, transformation between different representations and reasoning about their meaning. The MBI project has identified that the market insight and analysis interests of different types of users are difficult to maintain using a single domain ontology. Therefore, the project has developed a technique to undertake a plurality of analyses of value chains by deploying a distributed multi-view ontology to capture different user views over the classification of companies and their various relationships.

  13. Knowledge Representation and Ontologies

    NASA Astrophysics Data System (ADS)

    Grimm, Stephan

    Knowledge representation and reasoning aims at designing computer systems that reason about a machine-interpretable representation of the world. Knowledge-based systems have a computational model of some domain of interest in which symbols serve as surrogates for real world domain artefacts, such as physical objects, events, relationships, etc. [1]. The domain of interest can cover any part of the real world or any hypothetical system about which one desires to represent knowledge for computational purposes. A knowledge-based system maintains a knowledge base, which stores the symbols of the computational model in the form of statements about the domain, and it performs reasoning by manipulating these symbols. Applications can base their decisions on answers to domain-relevant questions posed to a knowledge base.
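    The idea of symbols as surrogates manipulated by reasoning can be made concrete with a minimal sketch (the facts and the single transitivity rule are illustrative): a knowledge base stores statements about the domain and answers a question by forward-chaining to a fixed point:

```python
# A minimal symbolic knowledge base: facts as (subject, relation, object)
# triples, plus one inference rule (transitivity of "is_a").
facts = {
    ("Heart", "is_a", "Organ"),
    ("Organ", "is_a", "PhysicalObject"),
}

def infer_transitive(kb, relation="is_a"):
    """Forward-chain transitivity of `relation` until a fixed point."""
    kb = set(kb)
    changed = True
    while changed:
        changed = False
        new = {(a, relation, c)
               for (a, r1, b) in kb if r1 == relation
               for (b2, r2, c) in kb if r2 == relation and b2 == b}
        if not new <= kb:
            kb |= new
            changed = True
    return kb

closed = infer_transitive(facts)
# Domain-relevant question posed to the knowledge base:
print(("Heart", "is_a", "PhysicalObject") in closed)  # True
```

    Real knowledge-based systems use far richer logics (description logics, rules), but the pattern is the same: statements in, entailed statements out.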

  14. Development and Evaluation of an Adolescents' Depression Ontology for Analyzing Social Data.

    PubMed

    Jung, Hyesil; Park, Hyeoun-Ae; Song, Tae-Min

    2016-01-01

    This study aims to develop and evaluate an ontology for adolescents' depression to be used for collecting and analyzing social data. The ontology was developed according to the 'ontology development 101' methodology. Concepts were extracted from clinical practice guidelines and the related literature. The ontology is composed of five sub-ontologies representing risk factors, signs and symptoms, measurement, diagnostic results and management care. The ontology was evaluated in four different ways: first, we examined the frequency with which ontology concepts appeared in social data; second, the content coverage of the ontology was evaluated by comparing ontology concepts with concepts extracted from youth depression counseling records; third, the structural and representational layers of the ontology were evaluated by five ontology and psychiatric nursing experts; fourth, the scope of the ontology was examined by answering 59 competency questions. The ontology was improved by adding new concepts and synonyms and revising its structure.

  15. Integrating HL7 RIM and ontology for unified knowledge and data representation in clinical decision support systems.

    PubMed

    Zhang, Yi-Fan; Tian, Yu; Zhou, Tian-Shu; Araki, Kenji; Li, Jing-Song

    2016-01-01

    The broad adoption of clinical decision support systems within clinical practice has been hampered mainly by the difficulty in expressing domain knowledge and patient data in a unified formalism. This paper presents a semantic-based approach to the unified representation of healthcare domain knowledge and patient data for practical clinical decision making applications. A four-phase knowledge engineering cycle is implemented to develop a semantic healthcare knowledge base based on an HL7 reference information model, including an ontology to model domain knowledge and patient data and an expression repository to encode clinical decision making rules and queries. A semantic clinical decision support system is designed to provide patient-specific healthcare recommendations based on the knowledge base and patient data. The proposed solution is evaluated in the case study of type 2 diabetes mellitus inpatient management. The knowledge base is successfully instantiated with relevant domain knowledge and testing patient data. Ontology-level evaluation confirms model validity. Application-level evaluation of diagnostic accuracy reaches a sensitivity of 97.5%, a specificity of 100%, and a precision of 98%; an acceptance rate of 97.3% is given by domain experts for the recommended care plan orders. The proposed solution has been successfully validated in the case study as providing clinical decision support at a high accuracy and acceptance rate. The evaluation results demonstrate the technical feasibility and application prospect of our approach. Copyright © 2015 Elsevier Ireland Ltd. All rights reserved.
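    To give a flavor of what a single encoded decision rule looks like, here is a minimal sketch; the thresholds follow commonly published WHO/ADA diagnostic criteria for type 2 diabetes and are an assumption for illustration, not rules taken from the paper's expression repository:

```python
# One screening rule of the kind a CDSS expression repository encodes
# (illustrative; real rules are expressed against the ontology/RIM model).
def flag_possible_t2dm(fasting_glucose_mmol_l: float,
                       hba1c_percent: float) -> bool:
    """Flag a patient for possible type 2 diabetes mellitus."""
    return fasting_glucose_mmol_l >= 7.0 or hba1c_percent >= 6.5

print(flag_possible_t2dm(7.8, 6.1))  # True  (glucose criterion met)
print(flag_possible_t2dm(5.2, 5.4))  # False
```

    In the paper's architecture such rules are not hard-coded functions but queries and expressions evaluated against patient data instantiated in the semantic knowledge base, which is what allows the same formalism to carry both domain knowledge and patient records.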

  16. An ontology for factors affecting tuberculosis treatment adherence behavior in sub-Saharan Africa.

    PubMed

    Ogundele, Olukunle Ayodeji; Moodley, Deshendran; Pillay, Anban W; Seebregts, Christopher J

    2016-01-01

    Adherence behavior is a complex phenomenon influenced by diverse personal, cultural, and socioeconomic factors that may vary between communities in different regions. Understanding the factors that influence adherence behavior is essential in predicting which individuals and communities are at risk of nonadherence. This is necessary for supporting resource allocation and intervention planning in disease control programs. Currently, there is no known concrete and unambiguous computational representation of factors that influence tuberculosis (TB) treatment adherence behavior that is useful for prediction. This study developed a computer-based conceptual model for capturing and structuring knowledge about the factors that influence TB treatment adherence behavior in sub-Saharan Africa (SSA). An extensive review of existing categorization systems in the literature was used to develop a conceptual model that captured scientific knowledge about TB adherence behavior in SSA. The model was formalized as an ontology using the web ontology language. The ontology was then evaluated for its comprehensiveness and applicability in building predictive models. The outcome of the study is a novel ontology-based approach for curating and structuring scientific knowledge of adherence behavior in patients with TB in SSA. The ontology takes an evidence-based approach by explicitly linking factors to published clinical studies. Factors are structured around five dimensions: factor type, type of effect, regional variation, cross-dependencies between factors, and treatment phase. The ontology is flexible and extendable and provides new insights into the nature of and interrelationship between factors that influence TB adherence.

  17. An ontology for factors affecting tuberculosis treatment adherence behavior in sub-Saharan Africa

    PubMed Central

    Ogundele, Olukunle Ayodeji; Moodley, Deshendran; Pillay, Anban W; Seebregts, Christopher J

    2016-01-01

    Purpose: Adherence behavior is a complex phenomenon influenced by diverse personal, cultural, and socioeconomic factors that may vary between communities in different regions. Understanding the factors that influence adherence behavior is essential in predicting which individuals and communities are at risk of nonadherence. This is necessary for supporting resource allocation and intervention planning in disease control programs. Currently, there is no known concrete and unambiguous computational representation of factors that influence tuberculosis (TB) treatment adherence behavior that is useful for prediction. This study developed a computer-based conceptual model for capturing and structuring knowledge about the factors that influence TB treatment adherence behavior in sub-Saharan Africa (SSA). Methods: An extensive review of existing categorization systems in the literature was used to develop a conceptual model that captured scientific knowledge about TB adherence behavior in SSA. The model was formalized as an ontology using the web ontology language. The ontology was then evaluated for its comprehensiveness and applicability in building predictive models. Conclusion: The outcome of the study is a novel ontology-based approach for curating and structuring scientific knowledge of adherence behavior in patients with TB in SSA. The ontology takes an evidence-based approach by explicitly linking factors to published clinical studies. Factors are structured around five dimensions: factor type, type of effect, regional variation, cross-dependencies between factors, and treatment phase. The ontology is flexible and extendable and provides new insights into the nature of and interrelationship between factors that influence TB adherence. PMID:27175067

  18. A Representation-Driven Ontology for Spatial Data Quality Elements, with Orthoimagery as Running Example

    NASA Astrophysics Data System (ADS)

    Hangouët, J.-F.

    2015-08-01

    The many facets of what is encompassed by such an expression as "quality of spatial data" can be considered as a specific domain of reality worthy of formal description, i.e. of ontological abstraction. Various ontologies for data quality elements have already been proposed in literature. Today, the system of quality elements is most generally used and discussed according to the configuration exposed in the "data dictionary for data quality" of international standard ISO 19157. Our communication proposes an alternative view. This is founded on a perspective which focuses on the specificity of spatial data as a product: the representation perspective, where data in the computer are meant to show things of the geographic world and to be interpreted as such. The resulting ontology introduces new elements, the usefulness of which will be illustrated by orthoimagery examples.

  19. Facet Theory and the Mapping Sentence As Hermeneutically Consistent Structured Meta-Ontology and Structured Meta-Mereology

    PubMed Central

    Hackett, Paul M. W.

    2016-01-01

    When behavior is interpreted in a reliable manner (i.e., robustly across different situations and times) its explained meaning may be seen to possess hermeneutic consistency. In this essay I present an evaluation of the hermeneutic consistency that I propose may be present when the research tool known as the mapping sentence is used to create generic structural ontologies. I also claim that theoretical and empirical validity is a likely result of employing the mapping sentence in research design and interpretation. These claims are non-contentious within the realm of quantitative psychological and behavioral research. However, I extend the scope of both facet theory based research and claims for its structural utility, reliability and validity to philosophical and qualitative investigations. I assert that the hermeneutic consistency of a structural ontology is a product of a structural representation's ontological components and the mereological relationships between these ontological sub-units: the mapping sentence seminally allows for the depiction of such structure. PMID:27065932

  20. ontologyX: a suite of R packages for working with ontological data.

    PubMed

    Greene, Daniel; Richardson, Sylvia; Turro, Ernest

    2017-04-01

    Ontologies are widely used constructs for encoding and analyzing biomedical data, but the absence of simple and consistent tools has made exploratory and systematic analysis of such data unnecessarily difficult. Here we present three packages which aim to simplify such procedures. The ontologyIndex package enables arbitrary ontologies to be read into R, supports representation of ontological objects by native R types, and provides a parsimonious set of performant functions for querying ontologies. ontologySimilarity and ontologyPlot extend ontologyIndex with functionality for straightforward visualization and semantic similarity calculations, including statistical routines. ontologyIndex, ontologyPlot and ontologySimilarity are all available on the Comprehensive R Archive Network website at https://cran.r-project.org/web/packages/ . Contact: Daniel Greene dg333@cam.ac.uk. Supplementary data are available at Bioinformatics online. © The Author 2016. Published by Oxford University Press.
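    The style of query ontologyIndex supports in R, such as collecting a term's transitive ancestors in an ontology DAG, can be sketched language-neutrally in Python (the HPO-style term IDs and hierarchy below are toy data, not real ontology content):

```python
# Toy parent map for an ontology DAG (term IDs are illustrative).
parents = {
    "HP:0000003": ["HP:0000002"],
    "HP:0000002": ["HP:0000001"],
    "HP:0000001": [],
}

def get_ancestors(term: str) -> set[str]:
    """Return the term together with all of its transitive ancestors."""
    seen = {term}
    stack = [term]
    while stack:
        for p in parents.get(stack.pop(), []):
            if p not in seen:
                seen.add(p)
                stack.append(p)
    return seen

print(sorted(get_ancestors("HP:0000003")))
# ['HP:0000001', 'HP:0000002', 'HP:0000003']
```

    Ancestor sets like this are the building block for the semantic-similarity measures that ontologySimilarity computes (e.g., comparing the ancestor overlap of two term sets).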

  1. Spatiotemporal integration of molecular and anatomical data in virtual reality using semantic mapping.

    PubMed

    Soh, Jung; Turinsky, Andrei L; Trinh, Quang M; Chang, Jasmine; Sabhaney, Ajay; Dong, Xiaoli; Gordon, Paul Mk; Janzen, Ryan Pw; Hau, David; Xia, Jianguo; Wishart, David S; Sensen, Christoph W

    2009-01-01

    We have developed a computational framework for spatiotemporal integration of molecular and anatomical datasets in a virtual reality environment. Using two case studies involving gene expression data and pharmacokinetic data, respectively, we demonstrate how existing knowledge bases for molecular data can be semantically mapped onto a standardized anatomical context of the human body. Our data mapping methodology uses ontological representations of heterogeneous biomedical datasets and an ontology reasoner to create complex semantic descriptions of biomedical processes. This framework provides a means to systematically combine an increasing amount of biomedical imaging and numerical data into spatiotemporally coherent graphical representations. Our work enables medical researchers with different expertise to simulate complex phenomena visually and to develop insights through the use of shared data, thus paving the way for pathological inference, developmental pattern discovery and biomedical hypothesis testing.

  2. BioSWR – Semantic Web Services Registry for Bioinformatics

    PubMed Central

    Repchevsky, Dmitry; Gelpi, Josep Ll.

    2014-01-01

    Despite the variety of available Web services registries specifically aimed at the Life Sciences, their scope is usually restricted to a limited set of well-defined types of services. While dedicated registries are generally tied to a particular format, general-purpose ones adhere more closely to standards and usually rely on the Web Service Definition Language (WSDL). Although WSDL is flexible enough to support common Web service types, its lack of semantic expressiveness led to various initiatives to describe Web services via ontology languages. As a result, WSDL 2.0 descriptions gained a standard representation based on the Web Ontology Language (OWL). BioSWR is a novel Web services registry that provides standard Resource Description Framework (RDF)-based Web services descriptions along with the traditional WSDL-based ones. The registry provides a Web-based interface for Web service registration, querying and annotation, and is also accessible programmatically via a Representational State Transfer (REST) API or using the SPARQL Protocol and RDF Query Language. The BioSWR server is located at http://inb.bsc.es/BioSWR/ and its code is available at https://sourceforge.net/projects/bioswr/ under the LGPL license. PMID:25233118

  3. BioSWR--semantic web services registry for bioinformatics.

    PubMed

    Repchevsky, Dmitry; Gelpi, Josep Ll

    2014-01-01

    Despite the variety of available Web services registries specifically aimed at the Life Sciences, their scope is usually restricted to a limited set of well-defined types of services. While dedicated registries are generally tied to a particular format, general-purpose ones adhere more closely to standards and usually rely on the Web Service Definition Language (WSDL). Although WSDL is flexible enough to support common Web service types, its lack of semantic expressiveness led to various initiatives to describe Web services via ontology languages. As a result, WSDL 2.0 descriptions gained a standard representation based on the Web Ontology Language (OWL). BioSWR is a novel Web services registry that provides standard Resource Description Framework (RDF)-based Web services descriptions along with the traditional WSDL-based ones. The registry provides a Web-based interface for Web service registration, querying and annotation, and is also accessible programmatically via a Representational State Transfer (REST) API or using the SPARQL Protocol and RDF Query Language. The BioSWR server is located at http://inb.bsc.es/BioSWR/ and its code is available at https://sourceforge.net/projects/bioswr/ under the LGPL license.

  4. Modeling and formal representation of geospatial knowledge for the Geospatial Semantic Web

    NASA Astrophysics Data System (ADS)

    Huang, Hong; Gong, Jianya

    2008-12-01

    GML achieves geospatial interoperation only at the syntactic level. In most cases, however, differences in spatial cognition must be resolved first, so ontologies were introduced to describe geospatial information and services. Still, it is difficult and inappropriate to expect users to find, match and compose services themselves, especially when complicated business logics are involved. Currently, with the gradual introduction of Semantic Web technologies (e.g., OWL, SWRL), the focus of geospatial information interoperation has shifted from the syntactic level to the semantic, and even to automatic, intelligent levels. The Geospatial Semantic Web (GSM) can thus be put forward as an augmentation of the Semantic Web that additionally includes geospatial abstractions as well as related reasoning, representation and query mechanisms. To advance the implementation of the GSM, we first attempt to construct mechanisms for modeling and formally representing geospatial knowledge, the two most foundational phases in knowledge engineering (KE). Our attitude in this paper is quite pragmatic: we argue that geospatial context is a formal model of the discriminating environmental characteristics of geospatial knowledge, and that the derivation, understanding and use of geospatial knowledge are situated in geospatial context. Therefore, first, we put forward a primitive hierarchy of geospatial knowledge referencing first-order logic, formal ontologies, rules and GML. Second, a metamodel of geospatial context is proposed, and we use the modeling methods and representation languages of formal ontologies to process geospatial context. Third, we extend the Web Processing Service (WPS) to be compatible with local DLLs for geoprocessing and to possess inference capability based on OWL.

  5. Ontology-guided data preparation for discovering genotype-phenotype relationships.

    PubMed

    Coulet, Adrien; Smaïl-Tabbone, Malika; Benlian, Pascale; Napoli, Amedeo; Devignes, Marie-Dominique

    2008-04-25

    The complexity and amount of post-genomic data constitute two major factors limiting the application of Knowledge Discovery in Databases (KDD) methods in the life sciences. Bio-ontologies can now play a key role in knowledge discovery in the life sciences by providing semantics to data and to extracted units, taking advantage of progress in Semantic Web technologies for knowledge representation, extraction, and reasoning. This paper presents a method that exploits bio-ontologies for guiding data selection within the preparation step of the KDD process. We propose three scenarios in which domain knowledge and ontology elements such as subsumption, properties, and class descriptions are taken into account for data selection before the data mining step. Each scenario is illustrated within a case study on the search for genotype-phenotype relationships in a familial hypercholesterolemia dataset. The guiding of data selection based on domain knowledge is analysed and shows a direct influence on the volume and significance of the data mining results. The method proposed in this paper is an efficient alternative to numerical methods for data selection based on domain knowledge. In turn, the results of this study may be reused in ontology modelling and data integration.
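    The subsumption-based selection scenario can be sketched as follows; the class hierarchy and patient records are illustrative and are not drawn from the familial hypercholesterolemia dataset:

```python
# Toy single-parent hierarchy (illustrative class names).
subclass_of = {
    "FamilialHypercholesterolemia": "Dyslipidemia",
    "Dyslipidemia": "MetabolicDisorder",
    "Asthma": "RespiratoryDisorder",
}

def is_subsumed_by(cls, ancestor):
    """True if `cls` equals `ancestor` or is a transitive subclass of it."""
    while cls is not None:
        if cls == ancestor:
            return True
        cls = subclass_of.get(cls)
    return False

# Ontology-guided data selection: keep only records whose annotation
# is subsumed by the query class, before the data mining step.
patients = [
    ("p1", "FamilialHypercholesterolemia"),
    ("p2", "Asthma"),
    ("p3", "Dyslipidemia"),
]
selected = [pid for pid, dx in patients
            if is_subsumed_by(dx, "MetabolicDisorder")]
print(selected)  # ['p1', 'p3']
```

    Filtering on subsumption rather than on exact class labels is what lets domain knowledge shrink the dataset meaningfully before mining.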

  6. Improving Interpretation of Cardiac Phenotypes and Enhancing Discovery With Expanded Knowledge in the Gene Ontology

    PubMed Central

    Roncaglia, Paola; Howe, Douglas G.; Laulederkind, Stanley J.F.; Khodiyar, Varsha K.; Berardini, Tanya Z.; Tweedie, Susan; Foulger, Rebecca E.; Osumi-Sutherland, David; Campbell, Nancy H.; Huntley, Rachael P.; Talmud, Philippa J.; Blake, Judith A.; Breckenridge, Ross; Riley, Paul R.; Lambiase, Pier D.; Elliott, Perry M.; Clapp, Lucie; Tinker, Andrew; Hill, David P.

    2018-01-01

    Background: A systems biology approach to cardiac physiology requires a comprehensive representation of how coordinated processes operate in the heart, as well as the ability to interpret relevant transcriptomic and proteomic experiments. The Gene Ontology (GO) Consortium provides structured, controlled vocabularies of biological terms that can be used to summarize and analyze functional knowledge for gene products. Methods and Results: In this study, we created a computational resource to facilitate genetic studies of cardiac physiology by integrating literature curation with attention to an improved and expanded ontological representation of heart processes in the Gene Ontology. As a result, the Gene Ontology now contains terms that comprehensively describe the roles of proteins in cardiac muscle cell action potential, electrical coupling, and the transmission of the electrical impulse from the sinoatrial node to the ventricles. Evaluating the effectiveness of this approach to inform data analysis demonstrated that Gene Ontology annotations, analyzed within an expanded ontological context of heart processes, can help to identify candidate genes associated with arrhythmic disease risk loci. Conclusions: We determined that a combination of curation and ontology development for heart-specific genes and processes supports the identification and downstream analysis of genes responsible for the spread of the cardiac action potential through the heart. Annotating these genes and processes in a structured format facilitates data analysis and supports effective retrieval of gene-centric information about cardiac defects. PMID:29440116

  7. Improving Interpretation of Cardiac Phenotypes and Enhancing Discovery With Expanded Knowledge in the Gene Ontology.

    PubMed

    Lovering, Ruth C; Roncaglia, Paola; Howe, Douglas G; Laulederkind, Stanley J F; Khodiyar, Varsha K; Berardini, Tanya Z; Tweedie, Susan; Foulger, Rebecca E; Osumi-Sutherland, David; Campbell, Nancy H; Huntley, Rachael P; Talmud, Philippa J; Blake, Judith A; Breckenridge, Ross; Riley, Paul R; Lambiase, Pier D; Elliott, Perry M; Clapp, Lucie; Tinker, Andrew; Hill, David P

    2018-02-01

    A systems biology approach to cardiac physiology requires a comprehensive representation of how coordinated processes operate in the heart, as well as the ability to interpret relevant transcriptomic and proteomic experiments. The Gene Ontology (GO) Consortium provides structured, controlled vocabularies of biological terms that can be used to summarize and analyze functional knowledge for gene products. In this study, we created a computational resource to facilitate genetic studies of cardiac physiology by integrating literature curation with attention to an improved and expanded ontological representation of heart processes in the Gene Ontology. As a result, the Gene Ontology now contains terms that comprehensively describe the roles of proteins in cardiac muscle cell action potential, electrical coupling, and the transmission of the electrical impulse from the sinoatrial node to the ventricles. Evaluating the effectiveness of this approach to inform data analysis demonstrated that Gene Ontology annotations, analyzed within an expanded ontological context of heart processes, can help to identify candidate genes associated with arrhythmic disease risk loci. We determined that a combination of curation and ontology development for heart-specific genes and processes supports the identification and downstream analysis of genes responsible for the spread of the cardiac action potential through the heart. Annotating these genes and processes in a structured format facilitates data analysis and supports effective retrieval of gene-centric information about cardiac defects. © 2018 The Authors.

  8. Ontology of physics for biology: representing physical dependencies as a basis for biological processes

    PubMed Central

    2013-01-01

    Background In prior work, we presented the Ontology of Physics for Biology (OPB) as a computational ontology for use in the annotation and representation of biophysical knowledge encoded in repositories of physics-based biosimulation models. We introduced OPB:Physical entity and OPB:Physical property classes that extend available spatiotemporal representations of physical entities and processes to explicitly represent the thermodynamics and dynamics of physiological processes. Our utilitarian, long-term aim is to develop computational tools for creating and querying formalized physiological knowledge for use by multiscale “physiome” projects such as the EU’s Virtual Physiological Human (VPH) and NIH’s Virtual Physiological Rat (VPR). Results Here we describe the OPB:Physical dependency taxonomy of classes that represents the laws of classical physics, the “rules” by which physical properties of physical entities change during occurrences of physical processes. For example, the fluid analog of Ohm’s law (as for electric currents) is used to describe how a blood flow rate depends on a blood pressure gradient. Hooke’s law (as in elastic deformations of springs) is used to describe how an increase in vascular volume increases blood pressure. We classify such dependencies according to the flow, transformation, and storage of thermodynamic energy that occurs during processes governed by the dependencies. Conclusions We have developed the OPB and annotation methods to represent the meaning—the biophysical semantics—of the mathematical statements of physiological analysis and the biophysical content of models and datasets. Here we describe and discuss our approach to an ontological representation of physical laws (as dependencies) and properties as encoded for the mathematical analysis of biophysical processes. PMID:24295137
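
    The two dependencies named in the abstract can be made concrete in a short sketch. This is only an illustration of the idea, not OPB syntax; the function names and numeric values are hypothetical.

```python
# Illustrative sketch (not the OPB itself): two OPB-style physical
# dependencies expressed as rules stating how one physical property
# of an entity changes as a function of others.

def fluid_ohms_law(pressure_gradient_pa, resistance_pa_s_per_m3):
    """Fluid analog of Ohm's law: blood flow rate driven by a pressure gradient."""
    return pressure_gradient_pa / resistance_pa_s_per_m3

def elastic_dependency(volume_increase_m3, compliance_m3_per_pa):
    """Hooke's-law-style dependency: pressure rise from a vascular volume increase."""
    return volume_increase_m3 / compliance_m3_per_pa

flow = fluid_ohms_law(1000.0, 8000.0)            # 0.125 m^3/s
pressure_rise = elastic_dependency(0.002, 1e-6)  # about 2000 Pa
print(flow, pressure_rise)
```

    A hypothetical 1000 Pa gradient across a vascular resistance of 8000 Pa·s/m³ thus yields a flow of 0.125 m³/s; the point of the OPB is to classify such dependency rules, not to compute them.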

  9. Ontology aided modeling of organic reaction mechanisms with flexible and fragment based XML markup procedures.

    PubMed

    Sankar, Punnaivanam; Aghila, Gnanasekaran

    2007-01-01

    Mechanism models for primary organic reactions are developed that encode the structural fragments undergoing substitution, addition, elimination, and rearrangement. In the proposed models, every structural component of a mechanistic pathway is represented with a flexible, fragment-based markup technique in XML syntax. A significant feature of the system is the encoding of electron movements along with the other components, such as charges, partial charges, half-bonded species, lone-pair electrons, free radicals, and reaction arrows, needed for a complete representation of a reaction mechanism. Reaction schemes described with the proposed methodology are rendered with a concise XML extension language that interoperates with the structure markup. The reaction schemes are visualized as 2D graphics in a browser by converting them into SVG documents, enabling the layouts conventionally used by chemists. An automatic representation of complex reaction-mechanism patterns is achieved by reusing knowledge from chemical ontologies and developing axiom-based artificial intelligence components.
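
    A fragment-based mechanism markup of this general flavor can be sketched with the standard library. All element and attribute names below are hypothetical illustrations, not the authors' actual schema; the second fragment referenced by the electron-flow arrow is elided.

```python
# Sketch of a fragment-based XML markup for one mechanism step, in the
# spirit of the abstract above. Element/attribute names are invented.
import xml.etree.ElementTree as ET

step = ET.Element("mechanismStep", kind="nucleophilic-substitution")
frag = ET.SubElement(step, "fragment", id="f1", smiles="[OH-]")
ET.SubElement(frag, "charge", value="-1")
# Electron movement is encoded explicitly, alongside charges etc.
ET.SubElement(step, "electronFlow", source="f1", target="f2", electrons="2")

xml_text = ET.tostring(step, encoding="unicode")
print(xml_text)
```

    A renderer would then translate such fragments into SVG for display, as the abstract describes.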

  10. An empirical analysis of ontology reuse in BioPortal.

    PubMed

    Ochs, Christopher; Perl, Yehoshua; Geller, James; Arabandi, Sivaram; Tudorache, Tania; Musen, Mark A

    2017-07-01

    Biomedical ontologies often reuse content (i.e., classes and properties) from other ontologies. Content reuse enables a consistent representation of a domain and reusing content can save an ontology author significant time and effort. Prior studies have investigated the existence of reused terms among the ontologies in the NCBO BioPortal, but no study has yet investigated how the ontologies in BioPortal utilize reused content in the modeling of their own content. In this study we investigate how 355 ontologies hosted in the NCBO BioPortal reuse content from other ontologies for the purposes of creating new ontology content. We identified 197 ontologies that reuse content. Among these ontologies, 108 utilize reused classes in the modeling of their own classes and 116 utilize reused properties in class restrictions. Current utilization of reuse and quality issues related to reuse are discussed. Copyright © 2017 Elsevier Inc. All rights reserved.
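
    One simple heuristic for detecting reused content, sketched below under the assumption of OBO-style PURLs, is to compare each class IRI's namespace prefix with the host ontology's prefix; the IRIs are illustrative, and this is not the authors' actual method.

```python
# Toy reuse detector: a class whose OBO-style IRI prefix differs from the
# host ontology's prefix is treated as reused from another ontology.

def iri_prefix(iri):
    # OBO PURLs look like http://purl.obolibrary.org/obo/GO_0008150
    return iri.rsplit("/", 1)[-1].split("_", 1)[0]

def reused_classes(host_prefix, class_iris):
    return [i for i in class_iris if iri_prefix(i) != host_prefix]

classes = [
    "http://purl.obolibrary.org/obo/PORO_0000019",
    "http://purl.obolibrary.org/obo/UBERON_0000468",
    "http://purl.obolibrary.org/obo/GO_0008150",
]
print(reused_classes("PORO", classes))  # the UBERON and GO classes
```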

  11. The Porifera Ontology (PORO): enhancing sponge systematics with an anatomy ontology.

    PubMed

    Thacker, Robert W; Díaz, Maria Cristina; Kerner, Adeline; Vignes-Lebbe, Régine; Segerdell, Erik; Haendel, Melissa A; Mungall, Christopher J

    2014-01-01

    Porifera (sponges) are ancient basal metazoans that lack organs. They provide insight into key evolutionary transitions, such as the emergence of multicellularity and the nervous system. In addition, their ability to synthesize unusual compounds offers potential biotechnical applications. However, much of the knowledge of these organisms has not previously been codified in a machine-readable way using modern web standards. The Porifera Ontology is intended as a standardized coding system for sponge anatomical features currently used in systematics. The ontology is available from http://purl.obolibrary.org/obo/poro.owl, or from the project homepage http://porifera-ontology.googlecode.com/. The version referred to in this manuscript is permanently available from http://purl.obolibrary.org/obo/poro/releases/2014-03-06/. By standardizing character representations, we hope to facilitate more rapid description and identification of sponge taxa, to allow integration with other evolutionary database systems, and to perform character mapping across the major clades of sponges to better understand the evolution of morphological features. Future applications of the ontology will focus on creating (1) ontology-based species descriptions; (2) taxonomic keys that use the nested terms of the ontology to more quickly facilitate species identifications; and (3) methods to map anatomical characters onto molecular phylogenies of sponges. In addition to modern taxa, the ontology is being extended to include features of fossil taxa.

  12. Text categorization of biomedical data sets using graph kernels and a controlled vocabulary.

    PubMed

    Bleik, Said; Mishra, Meenakshi; Huan, Jun; Song, Min

    2013-01-01

    Recently, graph representations of text have been showing improved performance over conventional bag-of-words representations in text categorization applications. In this paper, we present a graph-based representation for biomedical articles and use graph kernels to classify those articles into high-level categories. In our representation, common biomedical concepts and semantic relationships are identified with the help of an existing ontology and are used to build a rich graph structure that provides a consistent feature set and preserves additional semantic information that could improve a classifier's performance. We attempt to classify the graphs using both a set-based graph kernel that is capable of dealing with the disconnected nature of the graphs and a simple linear kernel. Finally, we report the results comparing the classification performance of the kernel classifiers to common text-based classifiers.
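
    The idea of a set-based kernel over concept graphs can be illustrated with a minimal stand-in: represent each article's graph by its set of (subject, relation, object) edges and compare two graphs by a normalized edge-set intersection. The concepts below are invented examples, and this is a simplification of the kernels the paper evaluates.

```python
# Minimal set-based kernel over concept graphs: cosine-style normalized
# overlap between the edge sets of two (possibly disconnected) graphs.

def set_kernel(edges_a, edges_b):
    a, b = set(edges_a), set(edges_b)
    if not a or not b:
        return 0.0
    return len(a & b) / (len(a) * len(b)) ** 0.5

doc1 = {("aspirin", "treats", "inflammation"), ("aspirin", "isa", "nsaid")}
doc2 = {("ibuprofen", "isa", "nsaid"), ("aspirin", "isa", "nsaid")}
print(set_kernel(doc1, doc2))  # 0.5: one shared edge out of 2 and 2
```

    Because the kernel only inspects set membership, it is insensitive to the disconnected structure of the graphs, which is the property the abstract highlights.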

  13. OAE: The Ontology of Adverse Events.

    PubMed

    He, Yongqun; Sarntivijai, Sirarat; Lin, Yu; Xiang, Zuoshuang; Guo, Abra; Zhang, Shelley; Jagannathan, Desikan; Toldo, Luca; Tao, Cui; Smith, Barry

    2014-01-01

    A medical intervention is a medical procedure or application intended to relieve or prevent illness or injury. Examples of medical interventions include vaccination and drug administration. After a medical intervention, adverse events (AEs) may occur which lie outside the intended consequences of the intervention. The representation and analysis of AEs are critical to the improvement of public health. The Ontology of Adverse Events (OAE), previously named Adverse Event Ontology (AEO), is a community-driven ontology developed to standardize and integrate data relating to AEs arising subsequent to medical interventions, as well as to support computer-assisted reasoning. OAE has over 3,000 terms with unique identifiers, including terms imported from existing ontologies and more than 1,800 OAE-specific terms. In OAE, the term 'adverse event' denotes a pathological bodily process in a patient that occurs after a medical intervention. Causal adverse events are defined by OAE as those events that are causal consequences of a medical intervention. OAE represents various adverse events based on patient anatomic regions and clinical outcomes, including symptoms, signs, and abnormal processes. OAE has been used in the analysis of several different sorts of vaccine and drug adverse event data. For example, using the data extracted from the Vaccine Adverse Event Reporting System (VAERS), OAE was used to analyse vaccine adverse events associated with the administrations of different types of influenza vaccines. OAE has also been used to represent and classify the vaccine adverse events cited in package inserts of FDA-licensed human vaccines in the USA. OAE is a biomedical ontology that logically defines and classifies various adverse events occurring after medical interventions. OAE has successfully been applied in several adverse event studies. The OAE ontological framework provides a platform for systematic representation and analysis of adverse events and of the factors (e.g., vaccinee age) important for determining their clinical outcomes.

  14. A semantically-aided architecture for a web-based monitoring system for carotid atherosclerosis.

    PubMed

    Kolias, Vassileios D; Stamou, Giorgos; Golemati, Spyretta; Stoitsis, Giannis; Gkekas, Christos D; Liapis, Christos D; Nikita, Konstantina S

    2015-08-01

    Carotid atherosclerosis is a multifactorial disease and its clinical diagnosis depends on the evaluation of heterogeneous clinical data, such as imaging exams, biochemical tests and the patient's clinical history. The lack of interoperability between Health Information Systems (HIS) prevents physicians from acquiring all the necessary data for the diagnostic process. In this paper, a semantically-aided architecture is proposed for a web-based monitoring system for carotid atherosclerosis that is able to gather and unify heterogeneous data with the use of an ontology and to create a common interface for data access, enhancing the interoperability of HIS. The architecture is based on an application ontology of carotid atherosclerosis that is used to (a) integrate heterogeneous data sources on the basis of semantic representation and ontological reasoning and (b) access the critical information using SPARQL query rewriting and ontology-based data access services. The architecture was tested over a carotid atherosclerosis dataset consisting of the imaging exams and the clinical profile of 233 patients, using a set of complex queries constructed by the physicians. The proposed architecture was evaluated with respect to the complexity of the queries that the physicians could make and the retrieval speed. It gave promising results in terms of interoperability, ontology-based integration of heterogeneous data sources, and expanded query and retrieval capabilities in HIS.
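
    The role the ontology plays in query answering can be illustrated with a toy expansion step: a clinician's high-level term is rewritten into its ontology subclasses before matching records. This is a drastically simplified stand-in for SPARQL query rewriting; the ontology, exam names, and records are all hypothetical.

```python
# Toy ontology-aided query expansion: rewrite a high-level query term
# into itself plus its subclasses, then match patient records.

subclass_of = {
    "echo-exam": "imaging-exam",
    "ultrasound-exam": "imaging-exam",
    "ldl-test": "biochemical-test",
}

def expand(term):
    """The term plus all direct subclasses recorded in the ontology."""
    return {term} | {c for c, p in subclass_of.items() if p == term}

records = [("patient-001", "echo-exam"), ("patient-002", "ldl-test")]

def query(term):
    wanted = expand(term)
    return [pid for pid, exam in records if exam in wanted]

print(query("imaging-exam"))  # ['patient-001']
```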

  15. ATOS-1: Designing the infrastructure for an advanced spacecraft operations system

    NASA Technical Reports Server (NTRS)

    Poulter, K. J.; Smith, H. N.

    1993-01-01

    The space industry has identified the need to use artificial intelligence and knowledge based system techniques as integrated, central, symbolic processing components of future mission design, support and operations systems. Various practical and commercial constraints require that off-the-shelf applications, and their knowledge bases, are reused where appropriate and that different mission contractors, potentially using different KBS technologies, can provide application and knowledge sub-modules of an overall integrated system. In order to achieve this integration, which we call knowledge sharing and distributed reasoning, there needs to be agreement on knowledge representations, knowledge interchange-formats, knowledge level communications protocols, and ontology. Research indicates that the latter is most important, providing the applications with a common conceptualization of the domain, in our case spacecraft operations, mission design, and planning. Agreement on ontology permits applications that employ different knowledge representations to interwork through mediators which we refer to as knowledge agents. This creates the illusion of a shared model without the constraints, both technical and commercial, that occur in centralized or uniform architectures. This paper explains how these matters are being addressed within the ATOS program at ESOC, using techniques which draw upon ideas and standards emerging from the DARPA Knowledge Sharing Effort. In particular, we explain how the project is developing an electronic Ontology of Spacecraft Operations and how this can be used as an enabling component within space support systems that employ advanced software engineering. We indicate our hope and expectation that the core ontology developed in ATOS, will permit the full development of standards for such systems throughout the space industry.

  16. Model Based Verification of Cyber Range Event Environments

    DTIC Science & Technology

    2015-12-10

    Model Based Verification of Cyber Range Event Environments. Suresh K. Damodaran, MIT Lincoln Laboratory, 244 Wood St., Lexington, MA, USA ... apply model based verification to cyber range event environment configurations, allowing for the early detection of errors in event environment ... Environment Representation (CCER) ontology. We also provide an overview of a methodology to specify verification rules and the corresponding error

  17. A knowledge representation view on biomedical structure and function.

    PubMed Central

    Schulz, Stefan; Hahn, Udo

    2002-01-01

    In biomedical ontologies, structural and functional considerations are of outstanding importance, and concepts which belong to these two categories are highly interdependent. At the representational level both axes must be clearly kept separate in order to support disciplined ontology engineering. Furthermore, the biaxial organization of physical structure (both by a taxonomic and partonomic order) entails intricate patterns of inference. We here propose a layered encoding of taxonomic, partonomic and functional aspects of biomedical concepts using description logics. PMID:12463912
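
    The paper's separation of taxonomic and partonomic axes can be sketched as two distinct relations, with part-of closed transitively so that, for example, a part of the left ventricle is inferred to be a part of the heart. The tiny anatomy model is illustrative and assumes each part has a single whole.

```python
# Layered taxonomic/partonomic reasoning sketch: is-a and part-of are
# kept separate, and part-of is traversed transitively.

is_a = {"left-ventricle": "heart-chamber"}
part_of = {"mitral-valve": "left-ventricle", "left-ventricle": "heart"}

def all_parts_of(x):
    """Transitive closure of part-of, starting from x (single-parent chain)."""
    wholes, cur = set(), x
    while cur in part_of:
        cur = part_of[cur]
        wholes.add(cur)
    return wholes

print(all_parts_of("mitral-valve"))
```

    Keeping the two relations in separate maps mirrors the paper's point that the structural axes must not be conflated at the representational level.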

  18. A semantic-web oriented representation of the clinical element model for secondary use of electronic health records data.

    PubMed

    Tao, Cui; Jiang, Guoqian; Oniki, Thomas A; Freimuth, Robert R; Zhu, Qian; Sharma, Deepak; Pathak, Jyotishman; Huff, Stanley M; Chute, Christopher G

    2013-05-01

    The clinical element model (CEM) is an information model designed for representing clinical information in electronic health records (EHR) systems across organizations. The current representation of CEMs does not support formal semantic definitions and therefore it is not possible to perform reasoning and consistency checking on derived models. This paper introduces our efforts to represent the CEM specification using the Web Ontology Language (OWL). The CEM-OWL representation connects the CEM content with the Semantic Web environment, which provides authoring, reasoning, and querying tools. This work may also facilitate the harmonization of the CEMs with domain knowledge represented in terminology models as well as other clinical information models such as the openEHR archetype model. We have created the CEM-OWL meta ontology based on the CEM specification. A convertor has been implemented in Java to automatically translate detailed CEMs from XML to OWL. A panel evaluation has been conducted, and the results show that the OWL modeling can faithfully represent the CEM specification and represent patient data.

  19. A semantic-web oriented representation of the clinical element model for secondary use of electronic health records data

    PubMed Central

    Tao, Cui; Jiang, Guoqian; Oniki, Thomas A; Freimuth, Robert R; Zhu, Qian; Sharma, Deepak; Pathak, Jyotishman; Huff, Stanley M; Chute, Christopher G

    2013-01-01

    The clinical element model (CEM) is an information model designed for representing clinical information in electronic health records (EHR) systems across organizations. The current representation of CEMs does not support formal semantic definitions and therefore it is not possible to perform reasoning and consistency checking on derived models. This paper introduces our efforts to represent the CEM specification using the Web Ontology Language (OWL). The CEM-OWL representation connects the CEM content with the Semantic Web environment, which provides authoring, reasoning, and querying tools. This work may also facilitate the harmonization of the CEMs with domain knowledge represented in terminology models as well as other clinical information models such as the openEHR archetype model. We have created the CEM-OWL meta ontology based on the CEM specification. A convertor has been implemented in Java to automatically translate detailed CEMs from XML to OWL. A panel evaluation has been conducted, and the results show that the OWL modeling can faithfully represent the CEM specification and represent patient data. PMID:23268487
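
    The XML-to-OWL translation step can be sketched in miniature: read a toy CEM-like XML fragment and emit class and property axioms as Turtle-style strings. The element names and output vocabulary are invented for illustration and do not reproduce the actual CEM-OWL meta ontology or the Java converter.

```python
# Toy XML-to-OWL translation: each model type becomes an OWL class and
# each qualifier becomes a property assertion (Turtle-style strings).
import xml.etree.ElementTree as ET

cem_xml = """
<cetype name="SystolicBloodPressureMeas">
  <data type="PhysicalQuantity"/>
  <qualifier name="BodyPosition"/>
</cetype>
"""

def cem_to_owl(xml_text):
    root = ET.fromstring(xml_text)
    cls = root.get("name")
    triples = [f"cem:{cls} rdf:type owl:Class ."]
    for q in root.findall("qualifier"):
        triples.append(f"cem:{cls} cem:hasQualifier cem:{q.get('name')} .")
    return triples

for t in cem_to_owl(cem_xml):
    print(t)
```

    Once the models live in OWL, standard reasoners can perform the consistency checking that the XML representation could not support.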

  20. Constructing a Geology Ontology Using a Relational Database

    NASA Astrophysics Data System (ADS)

    Hou, W.; Yang, L.; Yin, S.; Ye, J.; Clarke, K.

    2013-12-01

    In the geology community, the creation of a common geology ontology has become a useful means to solve problems of data integration, knowledge transformation and the interoperation of multi-source, heterogeneous and multiple scale geological data. Currently, human-computer interaction methods and relational database-based methods are the primary ontology construction methods. Some human-computer interaction methods such as the Geo-rule based method, the ontology life cycle method and the module design method have been proposed for applied geological ontologies. Essentially, the relational database-based method is a reverse-engineering process that abstracts semantic information from an existing database. The key is to construct rules for the transformation of database entities into the ontology. Relative to the human-computer interaction method, relational database-based methods can use existing resources and the stated semantic relationships among geological entities. However, two problems challenge their development and application. One is the transformation of multiple inheritances and nested relationships and their representation in an ontology. The other is that most of these methods do not measure the semantic retention of the transformation process. In this study, we focused on constructing a rule set to convert the semantics in a geological database into a geological ontology. According to the relational schema of a geological database, a conversion approach is presented to convert a geological spatial database to an OWL-based geological ontology, which is based on identifying semantics such as entities, relationships, inheritance relationships, nested relationships and cluster relationships. The semantic integrity of the transformation was verified using an inverse mapping process. In the geological ontology, inheritance and union operations between superclasses and subclasses were used to represent the nested relationships in a geochronology and the multiple-inheritance relationships. Based on a Quaternary database of the downtown of Foshan City, Guangdong Province, in Southern China, a geological ontology was constructed using the proposed method. To measure how well semantics were maintained in the conversion process and its results, an inverse mapping from the ontology to a relational database was tested based on a proposed conversion rule. The comparison of schemas and entities, and the reduction of tables, between the inverse database and the original database illustrated that the proposed method retains the semantic information well during the conversion process. An application for abstracting sandstone information showed that semantic relationships among concepts in the geological database were successfully reorganized in the constructed ontology. Key words: geological ontology; geological spatial database; multiple inheritance; OWL. Acknowledgement: This research is jointly funded by the Specialized Research Fund for the Doctoral Program of Higher Education of China (RFDP) (20100171120001), NSFC (41102207) and the Fundamental Research Funds for the Central Universities (12lgpy19).
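
    The basic direction of such a conversion rule set can be sketched with a toy SQLite schema: each table maps to an OWL class and each foreign key to an object property. The geological schema and the `geo:` vocabulary below are invented stand-ins, and this covers only the simplest rules (not nested or multiple-inheritance relationships).

```python
# Toy relational-schema-to-ontology conversion: tables become OWL
# classes, foreign keys become object properties (Turtle-style strings).
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE formation (id INTEGER PRIMARY KEY, name TEXT);
CREATE TABLE stratum (
    id INTEGER PRIMARY KEY,
    formation_id INTEGER REFERENCES formation(id)
);
""")

def schema_to_ontology(conn):
    axioms = []
    tables = [r[0] for r in conn.execute(
        "SELECT name FROM sqlite_master WHERE type='table'")]
    for t in tables:
        axioms.append(f"geo:{t} rdf:type owl:Class .")
        for fk in conn.execute(f"PRAGMA foreign_key_list({t})"):
            target = fk[2]  # the referenced table
            axioms.append(f"geo:has_{target} rdf:type owl:ObjectProperty .")
    return axioms

for a in schema_to_ontology(conn):
    print(a)
```

    An inverse mapping, as the abstract describes, would regenerate tables from the classes and properties and compare the two schemas to measure semantic retention.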

  1. Automated Database Mediation Using Ontological Metadata Mappings

    PubMed Central

    Marenco, Luis; Wang, Rixin; Nadkarni, Prakash

    2009-01-01

    Objective To devise an automated approach for integrating federated database information using database ontologies constructed from their extended metadata. Background One challenge of database federation is that the granularity of representation of equivalent data varies across systems. Dealing effectively with this problem is analogous to dealing with precoordinated vs. postcoordinated concepts in biomedical ontologies. Model Description The authors describe an approach based on ontological metadata mapping rules defined with elements of a global vocabulary, which allows a query specified at one granularity level to fetch data, where possible, from databases within the federation that use different granularities. This is implemented in OntoMediator, a newly developed production component of our previously described Query Integrator System. OntoMediator's operation is illustrated with a query that accesses three geographically separate, interoperating databases. An example based on SNOMED also illustrates the applicability of high-level rules to support the enforcement of constraints that can prevent inappropriate curator or power-user actions. Summary A rule-based framework simplifies the design and maintenance of systems where categories of data must be mapped to each other, for the purpose of either cross-database query or for curation of the contents of compositional controlled vocabularies. PMID:19567801

  2. Information and organization in public health institutes: an ontology-based modeling of the entities in the reception-analysis-report phases.

    PubMed

    Pozza, Giandomenico; Borgo, Stefano; Oltramari, Alessandro; Contalbrigo, Laura; Marangon, Stefano

    2016-09-08

    Ontologies are widely used both in the life sciences and in the management of public and private companies. Typically, the different offices in an organization develop their own models and related ontologies to capture specific tasks and goals. Although there might be an overall coordination, the use of distinct ontologies can jeopardize the integration of data across the organization since data sharing and reusability are sensitive to modeling choices. The paper provides a study of the entities that are typically found at the reception, analysis and report phases in public institutes in the life science domain. Ontological considerations and techniques are introduced and their implementation exemplified by studying the Istituto Zooprofilattico Sperimentale delle Venezie (IZSVe), a public veterinary institute with multiple geographical locations and several laboratories. Different modeling issues are discussed, such as the identification and characterization of the main entities in these phases; the classification of the (types of) data; and the clarification of the contexts and the roles of the involved entities. The study is based on a foundational ontology and shows how it can be extended to a comprehensive and coherent framework comprising the institute's different roles, processes and data. In particular, it shows how to use notions lying at the borderline between ontology and applications, like that of knowledge object. The paper aims to help the modeler to understand the core viewpoint of the organization and to improve data transparency. The study shows that the entities at play can be analyzed within a single ontological perspective allowing us to isolate a single ontological framework for the whole organization. This facilitates the development of coherent representations of the entities and related data, and fosters the use of integrated software for data management and reasoning across the company.

  3. tOWL: a temporal Web Ontology Language.

    PubMed

    Milea, Viorel; Frasincar, Flavius; Kaymak, Uzay

    2012-02-01

    Through its interoperability and reasoning capabilities, the Semantic Web opens a realm of possibilities for developing intelligent systems on the Web. The Web Ontology Language (OWL) is the most expressive standard language for modeling ontologies, the cornerstone of the Semantic Web. However, up until now, no standard way of expressing time and time-dependent information in OWL has been provided. In this paper, we present a temporal extension of the very expressive fragment SHIN(D) of the OWL Description Logic language, resulting in the temporal OWL (tOWL) language. Through a layered approach, we introduce three extensions: 1) concrete domains, which allow the representation of restrictions using concrete domain binary predicates; 2) temporal representation, which introduces time points, relations between time points, intervals, and Allen's 13 interval relations into the language; and 3) timeslices/fluents, which implement a perdurantist view on individuals and allow for the representation of complex temporal aspects, such as process state transitions. We illustrate the expressiveness of the newly introduced language by using an example from the financial domain.
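
    The Allen interval relations introduced in the temporal layer are fully determined by interval endpoints, which the following stand-alone sketch classifies. It illustrates the semantics of the 13 relations only; it is not tOWL syntax.

```python
# Classify the Allen relation of interval a to interval b, where each
# interval is a pair (start, end) with start < end.

def allen_relation(a, b):
    (as_, ae), (bs, be) = a, b
    if ae < bs: return "before"
    if be < as_: return "after"
    if ae == bs: return "meets"
    if be == as_: return "met-by"
    if (as_, ae) == (bs, be): return "equal"
    if as_ == bs: return "starts" if ae < be else "started-by"
    if ae == be: return "finishes" if as_ > bs else "finished-by"
    if bs < as_ and ae < be: return "during"
    if as_ < bs and be < ae: return "contains"
    return "overlaps" if as_ < bs else "overlapped-by"

print(allen_relation((1, 3), (3, 6)))  # meets
print(allen_relation((2, 5), (1, 9)))  # during
```

    In tOWL these relations become binary predicates over intervals, enabling reasoning such as ordering the states of a process.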

  4. TrhOnt: building an ontology to assist rehabilitation processes.

    PubMed

    Berges, Idoia; Antón, David; Bermúdez, Jesús; Goñi, Alfredo; Illarramendi, Arantza

    2016-10-04

    One of the current research efforts in the area of biomedicine is the representation of knowledge in a structured way so that reasoning can be performed on it. More precisely, in the field of physiotherapy, information such as the physiotherapy record of a patient or treatment protocols for specific disorders must be adequately modeled, because they play a relevant role in the management of the evolutionary recovery process of a patient. In this scenario, we introduce TrhOnt, an application ontology that can assist physiotherapists in the management of the patients' evolution via reasoning supported by semantic technology. The ontology was developed following the NeOn Methodology. It integrates knowledge from ontological (e.g. the FMA ontology) and non-ontological resources (e.g. a database of movements, exercises and treatment protocols) as well as additional physiotherapy-related knowledge. We demonstrate how the ontology fulfills the purpose of providing a reference model for the representation of the physiotherapy-related information that is needed for the whole physiotherapy treatment of patients, from the moment they first step into the physiotherapist's office until they are discharged. More specifically, we present the results for each of the intended uses of the ontology listed in the document that specifies its requirements, and show how TrhOnt can answer the competency questions defined within that document. Moreover, we detail the main steps of the process followed to build the TrhOnt ontology in order to facilitate its reproducibility in a similar context. Finally, we show an evaluation of the ontology from different perspectives. TrhOnt has achieved the purpose of allowing for a reasoning process that changes over time according to the patient's state and performance.

  5. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jarocki, John Charles; Zage, David John; Fisher, Andrew N.

    LinkShop is a software tool for applying the method of Linkography to the analysis of time-sequence data. LinkShop provides command line, web, and application programming interfaces (API) for input and processing of time-sequence data, abstraction models, and ontologies. The software creates graph representations of the abstraction model, ontology, and derived linkograph. Finally, the tool allows the user to perform statistical measurements of the linkograph and refine the ontology through direct manipulation of the linkograph.
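
    The core construction can be sketched in a few lines: events are abstracted to classes, and an ontology of class-to-class rules decides which earlier events each event links back to. The event stream, abstraction model, and ontology below are invented examples, not LinkShop's data model.

```python
# Toy linkograph construction: link event i to a later event j whenever
# the ontology permits a link between their abstraction classes.

abstraction = {"scan": "recon", "exploit": "attack",
               "login": "access", "exfil": "attack"}
ontology = {("recon", "attack"), ("access", "attack")}  # allowed links

def linkograph(events):
    links = []
    for j, later in enumerate(events):
        for i in range(j):
            if (abstraction[events[i]], abstraction[later]) in ontology:
                links.append((i, j))
    return links

print(linkograph(["scan", "login", "exploit", "exfil"]))
# [(0, 2), (1, 2), (0, 3), (1, 3)]
```

    Refining the ontology (adding or removing allowed class pairs) directly changes which links appear, which is the feedback loop the tool supports.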

  6. ADO: a disease ontology representing the domain knowledge specific to Alzheimer's disease.

    PubMed

    Malhotra, Ashutosh; Younesi, Erfan; Gündel, Michaela; Müller, Bernd; Heneka, Michael T; Hofmann-Apitius, Martin

    2014-03-01

    Biomedical ontologies offer the capability to structure and represent domain-specific knowledge semantically. Disease-specific ontologies can facilitate knowledge exchange across multiple disciplines, and ontology-driven mining approaches can generate great value for modeling disease mechanisms. However, in the case of neurodegenerative diseases such as Alzheimer's disease, there is a lack of formal representation of the relevant knowledge domain. The Alzheimer's disease ontology (ADO) is constructed in accordance with the ontology building life cycle. The Protégé OWL editor was used as a tool for building ADO in Ontology Web Language format. ADO was developed with the purpose of containing information relevant to four main biological views (preclinical, clinical, etiological, and molecular/cellular mechanisms) and was enriched by adding synonyms and references. Validation of the lexicalized ontology by means of named entity recognition-based methods showed a satisfactory performance (F score = 72%). In addition to structural and functional evaluation, a clinical expert in the field performed a manual evaluation and curation of ADO. Through integration of ADO into an information retrieval environment, we show that the ontology supports semantic search in scientific text. The usefulness of ADO is authenticated by dedicated use case scenarios. The development of ADO as an open ontology is a first attempt to organize information related to Alzheimer's disease in a formalized, structured manner. We demonstrate that ADO is able to capture both established and scattered knowledge existing in scientific text. Copyright © 2014 The Alzheimer's Association. Published by Elsevier Inc. All rights reserved.

  7. VuWiki: An Ontology-Based Semantic Wiki for Vulnerability Assessments

    NASA Astrophysics Data System (ADS)

    Khazai, Bijan; Kunz-Plapp, Tina; Büscher, Christian; Wegner, Antje

    2014-05-01

    The concept of vulnerability, as well as its implementation in vulnerability assessments, is used in various disciplines and contexts ranging from disaster management and reduction to ecology, public health or climate change and adaptation, and a corresponding multitude of ideas about how to conceptualize and measure vulnerability exists. Three decades of research in vulnerability have generated a complex and growing body of knowledge that challenges newcomers, practitioners and even experienced researchers. To provide a structured representation of the knowledge field "vulnerability assessment", we have set up an ontology-based semantic wiki for reviewing and representing vulnerability assessments: VuWiki, www.vuwiki.org. Based on a survey of 55 vulnerability assessment studies, we first developed an ontology as an explicit reference system for describing vulnerability assessments. We developed the ontology in a theoretically controlled manner based on general systems theory and guided by principles for ontology development in the field of earth and environment (Raskin and Pan 2005). Four key questions form the first level "branches" or categories of the developed ontology: (1) Vulnerability of what? (2) Vulnerability to what? (3) What reference framework was used in the vulnerability assessment?, and (4) What methodological approach was used in the vulnerability assessment? These questions correspond to the basic, abstract structure of the knowledge domain of vulnerability assessments and have been deduced from theories and concepts of various disciplines. The ontology was then implemented in a semantic wiki which allows for the classification and annotation of vulnerability assessments. As a semantic wiki, VuWiki does not aim at "synthesizing" a holistic and overarching model of vulnerability. 
Instead, it provides both scientists and practitioners with a uniform ontology as a reference system and offers easy, structured access to the knowledge field of vulnerability assessments, allowing any user to retrieve assessments using specific research criteria. Furthermore, VuWiki can serve as a collaborative knowledge platform that allows for the active participation of those generating and using the knowledge represented in the wiki.

  8. A methodology to migrate the gene ontology to a description logic environment using DAML+OIL.

    PubMed

    Wroe, C J; Stevens, R; Goble, C A; Ashburner, M

    2003-01-01

    The Gene Ontology Next Generation Project (GONG) is developing a staged methodology to evolve the current representation of the Gene Ontology into DAML+OIL in order to take advantage of the richer formal expressiveness and the reasoning capabilities of the underlying description logic. Each stage provides a stepwise increase in formal, explicit semantic content with a view to supporting validation, extension, and multiple classification of the Gene Ontology. The paper introduces DAML+OIL and demonstrates the activity within each stage of the methodology and the functionality gained.

  9. AIM: a personal view of where I have been and where we might be going.

    PubMed

    Rector, A

    2001-08-01

    My own career in medical informatics and AI in medicine has oscillated between concerns with medical records and concerns with knowledge representation with decision support as a pivotal integrating issue. It has focused on using AI to organise information and reduce 'muddle' and improve the user interfaces to produce 'useful and usable systems' to help doctors with a 'humanly impossible task'. Increasingly knowledge representation and ontologies have become the fulcrum for orchestrating re-use of information and integration of systems. Encouragingly, the dilemma between computational tractability and expressiveness is lessening, and ontologies and description logics are joining the mainstream both in AI in Medicine and in Intelligent Information Management generally. It has been shown possible to scale up ontologies to meet medical needs, and increasingly ontologies are playing a key role in meeting the requirements to scale up the complexity of clinical systems to meet the ever increasing demands brought about by new emphasis on reduction of errors, clinical accountability, and the explosion of knowledge on the Web.

  10. New Actions Upon Old Objects: A New Ontological Perspective on Functions.

    ERIC Educational Resources Information Center

    Schwarz, Baruch; Dreyfus, Tommy

    1995-01-01

    A computer microworld called Triple Representation Model uses graphical, tabular, and algebraic representations to influence conceptions of function. A majority of students were able to cope with partial data, recognize invariants while coordinating actions among representations, and recognize invariants while creating and comparing different…

  11. Efficient Results in Semantic Interoperability for Health Care. Findings from the Section on Knowledge Representation and Management.

    PubMed

    Soualmia, L F; Charlet, J

    2016-11-10

    To summarize excellent current research in the field of Knowledge Representation and Management (KRM) within the health and medical care domain. We provide a synopsis of the 2016 IMIA selected articles as well as a related synthetic overview of current and future field activities. A first step of the selection was performed by querying MEDLINE with a list of MeSH descriptors completed by a list of terms adapted to the KRM section. The second step of the selection was completed by the two section editors, who separately evaluated the set of 1,432 articles. The third step of the selection consisted of a collective work that merged the evaluation results to retain 15 articles for peer review. The selection and evaluation process of this Yearbook's section on Knowledge Representation and Management has yielded four excellent and interesting articles regarding semantic interoperability for health care through gathering heterogeneous sources (knowledge and data) and auditing ontologies. In the first article, the authors present a solution based on standards and Semantic Web technologies to access distributed and heterogeneous datasets in the domain of breast cancer clinical trials. The second article describes a knowledge-based recommendation system that relies on ontologies and Semantic Web rules for the dietary management of chronic diseases. The third article applies concept recognition and text mining to derive a model of common human diseases and a phenotypic network of common diseases. In the fourth article, the authors highlight the need for auditing SNOMED CT and propose a crowd-based method for ontology engineering. The current research activities further illustrate the continuous convergence of Knowledge Representation and Medical Informatics, with a focus this year on dedicated tools and methods to advance clinical care by proposing solutions to the problem of semantic interoperability. Indeed, there is a need for powerful tools able to manage and interpret complex, large-scale, and distributed datasets and knowledge bases, but also for user-friendly tools developed for clinicians in their daily practice.

  12. TrOn: an anatomical ontology for the beetle Tribolium castaneum.

    PubMed

    Dönitz, Jürgen; Grossmann, Daniela; Schild, Inga; Schmitt-Engel, Christian; Bradler, Sven; Prpic, Nikola-Michael; Bucher, Gregor

    2013-01-01

    In a morphological ontology, the expert's knowledge is represented in terms which describe morphological structures and how these structures relate to each other. With the assistance of ontologies, this expert knowledge is made processable by machines through a formal and standardized representation of terms and their relations. The red flour beetle Tribolium castaneum, a representative of the most species-rich animal taxon on earth (the Coleoptera), is an emerging model organism for development, evolution, physiology, and pest control. In order to foster Tribolium research, we have initiated the Tribolium Ontology (TrOn), which describes the morphology of the red flour beetle. So far, the content of this ontology comprises most external morphological structures as well as some internal ones. All modeled structures are consistently annotated for the developmental stages larva, pupa, and adult. In TrOn, all terms are grouped into three categories: generic terms represent morphological structures that are independent of a developmental stage; downstream of such terms are concrete terms, which stand for a dissectible structure of a beetle at a specific life stage; finally, mixed terms describe structures that are found at only one developmental stage and combine characteristics of both generic and concrete terms. These annotation principles take into account the changing morphology of the beetle during development and provide generic terms to be used in applications or for cross-linking with other ontologies and data resources. We use the ontology to implement an intuitive search function at the electronic iBeetle-Base, which stores morphological defects found in a genome-wide RNA interference (RNAi) screen. The ontology is available for download at http://ibeetle-base.uni-goettingen.de.

  13. Ontology for E-Learning: A Case Study

    ERIC Educational Resources Information Center

    Colace, Francesco; De Santo, Massimo; Gaeta, Matteo

    2009-01-01

    Purpose: The development of adaptable and intelligent educational systems is widely considered one of the great challenges in scientific research. Among key elements for building advanced training systems, an important role is played by methodologies chosen for knowledge representation. In this scenario, the introduction of ontology formalism can…

  14. Using ontologies to integrate and share resuscitation data from diverse medical devices.

    PubMed

    Thorsen, Kari Anne Haaland; Eftestøl, Trygve; Tøssebro, Erlend; Rong, Chunming; Steen, Petter Andreas

    2009-05-01

    To propose a method for standardised data representation and demonstrate a technology that makes it possible to translate data from device-dependent formats to this standard representation format. Outcome statistics vary between emergency medical systems organising resuscitation services. Such differences indicate a potential for improvement by identifying factors affecting outcome, but the data subject to analysis have to be comparable. Modern technology for communicating information makes it possible to structure, store, and transfer data flexibly. Ontologies describe entities in the world and how they relate. Letting different computer systems refer to the same ontology results in a common understanding of data content. Information on therapy such as shock delivery, chest compressions, and ventilation should be defined and described in a standardised ontology to enable comparing and combining data from diverse sources. By adding rules and logic, data can be merged and combined in new ways to produce new information. An example ontology is designed to demonstrate the feasibility and value of such a standardised structure. The proposed technology makes it possible to capture and store data from different devices in a structured and standardised format. Data can easily be transformed to this standardised format, then compared and combined independent of the original structure.
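
    The translation step described in this abstract, from device-dependent event codes to a shared ontology vocabulary, can be sketched in a few lines. This is a hedged illustration of the idea, not the authors' implementation; all device, code, and term names below are invented.

    ```python
    # Sketch: translating device-specific resuscitation event records into a
    # shared ontology vocabulary so data from different defibrillators can be
    # compared. All device names, event codes, and terms are hypothetical.

    # Per-device mappings from native event codes to shared ontology terms
    DEVICE_MAPS = {
        "device_a": {"SHK": "ShockDelivery", "CC": "ChestCompression", "VENT": "Ventilation"},
        "device_b": {"defib": "ShockDelivery", "compression": "ChestCompression"},
    }

    def translate(device, events):
        """Map a list of (code, timestamp) events to ontology-term records."""
        mapping = DEVICE_MAPS[device]
        out = []
        for code, t in events:
            term = mapping.get(code)
            if term is None:
                continue  # unmapped codes are skipped rather than guessed
            out.append({"term": term, "time_s": t, "source": device})
        return out

    a = translate("device_a", [("SHK", 12.0), ("CC", 13.5)])
    b = translate("device_b", [("defib", 40.2)])
    merged = sorted(a + b, key=lambda r: r["time_s"])  # now directly comparable
    ```

    Once both devices' records share one vocabulary, the merged stream can be analysed without knowing which device produced each event.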

  15. An Ontology for Identifying Cyber Intrusion Induced Faults in Process Control Systems

    NASA Astrophysics Data System (ADS)

    Hieb, Jeffrey; Graham, James; Guan, Jian

    This paper presents an ontological framework that permits formal representations of process control systems, including elements of the process being controlled and the control system itself. A fault diagnosis algorithm based on the ontological model is also presented. The algorithm can identify traditional process elements as well as control system elements (e.g., IP network and SCADA protocol) as fault sources. When these elements are identified as a likely fault source, the possibility exists that the process fault is induced by a cyber intrusion. A laboratory-scale distillation column is used to illustrate the model and the algorithm. Coupled with a well-defined statistical process model, this fault diagnosis approach provides cyber security enhanced fault diagnosis information to plant operators and can help identify that a cyber attack is underway before a major process failure is experienced.

  16. The OBO Foundry: coordinated evolution of ontologies to support biomedical data integration

    PubMed Central

    Smith, Barry; Ashburner, Michael; Rosse, Cornelius; Bard, Jonathan; Bug, William; Ceusters, Werner; Goldberg, Louis J; Eilbeck, Karen; Ireland, Amelia; Mungall, Christopher J; Leontis, Neocles; Rocca-Serra, Philippe; Ruttenberg, Alan; Sansone, Susanna-Assunta; Scheuermann, Richard H; Shah, Nigam; Whetzel, Patricia L; Lewis, Suzanna

    2010-01-01

    The value of any kind of data is greatly enhanced when it exists in a form that allows it to be integrated with other data. One approach to integration is through the annotation of multiple bodies of data using common controlled vocabularies or ‘ontologies’. Unfortunately, the very success of this approach has led to a proliferation of ontologies, which itself creates obstacles to integration. The Open Biomedical Ontologies (OBO) consortium is pursuing a strategy to overcome this problem. Existing OBO ontologies, including the Gene Ontology, are undergoing coordinated reform, and new ontologies are being created on the basis of an evolving set of shared principles governing ontology development. The result is an expanding family of ontologies designed to be interoperable and logically well formed and to incorporate accurate representations of biological reality. We describe this OBO Foundry initiative and provide guidelines for those who might wish to become involved. PMID:17989687

  17. Special Educational Needs and Art and Design Education: Plural Perspectives on Exclusion

    ERIC Educational Resources Information Center

    Penketh, Claire

    2016-01-01

    Education policy proposals by the UK Coalition government appeared to be based on a process of consultation, participation and representation. However, policy formation seems to prioritise and confirm particular ways of knowing and being in the world. This article recognises the ontological and epistemological invalidation at work in education…

  18. Comprehensive Analysis of Semantic Web Reasoners and Tools: A Survey

    ERIC Educational Resources Information Center

    Khamparia, Aditya; Pandey, Babita

    2017-01-01

    Ontologies are emerging as the best representation techniques for knowledge-based context domains. The continuing need for interoperation, collaboration, and effective information retrieval has led to the creation of the semantic web with the help of tools and reasoners which manage personalized information. The future of the semantic web lies in an ontology…

  19. Geo-Ontologies Are Scale Dependent

    NASA Astrophysics Data System (ADS)

    Frank, A. U.

    2009-04-01

    Philosophers aim at a single ontology that describes "how the world is"; for information systems we aim only at ontologies that describe a conceptualization of reality (Guarino 1995; Gruber 2005). A conceptualization of the world implies a spatial and temporal scale: what are the phenomena, the objects, and the speed of their change? Few articles (Reitsma et al. 2003) seem to address the fact that an ontology is scale-specific (though many articles indicate that ontologies are scale-free in another sense, namely that they are scale-free in the link densities between concepts). The scale in the conceptualization can be linked to the observation process. The extent of the support of the physical observation instrument and the sampling theorem indicate what level of detail we find in a dataset. These rules apply for remote sensing and sensor networks alike. An ontology of observations must include scale or level of detail, and concepts derived from observations should carry this relation forward. A simple example: a high-resolution remote sensing image shows agricultural plots and the roads between them; at lower resolution, only the plots and not the roads are visible. This gives two ontologies, one with plots and roads, the other with plots only. Note that a neighborhood relation in the two different ontologies also yields different results. References: Gruber, T. (2005). "TagOntology - a way to agree on the semantics of tagging data." Retrieved October 29, 2005, from http://tomgruber.org/writing/tagontology-tagcapm-talk.pdf. Guarino, N. (1995). "Formal Ontology, Conceptual Analysis and Knowledge Representation." International Journal of Human and Computer Studies, Special Issue on Formal Ontology, Conceptual Analysis and Knowledge Representation, edited by N. Guarino and R. Poli, 43(5/6). Reitsma, F. and T. Bittner (2003). Process, Hierarchy, and Scale. Spatial Information Theory: Cognitive and Computational Foundations of Geographic Information Science, International Conference COSIT'03.

  20. Ontology and medical diagnosis.

    PubMed

    Bertaud-Gounot, Valérie; Duvauferrier, Régis; Burgun, Anita

    2012-03-01

    Ontologies and associated generic tools are appropriate for knowledge modeling and reasoning, but most of the time, disease definitions in existing description logic (DL) ontologies are not sufficient to classify a patient's characteristics under a particular disease because they do not formalize operational definitions of diseases (associations of signs and symptoms = diagnostic criteria). The main objective of this study is to propose an ontological representation which takes the diagnostic criteria into account, so that specific patient conditions may be classified under a specific disease. This method requires as a prerequisite a clear list of necessary and sufficient diagnostic criteria, as defined for many diseases by learned societies. It does not include probability/uncertainty, which the Web Ontology Language (OWL 2.0) cannot handle. We illustrate it with spondyloarthritis (SpA). The ontology was designed in Protégé 4.1 in OWL-DL 2.0. Several kinds of criteria were formalized: (1) mandatory criteria, (2) picking two criteria among several diagnostic criteria, and (3) numeric criteria. Thirty real patient cases were successfully classified with the reasoner. This study shows that it is possible to represent operational definitions of diseases with OWL and to successfully classify real patient cases. Representing diagnostic criteria as descriptive knowledge (instead of rules in the Semantic Web Rule Language or Prolog) allows us to take advantage of tools already available for OWL. While we focused on the Assessment of SpondyloArthritis international Society SpA criteria, we believe that many of the representation issues addressed here are relevant to using OWL-DL for operational definitions of other diseases in ontologies.
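
    The three kinds of criteria this abstract formalizes in OWL-DL (mandatory, "pick n among several", numeric) can be mimicked outside a reasoner as a plain Boolean check. The sketch below illustrates only the logical pattern, not the authors' DL axioms or the real ASAS criteria; every criterion name and threshold is invented.

    ```python
    # Sketch of the three criterion kinds described above, evaluated in
    # plain Python. Criterion names and thresholds are illustrative only,
    # NOT the actual spondyloarthritis diagnostic criteria.

    def classify(patient):
        """Return True if the patient satisfies the (toy) disease definition."""
        mandatory = patient.get("back_pain", False)          # (1) mandatory criterion
        optional = ["arthritis", "enthesitis", "uveitis", "psoriasis"]
        n_met = sum(1 for c in optional if patient.get(c))   # (2) pick >= 2 among several
        numeric = patient.get("crp_mg_l", 0) > 5             # (3) numeric criterion
        return mandatory and (n_met >= 2 or numeric)

    case = {"back_pain": True, "arthritis": True, "uveitis": True}
    print(classify(case))  # mandatory criterion plus two optional ones are met
    ```

    In the paper this logic lives in OWL class expressions (e.g. cardinality restrictions for the n-of-m pattern), so a standard DL reasoner can classify patient individuals without any procedural code.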

  1. Integrating systems biology models and biomedical ontologies

    PubMed Central

    2011-01-01

    Background Systems biology is an approach to biology that emphasizes the structure and dynamic behavior of biological systems and the interactions that occur within them. To succeed, systems biology crucially depends on the accessibility and integration of data across domains and levels of granularity. Biomedical ontologies were developed to facilitate such an integration of data and are often used to annotate biosimulation models in systems biology. Results We provide a framework to integrate representations of in silico systems biology with those of in vivo biology as described by biomedical ontologies and demonstrate this framework using the Systems Biology Markup Language. We developed the SBML Harvester software that automatically converts annotated SBML models into OWL and we apply our software to those biosimulation models that are contained in the BioModels Database. We utilize the resulting knowledge base for complex biological queries that can bridge levels of granularity, verify models based on the biological phenomenon they represent and provide a means to establish a basic qualitative layer on which to express the semantics of biosimulation models. Conclusions We establish an information flow between biomedical ontologies and biosimulation models and we demonstrate that the integration of annotated biosimulation models and biomedical ontologies enables the verification of models as well as expressive queries. Establishing a bi-directional information flow between systems biology and biomedical ontologies has the potential to enable large-scale analyses of biological systems that span levels of granularity from molecules to organisms. PMID:21835028

  2. Representing Energy. I. Representing a Substance Ontology for Energy

    ERIC Educational Resources Information Center

    Scherr, Rachel E.; Close, Hunter G.; McKagan, Sarah B.; Vokos, Stamatis

    2012-01-01

    The nature of energy is not typically an explicit topic of physics instruction. Nonetheless, verbal and graphical representations of energy articulate models in which energy is conceptualized as a quasimaterial substance, a stimulus, or a vertical location. We argue that a substance ontology for energy is particularly productive in developing…

  3. A web ontology for brain trauma patient computer-assisted rehabilitation.

    PubMed

    Zikos, Dimitrios; Galatas, George; Metsis, Vangelis; Makedon, Fillia

    2013-01-01

    In this paper we describe CABROnto, a web ontology for the semantic representation of computer-assisted brain trauma rehabilitation. This is a novel and emerging domain, since it employs robotic devices, adaptation software, and machine learning to facilitate interactive and adaptive rehabilitation care. We used Protégé 4.2 and the Protégé-OWL schema editor. The primary goal of this ontology is to enable the reuse of the domain knowledge. CABROnto has nine main classes, more than 50 subclasses, and existential and cardinality restrictions. The ontology can be found online at BioPortal.

  4. Cell type discovery using single-cell transcriptomics: implications for ontological representation.

    PubMed

    Aevermann, Brian D; Novotny, Mark; Bakken, Trygve; Miller, Jeremy A; Diehl, Alexander D; Osumi-Sutherland, David; Lasken, Roger S; Lein, Ed S; Scheuermann, Richard H

    2018-05-01

    Cells are the fundamental functional units of multicellular organisms, with different cell types playing distinct physiological roles in the body. The recent advent of single-cell transcriptional profiling using RNA sequencing is producing 'big data', enabling the identification of novel human cell types at an unprecedented rate. In this review, we summarize recent work characterizing cell types in the human central nervous and immune systems using single-cell and single-nuclei RNA sequencing, and discuss the implications that these discoveries are having on the representation of cell types in the reference Cell Ontology (CL). We propose a method, based on random forest machine learning, for identifying sets of necessary and sufficient marker genes, which can be used to assemble consistent and reproducible cell type definitions for incorporation into the CL. The representation of defined cell type classes and their relationships in the CL using this strategy will make the cell type classes being identified by high-throughput/high-content technologies findable, accessible, interoperable and reusable (FAIR), allowing the CL to serve as a reference knowledgebase of information about the role that distinct cellular phenotypes play in human health and disease.

  5. War of Ontology Worlds: Mathematics, Computer Code, or Esperanto?

    PubMed Central

    Rzhetsky, Andrey; Evans, James A.

    2011-01-01

    The use of structured knowledge representations—ontologies and terminologies—has become standard in biomedicine. Definitions of ontologies vary widely, as do the values and philosophies that underlie them. In seeking to make these views explicit, we conducted and summarized interviews with a dozen leading ontologists. Their views clustered into three broad perspectives that we summarize as mathematics, computer code, and Esperanto. Ontology as mathematics puts the ultimate premium on rigor and logic, symmetry and consistency of representation across scientific subfields, and the inclusion of only established, non-contradictory knowledge. Ontology as computer code focuses on utility and cultivates diversity, fitting ontologies to their purpose. Like computer languages C++, Prolog, and HTML, the code perspective holds that diverse applications warrant custom designed ontologies. Ontology as Esperanto focuses on facilitating cross-disciplinary communication, knowledge cross-referencing, and computation across datasets from diverse communities. We show how these views align with classical divides in science and suggest how a synthesis of their concerns could strengthen the next generation of biomedical ontologies. PMID:21980276

  6. A unified software framework for deriving, visualizing, and exploring abstraction networks for ontologies

    PubMed Central

    Ochs, Christopher; Geller, James; Perl, Yehoshua; Musen, Mark A.

    2016-01-01

    Software tools play a critical role in the development and maintenance of biomedical ontologies. One important task that is difficult without software tools is ontology quality assurance. In previous work, we have introduced different kinds of abstraction networks to provide a theoretical foundation for ontology quality assurance tools. Abstraction networks summarize the structure and content of ontologies. One kind of abstraction network that we have used repeatedly to support ontology quality assurance is the partial-area taxonomy. It summarizes structurally and semantically similar concepts within an ontology. However, the use of partial-area taxonomies was ad hoc and not generalizable. In this paper, we describe the Ontology Abstraction Framework (OAF), a unified framework and software system for deriving, visualizing, and exploring partial-area taxonomy abstraction networks. The OAF includes support for various ontology representations (e.g., OWL and SNOMED CT's relational format). A Protégé plugin for deriving “live partial-area taxonomies” is demonstrated. PMID:27345947

  7. A unified software framework for deriving, visualizing, and exploring abstraction networks for ontologies.

    PubMed

    Ochs, Christopher; Geller, James; Perl, Yehoshua; Musen, Mark A

    2016-08-01

    Software tools play a critical role in the development and maintenance of biomedical ontologies. One important task that is difficult without software tools is ontology quality assurance. In previous work, we have introduced different kinds of abstraction networks to provide a theoretical foundation for ontology quality assurance tools. Abstraction networks summarize the structure and content of ontologies. One kind of abstraction network that we have used repeatedly to support ontology quality assurance is the partial-area taxonomy. It summarizes structurally and semantically similar concepts within an ontology. However, the use of partial-area taxonomies was ad hoc and not generalizable. In this paper, we describe the Ontology Abstraction Framework (OAF), a unified framework and software system for deriving, visualizing, and exploring partial-area taxonomy abstraction networks. The OAF includes support for various ontology representations (e.g., OWL and SNOMED CT's relational format). A Protégé plugin for deriving "live partial-area taxonomies" is demonstrated. Copyright © 2016 Elsevier Inc. All rights reserved.
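
    The partial-area taxonomies described in these two records summarize concepts that are structurally similar. One core idea behind them is that concepts introducing the same set of relationship types are grouped into one "area". The sketch below illustrates only that grouping idea, under stated assumptions; it is not the OAF implementation, and the concept and relationship names are invented.

    ```python
    # Sketch of the "area" grouping idea behind partial-area taxonomies:
    # concepts with the same set of relationship types are summarized
    # together. Concept and relationship names are hypothetical.
    from collections import defaultdict

    concept_relationships = {
        "Myocarditis":     {"finding_site", "associated_morphology"},
        "Endocarditis":    {"finding_site", "associated_morphology"},
        "Viral_infection": {"causative_agent"},
    }

    areas = defaultdict(list)
    for concept, rels in concept_relationships.items():
        areas[frozenset(rels)].append(concept)  # one area per relationship set

    for rels, members in areas.items():
        print(sorted(rels), "->", sorted(members))
    ```

    A taxonomy of such areas is far smaller than the ontology itself, which is what makes it useful as a quality-assurance summary: a concept sitting in an unexpected area is a candidate modeling error.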

  8. The Choice between MapMan and Gene Ontology for Automated Gene Function Prediction in Plant Science

    PubMed Central

    Klie, Sebastian; Nikoloski, Zoran

    2012-01-01

    Since the introduction of the Gene Ontology (GO), the analysis of high-throughput data has become tightly coupled with the use of ontologies to establish associations between knowledge and data in an automated fashion. Ontologies provide a systematic description of knowledge by a controlled vocabulary of defined structure in which ontological concepts are connected by pre-defined relationships. In plant science, MapMan and GO offer two alternatives for ontology-driven analyses. Unlike GO, initially developed to characterize microbial systems, MapMan was specifically designed to cover plant-specific pathways and processes. While the dependencies between concepts in MapMan are modeled as a tree, in GO these are captured in a directed acyclic graph. Therefore, the difference in ontologies may cause discrepancies in data reduction, visualization, and hypothesis generation. Here we provide the first systematic comparative analysis of GO and MapMan for the model plant species Arabidopsis thaliana (Arabidopsis) with respect to their structural properties and differences in the distributions of information content. In addition, we investigate the effect of the two ontologies on the specificity and sensitivity of automated gene function prediction via the coupling of co-expression networks and the guilt-by-association principle. Automated gene function prediction is particularly needed for the model plant Arabidopsis, in which only half of the genes have been functionally annotated based on sequence similarity to known genes. The results highlight the need for structured representation of species-specific biological knowledge and warrant caution regarding the design principles employed in future ontologies. PMID:22754563
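
    The tree-versus-DAG distinction made in this abstract matters because a term in a DAG can have several parents, so annotations propagate upward along multiple paths. A minimal sketch of ancestor computation by upward traversal, with invented term names (not actual GO or MapMan identifiers):

    ```python
    # Sketch: in a DAG (like GO) a term may have several parents, so its
    # ancestor set is the union over all upward paths; in a tree (like
    # MapMan) there is exactly one path. Term names are hypothetical.

    parents = {                       # child -> set of parents (a small DAG)
        "response_to_cold": {"response_to_stress", "response_to_temperature"},
        "response_to_stress": {"biological_process"},
        "response_to_temperature": {"biological_process"},
        "biological_process": set(),
    }

    def ancestors(term):
        """All terms reachable by following parent links upward."""
        seen = set()
        stack = [term]
        while stack:
            for p in parents[stack.pop()]:
                if p not in seen:
                    seen.add(p)
                    stack.append(p)
        return seen

    print(sorted(ancestors("response_to_cold")))
    ```

    In a tree the same traversal would visit one branch only; in the DAG above, a gene annotated to `response_to_cold` implicitly contributes to two parent categories at once, which is one source of the information-content differences the study analyzes.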

  9. [Representation of knowledge in respiratory medicine: ontology should help the coding process].

    PubMed

    Blanc, F-X; Baneyx, A; Charlet, J; Housset, B

    2010-09-01

    Access to medical knowledge is a major issue for health professionals and requires the development of terminologies. The objective of the reported work was to construct an ontology of respiratory medicine, i.e. an organized and formalized terminology composed of specific knowledge. The purpose is to help the medico-economical coding process and to represent the relevant knowledge about the patient. Our research covers the whole life cycle of an ontology, from the development of a methodology, to building the ontology from texts, to its use in an operational system. A computerized tool based on the ontology allows both medico-economical coding and graphical medical coding; the latter will be used to index hospital reports. Our ontology contains 1,913 concepts and covers all the knowledge included in the PMSI part of the SPLF thesaurus. Our tool has been evaluated and showed a recall of 80% and an accuracy of 85% for the medico-economical coding. The work presented in this paper justifies the approach that has been used. It must be continued on a larger scale to validate our coding principles and the possibility of querying patient reports for clinical research. Copyright © 2010. Published by Elsevier Masson SAS.

  10. Tissue enrichment analysis for C. elegans genomics.

    PubMed

    Angeles-Albores, David; N Lee, Raymond Y; Chan, Juancarlos; Sternberg, Paul W

    2016-09-13

    Over the last ten years, there has been explosive development in methods for measuring gene expression. These methods can identify thousands of genes altered between conditions, but understanding these datasets and forming hypotheses based on them remains challenging. One way to analyze these datasets is to associate ontologies (hierarchical, descriptive vocabularies with controlled relations between terms) with genes and to look for enrichment of specific terms. Although the Gene Ontology (GO) is available for Caenorhabditis elegans, it does not include anatomical information. We have developed a tool for identifying enrichment of C. elegans tissues among gene sets and a website GUI where users can access this tool. Since a common drawback of ontology enrichment analyses is their verbosity, we developed a very simple filtering algorithm that reduces the ontology size by an order of magnitude. We adjusted these filters and validated our tool using a set of 30 gold standards from Expression Cluster data in WormBase. We show our tool can discriminate between embryonic and larval tissues and can even identify tissues down to the single-cell level. We used our tool to identify multiple neuronal tissues that are down-regulated due to pathogen infection in C. elegans. Our Tissue Enrichment Analysis (TEA) can be found within WormBase and can be downloaded using Python's standard pip installer. It tests a slimmed-down C. elegans tissue ontology for enrichment of specific terms and provides users with text and graphic representations of the results.
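
    Term-enrichment analyses of the kind this abstract describes are typically scored with a hypergeometric tail probability: how likely is it that a gene list of size n drawn at random would contain at least k genes annotated to a term? The sketch below shows that standard test in self-contained Python; it is not TEA's actual code, and the gene counts are invented.

    ```python
    # Sketch of the hypergeometric test that underlies most ontology term
    # enrichment analyses (not TEA's implementation; counts are invented).
    from math import comb

    def enrichment_p(N, K, n, k):
        """P(X >= k) when drawing n genes from N total, K of which carry the term."""
        return sum(comb(K, i) * comb(N - K, n - i)
                   for i in range(k, min(K, n) + 1)) / comb(N, n)

    # 20,000 genes total, 100 annotated to a tissue term; a 50-gene hit
    # list containing 10 of them is far more than the ~0.25 expected by
    # chance, so the term is strongly enriched.
    p = enrichment_p(20000, 100, 50, 10)
    print(p < 1e-6)  # prints True
    ```

    In practice the p-values are then corrected for multiple testing across all terms, which is why the filtering step mentioned in the abstract (shrinking the ontology by an order of magnitude) also improves statistical power.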

  11. Developing a semantically rich ontology for the biobank-administration domain

    PubMed Central

    2013-01-01

    Background Biobanks are a critical resource for translational science. Recently, semantic web technologies such as ontologies have been found useful in retrieving research data from biobanks. However, recent research has also shown that there is a lack of data about the administrative aspects of biobanks. These data would be helpful to answer research-relevant questions such as what is the scope of specimens collected in a biobank, what is the curation status of the specimens, and what is the contact information for curators of biobanks. Our use cases include giving researchers the ability to retrieve key administrative data (e.g. contact information, contact's affiliation, etc.) about the biobanks where specific specimens of interest are stored. Thus, our goal is to provide an ontology that represents the administrative entities in biobanking and their relations. We base our ontology development on a set of 53 data attributes called MIABIS, which were in part the result of semantic integration efforts of the European Biobanking and Biomolecular Resources Research Infrastructure (BBMRI). The previous work on MIABIS provided the domain analysis for our ontology. We report on a test of our ontology against competency questions that we derived from the initial BBMRI use cases. Future work includes additional ontology development to answer additional competency questions from these use cases. Results We created an open-source ontology of biobank administration called Ontologized MIABIS (OMIABIS) coded in OWL 2.0 and developed according to the principles of the OBO Foundry. It re-uses pre-existing ontologies when possible in cooperation with developers of other ontologies in related domains, such as the Ontology of Biomedical Investigation. OMIABIS provides a formalized representation of biobanks and their administration. 
Using the ontology and a set of Description Logic queries derived from the competency questions that we identified, we were able to retrieve test data with perfect accuracy. In addition, we began development of a mapping from the ontology to pre-existing biobank data structures commonly used in the U.S. Conclusions In conclusion, we created OMIABIS, an ontology of biobank administration. We found that basing its development on pre-existing resources to meet the BBMRI use cases resulted in a biobanking ontology that is reusable in environments other than BBMRI. Our ontology retrieved all true positives and no false positives when queried according to the competency questions we derived from the BBMRI use cases. Mapping OMIABIS to a data structure used for biospecimen collections in a medical center in Little Rock, AR showed adequate coverage of our ontology. PMID:24103726

  12. Landscape features, standards, and semantics in U.S. national topographic mapping databases

    USGS Publications Warehouse

    Varanka, Dalia

    2009-01-01

    The objective of this paper is to examine the contrast between local, field-surveyed topographical representation and feature representation in digital, centralized databases, and to clarify their ontological implications. The semantics of these two approaches are contrasted by examining the categorization of features by subject domains inherent to national topographic mapping. A comparison of five USGS topographic mapping domain and feature lists indicates that multiple semantic meanings and ontology rules were applied in the initial digital database but were lost as databases became more centralized at national scales, with common semantics replaced by technological terms.

  13. Towards a Context-Aware Proactive Decision Support Framework

    DTIC Science & Technology

    2013-11-15

    initiative that has developed text analytic technology that crosses the semantic gap into the area of event recognition and representation. The...recognizing operational context, and techniques for recognizing context shift. Additional research areas include: • Adequately capturing users...Universal Interaction Context Ontology [12] might serve as a foundation • Instantiating formal models of decision making based on information seeking

  14. Ontology-Based Data Integration of Open Source Electronic Medical Record and Data Capture Systems

    ERIC Educational Resources Information Center

    Guidry, Alicia F.

    2013-01-01

    In low-resource settings, the prioritization of clinical care funding is often determined by immediate health priorities. As a result, investment directed towards the development of standards for clinical data representation and exchange are rare and accordingly, data management systems are often redundant. Open-source systems such as OpenMRS and…

  15. Evolutionary characters, phenotypes and ontologies: curating data from the systematic biology literature.

    PubMed

    Dahdul, Wasila M; Balhoff, James P; Engeman, Jeffrey; Grande, Terry; Hilton, Eric J; Kothari, Cartik; Lapp, Hilmar; Lundberg, John G; Midford, Peter E; Vision, Todd J; Westerfield, Monte; Mabee, Paula M

    2010-05-20

    The wealth of phenotypic descriptions documented in the published articles, monographs, and dissertations of phylogenetic systematics is traditionally reported in a free-text format, and it is therefore largely inaccessible for linkage to biological databases for genetics, development, and phenotypes, and difficult to manage for large-scale integrative work. The Phenoscape project aims to represent these complex and detailed descriptions with rich and formal semantics that are amenable to computation and integration with phenotype data from other fields of biology. This entails reconceptualizing the traditional free-text characters into the computable Entity-Quality (EQ) formalism using ontologies. We used ontologies and the EQ formalism to curate a collection of 47 phylogenetic studies on ostariophysan fishes (including catfishes, characins, minnows, knifefishes) and their relatives with the goal of integrating these complex phenotype descriptions with information from an existing model organism database (zebrafish, http://zfin.org). We developed a curation workflow for the collection of character, taxonomic and specimen data from these publications. A total of 4,617 phenotypic characters (10,512 states) for 3,449 taxa, primarily species, were curated into EQ formalism (for a total of 12,861 EQ statements) using anatomical and taxonomic terms from teleost-specific ontologies (Teleost Anatomy Ontology and Teleost Taxonomy Ontology) in combination with terms from a quality ontology (Phenotype and Trait Ontology). Standards and guidelines for consistently and accurately representing phenotypes were developed in response to the challenges that were evident from two annotation experiments and from feedback from curators. The challenges we encountered and many of the curation standards and methods for improving consistency that we developed are generally applicable to any effort to represent phenotypes using ontologies. 
This is because an ontological representation of the detailed variations in phenotype, whether between mutant and wildtype, among individual humans, or across the diversity of species, requires a process by which a precise combination of terms from domain ontologies is selected and organized according to logical relations. The efficiencies that we have developed in this process will be useful for any attempt to annotate complex phenotypic descriptions using ontologies. We also discuss some ramifications of EQ representation for the domain of systematics.
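
    The EQ formalism described above pairs an anatomical entity term with a quality term to post-compose a phenotype annotation. A minimal illustration of that structure (term labels here are illustrative, not taken from the curated dataset):

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class EQStatement:
    """One Entity-Quality annotation, post-composed from ontology terms."""
    entity: str   # anatomical term, e.g. from the Teleost Anatomy Ontology
    quality: str  # quality term, e.g. from the Phenotype and Trait Ontology (PATO)

    def label(self) -> str:
        return f"{self.entity}: {self.quality}"

# A free-text character state such as "anal fin absent" becomes one EQ statement:
state = EQStatement(entity="anal fin", quality="absent")
```

    In practice each field would hold an ontology term identifier (e.g. a TAO or PATO ID) rather than a bare label, which is what makes the statements computable.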

  16. Underwater Photogrammetry and Object Modeling: A Case Study of Xlendi Wreck in Malta

    PubMed Central

    Drap, Pierre; Merad, Djamal; Hijazi, Bilal; Gaoua, Lamia; Nawaf, Mohamad Motasem; Saccone, Mauro; Chemisky, Bertrand; Seinturier, Julien; Sourisseau, Jean-Christophe; Gambin, Timmy; Castro, Filipe

    2015-01-01

    In this paper we present a photogrammetry-based approach for deep-sea underwater surveys conducted from a submarine and guided by knowledge representation combined with a logical approach (ontology). Two major issues are discussed. The first concerns deep-sea surveys using photogrammetry from a submarine. Here the goal was to obtain a set of images that completely covered the selected site; based on these images, a low-resolution 3D model is obtained in real time, followed by a very high-resolution model produced back in the laboratory. The second issue involves the extraction of known artefacts present on the site. This aspect of the research is based on an a priori representation of the knowledge involved, using systematic reasoning. Two parallel processes were developed to represent the photogrammetric process used for surveying as well as for identifying archaeological artefacts visible on the sea floor. Mapping involved the use of CIDOC-CRM (the International Committee for Documentation (CIDOC) Conceptual Reference Model), a system previously utilised in the heritage sector and widely available to the established scientific community. The proposed theoretical representation is based on procedural attachment; moreover, a strong link is maintained between the ontological description of the modelled concepts and the Java programming language, which permitted 3D structure estimation and modelling based on a set of oriented images. A very recently discovered shipwreck acted as a testing ground for this project: the Xlendi Phoenician shipwreck, found off the Maltese coast, is probably the oldest known shipwreck in the western Mediterranean. The approach presented in this paper was developed in the scope of the GROPLAN project (Généralisation du Relevé, avec Ontologies et Photogrammétrie, pour l'Archéologie Navale et Sous-marine).
Financed by the French National Research Agency (ANR) for four years, this project associates two French research laboratories, an industrial partner, the University of Malta, and Texas A & M University. PMID:26690147

  17. Underwater Photogrammetry and Object Modeling: A Case Study of Xlendi Wreck in Malta.

    PubMed

    Drap, Pierre; Merad, Djamal; Hijazi, Bilal; Gaoua, Lamia; Nawaf, Mohamad Motasem; Saccone, Mauro; Chemisky, Bertrand; Seinturier, Julien; Sourisseau, Jean-Christophe; Gambin, Timmy; Castro, Filipe

    2015-12-04

    In this paper we present a photogrammetry-based approach for deep-sea underwater surveys conducted from a submarine and guided by knowledge representation combined with a logical approach (ontology). Two major issues are discussed. The first concerns deep-sea surveys using photogrammetry from a submarine. Here the goal was to obtain a set of images that completely covered the selected site; based on these images, a low-resolution 3D model is obtained in real time, followed by a very high-resolution model produced back in the laboratory. The second issue involves the extraction of known artefacts present on the site. This aspect of the research is based on an a priori representation of the knowledge involved, using systematic reasoning. Two parallel processes were developed to represent the photogrammetric process used for surveying as well as for identifying archaeological artefacts visible on the sea floor. Mapping involved the use of CIDOC-CRM (the International Committee for Documentation (CIDOC) Conceptual Reference Model), a system previously utilised in the heritage sector and widely available to the established scientific community. The proposed theoretical representation is based on procedural attachment; moreover, a strong link is maintained between the ontological description of the modelled concepts and the Java programming language, which permitted 3D structure estimation and modelling based on a set of oriented images. A very recently discovered shipwreck acted as a testing ground for this project: the Xlendi Phoenician shipwreck, found off the Maltese coast, is probably the oldest known shipwreck in the western Mediterranean. The approach presented in this paper was developed in the scope of the GROPLAN project (Généralisation du Relevé, avec Ontologies et Photogrammétrie, pour l'Archéologie Navale et Sous-marine).
Financed by the French National Research Agency (ANR) for four years, this project associates two French research laboratories, an industrial partner, the University of Malta, and Texas A & M University.

  18. An Application of Structural Equation Modeling for Developing Good Teaching Characteristics Ontology

    ERIC Educational Resources Information Center

    Phiakoksong, Somjin; Niwattanakul, Suphakit; Angskun, Thara

    2013-01-01

    Ontology is a knowledge representation technique which aims to make knowledge explicit by defining the core concepts and their relationships. The Structural Equation Modeling (SEM) is a statistical technique which aims to explore the core factors from empirical data and estimates the relationship between these factors. This article presents an…

  19. Formalizing Evidence Type Definitions for Drug-Drug Interaction Studies to Improve Evidence Base Curation.

    PubMed

    Utecht, Joseph; Brochhausen, Mathias; Judkins, John; Schneider, Jodi; Boyce, Richard D

    2017-01-01

    In this research we aim to demonstrate that an ontology-based system can categorize potential drug-drug interaction (PDDI) evidence items into complex types based on a small set of simple questions. Such a method could increase the transparency and reliability of PDDI evidence evaluation, while also reducing the variations in content and seriousness ratings present in PDDI knowledge bases. We extended the DIDEO ontology with 44 formal evidence type definitions. We then manually annotated the evidence types of 30 evidence items. We tested an RDF/OWL representation of answers to a small number of simple questions about each of these 30 evidence items and showed that automatic inference can determine the detailed evidence types based on this small number of simpler questions. These results show proof-of-concept for a decision support infrastructure that frees the evidence evaluator from mastering relatively complex written evidence type definitions.
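
    The inference described, from answers to simple questions to a complex evidence type, can be pictured as rule matching over a small answer set. A toy sketch with invented question names and evidence types, not DIDEO's actual 44 formal definitions:

```python
# Hypothetical rules: each maps required answers to an evidence type.
# Order matters; the first fully matching rule wins.
RULES = [
    ({"human_subjects": True, "controlled_design": True, "reports_pk": True},
     "clinical PK trial evidence"),
    ({"human_subjects": True, "controlled_design": False},
     "case report evidence"),
    ({"human_subjects": False},
     "in vitro evidence"),
]

def classify_evidence(answers):
    """Return the evidence type whose conditions all match the answers."""
    for conditions, ev_type in RULES:
        if all(answers.get(q) == v for q, v in conditions.items()):
            return ev_type
    return "unclassified"
```

    The paper's actual system does this with OWL class definitions and a DL reasoner rather than hand-ordered rules, which keeps the definitions declarative and checkable.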

  20. Ontology-based topic clustering for online discussion data

    NASA Astrophysics Data System (ADS)

    Wang, Yongheng; Cao, Kening; Zhang, Xiaoming

    2013-03-01

    With the rapid development of online communities, mining and extracting quality knowledge from online discussions has become very important for the industrial and marketing sectors, as well as for e-commerce applications and government. Most existing techniques model a discussion as a social network of users, represented by a user-based graph, without considering the content of the discussion. In this paper we propose a new multilayered model for analyzing online discussions that combines user-based and message-based representations. A novel clustering method based on frequent concept sets is used to cluster the original online discussion network into topic space, and a domain ontology is used to improve clustering accuracy. Parallel methods make the algorithms scalable to very large data sets. Our experimental study shows that the model and algorithms are effective when analyzing large-scale online discussion data.
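
    Frequent-concept-set clustering starts from counting concept co-occurrence across messages. A minimal, non-parallel sketch of that counting step (the paper's scalable algorithm and ontology-based refinement are not reproduced here):

```python
from itertools import combinations
from collections import Counter

def frequent_concept_sets(messages, min_support, max_size=2):
    """Count concept sets co-occurring in discussion messages and keep
    those meeting the support threshold. Each message is a set of
    ontology concepts already extracted from its text."""
    counts = Counter()
    for concepts in messages:
        for size in range(1, max_size + 1):
            for combo in combinations(sorted(concepts), size):
                counts[frozenset(combo)] += 1
    return {s for s, n in counts.items() if n >= min_support}
```

    Messages sharing a frequent concept set would then be grouped into the same topic cluster.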

  1. BioPortal: enhanced functionality via new Web services from the National Center for Biomedical Ontology to access and use ontologies in software applications.

    PubMed

    Whetzel, Patricia L; Noy, Natalya F; Shah, Nigam H; Alexander, Paul R; Nyulas, Csongor; Tudorache, Tania; Musen, Mark A

    2011-07-01

    The National Center for Biomedical Ontology (NCBO) is one of the National Centers for Biomedical Computing funded under the NIH Roadmap Initiative. Contributing to the national computing infrastructure, NCBO has developed BioPortal, a web portal that provides access to a library of biomedical ontologies and terminologies (http://bioportal.bioontology.org) via the NCBO Web services. BioPortal enables community participation in the evaluation and evolution of ontology content by providing features to add mappings between terms, to add comments linked to specific ontology terms and to provide ontology reviews. The NCBO Web services (http://www.bioontology.org/wiki/index.php/NCBO_REST_services) enable this functionality and provide a uniform mechanism to access ontologies from a variety of knowledge representation formats, such as Web Ontology Language (OWL) and Open Biological and Biomedical Ontologies (OBO) format. The Web services provide multi-layered access to the ontology content, from getting all terms in an ontology to retrieving metadata about a term. Users can easily incorporate the NCBO Web services into software applications to generate semantically aware applications and to facilitate structured data collection.
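
    Calls to the NCBO Web services are plain REST requests. A sketch of building a term-search request URL, assuming the current data.bioontology.org REST endpoint and its required API key (free on registration); endpoint details may differ from the services as described above:

```python
from urllib.parse import urlencode

BASE = "https://data.bioontology.org"  # assumed current BioPortal REST base URL

def search_url(query, apikey):
    """Build a BioPortal term-search request URL for the given query string."""
    return f"{BASE}/search?{urlencode({'q': query, 'apikey': apikey})}"
```

    Fetching the resulting URL (e.g. with urllib.request) returns JSON listing matching terms across the ontology library.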

  2. The Ontology of Biological and Clinical Statistics (OBCS) for standardized and reproducible statistical analysis.

    PubMed

    Zheng, Jie; Harris, Marcelline R; Masci, Anna Maria; Lin, Yu; Hero, Alfred; Smith, Barry; He, Yongqun

    2016-09-14

    Statistics play a critical role in biological and clinical research. However, most reports of scientific results in the published literature make it difficult for the reader to reproduce the statistical analyses performed in achieving those results because they provide inadequate documentation of the statistical tests and algorithms applied. The Ontology of Biological and Clinical Statistics (OBCS) is put forward here as a step towards solving this problem. The terms in OBCS, including 'data collection', 'data transformation in statistics', 'data visualization', 'statistical data analysis', and 'drawing a conclusion based on data', cover the major types of statistical processes used in basic biological research and clinical outcome studies. OBCS is aligned with the Basic Formal Ontology (BFO) and extends the Ontology of Biomedical Investigations (OBI), an OBO (Open Biological and Biomedical Ontologies) Foundry ontology supported by over 20 research communities. Currently, OBCS comprises 878 terms, representing 20 BFO classes, 403 OBI classes, 229 OBCS-specific classes, and 122 classes imported from ten other OBO ontologies. We discuss two examples illustrating how the ontology is being applied. In the first (biological) use case, we describe how OBCS was applied to represent the high-throughput microarray data analysis of immunological transcriptional profiles in human subjects vaccinated with an influenza vaccine. In the second (clinical outcomes) use case, we applied OBCS to represent the processing of electronic health care data to determine the associations between hospital staffing levels and patient mortality. Our case studies were designed to show how OBCS can be used for the consistent representation of statistical analysis pipelines under two different research paradigms. Other ongoing projects using OBCS for statistical data processing are also discussed. The OBCS source code and documentation are available at: https://github.com/obcs/obcs .
The Ontology of Biological and Clinical Statistics (OBCS) is a community-based open source ontology in the domain of biological and clinical statistics. OBCS is a timely ontology that represents statistics-related terms and their relations in a rigorous fashion, facilitates standard data analysis and integration, and supports reproducible biological and clinical research.

  3. An ontology of scientific experiments

    PubMed Central

    Soldatova, Larisa N; King, Ross D

    2006-01-01

    The formal description of experiments for efficient analysis, annotation and sharing of results is a fundamental part of the practice of science. Ontologies are required to achieve this objective. A few subject-specific ontologies of experiments currently exist. However, despite the unity of scientific experimentation, no general ontology of experiments exists. We propose the ontology EXPO to meet this need. EXPO links SUMO (the Suggested Upper Merged Ontology) with subject-specific ontologies of experiments by formalizing the generic concepts of experimental design, methodology and results representation. EXPO is expressed in the W3C standard ontology language OWL-DL. We demonstrate the utility of EXPO and its ability to describe different experimental domains by applying it to two experiments: one in high-energy physics and the other in phylogenetics. The use of EXPO made the goals and structure of these experiments more explicit, revealed ambiguities, and highlighted an unexpected similarity. We conclude that EXPO is of general value in describing experiments and is a step towards the formalization of science. PMID:17015305

  4. linkedISA: semantic representation of ISA-Tab experimental metadata.

    PubMed

    González-Beltrán, Alejandra; Maguire, Eamonn; Sansone, Susanna-Assunta; Rocca-Serra, Philippe

    2014-01-01

    Reporting and sharing experimental metadata, such as the experimental design, characteristics of the samples, and procedures applied, along with the analysis results, in a standardised manner ensures that datasets are comprehensible and, in principle, reproducible, comparable and reusable. Furthermore, sharing datasets in formats designed for consumption by both humans and machines will also maximize their use. The Investigation/Study/Assay (ISA) open source metadata tracking framework facilitates standards-compliant collection, curation, visualization, storage and sharing of datasets, leveraging other platforms to enable analysis and publication. The ISA software suite includes several components used in an increasingly diverse set of life science and biomedical domains; it is underpinned by a general-purpose format, ISA-Tab, and conversions exist into formats required by public repositories. While ISA-Tab works well mainly as a human-readable format, we have also implemented a linked data approach to semantically define the ISA-Tab syntax. We present a semantic web representation of the ISA-Tab syntax that complements ISA-Tab's syntactic interoperability with semantic interoperability. We introduce the linkedISA conversion tool from ISA-Tab to the Resource Description Framework (RDF), supporting mappings from the ISA syntax to multiple community-defined, open ontologies and capitalising on user-provided ontology annotations in the experimental metadata. We describe insights from the implementation and how annotations can be expanded driven by the metadata. We applied the conversion tool as part of Bio-GraphIIn, a web-based application supporting integration of the semantically-rich experimental descriptions.
Designed in a user-friendly manner, the Bio-GraphIIn interface hides most of the complexity from the users, exposing a familiar tabular view of the experimental description to allow seamless interaction with the RDF representation, and visualising descriptors to drive queries over the semantic representation of the experimental design. In addition, we defined queries over the linkedISA RDF representation and demonstrated its use on the linkedISA conversion of datasets from Nature's Scientific Data online publication. Our linked data approach has allowed us to: 1) make the ISA-Tab semantics explicit and machine-processable, 2) exploit the existing ontology-based annotations in the ISA-Tab experimental descriptions, 3) augment the ISA-Tab syntax with new descriptive elements, and 4) visualise and query elements related to the experimental design. Reasoning over ISA-Tab metadata and associated data will facilitate data integration and knowledge discovery.
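
    At its simplest, a tabular-to-RDF conversion like linkedISA's maps each row cell to a subject-predicate-object triple. A deliberately simplified sketch with placeholder predicate names (not linkedISA's actual ontology mappings):

```python
def row_to_triples(sample_id, attributes):
    """Turn one ISA-Tab-like sample row into (subject, predicate, object)
    triples; predicates here are invented placeholders, whereas linkedISA
    maps columns to terms from community-defined ontologies."""
    return [(sample_id, f"has_{key}", value)
            for key, value in sorted(attributes.items())]
```

    A real conversion would emit full IRIs and serialize the triples with an RDF library such as rdflib.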

  5. Semantic representation of reported measurements in radiology.

    PubMed

    Oberkampf, Heiner; Zillner, Sonja; Overton, James A; Bauer, Bernhard; Cavallaro, Alexander; Uder, Michael; Hammon, Matthias

    2016-01-22

    In radiology, a vast amount of diverse data is generated, and unstructured reporting is standard. Hence, much useful information is trapped in free-text form, and often lost in translation and transmission. One relevant source of free-text data consists of reports covering the assessment of changes in tumor burden, which are needed for the evaluation of cancer treatment success. Any change of lesion size is a critical factor in follow-up examinations. It is difficult to retrieve specific information from unstructured reports and to compare them over time. Therefore, a prototype was implemented that demonstrates the structured representation of findings, allowing selective review in consecutive examinations and thus more efficient comparison over time. We developed a semantic Model for Clinical Information (MCI) based on existing ontologies from the Open Biological and Biomedical Ontologies (OBO) library. MCI is used for the integrated representation of measured image findings and medical knowledge about the normal size of anatomical entities. An integrated view of the radiology findings is realized by a prototype implementation of a ReportViewer. Further, RECIST (Response Evaluation Criteria In Solid Tumors) guidelines are implemented by SPARQL queries on MCI. The evaluation is based on two data sets of German radiology reports: An oncologic data set consisting of 2584 reports on 377 lymphoma patients and a mixed data set consisting of 6007 reports on diverse medical and surgical patients. All measurement findings were automatically classified as abnormal/normal using formalized medical background knowledge, i.e., knowledge that has been encoded into an ontology. A radiologist evaluated 813 classifications as correct or incorrect. All unclassified findings were evaluated as incorrect. The proposed approach allows the automatic classification of findings with an accuracy of 96.4 % for oncologic reports and 92.9 % for mixed reports. 
The ReportViewer permits efficient comparison of measured findings from consecutive examinations. The implementation of RECIST guidelines with SPARQL enhances the quality of the selection and comparison of target lesions as well as the corresponding treatment response evaluation. The developed MCI enables an accurate integrated representation of reported measurements and medical knowledge. Thus, measurements can be automatically classified and integrated in different decision processes. The structured representation is suitable for improved integration of clinical findings during decision-making. The proposed ReportViewer provides a longitudinal overview of the measurements.
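
    The automatic normal/abnormal classification described above compares each measured finding against encoded knowledge of normal anatomical sizes. A toy sketch with invented threshold values standing in for the ontology's background knowledge:

```python
# Illustrative thresholds only; the actual MCI encodes normal-size
# knowledge for anatomical entities as ontology axioms.
NORMAL_MAX_MM = {"lymph node": 10.0, "spleen": 120.0}

def classify_finding(entity, size_mm):
    """Classify a measured finding as normal/abnormal against the
    encoded maximum normal size, or unclassified if none is known."""
    limit = NORMAL_MAX_MM.get(entity)
    if limit is None:
        return "unclassified"
    return "abnormal" if size_mm > limit else "normal"
```

    In the paper this comparison is done by querying the ontology (e.g. with SPARQL) rather than a hard-coded table, which is what allows the RECIST logic to sit on top of the same representation.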

  6. Clinical data integration model. Core interoperability ontology for research using primary care data.

    PubMed

    Ethier, J-F; Curcin, V; Barton, A; McGilchrist, M M; Bastiaens, H; Andreasson, A; Rossiter, J; Zhao, L; Arvanitis, T N; Taweel, A; Delaney, B C; Burgun, A

    2015-01-01

    This article is part of the Focus Theme of METHODS of Information in Medicine on "Managing Interoperability and Complexity in Health Systems". Primary care data is the single richest source of routine health care data. However its use, both in research and clinical work, often requires data from multiple clinical sites, clinical trials databases and registries. Data integration and interoperability are therefore of utmost importance. TRANSFoRm's general approach relies on a unified interoperability framework, described in a previous paper. We developed a core ontology for an interoperability framework based on data mediation. This article presents how such an ontology, the Clinical Data Integration Model (CDIM), can be designed to support, in conjunction with appropriate terminologies, biomedical data federation within TRANSFoRm, an EU FP7 project that aims to develop the digital infrastructure for a learning healthcare system in European Primary Care. TRANSFoRm utilizes a unified structural/terminological interoperability framework, based on the local-as-view mediation paradigm. Such an approach mandates the global information model to describe the domain of interest independently of the data sources to be explored. Following a requirement analysis process, no ontology focusing on primary care research was identified, and thus we designed a realist ontology based on Basic Formal Ontology to support our framework in collaboration with various terminologies used in primary care. The resulting ontology has 549 classes and 82 object properties and is used to support data integration for TRANSFoRm's use cases. Concepts identified by researchers were successfully expressed in queries using CDIM and pertinent terminologies. As an example, we illustrate how, in TRANSFoRm, the Query Formulation Workbench can capture eligibility criteria in a computable representation, which is based on CDIM.
A unified mediation approach to semantic interoperability provides a flexible and extensible framework for all types of interaction between health record systems and research systems. CDIM, as core ontology of such an approach, enables simplicity and consistency of design across the heterogeneous software landscape and can support the specific needs of EHR-driven phenotyping research using primary care data.

  7. Expressing Biomedical Ontologies in Natural Language for Expert Evaluation.

    PubMed

    Amith, Muhammad; Manion, Frank J; Harris, Marcelline R; Zhang, Yaoyun; Xu, Hua; Tao, Cui

    2017-01-01

    We report on a study of our custom Hootation software for the purposes of assessing its ability to produce clear and accurate natural language phrases from axioms embedded in three biomedical ontologies. Using multiple domain experts and three discrete rating scales, we evaluated the tool on clarity of the natural language produced, fidelity of the natural language produced from the ontology to the axiom, and the fidelity of the domain knowledge represented by the axioms. Results show that Hootation provided relatively clear natural language equivalents for a select set of OWL axioms, although the clarity of statements hinges on the accuracy and representation of axioms in the ontology.
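
    Axiom verbalization of the kind Hootation performs can be pictured as template filling per axiom type. A minimal sketch with invented templates, not the tool's actual rules (which must handle far more OWL constructs):

```python
def verbalize(subject, predicate, obj):
    """Render one simple axiom as an English sentence using a
    per-axiom-type template; falls back to a generic rendering."""
    templates = {
        "subClassOf": "Every {s} is a {o}.",
        "disjointWith": "No {s} is a {o}.",
        "equivalentTo": "{s} and {o} are the same class.",
    }
    return templates.get(predicate, "{s} {p} {o}.").format(
        s=subject, o=obj, p=predicate)
```

    The clarity problems the study reports arise precisely where axioms are more complex than such templates, e.g. nested restrictions and anonymous class expressions.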

  8. Ontology development for provenance tracing in National Climate Assessment of the US Global Change Research Program

    NASA Astrophysics Data System (ADS)

    Fu, Linyun; Ma, Xiaogang; Zheng, Jin; Goldstein, Justin; Duggan, Brian; West, Patrick; Aulenbach, Steve; Tilmes, Curt; Fox, Peter

    2014-05-01

    This poster will show how we used a case-driven iterative methodology to develop an ontology to represent the content structure and the associated provenance information in a National Climate Assessment (NCA) report of the US Global Change Research Program (USGCRP). We applied the W3C PROV-O ontology to implement a formal representation of provenance. We argue that the use case-driven, iterative development process and the application of a formal provenance ontology help efficiently incorporate domain knowledge from earth and environmental scientists in a well-structured model interoperable in the context of the Web of Data.

  9. An interactive ontology-driven information system for simulating background radiation and generating scenarios for testing special nuclear materials detection algorithms

    DOE PAGES

    Sorokine, Alexandre; Schlicher, Bob G.; Ward, Richard C.; ...

    2015-05-22

    This paper describes an original approach, incorporating ontologies, to generating scenarios for testing the algorithms used to detect special nuclear materials (SNM). Separating the signal of SNM from the background requires sophisticated algorithms. To assist in developing such algorithms, there is a need for scenarios that capture a very wide range of variables affecting the detection process, depending on the type of detector being used. To provide such a capability, we developed an ontology-driven information system (ODIS) for generating scenarios that can be used in testing algorithms for SNM detection. The ontology-driven scenario generator (ODSG) is an ODIS based on information supplied by subject matter experts and other documentation. The details of the creation of the ontology, the development of the ontology-driven information system, and the design of the web user interface (UI) are presented along with specific examples of scenarios generated using the ODSG. We demonstrate that the paradigm behind the ODSG is capable of addressing the problem of semantic complexity at both the user and developer levels. Compared to traditional approaches, an ODIS provides benefits such as faithful representation of the users' domain conceptualization, simplified management of very large and semantically diverse datasets, and the ability to handle frequent changes to the application and the UI. Furthermore, the approach makes possible the generation of a much larger number of specific scenarios based on limited user-supplied information.

  10. Conceptual model of knowledge base system

    NASA Astrophysics Data System (ADS)

    Naykhanova, L. V.; Naykhanova, I. V.

    2018-05-01

    In this article, a conceptual model of a knowledge-based system of the production-system type is presented. The production system is intended to automate problems whose solution is rigidly conditioned by the legislation. A core component of the system is a knowledge base, which consists of a facts set, a rules set, a cognitive map and an ontology. The cognitive map is developed to implement the control strategy, and the ontology serves as the explanation mechanism. Representing knowledge about the recognition of a situation in the form of rules allows knowledge of the pension legislation to be described. This approach provides flexibility, originality and scalability: when the legislation changes, only the rules set needs to be changed, so a change of the legislation would not be a big problem. The main advantage of the system is that it can easily be adapted to changes in the legislation.
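The facts-set/rules-set design described above can be sketched as a toy forward-chaining production system; the pension-domain facts and rule names below are invented for illustration and are not taken from the actual system.

```python
# Toy production system: a facts set plus a rules set. Changing the
# legislation corresponds to editing the `rules` list only, which is the
# flexibility the abstract describes. Fact names are hypothetical.
facts = {"age>=60", "insurance_years>=15"}

# Each rule: (frozenset of required facts, fact to conclude).
rules = [
    (frozenset({"age>=60", "insurance_years>=15"}), "eligible_old_age_pension"),
    (frozenset({"eligible_old_age_pension"}), "calculate_pension_amount"),
]

def forward_chain(facts, rules):
    """Apply rules repeatedly until no new facts can be derived."""
    derived = set(facts)
    changed = True
    while changed:
        changed = False
        for conditions, conclusion in rules:
            if conditions <= derived and conclusion not in derived:
                derived.add(conclusion)
                changed = True
    return derived
```

Note how the second rule fires only after the first one has added its conclusion to the facts set, which is the chaining behavior a production system relies on.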

  11. The chemical information ontology: provenance and disambiguation for chemical data on the biological semantic web.

    PubMed

    Hastings, Janna; Chepelev, Leonid; Willighagen, Egon; Adams, Nico; Steinbeck, Christoph; Dumontier, Michel

    2011-01-01

    Cheminformatics is the application of informatics techniques to solve chemical problems in silico. There are many areas in biology where cheminformatics plays an important role in computational research, including metabolism, proteomics, and systems biology. One critical aspect in the application of cheminformatics in these fields is the accurate exchange of data, which is increasingly accomplished through the use of ontologies. Ontologies are formal representations of objects and their properties using a logic-based ontology language. Many such ontologies are currently being developed to represent objects across all the domains of science. Ontologies enable the definition, classification, and support for querying objects in a particular domain, enabling intelligent computer applications to be built which support the work of scientists both within the domain of interest and across interrelated neighbouring domains. Modern chemical research relies on computational techniques to filter and organise data to maximise research productivity. The objects which are manipulated in these algorithms and procedures, as well as the algorithms and procedures themselves, enjoy a kind of virtual life within computers. We will call these information entities. Here, we describe our work in developing an ontology of chemical information entities, with a primary focus on data-driven research and the integration of calculated properties (descriptors) of chemical entities within a semantic web context. Our ontology distinguishes algorithmic, or procedural information from declarative, or factual information, and renders of particular importance the annotation of provenance to calculated data. The Chemical Information Ontology is being developed as an open collaborative project. More details, together with a downloadable OWL file, are available at http://code.google.com/p/semanticchemistry/ (license: CC-BY-SA).

  12. The Chemical Information Ontology: Provenance and Disambiguation for Chemical Data on the Biological Semantic Web

    PubMed Central

    Hastings, Janna; Chepelev, Leonid; Willighagen, Egon; Adams, Nico; Steinbeck, Christoph; Dumontier, Michel

    2011-01-01

    Cheminformatics is the application of informatics techniques to solve chemical problems in silico. There are many areas in biology where cheminformatics plays an important role in computational research, including metabolism, proteomics, and systems biology. One critical aspect in the application of cheminformatics in these fields is the accurate exchange of data, which is increasingly accomplished through the use of ontologies. Ontologies are formal representations of objects and their properties using a logic-based ontology language. Many such ontologies are currently being developed to represent objects across all the domains of science. Ontologies enable the definition, classification, and support for querying objects in a particular domain, enabling intelligent computer applications to be built which support the work of scientists both within the domain of interest and across interrelated neighbouring domains. Modern chemical research relies on computational techniques to filter and organise data to maximise research productivity. The objects which are manipulated in these algorithms and procedures, as well as the algorithms and procedures themselves, enjoy a kind of virtual life within computers. We will call these information entities. Here, we describe our work in developing an ontology of chemical information entities, with a primary focus on data-driven research and the integration of calculated properties (descriptors) of chemical entities within a semantic web context. Our ontology distinguishes algorithmic, or procedural information from declarative, or factual information, and renders of particular importance the annotation of provenance to calculated data. The Chemical Information Ontology is being developed as an open collaborative project. More details, together with a downloadable OWL file, are available at http://code.google.com/p/semanticchemistry/ (license: CC-BY-SA). PMID:21991315

  13. Pharmacogenomic knowledge representation, reasoning and genome-based clinical decision support based on OWL 2 DL ontologies.

    PubMed

    Samwald, Matthias; Miñarro Giménez, Jose Antonio; Boyce, Richard D; Freimuth, Robert R; Adlassnig, Klaus-Peter; Dumontier, Michel

    2015-02-22

    Every year, hundreds of thousands of patients experience treatment failure or adverse drug reactions (ADRs), many of which could be prevented by pharmacogenomic testing. However, the primary knowledge needed for clinical pharmacogenomics is currently dispersed over disparate data structures and captured in unstructured or semi-structured formalizations. This is a source of potential ambiguity and complexity, making it difficult to create reliable information technology systems for enabling clinical pharmacogenomics. We developed Web Ontology Language (OWL) ontologies and automated reasoning methodologies to meet the following goals: 1) provide a simple and concise formalism for representing pharmacogenomic knowledge, 2) find errors and insufficient definitions in pharmacogenomic knowledge bases, 3) automatically assign alleles and phenotypes to patients, 4) match patients to clinically appropriate pharmacogenomic guidelines and clinical decision support messages and 5) facilitate the detection of inconsistencies and overlaps between pharmacogenomic treatment guidelines from different sources. We evaluated different reasoning systems and tested our approach with a large collection of publicly available genetic profiles. Our methodology proved to be a novel and useful choice for representing, analyzing and using pharmacogenomic data. The Genomic Clinical Decision Support (Genomic CDS) ontology represents 336 SNPs with 707 variants; 665 haplotypes related to 43 genes; 22 rules related to drug-response phenotypes; and 308 clinical decision support rules. OWL reasoning identified CDS rules with overlapping target populations but differing treatment recommendations. Only a modest number of clinical decision support rules were triggered for a collection of 943 public genetic profiles. We found significant performance differences across available OWL reasoners.
The ontology-based framework we developed can be used to represent, organize and reason over the growing wealth of pharmacogenomic knowledge, as well as to identify errors, inconsistencies and insufficient definitions in source data sets or individual patient data. Our study highlights both advantages and potential practical issues with such an ontology-based approach.
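As an illustration of the kind of matching the abstract describes (assigning a phenotype from a patient's variants, then triggering a decision-support message), here is a minimal dictionary-based sketch; the diplotype-to-phenotype table and the message text are simplified stand-ins for the OWL reasoning the Genomic CDS ontology actually performs.

```python
# Simplified pharmacogenomic matching. The CYP2C19 *2/*2 "poor
# metabolizer" example follows well-known pharmacogenomic guidance, but
# the tables below are illustrative, not the ontology's actual rules.
patient_variants = {"CYP2C19": ("*2", "*2")}

# Diplotype -> phenotype assignments (goal 3 in the abstract).
phenotype_rules = {
    ("CYP2C19", ("*2", "*2")): "CYP2C19 poor metabolizer",
}

# Phenotype -> decision-support message (goal 4 in the abstract).
cds_rules = {
    "CYP2C19 poor metabolizer": "Consider an alternative to clopidogrel.",
}

def decision_support(variants):
    """Return the CDS messages triggered by a patient's variant profile."""
    messages = []
    for gene, diplotype in variants.items():
        phenotype = phenotype_rules.get((gene, tuple(sorted(diplotype))))
        if phenotype and phenotype in cds_rules:
            messages.append(cds_rules[phenotype])
    return messages
```

The advantage of doing this in OWL rather than in hand-written tables, as the abstract notes, is that a reasoner can also detect rules with overlapping target populations but conflicting recommendations.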

  14. Using ontology-based semantic similarity to facilitate the article screening process for systematic reviews.

    PubMed

    Ji, Xiaonan; Ritter, Alan; Yen, Po-Yin

    2017-05-01

    Systematic Reviews (SRs) are utilized to summarize evidence from high quality studies and are considered the preferred source of evidence-based practice (EBP). However, conducting SRs can be time and labor intensive due to the high cost of article screening. In previous studies, we demonstrated utilizing established (lexical) article relationships to facilitate the identification of relevant articles in an efficient and effective manner. Here we propose to enhance article relationships with background semantic knowledge derived from Unified Medical Language System (UMLS) concepts and ontologies. We developed a pipelined semantic concepts representation process to represent articles from an SR into an optimized and enriched semantic space of UMLS concepts. Throughout the process, we leveraged concepts and concept relations encoded in biomedical ontologies (SNOMED-CT and MeSH) within the UMLS framework to extract concept features of each article. Article relationships (similarities) were established and represented as a semantic article network, which was readily applied to assist with the article screening process. We incorporated the concept of active learning to simulate an interactive article recommendation process, and evaluated the performance on 15 completed SRs. We used work saved over sampling at 95% recall (WSS95) as the performance measure. We compared the WSS95 performance of our ontology-based semantic approach to existing lexical feature approaches and corpus-based semantic approaches, and found that we had better WSS95 in most SRs. We also had the highest average WSS95 of 43.81% and the highest total WSS95 of 657.18%. We demonstrated using ontology-based semantics to facilitate the identification of relevant articles for SRs. Effective concepts and concept relations derived from UMLS ontologies can be utilized to establish article semantic relationships. 
Our approach showed promising performance and can readily generalize to other SR topics in the biomedical domain. Copyright © 2017 Elsevier Inc. All rights reserved.
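The concept-based article similarity described above can be sketched as cosine similarity over bags of concept identifiers; the CUIs below are examples, and the actual pipeline additionally enriches each article's concepts via SNOMED-CT and MeSH relations before computing similarities.

```python
# Each article is reduced to a bag of concept identifiers (UMLS CUIs in
# the paper) with counts; article similarity is cosine over those bags.
import math
from collections import Counter

article_a = Counter({"C0011849": 3, "C0020538": 1})  # example CUIs
article_b = Counter({"C0011849": 2, "C0027051": 2})

def cosine(a, b):
    """Cosine similarity between two concept-count vectors."""
    dot = sum(a[k] * b[k] for k in a.keys() & b.keys())
    norm = (math.sqrt(sum(v * v for v in a.values()))
            * math.sqrt(sum(v * v for v in b.values())))
    return dot / norm if norm else 0.0
```

Pairwise similarities computed this way form the "semantic article network" the abstract mentions, which an active-learning loop can then traverse to recommend the next articles to screen.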

  15. Knowledge representation of rock plastic deformation

    NASA Astrophysics Data System (ADS)

    Davarpanah, Armita; Babaie, Hassan

    2017-04-01

    The first iteration of the Rock Plastic Deformation (RPD) ontology models the semantics of the dynamic physical and chemical processes and mechanisms that occur during the deformation of generally inhomogeneous polycrystalline rocks. The ontology represents the knowledge about the production, reconfiguration, displacement, and consumption of the structural components that participate in these processes. It also formalizes the properties that are known by the structural geology and metamorphic petrology communities to hold between the instances of the spatial components and the dynamic processes, the state and system variables, the empirical flow laws that relate the variables, and the laboratory testing conditions and procedures. The modeling of some of the complex physico-chemical, mathematical, and informational concepts and relations of the RPD ontology is based on the class and property structure of some well-established top-level ontologies. The flexible and extensible design of the initial version of the RPD ontology allows it to develop into a model that more fully represents the knowledge of plastic deformation of rocks at different spatial and temporal scales in the laboratory and in the solid Earth. The ontology will be used to annotate datasets related to the microstructures and the physical-chemical processes that involve them. This will help the autonomous and globally distributed communities of experimental structural geologists and metamorphic petrologists to coherently and uniformly distribute, discover, access, share, and use their data through automated reasoning and enhanced data integration and software interoperability.
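The "empirical flow laws" mentioned above commonly take the power-law creep form, strain rate = A * stress^n * exp(-Q / (R*T)); a minimal sketch, with round illustrative parameter values rather than measured data:

```python
# Power-law creep, a standard empirical flow law for plastic deformation
# of rocks. A, n and Q below are round illustrative numbers, not values
# for any particular rock.
import math

R = 8.314  # gas constant, J/(mol*K)

def power_law_creep(stress_mpa, temp_k, A=1e-5, n=3.0, Q=250e3):
    """Steady-state strain rate (1/s): A * stress^n * exp(-Q / (R*T))."""
    return A * stress_mpa ** n * math.exp(-Q / (R * temp_k))
```

An ontology formalizing such laws would relate the state variables (differential stress, temperature) to the material parameters (A, n, Q) and the testing conditions under which they were fitted.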

  16. Ontological Encoding of GeoSciML and INSPIRE geological standard vocabularies and schemas: application to geological mapping

    NASA Astrophysics Data System (ADS)

    Lombardo, Vincenzo; Piana, Fabrizio; Mimmo, Dario; Fubelli, Giandomenico; Giardino, Marco

    2016-04-01

    Encoding of geologic knowledge in formal languages is an ambitious task, aiming at the interoperability and organic representation of geological data, and semantic characterization of geologic maps. Initiatives such as GeoScience Markup Language (last version is GeoSciML 4, 2015[1]) and INSPIRE "Data Specification on Geology" (an operative simplification of GeoSciML, last version is 3.0 rc3, 2013[2]), as well as the recent terminological shepherding of the Geoscience Terminology Working Group (GTWG[3]) have been promoting information exchange of the geologic knowledge. There have also been limited attempts to encode the knowledge in a machine-readable format, especially in the lithology domain (see e.g. the CGI_Lithology ontology[4]), but a comprehensive ontological model that connects these knowledge sources is still lacking. This presentation concerns the "OntoGeonous" initiative, which aims at encoding the geologic knowledge, as expressed through the standard vocabularies, schemas and data models mentioned above, through a number of interlinked computational ontologies, based on the languages of the Semantic Web and the paradigm of Linked Open Data. The initiative proceeds in parallel with a concrete case study, concerning the setting up of a synthetic digital geological map of the Piemonte region (NW Italy), named "GEOPiemonteMap" (developed by the CNR Institute of Geosciences and Earth Resources, CNR IGG, Torino), where the description and classification of GeologicUnits has been supported by the modeling and implementation of the ontologies. 
We have devised a tripartite ontological model called OntoGeonous that consists of: 1) an ontology of the geologic features (in particular, GeologicUnit, GeomorphologicFeature, and GeologicStructure[5], modeled from the definitions and UML schemata of CGI vocabularies[6], GeoScienceML and INSPIRE, and aligned with the Planetary realm of NASA SWEET ontology[7]), 2) an ontology of the Earth materials (as defined by the SimpleLithology CGI vocabulary and aligned as a subclass of the Substance class in NASA SWEET ontology), and 3) an ontology of the MappedFeatures (as defined in the Representation sub-taxonomy of the NASA SWEET ontology). The latter correspond to the concrete elements of the map, with their geometry (polygons, lines) and geographical coordinates. The ontology model has been developed by taking into account applications primarily concerning the needs of geological mapping; nevertheless, the model is general enough to be applied to other contexts. In particular, we show how the automatic reasoning capabilities of the ontology system can be employed in tasks of unit definition and input filling of the map database and for supporting geologists in thematic re-classification of the map instances (e.g. for coloring tasks). ---------------------------------------- [1] http://www.geosciml.org [2] http://inspire.jrc.ec.europa.eu/documents/Data_Specifications/INSPIRE_DataSpecification_GE_v3.0rc3.pdf [3] http://www.cgi-iugs.org/tech_collaboration/geoscience_terminology_working_group.html [4] https://www.seegrid.csiro.au/subversion/CGI_CDTGVocabulary/trunk/OwlWork/CGI_Lithology.owl [5] We are currently neglecting the encoding of the geologic events, left as a future work. [6] http://resource.geosciml.org/vocabulary/cgi/201211/ [7] Web site: https://sweet.jpl.nasa.gov, Di Giuseppe et al., 2013, SWEET ontology coverage for earth system sciences, http://www.ics.uci.edu/~ndigiuse/Nicholas_DiGiuseppe/Research_files/digiuseppe14.pdf; S. Barahmand et al., 2009, A Survey on SWEET Ontologies and their Applications, http://www-scf.usc.edu/~taheriya/reports/csci586-report.pdf

  17. Panacea, a semantic-enabled drug recommendations discovery framework.

    PubMed

    Doulaverakis, Charalampos; Nikolaidis, George; Kleontas, Athanasios; Kompatsiaris, Ioannis

    2014-03-06

    Personalized drug prescription can benefit from the use of intelligent information management and sharing. International standard classifications and terminologies have been developed in order to provide unique and unambiguous information representation. Such standards can be used as the basis of automated decision support systems for providing drug-drug and drug-disease interaction discovery. Additionally, Semantic Web technologies have been proposed in earlier works, in order to support such systems. The paper presents Panacea, a semantic framework capable of offering drug-drug and drug-disease interaction discovery. For enabling this kind of service, medical information and terminology had to be translated to ontological terms and be appropriately coupled with medical knowledge of the field. International standard classifications and terminologies provide the backbone of the common representation of medical data, while the medical knowledge of drug interactions is represented by a rule base which makes use of the aforementioned standards. Representation is based on a lightweight ontology. A layered reasoning approach is implemented where at the first layer ontological inference is used in order to discover underlying knowledge, while at the second layer a two-step rule selection strategy is followed, resulting in a computationally efficient reasoning approach. Details of the system architecture are presented while also giving an outline of the difficulties that had to be overcome. Panacea is evaluated both in terms of quality of recommendations against real clinical data and performance. The evaluation of recommendation quality gave useful insights regarding requirements for real world deployment and revealed several parameters that affected the recommendation results. 
Performance-wise, Panacea is compared to a previous published work by the authors, a service for drug recommendations named GalenOWL, and presents their differences in modeling and approach to the problem, while also pinpointing the advantages of Panacea. Overall, the paper presents a framework for providing an efficient drug recommendations service where Semantic Web technologies are coupled with traditional business rule engines.

  18. Modeling functional neuroanatomy for an anatomy information system.

    PubMed

    Niggemann, Jörg M; Gebert, Andreas; Schulz, Stefan

    2008-01-01

    Existing neuroanatomical ontologies, databases and information systems, such as the Foundational Model of Anatomy (FMA), represent outgoing connections from brain structures, but cannot represent the "internal wiring" of structures and as such, cannot distinguish between different independent connections from the same structure. Thus, a fundamental aspect of Neuroanatomy, the functional pathways and functional systems of the brain such as the pupillary light reflex system, is not adequately represented. This article identifies underlying anatomical objects which are the source of independent connections (collections of neurons) and uses these as basic building blocks to construct a model of functional neuroanatomy and its functional pathways. The basic representational elements of the model are unnamed groups of neurons or groups of neuron segments. These groups, their relations to each other, and the relations to the objects of macroscopic anatomy are defined. The resulting model can be incorporated into the FMA. The capabilities of the presented model are compared to the FMA and the Brain Architecture Management System (BAMS). Internal wiring as well as functional pathways can correctly be represented and tracked. This model bridges the gap between representations of single neurons and their parts on the one hand and representations of spatial brain structures and areas on the other hand. It is capable of drawing correct inferences on pathways in a nervous system. The object and relation definitions are related to the Open Biomedical Ontology effort and its relation ontology, so that this model can be further developed into an ontology of neuronal functional systems.
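The article's idea of unnamed neuron groups as basic building blocks can be sketched as a small graph in which independent connections become distinct edges, so that a functional pathway such as the pupillary light reflex is recovered by traversal; the node names below are simplified labels, not identifiers from the FMA or BAMS.

```python
# Groups of neurons as graph nodes; each independent connection from a
# structure is a separate edge, so "internal wiring" is representable.
connections = {
    "retina.ganglion_group": ["pretectum.group_a"],
    "pretectum.group_a": ["edinger_westphal.group"],
    "edinger_westphal.group": ["ciliary_ganglion.group"],
    "ciliary_ganglion.group": ["iris.sphincter_muscle"],
}

def pathway(start, goal, seen=None):
    """Depth-first search for a functional pathway between two groups."""
    seen = seen if seen is not None else set()
    if start == goal:
        return [start]
    seen.add(start)
    for nxt in connections.get(start, []):
        if nxt not in seen:
            rest = pathway(nxt, goal, seen)
            if rest:
                return [start] + rest
    return None
```

A structure-level model that only records "pretectum projects somewhere" could not distinguish which of several outgoing connections carries the reflex; edges between neuron groups make that distinction explicit, which is the inference capability the article claims.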

  19. PDON: Parkinson's disease ontology for representation and modeling of the Parkinson's disease knowledge domain.

    PubMed

    Younesi, Erfan; Malhotra, Ashutosh; Gündel, Michaela; Scordis, Phil; Kodamullil, Alpha Tom; Page, Matt; Müller, Bernd; Springstubbe, Stephan; Wüllner, Ullrich; Scheller, Dieter; Hofmann-Apitius, Martin

    2015-09-22

    Despite the unprecedented and increasing amount of data, relatively little progress has been made in molecular characterization of mechanisms underlying Parkinson's disease. In the area of Parkinson's research, there is a pressing need to integrate various pieces of information into a meaningful context of presumed disease mechanism(s). Disease ontologies provide a novel means for organizing, integrating, and standardizing the knowledge domains specific to disease in a compact, formalized and computer-readable form and serve as a reference for knowledge exchange or systems modeling of disease mechanism. The Parkinson's disease ontology was built according to the life cycle of ontology building. Structural, functional, and expert evaluation of the ontology was performed to ensure the quality and usability of the ontology. A novelty metric has been introduced to measure the gain of new knowledge using the ontology. Finally, a cause-and-effect model was built around PINK1 and two gene expression studies from the Gene Expression Omnibus database were re-annotated to demonstrate the usability of the ontology. The Parkinson's disease ontology with a subclass-based taxonomic hierarchy covers the broad spectrum of major biomedical concepts from molecular to clinical features of the disease, and also reflects different views on disease features held by molecular biologists, clinicians and drug developers. The current version of the ontology contains 632 concepts, which are organized under nine views. The structural evaluation showed the balanced dispersion of concept classes throughout the ontology. The functional evaluation demonstrated that the ontology-driven literature search could gain novel knowledge not present in the reference Parkinson's knowledge map. The ontology was able to answer specific questions related to Parkinson's when evaluated by experts. 
Finally, the added value of the Parkinson's disease ontology is demonstrated by ontology-driven modeling of PINK1 and re-annotation of gene expression datasets relevant to Parkinson's disease. The Parkinson's disease ontology delivers the knowledge domain of Parkinson's disease in a compact, computer-readable form, which can be further edited and enriched by the scientific community and also used to construct, represent and automatically extend Parkinson's-related computable models. A practical version of the Parkinson's disease ontology for browsing and editing can be publicly accessed at http://bioportal.bioontology.org/ontologies/PDON .

  20. Assessing the practice of biomedical ontology evaluation: Gaps and opportunities.

    PubMed

    Amith, Muhammad; He, Zhe; Bian, Jiang; Lossio-Ventura, Juan Antonio; Tao, Cui

    2018-04-01

    With the proliferation of heterogeneous health care data in the last three decades, biomedical ontologies and controlled biomedical terminologies play an increasingly important role in knowledge representation and management, data integration, natural language processing, as well as decision support for health information systems and biomedical research. Biomedical ontologies and controlled terminologies are intended to assure interoperability. Nevertheless, the quality of biomedical ontologies has hindered their applicability and subsequent adoption in real-world applications. Ontology evaluation is an integral part of ontology development and maintenance. In the biomedicine domain, ontology evaluation is often conducted by third parties as a quality assurance (or auditing) effort that focuses on identifying modeling errors and inconsistencies. In this work, we first organized four categorical schemes of ontology evaluation methods in the existing literature to create an integrated taxonomy. Further, to understand the ontology evaluation practice in the biomedicine domain, we reviewed a sample of 200 ontologies from the National Center for Biomedical Ontology (NCBO) BioPortal, the largest repository for biomedical ontologies, and observed that only 15 of these ontologies have documented evaluation in their corresponding inception papers. We then surveyed the recent quality assurance approaches for biomedical ontologies and their use. We also mapped these quality assurance approaches to the ontology evaluation criteria. It is our anticipation that ontology evaluation and quality assurance approaches will be more widely adopted in the development life cycle of biomedical ontologies. Copyright © 2018 Elsevier Inc. All rights reserved.

  1. Collaborative development of predictive toxicology applications

    PubMed Central

    2010-01-01

    OpenTox provides an interoperable, standards-based Framework for the support of predictive toxicology data management, algorithms, modelling, validation and reporting. It is relevant to satisfying the chemical safety assessment requirements of the REACH legislation as it supports access to experimental data, (Quantitative) Structure-Activity Relationship models, and toxicological information through an integrating platform that adheres to regulatory requirements and OECD validation principles. Initial research defined the essential components of the Framework including the approach to data access, schema and management, use of controlled vocabularies and ontologies, architecture, web service and communications protocols, and selection and integration of algorithms for predictive modelling. OpenTox provides end-user oriented tools to non-computational specialists, risk assessors, and toxicological experts in addition to Application Programming Interfaces (APIs) for developers of new applications. OpenTox actively supports public standards for data representation, interfaces, vocabularies and ontologies, Open Source approaches to core platform components, and community-based collaboration approaches, so as to progress system interoperability goals. The OpenTox Framework includes APIs and services for compounds, datasets, features, algorithms, models, ontologies, tasks, validation, and reporting which may be combined into multiple applications satisfying a variety of different user needs. OpenTox applications are based on a set of distributed, interoperable OpenTox API-compliant REST web services. The OpenTox approach to ontology allows for efficient mapping of complementary data coming from different datasets into a unifying structure having a shared terminology and representation. 
Two initial OpenTox applications are presented as an illustration of the potential impact of OpenTox for high-quality and consistent structure-activity relationship modelling of REACH-relevant endpoints: ToxPredict which predicts and reports on toxicities for endpoints for an input chemical structure, and ToxCreate which builds and validates a predictive toxicity model based on an input toxicology dataset. Because of the extensible nature of the standardised Framework design, barriers of interoperability between applications and content are removed, as the user may combine data, models and validation from multiple sources in a dependable and time-effective way. PMID:20807436

  2. Collaborative development of predictive toxicology applications.

    PubMed

    Hardy, Barry; Douglas, Nicki; Helma, Christoph; Rautenberg, Micha; Jeliazkova, Nina; Jeliazkov, Vedrin; Nikolova, Ivelina; Benigni, Romualdo; Tcheremenskaia, Olga; Kramer, Stefan; Girschick, Tobias; Buchwald, Fabian; Wicker, Joerg; Karwath, Andreas; Gütlein, Martin; Maunz, Andreas; Sarimveis, Haralambos; Melagraki, Georgia; Afantitis, Antreas; Sopasakis, Pantelis; Gallagher, David; Poroikov, Vladimir; Filimonov, Dmitry; Zakharov, Alexey; Lagunin, Alexey; Gloriozova, Tatyana; Novikov, Sergey; Skvortsova, Natalia; Druzhilovsky, Dmitry; Chawla, Sunil; Ghosh, Indira; Ray, Surajit; Patel, Hitesh; Escher, Sylvia

    2010-08-31

    OpenTox provides an interoperable, standards-based Framework for the support of predictive toxicology data management, algorithms, modelling, validation and reporting. It is relevant to satisfying the chemical safety assessment requirements of the REACH legislation as it supports access to experimental data, (Quantitative) Structure-Activity Relationship models, and toxicological information through an integrating platform that adheres to regulatory requirements and OECD validation principles. Initial research defined the essential components of the Framework including the approach to data access, schema and management, use of controlled vocabularies and ontologies, architecture, web service and communications protocols, and selection and integration of algorithms for predictive modelling. OpenTox provides end-user oriented tools to non-computational specialists, risk assessors, and toxicological experts in addition to Application Programming Interfaces (APIs) for developers of new applications. OpenTox actively supports public standards for data representation, interfaces, vocabularies and ontologies, Open Source approaches to core platform components, and community-based collaboration approaches, so as to progress system interoperability goals. The OpenTox Framework includes APIs and services for compounds, datasets, features, algorithms, models, ontologies, tasks, validation, and reporting which may be combined into multiple applications satisfying a variety of different user needs. OpenTox applications are based on a set of distributed, interoperable OpenTox API-compliant REST web services. 
The OpenTox approach to ontology allows for efficient mapping of complementary data coming from different datasets into a unifying structure having a shared terminology and representation. Two initial OpenTox applications are presented as an illustration of the potential impact of OpenTox for high-quality and consistent structure-activity relationship modelling of REACH-relevant endpoints: ToxPredict which predicts and reports on toxicities for endpoints for an input chemical structure, and ToxCreate which builds and validates a predictive toxicity model based on an input toxicology dataset. Because of the extensible nature of the standardised Framework design, barriers of interoperability between applications and content are removed, as the user may combine data, models and validation from multiple sources in a dependable and time-effective way.

  3. An ontology-based search engine for protein-protein interactions

    PubMed Central

    2010-01-01

    Background Keyword matching or ID matching is the most common searching method in a large database of protein-protein interactions. They are purely syntactic methods, and retrieve the records in the database that contain a keyword or ID specified in a query. Such syntactic search methods often retrieve too few search results or no results despite many potential matches present in the database. Results We have developed a new method for representing protein-protein interactions and the Gene Ontology (GO) using modified Gödel numbers. This representation is hidden from users but enables a search engine using the representation to efficiently search protein-protein interactions in a biologically meaningful way. Given a query protein with optional search conditions expressed in one or more GO terms, the search engine finds all the interaction partners of the query protein by unique prime factorization of the modified Gödel numbers representing the query protein and the search conditions. Conclusion Representing the biological relations of proteins and their GO annotations by modified Gödel numbers makes a search engine efficiently find all protein-protein interactions by prime factorization of the numbers. Keyword matching or ID matching search methods often miss the interactions involving a protein that has no explicit annotations matching the search condition, but our search engine retrieves such interactions as well if they satisfy the search condition with a more specific term in the ontology. PMID:20122195

  4. An ontology-based search engine for protein-protein interactions.

    PubMed

    Park, Byungkyu; Han, Kyungsook

    2010-01-18

    Keyword matching and ID matching are the most common search methods in a large database of protein-protein interactions. They are purely syntactic methods, and retrieve the records in the database that contain a keyword or ID specified in a query. Such syntactic search methods often retrieve too few search results or no results despite many potential matches present in the database. We have developed a new method for representing protein-protein interactions and the Gene Ontology (GO) using modified Gödel numbers. This representation is hidden from users but enables a search engine using the representation to efficiently search protein-protein interactions in a biologically meaningful way. Given a query protein with optional search conditions expressed in one or more GO terms, the search engine finds all the interaction partners of the query protein by unique prime factorization of the modified Gödel numbers representing the query protein and the search conditions. Representing the biological relations of proteins and their GO annotations by modified Gödel numbers enables a search engine to efficiently find all protein-protein interactions by prime factorization of the numbers. Keyword matching or ID matching search methods often miss interactions involving a protein that has no explicit annotations matching the search condition, but our search engine retrieves such interactions as well if they satisfy the search condition with a more specific term in the ontology.
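    The prime-encoding idea in these two records can be sketched in a few lines: assign each GO term a distinct prime, encode a protein's annotations (together with their ancestor terms, so a specific annotation also satisfies a general query) as a product of primes, and answer queries by divisibility. The hierarchy and proteins below are toy data invented for illustration; the authors' actual scheme also encodes the interactions themselves.

```python
from math import prod

# toy GO fragment: child term -> parent term (None = root)
PARENT = {"GO:0003677": "GO:0005488", "GO:0005488": None}

def ancestors(term):
    out = []
    while term is not None:
        out.append(term)
        term = PARENT[term]
    return out

# assign each GO term a distinct prime ("modified Goedel numbering")
PRIME = {"GO:0005488": 2, "GO:0003677": 3}

def encode(annotations):
    # product of the primes of every annotated term and its ancestors,
    # so an annotation with a specific term also satisfies general queries
    terms = {a for ann in annotations for a in ancestors(ann)}
    return prod(PRIME[t] for t in terms)

proteins = {"P1": encode(["GO:0003677"]), "P2": encode(["GO:0005488"])}

def matches(query_terms):
    # a protein satisfies the query iff the query's product divides its code
    q = prod(PRIME[t] for t in query_terms)
    return [name for name, code in proteins.items() if code % q == 0]

print(matches(["GO:0005488"]))  # ['P1', 'P2']: specific annotation matches too
print(matches(["GO:0003677"]))  # ['P1']
```

    Querying the general term retrieves P1 as well, even though P1 carries only the more specific annotation, which is exactly the behaviour the abstract claims syntactic keyword search misses.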

  5. The neurobehavior ontology: an ontology for annotation and integration of behavior and behavioral phenotypes.

    PubMed

    Gkoutos, Georgios V; Schofield, Paul N; Hoehndorf, Robert

    2012-01-01

    In recent years, considerable advances have been made toward our understanding of the genetic architecture of behavior and the physical, mental, and environmental influences that underpin behavioral processes. The provision of a method for recording behavior-related phenomena is necessary to enable integrative and comparative analyses of data and knowledge about behavior. The neurobehavior ontology facilitates the systematic representation of behavior and behavioral phenotypes, thereby improving the unification and integration of behavioral data in neuroscience research. Copyright © 2012 Elsevier Inc. All rights reserved.

  6. BiNGO: a Cytoscape plugin to assess overrepresentation of gene ontology categories in biological networks.

    PubMed

    Maere, Steven; Heymans, Karel; Kuiper, Martin

    2005-08-15

    The Biological Networks Gene Ontology tool (BiNGO) is an open-source Java tool to determine which Gene Ontology (GO) terms are significantly overrepresented in a set of genes. BiNGO can be used either on a list of genes, pasted as text, or interactively on subgraphs of biological networks visualized in Cytoscape. BiNGO maps the predominant functional themes of the tested gene set on the GO hierarchy, and takes advantage of Cytoscape's versatile visualization environment to produce an intuitive and customizable visual representation of the results.
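    The overrepresentation test behind tools like BiNGO is typically a one-tailed hypergeometric test. A minimal sketch of that test follows; the gene counts are toy numbers, and this is not BiNGO's actual code, which additionally applies multiple-testing correction across GO terms:

```python
from math import comb

def hypergeom_pvalue(N, K, n, k):
    """P(X >= k) for X ~ Hypergeometric(N, K, n): the chance of seeing at
    least k annotated genes in a test set of n genes drawn from a genome
    of N genes, K of which carry the GO term."""
    return sum(comb(K, i) * comb(N - K, n - i)
               for i in range(k, min(K, n) + 1)) / comb(N, n)

# toy numbers: 20-gene genome, 5 genes carry the term,
# and 4 of the 6 genes in the test set carry it
p = hypergeom_pvalue(20, 5, 6, 4)
print(round(p, 4))  # 0.0139
```

    A small p-value indicates the term appears in the gene set more often than random sampling from the genome would explain.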

  7. KA-SB: from data integration to large scale reasoning

    PubMed Central

    Roldán-García, María del Mar; Navas-Delgado, Ismael; Kerzazi, Amine; Chniber, Othmane; Molina-Castro, Joaquín; Aldana-Montes, José F

    2009-01-01

    Background The analysis of information in the biological domain is usually focused on the analysis of data from single on-line data sources. Unfortunately, studying a biological process requires having access to disperse, heterogeneous, autonomous data sources. In this context, an analysis of the information is not possible without the integration of such data. Methods KA-SB is a querying and analysis system for final users based on combining a data integration solution with a reasoner. Thus, the tool has been created with a process divided into two steps: 1) KOMF, the Khaos Ontology-based Mediator Framework, is used to retrieve information from heterogeneous and distributed databases; 2) the integrated information is crystallized in a (persistent and high performance) reasoner (DBOWL). This information can then be further analyzed (by means of querying and reasoning). Results In this paper we present a novel system that combines the use of a mediation system with the reasoning capabilities of a large scale reasoner to provide a way of finding new knowledge and of analyzing the integrated information from different databases, which is retrieved as a set of ontology instances. This tool uses a graphical query interface to build user queries easily, which shows a graphical representation of the ontology and allows users to build queries by clicking on the ontology concepts. Conclusion These kinds of systems (based on KOMF) will provide users with very large amounts of information (interpreted as ontology instances once retrieved), which cannot be managed using traditional main memory-based reasoners. We propose a process for creating persistent and scalable knowledge bases from sets of OWL instances obtained by integrating heterogeneous data sources with KOMF. This process has been applied to develop a demo tool, which uses the BioPax Level 3 ontology as the integration schema, and integrates UNIPROT, KEGG, CHEBI, BRENDA and SABIORK databases. PMID:19796402

  8. Dovetailing biology and chemistry: integrating the Gene Ontology with the ChEBI chemical ontology

    PubMed Central

    2013-01-01

    Background The Gene Ontology (GO) facilitates the description of the action of gene products in a biological context. Many GO terms refer to chemical entities that participate in biological processes. To facilitate accurate and consistent systems-wide biological representation, it is necessary to integrate the chemical view of these entities with the biological view of GO functions and processes. We describe a collaborative effort between the GO and the Chemical Entities of Biological Interest (ChEBI) ontology developers to ensure that the representation of chemicals in the GO is both internally consistent and in alignment with the chemical expertise captured in ChEBI. Results We have examined and integrated the ChEBI structural hierarchy into the GO resource through computationally-assisted manual curation of both GO and ChEBI. Our work has resulted in the creation of computable definitions of GO terms that contain fully defined semantic relationships to corresponding chemical terms in ChEBI. Conclusions The set of logical definitions using both the GO and ChEBI has already been used to automate aspects of GO development and has the potential to allow the integration of data across the domains of biology and chemistry. These logical definitions are available as an extended version of the ontology from http://purl.obolibrary.org/obo/go/extensions/go-plus.owl. PMID:23895341
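    The "computable definitions" described here follow a cross-product pattern: a GO class is defined as a GO parent restricted by a relation to a ChEBI class. In OWL Manchester syntax the pattern looks roughly like the sketch below; the class labels are real GO/ChEBI terms, but the specific relation and axiom are an illustrative guess at the pattern, not an axiom quoted from go-plus.owl:

```
Class: 'glucose metabolic process'
    EquivalentTo:
        'metabolic process'
        and (has_participant some 'glucose')
```

    Definitions of this shape let a reasoner place new GO terms under the correct parents automatically whenever the referenced ChEBI hierarchy changes.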

  9. Computational framework to support integration of biomolecular and clinical data within a translational approach.

    PubMed

    Miyoshi, Newton Shydeo Brandão; Pinheiro, Daniel Guariz; Silva, Wilson Araújo; Felipe, Joaquim Cezar

    2013-06-06

    The use of the knowledge produced by sciences to promote human health is the main goal of translational medicine. To make this feasible, we need computational methods to handle the large amount of information that arises from bench to bedside, and to deal with its heterogeneity. A computational challenge that must be faced is to promote the integration of clinical, socio-demographic and biological data. In this effort, ontologies play an essential role as a powerful artifact for knowledge representation. Chado is a modular ontology-oriented database model that gained popularity due to its robustness and flexibility as a generic platform to store biological data; however, it lacks support for representing clinical and socio-demographic information. We have implemented an extension of Chado - the Clinical Module - to allow the representation of this kind of information. Our approach consists of a framework for data integration through the use of a common reference ontology. The design of this framework has four levels: the data level, to store the data; the semantic level, to integrate and standardize the data by the use of ontologies; the application level, to manage clinical databases, ontologies and the data integration process; and the web interface level, to allow interaction between the user and the system. The Clinical Module was built based on the Entity-Attribute-Value (EAV) model. We also proposed a methodology to migrate data from legacy clinical databases to the integrative framework. A Chado instance was initialized using a relational database management system. The Clinical Module was implemented and the framework was loaded using data from a factual clinical research database. Clinical and demographic data as well as biomaterial data were obtained from patients with tumors of the head and neck. 
We implemented the IPTrans tool, a complete environment for data migration, which comprises: the construction of a model to describe the legacy clinical data, based on an ontology; the Extraction, Transformation and Load (ETL) process to extract the data from the source clinical database and load it into the Clinical Module of Chado; and the development of a web tool and a Bridge Layer to adapt the web tool to Chado, as well as other applications. Open-source computational solutions currently available for translational science do not have a model to represent biomolecular information and are not integrated with existing bioinformatics tools. On the other hand, existing genomic data models do not represent clinical patient data. A framework was developed to support translational research by integrating biomolecular information coming from different "omics" technologies with patients' clinical and socio-demographic data. This framework should present some features: flexibility, compression and robustness. The experiments conducted on a use case demonstrated that the proposed system meets requirements of flexibility and robustness, leading to the desired integration. The Clinical Module can be accessed at http://dcm.ffclrp.usp.br/caib/pg=iptrans.
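    The Entity-Attribute-Value layout the Clinical Module is built on can be sketched with two tables: one listing attributes, one holding values keyed by entity and attribute. Everything below (table names, columns, sample data, use of SQLite) is illustrative, not the actual Chado Clinical Module schema:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE attribute (id INTEGER PRIMARY KEY, name TEXT UNIQUE);
CREATE TABLE clinical_value (
    patient_id   INTEGER,
    attribute_id INTEGER REFERENCES attribute(id),
    value        TEXT
);
""")
conn.executemany("INSERT INTO attribute(name) VALUES (?)",
                 [("tumor_site",), ("smoking_status",)])
conn.executemany(
    "INSERT INTO clinical_value VALUES "
    "(?, (SELECT id FROM attribute WHERE name = ?), ?)",
    [(1, "tumor_site", "larynx"), (1, "smoking_status", "former")])

# a new clinical attribute needs no schema change -- just one new row
rows = conn.execute("""
    SELECT a.name, v.value
    FROM clinical_value v JOIN attribute a ON a.id = v.attribute_id
    WHERE v.patient_id = 1 ORDER BY a.name
""").fetchall()
print(rows)  # [('smoking_status', 'former'), ('tumor_site', 'larynx')]
```

    The design choice EAV trades for this flexibility is query complexity: reconstructing one patient's "row" requires a join (or pivot) per attribute, which is why EAV suits sparse, evolving clinical variables better than fixed-schema genomic data.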

  10. Software-engineering challenges of building and deploying reusable problem solvers.

    PubMed

    O'Connor, Martin J; Nyulas, Csongor; Tu, Samson; Buckeridge, David L; Okhmatovskaia, Anna; Musen, Mark A

    2009-11-01

    Problem solving methods (PSMs) are software components that represent and encode reusable algorithms. They can be combined with representations of domain knowledge to produce intelligent application systems. A goal of research on PSMs is to provide principled methods and tools for composing and reusing algorithms in knowledge-based systems. The ultimate objective is to produce libraries of methods that can be easily adapted for use in these systems. Despite the intuitive appeal of PSMs as conceptual building blocks, in practice, these goals are largely unmet. There are no widely available tools for building applications using PSMs and no public libraries of PSMs available for reuse. This paper analyzes some of the reasons for the lack of widespread adoption of PSM techniques and illustrates our analysis by describing our experiences developing a complex, high-throughput software system based on PSM principles. We conclude that many fundamental principles in PSM research are useful for building knowledge-based systems. In particular, the task-method decomposition process, which provides a means for structuring knowledge-based tasks, is a powerful abstraction for building systems of analytic methods. However, despite the power of PSMs in the conceptual modeling of knowledge-based systems, software engineering challenges have been seriously underestimated. The complexity of integrating control knowledge modeled by developers using PSMs with the domain knowledge that they model using ontologies creates a barrier to widespread use of PSM-based systems. Nevertheless, the surge of recent interest in ontologies has led to the production of comprehensive domain ontologies and of robust ontology-authoring tools. These developments present new opportunities to leverage the PSM approach.

  11. Software-engineering challenges of building and deploying reusable problem solvers

    PubMed Central

    O’CONNOR, MARTIN J.; NYULAS, CSONGOR; TU, SAMSON; BUCKERIDGE, DAVID L.; OKHMATOVSKAIA, ANNA; MUSEN, MARK A.

    2012-01-01

    Problem solving methods (PSMs) are software components that represent and encode reusable algorithms. They can be combined with representations of domain knowledge to produce intelligent application systems. A goal of research on PSMs is to provide principled methods and tools for composing and reusing algorithms in knowledge-based systems. The ultimate objective is to produce libraries of methods that can be easily adapted for use in these systems. Despite the intuitive appeal of PSMs as conceptual building blocks, in practice, these goals are largely unmet. There are no widely available tools for building applications using PSMs and no public libraries of PSMs available for reuse. This paper analyzes some of the reasons for the lack of widespread adoption of PSM techniques and illustrates our analysis by describing our experiences developing a complex, high-throughput software system based on PSM principles. We conclude that many fundamental principles in PSM research are useful for building knowledge-based systems. In particular, the task–method decomposition process, which provides a means for structuring knowledge-based tasks, is a powerful abstraction for building systems of analytic methods. However, despite the power of PSMs in the conceptual modeling of knowledge-based systems, software engineering challenges have been seriously underestimated. The complexity of integrating control knowledge modeled by developers using PSMs with the domain knowledge that they model using ontologies creates a barrier to widespread use of PSM-based systems. Nevertheless, the surge of recent interest in ontologies has led to the production of comprehensive domain ontologies and of robust ontology-authoring tools. These developments present new opportunities to leverage the PSM approach. PMID:23565031

  12. The MGED Ontology: a resource for semantics-based description of microarray experiments.

    PubMed

    Whetzel, Patricia L; Parkinson, Helen; Causton, Helen C; Fan, Liju; Fostel, Jennifer; Fragoso, Gilberto; Game, Laurence; Heiskanen, Mervi; Morrison, Norman; Rocca-Serra, Philippe; Sansone, Susanna-Assunta; Taylor, Chris; White, Joseph; Stoeckert, Christian J

    2006-04-01

    The generation of large amounts of microarray data and the need to share these data bring challenges for both data management and annotation and highlights the need for standards. MIAME specifies the minimum information needed to describe a microarray experiment, and the Microarray Gene Expression Object Model (MAGE-OM) and resulting MAGE-ML provide a mechanism to standardize data representation for data exchange; however, a common terminology for data annotation is needed to support these standards. Here we describe the MGED Ontology (MO) developed by the Ontology Working Group of the Microarray Gene Expression Data (MGED) Society. The MO provides terms for annotating all aspects of a microarray experiment from the design of the experiment and array layout, through to the preparation of the biological sample and the protocols used to hybridize the RNA and analyze the data. The MO was developed to provide terms for annotating experiments in line with the MIAME guidelines, i.e. to provide the semantics to describe a microarray experiment according to the concepts specified in MIAME. The MO does not attempt to incorporate terms from existing ontologies, e.g. those that deal with anatomical parts or developmental stage terms, but provides a framework to reference terms in other ontologies and therefore facilitates the use of ontologies in microarray data annotation. The MGED Ontology version 1.2.0 is available as a file in both DAML and OWL formats at http://mged.sourceforge.net/ontologies/index.php. Release notes and annotation examples are provided. The MO is also provided via the NCICB's Enterprise Vocabulary System (http://nciterms.nci.nih.gov/NCIBrowser/Dictionary.do). Stoeckrt@pcbi.upenn.edu Supplementary data are available at Bioinformatics online.

  13. Galen-In-Use: using artificial intelligence terminology tools to improve the linguistic coherence of a national coding system for surgical procedures.

    PubMed

    Rodrigues, J M; Trombert-Paviot, B; Baud, R; Wagner, J; Meusnier-Carriot, F

    1998-01-01

    GALEN has developed a language independent common reference model based on a medically oriented ontology, together with practical tools and techniques for managing healthcare terminology, including natural language processing. GALEN-IN-USE is the current phase, which applies the modelling and the tools to the development or updating of coding systems for surgical procedures in different national coding centres co-operating within the European Federation of Coding Centres (EFCC) to create a language independent knowledge repository for multicultural Europe. We used an integrated set of artificial intelligence terminology tools named the CLAssification Manager workbench to process French professional medical language rubrics into intermediate dissections and then into the Grail reference ontology model representation. From this language independent concept model representation we generate controlled French natural language. The French national coding centre is then able to retrieve the initial professional rubrics with different categories of concepts, to compare the professional language proposed by expert clinicians with the French generated controlled vocabulary, and to finalize the linguistic labels of the coding system in relation to the meanings of the conceptual system structure.

  14. An Interlingual-based Approach to Reference Resolution

    DTIC Science & Technology

    2000-01-01

    They do not generally consider implicit references or... AMOUNT: C [currency] etc. where TIME, LOCATION, AGENT, THEME, human, organization, object, etc. are all ontological concepts. On some particular... AMOUNT: C [currency] etc. These instantiated representational objects are, in turn, referents in the discourse context when the next sentence is

  15. Accommodating Ontologies to Biological Reality—Top-Level Categories of Cumulative-Constitutively Organized Material Entities

    PubMed Central

    Vogt, Lars; Grobe, Peter; Quast, Björn; Bartolomaeus, Thomas

    2012-01-01

    Background The Basic Formal Ontology (BFO) is a top-level formal foundational ontology for the biomedical domain. It has been developed with the purpose of serving as an ontologically consistent template for top-level categories of application oriented and domain reference ontologies within the Open Biological and Biomedical Ontologies Foundry (OBO). BFO is important for enabling OBO ontologies to reliably communicate and manage data and metadata within and across biomedical databases. Following its intended single inheritance policy, BFO's three top-level categories of material entity (i.e. ‘object’, ‘fiat object part’, ‘object aggregate’) must be exhaustive and mutually disjoint. We have shown elsewhere that for accommodating all types of constitutively organized material entities, BFO must be extended by additional categories of material entity. Methodology/Principal Findings Unfortunately, most biomedical material entities are cumulative-constitutively organized. We show that even the extended BFO does not exhaustively cover cumulative-constitutively organized material entities. We provide examples from biology and everyday life that demonstrate the necessity for ‘portion of matter’ as another material building block. This implies the necessity for further extending BFO by ‘portion of matter’ as well as three additional categories that possess portions of matter as aggregate components. These extensions are necessary if the basic assumption that all parts that share the same granularity level exhaustively sum to the whole should also apply to cumulative-constitutively organized material entities. By suggesting a notion of granular representation we provide a way to maintain the single inheritance principle when dealing with cumulative-constitutively organized material entities. Conclusions/Significance We suggest extending BFO to incorporate additional categories of material entity and to rearrange its top-level material entity taxonomy. 
With these additions and the notion of granular representation, BFO would exhaustively cover all top-level types of material entities that application oriented ontologies may use as templates, while still maintaining the single inheritance principle. PMID:22253856

  16. Text Mining to inform construction of Earth and Environmental Science Ontologies

    NASA Astrophysics Data System (ADS)

    Schildhauer, M.; Adams, B.; Rebich Hespanha, S.

    2013-12-01

    There is a clear need for better semantic representation of Earth and environmental concepts, to facilitate more effective discovery and re-use of information resources relevant to scientists doing integrative research. In order to develop general-purpose Earth and environmental science ontologies, however, it is necessary to represent concepts and relationships that span usage across multiple disciplines and scientific specialties. Traditional knowledge modeling through ontologies utilizes expert knowledge but inevitably favors the particular perspectives of the ontology engineers, as well as the domain experts who interacted with them. This often leads to ontologies that lack robust coverage of synonymy, while also missing important relationships among concepts that can be extremely useful for working scientists to be aware of. In this presentation we will discuss methods we have developed that utilize statistical topic modeling on a large corpus of Earth and environmental science articles, to expand coverage and disclose relationships among concepts in the Earth sciences. For our work we collected a corpus of over 121,000 abstracts from many of the top Earth and environmental science journals. We performed latent Dirichlet allocation topic modeling on this corpus to discover a set of latent topics, which consist of terms that commonly co-occur in abstracts. We match terms in the topics to concept labels in existing ontologies to reveal gaps, and we examine which terms are commonly associated in natural language discourse, to identify relationships that are important to formally model in ontologies. Our text mining methodology uncovers significant gaps in the content of some popular existing ontologies, and we show how, through a workflow involving human interpretation of topic models, we can bootstrap ontologies to have much better coverage and richer semantics. 
Because we base our methods directly on what working scientists are communicating about their research, we obtain an alternative, bottom-up approach to populating and enriching ontologies that complements more traditional knowledge modeling endeavors.
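    The gap-finding step described above (matching topic-model terms against ontology labels to reveal missing coverage) reduces to a set difference once the topics are extracted. A minimal sketch, with invented topic terms and ontology labels standing in for the LDA output and real ontology content:

```python
# invented topic-model output: each topic is a list of co-occurring terms
topics = [
    ["erosion", "sediment", "watershed", "runoff"],
    ["drought", "precipitation", "soil", "moisture"],
]
# invented set of concept labels from an existing ontology
ontology_labels = {"erosion", "sediment", "precipitation", "soil"}

def coverage_gaps(topics, labels):
    # terms scientists commonly co-use in abstracts but the ontology lacks
    return sorted({term for topic in topics for term in topic} - labels)

print(coverage_gaps(topics, ontology_labels))
# ['drought', 'moisture', 'runoff', 'watershed']
```

    In practice the matching would also normalize case, plurals, and synonyms before comparison, and the surviving gaps would go to a human curator, as the workflow in the abstract describes.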

  17. A rule-based ontological framework for the classification of molecules

    PubMed Central

    2014-01-01

    Background A variety of key activities within life sciences research involves integrating and intelligently managing large amounts of biochemical information. Semantic technologies provide an intuitive way to organise and sift through these rapidly growing datasets via the design and maintenance of ontology-supported knowledge bases. To this end, OWL, a W3C standard declarative language, has been extensively used in the deployment of biochemical ontologies that can be conveniently organised using the classification facilities of OWL-based tools. One of the most established ontologies for the chemical domain is ChEBI, an open-access dictionary of molecular entities that supplies high quality annotation and taxonomical information for biologically relevant compounds. However, ChEBI is expanded manually, which hinders its potential to grow due to the limited availability of human resources. Results In this work, we describe a prototype that performs automatic classification of chemical compounds. The software we present implements a sound and complete reasoning procedure of a formalism that extends datalog and builds upon an off-the-shelf deductive database system. We capture a wide range of chemical classes that are not expressible with OWL-based formalisms, such as cyclic molecules, saturated molecules and alkanes. Furthermore, we describe a surface ‘less-logician-like’ syntax that allows application experts to create ontological descriptions of complex biochemical objects without prior knowledge of logic. In terms of performance, a noticeable improvement is observed in comparison with previous approaches. Our evaluation has discovered subsumptions that are missing from the manually curated ChEBI ontology as well as discrepancies with respect to existing subclass relations. We thus illustrate the potential of an ontology language suitable for the life sciences domain that exhibits a favourable balance between expressive power and practical feasibility. 
Conclusions Our proposed methodology can form the basis of an ontology-mediated application to assist biocurators in the production of complete and error-free taxonomies. Moreover, such a tool could contribute to a more rapid development of the ChEBI ontology and to the efforts of the ChEBI team to make annotated chemical datasets available to the public. From a modelling point of view, our approach could stimulate the adoption of a different and expressive reasoning paradigm based on rules for which state-of-the-art and highly optimised reasoners are available; it could thus pave the way for the representation of a broader spectrum of life sciences and biomedical knowledge. PMID:24735706

  18. A rule-based ontological framework for the classification of molecules.

    PubMed

    Magka, Despoina; Krötzsch, Markus; Horrocks, Ian

    2014-01-01

    A variety of key activities within life sciences research involves integrating and intelligently managing large amounts of biochemical information. Semantic technologies provide an intuitive way to organise and sift through these rapidly growing datasets via the design and maintenance of ontology-supported knowledge bases. To this end, OWL, a W3C standard declarative language, has been extensively used in the deployment of biochemical ontologies that can be conveniently organised using the classification facilities of OWL-based tools. One of the most established ontologies for the chemical domain is ChEBI, an open-access dictionary of molecular entities that supplies high quality annotation and taxonomical information for biologically relevant compounds. However, ChEBI is expanded manually, which hinders its potential to grow due to the limited availability of human resources. In this work, we describe a prototype that performs automatic classification of chemical compounds. The software we present implements a sound and complete reasoning procedure of a formalism that extends datalog and builds upon an off-the-shelf deductive database system. We capture a wide range of chemical classes that are not expressible with OWL-based formalisms, such as cyclic molecules, saturated molecules and alkanes. Furthermore, we describe a surface 'less-logician-like' syntax that allows application experts to create ontological descriptions of complex biochemical objects without prior knowledge of logic. In terms of performance, a noticeable improvement is observed in comparison with previous approaches. Our evaluation has discovered subsumptions that are missing from the manually curated ChEBI ontology as well as discrepancies with respect to existing subclass relations. We thus illustrate the potential of an ontology language suitable for the life sciences domain that exhibits a favourable balance between expressive power and practical feasibility. 
Our proposed methodology can form the basis of an ontology-mediated application to assist biocurators in the production of complete and error-free taxonomies. Moreover, such a tool could contribute to a more rapid development of the ChEBI ontology and to the efforts of the ChEBI team to make annotated chemical datasets available to the public. From a modelling point of view, our approach could stimulate the adoption of a different and expressive reasoning paradigm based on rules for which state-of-the-art and highly optimised reasoners are available; it could thus pave the way for the representation of a broader spectrum of life sciences and biomedical knowledge.
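    Cyclicity, one of the classes these two records name as inexpressible in OWL, is straightforward to state as a rule over a molecule's bond graph: a molecule is cyclic if its atom-and-bond graph contains a ring. A minimal Python sketch of that check, using toy adjacency lists rather than the authors' datalog formalism:

```python
def is_cyclic(bonds, atoms):
    """Detect a ring in an undirected bond graph via DFS: reaching an
    already-visited atom through a non-parent bond means a cycle."""
    seen = set()
    for start in atoms:
        if start in seen:
            continue
        stack = [(start, None)]
        while stack:
            atom, prev = stack.pop()
            if atom in seen:
                return True  # second path to the same atom: a ring
            seen.add(atom)
            for nbr in bonds.get(atom, []):
                if nbr != prev:
                    stack.append((nbr, atom))
    return False

# toy carbon skeletons: benzene's 6-ring vs. ethanol's acyclic chain
benzene = {i: [(i + 1) % 6, (i - 1) % 6] for i in range(6)}
ethanol_atoms = ["C1", "C2", "O"]
ethanol = {"C1": ["C2"], "C2": ["C1", "O"], "O": ["C2"]}
print(is_cyclic(benzene, range(6)), is_cyclic(ethanol, ethanol_atoms))
# True False
```

    OWL's tree-model property is exactly what blocks this kind of definition: an OWL class expression cannot force a cycle among distinct individuals, whereas a recursive rule over the bond relation states it directly.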

  19. The Contribution of Non-Representational Theories in Education: Some Affective, Ethical and Political Implications

    ERIC Educational Resources Information Center

    Zembylas, Michalinos

    2017-01-01

    This paper follows recent debates around theorizations of "affect" and its distinction from "emotion" in the context of non-representational theories (NRT) to exemplify how the ontologization of affects creates important openings of ethical and political potential in educators' efforts to make productive interventions in…

  20. Ontology-driven education: Teaching anatomy with intelligent 3D games on the web

    NASA Astrophysics Data System (ADS)

    Nilsen, Trond

    Human anatomy is a challenging and intimidating subject whose understanding is essential to good medical practice, taught primarily using a combination of lectures and the dissection of human cadavers. Lectures are cheap and scalable, but do a poor job of teaching spatial understanding, whereas dissection lets students experience the body's interior first-hand, but is expensive, cannot be repeated, and is often imperfect. Educational games and online learning activities have the potential to supplement these teaching methods in a cheap and relatively effective way, but they are difficult for educators to customize for particular curricula and lack the tutoring support that human instructors provide. I present an approach to the creation of learning activities for anatomy called ontology-driven education, in which the Foundational Model of Anatomy, an ontological representation of knowledge about anatomy, is leveraged to generate educational content, model student knowledge, and support learning activities and games in a configurable web-based educational framework for anatomy.

  1. Ontological Foundations for Tracking Data Quality through the Internet of Things.

    PubMed

    Ceusters, Werner; Bona, Jonathan

    2016-01-01

    Amongst the positive outcomes expected from the Internet of Things for Health are longitudinal patient records that are more complete and less erroneous because manual data entry is complemented with automatic data feeds from sensors. Unfortunately, devices are fallible too. Quality control procedures such as inspection, testing and maintenance can prevent devices from producing errors. The additional approach envisioned here is to establish constant data quality monitoring through analytics procedures on patient data that exploit the ontological principles ascribed not only to patients and their bodily features, but also to the observation and measurement processes in which devices and patients participate, including the, perhaps erroneous, representations that are generated. Using existing realism-based ontologies, we propose a set of categories that analytics procedures should be able to reason with, and highlight the importance of uniquely identifying not only patients, caregivers and devices, but everything involved in those measurements. This approach supports the thesis that the majority of what tends to be viewed as 'metadata' are actually data about first-order entities.

  2. Ontology-Based Approach to Social Data Sentiment Analysis: Detection of Adolescent Depression Signals.

    PubMed

    Jung, Hyesil; Park, Hyeoun-Ae; Song, Tae-Min

    2017-07-24

    Social networking services (SNSs) contain abundant information about the feelings, thoughts, interests, and patterns of behavior of adolescents that can be obtained by analyzing SNS postings. An ontology that expresses the shared concepts and their relationships in a specific field could be used as a semantic framework for social media data analytics. The aim of this study was to refine an adolescent depression ontology and terminology as a framework for analyzing social media data and to evaluate description logics between classes and the applicability of this ontology to sentiment analysis. The domain and scope of the ontology were defined using competency questions. The concepts constituting the ontology and terminology were collected from clinical practice guidelines, the literature, and social media postings on adolescent depression. Class concepts, their hierarchy, and the relationships among class concepts were defined. An internal structure of the ontology was designed using the entity-attribute-value (EAV) triplet data model, and superclasses of the ontology were aligned with the upper ontology. Description logics between classes were evaluated by mapping concepts extracted from the answers to frequently asked questions (FAQs) onto the ontology concepts derived from description logic queries. The applicability of the ontology was validated by examining the representability of 1358 sentiment phrases using the ontology EAV model and conducting sentiment analyses of social media data using ontology class concepts. We developed an adolescent depression ontology that comprised 443 classes and 60 relationships among the classes; the terminology comprised 1682 synonyms of the 443 classes. In the description logics test, no error in relationships between classes was found, and about 89% (55/62) of the concepts cited in the answers to FAQs mapped onto the ontology class. 
Regarding applicability, the EAV triplet models of the ontology class represented about 91.4% of the sentiment phrases included in the sentiment dictionary. In the sentiment analyses, "academic stresses" and "suicide" contributed negatively to the sentiment of adolescent depression. The ontology and terminology developed in this study provide a semantic foundation for analyzing social media data on adolescent depression. To be useful in social media data analysis, the ontology, especially the terminology, needs to be updated constantly to reflect rapidly changing terms used by adolescents in social media postings. In addition, more attributes and value sets reflecting depression-related sentiments should be added to the ontology. ©Hyesil Jung, Hyeoun-Ae Park, Tae-Min Song. Originally published in the Journal of Medical Internet Research (http://www.jmir.org), 24.07.2017.
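
The entity-attribute-value (EAV) triplet model described in this record can be illustrated with a small sketch. The phrases, class names, attributes, and value sets below are invented for illustration and are not taken from the actual adolescent depression ontology or its terminology:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass(frozen=True)
class EAVTriplet:
    entity: str     # ontology class, e.g. a stressor or symptom
    attribute: str  # property of the entity
    value: str      # value drawn from a controlled value set

# Toy terminology: maps adolescent-language phrases onto ontology triplets.
LEXICON = {
    "stressed about exams": EAVTriplet("AcademicStress", "hasSentiment", "Negative"),
    "want to disappear": EAVTriplet("SuicidalIdeation", "hasSeverity", "High"),
}

def represent_phrase(phrase: str) -> Optional[EAVTriplet]:
    """Map a sentiment phrase onto its EAV triplet via a terminology lookup."""
    return LEXICON.get(phrase.lower().strip())

print(represent_phrase("Stressed about exams"))
```

A real deployment would back the lookup with the full 1682-synonym terminology rather than a hand-written dictionary, and would need the constant updating the authors call for as adolescent slang shifts.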

  3. Ontology-Based Approach to Social Data Sentiment Analysis: Detection of Adolescent Depression Signals

    PubMed Central

    Jung, Hyesil; Song, Tae-Min

    2017-01-01

    Background Social networking services (SNSs) contain abundant information about the feelings, thoughts, interests, and patterns of behavior of adolescents that can be obtained by analyzing SNS postings. An ontology that expresses the shared concepts and their relationships in a specific field could be used as a semantic framework for social media data analytics. Objective The aim of this study was to refine an adolescent depression ontology and terminology as a framework for analyzing social media data and to evaluate description logics between classes and the applicability of this ontology to sentiment analysis. Methods The domain and scope of the ontology were defined using competency questions. The concepts constituting the ontology and terminology were collected from clinical practice guidelines, the literature, and social media postings on adolescent depression. Class concepts, their hierarchy, and the relationships among class concepts were defined. An internal structure of the ontology was designed using the entity-attribute-value (EAV) triplet data model, and superclasses of the ontology were aligned with the upper ontology. Description logics between classes were evaluated by mapping concepts extracted from the answers to frequently asked questions (FAQs) onto the ontology concepts derived from description logic queries. The applicability of the ontology was validated by examining the representability of 1358 sentiment phrases using the ontology EAV model and conducting sentiment analyses of social media data using ontology class concepts. Results We developed an adolescent depression ontology that comprised 443 classes and 60 relationships among the classes; the terminology comprised 1682 synonyms of the 443 classes. In the description logics test, no error in relationships between classes was found, and about 89% (55/62) of the concepts cited in the answers to FAQs mapped onto the ontology class. 
Regarding applicability, the EAV triplet models of the ontology class represented about 91.4% of the sentiment phrases included in the sentiment dictionary. In the sentiment analyses, “academic stresses” and “suicide” contributed negatively to the sentiment of adolescent depression. Conclusions The ontology and terminology developed in this study provide a semantic foundation for analyzing social media data on adolescent depression. To be useful in social media data analysis, the ontology, especially the terminology, needs to be updated constantly to reflect rapidly changing terms used by adolescents in social media postings. In addition, more attributes and value sets reflecting depression-related sentiments should be added to the ontology. PMID:28739560

  4. A Prototype Symbolic Model of Canonical Functional Neuroanatomy of the Motor System

    PubMed Central

    Rubin, Daniel L.; Halle, Michael; Musen, Mark; Kikinis, Ron

    2008-01-01

    Recent advances in bioinformatics have opened entire new avenues for organizing, integrating and retrieving neuroscientific data, in a digital, machine-processable format, which can be at the same time understood by humans, using ontological, symbolic data representations. Declarative information stored in ontological format can be perused and maintained by domain experts, interpreted by machines, and serve as basis for a multitude of decision-support, computerized simulation, data mining, and teaching applications. We have developed a prototype symbolic model of canonical neuroanatomy of the motor system. Our symbolic model is intended to support symbolic lookup, logical inference and mathematical modeling by integrating descriptive, qualitative and quantitative functional neuroanatomical knowledge. Furthermore, we show how our approach can be extended to modeling impaired brain connectivity in disease states, such as common movement disorders. In developing our ontology, we adopted a disciplined modeling approach, relying on a set of declared principles, a high-level schema, Aristotelian definitions, and a frame-based authoring system. These features, along with the use of the Unified Medical Language System (UMLS) vocabulary, enable the alignment of our functional ontology with an existing comprehensive ontology of human anatomy, and thus allow for combining the structural and functional views of neuroanatomy for clinical decision support and neuroanatomy teaching applications. Although the scope of our current prototype ontology is limited to a particular functional system in the brain, it may be possible to adapt this approach for modeling other brain functional systems as well. PMID:18164666

  5. Modeling Functional Neuroanatomy for an Anatomy Information System

    PubMed Central

    Niggemann, Jörg M.; Gebert, Andreas; Schulz, Stefan

    2008-01-01

    Objective Existing neuroanatomical ontologies, databases and information systems, such as the Foundational Model of Anatomy (FMA), represent outgoing connections from brain structures, but cannot represent the “internal wiring” of structures and as such, cannot distinguish between different independent connections from the same structure. Thus, a fundamental aspect of Neuroanatomy, the functional pathways and functional systems of the brain such as the pupillary light reflex system, is not adequately represented. This article identifies underlying anatomical objects which are the source of independent connections (collections of neurons) and uses these as basic building blocks to construct a model of functional neuroanatomy and its functional pathways. Design The basic representational elements of the model are unnamed groups of neurons or groups of neuron segments. These groups, their relations to each other, and the relations to the objects of macroscopic anatomy are defined. The resulting model can be incorporated into the FMA. Measurements The capabilities of the presented model are compared to the FMA and the Brain Architecture Management System (BAMS). Results Internal wiring as well as functional pathways can correctly be represented and tracked. Conclusion This model bridges the gap between representations of single neurons and their parts on the one hand and representations of spatial brain structures and areas on the other hand. It is capable of drawing correct inferences on pathways in a nervous system. The object and relation definitions are related to the Open Biomedical Ontology effort and its relation ontology, so that this model can be further developed into an ontology of neuronal functional systems. PMID:18579841

  6. Concept Systems and Ontologies: Recommendations for Basic Terminology

    NASA Astrophysics Data System (ADS)

    Klein, Gunnar O.; Smith, Barry

    This essay concerns the problems surrounding the use of the term "concept" in current ontology and terminology research. It is based on the constructive dialogue between realist ontology on the one hand and the world of formal standardization of health informatics on the other, but its conclusions are not restricted to the domain of medicine. The term "concept" is one of the most misused, even in literature and technical standards that attempt to bring clarity. In this paper we propose one specific and consistent meaning for the term "concept" in the context of producing defined professional terminologies, and recommend its adoption as the agreed meaning of the term in future terminological research, specifically in the development of formal terminologies to be used in computer systems. We also discuss and propose new definitions of a set of cognate terms. We describe the relations governing the realm of concepts, and compare these to the richer and more complex set of relations obtaining between entities in the real world. On this basis we also summarize an associated terminology for ontologies as representations of the real world and a partial mapping between the world of concepts and the world of reality.

  7. Relating UMLS semantic types and task-based ontology to computer-interpretable clinical practice guidelines.

    PubMed

    Kumar, Anand; Ciccarese, Paolo; Quaglini, Silvana; Stefanelli, Mario; Caffi, Ezio; Boiocchi, Lorenzo

    2003-01-01

    Medical knowledge in clinical practice guideline (GL) texts is the source of task-based computer-interpretable clinical guideline models (CIGMs). We used Unified Medical Language System (UMLS) semantic types (STs) to determine the percentage of GL text belonging to each ST, and used the UMLS semantic network together with the CIGM-specific ontology to derive the semantic meaning behind the GL text. To this end, we took nine GL texts from the National Guideline Clearinghouse (NGC) and marked up the text dealing with each ST, restricting the STs considered to those required by a task-based CIGM. We used DARPA Agent Markup Language and Ontology Inference Layer (DAML + OIL) to create the UMLS and CIGM-specific semantic network. For the latter, as a bench test, we used the 1999 WHO-International Society of Hypertension Guidelines for the Management of Hypertension, taking into consideration the UMLS STs closest to the clinical tasks. The percentage of GL text dealing with the ST "Health Care Activity" (HCA) and its subtypes "Laboratory Procedure", "Diagnostic Procedure" and "Therapeutic or Preventive Procedure" was measured, and the parts of text belonging to other STs or comments were separated. Terms belonging to other STs were mapped to the STs under "HCA" for representation in DAML + OIL. We found that the three STs under "HCA" were the predominant STs in the GL text; where terms of related STs existed, they were mapped into one of the three STs. The DAML + OIL representation was able to describe the hierarchy in task-based CIGMs. To conclude, the three STs can be used to represent the semantic network of task-based CIGMs, and we identified some mapping operators that could be used for mapping other STs into these.

  8. The Ontology of Vaccine Adverse Events (OVAE) and its usage in representing and analyzing adverse events associated with US-licensed human vaccines.

    PubMed

    Marcos, Erica; Zhao, Bin; He, Yongqun

    2013-11-26

    Licensed human vaccines can induce various adverse events (AE) in vaccinated patients. Due to the involvement of the whole immune system and complex immunological reactions after vaccination, it is difficult to identify the relations among vaccines, adverse events, and human populations in different age groups. Many known vaccine adverse events (VAEs) have been recorded in the package inserts of US-licensed commercial vaccine products. To better represent and analyze VAEs, we developed the Ontology of Vaccine Adverse Events (OVAE) as an extension of the Ontology of Adverse Events (OAE) and the Vaccine Ontology (VO). Like OAE and VO, OVAE is aligned with the Basic Formal Ontology (BFO). The commercial vaccines and adverse events in OVAE are imported from VO and OAE, respectively. A new population term, 'human vaccinee population', is generated and used to define VAE occurrence. An OVAE design pattern is developed to link vaccine, adverse event, vaccinee population, age range, and VAE occurrence. OVAE has been used to represent and classify the adverse events recorded in package insert documents of commercial vaccines licensed by the USA Food and Drug Administration (FDA). OVAE currently includes over 1,300 terms, including 87 distinct types of VAEs associated with 63 human vaccines licensed in the USA. For each vaccine, occurrence rates for every VAE in different age groups have been logically represented in OVAE. SPARQL scripts were developed to query and analyze the OVAE knowledge base data. To demonstrate the usage of OVAE, the top 10 vaccines associated with the highest numbers of VAEs and the top 10 VAEs most frequently observed among vaccines were identified and analyzed. Asserted and inferred ontology hierarchies classify VAEs at different levels of AE groups. Different VAE occurrences in different age groups were also analyzed.
The ontology-based data representation and integration using the FDA-approved information from the vaccine package insert documents enables the identification of adverse events from vaccination in relation to predefined parts of the population (age groups) and certain groups of vaccines. The resulting ontology-based VAE knowledge base classifies vaccine-specific VAEs and supports better VAE understanding and future rational AE prevention and treatment.

  9. Data structures and apparatuses for representing knowledge

    DOEpatents

    Hohimer, Ryan E; Thomson, Judi R; Harvey, William J; Paulson, Patrick R; Whiting, Mark A; Tratz, Stephen C; Chappell, Alan R; Butner, Robert S

    2014-02-18

    Data structures and apparatuses to represent knowledge are disclosed. The processes can comprise labeling elements in a knowledge signature according to concepts in an ontology and populating the elements with confidence values. The data structures can comprise knowledge signatures stored on computer-readable media. The knowledge signatures comprise a matrix structure having elements labeled according to concepts in an ontology, wherein the value of the element represents a confidence that the concept is present in an information space. The apparatus can comprise a knowledge representation unit having at least one ontology stored on a computer-readable medium, at least one data-receiving device, and a processor configured to generate knowledge signatures by comparing datasets obtained by the data-receiving devices to the ontologies.
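
The knowledge-signature structure these patents describe can be sketched in a few lines: elements labeled by ontology concepts, each holding a confidence that the concept is present in an information space. The concept names and the naive keyword-frequency scoring below stand in for a real ontology and a real detector, and are purely illustrative:

```python
CONCEPTS = ["Vehicle", "Building", "Person"]  # hypothetical ontology concepts

def knowledge_signature(text: str, concepts=CONCEPTS) -> dict:
    """Compare a dataset (here, raw text) against each ontology concept and
    record a confidence in [0, 1] that the concept is present."""
    tokens = text.lower().split()
    sig = {}
    for concept in concepts:
        # Naive stand-in for a real detector: keyword frequency as confidence.
        hits = sum(1 for t in tokens if concept.lower() in t)
        sig[concept] = min(1.0, hits / max(len(tokens), 1) * 10)
    return sig

sig = knowledge_signature("a person entered the building near another person")
print(sig)
```

In the patented apparatus the signature is a matrix rather than a flat mapping, and confidences come from comparing device-acquired datasets against full ontologies; the sketch only shows the labeling-plus-confidence idea.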

  10. Processes, data structures, and apparatuses for representing knowledge

    DOEpatents

    Hohimer, Ryan E [West Richland, WA]; Thomson, Judi R [Guelph, CA]; Harvey, William J [Richland, WA]; Paulson, Patrick R [Pasco, WA]; Whiting, Mark A [Richland, WA]; Tratz, Stephen C [Richland, WA]; Chappell, Alan R [Seattle, WA]; Butner, R Scott [Richland, WA]

    2011-09-20

    Processes, data structures, and apparatuses to represent knowledge are disclosed. The processes can comprise labeling elements in a knowledge signature according to concepts in an ontology and populating the elements with confidence values. The data structures can comprise knowledge signatures stored on computer-readable media. The knowledge signatures comprise a matrix structure having elements labeled according to concepts in an ontology, wherein the value of the element represents a confidence that the concept is present in an information space. The apparatus can comprise a knowledge representation unit having at least one ontology stored on a computer-readable medium, at least one data-receiving device, and a processor configured to generate knowledge signatures by comparing datasets obtained by the data-receiving devices to the ontologies.

  11. Translating standards into practice - one Semantic Web API for Gene Expression.

    PubMed

    Deus, Helena F; Prud'hommeaux, Eric; Miller, Michael; Zhao, Jun; Malone, James; Adamusiak, Tomasz; McCusker, Jim; Das, Sudeshna; Rocca Serra, Philippe; Fox, Ronan; Marshall, M Scott

    2012-08-01

    Sharing and describing experimental results unambiguously with sufficient detail to enable replication of results is a fundamental tenet of scientific research. In today's cluttered world of "-omics" sciences, data standards and standardized use of terminologies and ontologies for biomedical informatics play an important role in reporting high-throughput experiment results in formats that can be interpreted by both researchers and analytical tools. Increasing adoption of Semantic Web and Linked Data technologies for the integration of heterogeneous and distributed health care and life sciences (HCLS) datasets has made the reuse of standards even more pressing; dynamic semantic query federation can be used for integrative bioinformatics when ontologies and identifiers are reused across data instances. We present here a methodology to integrate the results and experimental context of three different representations of microarray-based transcriptomic experiments: the Gene Expression Atlas, the W3C BioRDF task force approach to reporting Provenance of Microarray Experiments, and the HSCI blood genomics project. Our approach does not attempt to improve the expressivity of existing standards for genomics but, instead, to enable integration of existing datasets published from microarray-based transcriptomic experiments. SPARQL Construct is used to create a posteriori mappings of concepts and properties and linking rules that match entities based on query constraints. We discuss how our integrative approach can encourage reuse of the Experimental Factor Ontology (EFO) and the Ontology for Biomedical Investigations (OBI) for the reporting of experimental context and results of gene expression studies. Copyright © 2012 Elsevier Inc. All rights reserved.

  12. XML, Ontologies, and Their Clinical Applications.

    PubMed

    Yu, Chunjiang; Shen, Bairong

    2016-01-01

    The development of information technology has resulted in its penetration into every area of clinical research. Various clinical systems have been developed, which produce increasing volumes of clinical data. However, saving, exchanging, querying, and exploiting these data are challenging issues. The development of Extensible Markup Language (XML) has allowed the generation of flexible information formats to facilitate the electronic sharing of structured data via networks, and it has been used widely for clinical data processing. In particular, XML is very useful in the fields of data standardization, data exchange, and data integration. Moreover, ontologies have been attracting increased attention in various clinical fields in recent years. An ontology is the basic level of a knowledge representation scheme, and various ontology repositories have been developed, such as Gene Ontology and BioPortal. The creation of these standardized repositories greatly facilitates clinical research in related fields. In this chapter, we discuss the basic concepts of XML and ontologies, as well as their clinical applications.

  13. A Hybrid Approach to Finding Relevant Social Media Content for Complex Domain Specific Information Needs.

    PubMed

    Cameron, Delroy; Sheth, Amit P; Jaykumar, Nishita; Thirunarayan, Krishnaprasad; Anand, Gaurish; Smith, Gary A

    2014-12-01

    While contemporary semantic search systems promise to improve on classical keyword-based search, they are not always adequate for complex domain specific information needs. The domain of prescription drug abuse, for example, requires knowledge of both ontological concepts and "intelligible constructs" not typically modeled in ontologies. These intelligible constructs convey essential information that includes notions of intensity, frequency, interval, dosage and sentiments, which could be important to the holistic needs of the information seeker. In this paper, we present a hybrid approach to domain specific information retrieval that integrates ontology-driven query interpretation with synonym-based query expansion and domain specific rules, to facilitate search in social media on prescription drug abuse. Our framework is based on a context-free grammar (CFG) that defines the query language of constructs interpretable by the search system. The grammar provides two levels of semantic interpretation: 1) a top-level CFG that facilitates retrieval of diverse textual patterns, which belong to broad templates and 2) a low-level CFG that enables interpretation of specific expressions belonging to such textual patterns. These low-level expressions occur as concepts from four different categories of data: 1) ontological concepts, 2) concepts in lexicons (such as emotions and sentiments), 3) concepts in lexicons with only partial ontology representation, called lexico-ontology concepts (such as side effects and routes of administration (ROA)), and 4) domain specific expressions (such as date, time, interval, frequency and dosage) derived solely through rules. Our approach is embodied in a novel Semantic Web platform called PREDOSE, which provides search support for complex domain specific information needs in prescription drug abuse epidemiology. 
When applied to a corpus of over 1 million drug abuse-related web forum posts, our search framework proved effective in retrieving relevant documents when compared with three existing search systems.
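
The two-level interpretation described in this record can be roughly illustrated (this is not the actual PREDOSE grammar): a top-level template recognizes a broad pattern in a forum post, while low-level rules interpret specific expressions such as dosage and route of administration (ROA). All patterns and lexicon entries here are invented:

```python
import re

# Low-level rule: domain specific expressions (dosage) derived through rules.
DOSAGE = re.compile(r"(?P<amount>\d+)\s*(?P<unit>mg|g|ml)", re.I)

# Low-level lexicon: routes of administration (lexico-ontology concepts).
ROA_LEXICON = {"snorted", "injected", "swallowed"}

def interpret(post: str):
    """Top-level template: a post mentioning an ROA and a dosage.
    Returns the interpreted low-level expressions, or None if no match."""
    tokens = post.lower().split()
    roa = next((t for t in tokens if t in ROA_LEXICON), None)
    m = DOSAGE.search(post)
    if roa and m:
        return {"roa": roa, "dosage": f"{m['amount']} {m['unit'].lower()}"}
    return None

print(interpret("snorted about 30mg before class"))
```

A production grammar would also map tokens onto ontological concepts and sentiment lexicons, and would cover many templates rather than this single one.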

  14. A Hybrid Approach to Finding Relevant Social Media Content for Complex Domain Specific Information Needs

    PubMed Central

    Cameron, Delroy; Sheth, Amit P.; Jaykumar, Nishita; Thirunarayan, Krishnaprasad; Anand, Gaurish; Smith, Gary A.

    2015-01-01

    While contemporary semantic search systems promise to improve on classical keyword-based search, they are not always adequate for complex domain specific information needs. The domain of prescription drug abuse, for example, requires knowledge of both ontological concepts and “intelligible constructs” not typically modeled in ontologies. These intelligible constructs convey essential information that includes notions of intensity, frequency, interval, dosage and sentiments, which could be important to the holistic needs of the information seeker. In this paper, we present a hybrid approach to domain specific information retrieval that integrates ontology-driven query interpretation with synonym-based query expansion and domain specific rules, to facilitate search in social media on prescription drug abuse. Our framework is based on a context-free grammar (CFG) that defines the query language of constructs interpretable by the search system. The grammar provides two levels of semantic interpretation: 1) a top-level CFG that facilitates retrieval of diverse textual patterns, which belong to broad templates and 2) a low-level CFG that enables interpretation of specific expressions belonging to such textual patterns. These low-level expressions occur as concepts from four different categories of data: 1) ontological concepts, 2) concepts in lexicons (such as emotions and sentiments), 3) concepts in lexicons with only partial ontology representation, called lexico-ontology concepts (such as side effects and routes of administration (ROA)), and 4) domain specific expressions (such as date, time, interval, frequency and dosage) derived solely through rules. Our approach is embodied in a novel Semantic Web platform called PREDOSE, which provides search support for complex domain specific information needs in prescription drug abuse epidemiology. 
When applied to a corpus of over 1 million drug abuse-related web forum posts, our search framework proved effective in retrieving relevant documents when compared with three existing search systems. PMID:25814917

  15. The Orthology Ontology: development and applications.

    PubMed

    Fernández-Breis, Jesualdo Tomás; Chiba, Hirokazu; Legaz-García, María Del Carmen; Uchiyama, Ikuo

    2016-06-04

    Computational comparative analysis of multiple genomes provides valuable opportunities to biomedical research. In particular, orthology analysis can play a central role in comparative genomics; it guides establishing evolutionary relations among genes of organisms and allows functional inference of gene products. However, the wide variations in current orthology databases necessitate the research toward the shareability of the content that is generated by different tools and stored in different structures. Exchanging the content with other research communities requires making the meaning of the content explicit. The need for a common ontology has led to the creation of the Orthology Ontology (ORTH) following the best practices in ontology construction. Here, we describe our model and major entities of the ontology that is implemented in the Web Ontology Language (OWL), followed by the assessment of the quality of the ontology and the application of the ORTH to existing orthology datasets. This shareable ontology enables the possibility to develop Linked Orthology Datasets and a meta-predictor of orthology through standardization for the representation of orthology databases. The ORTH is freely available in OWL format to all users at http://purl.org/net/orth . The Orthology Ontology can serve as a framework for the semantic standardization of orthology content and it will contribute to a better exploitation of orthology resources in biomedical research. The results demonstrate the feasibility of developing shareable datasets using this ontology. Further applications will maximize the usefulness of this ontology.

  16. GOGrapher: A Python library for GO graph representation and analysis.

    PubMed

    Muller, Brian; Richards, Adam J; Jin, Bo; Lu, Xinghua

    2009-07-07

    The Gene Ontology is the most commonly used controlled vocabulary for annotating proteins. The concepts in the ontology are organized as a directed acyclic graph, in which a node corresponds to a biological concept and a directed edge denotes the parent-child semantic relationship between a pair of terms. A large number of protein annotations further create links between proteins and their functional annotations, reflecting the contemporary knowledge about proteins and their functional relationships. This leads to a complex graph consisting of interleaved biological concepts and their associated proteins. What is needed is a simple, open source library that provides tools to not only create and view the Gene Ontology graph, but to analyze and manipulate it as well. Here we describe the development and use of GOGrapher, a Python library that can be used for the creation, analysis, manipulation, and visualization of Gene Ontology related graphs. An object-oriented approach was adopted to organize the hierarchy of the graph types and associated classes. An Application Programming Interface is provided through which different types of graphs can be programmatically created, manipulated, and visualized. GOGrapher has been successfully utilized in multiple research projects, e.g., a graph-based multi-label text classifier for protein annotation. The GOGrapher project provides a reusable programming library designed for the manipulation and analysis of Gene Ontology graphs. The library is freely available for the scientific community to use and improve.
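
GOGrapher's own API is not reproduced in this abstract, so the sketch below illustrates the underlying structure with plain Python: GO terms form a directed acyclic graph, and an annotation to a term implicitly annotates all of that term's ancestors (the true-path rule). The edges are a tiny hand-picked is_a fragment of the real ontology:

```python
# child -> parents ("is_a" edges), a toy fragment of the GO DAG
PARENTS = {
    "GO:0006468": ["GO:0016310"],  # protein phosphorylation is_a phosphorylation
    "GO:0016310": ["GO:0008152"],  # phosphorylation is_a metabolic process
    "GO:0008152": [],              # metabolic process (near the root)
}

def ancestors(term: str) -> set:
    """All terms reachable by following parent edges (excluding the term itself)."""
    seen = set()
    stack = list(PARENTS.get(term, []))
    while stack:
        t = stack.pop()
        if t not in seen:
            seen.add(t)
            stack.extend(PARENTS.get(t, []))
    return seen

print(ancestors("GO:0006468"))
```

A library like GOGrapher layers graph types, protein-annotation nodes, and visualization on top of exactly this kind of traversal.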

  17. Time-related patient data retrieval for the case studies from the pharmacogenomics research network

    PubMed Central

    Zhu, Qian; Tao, Cui; Ding, Ying; Chute, Christopher G.

    2012-01-01

    There are many question-based data elements in studies from the pharmacogenomics research network (PGRN), and many of them contain temporal information. Representing these elements semantically so that they are machine-processable is challenging for two reasons: (1) the designers of these studies are usually unfamiliar with computer modeling and query languages, so the original data elements are usually represented in spreadsheets in human language; and (2) the time aspects in these data elements can be too complex to represent faithfully in a machine-understandable way. In this paper, we introduce our efforts to represent these data elements using semantic web technologies. We have developed an ontology, CNTRO, for representing clinical events and their temporal relations in the web ontology language (OWL). Here we use CNTRO to represent the time aspects of the data elements. We have evaluated 720 time-related data elements from PGRN studies. We adapted and extended the knowledge representation requirements for EliXR-TIME to categorize our data elements. A CNTRO-based SPARQL query builder has been developed to let users customize their own SPARQL queries for each knowledge representation requirement. The SPARQL query builder has been evaluated with a simulated EHR triple store to ensure its functionalities. PMID:23076712

  18. Time-related patient data retrieval for the case studies from the pharmacogenomics research network.

    PubMed

    Zhu, Qian; Tao, Cui; Ding, Ying; Chute, Christopher G

    2012-11-01

    There are many question-based data elements in studies from the pharmacogenomics research network (PGRN), and many of them contain temporal information. Representing these elements semantically so that they are machine-processable is challenging for two reasons: (1) the designers of these studies are usually unfamiliar with computer modeling and query languages, so the original data elements are usually represented in spreadsheets in human language; and (2) the time aspects in these data elements can be too complex to represent faithfully in a machine-understandable way. In this paper, we introduce our efforts to represent these data elements using semantic web technologies. We have developed an ontology, CNTRO, for representing clinical events and their temporal relations in the web ontology language (OWL). Here we use CNTRO to represent the time aspects of the data elements. We have evaluated 720 time-related data elements from PGRN studies. We adapted and extended the knowledge representation requirements for EliXR-TIME to categorize our data elements. A CNTRO-based SPARQL query builder has been developed to let users customize their own SPARQL queries for each knowledge representation requirement. The SPARQL query builder has been evaluated with a simulated EHR triple store to ensure its functionalities.
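
A CNTRO-style SPARQL query builder might assemble queries along the lines of the sketch below. The namespace and the property names (cntro:hasEvent, cntro:after) are assumptions made for illustration only; the actual CNTRO vocabulary should be consulted before use:

```python
def build_temporal_query(event_a: str, event_b: str) -> str:
    """Return a SPARQL query selecting patients for whom an event of type
    event_a occurs after an event of type event_b, expressed with
    CNTRO-style (hypothetical) temporal relations."""
    return f"""
PREFIX cntro: <http://www.example.org/cntro#>
SELECT ?patient WHERE {{
  ?patient cntro:hasEvent ?e1, ?e2 .
  ?e1 a <{event_a}> .
  ?e2 a <{event_b}> .
  ?e1 cntro:after ?e2 .
}}
""".strip()

q = build_temporal_query("http://ex.org/DrugExposure", "http://ex.org/LabTest")
print(q)
```

Generating query text per knowledge representation requirement, as here, is what lets non-technical study designers pose temporal questions without writing SPARQL by hand.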

  19. On implementing clinical decision support: achieving scalability and maintainability by combining business rules and ontologies.

    PubMed

    Kashyap, Vipul; Morales, Alfredo; Hongsermeier, Tonya

    2006-01-01

    We present an approach and architecture for implementing scalable and maintainable clinical decision support at the Partners HealthCare System. The architecture integrates a business rules engine that executes declarative if-then rules stored in a rule base referencing objects and methods in a business object model. The rules engine executes object methods by invoking services implemented on the clinical data repository. We identify specialized inferences that support the classification of data and instances into classes, and present an approach to implementing these inferences using an OWL-based ontology engine. Alternative representations of these specialized inferences, as if-then rules or as OWL axioms, are explored, and their impact on the scalability and maintainability of the system is presented. Architectural alternatives for integrating clinical decision support functionality with the invoking application and the underlying clinical data repository, and their associated trade-offs, are also discussed.
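    The division of labour the abstract describes can be sketched in a few lines: declarative if-then rules carry the decision logic, while class-membership questions are delegated to a classifier (in a real deployment, an OWL reasoner). Everything below is invented for illustration, including the class names and the clinical rule.

```python
# Minimal sketch of rules + ontology classification working together.
# is_member() stands in for an OWL classifier; the class definitions and
# the rule are hypothetical, not from the Partners HealthCare system.

def is_member(patient, cls):
    """Stand-in for an ontology engine deciding class membership."""
    definitions = {
        "DiabeticPatient": lambda p: "diabetes" in p["problems"],
        "RenalImpairment": lambda p: p["egfr"] < 60,
    }
    return definitions[cls](patient)

RULES = [
    # (condition expressed over ontology classes, recommended action)
    (lambda p: is_member(p, "DiabeticPatient") and is_member(p, "RenalImpairment"),
     "review metformin dose"),
]

def run_rules(patient):
    """Fire every rule whose condition holds for this patient."""
    return [action for cond, action in RULES if cond(patient)]

print(run_rules({"problems": ["diabetes"], "egfr": 45}))
# → ['review metformin dose']
```

    The design point is that the rule only names ontology classes; whether membership is computed by hard-coded predicates (as here) or by OWL axioms is exactly the representational trade-off the paper examines.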

  20. Strategic Mobility 21: ICODES Extension Technical Plan

    DTIC Science & Technology

    2006-09-30

    re-planning tasks in a dynamically changing decision-making environment (Figure 2). The internal ontology is divided into logical domains that can...another web browser window opens. This window is divided into two sections. One section displays a graphical representation of the conveyance and its...referenced objects with attributes that can be reasoned about by the TRANSWAY agents. Context is provided by an internal ontology that is divided into

  1. An ontology for sensor networks

    NASA Astrophysics Data System (ADS)

    Compton, Michael; Neuhaus, Holger; Bermudez, Luis; Cox, Simon

    2010-05-01

    Sensors and networks of sensors are important ways of monitoring and digitizing reality. As the number and size of sensor networks grow, so too does the amount of data collected. Users of such networks typically need to discover the sensors and data that fit their needs without necessarily understanding the complexities of the network itself. The burden on users is eased if the network and its data are expressed in terms of concepts familiar to the users and their job functions, rather than in terms of the network or how it was designed. Furthermore, the task of collecting and combining data from multiple sensor networks is made easier if metadata about the data and the networks are stored in formats and conceptual models that are amenable to machine reasoning and inference. While the OGC's (Open Geospatial Consortium) SWE (Sensor Web Enablement) standards provide for the description of, and access to, data and metadata for sensors, they do not provide facilities for abstraction, categorization, and reasoning consistent with standard technologies. Once sensors and networks are described using rich semantics (that is, by using logic to describe the sensors, the domain of interest, and the measurements), reasoning and classification can be used to analyse and categorise data, relate measurements with similar information content, and manage, query and task sensors. This will enable new kinds of automated processing and logical assurance built on OGC standards. The W3C SSN-XG (Semantic Sensor Networks Incubator Group) is producing a generic ontology to describe sensors, their environment and the measurements they make. The ontology provides definitions for the structure of sensors and observations, leaving the details of the observed domain unspecified. This allows abstract representations of real-world entities, which are not observed directly but through their observable qualities. Domain semantics, units of measurement, time and time series, and location and mobility ontologies can easily be attached when instantiating the ontology for particular sensors in a domain. After a review of previous work on the specification of sensors, the group is developing the ontology in conjunction with use case development. Part of the difficulty of such work is that relevant concepts from, for example, OGC standards and other ontologies must be identified, aligned, and placed into the ontology in a consistent and logically correct way. In terms of alignment with OGC's SWE, the ontology is intended to be able to model concepts from SensorML and O&M. Like SensorML and O&M, the ontology is based around concepts of systems, processes, and observations. It supports the description of the physical and processing structure of sensors. Sensors are not constrained to physical sensing devices: rather, a sensor is anything that can estimate or calculate the value of a phenomenon, so a device, a computational process, or a combination of the two could play the role of a sensor. The representation of a sensor in the ontology links together what is measured (the domain phenomena), the sensor's physical and other properties, and its functions and processing. Parts of the ontology are well aligned with SensorML and O&M, but parts are not, and the group is working to understand how differences from (and alignment with) the OGC standards affect the application of the ontology.
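    The sensor/observation pattern the ontology formalizes, and the kind of discovery query it enables, can be shown with a toy triple store. The class and property names below only echo the idea and are not the actual SSN terms; a real implementation would use the ontology's IRIs and an RDF library.

```python
# Toy triple store illustrating the sensor/observation pattern and a
# discovery query. Predicate and class names are illustrative, not the
# real SSN vocabulary.

triples = set()

def add(s, p, o):
    triples.add((s, p, o))

# A thermometer (a physical device playing the role of a sensor).
add("thermo1", "rdf:type", "Sensor")
add("thermo1", "observes", "AirTemperature")
# One observation it produced.
add("obs42", "rdf:type", "Observation")
add("obs42", "observedBy", "thermo1")
add("obs42", "observedProperty", "AirTemperature")
add("obs42", "hasResult", "21.5 Cel")

def sensors_for(phenomenon):
    """Find sensors able to observe a phenomenon - the discovery task
    the abstract says users need without knowing network internals."""
    return {s for (s, p, o) in triples
            if p == "observes" and o == phenomenon}

print(sensors_for("AirTemperature"))
# → {'thermo1'}
```

    The point of the abstraction is that the query is phrased against domain concepts ("air temperature"), not against network topology or device models.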

  2. Information Models, Data Requirements, and Agile Data Curation

    NASA Astrophysics Data System (ADS)

    Hughes, John S.; Crichton, Dan; Ritschel, Bernd; Hardman, Sean; Joyner, Ron

    2015-04-01

    The Planetary Data System's next-generation system, PDS4, is an example of the successful use of an ontology-based Information Model (IM) to drive the development and operations of a data system. In traditional systems engineering, requirements, or statements about what is necessary for the system, are collected and analyzed as input into the design stage of systems development. With the advent of big data, the requirements associated with data have begun to dominate, and an ontology-based information model can be used to provide a formalized and rigorous set of data requirements. These requirements address not only the usual issues of data quantity, quality, and disposition, but also data representation, integrity, provenance, context, and semantics. In addition, the use of these data requirements during system development has many characteristics of Agile Curation as proposed by Young et al. [Taking Another Look at the Data Management Life Cycle: Deconstruction, Agile, and Community, AGU 2014], namely adaptive planning, evolutionary development, early delivery, continuous improvement, and rapid and flexible response to change. For example, customers can be satisfied through early and continuous delivery of system software and services that are configured directly from the information model. This presentation will describe the PDS4 architecture and its three principal parts: the ontology-based Information Model (IM), the federated registries and repositories, and the REST-based service layer for search, retrieval, and distribution. The development of the IM will be highlighted, with special emphasis on knowledge acquisition, the impact of the IM on development and operations, and the use of shared ontologies at multiple governance levels to promote system interoperability and data correlation.
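    "Services configured directly from the information model" can be illustrated by letting a small model fragment drive validation instead of hand-written checks. The fragment below is loosely inspired by PDS4 naming but is not the actual PDS4 schema; attributes and rules are invented for the sketch.

```python
# Sketch of model-driven validation: the information model (here a dict,
# invented and not the real PDS4 IM) is the single source of the data
# requirements that the service enforces.

MODEL = {
    "Product_Observational": {
        "attributes": {
            "logical_identifier": {"type": str, "required": True},
            "version_id": {"type": str, "required": True},
            "description": {"type": str, "required": False},
        }
    }
}

def validate(cls, record):
    """Check a record against the model's attribute requirements."""
    spec = MODEL[cls]["attributes"]
    errors = []
    for name, rule in spec.items():
        if rule["required"] and name not in record:
            errors.append(f"missing required attribute: {name}")
        elif name in record and not isinstance(record[name], rule["type"]):
            errors.append(f"wrong type for attribute: {name}")
    return errors

print(validate("Product_Observational", {"logical_identifier": "urn:x:1"}))
# → ['missing required attribute: version_id']
```

    When the model evolves, validation (and, in the full architecture, registry schemas and service behaviour) follows automatically, which is what makes early and continuous delivery from the IM possible.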

  3. Modeling functional Magnetic Resonance Imaging (fMRI) experimental variables in the Ontology of Experimental Variables and Values (OoEVV)

    PubMed Central

    Burns, Gully A.P.C.; Turner, Jessica A.

    2015-01-01

    Neuroimaging data is the raw material of cognitive neuroscience experiments, leading to scientific knowledge about human neurological and psychological disease, language, perception, attention and, ultimately, cognition. The structure of the variables used in the experimental design defines the structure of the data gathered in the experiments; this in turn structures the interpretative assertions that may be presented as experimental conclusions. Representing these assertions, and the experimental data that support them, in a computable way means that they could be used in logical reasoning environments, e.g. for automated meta-analyses, or for linking hypotheses and results across different levels of neuroscientific experiments. Therefore, a crucial first step towards representing neuroimaging results in a clear, computable way is to develop representations for the scientific variables involved in neuroimaging experiments. These representations should be expressive, computable, valid, extensible, and easy to use. They should also leverage existing semantic standards to interoperate easily with other systems. We present an ontology design pattern called the Ontology of Experimental Variables and Values (OoEVV). This is designed to provide a lightweight framework to capture the mathematical properties of data, with appropriate ‘hooks’ to permit linkage to other ontology-driven projects (such as the Ontology of Biomedical Investigations, OBI). We instantiate the OoEVV system with a small number of functional Magnetic Resonance Imaging datasets, to demonstrate the system’s ability to describe the variables of a neuroimaging experiment. OoEVV is designed to be compatible with the XCEDE neuroimaging data standard for data collection terminology, and with the Cognitive Paradigm Ontology (CogPO) for specific reasoning elements of neuroimaging experimental designs. PMID:23684873

  4. A DNA-based semantic fusion model for remote sensing data.

    PubMed

    Sun, Heng; Weng, Jian; Yu, Guangchuang; Massawe, Richard H

    2013-01-01

    Semantic technology plays a key role in various domains, from conversation understanding to algorithm analysis. As a highly efficient semantic tool, an ontology can represent, process and manage widespread knowledge. Nowadays, many researchers use ontologies to collect and organize the semantic information of data in order to maximize research productivity. In this paper, we first describe our work on the development of a remote sensing data ontology, with a primary focus on semantic fusion-driven research for big data. Our ontology is made up of 1,264 concepts and 2,030 semantic relationships. However, the growth of big data is straining the capacities of current semantic fusion and reasoning practices. Considering the massive parallelism of DNA strands, we propose a novel DNA-based semantic fusion model. In this model, a parallel strategy is developed to encode the semantic information in DNA for a large volume of remote sensing data. The semantic information is read in a parallel, bit-wise manner, and individual bits are converted to bases. By doing so, a considerable amount of conversion time can be saved: the cluster-based multi-process program reduces the conversion time from 81,536 seconds to 4,937 seconds for 4.34 GB of source data files. Moreover, the size of the result file recording the DNA sequences is 54.51 GB for the parallel C program, compared with 57.89 GB for the sequential Perl program, showing that our parallel method can also reduce the DNA synthesis cost. In addition, data types are encoded in our model, which is a basis for building a type system in our future DNA computer. Finally, we describe theoretically an algorithm for DNA-based semantic fusion. This algorithm enables the integration of knowledge from disparate remote sensing data sources into a consistent, accurate, and complete representation, depending solely on ligation reactions and screening operations instead of the ontology.
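    The core conversion step, turning a bit stream into a base sequence, can be sketched with a common two-bits-per-base scheme. Note this is an assumed encoding for illustration: the paper works bit-wise and its exact mapping may differ.

```python
# Sketch of bit-to-base conversion using a conventional two-bits-per-base
# mapping (an assumption; not necessarily the paper's exact scheme).

BASE = {0b00: "A", 0b01: "C", 0b10: "G", 0b11: "T"}

def encode(data: bytes) -> str:
    """Encode raw bytes as a DNA base string, two bits per base."""
    out = []
    for byte in data:
        for shift in (6, 4, 2, 0):           # high-order bit pairs first
            out.append(BASE[(byte >> shift) & 0b11])
    return "".join(out)

def decode(seq: str) -> bytes:
    """Invert encode(): four bases back into one byte."""
    rev = {b: v for v, b in BASE.items()}
    data = bytearray()
    for i in range(0, len(seq), 4):
        byte = 0
        for b in seq[i:i + 4]:
            byte = (byte << 2) | rev[b]
        data.append(byte)
    return bytes(data)

print(encode(b"OK"))  # → CATTCAGT
```

    The conversion is embarrassingly parallel, since each byte maps to its four bases independently, which is what makes the cluster-based speedup in the abstract possible.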

  5. A DNA-Based Semantic Fusion Model for Remote Sensing Data

    PubMed Central

    Sun, Heng; Weng, Jian; Yu, Guangchuang; Massawe, Richard H.

    2013-01-01

    Semantic technology plays a key role in various domains, from conversation understanding to algorithm analysis. As a highly efficient semantic tool, an ontology can represent, process and manage widespread knowledge. Nowadays, many researchers use ontologies to collect and organize the semantic information of data in order to maximize research productivity. In this paper, we first describe our work on the development of a remote sensing data ontology, with a primary focus on semantic fusion-driven research for big data. Our ontology is made up of 1,264 concepts and 2,030 semantic relationships. However, the growth of big data is straining the capacities of current semantic fusion and reasoning practices. Considering the massive parallelism of DNA strands, we propose a novel DNA-based semantic fusion model. In this model, a parallel strategy is developed to encode the semantic information in DNA for a large volume of remote sensing data. The semantic information is read in a parallel, bit-wise manner, and individual bits are converted to bases. By doing so, a considerable amount of conversion time can be saved: the cluster-based multi-process program reduces the conversion time from 81,536 seconds to 4,937 seconds for 4.34 GB of source data files. Moreover, the size of the result file recording the DNA sequences is 54.51 GB for the parallel C program, compared with 57.89 GB for the sequential Perl program, showing that our parallel method can also reduce the DNA synthesis cost. In addition, data types are encoded in our model, which is a basis for building a type system in our future DNA computer. Finally, we describe theoretically an algorithm for DNA-based semantic fusion. This algorithm enables the integration of knowledge from disparate remote sensing data sources into a consistent, accurate, and complete representation, depending solely on ligation reactions and screening operations instead of the ontology. PMID:24116207

  6. Persistent identifiers for web service requests relying on a provenance ontology design pattern

    NASA Astrophysics Data System (ADS)

    Car, Nicholas; Wang, Jingbo; Wyborn, Lesley; Si, Wei

    2016-04-01

    Delivering provenance information for datasets produced from static inputs is relatively straightforward: we represent the processing actions and data flow using provenance ontologies and link to copies of the inputs stored in repositories. If appropriate detail is given, the provenance information can then describe what actions have occurred (transparency) and enable reproducibility. When web service-generated data, rather than static inputs, is used by a process to create a dataset, we need more sophisticated provenance representations of the web service request, as we can no longer simply link to data stored in a repository. A graph-based provenance representation, such as the W3C's PROV standard, can model the web service request both as a single conceptual dataset and as a small workflow with a number of components, within the same provenance report. This dual representation does more than allow simplified or detailed views of a dataset's production to be used where appropriate. It also allows persistent identifiers to be assigned to instances of web service requests, enabling one form of dynamic data citation, and allows those identifiers to resolve to whatever level of detail implementers think appropriate in order for the web service request to be reproduced. In this presentation we detail our reasoning in representing web service requests as small workflows. In outline, this stems from the idea that web service requests are perdurant things, and in order to most easily persist knowledge of them for provenance, we should represent them as a nexus of relationships between endurant things, such as datasets and knowledge of particular system types, since endurant things are far easier to persist. We also describe the ontology design pattern that we use to represent workflows in general and how we apply it to different types of web service requests. We give examples of specific web service request instances that were made by systems at Australia's National Computing Infrastructure and show how one can 'click' through provenance interfaces to see the dual representations of the requests, using provenance management tooling we have built.
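    The dual-representation idea can be sketched as one persistent identifier resolving to two provenance views of different granularity. The identifiers and the simplified triple layout below are invented; a real system would use the W3C PROV-O vocabulary and an RDF store.

```python
# Toy illustration of dual provenance views behind one persistent
# identifier. All identifiers are hypothetical; predicates only gesture
# at PROV-O terms (prov:Entity, prov:Activity, prov:used, ...).

REQUEST_ID = "pid:wsreq-001"

# Coarse view: the whole request as a single conceptual dataset.
coarse_view = [
    (REQUEST_ID, "rdf:type", "prov:Entity"),
    (REQUEST_ID, "prov:wasGeneratedBy", "pid:subset-activity"),
]

# Fine view: the same request as a small workflow with components.
fine_view = [
    (REQUEST_ID, "rdf:type", "prov:Activity"),
    (REQUEST_ID, "prov:used", "pid:source-dataset-v2"),
    (REQUEST_ID, "prov:used", "pid:service-endpoint-description"),
    (REQUEST_ID, "prov:generated", "pid:response-001"),
]

def resolve(pid, detail="coarse"):
    """One persistent identifier, two levels of provenance detail."""
    view = coarse_view if detail == "coarse" else fine_view
    return [t for t in view if t[0] == pid]

print(len(resolve(REQUEST_ID, "fine")))  # → 4
```

    Citing the identifier gives dynamic-data citation (the coarse view), while resolving it at the fine level records enough of the request to attempt reproduction.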

  7. A Knowledge-based System for Intelligent Support in Pharmacogenomics Evidence Assessment: Ontology-driven Evidence Representation and Retrieval.

    PubMed

    Lee, Chia-Ju; Devine, Beth; Tarczy-Hornoch, Peter

    2017-01-01

    Pharmacogenomics holds promise as a critical component of precision medicine. Yet, the use of pharmacogenomics in routine clinical care is minimal, partly due to the lack of efficient and effective use of existing evidence. This paper describes the design, development, implementation and evaluation of a knowledge-based system that fulfills three critical features: a) providing clinically relevant evidence, b) applying an evidence-based approach, and c) using semantically computable formalism, to facilitate efficient evidence assessment to support timely decisions on adoption of pharmacogenomics in clinical care. To illustrate functionality, the system was piloted in the context of clopidogrel and warfarin pharmacogenomics. In contrast to existing pharmacogenomics knowledge bases, the developed system is the first to exploit the expressivity and reasoning power of logic-based representation formalism to enable unambiguous expression and automatic retrieval of pharmacogenomics evidence to support systematic review with meta-analysis.

  8. Evaluation of need for ontologies to manage domain content for the Reportable Conditions Knowledge Management System.

    PubMed

    Eilbeck, Karen L; Lipstein, Julie; McGarvey, Sunanda; Staes, Catherine J

    2014-01-01

    The Reportable Condition Knowledge Management System (RCKMS) is envisioned to be a single, comprehensive, authoritative, real-time portal to author, view and access computable information about reportable conditions. The system is designed for use by hospitals, laboratories, health information exchanges, and providers to meet public health reporting requirements. The RCKMS Knowledge Representation Workgroup was tasked to explore the need for ontologies to support RCKMS functionality. The workgroup reviewed relevant projects and defined criteria to evaluate candidate knowledge domain areas for ontology development. The use of ontologies is justified for this project to unify the semantics used to describe similar reportable events and concepts between different jurisdictions and over time, to aid data integration, and to manage large, unwieldy datasets that evolve, and are sometimes externally managed.

  9. Towards an ontological representation of morbidity and mortality in Description Logics.

    PubMed

    Santana, Filipe; Freitas, Fred; Fernandes, Roberta; Medeiros, Zulma; Schober, Daniel

    2012-09-21

    Despite the high coverage of biomedical ontologies, very few sound definitions of death can be found. Nevertheless, this concept is relevant in epidemiology, for example for data integration within mortality notification systems. We introduce an ontological representation of the complex biological qualities and processes that inhere in organisms transitioning from life to death, and further characterize them by causal processes and their temporal borders. Several representational difficulties were faced, mainly regarding kinds of processes with blurred or fiat borders that change their type in a continuous rather than discrete mode. Examples of such hard-to-grasp concepts are life, death, and their relationships with injuries and diseases. We illustrate an iterative optimization of definitions across four versions of the ontology, so as to highlight the typical problems encountered in representing complex biological processes. We point out possible solutions for representing concepts related to biological life cycles while preserving the identity of participating individuals, e.g. a patient in transition from life to death. This solution, however, required the use of extended description logics not yet supported by tools. We also focus on the interdependencies involved and the need to change further parts of the ontology when one part is changed. The axiomatic definition of mortality we introduce allows the description of biological processes related to the transition from healthy to diseased or injured, up to a final death state. Exploiting such definitions embedded in descriptions of pathogen transmission by arthropod vectors, the complete sequence of infection and disease processes can be described, from the inoculation of a pathogen by a vector to the death of an individual, while preserving the identity of the patient.
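    The flavour of such axiomatic definitions can be sketched in Description Logic notation. These are illustrative toy axioms under assumed class and role names, not the paper's actual formalization:

```latex
% Illustrative toy axioms, not the paper's actual formalization:
% death is a process with an organism as participant, preceded by a
% dying process caused by a disease or an injury.
\begin{align*}
\mathsf{Death} &\sqsubseteq \mathsf{Process} \sqcap \exists\,\mathsf{hasParticipant}.\mathsf{Organism}\\
\mathsf{Dying} &\sqsubseteq \mathsf{Process} \sqcap \exists\,\mathsf{causedBy}.(\mathsf{Disease} \sqcup \mathsf{Injury})\\
\mathsf{Death} &\sqsubseteq \exists\,\mathsf{precededBy}.\mathsf{Dying}
\end{align*}
```

    The hard part the abstract highlights, a continuous type change of one individual across fiat borders while identity is preserved, is precisely what pushes such axioms beyond what standard OWL tooling supports.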

  10. A Systematic Analysis of Term Reuse and Term Overlap across Biomedical Ontologies

    PubMed Central

    Kamdar, Maulik R.; Tudorache, Tania; Musen, Mark A.

    2016-01-01

    Reusing ontologies and their terms is a principle and best practice that most ontology development methodologies strongly encourage. Reuse comes with the promise of supporting semantic interoperability and reducing engineering costs. In this paper, we present a descriptive study of the current extent of term reuse and overlap among biomedical ontologies. We use the corpus of biomedical ontologies stored in the BioPortal repository, and analyze different types of reuse and overlap constructs. While we find an approximate term overlap of 25–31%, term reuse is only <9%, with most ontologies reusing fewer than 5% of their terms from a small set of popular ontologies. Clustering analysis shows that the terms reused by a common set of ontologies have >90% semantic similarity, hinting that ontology developers tend to reuse terms that are sibling or parent–child nodes. We validate this finding by analysing the logs generated by a Protégé plugin that enables developers to reuse terms from BioPortal. We find that most reuse constructs were 2-level subtrees at the higher levels of the class hierarchy. We developed a Web application that visualizes reuse dependencies and overlap among ontologies, and that proposes similar terms from BioPortal for a term of interest. We also identified a set of error patterns indicating that ontology developers did intend to reuse terms from other ontologies, but used different and sometimes incorrect representations. Our results point to the need for semi-automated tools that augment term reuse in the ontology engineering process through personalized recommendations. PMID:28819351
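    The distinction between the two measures can be made concrete with set operations: overlap compares term sets across ontologies, while reuse counts a given ontology's terms that were minted in another ontology's namespace. The two tiny "ontologies" below are invented, and the study's actual metrics are more nuanced than this sketch.

```python
# Back-of-the-envelope versions of term overlap and term reuse.
# The two term sets are fabricated examples, not real BioPortal data.

chebi_like = {
    "http://purl.obolibrary.org/obo/CHEBI_15377",  # water
    "http://purl.obolibrary.org/obo/CHEBI_16236",  # ethanol
}
drug_onto = {
    "http://purl.obolibrary.org/obo/CHEBI_16236",  # reused CHEBI term
    "http://example.org/drugonto/Aspirin",
    "http://example.org/drugonto/Dose",
}

def overlap(a, b):
    """Jaccard-style term overlap between two term sets."""
    return len(a & b) / len(a | b)

def reuse_fraction(ontology, own_namespace):
    """Share of an ontology's terms minted outside its own namespace."""
    foreign = [t for t in ontology if not t.startswith(own_namespace)]
    return len(foreign) / len(ontology)

print(overlap(chebi_like, drug_onto))                            # → 0.25
print(reuse_fraction(drug_onto, "http://example.org/drugonto/"))
```

    The asymmetry the paper reports (25-31% overlap but <9% reuse) falls out of this distinction: two ontologies can describe the same terms without either actually importing the other's IRIs.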

  11. PageMan: an interactive ontology tool to generate, display, and annotate overview graphs for profiling experiments.

    PubMed

    Usadel, Björn; Nagel, Axel; Steinhauser, Dirk; Gibon, Yves; Bläsing, Oliver E; Redestig, Henning; Sreenivasulu, Nese; Krall, Leonard; Hannah, Matthew A; Poree, Fabien; Fernie, Alisdair R; Stitt, Mark

    2006-12-18

    Microarray technology has become a widely accepted and standardized tool in biology. The first microarray data analysis programs were developed to support pair-wise comparison. However, as microarray experiments have become more routine, large-scale experiments have become more common, investigating multiple time points or sets of mutants or transgenics. To extract biological information from such high-throughput expression data, it is necessary to develop efficient analytical platforms that combine manually curated gene ontologies with efficient visualization and navigation tools. Currently, most tools focus on a few limited biological aspects rather than offering a holistic, integrated analysis. Here we introduce PageMan, a multiplatform, user-friendly, stand-alone software tool that annotates, investigates, and condenses high-throughput microarray data in the context of functional ontologies. It includes a GUI tool to transform different ontologies into a suitable format, enabling the user to compare and choose between different ontologies. It is equipped with several statistical modules for data analysis, including over-representation analysis and Wilcoxon statistical testing. Results are exported in a graphical format for direct use, or for further editing in graphics programs. PageMan provides a fast overview of single treatments and allows genome-level responses to be compared across several microarray experiments covering, for example, stress responses at multiple time points. This aids in searching for trait-specific changes in pathways using mutants or transgenics, analyzing developmental time courses, and comparing between species. In a case study, we analyze the results of publicly available microarrays of multiple cold stress experiments using PageMan, and compare the results to a previously published meta-analysis. PageMan offers a complete user's guide, a web-based over-representation analysis, and a tutorial, and is freely available at http://mapman.mpimp-golm.mpg.de/pageman/. PageMan allows multiple microarray experiments to be efficiently condensed into a single-page graphical display. The flexible interface allows data to be quickly and easily visualized, facilitating comparisons within experiments and to published experiments, thus enabling researchers to gain a rapid overview of the biological responses in the experiments.
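    The over-representation test at the heart of such tools asks: given that K of the N genes on the array belong to a category, how likely is it to see at least k category members among the n genes that responded? This is the hypergeometric upper tail, sketched here with illustrative numbers (the statistic is standard; whether PageMan computes exactly this variant is an assumption).

```python
# Hypergeometric upper-tail p-value for category over-representation.
# The gene counts below are illustrative, not from the PageMan case study.

from math import comb

def overrep_pvalue(N, K, n, k):
    """P(X >= k) for X ~ Hypergeometric(N population, K successes, n draws)."""
    total = comb(N, n)
    return sum(comb(K, i) * comb(N - K, n - i)
               for i in range(k, min(K, n) + 1)) / total

# 4 of 10 genes belong to the category; 3 of the 5 responders do.
print(round(overrep_pvalue(N=10, K=4, n=5, k=3), 4))  # → 0.2619
```

    A small p-value flags the category as over-represented among the responding genes; real analyses repeat this per category and correct for multiple testing.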

  12. A knowledge representation of local pandemic influenza planning models.

    PubMed

    Islam, Runa; Brandeau, Margaret L; Das, Amar K

    2007-10-11

    Planning for pandemic flu outbreak at the small-government level can be aided through the use of mathematical policy models. Formulating and analyzing policy models, however, can be a time- and expertise-expensive process. We believe that a knowledge-based system for facilitating the instantiation of locale- and problem-specific policy models can reduce some of these costs. In this work, we present the ontology we have developed for pandemic influenza policy models.

  13. The taboo against group contact: Hypothesis of Gypsy ontologization.

    PubMed

    Pérez, Juan A; Moscovici, Serge; Chulvi, Berta

    2007-06-01

    The premise of this article is that the symbolic relationships between human beings and animals serve as a model for the relationships between the majority and an ethnic minority. We postulate that two representations serve to organize these relationships between human beings and animals: a domestic one and a wild one. If the domestic animal is an index of human culture, the wild animal is an index of the nature that man considers himself to share with the animal. Under the wild representation, contact with the animal is taboo, as it constitutes a threat to the anthropological difference. We offer the hypothesis that ontologization of the minority, that is, the substitution of a human category with an animal category, and thus its exclusion from the human species, is a method the majority uses when the taboo against contact with wild nature is necessary. Three experiments confirm the hypothesis that the Gypsy minority (as compared with the Gadje majority) is more ontologized when the context (a monkey or a clothed dog) threatens the anthropological differentiation of the Gadje participants.

  14. Ontology patterns for complex topographic feature types

    USGS Publications Warehouse

    Varanka, Dalia E.

    2011-01-01

    Complex feature types are defined as integrated relations between basic features for a shared meaning or concept. The shared semantic concept is difficult to define in commonly used geographic information systems (GIS) and remote sensing technologies. The role of spatial relations between complex feature parts was recognized in early GIS literature, but had limited representation in the feature or coverage data models of GIS. Spatial relations are more explicitly specified in semantic technology. In this paper, semantics for topographic feature ontology design patterns (ODPs) are developed as data models for the representation of complex features. In the context of topographic processes, component assemblages are supported by resource systems and are found on local landscapes. The topographic ontology is organized across six thematic modules that can account for basic feature types, resource systems, and landscape types. Types of complex feature attributes include location, generative processes, and physical description. Node/edge networks model standard spatial relations and relations specific to topographic science to represent complex features. To demonstrate these concepts, data from The National Map of the U.S. Geological Survey were converted and assembled into ODPs.

  15. Clinical modeling--a critical analysis.

    PubMed

    Blobel, Bernd; Goossen, William; Brochhausen, Mathias

    2014-01-01

    Modeling clinical processes (and their informational representation) is a prerequisite for optimally enabling and supporting high-quality, safe care through information and communication technology and the meaningful use of gathered information. The paper investigates existing approaches to clinical modeling, systematically analyzing the underlying principles, the consistency with and opportunities for integration with other existing or emerging projects, and the correctness with which the reality of health and health services is represented. The analysis is performed using an architectural framework for modeling real-world systems. In addition, fundamental work on the ontological representation of facts, relations, and processes in the clinical domain is applied, including the integration of advanced methodologies such as translational and systems medicine. The paper demonstrates fundamental weaknesses, differing maturity, and differing evolutionary potential in the approaches considered. It offers a development process starting with the business domain and its ontologies, continuing with the Reference Model-Open Distributed Processing (RM-ODP) related conceptual models in the ICT ontology space, the information and computational views, and concluding with the implementation details represented as the engineering and technology views, respectively. The existing approaches reflect the clinical domain at different levels and focus on different phases of the development process rather than first establishing a representation of the real business process; as a result, they enable the involvement of domain experts quite differently and in partially limited ways. Copyright © 2013 Elsevier Ireland Ltd. All rights reserved.

  16. The IHMC CmapTools software in research and education: a multi-level use case in Space Meteorology

    NASA Astrophysics Data System (ADS)

    Messerotti, Mauro

    2010-05-01

    The IHMC (Institute for Human and Machine Cognition, Florida University System, USA) CmapTools software is a powerful multi-platform tool for knowledge modelling in graphical form based on concept maps. In this work we present its application to the high-level development of a set of multi-level concept maps in the framework of Space Meteorology, to act as the kernel of a space meteorology domain ontology. This is an example of a research use case, as a domain ontology coded in machine-readable form via, e.g., OWL (Web Ontology Language) is suitable to be an active layer of any knowledge management system embedded in a Virtual Observatory (VO). Apart from being manageable at machine level, concept maps developed via CmapTools are intrinsically human-readable and can embed hyperlinks and objects of many kinds. They are therefore suitable for publication on the web: the coded knowledge can be exploited for educational purposes by students and the public, as the level of information can be naturally organized among linked concept maps in progressively increasing complexity levels. Hence CmapTools and its advanced version COE (Concept-map Ontology Editor) represent effective and user-friendly software tools for high-level knowledge representation in research and education.

  17. Transformation of standardized clinical models based on OWL technologies: from CEM to OpenEHR archetypes

    PubMed Central

    Legaz-García, María del Carmen; Menárguez-Tortosa, Marcos; Fernández-Breis, Jesualdo Tomás; Chute, Christopher G; Tao, Cui

    2015-01-01

    Introduction The semantic interoperability of electronic healthcare records (EHRs) systems is a major challenge in the medical informatics area. International initiatives pursue the use of semantically interoperable clinical models, and ontologies have frequently been used in semantic interoperability efforts. The objective of this paper is to propose a generic, ontology-based, flexible approach for supporting the automatic transformation of clinical models, which is illustrated for the transformation of Clinical Element Models (CEMs) into openEHR archetypes. Methods Our transformation method exploits the fact that the information models of the most relevant EHR specifications are available in the Web Ontology Language (OWL). The transformation approach is based on defining mappings between those ontological structures. We propose a way in which CEM entities can be transformed into openEHR by using transformation templates and OWL as common representation formalism. The transformation architecture exploits the reasoning and inferencing capabilities of OWL technologies. Results We have devised a generic, flexible approach for the transformation of clinical models, implemented for the unidirectional transformation from CEM to openEHR, a series of reusable transformation templates, a proof-of-concept implementation, and a set of openEHR archetypes that validate the methodological approach. Conclusions We have been able to transform CEM into archetypes in an automatic, flexible, reusable transformation approach that could be extended to other clinical model specifications. We exploit the potential of OWL technologies for supporting the transformation process. We believe that our approach could be useful for international efforts in the area of semantic interoperability of EHR systems. PMID:25670753
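
    The template-driven idea at the core of this transformation approach can be caricatured in a few lines. The sketch below is an illustrative assumption only: the dict-based structures, element names, and slot names stand in for the real OWL representations of CEM and openEHR, which are far richer.

```python
# Toy sketch of template-driven clinical model transformation.
# Assumption: dicts stand in for the OWL structures of CEM and openEHR;
# all element and slot names here are hypothetical.

# A simplified "CEM-like" source element.
cem_element = {
    "name": "SystolicBloodPressure",
    "data": {"value": 120, "units": "mm[Hg]"},
}

# A transformation template: each target slot is filled by a mapping rule
# over the source element, mirroring the paper's reusable templates.
TEMPLATE = {
    "archetype_id": lambda e: "openEHR-EHR-OBSERVATION." + e["name"].lower(),
    "magnitude": lambda e: e["data"]["value"],
    "units": lambda e: e["data"]["units"],
}

def transform(element, template):
    """Apply every mapping rule of the template to the source element."""
    return {slot: rule(element) for slot, rule in template.items()}

archetype = transform(cem_element, TEMPLATE)
print(archetype["archetype_id"])  # openEHR-EHR-OBSERVATION.systolicbloodpressure
```

    In the actual system the mappings are expressed over OWL entities and the consistency of the result is checked by an OWL reasoner; the dictionary-and-lambda form above only conveys the shape of the approach.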

  18. CDAO-Store: Ontology-driven Data Integration for Phylogenetic Analysis

    PubMed Central

    2011-01-01

    Background The Comparative Data Analysis Ontology (CDAO) is an ontology developed, as part of the EvoInfo and EvoIO groups supported by the National Evolutionary Synthesis Center, to provide semantic descriptions of data and transformations commonly found in the domain of phylogenetic analysis. The core concepts of the ontology enable the description of phylogenetic trees and associated character data matrices. Results Using CDAO as the semantic back-end, we developed a triple-store, named CDAO-Store. CDAO-Store is an RDF-based store of phylogenetic data, including a complete import of TreeBASE. CDAO-Store provides a programmatic interface, in the form of web services, and a web-based front-end, to perform both user-defined and domain-specific queries; domain-specific queries include searching for nearest common ancestors and minimum spanning clades, and filtering trees in the store by size, author, taxa, tree identifier, algorithm, or method. In addition, CDAO-Store provides a visualization front-end, called CDAO-Explorer, which can be used to view both character data matrices and trees extracted from the CDAO-Store. CDAO-Store provides import capabilities, enabling the addition of new data to the triple-store; files in PHYLIP, MEGA, NeXML, and NEXUS formats can be imported and their CDAO representations added to the triple-store. Conclusions CDAO-Store is made up of a versatile and integrated set of tools to support phylogenetic analysis. To the best of our knowledge, CDAO-Store is the first semantically-aware repository of phylogenetic data with domain-specific querying capabilities. The portal to CDAO-Store is available at http://www.cs.nmsu.edu/~cdaostore. PMID:21496247

  19. CDAO-store: ontology-driven data integration for phylogenetic analysis.

    PubMed

    Chisham, Brandon; Wright, Ben; Le, Trung; Son, Tran Cao; Pontelli, Enrico

    2011-04-15

    The Comparative Data Analysis Ontology (CDAO) is an ontology developed, as part of the EvoInfo and EvoIO groups supported by the National Evolutionary Synthesis Center, to provide semantic descriptions of data and transformations commonly found in the domain of phylogenetic analysis. The core concepts of the ontology enable the description of phylogenetic trees and associated character data matrices. Using CDAO as the semantic back-end, we developed a triple-store, named CDAO-Store. CDAO-Store is an RDF-based store of phylogenetic data, including a complete import of TreeBASE. CDAO-Store provides a programmatic interface, in the form of web services, and a web-based front-end, to perform both user-defined and domain-specific queries; domain-specific queries include searching for nearest common ancestors and minimum spanning clades, and filtering trees in the store by size, author, taxa, tree identifier, algorithm, or method. In addition, CDAO-Store provides a visualization front-end, called CDAO-Explorer, which can be used to view both character data matrices and trees extracted from the CDAO-Store. CDAO-Store provides import capabilities, enabling the addition of new data to the triple-store; files in PHYLIP, MEGA, NeXML, and NEXUS formats can be imported and their CDAO representations added to the triple-store. CDAO-Store is made up of a versatile and integrated set of tools to support phylogenetic analysis. To the best of our knowledge, CDAO-Store is the first semantically-aware repository of phylogenetic data with domain-specific querying capabilities. The portal to CDAO-Store is available at http://www.cs.nmsu.edu/~cdaostore.
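
    One of the domain-specific queries mentioned above, the nearest common ancestor, reduces to a walk up the tree. A minimal sketch, assuming a toy tree stored as a child-to-parent map (the taxon names are invented; the real CDAO-Store answers such queries over RDF via its web services):

```python
# Nearest-common-ancestor query over a toy phylogenetic tree.
# Assumption: taxa and topology are made up for illustration.

PARENT = {  # child -> parent
    "human": "primates", "chimp": "primates",
    "primates": "mammals", "mouse": "mammals",
    "mammals": "vertebrates", "zebrafish": "vertebrates",
}

def ancestors(node):
    """Path from a node up to the root, inclusive."""
    path = [node]
    while node in PARENT:
        node = PARENT[node]
        path.append(node)
    return path

def nearest_common_ancestor(a, b):
    """First node on b's root path that also lies on a's root path."""
    seen = set(ancestors(a))
    for node in ancestors(b):
        if node in seen:
            return node
    return None

print(nearest_common_ancestor("human", "mouse"))  # mammals
```

    A triple store expresses the same walk with SPARQL property paths instead of an explicit loop; the in-memory version above just makes the semantics of the query concrete.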

  20. FleXConf: A Flexible Conference Assistant Using Context-Aware Notification Services

    NASA Astrophysics Data System (ADS)

    Armenatzoglou, Nikos; Marketakis, Yannis; Kriara, Lito; Apostolopoulos, Elias; Papavasiliou, Vicky; Kampas, Dimitris; Kapravelos, Alexandros; Kartsonakis, Eythimis; Linardakis, Giorgos; Nikitaki, Sofia; Bikakis, Antonis; Antoniou, Grigoris

    Integrating context-aware notification services into ubiquitous computing systems aims at the provision of the right information to the right users, at the right time, in the right place, and on the right device, and constitutes a significant step towards the realization of the Ambient Intelligence vision. In this paper, we present FleXConf, a semantics-based system that supports location-based, personalized notification services for the assistance of conference attendees. Its special features include an ontology-based representation model, rule-based context-aware reasoning, and a novel positioning system for indoor environments.

  1. Materiality, Technology, and Constructing Social Knowledge through Bodily Representation: A View from Prehistoric Guernsey, Channel Islands

    PubMed Central

    Kohring, Sheila

    2015-01-01

    The role of the human body in the creation of social knowledge—as an ontological and/or aesthetic category—has been applied across social theory. In all these approaches, the body is viewed as a locus for experience and knowledge. If the body is a source of subjective knowledge, then it can also become an important means of creating ontological categories of self and society. The materiality of human representations within art traditions, then, can be interpreted as providing a means for contextualizing and aestheticizing the body in order to produce a symbolic and structural knowledge category. This paper explores the effect of material choices and techniques of production when representing the human body on how societies order and categorize the world. PMID:26290654

  2. Materiality, Technology, and Constructing Social Knowledge through Bodily Representation: A View from Prehistoric Guernsey, Channel Islands.

    PubMed

    Kohring, Sheila

    2015-04-22

    The role of the human body in the creation of social knowledge-as an ontological and/or aesthetic category-has been applied across social theory. In all these approaches, the body is viewed as a locus for experience and knowledge. If the body is a source of subjective knowledge, then it can also become an important means of creating ontological categories of self and society. The materiality of human representations within art traditions, then, can be interpreted as providing a means for contextualizing and aestheticizing the body in order to produce a symbolic and structural knowledge category. This paper explores the effect of material choices and techniques of production when representing the human body on how societies order and categorize the world.

  3. Analysis and visualization of disease courses in a semantically-enabled cancer registry.

    PubMed

    Esteban-Gil, Angel; Fernández-Breis, Jesualdo Tomás; Boeker, Martin

    2017-09-29

    Regional and epidemiological cancer registries are important for cancer research and the quality management of cancer treatment. Many technological solutions are available nowadays to collect and analyse data for cancer registries. However, the lack of a well-defined common semantic model is a problem when user-defined analyses and data linking to external resources are required. The objectives of this study are: (1) design of a semantic model for local cancer registries; (2) development of a semantically-enabled cancer registry based on this model; and (3) semantic exploitation of the cancer registry for analysing and visualising disease courses. Our proposal is based on our previous results and experience working with semantic technologies. Data stored in a cancer registry database were transformed into RDF employing a process driven by OWL ontologies. The semantic representation of the data was then processed to extract semantic patient profiles, which were exploited by means of SPARQL queries to identify groups of similar patients and to analyse the disease timelines of patients. Based on the requirements analysis, we have produced a draft of an ontology that models the semantics of a local cancer registry in a pragmatic, extensible way. We have implemented a Semantic Web platform that allows transforming and storing data from cancer registries in RDF. This platform also permits users to formulate incremental user-defined queries through a graphical user interface. The query results can be displayed in several customisable ways. The complex disease timelines of individual patients can be clearly represented. Different events, e.g. different therapies and disease courses, are presented according to their temporal and causal relations. The presented platform is an example of the parallel development of ontologies and applications that take advantage of Semantic Web technologies in the medical field. The semantic structure of the representation renders it easy to analyse key figures of the patients and their evolution at different granularity levels.
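
    The ontology-driven lifting of registry rows into RDF, followed by query-based filtering, can be caricatured in plain Python. Everything below is an illustrative assumption: the column names, property IRIs, and filter condition do not come from the paper's actual schema.

```python
# Toy relational-to-RDF mapping in the spirit of an ontology-driven ETL.
# Assumption: table columns and property IRIs are invented placeholders.

ROWS = [
    {"patient_id": 1, "diagnosis": "C50.9", "year": 2015},
    {"patient_id": 2, "diagnosis": "C61", "year": 2016},
]

# Column -> property IRI mapping drives the transformation.
MAPPING = {
    "diagnosis": "onto:hasDiagnosis",
    "year": "onto:diagnosisYear",
}

def to_triples(row):
    """Emit one (subject, predicate, object) triple per mapped column."""
    subject = f"registry:patient/{row['patient_id']}"
    return [(subject, prop, row[col]) for col, prop in MAPPING.items()]

graph = [t for row in ROWS for t in to_triples(row)]

# A SPARQL-like filter: patients diagnosed after 2015.
late = [s for s, p, o in graph if p == "onto:diagnosisYear" and o > 2015]
print(late)  # ['registry:patient/2']
```

    In the real platform the graph lives in a triple store and the filter is a SPARQL query composed incrementally through the GUI; the list comprehension above mirrors only the query's logic.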

  4. Evaluation of need for ontologies to manage domain content for the Reportable Conditions Knowledge Management System

    PubMed Central

    Eilbeck, Karen L.; Lipstein, Julie; McGarvey, Sunanda; Staes, Catherine J.

    2014-01-01

    The Reportable Condition Knowledge Management System (RCKMS) is envisioned to be a single, comprehensive, authoritative, real-time portal to author, view and access computable information about reportable conditions. The system is designed for use by hospitals, laboratories, health information exchanges, and providers to meet public health reporting requirements. The RCKMS Knowledge Representation Workgroup was tasked to explore the need for ontologies to support RCKMS functionality. The workgroup reviewed relevant projects and defined criteria to evaluate candidate knowledge domain areas for ontology development. The use of ontologies is justified for this project to unify the semantics used to describe similar reportable events and concepts between different jurisdictions and over time, to aid data integration, and to manage large, unwieldy datasets that evolve, and are sometimes externally managed. PMID:25954354

  5. GOGrapher: A Python library for GO graph representation and analysis

    PubMed Central

    Muller, Brian; Richards, Adam J; Jin, Bo; Lu, Xinghua

    2009-01-01

    Background The Gene Ontology is the most commonly used controlled vocabulary for annotating proteins. The concepts in the ontology are organized as a directed acyclic graph, in which a node corresponds to a biological concept and a directed edge denotes the parent-child semantic relationship between a pair of terms. A large number of protein annotations further create links between proteins and their functional annotations, reflecting the contemporary knowledge about proteins and their functional relationships. This leads to a complex graph consisting of interleaved biological concepts and their associated proteins. What is needed is a simple, open source library that provides tools to not only create and view the Gene Ontology graph, but to analyze and manipulate it as well. Here we describe the development and use of GOGrapher, a Python library that can be used for the creation, analysis, manipulation, and visualization of Gene Ontology related graphs. Findings An object-oriented approach was adopted to organize the hierarchy of the graph types and associated classes. An Application Programming Interface is provided through which different types of graphs can be programmatically created, manipulated, and visualized. GOGrapher has been successfully utilized in multiple research projects, e.g., a graph-based multi-label text classifier for protein annotation. Conclusion The GOGrapher project provides a reusable programming library designed for the manipulation and analysis of Gene Ontology graphs. The library is freely available for the scientific community to use and improve. PMID:19583843
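
    The DAG structure described above is easy to sketch from first principles. The fragment below uses a bare dict rather than GOGrapher's own classes (its API is not shown in the abstract), and the term IDs are abbreviated placeholders; it only illustrates the key property that a DAG node may have several parents.

```python
# Toy GO-like directed acyclic graph: child -> set of parents.
# Assumption: term IDs are abbreviated placeholders, not real GO IDs.

IS_A = {
    "GO:b": {"GO:root"},
    "GO:c": {"GO:root"},
    "GO:d": {"GO:b", "GO:c"},  # a DAG node may have several parents
}

def all_ancestors(term):
    """Every term reachable upward from `term` via is_a edges."""
    out = set()
    stack = [term]
    while stack:
        for parent in IS_A.get(stack.pop(), ()):
            if parent not in out:
                out.add(parent)
                stack.append(parent)
    return out

print(sorted(all_ancestors("GO:d")))  # ['GO:b', 'GO:c', 'GO:root']
```

    Ancestor closure of this kind is the basic operation behind term propagation in protein annotation: a protein annotated to a term is implicitly annotated to all of that term's ancestors.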

  6. The Digital Space Shuttle, 3D Graphics, and Knowledge Management

    NASA Technical Reports Server (NTRS)

    Gomez, Julian E.; Keller, Paul J.

    2003-01-01

    The Digital Shuttle is a knowledge management project that seeks to define symbiotic relationships between 3D graphics and formal knowledge representations (ontologies). 3D graphics provides geometric and visual content, in 2D and 3D CAD forms, and the capability to display systems knowledge. Because the data is so heterogeneous, and the interrelated data structures are complex, 3D graphics combined with ontologies provides mechanisms for navigating the data and visualizing relationships.

  7. GalenOWL: Ontology-based drug recommendations discovery

    PubMed Central

    2012-01-01

    Background Identification of drug-drug and drug-disease interactions can pose a difficult problem, as the increasingly large number of available drugs, coupled with ongoing research activities in the pharmaceutical domain, makes the task of discovering relevant information difficult. Although international standards, such as the ICD-10 classification and the UNII registration, have been developed in order to enable efficient knowledge sharing, medical staff need to stay constantly updated in order to effectively discover drug interactions before prescription. The use of Semantic Web technologies has been proposed in earlier works to tackle this problem. Results This work presents a semantically enabled online service, named GalenOWL, capable of offering real-time drug-drug and drug-disease interaction discovery. For enabling this kind of service, medical information and terminology had to be translated into ontological terms and appropriately coupled with medical knowledge of the field. International standards such as the aforementioned ICD-10 and UNII provide the backbone of the common representation of medical data, while the medical knowledge of drug interactions is represented by a rule base that makes use of these standards. Details of the system architecture are presented, along with an outline of the difficulties that had to be overcome. A comparison of the developed ontology-based system with a similar system developed using a traditional business-logic rule engine is performed, giving insights into the advantages and drawbacks of both implementations. Conclusions The use of Semantic Web technologies has been found to be a good match for developing drug recommendation systems. Ontologies can effectively encapsulate medical knowledge, and rule-based reasoning can capture and encode the drug interaction knowledge. PMID:23256945
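
    The rule-base idea can be reduced to table lookups for illustration. The drug and condition codes below are placeholders rather than real UNII/ICD-10 content, and the actual GalenOWL system evaluates such rules over OWL ontologies with a reasoner; this sketch only shows what an interaction check computes.

```python
# Minimal rule-based interaction check in the spirit of the abstract.
# Assumption: all codes and rule texts are invented placeholders.

INTERACTION_RULES = {  # (drug, drug) -> warning
    ("drugA", "drugB"): "increased bleeding risk",
}
CONTRAINDICATIONS = {  # (drug, condition code) -> warning
    ("drugA", "I10"): "avoid in hypertension",
}

def check(prescription, diagnoses):
    """Return all rule matches for a prescription and a diagnosis list."""
    alerts = []
    for i, a in enumerate(prescription):
        for b in prescription[i + 1:]:
            # Drug pairs are unordered, so try both orientations.
            msg = INTERACTION_RULES.get((a, b)) or INTERACTION_RULES.get((b, a))
            if msg:
                alerts.append((a, b, msg))
    for drug in prescription:
        for dx in diagnoses:
            msg = CONTRAINDICATIONS.get((drug, dx))
            if msg:
                alerts.append((drug, dx, msg))
    return alerts

alerts = check(["drugB", "drugA"], ["I10"])
for alert in alerts:
    print(alert)
```

    Encoding the same rules ontologically lets a reasoner also fire them for subclasses, e.g. a rule about a drug class applying to every member drug, which plain table lookups cannot do.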

  8. Automated Ontology Generation Using Spatial Reasoning

    NASA Astrophysics Data System (ADS)

    Coalter, Alton; Leopold, Jennifer L.

    Recently there has been much interest in using ontologies to facilitate knowledge representation, integration, and reasoning. Correspondingly, the extent of the information embodied by an ontology is increasing beyond the conventional is_a and part_of relationships. To address these requirements, a vast amount of digitally available information may need to be considered when building ontologies, prompting a desire for software tools to automate at least part of the process. The main efforts in this direction have involved textual information retrieval and extraction methods. For some domains, extension of the basic relationships could be enhanced further by the analysis of 2D and/or 3D images. For this type of media, image processing algorithms are more appropriate than textual analysis methods. Herein we present an algorithm that, given a collection of 3D image files, utilizes Qualitative Spatial Reasoning (QSR) to automate the creation of an ontology for the objects represented by the images, relating the objects in terms of is_a and part_of relationships and also through unambiguous Region Connection Calculus (RCC) relations.
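
    To make the idea concrete, here is a rough sketch of deriving a qualitative spatial relation from 3D axis-aligned bounding boxes. The containment test and its mapping onto part_of are illustrative assumptions that simplify RCC (proper and tangential containment are not distinguished); this is not the authors' algorithm.

```python
# Qualitative spatial relation from 3D axis-aligned bounding boxes.
# Assumption: simplified three-way classification standing in for RCC.

def relation(a, b):
    """a, b: ((xmin, xmax), (ymin, ymax), (zmin, zmax))."""
    inside = all(lo2 <= lo1 and hi1 <= hi2
                 for (lo1, hi1), (lo2, hi2) in zip(a, b))
    overlap = all(lo1 < hi2 and lo2 < hi1
                  for (lo1, hi1), (lo2, hi2) in zip(a, b))
    if inside:
        return "part_of"       # a lies within b (RCC: TPP/NTPP)
    if overlap:
        return "overlaps"      # RCC: PO (also returned for b inside a)
    return "disconnected"      # RCC: DC

valve = ((2, 3), (2, 3), (2, 3))
heart = ((0, 5), (0, 5), (0, 5))
print(relation(valve, heart))  # part_of
```

    Running such a classifier over every pair of segmented objects in an image collection yields candidate part_of edges for the automatically generated ontology.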

  9. Lessons learned in deploying a cloud-based knowledge platform for the Earth Science Information Partners Federation (ESIP)

    NASA Astrophysics Data System (ADS)

    Pouchard, L. C.; Depriest, A.; Huhns, M.

    2012-12-01

    Ontologies and semantic technologies are an essential infrastructure component of systems supporting knowledge integration in the Earth Sciences. Numerous earth science ontologies exist, but they are hard to discover because they tend to be hosted with the projects that develop them. There are often few quality measures and sparse metadata associated with these ontologies, such as modification dates, versioning, purpose, number of classes, and properties. Projects often develop ontologies for their own needs without considering existing ontology entities or derivations from formal and more basic ontologies. The result is mostly orthogonal ontologies, and ontologies that are not modular enough to reuse in part or adapt for new purposes, in spite of existing standards for ontology representation. Additional obstacles to sharing and reuse include a lack of maintenance once a project is completed. These obstacles prevent the full exploitation of semantic technologies in a context where they could become needed enablers for service discovery and for matching data with services. To start addressing this gap, we have deployed BioPortal, a mature, domain-independent ontology and semantic service system developed by the National Center for Biomedical Ontology (NCBO), on the ESIP Testbed under the governance of the ESIP Semantic Web cluster. ESIP provides a forum for a broad-based, distributed community of data and information technology practitioners and stakeholders to coordinate their efforts and develop new ideas for interoperability solutions. The Testbed provides an environment where innovations and best practices can be explored and evaluated. One objective of this deployment is to provide a community platform that harnesses the organizational and cyber infrastructure provided by ESIP at minimal cost. Another objective is to host ontology services on a scalable, public cloud and investigate the business case for crowdsourcing ontology maintenance.
We deployed the system on Amazon's Elastic Compute Cloud (EC2), where ESIP maintains an account. Our approach had three phases: 1) set up a private cloud environment at the University of South Carolina to become familiar with the complex architecture of the system and enable some basic customization, 2) coordinate the production of a Virtual Appliance for the system with NCBO and deploy it on the Amazon cloud, and 3) reach out to the ESIP community to solicit participation, populate the repository, and develop new use cases. Phase 2 is nearing completion and Phase 3 is underway. Ontologies were gathered during updates to the ESIP cluster. Discussion points included the criteria for a shareable ontology and how to determine the best size for an ontology to be reusable. Outreach highlighted that the system can start addressing an integration of discovery frameworks via linking data and services in a pull model (data and service casting), a key issue of the Discovery cluster. This work thus presents several contributions: 1) technology injection from another domain into the earth sciences, 2) the deployment of a mature knowledge platform on the EC2 cloud, and 3) the successful engagement of the community through the ESIP clusters and Testbed model.

  10. Digital Workflows for a 3d Semantic Representation of AN Ancient Mining Landscape

    NASA Astrophysics Data System (ADS)

    Hiebel, G.; Hanke, K.

    2017-08-01

    The ancient mining landscape of Schwaz/Brixlegg in the Tyrol, Austria, witnessed mining from prehistoric to modern times, creating a first-order cultural landscape associated with one of the most important inventions in human history: the production of metal. In 1991 a part of this landscape was lost due to an enormous landslide that reshaped part of the mountain. With our work we propose a digital workflow to create a 3D semantic representation of this ancient mining landscape with its mining structures, in order to preserve it for posterity. First, we define a conceptual model to integrate the data. It is based on the CIDOC CRM ontology and CRMgeo for geometric data. To transform our information sources into a formal representation of the classes and properties of the ontology, we applied Semantic Web technologies and created a knowledge graph in RDF (Resource Description Framework). Through the CRMgeo extension, coordinate information of mining features can be integrated into the RDF graph and thus related to the detailed digital elevation model, which may be visualized together with the mining structures using geoinformation systems or 3D visualization tools. The RDF network of the triple store can be queried using the SPARQL query language. We created a snapshot of mining, settlement, and burial sites in the Bronze Age. The results of the query were loaded into a geoinformation system, and a visualization of known Bronze Age sites related to mining, settlement, and burial activities was created.
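
    The queried knowledge graph can be miniaturized as follows. The site identifiers and coordinates are invented, and the CIDOC CRM / CRMgeo property names are informal abbreviations; the real workflow runs SPARQL against a triple store rather than matching Python tuples.

```python
# Tiny RDF-like triple graph sketching the paper's query pattern.
# Assumption: sites, coordinates, and property labels are placeholders.

TRIPLES = [
    ("site:A", "rdf:type", "crm:E27_Site"),
    ("site:A", "crm:P2_has_type", "mining"),
    ("site:A", "crmgeo:hasCoords", (47.44, 11.88)),
    ("site:B", "rdf:type", "crm:E27_Site"),
    ("site:B", "crm:P2_has_type", "burial"),
]

def match(s=None, p=None, o=None):
    """SPARQL-style triple pattern: None acts as a variable."""
    return [t for t in TRIPLES
            if (s is None or t[0] == s)
            and (p is None or t[1] == p)
            and (o is None or t[2] == o)]

# "All mining sites" as a one-pattern query.
mining_sites = [s for s, _, _ in match(p="crm:P2_has_type", o="mining")]
print(mining_sites)  # ['site:A']
```

    Joining such patterns on shared subjects is exactly what a SPARQL basic graph pattern does; the coordinate triples are what let query results be dropped onto the digital elevation model in a GIS.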

  11. The First Organ-Based Ontology for Arthropods (Ontology of Arthropod Circulatory Systems - OArCS) and its Integration into a Novel Formalization Scheme for Morphological Descriptions.

    PubMed

    Wirkner, Christian S; Göpel, Torben; Runge, Jens; Keiler, Jonas; Klussmann-Fricke, Bastian-Jesper; Huckstorf, Katarina; Scholz, Stephan; Mikó, István; J Yoder, Matthew; Richter, Stefan

    2017-09-01

    Morphology, the oldest discipline in the biosciences, is currently experiencing a renaissance in the field of comparative phenomics. However, morphological/phenotypic research still suffers on various levels from a lack of standards. This shortcoming, first highlighted as the "linguistic problem of morphology", concerns the usage of terminology and also the need for formalization of morphological descriptions themselves, something of paramount importance not only to the field of morphology but also when it comes to the use of phenotypic data in systematics and evolutionary biology. We therefore argue, that for morphological descriptions, the basis of all systematic and evolutionary interpretations, ontologies need to be utilized which are based exclusively on structural qualities/properties and which in no case include statements about homology and/or function. Statements about homology and function constitute interpretations on a different or higher level. Based on these "anatomy ontologies", further ontological dimensions (e.g., referring to functional properties or homology) may be exerted for a broad use in evolutionary phenomics. To this end we present the first organ-based ontology for the most species-rich animal group, the Arthropoda. Our Ontology of Arthropod Circulatory Systems (OArCS) contains a comprehensive collection of 383 terms (i.e., labels) tied to 296 concepts (i.e., definitions) collected from the literature on phenotypic aspects of circulatory organ features in arthropods. All of the concepts used in OArCS are based exclusively on structural features, and in the context of the ontology are independent of homology and functional assumptions. We cannot rule out that in some cases, terms are used which in traditional usage and previous accounts might have implied homology and/or function (e.g. heart, sternal artery). 
Concepts are composed of descriptive elements that are used to classify observed instances into the organizational framework of the ontology. That is, descriptions in ontologies are only descriptions of individuals if they are necessary and/or sufficient representations of attributes (independently) observed and recorded for an individual. In addition, we here present for the first time an entirely new approach to formalizing phenotypic research, a semantic model for the description of a complex organ system in a highly disparate taxon, the arthropods. We demonstrate this with a formalized morphological description of the hemolymph vascular system in one specimen of the European garden spider Araneus diadematus. Our description targets five categories of descriptive statement: "position", "spatial relationships", "shape", "constituents", and "connections", as the corresponding formalizations constitute exemplary patterns useful not only when talking about the circulatory system, but also in descriptions in general. The downstream applications of computer-parsable morphological descriptions are widespread, their core utility being the fact that they make it possible to compare collective description sets in computational time, that is, very quickly. Among other things, this facilitates the identification of phenotypic plasticity and variation when single individuals are compared, the identification of traits that correlate between and within taxa, and the identification of links between morphological traits and genetic (using GO, Gene Ontology) or environmental (using ENVO, Environmental Ontology) factors. [Arthropoda; concept; function; hemolymph vascular system; homology; terminology.]. © The Author(s) 2017. Published by Oxford University Press, on behalf of the Society of Systematic Biologists. All rights reserved. For Permissions, please email: journals.permissions@oup.com.

  12. The Study on Collaborative Manufacturing Platform Based on Agent

    NASA Astrophysics Data System (ADS)

    Zhang, Xiao-yan; Qu, Zheng-geng

    To address the knowledge-intensive trends in collaborative manufacturing development, we describe a multi-agent architecture supporting a knowledge-based collaborative manufacturing development platform. By virtue of the wrapper services and communication capabilities that agents provide, the proposed architecture facilitates the organization and collaboration of multi-disciplinary individuals and tools. By effectively supporting the formal representation, capture, retrieval, and reuse of manufacturing knowledge, a generalized knowledge repository based on an ontology library enables engineers to meaningfully exchange information and pass knowledge across boundaries. Intelligent agent technology increases the efficiency and interoperability of traditional KBE systems and provides comprehensive design environments for engineers.

  13. Representing Human Expertise by the OWL Web Ontology Language to Support Knowledge Engineering in Decision Support Systems.

    PubMed

    Ramzan, Asia; Wang, Hai; Buckingham, Christopher

    2014-01-01

    Clinical decision support systems (CDSSs) often base their knowledge and advice on human expertise. Knowledge representation needs to be in a format that can be easily understood by human users as well as supporting ongoing knowledge engineering, including evolution and consistency of knowledge. This paper reports on the development of an ontology specification for managing knowledge engineering in a CDSS for assessing and managing risks associated with mental-health problems. The Galatean Risk and Safety Tool, GRiST, represents mental-health expertise in the form of a psychological model of classification. The hierarchical structure was directly represented in the machine using an XML document. Functionality of the model and knowledge management were controlled using attributes in the XML nodes, with an accompanying paper manual for specifying how end-user tools should behave when interfacing with the XML. This paper explains the advantages of using the web-ontology language, OWL, as the specification, details some of the issues and problems encountered in translating the psychological model to OWL, and shows how OWL benefits knowledge engineering. The conclusions are that OWL can have an important role in managing complex knowledge domains for systems based on human expertise without impeding the end-users' understanding of the knowledge base. The generic classification model underpinning GRiST makes it applicable to many decision domains and the accompanying OWL specification facilitates its implementation.
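
    The XML-to-OWL translation discussed above can be illustrated schematically: walk an XML concept hierarchy and emit one subclass axiom per parent-child pair. The node names and the functional-style OWL syntax below are invented for illustration and are not GRiST's actual knowledge base.

```python
# Sketch of lifting an XML concept hierarchy into OWL subclass axioms.
# Assumption: node names and the tiny document are hypothetical.

import xml.etree.ElementTree as ET

XML = """
<concept name="risk">
  <concept name="suicide">
    <concept name="ideation"/>
  </concept>
</concept>
"""

def subclass_axioms(elem, parent=None, out=None):
    """Depth-first walk emitting OWL functional-style SubClassOf axioms."""
    out = [] if out is None else out
    name = elem.get("name")
    if parent:
        out.append(f"SubClassOf({name} {parent})")
    for child in elem:
        subclass_axioms(child, name, out)
    return out

for axiom in subclass_axioms(ET.fromstring(XML)):
    print(axiom)
# SubClassOf(suicide risk)
# SubClassOf(ideation suicide)
```

    Once the hierarchy is expressed as axioms rather than XML nesting, an OWL reasoner can check consistency and infer implied classifications, which is the knowledge-engineering benefit the paper argues for.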

  14. Development of an Ontology to Model Medical Errors, Information Needs, and the Clinical Communication Space

    PubMed Central

    Stetson, Peter D.; McKnight, Lawrence K.; Bakken, Suzanne; Curran, Christine; Kubose, Tate T.; Cimino, James J.

    2002-01-01

    Medical errors are common, costly and often preventable. Work in understanding the proximal causes of medical errors demonstrates that systems failures predispose to adverse clinical events. Most of these systems failures are due to lack of appropriate information at the appropriate time during the course of clinical care. Problems with clinical communication are common proximal causes of medical errors. We have begun a project designed to measure the impact of wireless computing on medical errors. We report here on our efforts to develop an ontology representing the intersection of medical errors, information needs and the communication space. We will use this ontology to support the collection, storage and interpretation of project data. The ontology’s formal representation of the concepts in this novel domain will help guide the rational deployment of our informatics interventions. A real-life scenario is evaluated using the ontology in order to demonstrate its utility.

  15. Inferring the semantic relationships of words within an ontology using random indexing: applications to pharmacogenomics.

    PubMed

    Percha, Bethany; Altman, Russ B

    2013-01-01

    The biomedical literature presents a uniquely challenging text mining problem. Sentences are long and complex, the subject matter is highly specialized with a distinct vocabulary, and producing annotated training data for this domain is time consuming and expensive. In this environment, unsupervised text mining methods that do not rely on annotated training data are valuable. Here we investigate the use of random indexing, an automated method for producing vector-space semantic representations of words from large, unlabeled corpora, to address the problem of term normalization in sentences describing drugs and genes. We show that random indexing produces similarity scores that capture some of the structure of PHARE, a manually curated ontology of pharmacogenomics concepts. We further show that random indexing can be used to identify likely word candidates for inclusion in the ontology, and can help localize these new labels among classes and roles within the ontology.

  16. Inferring the semantic relationships of words within an ontology using random indexing: applications to pharmacogenomics

    PubMed Central

    Percha, Bethany; Altman, Russ B.

    2013-01-01

    The biomedical literature presents a uniquely challenging text mining problem. Sentences are long and complex, the subject matter is highly specialized with a distinct vocabulary, and producing annotated training data for this domain is time consuming and expensive. In this environment, unsupervised text mining methods that do not rely on annotated training data are valuable. Here we investigate the use of random indexing, an automated method for producing vector-space semantic representations of words from large, unlabeled corpora, to address the problem of term normalization in sentences describing drugs and genes. We show that random indexing produces similarity scores that capture some of the structure of PHARE, a manually curated ontology of pharmacogenomics concepts. We further show that random indexing can be used to identify likely word candidates for inclusion in the ontology, and can help localize these new labels among classes and roles within the ontology. PMID:24551397

  17. Computable visually observed phenotype ontological framework for plants

    PubMed Central

    2011-01-01

    Background The ability to search for and precisely compare similar phenotypic appearances within and across species has vast potential in plant science and genetic research. The difficulty in doing so lies in the fact that many visual phenotypic data, especially visually observed phenotypes that often cannot be directly measured quantitatively, are in the form of text annotations, and these descriptions are plagued by semantic ambiguity, heterogeneity, and low granularity. Though several bio-ontologies have been developed to standardize phenotypic (and genotypic) information and permit comparisons across species, these semantic issues persist and prevent precise analysis and retrieval of information. A framework suitable for the modeling and analysis of precise computable representations of such phenotypic appearances is needed. Results We have developed a new framework called the Computable Visually Observed Phenotype Ontological Framework for plants. This work provides a novel quantitative view of descriptions of plant phenotypes that leverages existing bio-ontologies and utilizes a computational approach to capture and represent domain knowledge in a machine-interpretable form. This is accomplished by means of a robust and accurate semantic mapping module that automatically maps high-level semantics to low-level measurements computed from phenotype imagery. The framework was applied to two different plant species, with semantic rules mined and an ontology constructed. Rule quality was evaluated and showed high-quality rules for most semantics. This framework also facilitates automatic annotation of phenotype images and can be adopted by different plant communities to aid in their research. Conclusions The Computable Visually Observed Phenotype Ontological Framework for plants has been developed for more efficient and accurate management of visually observed phenotypes, which play a significant role in plant genomics research.
The uniqueness of this framework is its ability to bridge the knowledge of informaticians and plant science researchers by translating descriptions of visually observed phenotypes into standardized, machine-understandable representations, thus enabling the development of advanced information retrieval and phenotype annotation analysis tools for the plant science community. PMID:21702966
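
    The semantic mapping module, as described, learns rules that translate low-level image measurements into high-level semantic labels. A minimal sketch of such a rule table follows; the trait names and thresholds are invented purely for illustration, not mined rules from the paper.

```python
# Hypothetical mined rules: (trait, predicate over measurements, semantic label).
RULES = [
    ("leaf_shape",  lambda m: m["length"] / m["width"] > 3.0,  "linear"),
    ("leaf_shape",  lambda m: m["length"] / m["width"] <= 3.0, "ovate"),
    ("leaf_margin", lambda m: m["edge_roughness"] > 0.2,       "serrate"),
    ("leaf_margin", lambda m: m["edge_roughness"] <= 0.2,      "entire"),
]

def annotate(measurements):
    """Map low-level image measurements to high-level semantic terms."""
    terms = {}
    for trait, pred, label in RULES:
        if trait not in terms and pred(measurements):
            terms[trait] = label
    return terms

m = {"length": 12.0, "width": 2.5, "edge_roughness": 0.05}
print(annotate(m))  # {'leaf_shape': 'linear', 'leaf_margin': 'entire'}
```

    In the actual framework the labels would be terms from standard bio-ontologies, giving the annotations the machine-interpretable grounding the abstract describes.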

  18. Method of transition from 3D model to its ontological representation in aircraft design process

    NASA Astrophysics Data System (ADS)

    Govorkov, A. S.; Zhilyaev, A. S.; Fokin, I. V.

    2018-05-01

    This paper proposes a method for the transition from a 3D model to its ontological representation and describes its usage in the aircraft design process. The problems of design for manufacturability and design automation are also discussed. The introduced method aims to ease data exchange between important aircraft design phases, namely engineering and design control, and to increase design speed and 3D model customizability. This requires careful selection of the complex systems (CAD/CAM/CAE/PDM) that provide the basis for integrating design with the technological preparation of production and for taking fuller account of the characteristics of products and the processes for their manufacture. Solving this problem is important, as investment in automation defines a company's competitiveness in the years ahead.

  19. Ontological knowledge engine and health screening data enabled ubiquitous personalized physical fitness (UFIT).

    PubMed

    Su, Chuan-Jun; Chiang, Chang-Yu; Chih, Meng-Chun

    2014-03-07

    Good physical fitness generally makes the body less prone to common diseases. A personalized exercise plan promotes a balanced approach to fitness, while inappropriate forms of exercise can have adverse consequences for health. This paper aims to develop an ontology-driven knowledge-based system for generating custom-designed exercise plans based on a user's profile and health status, incorporating international-standard Health Level Seven International (HL7) data on physical fitness and health screening. The generated plans are exposed through Representational State Transfer (REST)-style web services, which can be accessed from any Internet-enabled device and deployed in cloud computing environments. To ensure the practicality of the generated exercise plans, the encapsulated knowledge used as a basis for inference in the system is acquired from domain experts. The proposed Ubiquitous Exercise Plan Generation for Personalized Physical Fitness (UFIT) will not only improve health-related fitness through generating personalized exercise plans, but also aid users in avoiding inappropriate workouts.
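
    The abstract does not detail the inference rules, but the knowledge-based core can be pictured as profile-driven rule evaluation. The toy sketch below uses invented rules, not the system's expert-elicited knowledge base.

```python
def generate_plan(profile):
    """Rule-based plan generation. The rules here are invented for
    illustration; the real system infers over an expert-built ontology."""
    plan = []
    if profile.get("bmi", 0) >= 30:
        plan.append(("low-impact aerobic", "walking or swimming, 30 min, 5x/week"))
    else:
        plan.append(("aerobic", "jogging, 30 min, 3x/week"))
    if profile.get("hypertension"):
        plan.append(("contraindication", "avoid heavy isometric/resistance training"))
    else:
        plan.append(("resistance", "moderate weight training, 2x/week"))
    return plan

plan = generate_plan({"bmi": 32, "hypertension": True})
```

    A service wrapper would expose `generate_plan` behind a REST endpoint so any Internet-enabled device could request a plan, matching the deployment style the abstract describes.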

  20. Ontological Knowledge Engine and Health Screening Data Enabled Ubiquitous Personalized Physical Fitness (UFIT)

    PubMed Central

    Su, Chuan-Jun; Chiang, Chang-Yu; Chih, Meng-Chun

    2014-01-01

    Good physical fitness generally makes the body less prone to common diseases. A personalized exercise plan promotes a balanced approach to fitness, while inappropriate forms of exercise can have adverse consequences for health. This paper aims to develop an ontology-driven knowledge-based system for generating custom-designed exercise plans based on a user's profile and health status, incorporating international-standard Health Level Seven International (HL7) data on physical fitness and health screening. The generated plans are exposed through Representational State Transfer (REST)-style web services, which can be accessed from any Internet-enabled device and deployed in cloud computing environments. To ensure the practicality of the generated exercise plans, the encapsulated knowledge used as a basis for inference in the system is acquired from domain experts. The proposed Ubiquitous Exercise Plan Generation for Personalized Physical Fitness (UFIT) will not only improve health-related fitness through generating personalized exercise plans, but also aid users in avoiding inappropriate workouts. PMID:24608002

  1. A web-based data-querying tool based on ontology-driven methodology and flowchart-based model.

    PubMed

    Ping, Xiao-Ou; Chung, Yufang; Tseng, Yi-Ju; Liang, Ja-Der; Yang, Pei-Ming; Huang, Guan-Tarn; Lai, Feipei

    2013-10-08

    Because of the increased adoption rate of electronic medical record (EMR) systems, more health care records have been increasingly accumulating in clinical data repositories. Therefore, querying the data stored in these repositories is crucial for retrieving the knowledge from such large volumes of clinical data. The aim of this study is to develop a Web-based approach for enriching the capabilities of the data-querying system along the three following considerations: (1) the interface design used for query formulation, (2) the representation of query results, and (3) the models used for formulating query criteria. The Guideline Interchange Format version 3.5 (GLIF3.5), an ontology-driven clinical guideline representation language, was used for formulating the query tasks based on the GLIF3.5 flowchart in the Protégé environment. The flowchart-based data-querying model (FBDQM) query execution engine was developed and implemented for executing queries and presenting the results through a visual and graphical interface. To examine a broad variety of patient data, the clinical data generator was implemented to automatically generate the clinical data in the repository, and the generated data, thereby, were employed to evaluate the system. The accuracy and time performance of the system for three medical query tasks relevant to liver cancer were evaluated based on the clinical data generator in the experiments with varying numbers of patients. In this study, a prototype system was developed to test the feasibility of applying a methodology for building a query execution engine using FBDQMs by formulating query tasks using the existing GLIF. The FBDQM-based query execution engine was used to successfully retrieve the clinical data based on the query tasks formatted using the GLIF3.5 in the experiments with varying numbers of patients. 
The accuracy of the three queries (ie, "degree of liver damage," "degree of liver damage when applying a mutually exclusive setting," and "treatments for liver cancer") was 100% for all four experiments (10 patients, 100 patients, 1000 patients, and 10,000 patients). Among the three measured query phases, (1) structured query language operations, (2) criteria verification, and (3) other, the first two had the longest execution time. The ontology-driven FBDQM-based approach enriched the capabilities of the data-querying system. The adoption of the GLIF3.5 increased the potential for interoperability, shareability, and reusability of the query tasks.

  2. Standard model of knowledge representation

    NASA Astrophysics Data System (ADS)

    Yin, Wensheng

    2016-09-01

    Knowledge representation is the core of artificial intelligence research. Knowledge representation methods include predicate logic, semantic networks, computer programming languages, databases, mathematical models, graphics languages, natural language, etc. To establish the intrinsic link between these various knowledge representation methods, a unified knowledge representation model is necessary. According to ontology, system theory, and control theory, a standard model of knowledge representation that reflects the change of the objective world is proposed. The model is composed of input, processing, and output. This knowledge representation method does not contradict traditional knowledge representation methods. It can express knowledge in multivariate and multidimensional terms, and it can also express process knowledge; at the same time, it has a strong ability to solve problems. In addition, the standard model of knowledge representation provides a way to deal with imprecise and inconsistent knowledge.

  3. Disease Ontology 2015 update: an expanded and updated database of human diseases for linking biomedical knowledge through disease data

    PubMed Central

    Kibbe, Warren A.; Arze, Cesar; Felix, Victor; Mitraka, Elvira; Bolton, Evan; Fu, Gang; Mungall, Christopher J.; Binder, Janos X.; Malone, James; Vasant, Drashtti; Parkinson, Helen; Schriml, Lynn M.

    2015-01-01

    The current version of the Human Disease Ontology (DO) (http://www.disease-ontology.org) database expands the utility of the ontology for the examination and comparison of genetic variation, phenotype, protein, drug and epitope data through the lens of human disease. DO is a biomedical resource of standardized common and rare disease concepts with stable identifiers organized by disease etiology. The content of DO has had 192 revisions since 2012, including the addition of 760 terms. Thirty-two percent of all terms now include definitions. DO has expanded the number and diversity of research communities and community members by 50+ during the past two years. These community members actively submit term requests, coordinate biomedical resource disease representation and provide expert curation guidance. Since the DO 2012 NAR paper, there have been hundreds of term requests and a steady increase in the number of DO listserv members, twitter followers and DO website usage. DO is moving to a multi-editor model utilizing Protégé to curate DO in web ontology language. This will enable closer collaboration with the Human Phenotype Ontology, EBI's Ontology Working Group, Mouse Genome Informatics and the Monarch Initiative among others, and enhance DO's current asserted view and multiple inferred views through reasoning. PMID:25348409

  4. Application of neuroanatomical ontologies for neuroimaging data annotation.

    PubMed

    Turner, Jessica A; Mejino, Jose L V; Brinkley, James F; Detwiler, Landon T; Lee, Hyo Jong; Martone, Maryann E; Rubin, Daniel L

    2010-01-01

    The annotation of functional neuroimaging results for data sharing and re-use is particularly challenging, due to the diversity of terminologies of neuroanatomical structures and cortical parcellation schemes. To address this challenge, we extended the Foundational Model of Anatomy Ontology (FMA) to include cytoarchitectural (Brodmann area) labels and a morphological cortical labeling scheme (e.g., the part of Brodmann area 6 in the left precentral gyrus). This representation was also used to augment the neuroanatomical axis of RadLex, the ontology for clinical imaging. The resulting neuroanatomical ontology contains explicit relationships indicating which brain regions are "part of" which other regions, across cytoarchitectural and morphological labeling schemas. We annotated a large functional neuroimaging dataset with terms from the ontology and applied a reasoning engine to analyze this dataset in conjunction with the ontology, achieving successful inferences from the most specific level (e.g., how many subjects showed activation in a subpart of the middle frontal gyrus?) to the more general (e.g., how many activations were found in areas connected via a known white matter tract?). In summary, we have produced a neuroanatomical ontology that harmonizes several different terminologies of neuroanatomical structures and cortical parcellation schemes. This neuroanatomical ontology is publicly available as a view of FMA at the BioPortal website. The ontological encoding of anatomic knowledge can be exploited by computer reasoning engines to make inferences about neuroanatomical relationships described in imaging datasets using different terminologies. This approach could ultimately enable knowledge discovery from large, distributed fMRI studies or medical record mining.
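
    The key inference in such an ontology is transitive reasoning over "part of": an activation annotated at a specific region also counts for every region that region is part of. A minimal sketch, using an invented fragment of the hierarchy rather than the actual FMA content:

```python
PART_OF = {  # child -> parent; an illustrative fragment, not the real FMA
    "BA6 in left precentral gyrus": "left precentral gyrus",
    "left precentral gyrus": "left frontal lobe",
    "left middle frontal gyrus": "left frontal lobe",
    "left frontal lobe": "left cerebral hemisphere",
}

def ancestors(region):
    """All regions that `region` is (transitively) part of."""
    out = []
    while region in PART_OF:
        region = PART_OF[region]
        out.append(region)
    return out

def activations_in(region, annotations):
    """Count annotations located in `region` or in any part of it."""
    return sum(1 for a in annotations if a == region or region in ancestors(a))

annotations = ["BA6 in left precentral gyrus", "left middle frontal gyrus"]
print(activations_in("left frontal lobe", annotations))  # 2
```

    A description-logic reasoner over the OWL ontology performs the same closure, but across thousands of regions and several labeling schemes at once.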

  5. Transformation of standardized clinical models based on OWL technologies: from CEM to OpenEHR archetypes.

    PubMed

    Legaz-García, María del Carmen; Menárguez-Tortosa, Marcos; Fernández-Breis, Jesualdo Tomás; Chute, Christopher G; Tao, Cui

    2015-05-01

    The semantic interoperability of electronic healthcare records (EHRs) systems is a major challenge in the medical informatics area. International initiatives pursue the use of semantically interoperable clinical models, and ontologies have frequently been used in semantic interoperability efforts. The objective of this paper is to propose a generic, ontology-based, flexible approach for supporting the automatic transformation of clinical models, which is illustrated for the transformation of Clinical Element Models (CEMs) into openEHR archetypes. Our transformation method exploits the fact that the information models of the most relevant EHR specifications are available in the Web Ontology Language (OWL). The transformation approach is based on defining mappings between those ontological structures. We propose a way in which CEM entities can be transformed into openEHR by using transformation templates and OWL as common representation formalism. The transformation architecture exploits the reasoning and inferencing capabilities of OWL technologies. We have devised a generic, flexible approach for the transformation of clinical models, implemented for the unidirectional transformation from CEM to openEHR, a series of reusable transformation templates, a proof-of-concept implementation, and a set of openEHR archetypes that validate the methodological approach. We have been able to transform CEM into archetypes in an automatic, flexible, reusable transformation approach that could be extended to other clinical model specifications. We exploit the potential of OWL technologies for supporting the transformation process. We believe that our approach could be useful for international efforts in the area of semantic interoperability of EHR systems. © The Author 2015. Published by Oxford University Press on behalf of the American Medical Informatics Association. All rights reserved. For Permissions, please email: journals.permissions@oup.com.
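
    The transformation-template idea can be sketched as a lookup that rewrites instances of one model into skeleton nodes of the other. The CEM type name, path, and field names below are invented for illustration; only the archetype identifier follows the openEHR naming style, and the real system works on OWL representations with reasoner support rather than plain dictionaries.

```python
# Hypothetical transformation templates: CEM entity type -> openEHR skeleton.
TEMPLATES = {
    "CEM:SystolicBloodPressureMeas": {
        "archetype_id": "openEHR-EHR-OBSERVATION.blood_pressure.v1",
        "path": "/data/events/systolic",
    },
}

def transform(cem_instance):
    """Rewrite a CEM instance into an openEHR-style node via its template."""
    t = TEMPLATES[cem_instance["type"]]
    return {
        "archetype_id": t["archetype_id"],
        "path": t["path"],
        "value": cem_instance["value"],
        "units": cem_instance["units"],
    }

node = transform({"type": "CEM:SystolicBloodPressureMeas",
                  "value": 120, "units": "mm[Hg]"})
```

    Because the templates are data rather than code, adding support for another clinical model specification means authoring new templates, not rewriting the engine, which is the reusability the abstract emphasizes.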

  6. Protein classification using probabilistic chain graphs and the Gene Ontology structure.

    PubMed

    Carroll, Steven; Pavlovic, Vladimir

    2006-08-01

    Probabilistic graphical models have been developed in the past for the task of protein classification. In many cases, classifications obtained from the Gene Ontology have been used to validate these models. In this work we directly incorporate the structure of the Gene Ontology into the graphical representation for protein classification. We present a method in which each protein is represented by a replicate of the Gene Ontology structure, effectively modeling each protein in its own 'annotation space'. Proteins are also connected to one another according to different measures of functional similarity, after which belief propagation is run to make predictions at all ontology terms. The proposed method was evaluated on a set of 4879 proteins from the Saccharomyces Genome Database whose interactions were also recorded in the GRID project. Results indicate that direct utilization of the Gene Ontology improves predictive ability, outperforming traditional models that do not take advantage of dependencies among functional terms. Average increase in accuracy (precision) of positive and negative term predictions of 27.8% (2.0%) over three different similarity measures and three subontologies was observed. C/C++/Perl implementation is available from authors upon request.
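
    A drastically simplified stand-in for the paper's belief propagation is a similarity-weighted vote over neighbours' annotations, with the true-path rule (a term implies all of its ancestors) applied first. The protein names are invented; the two GO is_a links are a tiny fragment of the real hierarchy.

```python
GO_PARENT = {  # child -> parent; a tiny fragment of the GO is_a hierarchy
    "GO:0016740": "GO:0003824",  # transferase activity -> catalytic activity
    "GO:0003824": "GO:0003674",  # catalytic activity -> molecular_function
}

def with_ancestors(terms):
    """True-path rule: annotating a term implies all of its ancestors."""
    out = set(terms)
    for t in terms:
        while t in GO_PARENT:
            t = GO_PARENT[t]
            out.add(t)
    return out

def predict(protein, neighbors, annotations, threshold=0.5):
    """Similarity-weighted vote over neighbours' ancestor-closed annotations
    (a simplified stand-in for the paper's belief propagation)."""
    votes = {}
    total = sum(w for _, w in neighbors[protein])
    for nb, w in neighbors[protein]:
        for term in with_ancestors(annotations[nb]):
            votes[term] = votes.get(term, 0.0) + w
    return {t for t, v in votes.items() if v / total >= threshold}

annotations = {"proteinA": {"GO:0016740"}, "proteinB": {"GO:0016740"}}
neighbors = {"query": [("proteinA", 0.9), ("proteinB", 0.7)]}
pred = predict("query", neighbors, annotations)
```

    The paper's model goes further by running inference jointly over every protein's own copy of the GO structure, so predictions at one term constrain predictions at related terms.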

  7. Nuclear Nonproliferation Ontology Assessment Team Final Report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Strasburg, Jana D.; Hohimer, Ryan E.

    Final Report for the NA22 Simulations, Algorithm and Modeling (SAM) Ontology Assessment Team's efforts from FY09-FY11. The Ontology Assessment Team began in May 2009 and concluded in September 2011. During this two-year time frame, the Ontology Assessment team had two objectives: (1) assessing the utility of knowledge representation and semantic technologies for addressing nuclear nonproliferation challenges; and (2) developing ontological support tools that would provide a framework for integrating across the Simulation, Algorithm and Modeling (SAM) program. The SAM Program was going through a large assessment and strategic planning effort during this time and, as a result, the relative importance of these two objectives changed, altering the focus of the Ontology Assessment Team. In the end, the team conducted an assessment of the state of the art, created an annotated bibliography, and developed a series of ontological support tools, demonstrations and presentations. A total of more than 35 individuals from 12 different research institutions participated in the Ontology Assessment Team. These included subject matter experts in several nuclear nonproliferation-related domains as well as experts in semantic technologies. Despite the diverse backgrounds and perspectives, the Ontology Assessment team functioned very well together, and aspects could serve as a model for future inter-laboratory collaborations and working groups. While the team encountered several challenges and learned many lessons along the way, the Ontology Assessment effort was ultimately a success that led to several multi-lab research projects and opened up a new area of scientific exploration within the Office of Nuclear Nonproliferation and Verification.

  8. An ontological model of the practice transformation process.

    PubMed

    Sen, Arun; Sinha, Atish P

    2016-06-01

    Patient-centered medical home is defined as an approach for providing comprehensive primary care that facilitates partnerships between individual patients and their personal providers. The current state of the practice transformation process is ad hoc and no methodological basis exists for transforming a practice into a patient-centered medical home. Practices and hospitals somehow accomplish the transformation and send the transformation information to a certification agency, such as the National Committee for Quality Assurance, completely ignoring the development and maintenance of the processes that keep the medical home concept alive. Many recent studies point out that such a transformation is hard as it requires an ambitious whole-practice reengineering and redesign. As a result, the practices suffer change fatigue in getting the transformation done. In this paper, we focus on the complexities of the practice transformation process and present a robust ontological model for practice transformation. The objective of the model is to create an understanding of the practice transformation process in terms of key process areas and their activities. We describe how our ontology captures the knowledge of the practice transformation process, elicited from domain experts, and also discuss how, in the future, that knowledge could be diffused across stakeholders in a healthcare organization. Our research is the first effort in practice transformation process modeling. To build an ontological model for practice transformation, we adopt the Methontology approach. Based on the literature, we first identify the key process areas essential for a practice transformation process to achieve certification status. Next, we develop the practice transformation ontology by creating key activities and precedence relationships among the key process areas using process maturity concepts. 
At each step, we employ a panel of domain experts to verify the intermediate representations of the ontology. Finally, we implement a prototype of the practice transformation ontology using Protégé. Copyright © 2016 Elsevier Inc. All rights reserved.

  9. The Apollo Structured Vocabulary: an OWL2 ontology of phenomena in infectious disease epidemiology and population biology for use in epidemic simulation.

    PubMed

    Hogan, William R; Wagner, Michael M; Brochhausen, Mathias; Levander, John; Brown, Shawn T; Millett, Nicholas; DePasse, Jay; Hanna, Josh

    2016-08-18

    We developed the Apollo Structured Vocabulary (Apollo-SV)-an OWL2 ontology of phenomena in infectious disease epidemiology and population biology-as part of a project whose goal is to increase the use of epidemic simulators in public health practice. Apollo-SV defines a terminology for use in simulator configuration. Apollo-SV is the product of an ontological analysis of the domain of infectious disease epidemiology, with particular attention to the inputs and outputs of nine simulators. Apollo-SV contains 802 classes for representing the inputs and outputs of simulators, of which approximately half are new and half are imported from existing ontologies. The most important Apollo-SV class for users of simulators is infectious disease scenario, which is a representation of an ecosystem at simulator time zero that has at least one infection process (a class) affecting at least one population (also a class). Other important classes represent ecosystem elements (e.g., households), ecosystem processes (e.g., infection acquisition and infectious disease), censuses of ecosystem elements (e.g., censuses of populations), and infectious disease control measures. In the larger project, which created an end-user application that can send the same infectious disease scenario to multiple simulators, Apollo-SV serves as the controlled terminology and strongly influences the design of the message syntax used to represent an infectious disease scenario. As we added simulators for different pathogens (e.g., malaria and dengue), the core classes of Apollo-SV have remained stable, suggesting that our conceptualization of the information required by simulators is sound. Despite adhering to the OBO Foundry principle of orthogonality, we could not reuse Infectious Disease Ontology classes as the basis for infectious disease scenarios. We thus defined new classes in Apollo-SV for host, pathogen, infection, infectious disease, colonization, and infection acquisition. 
Unlike IDO, our ontological analysis extended to existing mathematical models of key biological phenomena studied by infectious disease epidemiology and population biology. Our ontological analysis as expressed in Apollo-SV was instrumental in developing a simulator-independent representation of infectious disease scenarios that can be run on multiple epidemic simulators. Our experience suggests the importance of extending ontological analysis of a domain to include existing mathematical models of the phenomena studied by the domain. Apollo-SV is freely available at: http://purl.obolibrary.org/obo/apollo_sv.owl.

  10. Quantitative imaging biomarker ontology (QIBO) for knowledge representation of biomedical imaging biomarkers.

    PubMed

    Buckler, Andrew J; Liu, Tiffany Ting; Savig, Erica; Suzek, Baris E; Ouellette, M; Danagoulian, J; Wernsing, G; Rubin, Daniel L; Paik, David

    2013-08-01

    A widening array of novel imaging biomarkers is being developed using ever more powerful clinical and preclinical imaging modalities. These biomarkers have demonstrated effectiveness in quantifying biological processes as they occur in vivo and in the early prediction of therapeutic outcomes. However, quantitative imaging biomarker data and knowledge are not standardized, representing a critical barrier to accumulating medical knowledge based on quantitative imaging data. We use an ontology to represent, integrate, and harmonize heterogeneous knowledge across the domain of imaging biomarkers. This advances the goal of developing applications to (1) improve precision and recall of storage and retrieval of quantitative imaging-related data using standardized terminology; (2) streamline the discovery and development of novel imaging biomarkers by normalizing knowledge across heterogeneous resources; (3) effectively annotate imaging experiments thus aiding comprehension, re-use, and reproducibility; and (4) provide validation frameworks through rigorous specification as a basis for testable hypotheses and compliance tests. We have developed the Quantitative Imaging Biomarker Ontology (QIBO), which currently consists of 488 terms spanning the following upper classes: experimental subject, biological intervention, imaging agent, imaging instrument, image post-processing algorithm, biological target, indicated biology, and biomarker application. We have demonstrated that QIBO can be used to annotate imaging experiments with standardized terms in the ontology and to generate hypotheses for novel imaging biomarker-disease associations. Our results established the utility of QIBO in enabling integrated analysis of quantitative imaging data.

  11. Predicting activities of daily living for cancer patients using an ontology-guided machine learning methodology.

    PubMed

    Min, Hua; Mobahi, Hedyeh; Irvin, Katherine; Avramovic, Sanja; Wojtusiak, Janusz

    2017-09-16

    Bio-ontologies are becoming increasingly important in knowledge representation and in the machine learning (ML) fields. This paper presents a ML approach that incorporates bio-ontologies and its application to the SEER-MHOS dataset to discover patterns of patient characteristics that impact the ability to perform activities of daily living (ADLs). Bio-ontologies are used to provide computable knowledge for ML methods to "understand" biomedical data. This retrospective study included 723 cancer patients from the SEER-MHOS dataset. Two ML methods were applied to create predictive models for ADL disabilities for the first year after a patient's cancer diagnosis. The first method is a standard rule learning algorithm; the second is that same algorithm additionally equipped with methods for reasoning with ontologies. The models showed that a patient's race, ethnicity, smoking preference, treatment plan and tumor characteristics including histology, staging, cancer site, and morphology were predictors for ADL performance levels one year after cancer diagnosis. The ontology-guided ML method was more accurate at predicting ADL performance levels (P < 0.1) than methods without ontologies. This study demonstrated that bio-ontologies can be harnessed to provide medical knowledge for ML algorithms. The presented method demonstrates that encoding specific types of hierarchical relationships to guide rule learning is possible, and can be extended to other types of semantic relationships present in biomedical ontologies. The ontology-guided ML method achieved better performance than the method without ontologies. The presented method can also be used to promote the effectiveness and efficiency of ML in healthcare, in which use of background knowledge and consistency with existing clinical expertise is critical.
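
    The ontology-guided step can be pictured as letting the rule learner replace a specific attribute value with an ancestor from a hierarchy, so that one rule covers all sibling codes. The diagnosis hierarchy below is an illustrative fragment (ICD-10-style codes), not the actual SEER-MHOS encoding or the paper's algorithm.

```python
ICD_PARENT = {  # illustrative child -> parent fragment of a diagnosis hierarchy
    "C34.1": "C34",      # upper lobe, lung -> malignant neoplasm of lung
    "C34.3": "C34",      # lower lobe, lung -> malignant neoplasm of lung
    "C34":   "C00-C96",  # lung -> malignant neoplasms
}

def generalize(value, level=1):
    """Replace a specific code with an ancestor `level` steps up."""
    for _ in range(level):
        value = ICD_PARENT.get(value, value)
    return value

def covers(rule_value, example_value):
    """A rule written at an ancestor level matches any descendant code."""
    v = example_value
    while True:
        if v == rule_value:
            return True
        if v not in ICD_PARENT:
            return False
        v = ICD_PARENT[v]

# A rule learned on C34.1 cases, generalized one level, now covers C34.3 too:
rule = generalize("C34.1")    # "C34"
print(covers(rule, "C34.3"))  # True
```

    Encoding the hierarchy this way is what lets the learner trade specificity for coverage, which is one reason the ontology-guided models in the study outperformed the ontology-free baseline.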

  12. Extending Primitive Spatial Data Models to Include Semantics

    NASA Astrophysics Data System (ADS)

    Reitsma, F.; Batcheller, J.

    2009-04-01

    Our traditional geospatial data model involves associating some measurable quality, such as temperature, or observable feature, such as a tree, with a point or region in space and time. When capturing data we implicitly subscribe to some kind of conceptualisation. If we can make this explicit in an ontology and associate it with the captured data, we can leverage formal semantics to reason with the concepts represented in our spatial data sets. To do so, we extend our fundamental representation of geospatial data by including in the basic data model a URI that links it to the ontology defining our conceptualisation. We thus extend Goodchild et al.'s geo-atom [1] with the addition of a URI: (x, Z, z(x), URI). This provides us with pixel- or feature-level knowledge and the ability to create layers of data from a set of pixels or features drawn from a database based on their semantics. Using open source tools, we present a prototype that involves simple reasoning as a proof of concept. References [1] M.F. Goodchild, M. Yuan, and T.J. Cova. Towards a general theory of geographic representation in GIS. International Journal of Geographical Information Science, 21(3):239-260, 2007.
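
    The extended geo-atom (x, Z, z(x), URI) can be written down directly. The sketch below uses invented concept URIs to show how a "layer" can then be selected by ontology concept rather than by raw attribute string:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class GeoAtom:
    """Goodchild et al.'s geo-atom (x, Z, z(x)) extended with a URI
    linking the observation to the ontology concept that defines it."""
    x: tuple  # location, e.g. (lon, lat)
    Z: str    # property name
    z: object # value of Z at x
    uri: str  # defining concept in the ontology (invented URIs below)

atoms = [
    GeoAtom((5.1, 52.0), "landcover", "forest", "http://example.org/onto#Forest"),
    GeoAtom((5.2, 52.1), "landcover", "water",  "http://example.org/onto#WaterBody"),
    GeoAtom((5.3, 52.2), "landcover", "forest", "http://example.org/onto#Forest"),
]

# Build a layer by selecting atoms via their ontology concept:
forest_layer = [a for a in atoms if a.uri == "http://example.org/onto#Forest"]
```

    With the URI in place, a reasoner can also select atoms whose concepts are subclasses of a query concept, which is the simple reasoning the prototype demonstrates.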

  13. Galen: a third generation terminology tool to support a multipurpose national coding system for surgical procedures.

    PubMed

    Trombert-Paviot, B; Rodrigues, J M; Rogers, J E; Baud, R; van der Haring, E; Rassinoux, A M; Abrial, V; Clavel, L; Idir, H

    1999-01-01

    GALEN has developed a new generation of terminology tools based on a language-independent concept reference model using a compositional formalism that allows computer processing and multiple reuses. During the 4th Framework Programme project Galen-In-Use we applied the modelling and the tools to the development of a new multipurpose coding system for surgical procedures (CCAM) in France. On the one hand, we contributed to a language-independent knowledge repository for multicultural Europe. On the other hand, we support the traditional process for creating a new coding system in medicine, which is very labour-intensive, with artificial intelligence tools using a medically oriented recursive ontology and natural language processing. We used an integrated software suite named CLAW to process French professional medical language rubrics produced by the national colleges of surgeons into intermediate dissections and then into the GRAIL reference ontology model representation. From this language-independent concept model representation, on the one hand, we generate controlled French natural language to support the finalization of the linguistic labels in relation to the meanings of the conceptual system structure. On the other hand, the third-generation classification manager proves to be very powerful in retrieving the initial professional rubrics with different categories of concepts within a semantic network.

  14. Knowledge representation and management: towards an integration of a semantic web in daily health practice.

    PubMed

    Griffon, N; Charlet, J; Darmoni, Sj

    2013-01-01

    To summarize the best papers in the field of Knowledge Representation and Management (KRM). A synopsis of the four selected articles for the IMIA Yearbook 2013 KRM section is provided, as well as highlights of current KRM trends, in particular of the semantic web in daily health practice. The manual selection was performed in three stages: first a set of 3,106 articles, then a second set of 86 articles, followed by a third set of 15 articles, and finally the last set of four chosen articles. Among the four selected articles (see Table 1), one focuses on knowledge engineering to prevent adverse drug events; the objective of the second is to propose mappings between clinical archetypes and SNOMED CT in the context of clinical practice; the third presents an ontology to create a question-answering system; the fourth describes a biomonitoring network based on semantic web technologies. These four articles clearly indicate that the health semantic web has become a part of the daily practice of health professionals since 2012. In the review of the second set of 86 articles, the same topics included in the previous IMIA Yearbook remain active research fields: knowledge extraction, automatic indexing, information retrieval, natural language processing, and management of health terminologies and ontologies.

  15. The Semantic Web: From Representation to Realization

    NASA Astrophysics Data System (ADS)

    Thórisson, Kristinn R.; Spivack, Nova; Wissner, James M.

    A semantically-linked web of electronic information - the Semantic Web - promises numerous benefits including increased precision in automated information sorting, searching, organizing and summarizing. Realizing this requires significantly more reliable meta-information than is readily available today. It also requires a better way to represent information that supports unified management of diverse data and diverse manipulation methods: from basic keywords to various types of artificial intelligence, to the highest level of intelligent manipulation - the human mind. How this is best done is far from obvious. Relying solely on hand-crafted annotation and ontologies, or solely on artificial intelligence techniques, seems less likely to succeed than a combination of the two. In this paper we describe an integrated, complete solution to these challenges that has already been implemented and tested with hundreds of thousands of users. It is based on an ontological representational level we call SemCards that combines ontological rigour with flexible user interface constructs. SemCards are machine- and human-readable digital entities that allow non-experts to create and use semantic content, while empowering machines to better assist and participate in the process. SemCards enable users to easily create semantically-grounded data that in turn acts as examples for automation processes, creating a positive iterative feedback loop of metadata creation and refinement between user and machine. They provide a holistic solution to the Semantic Web, supporting powerful management of the full lifecycle of data, including its creation, retrieval, classification, sorting and sharing. We have implemented the SemCard technology on the Semantic Web site Twine.com, showing that the technology is indeed versatile and scalable. Here we present the key ideas behind SemCards and describe the initial implementation of the technology.

  16. Clinical Documents: Attribute-Values Entity Representation, Context, Page Layout And Communication

    PubMed Central

    Lovis, Christian; Lamb, Alexander; Baud, Robert; Rassinoux, Anne-Marie; Fabry, Paul; Geissbühler, Antoine

    2003-01-01

    This paper presents how acquisition, storage and communication of clinical documents are implemented at the University Hospitals of Geneva. Careful attention has been given to user interfaces, in order to support complex layouts, spell checking, and template management with automatic prefilling to facilitate acquisition. A dual architecture has been developed for storage, using an attribute-value entity unified database and a consolidated, patient-centered, layout-respectful file-based storage, providing both representational power and speed of access. This architecture allows great flexibility to store a continuum of data types, from simple typed values up to complex clinical reports. Finally, communication is entirely based on HTTP-XML internally, and an HL7 CDA V2 interface is currently being studied for external communication. Some of the problems encountered, mostly concerning the typology of documents and the ontology of clinical attributes, are discussed. PMID:14728202

  17. Sharing Epigraphic Information as Linked Data

    NASA Astrophysics Data System (ADS)

    Álvarez, Fernando-Luis; García-Barriocanal, Elena; Gómez-Pantoja, Joaquín-L.

    The diffusion of epigraphic data has evolved in recent years from printed catalogues to indexed digital databases shared through the Web. Recently, the open EpiDoc specifications have resulted in an XML-based schema for the interchange of ancient texts that uses XSLT to render typographic representations. However, these schemas and representation systems still do not provide a way to encode computational semantics and semantic relations between pieces of epigraphic data. This paper sketches an approach to bring these semantics into an EpiDoc-based schema using the Web Ontology Language (OWL) and following the principles and methods of information sharing known as "linked data". The paper describes the general principles of the OWL mapping of the EpiDoc schema and how epigraphic data can be shared in RDF format via dereferenceable URIs that can be used to build advanced search, visualization and analysis systems.
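    The linked-data publishing step the abstract describes can be illustrated with a tiny hand-rolled Turtle serializer: each inscription gets a dereferenceable URI and a handful of RDF triples. This is an illustrative sketch only; the namespace and property names are invented, not the paper's actual EpiDoc-to-OWL mapping.

```python
# Hypothetical base namespace for the published epigraphic linked data.
BASE = "http://example.org/epigraphy/"

def inscription_to_turtle(insc_id, text, found_at):
    """Serialize one inscription record as RDF triples in Turtle syntax."""
    uri = f"<{BASE}{insc_id}>"   # dereferenceable URI identifying the inscription
    return "\n".join([
        f"{uri} a <{BASE}ontology#Inscription> ;",
        f'    <{BASE}ontology#text> "{text}" ;',
        f"    <{BASE}ontology#foundAt> <{BASE}place/{found_at}> .",
    ])

ttl = inscription_to_turtle("CIL-II-1234", "IMP CAESARI", "complutum")
```

    Because the subject URI is dereferenceable, an RDF-aware client can follow it to retrieve exactly this description, which is the "linked data" contract the paper builds on.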

  18. Definition of an Ontology Matching Algorithm for Context Integration in Smart Cities

    PubMed Central

    Otero-Cerdeira, Lorena; Rodríguez-Martínez, Francisco J.; Gómez-Rodríguez, Alma

    2014-01-01

    In this paper we describe a novel proposal in the field of smart cities: using an ontology matching algorithm to guarantee the automatic information exchange between the agents and the smart city. A smart city is composed of different types of agents that behave as producers and/or consumers of the information in the smart city. In our proposal, the data from the context is obtained by sensor and device agents, while users interact with the smart city by means of user or system agents. The knowledge of each agent, as well as the smart city's knowledge, is semantically represented using different ontologies. To have an open city that is fully accessible to any agent, and therefore to provide enhanced services to the users, there is a need to ensure seamless communication between agents and the city, regardless of their inner knowledge representations, i.e., ontologies. To meet this goal we use ontology matching techniques; specifically, we have defined a new ontology matching algorithm called OntoPhil to be deployed within a smart city, which has never been done before. OntoPhil was tested on the benchmarks provided by the well-known evaluation initiative, the Ontology Alignment Evaluation Initiative, and also compared to other matching algorithms, although these algorithms were not specifically designed for smart cities. Additionally, specific tests involving a smart city's ontology and different types of agents were conducted to validate the usefulness of OntoPhil in the smart city environment. PMID:25494353
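    The core task OntoPhil addresses, aligning classes from two independently developed ontologies, can be sketched with a generic lexical matcher. The following is a minimal illustration using trigram (Jaccard) similarity of class labels; it is not the OntoPhil algorithm itself, and the class names are invented smart-city examples.

```python
def trigrams(label):
    """Character trigrams of a label, padded so short labels still match."""
    s = f"  {label.lower()} "
    return {s[i:i + 3] for i in range(len(s) - 2)}

def similarity(a, b):
    """Jaccard coefficient over trigram sets: 1.0 means identical labels."""
    ta, tb = trigrams(a), trigrams(b)
    return len(ta & tb) / len(ta | tb)

def match(classes_a, classes_b, threshold=0.4):
    """Align each class in A with its most similar class in B, if close enough."""
    alignments = []
    for ca in classes_a:
        best = max(classes_b, key=lambda cb: similarity(ca, cb))
        if similarity(ca, best) >= threshold:
            alignments.append((ca, best))
    return alignments

pairs = match(["TemperatureSensor", "StreetLight"],
              ["Temperature_Sensor", "Lamp", "Street_Light"])
```

    A real matcher such as OntoPhil combines lexical cues like these with structural evidence from the ontologies; the threshold here is an arbitrary illustrative choice.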

  19. Definition of an Ontology Matching Algorithm for Context Integration in Smart Cities.

    PubMed

    Otero-Cerdeira, Lorena; Rodríguez-Martínez, Francisco J; Gómez-Rodríguez, Alma

    2014-12-08

    In this paper we describe a novel proposal in the field of smart cities: using an ontology matching algorithm to guarantee the automatic information exchange between the agents and the smart city. A smart city is composed of different types of agents that behave as producers and/or consumers of the information in the smart city. In our proposal, the data from the context is obtained by sensor and device agents, while users interact with the smart city by means of user or system agents. The knowledge of each agent, as well as the smart city's knowledge, is semantically represented using different ontologies. To have an open city that is fully accessible to any agent, and therefore to provide enhanced services to the users, there is a need to ensure seamless communication between agents and the city, regardless of their inner knowledge representations, i.e., ontologies. To meet this goal we use ontology matching techniques; specifically, we have defined a new ontology matching algorithm called OntoPhil to be deployed within a smart city, which has never been done before. OntoPhil was tested on the benchmarks provided by the well-known evaluation initiative, the Ontology Alignment Evaluation Initiative, and also compared to other matching algorithms, although these algorithms were not specifically designed for smart cities. Additionally, specific tests involving a smart city's ontology and different types of agents were conducted to validate the usefulness of OntoPhil in the smart city environment.

  20. Disease Ontology 2015 update: an expanded and updated database of human diseases for linking biomedical knowledge through disease data.

    PubMed

    Kibbe, Warren A; Arze, Cesar; Felix, Victor; Mitraka, Elvira; Bolton, Evan; Fu, Gang; Mungall, Christopher J; Binder, Janos X; Malone, James; Vasant, Drashtti; Parkinson, Helen; Schriml, Lynn M

    2015-01-01

    The current version of the Human Disease Ontology (DO) (http://www.disease-ontology.org) database expands the utility of the ontology for the examination and comparison of genetic variation, phenotype, protein, drug and epitope data through the lens of human disease. DO is a biomedical resource of standardized common and rare disease concepts with stable identifiers organized by disease etiology. The content of DO has had 192 revisions since 2012, including the addition of 760 terms. Thirty-two percent of all terms now include definitions. DO has expanded the number and diversity of research communities and community members by 50+ during the past two years. These community members actively submit term requests, coordinate biomedical resource disease representation and provide expert curation guidance. Since the DO 2012 NAR paper, there have been hundreds of term requests and a steady increase in the number of DO listserv members, twitter followers and DO website usage. DO is moving to a multi-editor model utilizing Protégé to curate DO in web ontology language. This will enable closer collaboration with the Human Phenotype Ontology, EBI's Ontology Working Group, Mouse Genome Informatics and the Monarch Initiative among others, and enhance DO's current asserted view and multiple inferred views through reasoning. © The Author(s) 2014. Published by Oxford University Press on behalf of Nucleic Acids Research.

  1. Disease Ontology 2015 update: An expanded and updated database of human diseases for linking biomedical knowledge through disease data

    DOE PAGES

    Kibbe, Warren A.; Arze, Cesar; Felix, Victor; ...

    2014-10-27

    The current version of the Human Disease Ontology (DO) (http://www.disease-ontology.org) database expands the utility of the ontology for the examination and comparison of genetic variation, phenotype, protein, drug and epitope data through the lens of human disease. DO is a biomedical resource of standardized common and rare disease concepts with stable identifiers organized by disease etiology. The content of DO has had 192 revisions since 2012, including the addition of 760 terms. Thirty-two percent of all terms now include definitions. DO has expanded the number and diversity of research communities and community members by 50+ during the past two years. These community members actively submit term requests, coordinate biomedical resource disease representation and provide expert curation guidance. Since the DO 2012 NAR paper, there have been hundreds of term requests and a steady increase in the number of DO listserv members, twitter followers and DO website usage. DO is moving to a multi-editor model utilizing Protégé to curate DO in web ontology language. In conclusion, this will enable closer collaboration with the Human Phenotype Ontology, EBI's Ontology Working Group, Mouse Genome Informatics and the Monarch Initiative among others, and enhance DO's current asserted view and multiple inferred views through reasoning.

  2. Disease Ontology 2015 update: An expanded and updated database of human diseases for linking biomedical knowledge through disease data

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kibbe, Warren A.; Arze, Cesar; Felix, Victor

    The current version of the Human Disease Ontology (DO) (http://www.disease-ontology.org) database expands the utility of the ontology for the examination and comparison of genetic variation, phenotype, protein, drug and epitope data through the lens of human disease. DO is a biomedical resource of standardized common and rare disease concepts with stable identifiers organized by disease etiology. The content of DO has had 192 revisions since 2012, including the addition of 760 terms. Thirty-two percent of all terms now include definitions. DO has expanded the number and diversity of research communities and community members by 50+ during the past two years. These community members actively submit term requests, coordinate biomedical resource disease representation and provide expert curation guidance. Since the DO 2012 NAR paper, there have been hundreds of term requests and a steady increase in the number of DO listserv members, twitter followers and DO website usage. DO is moving to a multi-editor model utilizing Protégé to curate DO in web ontology language. In conclusion, this will enable closer collaboration with the Human Phenotype Ontology, EBI's Ontology Working Group, Mouse Genome Informatics and the Monarch Initiative among others, and enhance DO's current asserted view and multiple inferred views through reasoning.

  3. Constructivism and the Epistemic Object.

    ERIC Educational Resources Information Center

    Lewin, Philip

    Investigators influenced by Anglo-American epistemology have frequently misinterpreted Piaget's genetic constructivism as an empirical psychology, seeing knowledge acquisition as a process in which representations of the world come into increasingly close correspondence with an ontologically unproblematic external reality. Both this interpretation…

  4. Extending the evaluation of Genia Event task toward knowledge base construction and comparison to Gene Regulation Ontology task

    PubMed Central

    2015-01-01

    Background The third edition of the BioNLP Shared Task was held with the grand theme "knowledge base construction (KB)". The Genia Event (GE) task was re-designed and implemented in light of this theme. For its final report, the participating systems were evaluated from a perspective of annotation. To further explore the grand theme, we extended the evaluation from a perspective of KB construction. Also, the Gene Regulation Ontology (GRO) task was newly introduced in the third edition. The final evaluation of the participating systems resulted in relatively low performance. The reason was attributed to the large size and complex semantic representation of the ontology. To investigate potential benefits of resource exchange between the presumably similar tasks, we measured the overlap between the datasets of the two tasks, and tested whether the dataset for one task can be used to enhance performance on the other. Results We report an extended evaluation on all the participating systems in the GE task, incorporating a KB perspective. For the evaluation, the final submission of each participant was converted to RDF statements, and evaluated using 8 queries that were formulated in SPARQL. The results suggest that the evaluation may be concluded differently between the two different perspectives, annotation vs. KB. We also provide a comparison of the GE and GRO tasks by converting their datasets into each other's format. More than 90% of the GE data could be converted into the GRO task format, while only half of the GRO data could be mapped to the GE task format. The imbalance in conversion indicates that the GRO is a comprehensive extension of the GE task ontology. We further used the converted GRO data as additional training data for the GE task, which helped improve GE task participant system performance. However, the converted GE data did not help GRO task participants, due to overfitting and the ontology gap. PMID:26202680
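    The KB-style evaluation described above, converting event annotations to RDF statements and scoring them with queries, can be sketched in miniature. Here gold and predicted events are flattened into (subject, predicate, object) triples and compared; the pure-Python `query` helper stands in for the SPARQL queries used in the task, and the event types and predicate names are illustrative.

```python
def to_triples(events):
    """Flatten event annotations into a set of (s, p, o) triples."""
    triples = set()
    for ev in events:
        triples.add((ev["id"], "type", ev["type"]))
        triples.add((ev["id"], "hasTheme", ev["theme"]))
    return triples

def query(triples, p, o):
    """Roughly: SELECT ?s WHERE { ?s :p :o } over the triple set."""
    return {s for (s, pp, oo) in triples if pp == p and oo == o}

gold = to_triples([{"id": "E1", "type": "Gene_expression", "theme": "TP53"}])
predicted = to_triples([
    {"id": "E1", "type": "Gene_expression", "theme": "TP53"},
    {"id": "E2", "type": "Binding", "theme": "TP53"},
])

# Triple-level scoring: a system can have perfect recall at the KB level
# while its extra statements lower precision.
tp = len(gold & predicted)
precision = tp / len(predicted)
recall = tp / len(gold)
```

    This is why the paper notes the annotation-level and KB-level evaluations can reach different conclusions: the unit of comparison shifts from whole event structures to individual statements.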

  5. The Ontology for Biomedical Investigations.

    PubMed

    Bandrowski, Anita; Brinkman, Ryan; Brochhausen, Mathias; Brush, Matthew H; Bug, Bill; Chibucos, Marcus C; Clancy, Kevin; Courtot, Mélanie; Derom, Dirk; Dumontier, Michel; Fan, Liju; Fostel, Jennifer; Fragoso, Gilberto; Gibson, Frank; Gonzalez-Beltran, Alejandra; Haendel, Melissa A; He, Yongqun; Heiskanen, Mervi; Hernandez-Boussard, Tina; Jensen, Mark; Lin, Yu; Lister, Allyson L; Lord, Phillip; Malone, James; Manduchi, Elisabetta; McGee, Monnie; Morrison, Norman; Overton, James A; Parkinson, Helen; Peters, Bjoern; Rocca-Serra, Philippe; Ruttenberg, Alan; Sansone, Susanna-Assunta; Scheuermann, Richard H; Schober, Daniel; Smith, Barry; Soldatova, Larisa N; Stoeckert, Christian J; Taylor, Chris F; Torniai, Carlo; Turner, Jessica A; Vita, Randi; Whetzel, Patricia L; Zheng, Jie

    2016-01-01

    The Ontology for Biomedical Investigations (OBI) is an ontology that provides terms with precisely defined meanings to describe all aspects of how investigations in the biological and medical domains are conducted. OBI re-uses ontologies that provide a representation of biomedical knowledge from the Open Biological and Biomedical Ontologies (OBO) project and adds the ability to describe how this knowledge was derived. We here describe the state of OBI and several applications that are using it, such as adding semantic expressivity to existing databases, building data entry forms, and enabling interoperability between knowledge resources. OBI covers all phases of the investigation process, such as planning, execution and reporting. It represents information and material entities that participate in these processes, as well as roles and functions. Prior to OBI, it was not possible to use a single internally consistent resource that could be applied to multiple types of experiments for these applications. OBI has made this possible by creating terms for entities involved in biological and medical investigations and by importing parts of other biomedical ontologies such as GO, Chemical Entities of Biological Interest (ChEBI) and Phenotype Attribute and Trait Ontology (PATO) without altering their meaning. OBI is being used in a wide range of projects covering genomics, multi-omics, immunology, and catalogs of services. OBI has also spawned other ontologies (Information Artifact Ontology) and methods for importing parts of ontologies (Minimum information to reference an external ontology term (MIREOT)). The OBI project is an open cross-disciplinary collaborative effort, encompassing multiple research communities from around the globe. To date, OBI has created 2366 classes and 40 relations along with textual and formal definitions. 
The OBI Consortium maintains a web resource (http://obi-ontology.org) providing details on the people, policies, and issues being addressed in association with OBI. The current release of OBI is available at http://purl.obolibrary.org/obo/obi.owl.

  6. The Ontology for Biomedical Investigations

    PubMed Central

    Bandrowski, Anita; Brinkman, Ryan; Brochhausen, Mathias; Brush, Matthew H.; Chibucos, Marcus C.; Clancy, Kevin; Courtot, Mélanie; Derom, Dirk; Dumontier, Michel; Fan, Liju; Fostel, Jennifer; Fragoso, Gilberto; Gibson, Frank; Gonzalez-Beltran, Alejandra; Haendel, Melissa A.; He, Yongqun; Heiskanen, Mervi; Hernandez-Boussard, Tina; Jensen, Mark; Lin, Yu; Lister, Allyson L.; Lord, Phillip; Malone, James; Manduchi, Elisabetta; McGee, Monnie; Morrison, Norman; Overton, James A.; Parkinson, Helen; Peters, Bjoern; Rocca-Serra, Philippe; Ruttenberg, Alan; Sansone, Susanna-Assunta; Scheuermann, Richard H.; Schober, Daniel; Smith, Barry; Soldatova, Larisa N.; Stoeckert, Christian J.; Taylor, Chris F.; Torniai, Carlo; Turner, Jessica A.; Vita, Randi; Whetzel, Patricia L.; Zheng, Jie

    2016-01-01

    The Ontology for Biomedical Investigations (OBI) is an ontology that provides terms with precisely defined meanings to describe all aspects of how investigations in the biological and medical domains are conducted. OBI re-uses ontologies that provide a representation of biomedical knowledge from the Open Biological and Biomedical Ontologies (OBO) project and adds the ability to describe how this knowledge was derived. We here describe the state of OBI and several applications that are using it, such as adding semantic expressivity to existing databases, building data entry forms, and enabling interoperability between knowledge resources. OBI covers all phases of the investigation process, such as planning, execution and reporting. It represents information and material entities that participate in these processes, as well as roles and functions. Prior to OBI, it was not possible to use a single internally consistent resource that could be applied to multiple types of experiments for these applications. OBI has made this possible by creating terms for entities involved in biological and medical investigations and by importing parts of other biomedical ontologies such as GO, Chemical Entities of Biological Interest (ChEBI) and Phenotype Attribute and Trait Ontology (PATO) without altering their meaning. OBI is being used in a wide range of projects covering genomics, multi-omics, immunology, and catalogs of services. OBI has also spawned other ontologies (Information Artifact Ontology) and methods for importing parts of ontologies (Minimum information to reference an external ontology term (MIREOT)). The OBI project is an open cross-disciplinary collaborative effort, encompassing multiple research communities from around the globe. To date, OBI has created 2366 classes and 40 relations along with textual and formal definitions. 
The OBI Consortium maintains a web resource (http://obi-ontology.org) providing details on the people, policies, and issues being addressed in association with OBI. The current release of OBI is available at http://purl.obolibrary.org/obo/obi.owl. PMID:27128319

  7. Interoperable Data Sharing for Diverse Scientific Disciplines

    NASA Astrophysics Data System (ADS)

    Hughes, John S.; Crichton, Daniel; Martinez, Santa; Law, Emily; Hardman, Sean

    2016-04-01

    For diverse scientific disciplines to interoperate they must be able to exchange information based on a shared understanding. To capture this shared understanding, we have developed a knowledge representation framework using ontologies and ISO-level archive and metadata registry reference models. This framework provides multi-level governance, evolves independently of implementation technologies, and promotes agile development, namely adaptive planning, evolutionary development, early delivery, continuous improvement, and rapid and flexible response to change. The knowledge representation framework is populated through knowledge acquisition from discipline experts. It is also extended to meet specific discipline requirements. The result is a formalized and rigorous knowledge base that addresses data representation, integrity, provenance, context, quantity, and their relationships within the community. The contents of the knowledge base are translated and written to files in appropriate formats to configure system software and services, provide user documentation, validate ingested data, and support data analytics. This presentation will provide an overview of the framework, present the Planetary Data System's PDS4 as a use case that has been adopted by the international planetary science community, describe how the framework is being applied to other disciplines, and share some important lessons learned.

  8. Design and Evaluation of a Bacterial Clinical Infectious Diseases Ontology

    PubMed Central

    Gordon, Claire L.; Pouch, Stephanie; Cowell, Lindsay G.; Boland, Mary Regina; Platt, Heather L.; Goldfain, Albert; Weng, Chunhua

    2013-01-01

    With antimicrobial resistance increasing worldwide, there is a great need to use automated antimicrobial decision support systems (ADSSs) to lower antimicrobial resistance rates by promoting appropriate antimicrobial use. However, they are infrequently used, mostly because of their poor interoperability with different health information technologies. Ontologies can augment portable ADSSs by providing an explicit knowledge representation for biomedical entities and their relationships, helping to standardize and integrate heterogeneous data resources. We developed a bacterial clinical infectious diseases ontology (BCIDO) using Protégé-OWL. BCIDO defines a controlled terminology for clinical infectious diseases along with domain knowledge commonly used in hospital settings for clinical infectious disease treatment decision-making. BCIDO has 599 classes and 2355 object properties. Terms were imported from or mapped to the Systematized Nomenclature of Medicine, Unified Medical Language System, RxNorm and National Center for Biotechnology Information Organismal Classification where possible. Domain expert evaluation using the "laddering" technique, ontology visualization, and clinical notes and scenarios confirmed the correctness and potential usefulness of BCIDO. PMID:24551353

  9. Perceptual Pragmatism and the Naturalized Ontology of Color.

    PubMed

    Chirimuuta, Mazviita

    2017-01-01

    This paper considers whether there can be any such thing as a naturalized metaphysics of color-any distillation of the commitments of perceptual science with regard to color ontology. I first make some observations about the kinds of philosophical commitments that sometimes bubble to the surface in the psychology and neuroscience of color. Unsurprisingly, because of the range of opinions expressed, an ontology of color cannot simply be read off from scientists' definitions and theoretical statements. I next consider two alternative routes. First, conceptual pluralism inspired by Mark Wilson's analysis of scientific representation. I argue that these findings leave the prospects for a naturalized color ontology rather dim. Second, I outline a naturalized epistemology of perception. I ask how the correctness and informativeness of perceptual states is understood by contemporary perceptual science. I argue that the detectionist ideal of correspondence should be replaced by the pragmatic ideal of usefulness. I argue that this result has significant implications for the metaphysics of color. Copyright © 2016 Cognitive Science Society, Inc.

  10. Modulated evaluation metrics for drug-based ontologies.

    PubMed

    Amith, Muhammad; Tao, Cui

    2017-04-24

    Research on ontology evaluation is scarce. If biomedical ontological datasets and knowledge bases are to be widely used, there needs to be quality control and evaluation for the content and structure of the ontology. This paper introduces how to effectively utilize a semiotic-inspired approach to ontology evaluation, specifically for drug-related ontologies hosted on the National Center for Biomedical Ontology BioPortal. Using the semiotic-based evaluation framework for drug-based ontologies, we adjusted the quality metrics based on the semiotic features of drug ontologies. Then, we compared the quality scores before and after tailoring. The tailored scores revealed a more precise measurement and a tighter distribution than the untailored scores. The results of this study show that a tailored semiotic evaluation produced a more meaningful and accurate assessment of drug-based ontologies, suggesting the possible usefulness of semiotics in ontology evaluation.

  11. Using Linked Open Data and Semantic Integration to Search Across Geoscience Repositories

    NASA Astrophysics Data System (ADS)

    Mickle, A.; Raymond, L. M.; Shepherd, A.; Arko, R. A.; Carbotte, S. M.; Chandler, C. L.; Cheatham, M.; Fils, D.; Hitzler, P.; Janowicz, K.; Jones, M.; Krisnadhi, A.; Lehnert, K. A.; Narock, T.; Schildhauer, M.; Wiebe, P. H.

    2014-12-01

    The MBLWHOI Library is a partner in the OceanLink project, an NSF EarthCube Building Block, applying semantic technologies to enable knowledge discovery, sharing and integration. OceanLink is testing ontology design patterns that link together: two data repositories, Rolling Deck to Repository (R2R) and the Biological and Chemical Oceanography Data Management Office (BCO-DMO); the MBLWHOI Library Institutional Repository (IR) Woods Hole Open Access Server (WHOAS); National Science Foundation (NSF) funded awards; and American Geophysical Union (AGU) conference presentations. The Library is collaborating with scientific users, data managers, DSpace engineers, experts in ontology design patterns, and user interface developers to make WHOAS, a DSpace repository, linked open data enabled. The goal is to allow searching across repositories without any of the information providers having to change how they manage their collections. The tools developed for DSpace will be made available to the community of users. There are 257 registered DSpace repositories in the United States and over 1700 worldwide. Outcomes include: integration of DSpace with the OpenRDF Sesame triple store to provide a SPARQL endpoint for the storage and query of RDF representations of DSpace resources; mapping of DSpace resources to the OceanLink ontology; and a DSpace "data" add-on to provide resolvable linked open data representations of DSpace resources.

  12. COMICS: Cartoon Visualization of Omics Data in Spatial Context Using Anatomical Ontologies

    PubMed Central

    2017-01-01

    COMICS is an interactive and open-access web platform for integration and visualization of molecular expression data in anatomograms of zebrafish, carp, and mouse model systems. Anatomical ontologies are used to map omics data across experiments and between an experiment and a particular visualization in a data-dependent manner. COMICS is built on top of several existing resources. Zebrafish and mouse anatomical ontologies with their controlled vocabulary (CV) and defined hierarchy are used with the ontoCAT R package to aggregate data for comparison and visualization. Libraries from the QGIS geographical information system are used with the R packages "maps" and "maptools" to visualize and interact with molecular expression data in anatomical drawings of the model systems. COMICS allows users to upload their own data from omics experiments, using any gene or protein nomenclature they wish, as long as CV terms are used to define anatomical regions or developmental stages. Additional support is provided for common nomenclatures such as ZFIN gene names and UniProt accessions. COMICS can be used to generate publication-quality visualizations of gene and protein expression across experiments. Unlike previous tools that have used anatomical ontologies to interpret imaging data in several animal models, including zebrafish, COMICS is designed to take spatially resolved data generated by dissection or fractionation and display this data in visually clear anatomical representations rather than large data tables. COMICS is optimized for ease-of-use, with a minimalistic web interface and automatic selection of the appropriate visual representation depending on the input data. PMID:29083911
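    The ontology-mediated aggregation step the abstract describes, rolling expression values keyed by anatomical CV terms up through the ontology hierarchy so experiments can be compared at any level, can be sketched as follows. The term names and hierarchy here are a toy illustration, not real ZFA/MA ontology identifiers, and this is not the ontoCAT implementation.

```python
# Toy child -> parent hierarchy standing in for an anatomical ontology.
parents = {"liver": "viscera", "heart": "viscera", "viscera": "whole_body"}

def rollup(term, values):
    """Sum expression values for a term and all of its ontology descendants."""
    total = values.get(term, 0.0)
    for child, parent in parents.items():
        if parent == term:
            total += rollup(child, values)
    return total

# Expression values measured in dissected tissues, keyed by CV term.
expr = {"liver": 3.0, "heart": 2.5}
```

    Because values attach to CV terms rather than free-text labels, two experiments that dissected at different granularities can still be compared after rolling both up to a common ancestor term.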

  13. COMICS: Cartoon Visualization of Omics Data in Spatial Context Using Anatomical Ontologies.

    PubMed

    Travin, Dmitrii; Popov, Iaroslav; Guler, Arzu Tugce; Medvedev, Dmitry; van der Plas-Duivesteijn, Suzanne; Varela, Monica; Kolder, Iris C R M; Meijer, Annemarie H; Spaink, Herman P; Palmblad, Magnus

    2018-01-05

    COMICS is an interactive and open-access web platform for integration and visualization of molecular expression data in anatomograms of zebrafish, carp, and mouse model systems. Anatomical ontologies are used to map omics data across experiments and between an experiment and a particular visualization in a data-dependent manner. COMICS is built on top of several existing resources. Zebrafish and mouse anatomical ontologies with their controlled vocabulary (CV) and defined hierarchy are used with the ontoCAT R package to aggregate data for comparison and visualization. Libraries from the QGIS geographical information system are used with the R packages "maps" and "maptools" to visualize and interact with molecular expression data in anatomical drawings of the model systems. COMICS allows users to upload their own data from omics experiments, using any gene or protein nomenclature they wish, as long as CV terms are used to define anatomical regions or developmental stages. Common nomenclatures such as ZFIN gene names and UniProt accessions receive additional support. COMICS can be used to generate publication-quality visualizations of gene and protein expression across experiments. Unlike previous tools that have used anatomical ontologies to interpret imaging data in several animal models, including zebrafish, COMICS is designed to take spatially resolved data generated by dissection or fractionation and display this data in visually clear anatomical representations rather than large data tables. COMICS is optimized for ease-of-use, with a minimalistic web interface and automatic selection of the appropriate visual representation depending on the input data.
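
    The role of the anatomical ontology's defined hierarchy in this kind of aggregation can be sketched in a few lines (term IDs and values below are invented for illustration; COMICS itself uses ontoCAT in R): expression recorded against fine-grained CV terms is rolled up to the coarser term that an anatomical drawing actually depicts.

```python
# Hypothetical fragment of an anatomical ontology: child term -> parent term.
parent = {
    "ZFA:forebrain": "ZFA:brain",
    "ZFA:hindbrain": "ZFA:brain",
    "ZFA:brain":     "ZFA:nervous_system",
}

# Expression values keyed by the CV term at which they were measured.
expression = {"ZFA:forebrain": 4.25, "ZFA:hindbrain": 1.75}

def aggregate(term, data, parent_map):
    """Sum values for a term and all of its descendants in the hierarchy."""
    total = data.get(term, 0.0)
    for child, par in parent_map.items():
        if par == term:
            total += aggregate(child, data, parent_map)
    return total

print(aggregate("ZFA:brain", expression, parent))  # 6.0
```

    Because both the experiment and the visualization are annotated with CV terms, the same roll-up lets data from different experiments be compared at a common anatomical level.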

  14. A Chado case study: an ontology-based modular schema for representing genome-associated biological information.

    PubMed

    Mungall, Christopher J; Emmert, David B

    2007-07-01

    A few years ago, FlyBase undertook to design a new database schema to store Drosophila data. It would fully integrate genomic sequence and annotation data with bibliographic, genetic, phenotypic and molecular data from the literature, representing a distillation of the first 100 years of research on this major animal model system. In developing this new integrated schema, FlyBase also made a commitment to ensure that its design was generic, extensible and available as open source, so that it could be employed as the core schema of any model organism data repository, thereby avoiding redundant software development and potentially increasing interoperability. Our question was whether we could create a relational database schema that would be successfully reused. Chado is a relational database schema now being used to manage biological knowledge for a wide variety of organisms, from human to pathogens, especially the classes of information that directly or indirectly can be associated with genome sequences or the primary RNA and protein products encoded by a genome. Biological databases that conform to this schema can interoperate with one another, and with application software from the Generic Model Organism Database (GMOD) toolkit. Chado is distinctive because its design is driven by ontologies. The use of ontologies (or controlled vocabularies) is ubiquitous across the schema, as they are used as a means of typing entities. The Chado schema is partitioned into integrated subschemas (modules), each encapsulating a different biological domain, and each described using representations in appropriate ontologies. To illustrate this methodology, we describe here the Chado modules used for describing genomic sequences. GMOD is a collaboration of several model organism database groups, including FlyBase, to develop a set of open-source software for managing model organism data. The Chado schema is freely distributed under the terms of the Artistic License (http://www.opensource.org/licenses/artistic-license.php) from GMOD (www.gmod.org).

  15. Leveraging electronic healthcare record standards and semantic web technologies for the identification of patient cohorts

    PubMed Central

    Fernández-Breis, Jesualdo Tomás; Maldonado, José Alberto; Marcos, Mar; Legaz-García, María del Carmen; Moner, David; Torres-Sospedra, Joaquín; Esteban-Gil, Angel; Martínez-Salvador, Begoña; Robles, Montserrat

    2013-01-01

    Background The secondary use of electronic healthcare records (EHRs) often requires the identification of patient cohorts. In this context, an important problem is the heterogeneity of clinical data sources, which can be overcome with the combined use of standardized information models, virtual health records, and semantic technologies, since each of them contributes to solving aspects related to the semantic interoperability of EHR data. Objective To develop methods allowing for a direct use of EHR data for the identification of patient cohorts leveraging current EHR standards and semantic web technologies. Materials and methods We propose to take advantage of the best features of working with EHR standards and ontologies. Our proposal is based on our previous results and experience working with both technological infrastructures. Our main principle is to perform each activity at the abstraction level with the most appropriate technology available. This means that part of the processing will be performed using archetypes (ie, data level) and the rest using ontologies (ie, knowledge level). Our approach will start working with EHR data in proprietary format, which will be first normalized and elaborated using EHR standards and then transformed into a semantic representation, which will be exploited by automated reasoning. Results We have applied our approach to protocols for colorectal cancer screening. The results comprise the archetypes, ontologies, and datasets developed for the standardization and semantic analysis of EHR data. Anonymized real data have been used and the patients have been successfully classified by the risk of developing colorectal cancer. Conclusions This work provides new insights into how archetypes and ontologies can be effectively combined for EHR-driven phenotyping. The methodological approach can be applied to other problems provided that suitable archetypes, ontologies, and classification rules can be designed. PMID:23934950

  16. Gene Ontology synonym generation rules lead to increased performance in biomedical concept recognition.

    PubMed

    Funk, Christopher S; Cohen, K Bretonnel; Hunter, Lawrence E; Verspoor, Karin M

    2016-09-09

    Gene Ontology (GO) terms represent the standard for annotation and representation of molecular functions, biological processes and cellular compartments, but a large gap exists between the way concepts are represented in the ontology and how they are expressed in natural language text. The construction of highly specific GO terms is formulaic, consisting of parts and pieces from more simple terms. We present two different types of manually generated rules to help capture the variation of how GO terms can appear in natural language text. The first set of rules takes into account the compositional nature of GO and recursively decomposes the terms into their smallest constituent parts. The second set of rules generates derivational variations of these smaller terms and compositionally combines all generated variants to form the original term. By applying both types of rules, new synonyms are generated for two-thirds of all GO terms and an increase in F-measure performance for recognition of GO on the CRAFT corpus from 0.498 to 0.636 is observed. Additionally, we evaluated the combination of both types of rules over one million full text documents from Elsevier; manual validation and error analysis show we are able to recognize GO concepts with reasonable accuracy (88%) based on random sampling of annotations. In this work, we present a set of simple synonym generation rules that utilize the highly compositional and formulaic nature of the Gene Ontology concepts. We illustrate how the generated synonyms aid in improving recognition of GO concepts on two different biomedical corpora. We discuss other applications of our rules for GO ontology quality assurance, explore the issue of overgeneration, and provide examples of how similar methodologies could be applied to other biomedical terminologies. Additionally, we provide all generated synonyms for use by the text-mining community.
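
    The two rule types described above, decomposition followed by derivational recombination, can be sketched as follows (the variant table and the single "X of Y" pattern are simplified stand-ins for the paper's manually curated rules):

```python
# Toy derivational-variant table: each simple constituent maps to the
# surface forms it may take in text (hypothetical, for illustration).
variants = {
    "regulation":    ["regulation"],
    "transcription": ["transcription", "transcriptional"],
}

def synonyms(term):
    """Decompose a compositional 'X of Y' term, then recombine variants."""
    head, _, filler = term.partition(" of ")
    out = set()
    for h in variants.get(head, [head]):
        for f in variants.get(filler, [filler]):
            if f.endswith("al"):          # adjectival variant reorders the parts
                out.add(f"{f} {h}")
            else:
                out.add(f"{h} of {f}")
    return out

print(sorted(synonyms("regulation of transcription")))
# ['regulation of transcription', 'transcriptional regulation']
```

    Real rules recurse on each constituent (so deeply nested terms decompose fully), and unconstrained recombination is exactly where the overgeneration discussed in the abstract arises.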

  17. Leveraging electronic healthcare record standards and semantic web technologies for the identification of patient cohorts.

    PubMed

    Fernández-Breis, Jesualdo Tomás; Maldonado, José Alberto; Marcos, Mar; Legaz-García, María del Carmen; Moner, David; Torres-Sospedra, Joaquín; Esteban-Gil, Angel; Martínez-Salvador, Begoña; Robles, Montserrat

    2013-12-01

    The secondary use of electronic healthcare records (EHRs) often requires the identification of patient cohorts. In this context, an important problem is the heterogeneity of clinical data sources, which can be overcome with the combined use of standardized information models, virtual health records, and semantic technologies, since each of them contributes to solving aspects related to the semantic interoperability of EHR data. To develop methods allowing for a direct use of EHR data for the identification of patient cohorts leveraging current EHR standards and semantic web technologies. We propose to take advantage of the best features of working with EHR standards and ontologies. Our proposal is based on our previous results and experience working with both technological infrastructures. Our main principle is to perform each activity at the abstraction level with the most appropriate technology available. This means that part of the processing will be performed using archetypes (ie, data level) and the rest using ontologies (ie, knowledge level). Our approach will start working with EHR data in proprietary format, which will be first normalized and elaborated using EHR standards and then transformed into a semantic representation, which will be exploited by automated reasoning. We have applied our approach to protocols for colorectal cancer screening. The results comprise the archetypes, ontologies, and datasets developed for the standardization and semantic analysis of EHR data. Anonymized real data have been used and the patients have been successfully classified by the risk of developing colorectal cancer. This work provides new insights into how archetypes and ontologies can be effectively combined for EHR-driven phenotyping. The methodological approach can be applied to other problems provided that suitable archetypes, ontologies, and classification rules can be designed.
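
    The payoff of the pipeline described above is that, once heterogeneous EHR data are normalized to a common structure, cohort identification reduces to evaluating classification rules over that structure. A minimal sketch, with entirely made-up rules and thresholds (the real criteria live in the paper's archetypes and ontologies):

```python
from dataclasses import dataclass

# Stand-in for an archetype-normalized patient record (fields hypothetical).
@dataclass
class Patient:
    age: int
    family_history_crc: bool   # first-degree relative with colorectal cancer
    prior_adenoma: bool

def risk_group(p: Patient) -> str:
    """Toy classification rule; thresholds are illustrative only."""
    if p.family_history_crc or p.prior_adenoma:
        return "high"
    return "high" if p.age >= 50 else "average"   # hypothetical age cutoff

cohort = [Patient(62, False, False), Patient(41, True, False), Patient(35, False, False)]
print([risk_group(p) for p in cohort])  # ['high', 'high', 'average']
```

    In the paper this classification step is performed by an OWL reasoner over the semantic representation rather than by hand-written code, which is what makes the rules declarative and reusable.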

  18. MachineProse: an Ontological Framework for Scientific Assertions

    PubMed Central

    Dinakarpandian, Deendayal; Lee, Yugyung; Vishwanath, Kartik; Lingambhotla, Rohini

    2006-01-01

    Objective: The idea of testing a hypothesis is central to the practice of biomedical research. However, the results of testing a hypothesis are published mainly in the form of prose articles. Encoding the results as scientific assertions that are both human and machine readable would greatly enhance the synergistic growth and dissemination of knowledge. Design: We have developed MachineProse (MP), an ontological framework for the concise specification of scientific assertions. MP is based on the idea of an assertion constituting a fundamental unit of knowledge. This is in contrast to current approaches that use discrete concept terms from domain ontologies for annotation and assertions are only inferred heuristically. Measurements: We use illustrative examples to highlight the advantages of MP over the use of the Medical Subject Headings (MeSH) system and keywords in indexing scientific articles. Results: We show how MP makes it possible to carry out semantic annotation of publications that is machine readable and allows for precise search capabilities. In addition, when used by itself, MP serves as a knowledge repository for emerging discoveries. A prototype for proof of concept has been developed that demonstrates the feasibility and novel benefits of MP. As part of the MP framework, we have created an ontology of relationship types with about 100 terms optimized for the representation of scientific assertions. Conclusion: MachineProse is a novel semantic framework that we believe may be used to summarize research findings, annotate biomedical publications, and support sophisticated searches. PMID:16357355
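
    The core idea, an assertion as the fundamental unit of knowledge rather than a bag of index terms, can be sketched as follows (field names, relation terms, and sources are illustrative, not MP's actual 100-term relationship ontology):

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Assertion:
    subject: str
    relation: str    # drawn from a controlled relationship-type vocabulary
    object: str
    source: str      # provenance: the publication making the claim

# Hypothetical assertion store.
store = [
    Assertion("aspirin", "reduces_risk_of", "colorectal cancer", "example-article-1"),
    Assertion("geneX",   "upregulates",     "geneY",             "example-article-2"),
]

def find(assertions, relation=None, subject=None):
    """Precise search over structured assertions, not keyword co-occurrence."""
    return [a for a in assertions
            if (relation is None or a.relation == relation)
            and (subject is None or a.subject == subject)]

print([a.object for a in find(store, relation="reduces_risk_of")])
```

    This is the contrast the abstract draws with MeSH-style indexing: a keyword search for "aspirin" and "colorectal cancer" cannot distinguish *reduces the risk of* from *increases the risk of*, while a typed relation can.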

  19. Community-based Ontology Development, Annotation and Discussion with MediaWiki extension Ontokiwi and Ontokiwi-based Ontobedia

    PubMed Central

    Ong, Edison; He, Yongqun

    2016-01-01

    Hundreds of biological and biomedical ontologies have been developed to support data standardization, integration and analysis. Although ontologies are typically developed for community usage, community efforts in ontology development are limited. To support ontology visualization, distribution, and community-based annotation and development, we have developed Ontokiwi, an ontology extension to the MediaWiki software. Ontokiwi displays hierarchical classes and ontological axioms. Ontology classes and axioms can be edited and added using the Ontokiwi form or the MediaWiki source editor. Ontokiwi also inherits MediaWiki features such as Wikitext editing and version control. Based on the Ontokiwi/MediaWiki software package, we have developed Ontobedia, which aims to support community-based development and annotation of biological and biomedical ontologies. As demonstrations, we have loaded the Ontology of Adverse Events (OAE) and the Cell Line Ontology (CLO) into Ontobedia. Our studies showed that Ontobedia provides the expected Ontokiwi features. PMID:27570653

  20. Development of an Ontology for Periodontitis.

    PubMed

    Suzuki, Asami; Takai-Igarashi, Takako; Nakaya, Jun; Tanaka, Hiroshi

    2015-01-01

    In the community of clinical dentists and periodontal researchers, there is a clear demand for a systems model capable of linking the clinical presentation of periodontitis to underlying molecular knowledge. A computer-readable representation of the processes of disease development will give periodontal researchers opportunities to elucidate pathways and mechanisms of periodontitis. An ontology for periodontitis can be a model for the integration of a large variety of factors relating to a complex disease such as chronic inflammation in different organs accompanied by bone remodeling and immune system disorders, which has recently been referred to as osteoimmunology. Terms characteristic of descriptions related to the onset and progression of periodontitis were manually extracted from 194 review articles and PubMed abstracts by experts in periodontology. We specified all the relations between the extracted terms and organized them into an ontology for periodontitis. We also investigated matching between classes of our ontology and those of Gene Ontology Biological Process. We developed an ontology for periodontitis called Periodontitis-Ontology (PeriO). The pathological progression of periodontitis is caused by complex, multi-factor interrelationships. PeriO consists of all the concepts required to represent the pathological progression and clinical treatment of periodontitis. The pathological processes were formalized with reference to Basic Formal Ontology and Relation Ontology, which account for participants in the processes realized by biological objects such as molecules and cells. We investigated the peculiarity of biological processes observed in pathological progression and medical treatments for the disease in comparison with Gene Ontology Biological Process (GO-BP) annotations. The results indicated that the peculiarities of PeriO lie in 1) granularity and context dependency of both conceptualizations, and 2) causality intrinsic to the pathological processes. PeriO defines more specific concepts than GO-BP, and thus can be added as descendants of GO-BP leaf nodes. PeriO defines causal relationships between the process concepts, which are not shown in GO-BP. The difference can be explained by the goal of conceptualization: PeriO focuses on mechanisms of the pathogenic progress, while GO-BP focuses on cataloguing all of the biological processes observed in experiments. The goal of conceptualization in PeriO may reflect the domain knowledge where a consequence in the causal relationships is a primary interest. We believe the peculiarities can be shared among other diseases when comparing processes in disease against GO-BP. This is the first open biomedical ontology of periodontitis capable of providing a foundation for an ontology-based model of aspects of molecular biology and pathological processes related to periodontitis, as well as its relations with systemic diseases. PeriO is available at http://bio-omix.tmd.ac.jp/periodontitis/.

  1. Ontology-based approach for in vivo human connectomics: the medial Brodmann area 6 case study

    PubMed Central

    Moreau, Tristan; Gibaud, Bernard

    2015-01-01

    Different non-invasive neuroimaging modalities and multi-level analysis of human connectomics datasets yield a great amount of heterogeneous data which are hard to integrate into a unified representation. Biomedical ontologies can provide a suitable integrative framework for domain knowledge as well as a tool to facilitate information retrieval, data sharing and data comparisons across scales, modalities and species. In particular, there is an urgent need to fill the gap between neurobiology and in vivo human connectomics, in order to better account for the phenomena observed with Magnetic Resonance Imaging (MRI) and relate them to existing brain knowledge. The aim of this study was to create a neuroanatomical ontology, called “Human Connectomics Ontology” (HCO), in order to represent macroscopic gray matter regions connected with fiber bundles assessed by diffusion tractography and to annotate MRI connectomics datasets acquired in the living human brain. First, a neuroanatomical “view” called NEURO-DL-FMA was extracted from the reference ontology Foundational Model of Anatomy (FMA) in order to construct a gross anatomy ontology of the brain. HCO extends NEURO-DL-FMA by introducing entities (such as “MR_Node” and “MR_Route”) and object properties (such as “tracto_connects”) pertaining to MR connectivity. The Web Ontology Language Description Logics (OWL DL) formalism was used in order to enable reasoning with common reasoning engines. Moreover, an experimental study was carried out to demonstrate how the HCO could be effectively used to address complex queries concerning in vivo MRI connectomics datasets. Indeed, neuroimaging datasets of five healthy subjects were annotated with terms of the HCO, and a multi-level analysis of the connectivity patterns assessed by diffusion tractography of the right medial Brodmann Area 6 was performed using a set of queries. This approach can facilitate comparison of data across scales, modalities and species. PMID:25914640
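
    A connectivity query of the kind the abstract describes can be sketched as a walk over "tracto_connects" edges (the property name comes from the abstract; the region names and edges below are hypothetical, and a real HCO query would be answered by an OWL DL reasoner rather than a hand-written traversal):

```python
from collections import deque

# Hypothetical annotated tractography graph: region -> directly connected regions.
tracto_connects = {
    "medial_BA6": {"SMA_proper", "pre_SMA"},
    "SMA_proper": {"primary_motor_cortex"},
    "pre_SMA":    set(),
}

def reachable(start, graph):
    """All regions reachable from `start` via tracto_connects edges (BFS)."""
    seen, queue = set(), deque([start])
    while queue:
        node = queue.popleft()
        for nxt in graph.get(node, ()):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return seen

print(sorted(reachable("medial_BA6", tracto_connects)))
```

    Grounding the node labels in FMA-derived classes is what lets the same query be posed across subjects, scales, and even species.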

  2. Publication, discovery and interoperability of Clinical Decision Support Systems: A Linked Data approach.

    PubMed

    Marco-Ruiz, Luis; Pedrinaci, Carlos; Maldonado, J A; Panziera, Luca; Chen, Rong; Bellika, J Gustav

    2016-08-01

    The high costs involved in the development of Clinical Decision Support Systems (CDSS) make it necessary to share their functionality across different systems and organizations. Service Oriented Architectures (SOA) have been proposed to allow reusing CDSS by encapsulating them in a Web service. However, strong barriers to sharing CDS functionality are still present as a consequence of the lack of expressiveness of services' interfaces. Linked Services are the evolution of the Semantic Web Services paradigm to process Linked Data. They aim to provide semantic descriptions over SOA implementations to overcome the limitations derived from the syntactic nature of Web services technologies. To facilitate the publication, discovery and interoperability of CDS services by evolving them into Linked Services that expose their interfaces as Linked Data. We developed methods and models to enhance CDS SOA as Linked Services that define a rich semantic layer based on machine interpretable ontologies that powers their interoperability and reuse. These ontologies provided unambiguous descriptions of CDS services properties to expose them to the Web of Data. We developed models compliant with Linked Data principles to create a semantic representation of the components that compose CDS services. To evaluate our approach we implemented a set of CDS Linked Services using a Web service definition ontology. The definitions of Web services were linked to the models developed in order to attach unambiguous semantics to the service components. All models were bound to SNOMED-CT and public ontologies (e.g. Dublin Core) in order to provide a lingua franca for exploring them. Discovery and analysis of CDS services based on machine interpretable models was performed by reasoning over the ontologies built. Linked Services can be used effectively to expose CDS services to the Web of Data by building on current CDS standards. This allows building shared Linked Knowledge Bases to provide machine interpretable semantics to the CDS service description, alleviating the challenges of interoperability and reuse. Linked Services allow for building 'digital libraries' of distributed CDS services that can be hosted and maintained in different organizations. Copyright © 2016 Elsevier Inc. All rights reserved.

  3. An ontological knowledge framework for adaptive medical workflow.

    PubMed

    Dang, Jiangbo; Hedayati, Amir; Hampel, Ken; Toklu, Candemir

    2008-10-01

    As emerging technologies, the semantic Web and SOA (Service-Oriented Architecture) allow a BPMS (Business Process Management System) to automate business processes that can be described as services, which in turn can be used to wrap existing enterprise applications. BPMS provides tools and methodologies to compose Web services that can be executed as business processes and monitored by BPM (Business Process Management) consoles. Ontologies are a formal declarative knowledge representation model. They provide a foundation upon which machine-understandable knowledge can be obtained and, as a result, make machine intelligence possible. Healthcare systems can adopt these technologies to make themselves ubiquitous, adaptive, and intelligent, and thus serve patients better. This paper presents an ontological knowledge framework that covers the healthcare domains a hospital encompasses, from medical and administrative tasks to hospital assets, medical insurance, patient records, drugs, and regulations. Our ontology therefore makes our vision of personalized healthcare possible by capturing all the knowledge necessary for a complex personalized healthcare scenario involving patient care, insurance policies, drug prescriptions, and compliance. For example, our ontology enables a workflow management system to allow users, from physicians to administrative assistants, to manage, and even create, new context-aware medical workflows and execute them on the fly.

  4. Desiderata for ontologies to be used in semantic annotation of biomedical documents.

    PubMed

    Bada, Michael; Hunter, Lawrence

    2011-02-01

    A wealth of knowledge valuable to the translational research scientist is contained within the vast biomedical literature, but this knowledge is typically in the form of natural language. Sophisticated natural-language-processing systems are needed to translate text into unambiguous formal representations grounded in high-quality consensus ontologies, and these systems in turn rely on gold-standard corpora of annotated documents for training and testing. To this end, we are constructing the Colorado Richly Annotated Full-Text (CRAFT) Corpus, a collection of 97 full-text biomedical journal articles that are being manually annotated with the entire sets of terms from select vocabularies, predominantly from the Open Biomedical Ontologies (OBO) library. Our efforts in building this corpus have illuminated infelicities of these ontologies with respect to the semantic annotation of biomedical documents, and we propose desiderata whose implementation could substantially improve their utility in this task; these include the integration of overlapping terms across OBOs, the resolution of OBO-specific ambiguities, the integration of the BFO with the OBOs and the use of mid-level ontologies, the inclusion of noncanonical instances, and the expansion of relations and realizable entities. Copyright © 2010 Elsevier Inc. All rights reserved.

  5. Progress toward a Semantic eScience Framework; building on advanced cyberinfrastructure

    NASA Astrophysics Data System (ADS)

    McGuinness, D. L.; Fox, P. A.; West, P.; Rozell, E.; Zednik, S.; Chang, C.

    2010-12-01

    Development and implementation of several semantic application components has begun within the configurable and extensible semantic eScience framework (SESF). Extensions and improvements to several ontologies have been made based on distinct interdisciplinary use cases ranging from solar physics to biological and chemical oceanography. Importantly, these semantic representations mediate access to a diverse set of existing and emerging cyberinfrastructure. Among the advances is the population of triple stores with web-accessible query services. A triple store is akin to a relational data store where the basic stored unit is a subject-predicate-object tuple. Access via a query is provided by the W3C Recommendation SPARQL. Upon this middle tier of semantic cyberinfrastructure, we have developed several forms of semantic faceted search, including provenance-awareness. We report on the rapid advances in semantic technologies and tools and how we are sustaining the software path for the required technical advances, as well as the ontology improvements and increased functionality of the semantic applications, including how they are integrated into web-based portals (e.g., Drupal) and web services. Lastly, we indicate future work directions and opportunities for collaboration.
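
    The "basic stored unit" analogy above can be made concrete with a toy store (the class, predicate names, and identifiers are invented for illustration): a triple store keeps subject-predicate-object tuples under one or more indexes, and a query engine answers SPARQL-style patterns by consulting them.

```python
from collections import defaultdict

class TinyTripleStore:
    """Minimal sketch: one predicate index standing in for a real store's
    several (SPO, POS, OSP, ...) indexes."""

    def __init__(self):
        self.by_predicate = defaultdict(list)

    def add(self, s, p, o):
        self.by_predicate[p].append((s, p, o))

    def objects(self, s, p):
        """Analogue of the SPARQL pattern:  <s> <p> ?o ."""
        return [o for s2, _, o in self.by_predicate[p] if s2 == s]

store = TinyTripleStore()
store.add("dataset:flare-2004", "hasInstrument", "instrument:A")
store.add("dataset:flare-2004", "hasProvenance", "workflow:run-17")
print(store.objects("dataset:flare-2004", "hasInstrument"))  # ['instrument:A']
```

    Provenance-aware faceted search then amounts to running such patterns over predicates like the hypothetical `hasProvenance` above.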

  6. Situation exploration in a persistent surveillance system with multidimensional data

    NASA Astrophysics Data System (ADS)

    Habibi, Mohammad S.

    2013-03-01

    There is an emerging need for fusing hard and soft sensor data in an efficient surveillance system to provide accurate estimation of situation awareness. These mostly abstract, multi-dimensional and multi-sensor data pose a great challenge to the user in performing analysis of multi-threaded events efficiently and cohesively. To address this concern, an interactive Visual Analytics (VA) application is developed for rapid assessment and evaluation of different hypotheses based on context-sensitive ontologies spawned from taxonomies describing human/human and human/vehicle/object interactions. A methodology is described here for generating relevant ontologies in a Persistent Surveillance System (PSS), and we demonstrate how they can be utilized in the context of PSS to track and identify group activities pertaining to potential threats. The proposed VA system allows for visual analysis of raw data as well as metadata that have spatiotemporal representation and content-based implications. Additionally in this paper, a technique for rapid search of tagged information contingent on ranking and confidence is explained for analysis of multi-dimensional data. Lastly, the issue of uncertainty associated with processing and interpretation of heterogeneous data is also addressed.

  7. A Web-Based Data-Querying Tool Based on Ontology-Driven Methodology and Flowchart-Based Model

    PubMed Central

    Ping, Xiao-Ou; Chung, Yufang; Liang, Ja-Der; Yang, Pei-Ming; Huang, Guan-Tarn; Lai, Feipei

    2013-01-01

    Background Because of the increased adoption rate of electronic medical record (EMR) systems, more health care records have been increasingly accumulating in clinical data repositories. Therefore, querying the data stored in these repositories is crucial for retrieving the knowledge from such large volumes of clinical data. Objective The aim of this study is to develop a Web-based approach for enriching the capabilities of the data-querying system along the three following considerations: (1) the interface design used for query formulation, (2) the representation of query results, and (3) the models used for formulating query criteria. Methods The Guideline Interchange Format version 3.5 (GLIF3.5), an ontology-driven clinical guideline representation language, was used for formulating the query tasks based on the GLIF3.5 flowchart in the Protégé environment. The flowchart-based data-querying model (FBDQM) query execution engine was developed and implemented for executing queries and presenting the results through a visual and graphical interface. To examine a broad variety of patient data, a clinical data generator was implemented to automatically generate the clinical data in the repository, and the generated data were then employed to evaluate the system. The accuracy and time performance of the system for three medical query tasks relevant to liver cancer were evaluated based on the clinical data generator in experiments with varying numbers of patients. Results In this study, a prototype system was developed to test the feasibility of applying a methodology for building a query execution engine using FBDQMs by formulating query tasks using the existing GLIF. The FBDQM-based query execution engine was used to successfully retrieve the clinical data based on the query tasks formatted using the GLIF3.5 in the experiments with varying numbers of patients. The accuracy of the three queries (ie, “degree of liver damage,” “degree of liver damage when applying a mutually exclusive setting,” and “treatments for liver cancer”) was 100% for all four experiments (10 patients, 100 patients, 1000 patients, and 10,000 patients). Among the three measured query phases, (1) structured query language operations, (2) criteria verification, and (3) other, the first two had the longest execution times. Conclusions The ontology-driven FBDQM-based approach enriched the capabilities of the data-querying system. The adoption of the GLIF3.5 increased the potential for interoperability, shareability, and reusability of the query tasks. PMID:25600078

  8. ODISEES: A New Paradigm in Data Access

    NASA Astrophysics Data System (ADS)

    Huffer, E.; Little, M. M.; Kusterer, J.

    2013-12-01

    As part of its ongoing efforts to improve access to data, the Atmospheric Science Data Center has developed a high-precision Earth Science domain ontology (the 'ES Ontology') implemented in a graph database ('the Semantic Metadata Repository') that is used to store detailed, semantically-enhanced, parameter-level metadata for ASDC data products. The ES Ontology provides the semantic infrastructure needed to drive the ASDC's Ontology-Driven Interactive Search Environment for Earth Science ('ODISEES'), a data discovery and access tool, and will support additional data services such as analytics and visualization. The ES ontology is designed on the premise that naming conventions alone are not adequate to provide the information needed by prospective data consumers to assess the suitability of a given dataset for their research requirements; nor are current metadata conventions adequate to support seamless machine-to-machine interactions between file servers and end-user applications. Data consumers need information not only about what two data elements have in common, but also about how they are different. End-user applications need consistent, detailed metadata to support real-time data interoperability. The ES ontology is a highly precise, bottom-up, queriable model of the Earth Science domain that focuses on critical details about the measurable phenomena, instrument techniques, data processing methods, and data file structures. Earth Science parameters are described in detail in the ES Ontology and mapped to the corresponding variables that occur in ASDC datasets. Variables are in turn mapped to well-annotated representations of the datasets that they occur in, the instrument(s) used to create them, the instrument platforms, the processing methods, etc., creating a linked-data structure that allows both human and machine users to access a wealth of information critical to understanding and manipulating the data. 
The mappings are recorded in the Semantic Metadata Repository as RDF triples. An off-the-shelf Ontology Development Environment and a custom Metadata Conversion Tool together form a hybrid human-machine tool that partially automates the creation of metadata as RDF triples by interfacing with existing metadata repositories and providing a user interface that solicits input from a human user when needed. RDF triples are pushed to the Ontology Development Environment, where a reasoning engine executes a series of inference rules whose antecedent conditions can be satisfied by the initial set of RDF triples, thereby generating the additional detailed metadata that is missing from existing repositories. A SPARQL endpoint, a web-based query service, and a graphical user interface allow prospective data consumers - even those with no familiarity with NASA data products - to search the metadata repository to find and order data products that meet their exact specifications. A web-based API will provide an interface for machine-to-machine transactions.
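The triple-based mapping described above can be sketched without any RDF library: triples as tuples in a set, with wildcard pattern matching standing in for a SPARQL query. All identifiers below (variable, dataset, instrument, and platform names) are invented for illustration and are not actual ODISEES metadata.

```python
# Hypothetical parameter-level metadata as subject-predicate-object triples.
# Identifiers are illustrative only, not real ASDC/ODISEES terms.
triples = {
    ("var:CER_SW_flux", "rdf:type", "es:Variable"),
    ("var:CER_SW_flux", "es:measures", "es:ShortwaveRadiativeFlux"),
    ("var:CER_SW_flux", "es:occursIn", "dataset:CERES_EBAF"),
    ("dataset:CERES_EBAF", "es:producedBy", "instrument:CERES"),
    ("instrument:CERES", "es:mountedOn", "platform:Terra"),
}

def match(s=None, p=None, o=None):
    """Return triples matching a pattern; None acts as a wildcard,
    playing the role a SPARQL variable would in the real repository."""
    return [(ts, tp, to) for (ts, tp, to) in triples
            if s in (None, ts) and p in (None, tp) and o in (None, to)]

# Which datasets contain a variable measuring shortwave radiative flux?
vars_ = [s for s, _, _ in match(p="es:measures", o="es:ShortwaveRadiativeFlux")]
datasets = [o for v in vars_ for _, _, o in match(s=v, p="es:occursIn")]
```

Chaining two pattern matches like this is the linked-data traversal the abstract describes: from measured phenomenon to variable, then from variable to dataset, instrument, and platform.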

  9. Semantic e-Science in Space Physics - A Case Study

    NASA Astrophysics Data System (ADS)

    Narock, T.; Yoon, V.; Merka, J.; Szabo, A.

    2009-05-01

    Several search and retrieval systems for space physics data are currently under development in NASA's heliophysics data environment. We present a case study of two such systems, and describe our efforts in implementing an ontology to aid in data discovery. In doing so we highlight the various aspects of knowledge representation and show how they led to our ontology design, creation, and implementation. We discuss advantages that scientific reasoning allows, as well as difficulties encountered in current tools and standards. Finally, we present a space physics research project conducted with and without e-Science and contrast the two approaches.

  10. Roles and applications of biomedical ontologies in experimental animal science.

    PubMed

    Masuya, Hiroshi

    2012-01-01

    A huge amount of experimental data from past studies has played a vital role in the development of new knowledge and technologies in biomedical science. The importance of computational technologies for the reuse of data, data integration, and knowledge discoveries has also increased, providing means of processing large amounts of data. In recent years, information technologies related to "ontologies" have played more significant roles in the standardization, integration, and knowledge representation of biomedical information. This review paper outlines the history of data integration in biomedical science and its recent trends in relation to the field of experimental animal science.

  11. Comparative analysis of knowledge representation and reasoning requirements across a range of life sciences textbooks.

    PubMed

    Chaudhri, Vinay K; Elenius, Daniel; Goldenkranz, Andrew; Gong, Allison; Martone, Maryann E; Webb, William; Yorke-Smith, Neil

    2014-01-01

    Using knowledge representation for biomedical projects is now commonplace. In previous work, we represented the knowledge found in a college-level biology textbook in a fashion useful for answering questions. We showed that embedding the knowledge representation and question-answering abilities in an electronic textbook helped to engage student interest and improve learning. A natural question that arises from this success, and this paper's primary focus, is whether a similar approach is applicable across a range of life science textbooks. To answer that question, we considered four different textbooks, ranging from a below-introductory college biology text to an advanced, graduate-level neuroscience textbook. For these textbooks, we investigated the following questions: (1) To what extent is knowledge shared between the different textbooks? (2) To what extent can the same upper ontology be used to represent the knowledge found in different textbooks? (3) To what extent can the questions of interest for a range of textbooks be answered by using the same reasoning mechanisms? Our existing modeling and reasoning methods apply especially well both to a textbook that is comparable in level to the text studied in our previous work (i.e., an introductory-level text) and to a textbook at a lower level, suggesting potential for a high degree of portability. Even for the overlapping knowledge found across the textbooks, the level of detail covered in each textbook was different, which requires that the representations must be customized for each textbook. We also found that for advanced textbooks, representing models and scientific reasoning processes was particularly important. With some additional work, our representation methodology would be applicable to a range of textbooks. The requirements for knowledge representation are common across textbooks, suggesting that a shared semantic infrastructure for the life sciences is feasible. 
Because our representation overlaps heavily with those already being used for biomedical ontologies, this work suggests a natural pathway to include such representations as part of the life sciences curriculum at different grade levels.

  12. MBSE-Driven Visualization of Requirements Allocation and Traceability

    NASA Technical Reports Server (NTRS)

    Jackson, Maddalena; Wilkerson, Marcus

    2016-01-01

In a Model Based Systems Engineering (MBSE) infusion effort, there is usually a concerted effort to define the information architecture, ontologies, and patterns that drive the construction and architecture of MBSE models, but less attention is given to the logical follow-on of that effort: how to practically leverage the resulting semantic richness of a well-formed populated model to enable systems engineers to work more effectively, as MBSE promises. While ontologies and patterns are absolutely necessary, an MBSE effort must also design and provide practical demonstration of value (through human-understandable representations of model data that address stakeholder concerns) or it will not succeed. This paper will discuss opportunities that exist for visualization in making the richness of a well-formed model accessible to stakeholders, specifically stakeholders who rely on the model for their day-to-day work. It then discusses the value added by MBSE-driven visualizations in the context of a small case study of interactive visualizations created and used on NASA's proposed Europa Mission. The case study visualizations were created for the purpose of understanding and exploring targeted aspects of requirements flow, allocation, and comparing the structure of that flow-down to a conceptual project decomposition. The work presented in this paper is an example of a product that leverages the richness and formalisms of our knowledge representation while also responding to the quality attributes systems engineers care about.

  13. SEE: structured representation of scientific evidence in the biomedical domain using Semantic Web techniques

    PubMed Central

    2014-01-01

Background Accounts of evidence are vital to evaluate and reproduce scientific findings and integrate data on an informed basis. Currently, such accounts are often inadequate, unstandardized and inaccessible for computational knowledge engineering even though computational technologies, among them those of the semantic web, are ever more employed to represent, disseminate and integrate biomedical data and knowledge. Results We present SEE (Semantic EvidencE), an RDF/OWL based approach for detailed representation of evidence in terms of the argumentative structure of the supporting background for claims even in complex settings. We derive design principles and identify minimal components for the representation of evidence. We specify the Reasoning and Discourse Ontology (RDO), an OWL representation of the model of scientific claims, their subjects, their provenance and their argumentative relations underlying the SEE approach. We demonstrate the application of SEE and illustrate its design patterns in a case study by providing an expressive account of the evidence for certain claims regarding the isolation of the enzyme glutamine synthetase. Conclusions SEE is suited to provide coherent and computationally accessible representations of evidence-related information such as the materials, methods, assumptions, reasoning and information sources used to establish a scientific finding by adopting a consistently claim-based perspective on scientific results and their evidence. SEE allows for extensible evidence representations, in which the level of detail can be adjusted and which can be extended as needed. It supports representation of arbitrarily many consecutive layers of interpretation and attribution and different evaluations of the same data. SEE and its underlying model could be a valuable component in a variety of use cases that require careful representation or examination of evidence for data presented on the semantic web or in other formats. PMID:25093070

  14. SEE: structured representation of scientific evidence in the biomedical domain using Semantic Web techniques.

    PubMed

    Bölling, Christian; Weidlich, Michael; Holzhütter, Hermann-Georg

    2014-01-01

Accounts of evidence are vital to evaluate and reproduce scientific findings and integrate data on an informed basis. Currently, such accounts are often inadequate, unstandardized and inaccessible for computational knowledge engineering even though computational technologies, among them those of the semantic web, are ever more employed to represent, disseminate and integrate biomedical data and knowledge. We present SEE (Semantic EvidencE), an RDF/OWL based approach for detailed representation of evidence in terms of the argumentative structure of the supporting background for claims even in complex settings. We derive design principles and identify minimal components for the representation of evidence. We specify the Reasoning and Discourse Ontology (RDO), an OWL representation of the model of scientific claims, their subjects, their provenance and their argumentative relations underlying the SEE approach. We demonstrate the application of SEE and illustrate its design patterns in a case study by providing an expressive account of the evidence for certain claims regarding the isolation of the enzyme glutamine synthetase. SEE is suited to provide coherent and computationally accessible representations of evidence-related information such as the materials, methods, assumptions, reasoning and information sources used to establish a scientific finding by adopting a consistently claim-based perspective on scientific results and their evidence. SEE allows for extensible evidence representations, in which the level of detail can be adjusted and which can be extended as needed. It supports representation of arbitrarily many consecutive layers of interpretation and attribution and different evaluations of the same data. SEE and its underlying model could be a valuable component in a variety of use cases that require careful representation or examination of evidence for data presented on the semantic web or in other formats.
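SEE's claim-based perspective can be sketched with a toy model: each claim carries its provenance and a list of supporting claims, and the argumentative structure can be flattened for inspection. The glutamine synthetase claims below are paraphrased illustrations, not the actual RDO/OWL representation.

```python
from dataclasses import dataclass, field

@dataclass
class Claim:
    text: str                 # the asserted statement
    source: str               # provenance: where the claim was made
    supported_by: list = field(default_factory=list)  # argumentative support

def evidence_chain(claim, depth=0):
    """Flatten the argumentative structure supporting a claim,
    indenting each layer of interpretation."""
    lines = ["  " * depth + f"{claim.text} [{claim.source}]"]
    for sub in claim.supported_by:
        lines.extend(evidence_chain(sub, depth + 1))
    return lines

# Hypothetical, paraphrased claims about an enzyme isolation experiment.
assay = Claim("Enzyme activity detected in fraction II", "lab notebook, exp. 12")
method = Claim("Fraction II obtained by precipitation", "methods section")
main = Claim("Glutamine synthetase was isolated", "results section",
             supported_by=[assay, method])
report = evidence_chain(main)
```

The recursion mirrors the abstract's point about arbitrarily many consecutive layers of interpretation: each supporting claim may itself be supported, to any depth.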

  15. GALEN: a third generation terminology tool to support a multipurpose national coding system for surgical procedures.

    PubMed

    Trombert-Paviot, B; Rodrigues, J M; Rogers, J E; Baud, R; van der Haring, E; Rassinoux, A M; Abrial, V; Clavel, L; Idir, H

    2000-09-01

Generalised Architecture for Languages, Encyclopaedias and Nomenclatures in medicine (GALEN) has developed a new generation of terminology tools based on a language-independent model that describes the semantics of the domain and allows computer processing and multiple reuses, as well as natural language understanding applications, to facilitate the sharing and maintenance of consistent medical knowledge. During the European Union 4th Framework Programme project GALEN-IN-USE, and later within two contracts with the national health authorities, we applied the modelling and the tools to the development of a new multipurpose coding system for surgical procedures, named CCAM, in a minority-language country, France. On the one hand, we contributed to a language-independent knowledge repository and multilingual semantic dictionaries for multicultural Europe. On the other hand, we support the traditional, highly labour-intensive process of creating a new medical coding system with artificial intelligence tools that use a medically oriented recursive ontology and natural language processing. We used integrated software named CLAW (classification workbench) to process French professional medical language rubrics, produced by domain experts from the national colleges of surgeons, into intermediate dissections and then into the GRAIL reference ontology model representation. From this language-independent concept model representation we generate, with the LNAT natural language generator, controlled French natural language to support the finalization of the linguistic labels (first generation) in relation to the meanings of the conceptual system structure. In addition, the CLAW classification manager proved very powerful for retrieving the initial domain experts' rubrics list with different categories of concepts (second generation) within a semantically structured representation (third generation) that bridges to the detailed terminology of the electronic patient record.

  16. NADM Conceptual Model 1.0 -- A Conceptual Model for Geologic Map Information

    USGS Publications Warehouse


    2004-01-01

    Executive Summary -- The NADM Data Model Design Team was established in 1999 by the North American Geologic Map Data Model Steering Committee (NADMSC) with the purpose of drafting a geologic map data model for consideration as a standard for developing interoperable geologic map-centered databases by state, provincial, and federal geological surveys. The model is designed to be a technology-neutral conceptual model that can form the basis for a web-based interchange format using evolving information technology (e.g., XML, RDF, OWL), and guide implementation of geoscience databases in a common conceptual framework. The intended purpose is to allow geologic information sharing between geologic map data providers and users, independent of local information system implementation. The model emphasizes geoscience concepts and relationships related to information presented on geologic maps. Design has been guided by an informal requirements analysis, documentation of existing databases, technology developments, and other standardization efforts in the geoscience and computer-science communities. A key aspect of the model is the notion that representation of the conceptual framework (ontology) that underlies geologic map data must be part of the model, because this framework changes with time and understanding, and varies between information providers. The top level of the model distinguishes geologic concepts, geologic representation concepts, and metadata. The geologic representation part of the model provides a framework for representing the ontology that underlies geologic map data through a controlled vocabulary, and for establishing the relationships between this vocabulary and a geologic map visualization or portrayal. Top-level geologic classes in the model are Earth material (substance), geologic unit (parts of the Earth), geologic age, geologic structure, fossil, geologic process, geologic relation, and geologic event.

  17. A patient workflow management system built on guidelines.

    PubMed Central

    Dazzi, L.; Fassino, C.; Saracco, R.; Quaglini, S.; Stefanelli, M.

    1997-01-01

To provide high quality, shared, and distributed medical care, clinical and organizational issues need to be integrated. This work describes a methodology for developing a Patient Workflow Management System, based on a detailed model of both the medical work process and the organizational structure. We assume that the medical work process is represented through clinical practice guidelines, and that an ontological description of the organization is available. Thus, we developed tools 1) to acquire the medical knowledge contained in a guideline, 2) to translate the derived formalized guideline into a computational formalism, precisely a Petri net, and 3) to maintain different representation levels. The high-level representation guarantees that the Patient Workflow follows the guideline prescriptions, while the low-level representation takes into account the specific characteristics of the organization and allows allocating resources for managing a specific patient in daily practice. PMID:9357606
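The guideline-to-Petri-net translation can be illustrated with a minimal net: places hold tokens representing the patient's state, and transitions model guideline steps that fire when all their input places are marked. The two-step guideline below (assess, then either treat or refer) is hypothetical, not taken from the paper.

```python
class PetriNet:
    """Minimal place/transition net with integer markings."""

    def __init__(self):
        self.marking = {}        # place -> token count
        self.transitions = {}    # name -> (input places, output places)

    def add_transition(self, name, inputs, outputs):
        self.transitions[name] = (inputs, outputs)

    def enabled(self, name):
        ins, _ = self.transitions[name]
        return all(self.marking.get(p, 0) > 0 for p in ins)

    def fire(self, name):
        ins, outs = self.transitions[name]
        assert self.enabled(name), f"{name} is not enabled"
        for p in ins:                      # consume input tokens
            self.marking[p] -= 1
        for p in outs:                     # produce output tokens
            self.marking[p] = self.marking.get(p, 0) + 1

# Hypothetical guideline: assess the patient, then treat OR refer.
net = PetriNet()
net.marking = {"admitted": 1}
net.add_transition("assess", ["admitted"], ["assessed"])
net.add_transition("treat", ["assessed"], ["treated"])
net.add_transition("refer", ["assessed"], ["referred"])
net.fire("assess")
```

After "assess" fires, "treat" and "refer" compete for the same token, which is how a Petri net naturally models an exclusive choice between guideline branches.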

  18. OWL-based reasoning methods for validating archetypes.

    PubMed

    Menárguez-Tortosa, Marcos; Fernández-Breis, Jesualdo Tomás

    2013-04-01

Some modern Electronic Healthcare Record (EHR) architectures and standards are based on the dual model-based architecture, which defines two conceptual levels: reference model and archetype model. Such architectures represent EHR domain knowledge by means of archetypes, which are considered by many researchers to play a fundamental role for the achievement of semantic interoperability in healthcare. Consequently, formal methods for validating archetypes are necessary. In recent years, there has been an increasing interest in exploring how semantic web technologies in general, and ontologies in particular, can facilitate the representation and management of archetypes, including binding to terminologies, but no solution based on such technologies has been provided to date to validate archetypes. Our approach represents archetypes by means of OWL ontologies. This makes it possible to combine the two levels of the dual model-based architecture in one modeling framework, which can also integrate terminologies available in OWL format. The validation method consists of reasoning on those ontologies to find modeling errors in archetypes: incorrect restrictions over the reference model, non-conformant archetype specializations and inconsistent terminological bindings. The archetypes available in the repositories supported by the openEHR Foundation and the NHS Connecting for Health Program, which are the two largest publicly available ones, have been analyzed with our validation method. For this purpose, we have implemented a software tool called Archeck. Our results show that around 1/5 of archetype specializations contain modeling errors, the most common mistakes being related to coded terms and terminological bindings. The analysis of each repository reveals that different patterns of errors are found in both repositories. This result reinforces the need for making serious efforts in improving archetype design processes. Copyright © 2012 Elsevier Inc. All rights reserved.
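One of the error classes the paper checks for, non-conformant archetype specializations, can be sketched as a simple constraint check: a child archetype may only narrow, never widen, the cardinality intervals inherited from its parent. This is a simplified stand-in for the OWL reasoning Archeck actually performs, and the node names are hypothetical.

```python
def conformant(parent, child):
    """Return True if every cardinality interval in the child archetype
    lies within the corresponding interval of the parent archetype.
    Nodes absent from the parent default to the unconstrained (0, inf)."""
    for node, (lo, hi) in child.items():
        p_lo, p_hi = parent.get(node, (0, float("inf")))
        if lo < p_lo or hi > p_hi:
            return False
    return True

# Hypothetical archetype node with its (min, max) occurrence constraint.
parent = {"blood_pressure.events": (1, 5)}
narrowed = {"blood_pressure.events": (1, 3)}   # valid specialization
widened = {"blood_pressure.events": (0, 3)}    # widens the lower bound
```

In the OWL setting the same violation would surface as an unsatisfiable class or an unintended inference; the interval check above only captures the underlying conformance rule.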

  19. PATIKA: an integrated visual environment for collaborative construction and analysis of cellular pathways.

    PubMed

    Demir, E; Babur, O; Dogrusoz, U; Gursoy, A; Nisanci, G; Cetin-Atalay, R; Ozturk, M

    2002-07-01

The availability of entire genome sequences shifts scientific curiosity towards large-scale identification of genome function, as in genome studies. In the near future, data about cellular processes at the molecular level will accumulate at an accelerating rate as a result of proteomics studies. In this regard, it is essential to develop tools for storing, integrating, accessing, and analyzing this data effectively. We define an ontology for a comprehensive representation of cellular events. The ontology presented here enables integration of fragmented or incomplete pathway information and supports manipulation and incorporation of the stored data, as well as multiple levels of abstraction. Based on this ontology, we present the architecture of an integrated environment named Patika (Pathway Analysis Tool for Integration and Knowledge Acquisition). Patika is composed of a server-side, scalable, object-oriented database and client-side editors to provide an integrated, multi-user environment for visualizing and manipulating networks of cellular events. This tool features automated pathway layout, functional computation support, advanced querying and a user-friendly graphical interface. We expect that Patika will be a valuable tool for rapid knowledge acquisition, microarray generated large-scale data interpretation, disease gene identification, and drug development. A prototype of Patika is available upon request from the authors.

  20. Ockham's Razor and Plato's Beard.

    ERIC Educational Resources Information Center

    Orton, Robert E.

    1995-01-01

    Response to an earlier article in JRME in which the authors propose a constructivist alternative to the representational view of mind. Argues that the original article misinterprets the postepistemological perspective, confuses ontological and epistemological issues, and mistakes the pragmatic force of the constructivist argument. (45 references)…

  1. Ontology Mappings to Improve Learning Resource Search

    ERIC Educational Resources Information Center

    Gasevic, Dragan; Hatala, Marek

    2006-01-01

This paper proposes an ontology mapping-based framework that allows searching for learning resources using multiple ontologies. The present applications of ontologies in e-learning use various ontologies (e.g., domain, curriculum, context), but they do not give a solution on how to interoperate e-learning systems based on different ontologies. The…

  2. Proposed actions are no actions: re-modeling an ontology design pattern with a realist top-level ontology.

    PubMed

    Seddig-Raufie, Djamila; Jansen, Ludger; Schober, Daniel; Boeker, Martin; Grewe, Niels; Schulz, Stefan

    2012-09-21

Ontology Design Patterns (ODPs) are representational artifacts devised to offer solutions for recurring ontology design problems. They promise to enhance the ontology building process in terms of flexibility, re-usability and expansion, and to make the result of ontology engineering more predictable. In this paper, we analyze ODP repositories and investigate their relation with upper-level ontologies. In particular, we compare the BioTop upper ontology to the Action ODP from the NeOn ODP repository. In view of the differences in the respective approaches, we investigate whether the Action ODP can be embedded into BioTop. We demonstrate that this requires re-interpreting the meaning of classes of the NeOn Action ODP in the light of the precepts of realist ontologies. As a result, the re-design required clarifying the ontological commitment of the ODP classes by assigning them to top-level categories. Thus, ambiguous definitions are avoided. Classes of real entities are clearly distinguished from classes of information artifacts. The proposed approach avoids the commitment to the existence of unclear future entities which underlies the NeOn Action ODP. Our re-design is parsimonious in the sense that existing BioTop content proved to be largely sufficient to define the different types of actions and plans. The proposed model demonstrates that an expressive upper-level ontology provides enough resources and expressivity to represent even complex ODPs, here shown with the different flavors of Action as proposed in the NeOn ODP. The advantage of ODP inclusion into a top-level ontology is the given predetermined dependency of each class, an existing backbone structure and well-defined relations. Our comparison shows that the use of some ODPs is more likely to cause problems for ontology developers, rather than to guide them.
Besides the structural properties, the explanations of classification results were particularly hard to grasp for 'self-sufficient' ODPs compared with implemented, 'embedded' upper-level structures which, as in the case of BioTop, offer a detailed description of classes and relations in an axiomatic network. This ensures unambiguous interpretation and provides more concise constraints to leverage in the ontology engineering process.

  3. Dimensions of Drug Information

    ERIC Educational Resources Information Center

    Sharp, Mark E.

    2011-01-01

    The high number, heterogeneity, and inadequate integration of drug information resources constitute barriers to many drug information usage scenarios. In the biomedical domain there is a rich legacy of knowledge representation in ontology-like structures that allows us to connect this problem both to the very mature field of library and…

  4. Surveying Medieval Archaeology: a New Form for Harris Paradigm Linking Photogrammetry and Temporal Relations

    NASA Astrophysics Data System (ADS)

    Drap, P.; Papini, O.; Pruno, E.; Nucciotti, M.; Vannini, G.

    2017-02-01

The paper presents some reflections concerning an interdisciplinary project between Medieval Archaeologists from the University of Florence (Italy) and ICT researchers from CNRS LSIS of Marseille (France), aiming towards a connection between 3D spatial representation and archaeological knowledge. It is well known that Laser Scanner, Photogrammetry and Computer Vision are very attractive tools for archaeologists, although the integration of representation of space and representation of archaeological time has not yet found a methodological standard of reference. We try to develop an integrated system for archaeological 3D survey and all other types of archaeological data and knowledge through integrating observable (material) and non-graphic (interpretive) data. Survey plays a central role, since it is both a metric representation of the archaeological site and, to a wider extent, an interpretation of it (being also a common basis for communication between the two teams). More specifically 3D survey is crucial, allowing archaeologists to connect actual spatial assets to the stratigraphic formation processes (i.e. to the archaeological time) and to translate spatial observations into historical interpretation of the site. We propose a common formalism for describing photogrammetrical survey and archaeological knowledge stemming from ontologies: indeed, ontologies are fully used to model and store 3D data and archaeological knowledge. We equip this formalism with a qualitative representation of time. Stratigraphic analyses (both of excavated deposits and of upstanding structures) are closely related to E. C. Harris' theory of the "Stratigraphic Unit" ("US" from now on). Every US is connected to the others by geometric, topological and, possibly, temporal links, and all are recorded by the 3D photogrammetric survey.
However, the limitations of the Harris Matrix approach lead us to use another representation formalism for stratigraphic relationships, namely Qualitative Constraint Networks (QCNs), successfully used in the domain of knowledge representation and reasoning in artificial intelligence for representing temporal relations.
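The temporal side of such a constraint network can be sketched with a single relation, "deposited before", propagated by composition; a stratigraphic unit ordered before itself signals an inconsistent stratigraphy. This is a deliberate simplification of the qualitative calculi (with their full relation algebras) that the paper refers to, and the US labels are invented.

```python
from itertools import product

def closure(before):
    """Transitively propagate 'deposited before' relations between
    stratigraphic units: (a before b) and (b before c) imply (a before c)."""
    rel = set(before)
    changed = True
    while changed:
        changed = False
        for (a, b), (c, d) in product(list(rel), list(rel)):
            if b == c and (a, d) not in rel:
                rel.add((a, d))
                changed = True
    return rel

def consistent(before):
    """A stratigraphy is inconsistent if some unit precedes itself."""
    return all(a != b for a, b in closure(before))

# Hypothetical units: US3 lies under US2, which lies under US1.
relations = {("US3", "US2"), ("US2", "US1")}
```

Contradictory field observations, such as recording both (US1 before US2) and (US2 before US1), produce a self-loop in the closure, which is exactly what a QCN consistency check detects.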

  5. Assessing the impact of case sensitivity and term information gain on biomedical concept recognition.

    PubMed

    Groza, Tudor; Verspoor, Karin

    2015-01-01

    Concept recognition (CR) is a foundational task in the biomedical domain. It supports the important process of transforming unstructured resources into structured knowledge. To date, several CR approaches have been proposed, most of which focus on a particular set of biomedical ontologies. Their underlying mechanisms vary from shallow natural language processing and dictionary lookup to specialized machine learning modules. However, no prior approach considers the case sensitivity characteristics and the term distribution of the underlying ontology on the CR process. This article proposes a framework that models the CR process as an information retrieval task in which both case sensitivity and the information gain associated with tokens in lexical representations (e.g., term labels, synonyms) are central components of a strategy for generating term variants. The case sensitivity of a given ontology is assessed based on the distribution of so-called case sensitive tokens in its terms, while information gain is modelled using a combination of divergence from randomness and mutual information. An extensive evaluation has been carried out using the CRAFT corpus. Experimental results show that case sensitivity awareness leads to an increase of up to 0.07 F1 against a non-case sensitive baseline on the Protein Ontology and GO Cellular Component. Similarly, the use of information gain leads to an increase of up to 0.06 F1 against a standard baseline in the case of GO Biological Process and Molecular Function and GO Cellular Component. Overall, subject to the underlying token distribution, these methods lead to valid complementary strategies for augmenting term label sets to improve concept recognition.
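The information-gain idea can be sketched by approximating a token's information content from its rarity across the ontology's term labels, then dropping low-information tokens to generate term variants. The rarity score below is a crude stand-in for the paper's combination of divergence from randomness and mutual information, and the labels are invented, not drawn from GO or the Protein Ontology.

```python
import math
from collections import Counter

def token_information(term_labels):
    """Score each token by its rarity across all labels: frequent tokens
    (e.g. connectives) carry little information for concept recognition."""
    counts = Counter(t for label in term_labels for t in label.lower().split())
    total = sum(counts.values())
    return {t: -math.log(c / total) for t, c in counts.items()}

def variants(label, info, threshold):
    """Generate a term variant by dropping tokens whose information
    content falls below the threshold, keeping the original label too."""
    tokens = label.lower().split()
    kept = [t for t in tokens if info[t] >= threshold]
    return {" ".join(tokens), " ".join(kept)} - {""}

# Hypothetical ontology labels; 'of' recurs and scores low.
labels = ["regulation of growth", "regulation of death",
          "activation of growth", "cell death"]
info = token_information(labels)
```

Matching both "regulation of growth" and the augmented variant "regulation growth" against text is the kind of lexical-variant strategy the abstract credits with the F1 improvements.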

  6. Utilizing a structural meta-ontology for family-based quality assurance of the BioPortal ontologies.

    PubMed

    Ochs, Christopher; He, Zhe; Zheng, Ling; Geller, James; Perl, Yehoshua; Hripcsak, George; Musen, Mark A

    2016-06-01

    An Abstraction Network is a compact summary of an ontology's structure and content. In previous research, we showed that Abstraction Networks support quality assurance (QA) of biomedical ontologies. The development of an Abstraction Network and its associated QA methodologies, however, is a labor-intensive process that previously was applicable only to one ontology at a time. To improve the efficiency of the Abstraction-Network-based QA methodology, we introduced a QA framework that uses uniform Abstraction Network derivation techniques and QA methodologies that are applicable to whole families of structurally similar ontologies. For the family-based framework to be successful, it is necessary to develop a method for classifying ontologies into structurally similar families. We now describe a structural meta-ontology that classifies ontologies according to certain structural features that are commonly used in the modeling of ontologies (e.g., object properties) and that are important for Abstraction Network derivation. Each class of the structural meta-ontology represents a family of ontologies with identical structural features, indicating which types of Abstraction Networks and QA methodologies are potentially applicable to all of the ontologies in the family. We derive a collection of 81 families, corresponding to classes of the structural meta-ontology, that enable a flexible, streamlined family-based QA methodology, offering multiple choices for classifying an ontology. The structure of 373 ontologies from the NCBO BioPortal is analyzed and each ontology is classified into multiple families modeled by the structural meta-ontology. Copyright © 2016 Elsevier Inc. All rights reserved.
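The family classification can be sketched as grouping ontologies by the set of structural features they exhibit: ontologies with identical feature sets fall into one family and are candidates for the same Abstraction Network derivation techniques. The feature names and ontology names below are hypothetical, not the actual meta-ontology classes or BioPortal entries.

```python
# Hypothetical structural features tracked by the meta-ontology.
FEATURES = ("object_properties", "attribute_relationships", "multiple_roots")

def family(ontology_features):
    """An ontology's family is the (frozen) set of recognized structural
    features it uses; identical sets mean identical family membership."""
    return frozenset(f for f in ontology_features if f in FEATURES)

# Toy ontologies and the features observed in their structure.
ontologies = {
    "OntoA": {"object_properties"},
    "OntoB": {"object_properties"},
    "OntoC": {"object_properties", "multiple_roots"},
}

families = {}
for name, feats in ontologies.items():
    families.setdefault(family(feats), []).append(name)
```

Using a frozenset as the grouping key makes family membership order-independent, mirroring the idea that each meta-ontology class stands for one combination of structural features.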

  7. Towards refactoring the Molecular Function Ontology with a UML profile for function modeling.

    PubMed

    Burek, Patryk; Loebe, Frank; Herre, Heinrich

    2017-10-04

    Gene Ontology (GO) is the largest resource for cataloging gene products. This resource grows steadily and, naturally, this growth raises issues regarding the structure of the ontology. Moreover, modeling and refactoring large ontologies such as GO is generally far from being simple, as a whole as well as when focusing on certain aspects or fragments. It seems that human-friendly graphical modeling languages such as the Unified Modeling Language (UML) could be helpful in connection with these tasks. We investigate the use of UML for making the structural organization of the Molecular Function Ontology (MFO), a sub-ontology of GO, more explicit. More precisely, we present a UML dialect, called the Function Modeling Language (FueL), which is suited for capturing functions in an ontologically founded way. FueL is equipped, among other features, with language elements that arise from studying patterns of subsumption between functions. We show how to use this UML dialect for capturing the structure of molecular functions. Furthermore, we propose and discuss some refactoring options concerning fragments of MFO. FueL enables the systematic, graphical representation of functions and their interrelations, including making information explicit that is currently either implicit in MFO or is mainly captured in textual descriptions. Moreover, the considered subsumption patterns lend themselves to the methodical analysis of refactoring options with respect to MFO. On this basis we argue that the approach can increase the comprehensibility of the structure of MFO for humans and can support communication, for example, during revision and further development.

  8. The Non-Coding RNA Ontology (NCRO): a comprehensive resource for the unification of non-coding RNA biology.

    PubMed

    Huang, Jingshan; Eilbeck, Karen; Smith, Barry; Blake, Judith A; Dou, Dejing; Huang, Weili; Natale, Darren A; Ruttenberg, Alan; Huan, Jun; Zimmermann, Michael T; Jiang, Guoqian; Lin, Yu; Wu, Bin; Strachan, Harrison J; He, Yongqun; Zhang, Shaojie; Wang, Xiaowei; Liu, Zixing; Borchert, Glen M; Tan, Ming

    2016-01-01

    In recent years, sequencing technologies have enabled the identification of a wide range of non-coding RNAs (ncRNAs). Unfortunately, annotation and integration of ncRNA data have lagged behind their identification. Given the large quantity of information being obtained in this area, there is an urgent need to integrate what is being discovered by a broad range of relevant communities. To this end, the Non-Coding RNA Ontology (NCRO) is being developed to provide a systematically structured and precisely defined controlled vocabulary for the domain of ncRNAs, thereby facilitating the discovery, curation, analysis, and exchange of, and reasoning over, data about the structures of ncRNAs, their molecular and cellular functions, and their impacts upon phenotypes. The goal of NCRO is to serve as a common resource for annotations of diverse research in a way that will significantly enhance integrative and comparative analysis of the myriad resources currently housed in disparate sources. It is our belief that the NCRO ontology can play an important role in the comprehensive unification of ncRNA biology and, indeed, fill a critical gap in both the Open Biological and Biomedical Ontologies (OBO) Library and the National Center for Biomedical Ontology (NCBO) BioPortal. Our initial focus is on the ontological representation of small regulatory ncRNAs, which we see as the first step in providing a resource for the annotation of data about all forms of ncRNAs. The NCRO ontology is free and open to all users, accessible at: http://purl.obolibrary.org/obo/ncro.owl.

  9. The Ontological Representation of Death: A Scale to Measure the Idea of Annihilation Versus Passage.

    PubMed

    Testoni, Ines; Ancona, Dorella; Ronconi, Lucia

    2015-01-01

    Since the borders between natural life and death have been blurred by technology, discussions and practices regarding death in Western societies have become endless. Studies in this area touch on all the major topics of psychology, sociology, and philosophy. From a psychological point of view, research has produced many instruments for measuring death anxiety, fear, threat, depression, and meaning of life, and the resulting profiles of death attitudes are innumerable. This research presents the validation of a new attitude scale that conjoins psychological and philosophical dimensions. The scale may be useful because the ontological idea of death has not yet been considered in research. The hypothesis is that believing death to be absolute annihilation differs from being sure that it is a passage or a transformation of one's personal identity, and that the former idea produces greater inner suffering. To test this possibility, we analyzed the correlations between the Testoni Death Representation Scale and the Beck Hopelessness Scale, the Suicide Resilience Inventory-25, and the Reasons for Living Inventory. The results confirm the hypothesis, showing that the representation of death as total annihilation is positively correlated with hopelessness and negatively correlated with resilience.

  10. RFID sensor-tags feeding a context-aware rule-based healthcare monitoring system.

    PubMed

    Catarinucci, Luca; Colella, Riccardo; Esposito, Alessandra; Tarricone, Luciano; Zappatore, Marco

    2012-12-01

    Along with the growth of the aging population and the need for efficient wellness systems, there is mounting demand for new technological solutions able to support remote and proactive healthcare. An answer to this need could be provided by the joint use of emerging Radio Frequency Identification (RFID) technologies and advanced software approaches. This paper proposes a context-aware infrastructure for ubiquitous and pervasive monitoring of heterogeneous healthcare-related scenarios, fed by RFID-based wireless sensor nodes. The software framework is based on a general-purpose architecture exploiting three key implementation choices: ontology representation, the multi-agent paradigm, and rule-based logic. On the hardware side, the sensing and gathering of context data is delegated to a new Enhanced RFID Sensor-Tag. This device makes possible the easy integration of RFID with generic sensors, guaranteeing flexibility while preserving the simplicity of use and low cost of UHF RFID technology. The system is efficient and versatile, and its customization to new scenarios requires very little effort, essentially limited to updating or extending the ontology. Its effectiveness is demonstrated by reporting both the customization effort and the performance results obtained from validation in two different healthcare monitoring contexts.

  11. Making species checklists understandable to machines - a shift from relational databases to ontologies.

    PubMed

    Laurenne, Nina; Tuominen, Jouni; Saarenmaa, Hannu; Hyvönen, Eero

    2014-01-01

    The scientific names of plants and animals play a major role in the Life Sciences, as information is indexed, integrated, and searched using scientific names. The main problem with names is their ambiguity: more than one name may point to the same taxon, and multiple taxa may share the same name. In addition, scientific names change over time, which leaves them open to various interpretations. Applying machine-understandable semantics to these names enables efficient processing of biological content in information systems. The first step is to use unique persistent identifiers instead of name strings when referring to taxa. The most commonly used identifiers are Life Science Identifiers (LSIDs), which are traditionally used in relational databases, and more recently HTTP URIs, which are applied on the Semantic Web by Linked Data applications. We introduce two models for expressing taxonomic information in the form of species checklists. First, we show how species checklists are presented in a relational database system using LSIDs. Then, in order to gain a more detailed representation of taxonomic information, we introduce the meta-ontology TaxMeOn to model the same content as Semantic Web ontologies in which taxa are identified using HTTP URIs. We also explore how changes in scientific names can be managed over time. The use of HTTP URIs is preferable for presenting the taxonomic information of species checklists: unlike an LSID, an HTTP URI identifies a taxon and operates as a web address from which additional information about the taxon can be located. This enables the integration of biological data from different sources on the web using Linked Data principles and prevents the formation of information silos. The Linked Data approach allows a user to assemble information and evaluate the complexity of taxonomic data based on conflicting views of taxonomic classifications. Using HTTP URIs and Semantic Web technologies also facilitates representing the semantics of biological data and, in this way, creating more "intelligent" biological applications and services.
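    The identifier shift the abstract describes can be illustrated with a few plain-Python triples (no RDF library; the URI, names, and predicates below are invented for the sketch):

```python
# One taxon, one stable HTTP URI, several name strings: the URI persists
# while names accumulate over time. All values are invented for illustration.
triples = set()

TAXON = "http://example.org/taxon/12345"  # hypothetical HTTP URI
triples.add((TAXON, "label", "Larus fuscus"))
triples.add((TAXON, "altLabel", "Larus fuscus fuscus"))  # a later synonym

def names_of(taxon_uri):
    """Every name string attached to a taxon URI."""
    return {o for s, p, o in triples if s == taxon_uri}

def taxa_named(name):
    """Resolve an ambiguous name string to all taxon URIs carrying it."""
    return {s for s, p, o in triples if o == name}
```

    Because the identifier is also a web address, a Linked Data client can dereference it to fetch further statements, which is the property an LSID lacks.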

  12. CHAMPION: Intelligent Hierarchical Reasoning Agents for Enhanced Decision Support

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hohimer, Ryan E.; Greitzer, Frank L.; Noonan, Christine F.

    2011-11-15

    We describe the design and development of an advanced reasoning framework employing semantic technologies, organized within a hierarchy of computational reasoning agents that interpret domain-specific information. Inspired by the pattern-recognition functions performed by the human neocortex, the CHAMPION reasoning framework represents a new computational modeling approach that derives invariant knowledge representations through memory-prediction belief-propagation processes driven by formal ontological language specification and semantic technologies. The CHAMPION framework shows promise for enhancing complex decision making in diverse problem domains, including cyber security, nonproliferation, and energy consumption analysis.

  13. The Archival Photograph and Its Meaning: Formalisms for Modeling Images

    ERIC Educational Resources Information Center

    Benson, Allen C.

    2009-01-01

    This article explores ontological principles and their potential applications in the formal description of archival photographs. Current archival descriptive practices are reviewed and the larger question is addressed: do archivists who are engaged in describing photographs need a more formalized system of representation, or do existing encoding…

  14. Querying phenotype-genotype relationships on patient datasets using semantic web technology: the example of Cerebrotendinous xanthomatosis.

    PubMed

    Taboada, María; Martínez, Diego; Pilo, Belén; Jiménez-Escrig, Adriano; Robinson, Peter N; Sobrido, María J

    2012-07-31

    Semantic Web technology can considerably catalyze translational genetics and genomics research in medicine, where the interchange of information between basic research and clinical levels becomes crucial. This exchange involves mapping abstract phenotype descriptions from research resources, such as knowledge databases and catalogs, to unstructured datasets produced through experimental methods and clinical practice. This is especially true for the construction of mutation databases. This paper presents a way of harmonizing abstract phenotype descriptions with patient data from clinical practice, and of querying this dataset about relationships between phenotypes and genetic variants at different levels of abstraction. Given the current availability of ontological and terminological resources that have already reached some consensus in biomedicine, a reuse-based ontology engineering approach was followed. The proposed approach uses the Web Ontology Language (OWL) to represent the phenotype ontology and the patient model, the Semantic Web Rule Language (SWRL) to bridge the gap between phenotype descriptions and clinical data, and the Semantic Query-Enhanced Web Rule Language (SQWRL) to query relevant phenotype-genotype bidirectional relationships. The work tests the use of Semantic Web technology in the biomedical research domain of cerebrotendinous xanthomatosis (CTX), using a real dataset and ontologies. A framework to query relevant phenotype-genotype bidirectional relationships is provided. Phenotype descriptions and patient data were harmonized by defining 28 Horn-like rules in terms of the OWL concepts. In total, 24 patterns of SQWRL queries were designed following the initial list of competency questions. As the approach is based on OWL, the semantics of the framework follow the open-world assumption of the standard logical model.
    This work demonstrates how Semantic Web technologies can be used to support the flexible representation and computational inference mechanisms required to query patient datasets at different levels of abstraction. The open-world assumption is especially well suited to describing only partially known phenotype-genotype relationships in a way that is easily extensible. In the future, this type of approach could offer researchers a valuable resource for inferring new data from patient data for statistical analysis in translational research. In conclusion, phenotype description formalization and mapping to clinical data are two key elements for interchanging knowledge between basic and clinical research.
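    A Horn-like bridging rule of the kind the paper defines 28 of can be sketched in plain Python, standing in for SWRL; the finding label, phenotype identifier, and variant string below are all invented for illustration:

```python
# Toy patient records; identifiers and values are hypothetical.
patients = [
    {"id": "p1", "findings": {"tendon xanthoma"}, "variants": {"CYP27A1:c.100C>T"}},
    {"id": "p2", "findings": {"cataract"}, "variants": set()},
]

def apply_rules(patient):
    """Horn-like rule: hasFinding(?p, "tendon xanthoma") ->
    hasPhenotype(?p, "Phenotype:TendonXanthoma")."""
    phenotypes = set()
    if "tendon xanthoma" in patient["findings"]:
        phenotypes.add("Phenotype:TendonXanthoma")
    return phenotypes

def with_phenotype_and_variant(phenotype, gene):
    """SQWRL-style query: patients inferred to show a phenotype and
    carrying a variant in the given gene."""
    return [p["id"] for p in patients
            if phenotype in apply_rules(p)
            and any(v.startswith(gene + ":") for v in p["variants"])]
```

    Under the open-world assumption, a patient missing from the answer set is simply not known to match; nothing is concluded about what is absent from the record.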

  15. Extending the DIDEO ontology to include entities from the natural product drug interaction domain of discourse.

    PubMed

    Judkins, John; Tay-Sontheimer, Jessica; Boyce, Richard D; Brochhausen, Mathias

    2018-05-09

    Prompted by the frequency of concomitant use of prescription drugs with natural products, and the lack of knowledge regarding the impact of pharmacokinetic-based natural product-drug interactions (PK-NPDIs), the United States National Center for Complementary and Integrative Health has established a center of excellence for PK-NPDI. The Center is creating a public database to help researchers (primarily pharmacologists and medicinal chemists) share and access data, results, and methods from PK-NPDI studies. In order to represent the semantics of the data and foster interoperability, we are extending the Drug-Drug Interaction and Evidence Ontology (DIDEO) to include definitions for terms used by the data repository. This is feasible due to a number of similarities between pharmacokinetic drug-drug interactions and PK-NPDIs. To achieve this, we set up an iterative domain analysis with the following steps: in Step 1, PK-NPDI domain experts produce a list of terms and definitions based on data from PK-NPDI studies; in Step 2, an ontology expert creates ontologically appropriate classes and definitions from the list, along with class axioms; in Step 3, an iterative editing process takes place during which the domain experts and the ontology experts review, assess, and amend class labels and definitions; and in Step 4, the ontology expert implements the new classes in the DIDEO development branch. This workflow often results in different labels and definitions for the new classes in DIDEO than the domain experts initially provided; the latter are preserved in DIDEO as separate annotations. Step 1 resulted in a list of 344 terms. During Step 2 we found that 9 of these terms already existed in DIDEO, and 6 existed in other OBO Foundry ontologies. These 6 were imported into DIDEO; additional terms from multiple OBO Foundry ontologies were also imported, either to serve as superclasses for new terms in the initial list or to build axioms for these terms.
At the time of writing, 7 terms have definitions ready for review (Step 2), 64 are ready for implementation (Step 3) and 112 have been pushed to DIDEO (Step 4). Step 2 also suggested that 26 terms of the original list were redundant and did not need implementation; the domain experts agreed to remove them. Step 4 resulted in many terms being added to DIDEO that help to provide an additional layer of granularity in describing experimental conditions and results, e.g. transfected cultured cells used in metabolism studies and chemical reactions used in measuring enzyme activity. These terms also were integrated into the NaPDI repository. We found DIDEO to provide a sound foundation for semantic representation of PK-NPDI terms, and we have shown the novelty of the project in that DIDEO is the only ontology in which NPDI terms are formally defined.

  16. Development of structured ICD-10 and its application to computer-assisted ICD coding.

    PubMed

    Imai, Takeshi; Kajino, Masayuki; Sato, Megumi; Ohe, Kazuhiko

    2010-01-01

    This paper presents: (1) a framework for the formal representation of ICD10, which functions as a bridge between ontological information and natural language expressions; and (2) a methodology for using formally described ICD10 for computer-assisted ICD coding. First, we analyzed and structured the meanings of categories in 15 chapters of ICD10. Then we expanded the structured ICD10 (S-ICD10) by adding subordinate concepts and labels derived from Japanese Standard Disease Names. The information model describing the formal representation was refined repeatedly; the resultant model includes 74 types of semantic links. We also developed an ICD coding module based on S-ICD10 and a 'Coding Principle,' which achieved high accuracy (>70%) for four chapters. These results not only demonstrate the basic feasibility of our coding framework but might also inform the development of the information model for the formal description framework in the ICD11 revision.

  17. The Pitfalls of Thesaurus Ontologization – the Case of the NCI Thesaurus

    PubMed Central

    Schulz, Stefan; Schober, Daniel; Tudose, Ilinca; Stenzhorn, Holger

    2010-01-01

    Thesauri that are “ontologized” into OWL-DL semantics are highly prone to modeling errors that result from falsely interpreting existential restrictions. We investigated the OWL-DL representation of the NCI Thesaurus (NCIT) in order to assess the correctness of its existential restrictions. A random sample of 354 axioms using the someValuesFrom operator was taken. According to a rating performed by two domain experts, roughly half of these examples, and in consequence more than 76,000 axioms in the OWL-DL version, make incorrect assertions if interpreted according to description logics semantics. These axioms therefore constitute a huge source of unintended models, rendering most logic-based reasoning unreliable. After identifying typical error patterns, we discuss some possible improvements. Our recommendation is to either amend the problematic axioms in the OWL-DL formalization or to consider a less strict representational format. PMID:21347074
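    The core of the problem is that `C SubClassOf (R some D)` asserts that *every* instance of C bears an R-link to some D, whereas a thesaurus-style reading ("C is associated with D") is much weaker. A small plain-Python sketch of checking that strong semantics against toy data (classes, relation, and individuals are invented):

```python
# Toy ABox: individuals with their classes, plus their relation links.
individuals = {"cell1": {"Cell"}, "cell2": {"Cell"}, "nucleus1": {"Nucleus"}}
links = {("cell1", "has_part", "nucleus1")}

def counterexamples(cls, rel, filler):
    """Instances of `cls` with no `rel` link to any instance of `filler`;
    each one falsifies the axiom  cls SubClassOf (rel some filler)."""
    return [i for i, classes in individuals.items()
            if cls in classes
            and not any(s == i and p == rel and filler in individuals.get(o, set())
                        for s, p, o in links)]
```

    Here `cell2` falsifies `Cell SubClassOf (has_part some Nucleus)`, which is exactly the kind of unintended consequence a mistranslated thesaurus association produces at scale.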

  18. Using AberOWL for fast and scalable reasoning over BioPortal ontologies.

    PubMed

    Slater, Luke; Gkoutos, Georgios V; Schofield, Paul N; Hoehndorf, Robert

    2016-08-08

    Reasoning over biomedical ontologies using their OWL semantics has traditionally been a challenging task due to the high theoretical complexity of OWL-based automated reasoning. As a consequence, ontology repositories, as well as most other tools utilizing ontologies, either provide access to ontologies without use of automated reasoning, or limit the number of ontologies for which automated reasoning-based access is provided. We apply the AberOWL infrastructure to provide automated reasoning-based access to all accessible and consistent ontologies in BioPortal (368 ontologies). We perform an extensive performance evaluation to determine query times, both for queries of different complexity and for queries that are performed in parallel over the ontologies. We demonstrate that, with the exception of a few ontologies, even complex and parallel queries can now be answered in milliseconds, therefore allowing automated reasoning to be used on a large scale, to run in parallel, and with rapid response times.

  19. An epigenetic aging clock for dogs and wolves.

    PubMed

    Thompson, Michael J; vonHoldt, Bridgett; Horvath, Steve; Pellegrini, Matteo

    2017-03-28

    Several articles describe highly accurate age estimation methods based on human DNA-methylation data. It is not yet known whether similar epigenetic aging clocks can be developed based on blood methylation data from canids. Using Reduced Representation Bisulfite Sequencing, we assessed blood DNA-methylation data from 46 domesticated dogs (Canis familiaris) and 62 wild gray wolves (C. lupus). By regressing chronological dog age on the resulting CpGs, we defined highly accurate multivariate age estimators for dogs (based on 41 CpGs), wolves (67 CpGs), and both combined (115 CpGs). Age-related DNA methylation changes in canids implicate similar gene ontology categories as those observed in humans, suggesting an evolutionarily conserved mechanism underlying age-related DNA methylation in mammals.
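    At prediction time, a regression clock of the kind described reduces to an intercept plus a weighted sum over its CpG sites. A minimal sketch with invented weights (not the study's fitted model):

```python
# Hypothetical fitted clock: intercept and per-CpG weights (invented values).
clock = {"intercept": 2.0,
         "weights": {"cpg_001": 3.5, "cpg_002": -1.2}}

def predict_age(methylation, model=clock):
    """methylation maps CpG site -> beta value in [0, 1]; missing sites
    contribute nothing, as if their methylation fraction were 0."""
    return model["intercept"] + sum(w * methylation.get(site, 0.0)
                                    for site, w in model["weights"].items())

age = predict_age({"cpg_001": 0.8, "cpg_002": 0.5})  # 2.0 + 2.8 - 0.6
```

    Fitting such a model is where the real work lies (the study regresses chronological age on hundreds of CpGs); once fitted, applying it is just this weighted sum.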

  20. An epigenetic aging clock for dogs and wolves

    PubMed Central

    Thompson, Michael J.; vonHoldt, Bridgett; Horvath, Steve; Pellegrini, Matteo

    2017-01-01

    Several articles describe highly accurate age estimation methods based on human DNA-methylation data. It is not yet known whether similar epigenetic aging clocks can be developed based on blood methylation data from canids. Using Reduced Representation Bisulfite Sequencing, we assessed blood DNA-methylation data from 46 domesticated dogs (Canis familiaris) and 62 wild gray wolves (C. lupus). By regressing chronological dog age on the resulting CpGs, we defined highly accurate multivariate age estimators for dogs (based on 41 CpGs), wolves (67 CpGs), and both combined (115 CpGs). Age-related DNA methylation changes in canids implicate similar gene ontology categories as those observed in humans, suggesting an evolutionarily conserved mechanism underlying age-related DNA methylation in mammals. PMID:28373601

  1. The NIFSTD and BIRNLex Vocabularies: Building Comprehensive Ontologies for Neuroscience

    PubMed Central

    Bug, William J.; Ascoli, Giorgio A.; Grethe, Jeffrey S.; Gupta, Amarnath; Fennema-Notestine, Christine; Laird, Angela R.; Larson, Stephen D.; Rubin, Daniel; Shepherd, Gordon M.; Turner, Jessica A.; Martone, Maryann E.

    2009-01-01

    A critical component of the Neuroscience Information Framework (NIF) project is a consistent, flexible terminology for describing and retrieving neuroscience-relevant resources. Although the original NIF specification called for a loosely structured controlled vocabulary for describing neuroscience resources, as the NIF system evolved, the requirement for a formally structured ontology for neuroscience with sufficient granularity to describe and access a diverse collection of information became obvious. This requirement led to the NIF standardized (NIFSTD) ontology, a comprehensive collection of common neuroscience domain terminologies woven into an ontologically consistent, unified representation of the biomedical domains typically used to describe neuroscience data (e.g., anatomy, cell types, techniques), as well as digital resources (tools, databases) being created throughout the neuroscience community. NIFSTD builds upon a structure established by the BIRNLex, a lexicon of concepts covering clinical neuroimaging research developed by the Biomedical Informatics Research Network (BIRN) project. Each distinct domain module is represented using the Web Ontology Language (OWL). As much as has been practical, NIFSTD reuses existing community ontologies that cover the required biomedical domains, building the more specific concepts required to annotate NIF resources. By following this principle, an extensive vocabulary was assembled in a relatively short period of time for NIF information annotation, organization, and retrieval, in a form that promotes easy extension and modification. We report here on the structure of the NIFSTD, and its predecessor BIRNLex, the principles followed in its construction and provide examples of its use within NIF. PMID:18975148

  2. The NIFSTD and BIRNLex vocabularies: building comprehensive ontologies for neuroscience.

    PubMed

    Bug, William J; Ascoli, Giorgio A; Grethe, Jeffrey S; Gupta, Amarnath; Fennema-Notestine, Christine; Laird, Angela R; Larson, Stephen D; Rubin, Daniel; Shepherd, Gordon M; Turner, Jessica A; Martone, Maryann E

    2008-09-01

    A critical component of the Neuroscience Information Framework (NIF) project is a consistent, flexible terminology for describing and retrieving neuroscience-relevant resources. Although the original NIF specification called for a loosely structured controlled vocabulary for describing neuroscience resources, as the NIF system evolved, the requirement for a formally structured ontology for neuroscience with sufficient granularity to describe and access a diverse collection of information became obvious. This requirement led to the NIF standardized (NIFSTD) ontology, a comprehensive collection of common neuroscience domain terminologies woven into an ontologically consistent, unified representation of the biomedical domains typically used to describe neuroscience data (e.g., anatomy, cell types, techniques), as well as digital resources (tools, databases) being created throughout the neuroscience community. NIFSTD builds upon a structure established by the BIRNLex, a lexicon of concepts covering clinical neuroimaging research developed by the Biomedical Informatics Research Network (BIRN) project. Each distinct domain module is represented using the Web Ontology Language (OWL). As much as has been practical, NIFSTD reuses existing community ontologies that cover the required biomedical domains, building the more specific concepts required to annotate NIF resources. By following this principle, an extensive vocabulary was assembled in a relatively short period of time for NIF information annotation, organization, and retrieval, in a form that promotes easy extension and modification. We report here on the structure of the NIFSTD, and its predecessor BIRNLex, the principles followed in its construction and provide examples of its use within NIF.

  3. Conservation Process Model (cpm): a Twofold Scientific Research Scope in the Information Modelling for Cultural Heritage

    NASA Astrophysics Data System (ADS)

    Fiorani, D.; Acierno, M.

    2017-05-01

    The aim of the present research is to develop an instrument able to adequately support the conservation process by means of a twofold approach, based on both the BIM environment and ontology formalization. Although BIM has been successfully applied within the AEC (Architecture Engineering Construction) field, it has shown many drawbacks for architectural heritage. To cope with the uniqueness and, more generally, the complexity of ancient buildings, the applications developed so far have adapted BIM poorly to conservation design, with unsatisfactory results (Dore, Murphy 2013; Carrara 2014). In order to combine the achievements reached within AEC through the BIM environment (design control and management) with an appropriate, semantically enriched and flexible representation, the presented model has at its core a knowledge base developed through information ontologies and oriented around the formalization and computability of all the knowledge necessary for the full comprehension of the object of architectural heritage and its conservation. Such a knowledge representation is built upon conceptual categories defined above all within the scope of architectural criticism and conservation. The present paper aims at further extending the scope of conceptual modelling within cultural heritage conservation already formalized by the model, with a special focus on decay analysis and the conservation of surfaces.

  4. A Method for Evaluating and Standardizing Ontologies

    ERIC Educational Resources Information Center

    Seyed, Ali Patrice

    2012-01-01

    The Open Biomedical Ontology (OBO) Foundry initiative is a collaborative effort for developing interoperable, science-based ontologies. The Basic Formal Ontology (BFO) serves as the upper ontology for the domain-level ontologies of OBO. BFO is an upper ontology of types as conceived by defenders of realism. Among the ontologies developed for OBO…

  5. Construction of a Clinical Decision Support System for Undergoing Surgery Based on Domain Ontology and Rules Reasoning

    PubMed Central

    Bau, Cho-Tsan; Huang, Chung-Yi

    2014-01-01

    Objective: To construct a clinical decision support system (CDSS) for undergoing surgery based on domain ontology and rules reasoning in the setting of hospitalized diabetic patients. Materials and Methods: The ontology was created with a modified ontology development method, including specification and conceptualization, formalization, implementation, and evaluation and maintenance. The Protégé–Web Ontology Language editor was used to implement the ontology. Embedded clinical knowledge was elicited to complement the domain ontology with formal concept analysis. The decision rules were translated into JENA format, which JENA can use to infer recommendations based on patient clinical situations. Results: The ontology includes 31 classes and 13 properties, plus 38 JENA rules that were built to generate recommendations. The evaluation studies confirmed the correctness of the ontology, acceptance of recommendations, satisfaction with the system, and usefulness of the ontology for glycemic management of diabetic patients undergoing surgery, especially for domain experts. Conclusions: The contribution of this research is to set up an evidence-based hybrid ontology and an evaluation method for CDSS. The system can help clinicians to achieve inpatient glycemic control in diabetic patients undergoing surgery while avoiding hypoglycemia. PMID:24730353

  6. Construction of a clinical decision support system for undergoing surgery based on domain ontology and rules reasoning.

    PubMed

    Bau, Cho-Tsan; Chen, Rung-Ching; Huang, Chung-Yi

    2014-05-01

    To construct a clinical decision support system (CDSS) for undergoing surgery based on domain ontology and rules reasoning in the setting of hospitalized diabetic patients. The ontology was created with a modified ontology development method, including specification and conceptualization, formalization, implementation, and evaluation and maintenance. The Protégé-Web Ontology Language editor was used to implement the ontology. Embedded clinical knowledge was elicited to complement the domain ontology with formal concept analysis. The decision rules were translated into JENA format, which JENA can use to infer recommendations based on patient clinical situations. The ontology includes 31 classes and 13 properties, plus 38 JENA rules that were built to generate recommendations. The evaluation studies confirmed the correctness of the ontology, acceptance of recommendations, satisfaction with the system, and usefulness of the ontology for glycemic management of diabetic patients undergoing surgery, especially for domain experts. The contribution of this research is to set up an evidence-based hybrid ontology and an evaluation method for CDSS. The system can help clinicians to achieve inpatient glycemic control in diabetic patients undergoing surgery while avoiding hypoglycemia.
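    The rule layer described above can be pictured as condition-action pairs fired against a patient record. A plain-Python stand-in for the JENA rules follows; the thresholds and wording are invented for illustration, not clinical guidance:

```python
def recommend(patient):
    """Fire Horn-like rules on a patient record and collect recommendations.
    Field name and thresholds below are illustrative only."""
    advice = []
    glucose = patient.get("fasting_glucose_mgdl")
    if glucose is not None and glucose > 180:
        advice.append("review glycemic regimen before surgery")
    if glucose is not None and glucose < 70:
        advice.append("treat hypoglycemia before proceeding")
    return advice
```

    In the actual system the conditions are expressed over ontology classes and properties, so a reasoner, rather than hand-written Python, decides which rules fire.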

  7. Spatio-structural granularity of biological material entities

    PubMed Central

    2010-01-01

    Background: With the continuously increasing demands on knowledge and data management that databases have to meet, ontologies and the theories of granularity they use become more and more important. Unfortunately, currently used theories and schemes of granularity unnecessarily limit the performance of ontologies due to two shortcomings: (i) they do not allow the integration of multiple granularity perspectives into one granularity framework; (ii) they are not applicable to cumulative-constitutively organized material entities, which cover most biomedical material entities. Results: The above-mentioned shortcomings are responsible for the major inconsistencies in currently used spatio-structural granularity schemes. By using the Basic Formal Ontology (BFO) as a top-level ontology and Keet's general theory of granularity, a granularity framework is presented that is applicable to cumulative-constitutively organized material entities. It provides a scheme for granulating complex material entities into their constitutive and regional parts by integrating various compositional and spatial granularity perspectives. Within a scale-dependent resolution perspective, it even allows different types of representations of the same material entity to be distinguished. Within other scale-dependent perspectives, which are based on specific types of measurements (e.g. weight, volume), instances of material entities can also be organized independently of their parthood relations and only according to increasing measures. All granularity perspectives are connected to one another through overcrossing granularity levels, together forming an integrated whole that uses the compositional object perspective as an integrating backbone. This granularity framework makes it possible to consistently assign structural granularity values to all different types of material entities.
    Conclusions: The framework presented here provides a spatio-structural granularity framework for all domain reference ontologies that model cumulative-constitutively organized material entities. With its multi-perspective approach, it allows an ontology stored in a database to be queried at any desired level of detail: the contents of a database can be organized according to diverse granularity perspectives, which in turn provide different views on its content (i.e. data, knowledge), each organized into different levels of detail. PMID:20509878

  8. Integration of an OWL-DL knowledge base with an EHR prototype and providing customized information.

    PubMed

    Jing, Xia; Kay, Stephen; Marley, Tom; Hardiker, Nicholas R

    2014-09-01

    When clinicians use electronic health record (EHR) systems, their ability to obtain general knowledge is often an important contribution to their ability to make more informed decisions. In this paper we describe a method by which an external, formal representation of clinical and molecular genetic knowledge can be integrated into an EHR such that customized knowledge can be delivered to clinicians in a context-appropriate manner. Web Ontology Language-Description Logic (OWL-DL) is a formal knowledge representation language that is widely used for creating, organizing and managing complex knowledge through the use of explicit definitions, consistent structure and a computer-processable format, particularly in biomedical fields. In this paper we describe: 1) the integration of an OWL-DL knowledge base with a standards-based EHR prototype, 2) the presentation of customized information from the knowledge base via the EHR interface, and 3) lessons learned from the process. The integration was achieved through a combination of manual and automatic methods. Our method has advantages for scaling up to and maintaining knowledge bases of any size, with the goal of assisting clinicians and other EHR users in making better informed health care decisions.
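
One way such context-appropriate delivery can work is subsumption-aware lookup: knowledge attached to a superclass is also delivered for its subclasses. The sketch below uses a hand-written class hierarchy in place of a real OWL-DL knowledge base and reasoner; all codes and annotation texts are invented.

```python
# Invented stand-in for an OWL-DL KB: explicit superclass links plus
# annotations attached at different levels of the hierarchy.
superclass = {
    "cystic_fibrosis": "genetic_disorder",
    "genetic_disorder": "disorder",
}
annotations = {
    "cystic_fibrosis": "Caused by mutations in CFTR.",
    "genetic_disorder": "Consider referral for genetic counselling.",
}

def customized_knowledge(ehr_code):
    """Collect annotations for the code in view and all its ancestors."""
    facts, cls = [], ehr_code
    while cls is not None:
        if cls in annotations:
            facts.append(annotations[cls])
        cls = superclass.get(cls)
    return facts

print(customized_knowledge("cystic_fibrosis"))
```

A real OWL-DL reasoner would infer the subsumption hierarchy rather than read it from a hand-written table, but the delivery logic is analogous.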

  9. Supporting spatial data harmonization process with the use of ontologies and Semantic Web technologies

    NASA Astrophysics Data System (ADS)

    Strzelecki, M.; Iwaniak, A.; Łukowicz, J.; Kaczmarek, I.

    2013-10-01

    Nowadays, spatial information is used not only by professionals but also by ordinary citizens, who use it in their daily activities. The Open Data initiative states that data should be freely and unreservedly available to all users, and this applies to spatial data as well. As spatial data becomes widely available, it is essential to publish it in a form that guarantees the possibility of integrating it with other, heterogeneous data sources. Interoperability is the ability to combine spatial data sets from different sources in a consistent way, as well as to provide access to them. Providing syntactic interoperability based on well-known data formats is relatively simple, unlike providing semantic interoperability, due to the multiple possible interpretations of data. One of the issues connected with achieving interoperability is data harmonization: the process of providing access to spatial data in a representation that allows combining it with other harmonized data in a coherent way, using a common set of data product specifications. Spatial data harmonization is performed by creating reclassification and transformation rules (a mapping schema) for the source application schema. Creating these rules is a very demanding task that requires wide domain knowledge and a detailed look into the application schemas. The paper proposes methods for supporting the data harmonization process through automated or supervised creation of mapping schemas with the use of ontologies, ontology matching methods and Semantic Web technologies.
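
A mapping schema of the kind described can be sketched minimally as a reclassification table applied to source records; all schema, attribute, and value names below are invented for illustration, not drawn from any real INSPIRE-style specification.

```python
# Invented mapping schema: (source attribute, source value) pairs are
# reclassified into the target data product specification.
mapping_schema = {
    ("landuse", "Wald"):  ("land_cover", "forest"),
    ("landuse", "Wiese"): ("land_cover", "grassland"),
}

def harmonize(record):
    """Apply reclassification rules; pass unmapped attributes through."""
    out = {}
    for attr, value in record.items():
        target = mapping_schema.get((attr, value))
        if target:
            t_attr, t_value = target
            out[t_attr] = t_value
        else:
            out[attr] = value
    return out

print(harmonize({"landuse": "Wald", "id": "42"}))
```

The paper's contribution is to derive such tables automatically or semi-automatically via ontology matching, rather than writing them by hand as done here.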

  10. Margin based ontology sparse vector learning algorithm and applied in biology science.

    PubMed

    Gao, Wei; Qudair Baig, Abdul; Ali, Haidar; Sajjad, Wasim; Reza Farahani, Mohammad

    2017-01-01

    In the biology field, ontology applications involve large amounts of genetic information and chemical information about molecular structures, so the knowledge attached to ontology concepts conveys a great deal of information. As a result, in mathematical notation, the dimension of the vector corresponding to an ontology concept is often very large, which raises the requirements on ontology algorithms. Against this background, we consider the design of an ontology sparse vector algorithm and its application in biology. In this paper, using knowledge of marginal likelihood and marginal distributions, an optimized strategy for a margin-based ontology sparse vector learning algorithm is presented. Finally, the new algorithm is applied to the Gene Ontology and the Plant Ontology to verify its efficiency.
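
The sparsity such an algorithm relies on typically comes from an L1 penalty; the soft-thresholding (proximal) step below is the standard building block for producing sparse vectors in high dimensions, shown as a generic sketch rather than the paper's actual learning procedure.

```python
# Generic L1 proximal step: coordinates with magnitude below the
# threshold lam are set exactly to zero, yielding a sparse vector.
def soft_threshold(v, lam):
    return [0.0 if abs(x) <= lam else (abs(x) - lam) * (1 if x > 0 else -1)
            for x in v]

w = [0.9, -0.03, 0.002, -1.4, 0.01]
print(soft_threshold(w, 0.05))  # small coordinates become exactly 0
```

Iterating this step inside a gradient method (ISTA-style) is one common way to train sparse ontology vectors when the concept dimension is very large.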

  11. MELLO: Medical lifelog ontology for data terms from self-tracking and lifelog devices.

    PubMed

    Kim, Hye Hyeon; Lee, Soo Youn; Baik, Su Youn; Kim, Ju Han

    2015-12-01

    The increasing use of health self-tracking devices is making the integration of heterogeneous data and shared decision-making more challenging. Computational analysis of lifelog data has been hampered by the lack of semantic and syntactic consistency among lifelog terms and related ontologies. The medical lifelog ontology (MELLO) was developed by identifying lifelog concepts and the relationships between them, and it provides clear definitions by following established ontology development methods. MELLO aims to support the classification and semantic mapping of lifelog data from diverse health self-tracking devices. MELLO was developed using the General Formal Ontology method with a manual iterative process comprising five steps: (1) defining the scope of lifelog data, (2) identifying lifelog concepts, (3) assigning relationships among MELLO concepts, (4) developing MELLO properties (e.g., synonyms, preferred terms, and definitions) for each MELLO concept, and (5) evaluating representative layers of the ontology content. An evaluation was performed by classifying 11 devices into 3 classes by subject and performing pairwise comparisons of lifelog terms among 5 devices in each class, as measured using the Jaccard similarity index. MELLO represents a comprehensive knowledge base of 1998 lifelog concepts, with 4996 synonyms for 1211 (61%) concepts and 1395 definitions for 926 (46%) concepts. The MELLO Browser and MELLO Mapper provide convenient access to the ontology and support annotating non-standard proprietary terms with MELLO (http://mello.snubi.org/). MELLO covers 88.1% of lifelog terms from 11 health self-tracking devices and matches semantically similar terms provided by various devices that are not yet integrated, beyond what simple string matching achieves. The comparison of Jaccard similarities between simple string matching and MELLO matching revealed increases of 2.5-, 2.2-, and 5.7-fold for the physical activity, body measure, and sleep classes, respectively.
MELLO is the first ontology for representing health-related lifelog data with rich contents including definitions, synonyms, and semantic relationships. MELLO fills the semantic gap between heterogeneous lifelog terms that are generated by diverse health self-tracking devices. The unified representation of lifelog terms facilitated by MELLO can help describe an individual's lifestyle and environmental factors, which can be included with user-generated data for clinical research and thereby enhance data integration and sharing. Copyright © 2015. Published by Elsevier Ireland Ltd.
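
The evaluation metric is easy to make concrete: Jaccard similarity over raw device term sets versus over MELLO-mapped concepts. The device terms and MELLO concept IDs below are invented for illustration, not taken from the paper.

```python
# Jaccard similarity of two term sets: |A ∩ B| / |A ∪ B|.
def jaccard(a, b):
    return len(a & b) / len(a | b) if a | b else 1.0

device_a = {"steps", "heart rate", "sleep time"}
device_b = {"step count", "heart rate", "time asleep"}
print(jaccard(device_a, device_b))  # raw string match: 1 shared of 5 terms

# Invented MELLO-style mapping: proprietary terms -> shared concepts.
mello = {"steps": "MELLO:Steps", "step count": "MELLO:Steps",
         "heart rate": "MELLO:HeartRate",
         "sleep time": "MELLO:SleepDuration", "time asleep": "MELLO:SleepDuration"}
mapped_a = {mello[t] for t in device_a}
mapped_b = {mello[t] for t in device_b}
print(jaccard(mapped_a, mapped_b))  # after mapping: identical concept sets
```

The fold increases reported above are exactly this kind of before/after ratio, computed per device class.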

  12. Scientific Reproducibility in Biomedical Research: Provenance Metadata Ontology for Semantic Annotation of Study Description.

    PubMed

    Sahoo, Satya S; Valdez, Joshua; Rueschman, Michael

    2016-01-01

    Scientific reproducibility is key to scientific progress, as it allows the research community to build on validated results, protects patients from potentially harmful trial drugs derived from incorrect results, and reduces the wastage of valuable resources. The National Institutes of Health (NIH) recently published a systematic guideline titled "Rigor and Reproducibility" for supporting reproducible research studies, which has also been accepted by several scientific journals. These journals will require published articles to conform to the new guidelines. Provenance metadata describes the history or origin of data, and it has long been used in computer science to capture metadata for ensuring data quality and supporting scientific reproducibility. In this paper, we describe the development of the Provenance for Clinical and healthcare Research (ProvCaRe) framework, together with a provenance ontology, to support scientific reproducibility by formally modeling a core set of data elements representing the details of a research study. We extend the PROV Ontology (PROV-O), which has been recommended as the provenance representation model by the World Wide Web Consortium (W3C), to represent both (a) data provenance and (b) process provenance. We use 124 study variables from 6 clinical research studies from the National Sleep Research Resource (NSRR) to evaluate the coverage of the provenance ontology. NSRR is the largest repository of NIH-funded sleep datasets, with 50,000 studies from 36,000 participants. The provenance ontology reuses concepts from existing biomedical ontologies, for example the Systematized Nomenclature of Medicine Clinical Terms (SNOMED CT), to model the provenance information of research studies. The ProvCaRe framework is being developed as part of the Big Data to Knowledge (BD2K) data provenance project.

  13. Scientific Reproducibility in Biomedical Research: Provenance Metadata Ontology for Semantic Annotation of Study Description

    PubMed Central

    Sahoo, Satya S.; Valdez, Joshua; Rueschman, Michael

    2016-01-01

    Scientific reproducibility is key to scientific progress, as it allows the research community to build on validated results, protects patients from potentially harmful trial drugs derived from incorrect results, and reduces the wastage of valuable resources. The National Institutes of Health (NIH) recently published a systematic guideline titled “Rigor and Reproducibility” for supporting reproducible research studies, which has also been accepted by several scientific journals. These journals will require published articles to conform to the new guidelines. Provenance metadata describes the history or origin of data, and it has long been used in computer science to capture metadata for ensuring data quality and supporting scientific reproducibility. In this paper, we describe the development of the Provenance for Clinical and healthcare Research (ProvCaRe) framework, together with a provenance ontology, to support scientific reproducibility by formally modeling a core set of data elements representing the details of a research study. We extend the PROV Ontology (PROV-O), which has been recommended as the provenance representation model by the World Wide Web Consortium (W3C), to represent both (a) data provenance and (b) process provenance. We use 124 study variables from 6 clinical research studies from the National Sleep Research Resource (NSRR) to evaluate the coverage of the provenance ontology. NSRR is the largest repository of NIH-funded sleep datasets, with 50,000 studies from 36,000 participants. The provenance ontology reuses concepts from existing biomedical ontologies, for example the Systematized Nomenclature of Medicine Clinical Terms (SNOMED CT), to model the provenance information of research studies. The ProvCaRe framework is being developed as part of the Big Data to Knowledge (BD2K) data provenance project. PMID:28269904
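
The data-provenance side of such a model can be illustrated with a toy set of PROV-style triples. The identifiers below are invented; the actual ProvCaRe model is an OWL extension of PROV-O, not a Python list.

```python
# Invented PROV-style triples: a derived study variable, the scoring
# activity that generated it, the source data, and the responsible agent.
provenance = [
    ("nsrr:ahi", "prov:wasGeneratedBy", "activity:sleep_scoring"),
    ("activity:sleep_scoring", "prov:used", "nsrr:polysomnogram"),
    ("activity:sleep_scoring", "prov:wasAssociatedWith", "agent:scorer_01"),
]

def lineage(entity):
    """Follow wasGeneratedBy/used links one step back to source data."""
    sources = []
    for s, p, o in provenance:
        if s == entity and p == "prov:wasGeneratedBy":
            for s2, p2, o2 in provenance:
                if s2 == o and p2 == "prov:used":
                    sources.append(o2)
    return sources

print(lineage("nsrr:ahi"))
```

Tracing such chains is what lets a reviewer check how a reported variable was derived, which is the reproducibility use case the ontology targets.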

  14. The Plant Structure Ontology, a Unified Vocabulary of Anatomy and Morphology of a Flowering Plant

    PubMed Central

    Ilic, Katica; Kellogg, Elizabeth A.; Jaiswal, Pankaj; Zapata, Felipe; Stevens, Peter F.; Vincent, Leszek P.; Avraham, Shulamit; Reiser, Leonore; Pujar, Anuradha; Sachs, Martin M.; Whitman, Noah T.; McCouch, Susan R.; Schaeffer, Mary L.; Ware, Doreen H.; Stein, Lincoln D.; Rhee, Seung Y.

    2007-01-01

    Formal description of plant phenotypes and standardized annotation of gene expression and protein localization data require uniform terminology that accurately describes plant anatomy and morphology. Such terminology facilitates cross-species comparative studies and quantitative comparison of phenotypes and expression patterns. A major obstacle is the variable terminology used to describe plant anatomy and morphology in publications and in genomic databases for different species. The same terms are sometimes applied to different plant structures in different taxonomic groups. Conversely, similar structures are named by species-specific terms. To address this problem, we created the Plant Structure Ontology (PSO), the first generic ontological representation of the anatomy and morphology of a flowering plant. The PSO is intended for a broad plant research community, including bench scientists, curators of genomic databases, and bioinformaticians. The initial releases of the PSO integrated existing ontologies for Arabidopsis (Arabidopsis thaliana), maize (Zea mays), and rice (Oryza sativa); more recent versions of the ontology encompass terms relevant to Fabaceae, Solanaceae, additional cereal crops, and poplar (Populus spp.). Databases such as The Arabidopsis Information Resource, the Nottingham Arabidopsis Stock Centre, Gramene, MaizeGDB, and the SOL Genomics Network are using the PSO to describe expression patterns of genes and phenotypes of mutants and natural variants, and they regularly contribute new annotations to the Plant Ontology database. The PSO is also used in specialized public databases, such as BRENDA, GENEVESTIGATOR, NASCArrays, and others. Over 10,000 gene annotations and phenotype descriptions from participating databases can be queried and retrieved using the Plant Ontology browser. The PSO, as well as contributed gene associations, can be obtained at www.plantontology.org. PMID:17142475

  15. Matching biomedical ontologies based on formal concept analysis.

    PubMed

    Zhao, Mengyi; Zhang, Songmao; Li, Weizhuo; Chen, Guowei

    2018-03-19

    The goal of ontology matching is to identify correspondences between entities from different yet overlapping ontologies so as to facilitate semantic integration, reuse and interoperability. As a well-developed mathematical model for analyzing individuals and structuring concepts, Formal Concept Analysis (FCA) has been applied to ontology matching (OM) tasks since the beginning of OM research, but the ontological knowledge exploited in FCA-based methods has been limited. This motivates the study in this paper, i.e., to empower FCA with as much ontological knowledge as possible for identifying mappings across ontologies. We propose a method based on Formal Concept Analysis to identify and validate mappings across ontologies, including one-to-one mappings, complex mappings and correspondences between object properties. Our method, called FCA-Map, incrementally generates a total of five types of formal contexts and extracts mappings from the derived lattices. First, the token-based formal context describes how class names, labels and synonyms share lexical tokens, leading to lexical mappings (anchors) across ontologies. Second, the relation-based formal context describes how classes stand in taxonomic, partonomic and disjointness relationships with the anchors, providing positive and negative structural evidence for validating the lexical matching. Third, the positive relation-based context can be used to discover structural mappings. Afterwards, the property-based formal context describes how object properties are used in axioms to connect anchor classes across ontologies, leading to property mappings. Last, the restriction-based formal context describes the co-occurrence of classes across ontologies in anonymous ancestors of anchors, from which extended structural mappings and complex mappings can be identified.
Evaluation on the Anatomy, Large Biomedical Ontologies, and Disease and Phenotype tracks of the 2016 Ontology Alignment Evaluation Initiative campaign demonstrates the effectiveness of FCA-Map and its competitiveness with the top-ranked systems. FCA-Map can achieve a better balance between precision and recall for large-scale domain ontologies by constructing multiple FCA structures, whereas it performs unsatisfactorily for smaller ontologies with fewer lexical and semantic expressions. Compared with other FCA-based OM systems, the study in this paper is a more comprehensive attempt to push the envelope of the Formal Concept Analysis formalism in ontology matching tasks. Five types of formal contexts are constructed incrementally, and their derived concept lattices are used to cluster the commonalities among classes at the lexical and structural levels, respectively. Experiments on large, real-world domain ontologies show promising results and reveal the power of FCA.
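
The token-based formal context can be sketched in miniature: with invented class names from two toy ontologies as objects and their lexical tokens as attributes, enumerating all formal concepts shows how an intent shared across ontologies (here the token "valve") groups classes into a candidate lexical anchor. Names and data are illustrative only, not FCA-Map's actual contexts.

```python
from itertools import combinations

# Invented token-based formal context: objects are classes from two toy
# ontologies, attributes are tokens extracted from their names.
context = {
    "mouse:Heart":        {"heart"},
    "human:Heart":        {"heart"},
    "mouse:HeartValve":   {"heart", "valve"},
    "human:CardiacValve": {"cardiac", "valve"},
}
attributes = set().union(*context.values())

def extent(attrs):
    """Objects whose token sets contain every attribute in attrs."""
    return frozenset(g for g, toks in context.items() if attrs <= toks)

def intent(objs):
    """Attributes shared by every object in objs."""
    if not objs:
        return frozenset(attributes)
    return frozenset.intersection(*(frozenset(context[g]) for g in objs))

# Naive concept enumeration by closing every attribute subset
# (fine for toy sizes; real systems use lattice algorithms).
concepts = set()
for r in range(len(attributes) + 1):
    for subset in combinations(sorted(attributes), r):
        e = extent(set(subset))
        concepts.add((e, intent(e)))

for e, i in sorted(concepts, key=lambda c: sorted(c[1])):
    print(sorted(i), "->", sorted(e))
```

The concept whose intent is {"valve"} spans both ontologies, which is the kind of cross-ontology grouping FCA-Map reads off the lattice as a lexical mapping candidate.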

  16. Semantic Support for Complex Ecosystem Research Environments

    NASA Astrophysics Data System (ADS)

    Klawonn, M.; McGuinness, D. L.; Pinheiro, P.; Santos, H. O.; Chastain, K.

    2015-12-01

    As ecosystems come under increasing stresses from diverse sources, there is growing interest in research efforts aimed at monitoring, modeling, and improving understanding of ecosystems and protection options. We aimed to provide a semantic infrastructure capable of representing data initially related to one large aquatic ecosystem research effort - the Jefferson Project at Lake George. This effort includes significant historical observational data, extensive sensor-based monitoring data, and experimental data, as well as model and simulation data covering topics including lake circulation, watershed runoff, and lake biome food webs. The initial measurement representation has centered on monitoring data and related provenance. We developed a human-aware sensor network ontology (HASNetO) that leverages existing ontologies (PROV-O, OBOE, VSTO*) in support of measurement annotations. We explicitly support the human-aware aspects of human sensor deployment and collection activity to help capture key provenance that is often lacking. Our foundational ontology has since been generalized into a family of ontologies and used to create our human-aware data collection infrastructure, which now supports the integration of measurement data along with simulation data. Interestingly, we have also utilized the same infrastructure to work with partners who have more specific needs for specifying the environmental conditions where measurements occur, for example, knowing that an air temperature is not an external air temperature, but the air temperature when windows are shut and curtains are open. We have also leveraged the same infrastructure to work with partners more interested in modeling smart cities, with data feeds related more to people, mobility, environment, and living.
We will introduce our human-aware data collection infrastructure and demonstrate how it uses HASNetO and its supporting SOLR-based search platform to support data integration and semantic browsing. We will also present lessons learned from its use in three relatively diverse large ecosystem research efforts and highlight some benefits and challenges related to our semantically enhanced foundation.

  17. Ontological Reset

    ERIC Educational Resources Information Center

    Klein, William Francis

    2013-01-01

    We now exist in an era where for the first time, symbolic representation may precede the natural world in human experience. Aristotle's a priori terrain has been upstaged. We are changing the nature of our own evolution in profound ways. The question becomes, "How do we transcend the chaos of meaning in this post-modern condition or…

  18. "Comments on Coulter and Smith": Relational Ontological Commitments in Narrative Research

    ERIC Educational Resources Information Center

    Clandinin, D. Jean; Murphy, M. Shaun

    2009-01-01

    In this comment article on Coulter and Smith (2009), the authors raise concerns that focusing exclusively on issues of representation may lead readers to misunderstandings about narrative research. The authors argue that narrative ways of thinking about the phenomena under study are interwoven with narrative research methodologies. Drawing on…

  19. Theory Matters: Representation and Experimentation in Education

    ERIC Educational Resources Information Center

    Edwards, Richard

    2012-01-01

    This article provides a material enactment of educational theory to explore how we might do educational theory differently by defamiliarising the familiar. Theory is often assumed to be abstract, located solely in the realm of ideas and separate from practice. However, this view of theory emerges from a set of ontological and epistemological…

  20. Practical Experiences for the Development of Educational Systems in the Semantic Web

    ERIC Educational Resources Information Center

    Sánchez Vera, Ma. del Mar; Tomás Fernández Breis, Jesualdo; Serrano Sánchez, José Luis; Prendes Espinosa, Ma. Paz

    2013-01-01

    Semantic Web technologies have been applied in educational settings for different purposes in recent years, with the type of application being mainly defined by the way in which knowledge is represented and exploited. The basic technology for knowledge representation in Semantic Web settings is the ontology, which represents a common, shareable…

  1. Computer-Aided Experiment Planning toward Causal Discovery in Neuroscience.

    PubMed

    Matiasz, Nicholas J; Wood, Justin; Wang, Wei; Silva, Alcino J; Hsu, William

    2017-01-01

    Computers help neuroscientists to analyze experimental results by automating the application of statistics; however, computer-aided experiment planning is far less common, due to a lack of similar quantitative formalisms for systematically assessing evidence and uncertainty. While ontologies and other Semantic Web resources help neuroscientists to assimilate required domain knowledge, experiment planning requires not only ontological but also epistemological (e.g., methodological) information regarding how knowledge was obtained. Here, we outline how epistemological principles and graphical representations of causality can be used to formalize experiment planning toward causal discovery. We outline two complementary approaches to experiment planning: one that quantifies evidence per the principles of convergence and consistency, and another that quantifies uncertainty using logical representations of constraints on causal structure. These approaches operationalize experiment planning as the search for an experiment that either maximizes evidence or minimizes uncertainty. Despite work in laboratory automation, humans must still plan experiments and will likely continue to do so for some time. There is thus a great need for experiment-planning frameworks that are not only amenable to machine computation but also useful as aids in human reasoning.

  2. Querying archetype-based EHRs by search ontology-based XPath engineering.

    PubMed

    Kropf, Stefan; Uciteli, Alexandr; Schierle, Katrin; Krücken, Peter; Denecke, Kerstin; Herre, Heinrich

    2018-05-11

    Legacy data and new structured data can be stored in a standardized format as XML-based EHRs in XML databases. Querying documents in these databases is crucial for answering research questions. Instead of using free-text searches, which lead to false positive results, precision can be increased by constraining the search to certain parts of documents. A search ontology-based specification of queries on XML documents defines search concepts and relates them to parts of the XML document structure. This query specification method is introduced in practice and evaluated by applying concrete research questions, formulated in natural language, to a data collection for information retrieval purposes. The search is performed by search ontology-based XPath engineering, which reuses ontologies and XML-related W3C standards. The key result is that the specification of research questions can be supported by search ontology-based XPath engineering. A deeper recognition of entities and a semantic understanding of the content are necessary for further improving precision and recall. A key limitation is that applying the introduced process requires skills in ontology and software development. In the future, the time-consuming ontology development could be overcome by establishing a new clinical role: the clinical ontologist. The introduced Search Ontology XML extension connects search terms to certain parts of XML documents and enables an ontology-based definition of queries. Search ontology-based XPath engineering can support research question answering through the specification of complex XPath expressions without deep knowledge of XPath syntax.
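
The idea of constraining a query to certain parts of a document can be sketched with Python's standard xml.etree.ElementTree, whose limited XPath support suffices here. The element names and the single-entry "search ontology" are invented for illustration, not the paper's real schema or ontology.

```python
import xml.etree.ElementTree as ET

# Toy archetype-like document: the same word appears in two sections,
# but only one occurrence is an actual diagnosis.
ehr_xml = """
<composition>
  <section code="diagnosis"><entry><value>pulmonary embolism</value></entry></section>
  <section code="anamnesis"><entry><value>patient fears embolism</value></entry></section>
</composition>
"""

# Invented search-ontology fragment: a search concept mapped to an XPath
# restricted to the document region where that concept lives.
search_ontology = {
    "Diagnosis": ".//section[@code='diagnosis']/entry/value",
}

root = ET.fromstring(ehr_xml)
hits = [e.text for e in root.findall(search_ontology["Diagnosis"])]
print(hits)  # the anamnesis free text does not match
```

A free-text search for "embolism" would return both sections; the ontology-derived XPath returns only the diagnosis, which is exactly the precision gain the abstract describes.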

  3. SPONGY (SPam ONtoloGY): Email Classification Using Two-Level Dynamic Ontology

    PubMed Central

    2014-01-01

    Email is one of the most common communication methods between people on the Internet. However, the increase of email misuse and abuse has resulted in a growing volume of spam emails over recent years. In this paper, two levels of ontology spam filters were implemented: a first-level global ontology filter and a second-level user-customized ontology filter. An experimental system was designed and implemented with the hypothesis that this method would outperform existing techniques, and the experimental results showed that the proposed ontology-based approach indeed improves spam filtering accuracy significantly. The global ontology filter filtered about 91% of spam, which is comparable with other methods. The user-customized ontology filter was created based on the specific user's background as well as the filtering mechanism used in creating the global ontology filter. The main contributions of the paper are (1) to introduce an ontology-based multilevel filtering technique that uses both a global ontology and an individual filter for each user to increase spam filtering accuracy and (2) to create a spam filter in the form of an ontology, which is user-customized, scalable, and modularized, so that it can be embedded in many other systems for better performance. PMID:25254240

  4. SPONGY (SPam ONtoloGY): email classification using two-level dynamic ontology.

    PubMed

    Youn, Seongwook

    2014-01-01

    Email is one of the most common communication methods between people on the Internet. However, the increase of email misuse and abuse has resulted in a growing volume of spam emails over recent years. In this paper, two levels of ontology spam filters were implemented: a first-level global ontology filter and a second-level user-customized ontology filter. An experimental system was designed and implemented with the hypothesis that this method would outperform existing techniques, and the experimental results showed that the proposed ontology-based approach indeed improves spam filtering accuracy significantly. The global ontology filter filtered about 91% of spam, which is comparable with other methods. The user-customized ontology filter was created based on the specific user's background as well as the filtering mechanism used in creating the global ontology filter. The main contributions of the paper are (1) to introduce an ontology-based multilevel filtering technique that uses both a global ontology and an individual filter for each user to increase spam filtering accuracy and (2) to create a spam filter in the form of an ontology, which is user-customized, scalable, and modularized, so that it can be embedded in many other systems for better performance.
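
A drastically simplified sketch of the two-level idea: a global filter shared by all users, refined by per-user customizations. All terms are invented, and keyword sets stand in for the genuine ontologies the system builds.

```python
# Global first-level filter (shared by everyone); terms are invented.
global_spam_terms = {"lottery", "winner", "viagra"}

def classify(text, user_spam_terms=frozenset(), user_ham_terms=frozenset()):
    """Second-level user customization refines the global decision."""
    words = set(text.lower().split())
    if words & user_ham_terms:   # user-specific whitelist wins
        return "ham"
    if words & (global_spam_terms | user_spam_terms):
        return "spam"
    return "ham"

print(classify("You are the lottery winner"))
print(classify("conference registration", user_spam_terms={"registration"}))
```

The paper's filters are modular ontologies rather than flat term sets, but the layering (global decision, then per-user override) is the same.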

  5. Ontology-Based Retrieval of Spatially Related Objects for Location Based Services

    NASA Astrophysics Data System (ADS)

    Haav, Hele-Mai; Kaljuvee, Aivi; Luts, Martin; Vajakas, Toivo

    Advanced Location Based Service (LBS) applications have to integrate information stored in GIS, information about users' preferences (profiles), contextual information, and information about the application itself. Ontology engineering provides methods to semantically integrate several data sources. We propose an ontology-driven LBS development framework: the paper describes the architecture of the ontologies and their usage for the retrieval of spatially related objects relevant to the user. Our main contribution is to enable personalised, ontology-driven LBS by providing a novel approach for defining personalised semantic spatial relationships by means of ontologies. The approach is illustrated by an industrial case study.

  6. A UML profile for the OBO relation ontology

    PubMed Central

    2012-01-01

    Background Ontologies have increasingly been used in the biomedical domain, which has prompted the emergence of different initiatives to facilitate their development and integration. The Open Biological and Biomedical Ontologies (OBO) Foundry consortium provides a repository of life-science ontologies, which are developed according to a set of shared principles. This consortium has developed an ontology called OBO Relation Ontology aiming at standardizing the different types of biological entity classes and associated relationships. Since ontologies are primarily intended to be used by humans, the use of graphical notations for ontology development facilitates the capture, comprehension and communication of knowledge between its users. However, OBO Foundry ontologies are captured and represented basically using text-based notations. The Unified Modeling Language (UML) provides a standard and widely-used graphical notation for modeling computer systems. UML provides a well-defined set of modeling elements, which can be extended using a built-in extension mechanism named Profile. Thus, this work aims at developing a UML profile for the OBO Relation Ontology to provide a domain-specific set of modeling elements that can be used to create standard UML-based ontologies in the biomedical domain. Results We have studied the OBO Relation Ontology, the UML metamodel and the UML profiling mechanism. Based on these studies, we have proposed an extension to the UML metamodel in conformance with the OBO Relation Ontology and we have defined a profile that implements the extended metamodel. Finally, we have applied the proposed UML profile in the development of a number of fragments from different ontologies. Particularly, we have considered the Gene Ontology (GO), the PRotein Ontology (PRO) and the Xenopus Anatomy and Development Ontology (XAO). 
Conclusions The use of an established and well-known graphical language in the development of biomedical ontologies provides a more intuitive form of capturing and representing knowledge than using text-based notations alone. The use of the profile requires the domain expert to reason about the underlying semantics of the concepts and relationships being modeled, which helps prevent the introduction of inconsistencies in an ontology under development and facilitates the identification and correction of errors in an already defined ontology. PMID:23095840

  7. Controlled Vocabulary Service Application for Environmental Data Store

    NASA Astrophysics Data System (ADS)

    Ji, P.; Piasecki, M.; Lovell, R.

    2013-12-01

    In this paper we present a controlled vocabulary service application for the Environmental Data Store (EDS). The purpose of the application is to help researchers and investigators archive, manage, share, search, and retrieve data efficiently in the EDS. The Simple Knowledge Organization System (SKOS) is used for the representation of the EDS controlled vocabularies. The controlled vocabularies of the EDS are created by collecting, comparing, choosing and merging controlled vocabularies, taxonomies and ontologies widely used and recognized in the geoscience/environmental informatics community, such as the Environment Ontology (EnvO), the Semantic Web for Earth and Environmental Terminology (SWEET) ontology, the CUAHSI Hydrologic Ontology and ODM Controlled Vocabulary, the National Environmental Methods Index (NEMI), National Water Information System (NWIS) codes, the EPSG Geodetic Parameter Data Set, WQX domain values, etc. TemaTres, an open-source, web-based thesaurus management package, is employed and extended to create and manage the controlled vocabularies of the EDS. TemaTresView and VisualVocabulary, which work well with TemaTres, are also integrated in the application to provide tree and graphical views of the structure of the vocabularies. The Open Source Edition of Virtuoso Universal Server is set up to provide a Web interface for making SPARQL queries against the controlled vocabularies hosted on the Environmental Data Store. Replicas of some key vocabularies commonly used in the community are also maintained as part of the application, such as the General Multilingual Environmental Thesaurus (GEMET) and the NetCDF Climate and Forecast (CF) Standard Names. The application has been deployed as an elementary, experimental prototype that supports managing, searching, and downloading the controlled vocabularies of the EDS under the SKOS framework.
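
The kind of lookup such a vocabulary service answers (in practice via SPARQL against Virtuoso) can be imitated in a few lines. The concept URI and labels below are invented SKOS-style stand-ins, not actual EDS vocabulary entries.

```python
# Invented SKOS-style concept: prefLabel/altLabels/broader mirror the
# skos:prefLabel, skos:altLabel, and skos:broader properties.
vocabulary = {
    "http://example.org/eds/precipitation": {
        "prefLabel": "precipitation",
        "altLabels": {"rainfall", "rain"},
        "broader": "http://example.org/eds/weather",
    },
}

def resolve(term):
    """Map a user-supplied term to the concept whose labels match it."""
    t = term.lower()
    for uri, c in vocabulary.items():
        if t == c["prefLabel"] or t in c["altLabels"]:
            return uri, c["prefLabel"]
    return None

print(resolve("rainfall"))
```

Resolving variant terms to one preferred label is what lets heterogeneous datasets in the EDS be searched consistently; the production service does this over full SKOS graphs rather than a dict.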

  8. Hum-mPLoc 3.0: prediction enhancement of human protein subcellular localization through modeling the hidden correlations of gene ontology and functional domain features.

    PubMed

    Zhou, Hang; Yang, Yang; Shen, Hong-Bin

    2017-03-15

    Protein subcellular localization prediction has been an important research topic in computational biology over the last decade. Various automatic methods have been proposed to predict locations for large-scale protein datasets, where statistical machine learning algorithms are widely used for model construction. A key step in these predictors is encoding the amino acid sequences into feature vectors. Many studies have shown that features extracted from biological domains, such as gene ontology and functional domains, can be very useful for improving the prediction accuracy. However, domain knowledge usually results in redundant features and high-dimensional feature spaces, which may degrade the performance of machine learning models. In this paper, we propose a new amino acid sequence-based human protein subcellular location prediction approach, Hum-mPLoc 3.0, which covers 12 human subcellular localizations. The sequences are represented by multi-view complementary features, i.e. context vocabulary annotation-based gene ontology (GO) terms, peptide-based functional domains, and residue-based statistical features. To systematically reflect the structural hierarchy of the domain knowledge bases, we propose a novel feature representation protocol denoted as HCM (Hidden Correlation Modeling), which creates more compact and discriminative feature vectors by modeling the hidden correlations between annotation terms. Experimental results on four benchmark datasets show that HCM improves prediction accuracy by 5-11% and F1 by 8-19% compared with conventional GO-based methods. A large-scale application of Hum-mPLoc 3.0 on the whole human proteome reveals protein co-localization preferences in the cell. www.csbio.sjtu.edu.cn/bioinf/Hum-mPLoc3/. hbshen@sjtu.edu.cn. Supplementary data are available at Bioinformatics online. © The Author 2016. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com

  9. [Implementation of ontology-based clinical decision support system for management of interactions between antihypertensive drugs and diet].

    PubMed

    Park, Jeong Eun; Kim, Hwa Sun; Chang, Min Jung; Hong, Hae Sook

    2014-06-01

    The influence of dietary composition on blood pressure is an important subject in healthcare. Interactions between antihypertensive drugs and diet (IBADD) are the most important factor in the management of hypertension. It is therefore essential to support healthcare providers' decision-making role in active and continuous interaction control in hypertension management. The aim of this study was to implement an ontology-based clinical decision support system (CDSS) for IBADD management (IBADDM). We considered the concepts of antihypertensive drugs and foods, and focused on the interchangeability between the database and the CDSS when providing tailored information. An ontology-based CDSS for IBADDM was implemented in eight phases: (1) determining the domain and scope of the ontology, (2) reviewing existing ontologies, (3) extracting and defining the concepts, (4) assigning relationships between concepts, (5) creating a conceptual map with CmapTools, (6) selecting an upper ontology, (7) formally representing the ontology with Protégé (ver. 4.3), (8) implementing the ontology-based CDSS as a Java prototype application. We extracted 5,926 concepts and 15 properties and formally represented them using Protégé. An ontology-based CDSS for IBADDM was implemented, and the evaluation score was 4.60 out of 5. We endeavored to map the functions of a CDSS and implement an ontology-based CDSS for IBADDM.
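    The core lookup such an interaction-checking CDSS performs can be sketched as a table of drug-food rules; the entries below are illustrative assumptions, not clinical content from the paper's knowledge base.

```python
# Sketch of the core lookup an interaction-checking CDSS performs.
# The drug classes, foods, and warning texts are illustrative
# assumptions, not the DMTO/IBADDM knowledge base itself.

interactions = {
    ("ACE inhibitor", "potassium-rich food"): "Risk of hyperkalemia",
    ("calcium channel blocker", "grapefruit juice"): "Increased drug levels",
}

def check_interaction(drug_class, food):
    """Return a warning if the (drug, food) pair is in the knowledge base."""
    return interactions.get((drug_class, food), "No known interaction")

print(check_interaction("calcium channel blocker", "grapefruit juice"))
```

    In the ontology-based system this table corresponds to asserted relations between drug and food classes, so the reasoner can also fire on subclasses (e.g. a specific drug inheriting its class-level interactions).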

  10. Formalizing the Austrian Procedure Catalogue: A 4-step methodological analysis approach.

    PubMed

    Neururer, Sabrina Barbara; Lasierra, Nelia; Peiffer, Karl Peter; Fensel, Dieter

    2016-04-01

    Due to the lack of an internationally accepted and adopted standard for coding health interventions, Austria has established its own country-specific procedure classification system - the Austrian Procedure Catalogue (APC). Even though the APC is an elaborate coding standard for medical procedures, it has shortcomings that limit its usability. In order to enhance usability and usefulness, especially for research purposes and e-health applications, we developed an ontologized version of the APC. In this paper we present a novel four-step approach for the ontology engineering process, which enables accurate extraction of relevant concepts for medical ontologies from written text. The proposed approach for formalizing the APC consists of the following four steps: (1) comparative pre-analysis, (2) definition analysis, (3) typological analysis, and (4) ontology implementation. The first step contained a comparison of the APC to other well-established or elaborate health intervention coding systems in order to identify strengths and weaknesses of the APC. In the second step, a list of definitions of medical terminology used in the APC was obtained. This list of definitions was used as input for Step 3, in which we identified the most important concepts to describe medical procedures using the qualitative typological analysis approach. The definition analysis as well as the typological analysis are well-known and effective methods used in social sciences, but not commonly employed in the computer science or ontology engineering domain. Finally, this list of concepts was used in Step 4 to formalize the APC. The pre-analysis highlighted the major shortcomings of the APC, such as the lack of formal definition, leading to implicitly available, but not directly accessible information (hidden data), or the poor procedural type classification. 
After performing the definition and subsequent typological analyses, we were able to identify the following main characteristics of health interventions: (1) Procedural type, (2) Anatomical site, (3) Medical device, (4) Pathology, (5) Access, (6) Body system, (7) Population, (8) Aim, (9) Discipline, (10) Technique, and (11) Body Function. These main characteristics were used as the input classes for the formalization of the APC. We were also able to identify relevant relations between classes. The proposed four-step approach for formalizing the APC provides a novel, systematically developed, strong framework to semantically enrich procedure classifications. Although this methodology was designed to address the particularities of the APC, the included methods are based on generic analysis tasks and can therefore be re-used to provide a systematic representation of other procedure catalogs or classification systems, and hence contribute towards a universal alignment of such representations, if desired. Copyright © 2015 Elsevier Inc. All rights reserved.
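    As a minimal sketch, the eleven characteristics identified above could become attributes of a formal procedure class; the field names and example values below are illustrative assumptions, not the APC's actual formalization.

```python
# Hypothetical sketch: modeling a health intervention with a subset of
# the eleven characteristics from the typological analysis. Field names
# and values are illustrative, not the actual APC ontology classes.

from dataclasses import dataclass

@dataclass
class HealthIntervention:
    procedural_type: str    # e.g. excision, imaging, transplantation
    anatomical_site: str    # body location the procedure targets
    technique: str = ""     # optional: how the procedure is performed
    medical_device: str = ""

proc = HealthIntervention(
    procedural_type="excision",
    anatomical_site="appendix",
    technique="laparoscopic",
)
print(proc.procedural_type, proc.anatomical_site)  # excision appendix
```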

  11. Evaluating the Good Ontology Design Guideline (GoodOD) with the Ontology Quality Requirements and Evaluation Method and Metrics (OQuaRE)

    PubMed Central

    Duque-Ramos, Astrid; Boeker, Martin; Jansen, Ludger; Schulz, Stefan; Iniesta, Miguela; Fernández-Breis, Jesualdo Tomás

    2014-01-01

    Objective To (1) evaluate the GoodOD guideline for ontology development by applying the OQuaRE evaluation method and metrics to the ontology artefacts that were produced by students in a randomized controlled trial, and (2) informally compare the OQuaRE evaluation method with gold standard and competency questions based evaluation methods, respectively. Background In the last decades many methods for ontology construction and ontology evaluation have been proposed. However, none of them has become a standard and there is no empirical evidence of comparative evaluation of such methods. This paper brings together GoodOD and OQuaRE. GoodOD is a guideline for developing robust ontologies. It was previously evaluated in a randomized controlled trial employing metrics based on gold standard ontologies and competency questions as outcome parameters. OQuaRE is a method for ontology quality evaluation which adapts the SQuaRE standard for software product quality to ontologies and has been successfully used for evaluating the quality of ontologies. Methods In this paper, we evaluate the effect of training in ontology construction based on the GoodOD guideline within the OQuaRE quality evaluation framework and compare the results with those obtained for the previous studies based on the same data. Results Our results show a significant effect of the GoodOD training on the developed ontologies by topic: (a) a highly significant effect was detected in three topics from the analysis of the ontologies of untrained and trained students; (b) both positive and negative training effects with respect to the gold standard were found for five topics. Conclusion The GoodOD guideline had a significant effect on the quality of the ontologies developed. Our results show that GoodOD ontologies can be effectively evaluated using OQuaRE and that OQuaRE is able to provide additional useful information about the quality of the GoodOD ontologies. PMID:25148262

  12. Evaluating the Good Ontology Design Guideline (GoodOD) with the ontology quality requirements and evaluation method and metrics (OQuaRE).

    PubMed

    Duque-Ramos, Astrid; Boeker, Martin; Jansen, Ludger; Schulz, Stefan; Iniesta, Miguela; Fernández-Breis, Jesualdo Tomás

    2014-01-01

    To (1) evaluate the GoodOD guideline for ontology development by applying the OQuaRE evaluation method and metrics to the ontology artefacts that were produced by students in a randomized controlled trial, and (2) informally compare the OQuaRE evaluation method with gold standard and competency questions based evaluation methods, respectively. In the last decades many methods for ontology construction and ontology evaluation have been proposed. However, none of them has become a standard and there is no empirical evidence of comparative evaluation of such methods. This paper brings together GoodOD and OQuaRE. GoodOD is a guideline for developing robust ontologies. It was previously evaluated in a randomized controlled trial employing metrics based on gold standard ontologies and competency questions as outcome parameters. OQuaRE is a method for ontology quality evaluation which adapts the SQuaRE standard for software product quality to ontologies and has been successfully used for evaluating the quality of ontologies. In this paper, we evaluate the effect of training in ontology construction based on the GoodOD guideline within the OQuaRE quality evaluation framework and compare the results with those obtained for the previous studies based on the same data. Our results show a significant effect of the GoodOD training on the developed ontologies by topic: (a) a highly significant effect was detected in three topics from the analysis of the ontologies of untrained and trained students; (b) both positive and negative training effects with respect to the gold standard were found for five topics. The GoodOD guideline had a significant effect on the quality of the ontologies developed. Our results show that GoodOD ontologies can be effectively evaluated using OQuaRE and that OQuaRE is able to provide additional useful information about the quality of the GoodOD ontologies.

  13. OBO to UML: Support for the development of conceptual models in the biomedical domain.

    PubMed

    Waldemarin, Ricardo C; de Farias, Cléver R G

    2018-04-01

    A conceptual model abstractly defines a number of concepts and their relationships for the purposes of understanding and communication. Once a conceptual model is available, it can also be used as a starting point for the development of a software system. The development of conceptual models using the Unified Modeling Language (UML) facilitates the representation of modeled concepts and allows software developers to directly reuse these concepts in the design of a software system. The OBO Foundry represents the most relevant collaborative effort towards the development of ontologies in the biomedical domain. The development of UML conceptual models in the biomedical domain may benefit from the use of domain-specific semantics and notation. Further, the development of these models may also benefit from the reuse of knowledge contained in OBO ontologies. This paper investigates the support for the development of conceptual models in the biomedical domain using UML as a conceptual modeling language and using the support provided by the OBO Foundry for the development of biomedical ontologies, namely the entity kind and relationship type definitions provided by the Basic Formal Ontology (BFO) and the OBO Core Relations Ontology (OBO Core), respectively. Further, the paper investigates the support for the reuse of biomedical knowledge currently available in ontologies in the OBO flat file format (OBOFFF) in the development of these conceptual models. The paper describes a UML profile for the OBO Core Relations Ontology, which defines a number of stereotypes to represent BFO entity kinds and OBO Core relationship type definitions. The paper also presents a support toolset consisting of a graphical editor named OBO-RO Editor, which directly supports the development of UML models using the extensions defined by our profile, and a command-line tool named OBO2UML, which directly converts an OBOFFF ontology into a UML model. Copyright © 2018 Elsevier Inc. All rights reserved.
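    Since OBO2UML reads OBO flat files as input, a rough sense of that format can be given with a minimal stanza parser; the parser and sample terms below are a simplified illustration, not the tool's actual implementation.

```python
# Minimal sketch of reading OBO flat-file [Term] stanzas, the kind of
# input a converter like OBO2UML works from. This is a simplified
# illustration, not the tool's actual parser.

obo_text = """\
[Term]
id: GO:0008150
name: biological_process

[Term]
id: GO:0003674
name: molecular_function
"""

def parse_terms(text):
    """Return a list of {tag: value} dicts, one per [Term] stanza."""
    terms, current = [], None
    for line in text.splitlines():
        line = line.strip()
        if line == "[Term]":
            current = {}
            terms.append(current)
        elif current is not None and ": " in line:
            tag, value = line.split(": ", 1)
            current[tag] = value
    return terms

terms = parse_terms(obo_text)
print([t["name"] for t in terms])  # ['biological_process', 'molecular_function']
```

    Each parsed stanza would then be mapped to a UML element carrying the appropriate BFO/OBO Core stereotype.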

  14. Learning Resources Organization Using Ontological Framework

    NASA Astrophysics Data System (ADS)

    Gavrilova, Tatiana; Gorovoy, Vladimir; Petrashen, Elena

    The paper describes the ontological approach to knowledge structuring for e-learning portal design, as it turns out to be efficient and relevant to current domain conditions. It is primarily based on the visual ontology-based description of the content of the learning materials, which helps to provide productive and personalized access to these materials. The experience of developing an ontology for a Knowledge Engineering course at St. Petersburg State University is discussed, and the “OntolingeWiki” tool for creating ontology-based e-learning portals is described.

  15. Supporting the analysis of ontology evolution processes through the combination of static and dynamic scaling functions in OQuaRE.

    PubMed

    Duque-Ramos, Astrid; Quesada-Martínez, Manuel; Iniesta-Moreno, Miguela; Fernández-Breis, Jesualdo Tomás; Stevens, Robert

    2016-10-17

    The biomedical community has now developed a significant number of ontologies. The curation of biomedical ontologies is a complex task and biomedical ontologies evolve rapidly, so new versions are regularly and frequently published in ontology repositories. This has the implication of there being a high number of ontology versions over a short time span. Given this level of activity, ontology designers need to be supported in the effective management of the evolution of biomedical ontologies as the different changes may affect the engineering and quality of the ontology. This is why there is a need for methods that contribute to the analysis of the effects of changes and evolution of ontologies. In this paper we approach this issue from the ontology quality perspective. In previous work we have developed an ontology evaluation framework based on quantitative metrics, called OQuaRE. Here, OQuaRE is used as a core component in a method that enables the analysis of the different versions of biomedical ontologies using the quality dimensions included in OQuaRE. Moreover, we describe and use two scales for evaluating the changes between the versions of a given ontology. The first one is the static scale used in OQuaRE and the second one is a new, dynamic scale, based on the observed values of the quality metrics of a corpus defined by all the versions of a given ontology (life-cycle). In this work we explain how OQuaRE can be adapted for understanding the evolution of ontologies. Its use has been illustrated with the ontology of bioinformatics operations, types of data, formats, and topics (EDAM). The two scales included in OQuaRE provide complementary information about the evolution of the ontologies. The application of the static scale, which is the original OQuaRE scale, to the versions of the EDAM ontology reveals a design based on good ontological engineering principles. 
The application of the dynamic scale has enabled a more detailed analysis of the evolution of the ontology, measured through differences between versions. The statistics of change based on the OQuaRE quality scores make it possible to identify key versions where changes in the engineering of the ontology triggered a change from the OQuaRE quality perspective. In the case of EDAM, this study allowed us to identify the fifth version of the ontology as the one with the largest impact on the quality metrics, when comparative analyses between pairs of consecutive versions are performed.

  16. Two Models for Semi-Supervised Terrorist Group Detection

    NASA Astrophysics Data System (ADS)

    Ozgul, Fatih; Erdem, Zeki; Bowerman, Chris

    Since discovery of the organizational structure of offender groups leads investigators to terrorist cells or organized crime groups, detecting covert networks from crime data is important to crime investigation. Two models, GDM and OGDM, both based on another representation model (OGRM), were developed and tested on nine terrorist groups. GDM, which relies on police arrest data and “caught together” information, and OGDM, which uses feature matching on year-wise offender components from arrest and demographic data, both performed well on terrorist groups, but OGDM produced high precision with low recall values. OGDM uses a terror crime modus operandi ontology, which enabled matching of similar crimes.
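    The “caught together” idea behind GDM can be sketched as a co-arrest graph whose connected components are candidate groups; the incident data below are invented, and this is only the simplest form of the approach.

```python
# Sketch of the "caught together" idea: offenders arrested in the same
# incident are linked, and connected components of the resulting graph
# are candidate groups. Incident data below are invented examples.

from collections import defaultdict

incidents = [["A", "B"], ["B", "C"], ["D", "E"]]

# Build an undirected co-arrest graph.
graph = defaultdict(set)
for offenders in incidents:
    for a in offenders:
        for b in offenders:
            if a != b:
                graph[a].add(b)

def components(graph):
    """Return connected components as sorted lists of offender ids."""
    seen, groups = set(), []
    for node in graph:
        if node not in seen:
            stack, group = [node], []
            while stack:
                n = stack.pop()
                if n not in seen:
                    seen.add(n)
                    group.append(n)
                    stack.extend(graph[n])
            groups.append(sorted(group))
    return sorted(groups)

print(components(graph))  # [['A', 'B', 'C'], ['D', 'E']]
```

    OGDM would go further by matching components across years on shared features (demographics, modus operandi) rather than relying on co-arrest links alone.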

  17. Informatics in radiology: an information model of the DICOM standard.

    PubMed

    Kahn, Charles E; Langlotz, Curtis P; Channin, David S; Rubin, Daniel L

    2011-01-01

    The Digital Imaging and Communications in Medicine (DICOM) Standard is a key foundational technology for radiology. However, its complexity creates challenges for information system developers because the current DICOM specification requires human interpretation and is subject to nonstandard implementation. To address this problem, a formally sound and computationally accessible information model of the DICOM Standard was created. The DICOM Standard was modeled as an ontology, a machine-accessible and human-interpretable representation that may be viewed and manipulated by information-modeling tools. The DICOM Ontology includes a real-world model and a DICOM entity model. The real-world model describes patients, studies, images, and other features of medical imaging. The DICOM entity model describes connections between real-world entities and the classes that model the corresponding DICOM information entities. The DICOM Ontology was created to support the Cancer Biomedical Informatics Grid (caBIG) initiative, and it may be extended to encompass the entire DICOM Standard and serve as a foundation of medical imaging systems for research and patient care. RSNA, 2010

  18. Encoding of Fundamental Chemical Entities of Organic Reactivity Interest using chemical ontology and XML.

    PubMed

    Durairaj, Vijayasarathi; Punnaivanam, Sankar

    2015-09-01

    Fundamental chemical entities are identified in the context of organic reactivity and classified as appropriate concept classes namely ElectronEntity, AtomEntity, AtomGroupEntity, FunctionalGroupEntity and MolecularEntity. The entity classes and their subclasses are organized into a chemical ontology named "ChemEnt" for the purpose of assertion, restriction and modification of properties through entity relations. Individual instances of entity classes are defined and encoded as a library of chemical entities in XML. The instances of entity classes are distinguished with a unique notation and identification values in order to map them with the ontology definitions. A model GUI named Entity Table is created to view graphical representations of all the entity instances. The detection of chemical entities in chemical structures is achieved through suitable algorithms. The possibility of asserting properties to the entities at different levels and the mechanism of property flow within the hierarchical entity levels is outlined. Copyright © 2015 Elsevier Inc. All rights reserved.
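    The XML encoding of entity instances can be sketched with the standard library; the element and attribute names below are assumptions for illustration, not the paper's actual ChemEnt schema.

```python
# Sketch of encoding a chemical-entity library in XML, in the spirit of
# the ChemEnt class hierarchy. Element and attribute names here are
# illustrative assumptions, not the paper's actual schema.

import xml.etree.ElementTree as ET

library = ET.Element("EntityLibrary")
ET.SubElement(library, "Entity", {
    "id": "FG001",
    "class": "FunctionalGroupEntity",
    "notation": "C(=O)OH",
    "name": "carboxyl",
})

xml_text = ET.tostring(library, encoding="unicode")
print(xml_text)

# Parse it back and look up entities by their concept class.
root = ET.fromstring(xml_text)
groups = [e.get("name") for e in root.iter("Entity")
          if e.get("class") == "FunctionalGroupEntity"]
print(groups)  # ['carboxyl']
```

    Mapping each instance's `class` attribute back to an ontology concept is what lets properties asserted at the class level flow down to the encoded instances.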

  19. A methodology for extending domain coverage in SemRep.

    PubMed

    Rosemblat, Graciela; Shin, Dongwook; Kilicoglu, Halil; Sneiderman, Charles; Rindflesch, Thomas C

    2013-12-01

    We describe a domain-independent methodology to extend SemRep coverage beyond the biomedical domain. SemRep, a natural language processing application originally designed for biomedical texts, uses the knowledge sources provided by the Unified Medical Language System (UMLS©). Ontological and terminological extensions to the system are needed in order to support other areas of knowledge. We extended SemRep's application by developing a semantic representation of a previously unsupported domain. This was achieved by adapting well-known ontology engineering phases and integrating them with the UMLS knowledge sources on which SemRep crucially depends. While the process to extend SemRep coverage has been successfully applied in earlier projects, this paper presents in detail the step-wise approach we followed and the mechanisms implemented. A case study in the field of medical informatics illustrates how the ontology engineering phases have been adapted for optimal integration with the UMLS. We provide qualitative and quantitative results, which indicate the validity and usefulness of our methodology. Published by Elsevier Inc.

  20. A four stage approach for ontology-based health information system design.

    PubMed

    Kuziemsky, Craig E; Lau, Francis

    2010-11-01

    To describe and illustrate a four stage methodological approach to capture user knowledge in a biomedical domain area, use that knowledge to design an ontology, and then implement and evaluate the ontology as a health information system (HIS). A hybrid participatory design-grounded theory (GT-PD) method was used to obtain data and code them for ontology development. Prototyping was used to implement the ontology as a computer-based tool. Usability testing evaluated the computer-based tool. An empirically derived domain ontology and set of three problem-solving approaches were developed as a formalized model of the concepts and categories from the GT coding. The ontology and problem-solving approaches were used to design and implement a HIS that tested favorably in usability testing. The four stage approach illustrated in this paper is useful for designing and implementing an ontology as the basis for a HIS. The approach extends existing ontology development methodologies by providing an empirical basis for theory incorporated into ontology design. Copyright © 2010 Elsevier B.V. All rights reserved.

  1. DMTO: a realistic ontology for standard diabetes mellitus treatment.

    PubMed

    El-Sappagh, Shaker; Kwak, Daehan; Ali, Farman; Kwak, Kyung-Sup

    2018-02-06

    Treatment of type 2 diabetes mellitus (T2DM) is a complex problem. A clinical decision support system (CDSS) based on massive and distributed electronic health record data can facilitate the automation of this process and enhance its accuracy. The most important component of any CDSS is its knowledge base. This knowledge base can be formulated using ontologies. The formal description logic of ontology supports the inference of hidden knowledge. Building a complete, coherent, consistent, interoperable, and sharable ontology is a challenge. This paper introduces the first version of the newly constructed Diabetes Mellitus Treatment Ontology (DMTO) as a basis for shared-semantics, domain-specific, standard, machine-readable, and interoperable knowledge relevant to T2DM treatment. It is a comprehensive ontology and provides the highest coverage and the most complete picture of coded knowledge about T2DM patients' current conditions, previous profiles, and T2DM-related aspects, including complications, symptoms, lab tests, interactions, treatment plan (TP) frameworks, and glucose-related diseases and medications. It adheres to the design principles recommended by the Open Biomedical Ontologies Foundry and is based on ontological realism that follows the principles of the Basic Formal Ontology and the Ontology for General Medical Science. DMTO is implemented under Protégé 5.0 in Web Ontology Language (OWL) 2 format and is publicly available through the National Center for Biomedical Ontology's BioPortal at http://bioportal.bioontology.org/ontologies/DMTO . The current version of DMTO includes more than 10,700 classes, 277 relations, 39,425 annotations, 214 semantic rules, and 62,974 axioms. We provide proof of concept for this approach to modeling TPs. The ontology is able to collect and analyze most features of T2DM as well as customize chronic TPs with the most appropriate drugs, foods, and physical exercises. 
DMTO is ready to be used as a knowledge base for semantically intelligent and distributed CDSSs.

  2. Combining clinical and genomics queries using i2b2 – Three methods

    PubMed Central

    Murphy, Shawn N.; Avillach, Paul; Bellazzi, Riccardo; Phillips, Lori; Gabetta, Matteo; Eran, Alal; McDuffie, Michael T.; Kohane, Isaac S.

    2017-01-01

    We are fortunate to be living in an era of twin biomedical data surges: a burgeoning representation of human phenotypes in the medical records of our healthcare systems, and high-throughput sequencing making rapid technological advances. The difficulty representing genomic data and its annotations has almost by itself led to the recognition of a biomedical “Big Data” challenge, and the complexity of healthcare data only compounds the problem to the point that coherent representation of both systems on the same platform seems insuperably difficult. We investigated the capability for complex, integrative genomic and clinical queries to be supported in the Informatics for Integrating Biology and the Bedside (i2b2) translational software package. Three different data integration approaches were developed: The first is based on Sequence Ontology, the second is based on the tranSMART engine, and the third on CouchDB. These novel methods for representing and querying complex genomic and clinical data on the i2b2 platform are available today for advancing precision medicine. PMID:28388645

  3. An Ontological Representation of Learning Objects and Learning Designs as Codified Knowledge

    ERIC Educational Resources Information Center

    Sanchez-Alonso, Salvador; Frosch-Wilke, Dirk

    2005-01-01

    Purpose: The aim of this paper is discussing about the similarities between the life cycle of knowledge management and the processes in which learning objects are created, evaluated and used. Design/methodology/approach: The paper describes LO and learning designs and depicts their integration into the knowledge life cycle (KLC) of the KMCI,…

  4. A Spoken Dialogue System for Command and Control

    DTIC Science & Technology

    2012-10-01

    Previous work in this domain focused on the formal representation of linguistic concepts in ontologies for data integration. His doctoral … In this work, we developed grammars with broader coverage for the domain of Livespace room-control. The goal was to provide commands and queries to be …

  5. Knowledge Representation and Data Mining of Neuronal Morphologies Using Neuroinformatics Tools and Formal Ontologies

    ERIC Educational Resources Information Center

    Polavaram, Sridevi

    2016-01-01

    Neuroscience can greatly benefit from using novel methods in computer science and informatics, which enable knowledge discovery in unexpected ways. Currently one of the biggest challenges in Neuroscience is to map the functional circuitry of the brain. The applications of this goal range from understanding structural reorganization of neurons to…

  6. Adaptive Semantic and Social Web-based learning and assessment environment for the STEM

    NASA Astrophysics Data System (ADS)

    Babaie, Hassan; Atchison, Chris; Sunderraman, Rajshekhar

    2014-05-01

    We are building a cloud- and Semantic Web-based personalized, adaptive learning environment for the STEM fields that integrates and leverages Social Web technologies to allow instructors and authors of learning material to collaborate in the semi-automatic development and update of their common domain and task ontologies and in building their learning resources. The semi-automatic ontology learning and development minimize issues related to the design and maintenance of domain ontologies by knowledge engineers who do not have any knowledge of the domain. The social web component of the personal adaptive system will allow individual and group learners to interact with each other, discuss their own learning experience and understanding of course material, and resolve issues related to their class assignments. The adaptive system will be capable of representing key knowledge concepts in different ways and difficulty levels based on learners' differences, leading to different understandings of the same STEM content by different learners. It will adapt specific pedagogical strategies to individual learners based on their characteristics, cognition, and preferences, allow authors to assemble remotely accessed learning material into courses, and provide facilities for instructors to assess (in real time) students' perception of course material, monitor their progress in the learning process, and generate timely feedback based on their understanding or misconceptions. The system applies a set of ontologies that structure the learning process, with multiple user-friendly Web interfaces. 
These include the learning ontology (models learning objects, educational resources, and learning goal); context ontology (supports adaptive strategy by detecting student situation), domain ontology (structures concepts and context), learner ontology (models student profile, preferences, and behavior), task ontologies, technological ontology (defines devices and places that surround the student), pedagogy ontology, and learner ontology (defines time constraint, comment, profile).

  7. An Ontology for Software Engineering Education

    ERIC Educational Resources Information Center

    Ling, Thong Chee; Jusoh, Yusmadi Yah; Adbullah, Rusli; Alwi, Nor Hayati

    2013-01-01

    Software agents communicate using ontologies. It is important to build an ontology for a specific domain such as Software Engineering Education. Building an ontology from scratch is not only hard but also incurs considerable time and cost. This study aims to propose an ontology through adaptation of an existing ontology which was originally built based on a…

  8. Design and Implementation of Hydrologic Process Knowledge-base Ontology: A case study for the Infiltration Process

    NASA Astrophysics Data System (ADS)

    Elag, M.; Goodall, J. L.

    2013-12-01

    Hydrologic modeling often requires the re-use and integration of models from different disciplines to simulate complex environmental systems. Component-based modeling introduces a flexible approach for integrating physically based processes across disciplinary boundaries. Several hydrology-related modeling communities have adopted the component-based approach for simulating complex physical systems by integrating model components across disciplinary boundaries in a workflow. However, it is not always straightforward to create these interdisciplinary models due to the lack of sufficient knowledge about a hydrologic process. This shortcoming is a result of using informal methods for organizing and sharing information about a hydrologic process. A knowledge-base ontology provides the needed formal standards and is considered the ideal approach for overcoming this challenge. The aims of this research are to present the methodology used in analyzing the basic hydrologic domain in order to identify hydrologic processes, the ontology itself, and how the proposed ontology is integrated with the Water Resources Component (WRC) ontology. The proposed ontology standardizes the definitions of a hydrologic process, the relationships between hydrologic processes, and their associated scientific equations. The objective of the proposed Hydrologic Process (HP) Ontology is to advance the idea of creating a unified knowledge framework for components' metadata by introducing a domain-level ontology for hydrologic processes. The HP ontology is a step toward an explicit and robust domain knowledge framework that can be evolved through the contribution of domain users. Analysis of the hydrologic domain is accomplished using Formal Concept Analysis (FCA), in which the infiltration process, an important hydrologic process, is examined. Two infiltration methods, the Green-Ampt and Philip methods, were used to demonstrate the implementation of information in the HP ontology. 
Furthermore, a SPARQL service is provided for semantic-based querying of the ontology.
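
    The Formal Concept Analysis step described above can be sketched in a few lines of Python. The toy context below (infiltration methods versus properties) is invented for illustration and does not reproduce the paper's actual analysis:

```python
from itertools import combinations

# Toy formal context: objects (infiltration methods) x attributes.
# The attributes are invented for illustration, not taken from the paper.
context = {
    "GreenAmpt": {"physically_based", "ponded_surface", "sharp_wetting_front"},
    "Philip":    {"physically_based", "series_solution"},
    "SCS_CN":    {"empirical"},
}

def common_attributes(objs):
    """Intent: attributes shared by every object in objs."""
    sets = [context[o] for o in objs]
    return set.intersection(*sets) if sets else set.union(*context.values())

def objects_with(attrs):
    """Extent: objects possessing every attribute in attrs."""
    return {o for o, a in context.items() if attrs <= a}

def formal_concepts():
    """Enumerate all closed (extent, intent) pairs -- the formal concepts
    that make up the concept lattice of the context."""
    concepts = set()
    for r in range(len(context) + 1):
        for subset in combinations(context, r):
            intent = common_attributes(subset)
            extent = objects_with(intent)
            concepts.add((frozenset(extent), frozenset(intent)))
    return concepts
```

    Here the concept ({GreenAmpt, Philip}, {physically_based}) emerges automatically, which is the kind of grouping FCA contributes to domain analysis.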

  9. Knowledge bases built on web languages from the point of view of predicate logics

    NASA Astrophysics Data System (ADS)

    Vajgl, Marek; Lukasová, Alena; Žáček, Martin

    2017-06-01

    The article evaluates formal systems created on the basis of web (ontology/concept) languages that simplify the usual approach to knowledge representation within FOPL while sharing its expressiveness, semantic correctness, completeness and decidability. The evaluation of two of them, one based on description logic and one built on RDF model principles, identifies some shortcomings of those formal systems and presents corrections where possible. The possibility of building an inference system capable of obtaining further knowledge over given knowledge bases, including those describing domains through giant linked domain databases, has been taken into account. Moreover, the directions towards simplifying the FOPL language discussed here have been evaluated from the point of view of their potential to become a web language fulfilling the idea of the semantic web.

  10. An approach to development of ontological knowledge base in the field of scientific and research activity in Russia

    NASA Astrophysics Data System (ADS)

    Murtazina, M. Sh; Avdeenko, T. V.

    2018-05-01

    The state of the art and the progress in the application of semantic technologies in the field of scientific and research activity have been analyzed. Even an elementary empirical comparison has shown that semantic search engines are superior in all respects to conventional search technologies. However, semantic information technologies are insufficiently used in the field of scientific and research activity in Russia. In the present paper an approach to the construction of an ontological model of a knowledge base is proposed. The ontological model is based on an upper-level ontology and the RDF mechanism for linking several domain ontologies. The ontological model is implemented in the Protégé environment.

  11. Ontorat: automatic generation of new ontology terms, annotations, and axioms based on ontology design patterns.

    PubMed

    Xiang, Zuoshuang; Zheng, Jie; Lin, Yu; He, Yongqun

    2015-01-01

    It is time-consuming to build an ontology with many terms and axioms, so it is desirable to automate the process of ontology development. Ontology Design Patterns (ODPs) provide a reusable solution to a recurrent modeling problem in the context of ontology engineering. Because ontology terms often follow specific ODPs, the Ontology for Biomedical Investigations (OBI) developers proposed a Quick Term Templates (QTTs) process targeted at generating new ontology classes following the same pattern, using term templates in a spreadsheet format. Inspired by the ODPs and QTTs, the Ontorat web application was developed to automatically generate new ontology terms, annotations of terms, and logical axioms based on a specific ODP(s). The inputs of an Ontorat execution include axiom expression settings, an input data file, ID generation settings, and a target ontology (optional). The axiom expression settings can be saved as a predesigned Ontorat setting format text file for reuse. The input data file is generated based on a template file created for a specific ODP (text or Excel format). Ontorat is an efficient tool for ontology expansion. Different use cases are described. For example, Ontorat was applied to automatically generate over 1,000 Japan RIKEN cell line cell terms with both logical axioms and rich annotation axioms in the Cell Line Ontology (CLO). Approximately 800 licensed animal vaccines were represented and annotated in the Vaccine Ontology (VO) by Ontorat. The OBI team used Ontorat to add assay and device terms required by the ENCODE project. Ontorat was also used to add missing annotations to all existing Biobank-specific terms in the Biobank Ontology. A collection of ODPs and templates with examples is provided on the Ontorat website and can be reused to facilitate ontology development. 
With ever-increasing ontology development and applications, Ontorat provides a timely platform for generating and annotating large numbers of ontology terms by following design patterns. http://ontorat.hegroup.org/.
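
    The template-driven generation that Ontorat performs can be illustrated with a minimal sketch. The template fields, placeholder syntax, and ID scheme below are hypothetical simplifications, not Ontorat's actual settings format:

```python
# Hypothetical ODP template with {placeholders}. Ontorat's real settings
# format and term IDs differ; this only mimics the overall workflow.
template = {
    "label": "{name} cell line cell",
    "axiom": "'{name} cell line cell' SubClassOf 'cell line cell' "
             "and (derives_from some '{species} cell')",
}

rows = [  # one input-data row per term to generate
    {"name": "HeLa", "species": "Homo sapiens"},
    {"name": "CHO",  "species": "Cricetulus griseus"},
]

def generate_terms(template, rows, prefix="EX", start=1):
    """Fill the template once per input row, assigning sequential IDs."""
    terms = []
    for i, row in enumerate(rows, start):
        term = {"id": f"{prefix}_{i:07d}"}
        for field, pattern in template.items():
            term[field] = pattern.format(**row)
        terms.append(term)
    return terms
```

    Scaling the same loop over a spreadsheet of a thousand rows is what makes pattern-based generation attractive for tasks like the CLO cell line terms mentioned above.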

  12. GOMMA: a component-based infrastructure for managing and analyzing life science ontologies and their evolution

    PubMed Central

    2011-01-01

    Background Ontologies are increasingly used to structure and semantically describe entities of domains, such as genes and proteins in the life sciences. Their increasing size and the high frequency of updates, resulting in a large set of ontology versions, necessitate efficient management and analysis of this data. Results We present GOMMA, a generic infrastructure for managing and analyzing life science ontologies and their evolution. GOMMA utilizes a generic repository to uniformly and efficiently manage ontology versions and different kinds of mappings. Furthermore, it provides components for ontology matching and for determining evolutionary ontology changes. These components are used by analysis tools, such as the Ontology Evolution Explorer (OnEX) and the detection of unstable ontology regions. We introduce the component-based infrastructure and show analysis results for selected components and life science applications. GOMMA is available at http://dbs.uni-leipzig.de/GOMMA. Conclusions GOMMA provides a comprehensive and scalable infrastructure to manage large life science ontologies and analyze their evolution. Key functions include generic storage of ontology versions and mappings, and support for ontology matching and for determining ontology changes. The supported features for analyzing ontology changes are helpful for assessing their impact on ontology-dependent applications such as term enrichment. GOMMA complements OnEX by providing functionalities to manage various versions of mappings between two ontologies and allows combining different match approaches. PMID:21914205
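
    The kind of evolutionary change detection described can be illustrated with a toy version diff. The representation (concept id mapped to its parent ids) and the function below are assumptions for illustration, not GOMMA's actual API:

```python
# Two toy ontology versions: concept id -> set of parent ids. The
# representation and the diff function are illustrative, not GOMMA's API.
v1 = {"C:1": set(), "C:2": {"C:1"}, "C:3": {"C:1"}}
v2 = {"C:1": set(), "C:2": {"C:1"}, "C:3": {"C:2"}, "C:4": {"C:2"}}

def diff_versions(old, new):
    """Detect basic evolutionary changes between two ontology versions."""
    added = set(new) - set(old)
    deleted = set(old) - set(new)
    moved = {c for c in set(old) & set(new) if old[c] != new[c]}
    return {"added": added, "deleted": deleted, "moved": moved}
```

    Applied to the two versions above, the diff reports C:4 as added and C:3 as moved under a new parent, the raw material for analyses such as detecting unstable ontology regions.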

  13. A methodological approach for designing a usable ontology-based GUI in healthcare.

    PubMed

    Lasierra, N; Kushniruk, A; Alesanco, A; Borycki, E; García, J

    2013-01-01

    This paper presents a methodological approach to the design and evaluation of an interface for an ontology-based system used for designing care plans for monitoring patients at home. In order to define the care plans, physicians need a tool for creating instances of the ontology and configuring some rules. Our purpose is to develop an interface that allows clinicians to interact with the ontology. Although ontology-driven applications do not necessarily present the ontology in the user interface, our hypothesis is that showing selected parts of the ontology in a "usable" way could enhance clinicians' understanding and make the definition of care plans easier. Based on prototyping and iterative testing, this methodology combines visualization techniques and usability methods. Preliminary results obtained after a formative evaluation indicate the effectiveness of the suggested combination.

  14. COM3/369: Knowledge-based Information Systems: A new approach for the representation and retrieval of medical information

    PubMed Central

    Mann, G; Birkmann, C; Schmidt, T; Schaeffler, V

    1999-01-01

    Introduction Present solutions for the representation and retrieval of medical information from online sources are not very satisfying. Either the retrieval process lacks precision and completeness, or the representation does not support the update and maintenance of the represented information. Most efforts are currently put into improving the combination of search engines and HTML-based documents. However, due to the current shortcomings of methods for natural language understanding, there are clear limitations to this approach. Furthermore, this approach does not solve the maintenance problem. At least medical information exceeding a certain complexity seems to require approaches that rely on structured knowledge representation and corresponding retrieval mechanisms. Methods Knowledge-based information systems are based on the following fundamental ideas. The representation of information is based on ontologies that define the structure of the domain's concepts and their relations. Views on domain models are defined and represented as retrieval schemata. Retrieval schemata can be interpreted as canonical query types focussing on specific aspects of the provided information (e.g. diagnosis- or therapy-centred views). Based on these retrieval schemata it can be decided which parts of the information in the domain model must be represented explicitly and formalised to support the retrieval process. Propositional logic is used as the representation language. All other information can be represented in a structured but informal way using text, images, etc. Layout schemata are used to assign layout information to retrieved domain concepts. Depending on the target environment, HTML or XML can be used. Results Based on this approach, two knowledge-based information systems have been developed. 
The 'Ophthalmologic Knowledge-based Information System for Diabetic Retinopathy' (OKIS-DR) provides information on diagnoses, findings, examinations, guidelines, and reference images related to diabetic retinopathy. OKIS-DR uses combinations of findings to specify the information that must be retrieved. The second system focuses on nutrition-related allergies and intolerances. Information on the allergies and intolerances of a patient is used to retrieve general information on the specified combination of allergies and intolerances. As a special feature, the system generates tables showing food types and products that are or are not tolerated by patients. Evaluation by external experts and user groups showed that the described approach of knowledge-based information systems increases the precision and completeness of knowledge retrieval. Due to the structured and non-redundant representation of information, the maintenance and update of the information can be simplified. Both systems are available as WWW-based online knowledge bases and CD-ROMs (cf. http://mta.gsf.de topic: products).
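
    A retrieval schema in the sense described, a canonical query type whose propositional condition over reported findings selects the domain concepts to retrieve, can be sketched as follows. The findings and concepts are illustrative placeholders, not actual OKIS-DR content:

```python
# Hypothetical retrieval schema: propositional conditions over a set of
# reported findings select the domain concepts to retrieve. The findings
# and concepts are placeholders, not actual OKIS-DR content.
schema = [
    (lambda f: "microaneurysms" in f and "neovascularization" not in f,
     "non-proliferative diabetic retinopathy"),
    (lambda f: "neovascularization" in f,
     "proliferative diabetic retinopathy"),
]

def retrieve(findings):
    """Return the concepts whose condition holds for the given findings."""
    return [concept for condition, concept in schema if condition(findings)]
```
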

  15. Querying phenotype-genotype relationships on patient datasets using semantic web technology: the example of cerebrotendinous xanthomatosis

    PubMed Central

    2012-01-01

    Background Semantic Web technology can considerably catalyze translational genetics and genomics research in medicine, where the interchange of information between basic research and clinical levels becomes crucial. This exchange involves mapping abstract phenotype descriptions from research resources, such as knowledge databases and catalogs, to unstructured datasets produced through experimental methods and clinical practice. This is especially true for the construction of mutation databases. This paper presents a way of harmonizing abstract phenotype descriptions with patient data from clinical practice, and of querying this dataset about relationships between phenotypes and genetic variants at different levels of abstraction. Methods Due to the current availability of ontological and terminological resources that have already reached some consensus in biomedicine, a reuse-based ontology engineering approach was followed. The proposed approach uses the Web Ontology Language (OWL) to represent the phenotype ontology and the patient model, the Semantic Web Rule Language (SWRL) to bridge the gap between phenotype descriptions and clinical data, and the Semantic Query-Enhanced Web Rule Language (SQWRL) to query relevant phenotype-genotype bidirectional relationships. The work tests the use of semantic web technology in the biomedical research domain of cerebrotendinous xanthomatosis (CTX), using a real dataset and ontologies. Results A framework to query relevant phenotype-genotype bidirectional relationships is provided. Phenotype descriptions and patient data were harmonized by defining 28 Horn-like rules in terms of the OWL concepts. In total, 24 patterns of SQWRL queries were designed following the initial list of competency questions. As the approach is based on OWL, the semantics of the framework adopts the standard logical model of an open-world assumption. 
Conclusions This work demonstrates how semantic web technologies can be used to support the flexible representation and computational inference mechanisms required to query patient datasets at different levels of abstraction. The open-world assumption is especially well suited to describing only partially known phenotype-genotype relationships in a way that is easily extensible. In the future, this type of approach could offer researchers a valuable resource for inferring new data from patient data for statistical analysis in translational research. In conclusion, phenotype description formalization and mapping to clinical data are two key elements for interchanging knowledge between basic and clinical research. PMID:22849591
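
    The Horn-like rules that bridge raw patient data and abstract phenotype descriptions can be emulated with a small forward-chaining sketch. The real system expresses such rules in SWRL over OWL; the facts and rule content below are invented for illustration, not taken from the CTX rule set:

```python
# Forward chaining over Horn-like rules with a single patient variable ?p.
# Facts and rules are illustrative, not the paper's actual 28 rules.
facts = {
    ("patient1", "cholestanol_level", "high"),
    ("patient1", "has_finding", "cataract"),
}

rules = [  # (body triples, head triple); "?p" is the only variable
    ({("?p", "cholestanol_level", "high")},
     ("?p", "has_phenotype", "hypercholestanolemia")),
    ({("?p", "has_phenotype", "hypercholestanolemia"),
      ("?p", "has_finding", "cataract")},
     ("?p", "suspected_of", "CTX")),
]

def forward_chain(facts, rules):
    """Apply the rules until no new fact can be derived."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        subjects = {s for s, _, _ in facts}
        for body, head in rules:
            for p in subjects:
                bind = lambda t: tuple(p if x == "?p" else x for x in t)
                if all(bind(t) in facts for t in body) and bind(head) not in facts:
                    facts.add(bind(head))
                    changed = True
    return facts
```

    The chain from a raw lab value to an abstract phenotype to a disease-level conclusion is exactly the kind of multi-level inference the framework enables.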

  16. The National Center for Biomedical Ontology

    PubMed Central

    Noy, Natalya F; Shah, Nigam H; Whetzel, Patricia L; Chute, Christopher G; Storey, Margaret-Anne; Smith, Barry

    2011-01-01

    The National Center for Biomedical Ontology is now in its seventh year. The goals of this National Center for Biomedical Computing are to: create and maintain a repository of biomedical ontologies and terminologies; build tools and web services to enable the use of ontologies and terminologies in clinical and translational research; educate their trainees and the scientific community broadly about biomedical ontology and ontology-based technology and best practices; and collaborate with a variety of groups who develop and use ontologies and terminologies in biomedicine. The centerpiece of the National Center for Biomedical Ontology is a web-based resource known as BioPortal. BioPortal makes available for research in computationally useful forms more than 270 of the world's biomedical ontologies and terminologies, and supports a wide range of web services that enable investigators to use the ontologies to annotate and retrieve data, to generate value sets and special-purpose lexicons, and to perform advanced analytics on a wide range of biomedical data. PMID:22081220

  17. Experimental evaluation of ontology-based HIV/AIDS frequently asked question retrieval system.

    PubMed

    Ayalew, Yirsaw; Moeng, Barbara; Mosweunyane, Gontlafetse

    2018-05-01

    This study presents the results of experimental evaluations of an ontology-based frequently asked question retrieval system in the domain of HIV and AIDS. The main purpose of the system is to provide answers to questions on HIV/AIDS using an ontology. To evaluate the effectiveness of the frequently asked question retrieval system, we conducted two experiments. The first experiment focused on the evaluation of the quality of the ontology we developed, using the OQuaRE evaluation framework, which is based on software quality metrics and metrics designed for ontology quality evaluation. The second experiment focused on evaluating the effectiveness of the ontology in retrieving relevant answers. For this we used an open-source information retrieval platform, Terrier, with the retrieval models BM25 and PL2. For the measurement of performance, we used the measures mean average precision, mean reciprocal rank, and precision at 5. The results suggest that frequently asked question retrieval with an ontology is more effective than frequently asked question retrieval without one in the domain of HIV/AIDS.
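
    The reported measures are standard and easy to state precisely; a minimal implementation:

```python
def precision_at_k(ranked, relevant, k=5):
    """Fraction of the top-k ranked answers that are relevant (P@5)."""
    return sum(1 for d in ranked[:k] if d in relevant) / k

def reciprocal_rank(ranked, relevant):
    """1/rank of the first relevant answer (0 if none is retrieved);
    averaging this over queries gives mean reciprocal rank."""
    for i, d in enumerate(ranked, 1):
        if d in relevant:
            return 1 / i
    return 0.0

def average_precision(ranked, relevant):
    """Precision accumulated at the ranks of relevant answers; the mean
    over all queries is the mean average precision (MAP)."""
    hits, total = 0, 0.0
    for i, d in enumerate(ranked, 1):
        if d in relevant:
            hits += 1
            total += hits / i
    return total / len(relevant) if relevant else 0.0
```
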

  18. An Ontology-Based Reasoning Framework for Querying Satellite Images for Disaster Monitoring.

    PubMed

    Alirezaie, Marjan; Kiselev, Andrey; Längkvist, Martin; Klügl, Franziska; Loutfi, Amy

    2017-11-05

    This paper presents a framework in which satellite images are classified and augmented with additional semantic information to enable queries about what can be found on the map at a particular location, but also about paths that can be taken. This is achieved by a reasoning framework based on qualitative spatial reasoning that is able to find answers to high-level queries whose answers may vary with the current situation. This framework, called SemCityMap, provides the full pipeline, from enriching the raw image data with rudimentary labels, to the integration of knowledge representation and reasoning methods, to user interfaces for high-level querying. To illustrate the utility of SemCityMap in a disaster scenario, we use an urban environment (central Stockholm) in combination with a flood simulation. We show that the system provides useful answers to high-level queries also with respect to the current flood status. Examples of such queries concern path planning for vehicles or retrieval of safe regions such as "find all regions close to schools and far from the flooded area". The particular advantage of our approach lies in the fact that ontological information and reasoning are explicitly integrated, so that queries can be formulated in a natural way using concepts at an appropriate level of abstraction, including additional constraints.
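
    The flavor of the quoted query can be mimicked with a crude metric stand-in for the qualitative spatial relations (the real framework reasons over qualitative relations, not raw distances; regions, coordinates and thresholds below are invented):

```python
from math import dist

# Toy annotated map: region -> (centroid, labels). "Close"/"far" are
# approximated by arbitrary distance thresholds for illustration only.
regions = {
    "R1": ((0, 0), {"school"}),
    "R2": ((1, 0), {"park"}),
    "R3": ((9, 9), {"flooded"}),
    "R4": ((8, 9), {"park"}),
}

def near(a, b, limit=2.0):
    return dist(regions[a][0], regions[b][0]) <= limit

def safe_regions():
    """'Find all regions close to schools and far from the flooded area.'"""
    schools = [r for r, (_, ls) in regions.items() if "school" in ls]
    flooded = [r for r, (_, ls) in regions.items() if "flooded" in ls]
    return {r for r in regions
            if any(near(r, s) for s in schools)
            and not any(near(r, f, limit=3.0) for f in flooded)}
```
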

  19. An Ontology-Based Reasoning Framework for Querying Satellite Images for Disaster Monitoring

    PubMed Central

    Alirezaie, Marjan; Klügl, Franziska; Loutfi, Amy

    2017-01-01

    This paper presents a framework in which satellite images are classified and augmented with additional semantic information to enable queries about what can be found on the map at a particular location, but also about paths that can be taken. This is achieved by a reasoning framework based on qualitative spatial reasoning that is able to find answers to high-level queries whose answers may vary with the current situation. This framework, called SemCityMap, provides the full pipeline, from enriching the raw image data with rudimentary labels, to the integration of knowledge representation and reasoning methods, to user interfaces for high-level querying. To illustrate the utility of SemCityMap in a disaster scenario, we use an urban environment (central Stockholm) in combination with a flood simulation. We show that the system provides useful answers to high-level queries also with respect to the current flood status. Examples of such queries concern path planning for vehicles or retrieval of safe regions such as “find all regions close to schools and far from the flooded area”. The particular advantage of our approach lies in the fact that ontological information and reasoning are explicitly integrated, so that queries can be formulated in a natural way using concepts at an appropriate level of abstraction, including additional constraints. PMID:29113073

  20. A UML profile for the OBO relation ontology.

    PubMed

    Guardia, Gabriela D A; Vêncio, Ricardo Z N; de Farias, Cléver R G

    2012-01-01

    Ontologies have increasingly been used in the biomedical domain, which has prompted the emergence of different initiatives to facilitate their development and integration. The Open Biological and Biomedical Ontologies (OBO) Foundry consortium provides a repository of life-science ontologies, which are developed according to a set of shared principles. This consortium has developed an ontology called the OBO Relation Ontology, aiming at standardizing the different types of biological entity classes and associated relationships. Since ontologies are primarily intended to be used by humans, the use of graphical notations for ontology development facilitates the capture, comprehension and communication of knowledge between its users. However, OBO Foundry ontologies are captured and represented mainly using text-based notations. The Unified Modeling Language (UML) provides a standard and widely used graphical notation for modeling computer systems. UML provides a well-defined set of modeling elements, which can be extended using a built-in extension mechanism named Profile. Thus, this work aims at developing a UML profile for the OBO Relation Ontology to provide a domain-specific set of modeling elements that can be used to create standard UML-based ontologies in the biomedical domain.

  1. Improved object optimal synthetic description, modeling, learning, and discrimination by GEOGINE computational kernel

    NASA Astrophysics Data System (ADS)

    Fiorini, Rodolfo A.; Dacquino, Gianfranco

    2005-03-01

    GEOGINE (GEOmetrical enGINE), a state-of-the-art OMG (Ontological Model Generator) based on n-D Tensor Invariants for n-dimensional shape/texture optimal synthetic representation, description and learning, was presented at previous conferences. Improved computational algorithms based on the computational invariant theory of finite groups in Euclidean space, together with a demo application, are presented here. Progressive automatic model generation is discussed. GEOGINE can be used as an efficient computational kernel for fast, reliable application development and delivery, mainly in the advanced biomedical engineering, biometric, intelligent computing, target recognition, content-based image retrieval and data mining technological areas. Ontology can be regarded as a logical theory accounting for the intended meaning of a formal dictionary, i.e., its ontological commitment to a particular conceptualization of the world of objects. According to this approach, "n-D Tensor Calculus" can be considered a "Formal Language" to reliably compute optimized "n-Dimensional Tensor Invariants" as specific object "invariant parameter and attribute words" for automated n-dimensional shape/texture optimal synthetic object description by incremental model generation. The class of those "invariant parameter and attribute words" can be thought of as a specific "Formal Vocabulary" learned from a "Generalized Formal Dictionary" of the "Computational Tensor Invariants" language. Even object chromatic attributes can be effectively and reliably computed from object geometric parameters into robust colour shape invariant characteristics. In fact, any highly sophisticated application needing effective, robust capture and parameterization of object geometric/colour invariant attributes, for reliable automated object learning and discrimination, can benefit greatly from the GEOGINE progressive automated model generation computational kernel. 
Main operational advantages over previous, similar approaches are: 1) Progressive Automated Invariant Model Generation, 2) Invariant Minimal Complete Description Set for computational efficiency, 3) Arbitrary Model Precision for robust object description and identification.
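
    As a generic illustration of shape description by geometric invariants (the paper's n-D tensor invariants are more elaborate and are not reproduced here), second-order moment invariants of a 2-D point set can be computed as follows; the trace and determinant of the central second-moment matrix are unchanged by translation and rotation:

```python
from math import cos, sin

def moment_invariants(points):
    """Translation- and rotation-invariant descriptors of a 2-D point set:
    the trace and determinant of its central second-moment matrix."""
    n = len(points)
    cx = sum(x for x, _ in points) / n
    cy = sum(y for _, y in points) / n
    mxx = sum((x - cx) ** 2 for x, _ in points) / n
    myy = sum((y - cy) ** 2 for _, y in points) / n
    mxy = sum((x - cx) * (y - cy) for x, y in points) / n
    return mxx + myy, mxx * myy - mxy ** 2

def transform(points, theta, dx=0.0, dy=0.0):
    """Rotate by theta, then translate by (dx, dy)."""
    return [(x * cos(theta) - y * sin(theta) + dx,
             x * sin(theta) + y * cos(theta) + dy) for x, y in points]
```

    Because the central second-moment matrix transforms as R Σ Rᵀ under rotation, its trace and determinant survive any rigid motion, which is the defining property of an invariant "attribute word" for a shape.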

  2. Can SNOMED CT be squeezed without losing its shape?

    PubMed

    López-García, Pablo; Schulz, Stefan

    2016-09-21

    In biomedical applications where the size and complexity of SNOMED CT become problematic, using a smaller subset that can act as a reasonable substitute is usually preferred. In a special class of use cases, like ontology-based quality assurance or scaling experiments for real-time performance, it is essential that modules show a shape similar to SNOMED CT in terms of concept distribution per sub-hierarchy. Exactly how to extract such balanced modules remains unclear, as most previous work on ontology modularization has focused on other problems. In this study, we investigate to what extent extracting balanced modules that preserve the original shape of SNOMED CT is possible, by presenting and evaluating an iterative algorithm. We used a graph-traversal modularization approach based on an input signature. To conform to our definition of a balanced module, we implemented an iterative algorithm that carefully bootstrapped and dynamically adjusted the signature at each step. We measured the error for each sub-hierarchy and defined convergence as a residual sum of squares <1. Using 2000 concepts as an initial signature, our algorithm converged after seven iterations and extracted a module 4.7 % the size of SNOMED CT. Seven sub-hierarchies were either over- or under-represented, within a range of 1-8 %. Our study shows that balanced modules can be extracted from large terminologies using ontology graph-traversal modularization techniques under certain conditions: the process is repeated a number of times, the input signature is dynamically adjusted in each iteration, and a moderate under/over-representation of some hierarchies is tolerated. In the case of SNOMED CT, our results conclusively show that it can be squeezed to less than 5 % of its size without any sub-hierarchy losing its shape by more than 8 %, which is likely sufficient in most use cases.
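
    The iterative idea, measure the per-hierarchy error, adjust the selection, and repeat until the residual sum of squares falls below 1, can be sketched on a toy terminology. Everything below (hierarchy names, sizes, the swap heuristic) is illustrative, not the authors' algorithm, which traverses the actual SNOMED CT concept graph from a signature:

```python
import random

# Toy terminology with a known shape: target percentage of concepts per
# sub-hierarchy. Names, sizes and the swap heuristic are invented.
random.seed(0)
TARGET = {"finding": 50, "procedure": 30, "organism": 20}  # shape in %
CONCEPTS = [(f"{h}:{i}", h) for h, share in TARGET.items()
            for i in range(share * 10)]

def shape(module):
    total = len(module)
    return {h: 100 * sum(1 for _, ch in module if ch == h) / total
            for h in TARGET}

def rss(module):
    """Residual sum of squares between module shape and target shape."""
    s = shape(module)
    return sum((s[h] - TARGET[h]) ** 2 for h in TARGET)

def extract_balanced(size=200, max_iter=200):
    module = random.sample(CONCEPTS, size)
    iters = 0
    while rss(module) >= 1 and iters < max_iter:
        s = shape(module)
        over = max(TARGET, key=lambda h: s[h] - TARGET[h])
        under = min(TARGET, key=lambda h: s[h] - TARGET[h])
        # swap one concept out of the most over-represented sub-hierarchy
        # for one from the most under-represented
        module.remove(next(c for c in module if c[1] == over))
        module.append(next(c for c in CONCEPTS
                           if c[1] == under and c not in module))
        iters += 1
    return module, iters
```

    Each swap shifts 0.5 % of the module between the extreme sub-hierarchies, so the residual error shrinks monotonically until the balance criterion is met.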

  3. Affective Primacy vs. Cognitive Primacy: Dissolving the Debate.

    PubMed

    Lai, Vicky Tzuyin; Hagoort, Peter; Casasanto, Daniel

    2012-01-01

    When people see a snake, they are likely to activate both affective information (e.g., dangerous) and non-affective information about its ontological category (e.g., animal). According to the Affective Primacy Hypothesis, the affective information has priority, and its activation can precede identification of the ontological category of a stimulus. Alternatively, according to the Cognitive Primacy Hypothesis, perceivers must know what they are looking at before they can make an affective judgment about it. We propose that neither hypothesis holds at all times. Here we show that the relative speed with which affective and non-affective information gets activated by pictures and words depends upon the contexts in which stimuli are processed. Results illustrate that the question of whether affective information has processing priority over ontological information (or vice versa) is ill-posed. Rather than seeking to resolve the debate over Cognitive vs. Affective Primacy in favor of one hypothesis or the other, a more productive goal may be to determine the factors that cause affective information to have processing priority in some circumstances and ontological information in others. Our findings support a view of the mind according to which words and pictures activate different neurocognitive representations every time they are processed, the specifics of which are co-determined by the stimuli themselves and the contexts in which they occur.

  4. LEGO: a novel method for gene set over-representation analysis by incorporating network-based gene weights

    PubMed Central

    Dong, Xinran; Hao, Yun; Wang, Xiao; Tian, Weidong

    2016-01-01

    Pathway or gene set over-representation analysis (ORA) has become a routine task in functional genomics studies. However, currently widely used ORA tools employ statistical methods such as Fisher’s exact test that reduce a pathway into a list of genes, ignoring the functionally non-equivalent roles of genes and the complex gene-gene interactions. Here, we develop a novel method named LEGO (functional Link Enrichment of Gene Ontology or gene sets) that takes into consideration these two types of information by incorporating network-based gene weights in ORA analysis. In three benchmarks, LEGO achieves better performance than Fisher and three other network-based methods. To further evaluate LEGO’s usefulness, we compare LEGO with five gene expression-based and three pathway topology-based methods using a benchmark of 34 disease gene expression datasets compiled by a recent publication, and show that LEGO is among the top-ranked methods in terms of both sensitivity and prioritization for detecting target KEGG pathways. In addition, we develop a cluster-and-filter approach to reduce the redundancy among the enriched gene sets, making the results more interpretable to biologists. Finally, we apply LEGO to two lists of autism genes, and identify relevant gene sets to autism that could not be found by Fisher. PMID:26750448
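
    The contrast between a plain Fisher-style count and a network-weighted score can be made concrete. The hypergeometric tail below is the standard over-representation p-value; the weighting scheme is a simplified stand-in for LEGO's network-based gene weights, not its actual method:

```python
from math import comb

def fisher_pvalue(k, K, n, N):
    """P(X >= k) for X ~ Hypergeometric(N, K, n): the classical
    over-representation p-value for observing k study genes inside a
    gene set of size K, with n study genes out of N genes in total."""
    return sum(comb(K, i) * comb(N - K, n - i)
               for i in range(k, min(K, n) + 1)) / comb(N, n)

def weighted_score(study, gene_set, weight):
    """LEGO-style idea in miniature: score a gene set by the weights of
    the study genes it contains instead of a bare count. The weights are
    hypothetical stand-ins for LEGO's network-based gene weights."""
    return sum(weight.get(g, 0.0) for g in study if g in gene_set)
```

    Two gene sets hit by the same number of study genes get identical Fisher p-values, but the weighted score separates them when the hits differ in network importance.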

  5. LEGO: a novel method for gene set over-representation analysis by incorporating network-based gene weights.

    PubMed

    Dong, Xinran; Hao, Yun; Wang, Xiao; Tian, Weidong

    2016-01-11

    Pathway or gene set over-representation analysis (ORA) has become a routine task in functional genomics studies. However, currently widely used ORA tools employ statistical methods such as Fisher's exact test that reduce a pathway into a list of genes, ignoring the functionally non-equivalent roles of genes and the complex gene-gene interactions. Here, we develop a novel method named LEGO (functional Link Enrichment of Gene Ontology or gene sets) that takes into consideration these two types of information by incorporating network-based gene weights in ORA analysis. In three benchmarks, LEGO achieves better performance than Fisher and three other network-based methods. To further evaluate LEGO's usefulness, we compare LEGO with five gene expression-based and three pathway topology-based methods using a benchmark of 34 disease gene expression datasets compiled by a recent publication, and show that LEGO is among the top-ranked methods in terms of both sensitivity and prioritization for detecting target KEGG pathways. In addition, we develop a cluster-and-filter approach to reduce the redundancy among the enriched gene sets, making the results more interpretable to biologists. Finally, we apply LEGO to two lists of autism genes, and identify relevant gene sets to autism that could not be found by Fisher.

  6. Combining the Generic Entity-Attribute-Value Model and Terminological Models into a Common Ontology to Enable Data Integration and Decision Support.

    PubMed

    Bouaud, Jacques; Guézennec, Gilles; Séroussi, Brigitte

    2018-01-01

    The integration of clinical information models and termino-ontological models into a unique ontological framework is highly desirable, for it facilitates data integration and management using the same formal mechanisms for both data concepts and information model components. This is particularly true for knowledge-based decision support tools that aim to take advantage of all facets of semantic web technologies by merging ontological reasoning, concept classification, and rule-based inferences. We present an ontology template that combines generic data model components with (parts of) existing termino-ontological resources. The approach was developed for the guideline-based decision support module on breast cancer management within the DESIREE European project. It is based on the entity-attribute-value model and could be extended to other domains.
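
    A minimal entity-attribute-value store whose attribute slots are meant to carry terminology codes shows why the combination is convenient: data rows and ontology concepts share one uniform shape. The class and the code string below are illustrative placeholders, not part of the DESIREE design and not a verified terminology identifier:

```python
# Minimal entity-attribute-value store; attribute codes are intended to
# come from a terminology. "TERM:0001" is a made-up placeholder code.
class EAVStore:
    def __init__(self):
        self.rows = []  # (entity, attribute_code, value) triples

    def add(self, entity, attribute_code, value):
        self.rows.append((entity, attribute_code, value))

    def values(self, entity, attribute_code):
        return [v for e, a, v in self.rows
                if e == entity and a == attribute_code]

store = EAVStore()
store.add("patient42", "TERM:0001", "stage II")  # hypothetical staging code
```

    Because every fact is a triple, new attributes need no schema change, and binding the attribute slot to ontology concepts is what lets classification and rules operate directly over the data.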

  7. A Social Network System Based on an Ontology in the Korea Institute of Oriental Medicine

    NASA Astrophysics Data System (ADS)

    Kim, Sang-Kyun; Han, Jeong-Min; Song, Mi-Young

    In this paper we propose a social network based on an ontology at the Korea Institute of Oriental Medicine (KIOM). By using the social network, researchers can find collaborators and share research results with others, so that studies in Korean Medicine fields can be stimulated. For this purpose, personal profiles, scholarships, careers, licenses, academic activities, research results, and personal connections for all researchers in KIOM are first collected. After the relationships and hierarchy among ontology classes, and the attributes of classes, are defined through analyzing the collected information, a social network ontology is constructed using FOAF and OWL. This ontology can easily be interconnected with other social networks through FOAF and supports reasoning based on the OWL ontology. In the future, we will construct a search and reasoning system using the ontology. Moreover, if the social network is activated, we will open it to the whole Korean Medicine field.
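
    The collaborator-finding the ontology is meant to enable can be sketched over FOAF-style triples. `foaf:knows` and `foaf:topic_interest` are real FOAF properties, but the people and interests below are invented:

```python
# FOAF-style (subject, property, object) triples; data is illustrative.
triples = {
    ("kim", "foaf:knows", "han"),
    ("han", "foaf:knows", "song"),
    ("kim", "foaf:topic_interest", "acupuncture"),
    ("song", "foaf:topic_interest", "acupuncture"),
}

def suggest_collaborators(person):
    """People sharing a research interest with `person` whom the person
    does not already know."""
    interests = {o for s, p, o in triples
                 if s == person and p == "foaf:topic_interest"}
    known = {o for s, p, o in triples
             if s == person and p == "foaf:knows"}
    return {s for s, p, o in triples
            if p == "foaf:topic_interest" and o in interests
            and s != person and s not in known}
```
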

  8. Development of Health Information Search Engine Based on Metadata and Ontology

    PubMed Central

    Song, Tae-Min; Jin, Dal-Lae

    2014-01-01

    Objectives The aim of the study was to develop a metadata and ontology-based health information search engine ensuring semantic interoperability to collect and provide health information using different application programs. Methods Health information metadata ontology was developed using a distributed semantic Web content publishing model based on vocabularies used to index the contents generated by the information producers as well as those used to search the contents by the users. Vocabulary for health information ontology was mapped to the Systematized Nomenclature of Medicine Clinical Terms (SNOMED CT), and a list of about 1,500 terms was proposed. The metadata schema used in this study was developed by adding an element describing the target audience to the Dublin Core Metadata Element Set. Results A metadata schema and an ontology ensuring interoperability of health information available on the internet were developed. The metadata and ontology-based health information search engine developed in this study produced a better search result compared to existing search engines. Conclusions Health information search engine based on metadata and ontology will provide reliable health information to both information producer and information consumers. PMID:24872907
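
    Structurally, the extended schema amounts to Dublin Core elements plus a target-audience element; a minimal validation sketch (the particular element subset and the example values are assumptions, not the study's full schema):

```python
# Dublin Core-style elements plus the added target-audience element.
REQUIRED = {"title", "creator", "subject", "audience"}

def validate(record):
    """Check that a metadata record carries every required element."""
    missing = REQUIRED - set(record)
    return len(missing) == 0, missing

record = {
    "title": "Managing hypertension at home",
    "creator": "Example Health Portal",
    "subject": "hypertension",
    "audience": "general public",  # the element added beyond Dublin Core
}
```
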

  9. Development of health information search engine based on metadata and ontology.

    PubMed

    Song, Tae-Min; Park, Hyeoun-Ae; Jin, Dal-Lae

    2014-04-01

    The aim of the study was to develop a metadata and ontology-based health information search engine ensuring semantic interoperability to collect and provide health information using different application programs. Health information metadata ontology was developed using a distributed semantic Web content publishing model based on vocabularies used to index the contents generated by the information producers as well as those used to search the contents by the users. Vocabulary for health information ontology was mapped to the Systematized Nomenclature of Medicine Clinical Terms (SNOMED CT), and a list of about 1,500 terms was proposed. The metadata schema used in this study was developed by adding an element describing the target audience to the Dublin Core Metadata Element Set. A metadata schema and an ontology ensuring interoperability of health information available on the internet were developed. The metadata and ontology-based health information search engine developed in this study produced a better search result compared to existing search engines. Health information search engine based on metadata and ontology will provide reliable health information to both information producer and information consumers.
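Both records above describe a metadata schema built by adding a target-audience element to the Dublin Core element set. A toy sketch of that extension and a search that filters on the added element; the field names, records, and filtering logic are illustrative, not the paper's actual schema.

```python
# Dublin Core-style elements plus an added target-audience element,
# with a minimal search that can filter on it. Records are invented.

DC_CORE = {"title", "creator", "subject", "description", "date", "language"}
EXTENDED = DC_CORE | {"audience"}   # the added target-audience element

records = [
    {"title": "Managing hypertension at home", "subject": "hypertension",
     "audience": "patient", "language": "en"},
    {"title": "Hypertension pharmacotherapy update", "subject": "hypertension",
     "audience": "clinician", "language": "en"},
]

def search(subject, audience=None):
    """Return titles matching a subject, optionally filtered by audience."""
    hits = [r for r in records if r["subject"] == subject]
    if audience is not None:
        hits = [r for r in hits if r["audience"] == audience]
    return [r["title"] for r in hits]

print(search("hypertension", audience="patient"))
```

The audience element is what lets the same subject query return patient-facing or clinician-facing material, which plain Dublin Core cannot distinguish.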

  10. What Four Million Mappings Can Tell You about Two Hundred Ontologies

    NASA Astrophysics Data System (ADS)

    Ghazvinian, Amir; Noy, Natalya F.; Jonquet, Clement; Shah, Nigam; Musen, Mark A.

    The field of biomedicine has embraced the Semantic Web probably more than any other field. As a result, there is a large number of biomedical ontologies covering overlapping areas of the field. We have developed BioPortal—an open community-based repository of biomedical ontologies. We analyzed ontologies and terminologies in BioPortal and the Unified Medical Language System (UMLS), creating more than 4 million mappings between concepts in these ontologies and terminologies based on the lexical similarity of concept names and synonyms. We then analyzed the mappings and what they tell us about the ontologies themselves, the structure of the ontology repository, and the ways in which the mappings can help in the process of ontology design and evaluation. For example, we can use the mappings to guide users who are new to a field to the most pertinent ontologies in that field, to identify areas of the domain that are not covered sufficiently by the ontologies in the repository, and to identify which ontologies will serve well as background knowledge in domain-specific tools. While we used a specific (but large) ontology repository for the study, we believe that the lessons we learned about the value of a large-scale set of mappings to ontology users and developers are general and apply in many other domains.
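The mappings described above are created from the lexical similarity of concept names and synonyms. A minimal sketch of such lexical matching between two toy vocabularies; the normalization rule (lowercase, strip punctuation) and the concept identifiers are assumptions, not BioPortal's actual algorithm.

```python
# Lexical concept mapping: normalize labels and synonyms, then map
# concepts across two ontologies that share a normalized label.

import re

def normalize(label):
    return re.sub(r"[^a-z0-9 ]", "", label.lower()).strip()

ontology_a = {"A:001": ["Myocardial Infarction", "Heart attack"],
              "A:002": ["Hypertension"]}
ontology_b = {"B:101": ["heart attack"],
              "B:102": ["high blood pressure", "hypertension"]}

def lexical_mappings(onto1, onto2):
    # Index the second ontology by normalized label for linear-time lookup.
    index = {}
    for concept, labels in onto2.items():
        for label in labels:
            index.setdefault(normalize(label), set()).add(concept)
    mappings = set()
    for concept, labels in onto1.items():
        for label in labels:
            for target in index.get(normalize(label), ()):
                mappings.add((concept, target))
    return sorted(mappings)

print(lexical_mappings(ontology_a, ontology_b))
```

Indexing one side first is what makes this approach feasible at the scale of millions of mappings, since each label is looked up rather than compared pairwise.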

  11. Knowledge Representation and Management: a Linked Data Perspective.

    PubMed

    Barros, M; Couto, F M

    2016-11-10

    Biomedical research is increasingly becoming a data-intensive science in several areas, where prodigious amounts of data are being generated that have to be stored, integrated, shared and analyzed. In an effort to improve the accessibility of data and knowledge, the Linked Data initiative proposed a well-defined set of recommendations for exposing, sharing and integrating data, information and knowledge, using semantic web technologies. The main goal of this paper is to identify the current status and future trends of knowledge representation and management in Life and Health Sciences, mostly with regard to linked data technologies. We selected three prominent linked data studies, namely Bio2RDF, Open PHACTS and the EBI RDF platform, and selected 14 studies published in or after 2014 that cited any of the three studies. We manually analyzed these 14 papers with respect to how they use linked data techniques. The analyses show a tendency to use linked data techniques in Life and Health Sciences, and even if some studies do not follow all of the recommendations, many of them already represent and manage their knowledge using RDF and biomedical ontologies. RDF and biomedical ontologies are having a strong impact on how knowledge is generated from biomedical data, by making data elements increasingly connected and by providing a better description of their semantics. As health institutes become more data-centric, we believe that the adoption of linked data techniques will continue to grow and be an effective solution to knowledge representation and management.

  12. FOCIH: Form-Based Ontology Creation and Information Harvesting

    NASA Astrophysics Data System (ADS)

    Tao, Cui; Embley, David W.; Liddle, Stephen W.

    Creating an ontology and populating it with data are both labor-intensive tasks requiring a high degree of expertise. Thus, scaling ontology creation and population to the size of the web in an effort to create a web of data—which some see as Web 3.0—is prohibitive. Can we find ways to streamline these tasks and lower the barrier enough to enable Web 3.0? Toward this end we offer a form-based approach to ontology creation that provides a way to create Web 3.0 ontologies without the need for specialized training. And we offer a way to semi-automatically harvest data from the current web of pages for a Web 3.0 ontology. In addition to harvesting information with respect to an ontology, the approach also annotates web pages and links facts in web pages to ontological concepts, resulting in a web of data superimposed over the web of pages. Experience with our prototype system shows that mappings between conceptual-model-based ontologies and forms are sufficient for creating the kind of ontologies needed for Web 3.0, and experiments with our prototype system show that automatic harvesting, automatic annotation, and automatic superimposition of a web of data over a web of pages work well.

  13. A comparison of two methods for retrieving ICD-9-CM data: the effect of using an ontology-based method for handling terminology changes.

    PubMed

    Yu, Alexander C; Cimino, James J

    2011-04-01

    Most existing controlled terminologies can be characterized as collections of terms, wherein the terms are arranged in a simple list or organized in a hierarchy. These kinds of terminologies are considered useful for standardizing terms and encoding data and are currently used in many existing information systems. However, they suffer from a number of limitations that make data reuse difficult. Relatively recently, it has been proposed that formal ontological methods can be applied to some of the problems of terminological design. Biomedical ontologies organize concepts (embodiments of knowledge about biomedical reality) whereas terminologies organize terms (what is used to code patient data at a certain point in time, based on the particular terminology version). However, the application of these methods to existing terminologies is not straightforward. The use of these terminologies is firmly entrenched in many systems, and what might seem to be a simple option of replacing these terminologies is not possible. Moreover, these terminologies evolve over time in order to suit the needs of users. Any methodology must therefore take these constraints into consideration, hence the need for formal methods of managing changes. Along these lines, we have developed a formal representation of the concept-term relation, around which we have also developed a methodology for management of terminology changes. The objective of this study was to determine whether our methodology would result in improved retrieval of data. Design: Comparison of two methods for retrieving data encoded with terms from the International Classification of Diseases (ICD-9-CM), based on their recall when retrieving data for ICD-9-CM terms whose codes had changed but which had retained their original meaning (code change). Measurements: Recall and interclass correlation coefficient. Results: Statistically significant differences were detected (p<0.05) with the McNemar test for two terms whose codes had changed. Furthermore, when all the cases are combined in an overall category, our method also performs statistically significantly better (p<0.05). Conclusion: Our study shows that an ontology-based ICD-9-CM data retrieval method that takes into account the effects of terminology changes performs better on recall than one that does not in the retrieval of data for terms whose codes had changed but which retained their original meaning. Copyright © 2011 Elsevier Inc. All rights reserved.
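The core idea in the two records above is that a concept outlives the codes that denote it, so retrieval by concept should gather every code the concept has ever had. A minimal sketch of that contrast; the codes, the concept identifier, and the records are invented for illustration, not actual ICD-9-CM history.

```python
# Concept-based vs. code-based retrieval under a terminology code change.
# The concept-to-codes table plays the role of the concept-term relation.

concept_codes = {  # concept -> every code used for it across versions
    "ConceptX": {"008.69", "008.61"},   # code changed, meaning retained
}

records = [
    {"patient": 1, "icd9": "008.69"},   # coded under the older version
    {"patient": 2, "icd9": "008.61"},   # coded under the newer version
]

def retrieve_by_code(code):
    """Naive method: matches only the single queried code."""
    return [r["patient"] for r in records if r["icd9"] == code]

def retrieve_by_concept(concept):
    """Ontology-based method: matches every code the concept ever had."""
    codes = concept_codes[concept]
    return [r["patient"] for r in records if r["icd9"] in codes]

print(retrieve_by_code("008.61"))       # misses the old-code record
print(retrieve_by_concept("ConceptX"))  # recalls both records
```

The recall difference the study measures falls out directly: the code-based query misses records coded under the retired code, while the concept-based query recovers them.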

  14. A Comparison of Two Methods for Retrieving ICD-9-CM data: The Effect of Using an Ontology-based Method for Handling Terminology Changes

    PubMed Central

    Yu, Alexander C.; Cimino, James J.

    2012-01-01

    Objective Most existing controlled terminologies can be characterized as collections of terms, wherein the terms are arranged in a simple list or organized in a hierarchy. These kinds of terminologies are considered useful for standardizing terms and encoding data and are currently used in many existing information systems. However, they suffer from a number of limitations that make data reuse difficult. Relatively recently, it has been proposed that formal ontological methods can be applied to some of the problems of terminological design. Biomedical ontologies organize concepts (embodiments of knowledge about biomedical reality) whereas terminologies organize terms (what is used to code patient data at a certain point in time, based on the particular terminology version). However, the application of these methods to existing terminologies is not straightforward. The use of these terminologies is firmly entrenched in many systems, and what might seem to be a simple option of replacing these terminologies is not possible. Moreover, these terminologies evolve over time in order to suit the needs of users. Any methodology must therefore take these constraints into consideration, hence the need for formal methods of managing changes. Along these lines, we have developed a formal representation of the concept-term relation, around which we have also developed a methodology for management of terminology changes. The objective of this study was to determine whether our methodology would result in improved retrieval of data. Design Comparison of two methods for retrieving data encoded with terms from the International Classification of Diseases (ICD-9-CM), based on their recall when retrieving data for ICD-9-CM terms whose codes had changed but which had retained their original meaning (code change). Measurements Recall and interclass correlation coefficient. 
Results Statistically significant differences were detected (p<0.05) with the McNemar test for two terms whose codes had changed. Furthermore, when all the cases are combined in an overall category, our method also performs statistically significantly better (p < 0.05). Conclusion Our study shows that an ontology-based ICD-9-CM data retrieval method that takes into account the effects of terminology changes performs better on recall than one that does not in the retrieval of data for terms whose codes had changed but which retained their original meaning. PMID:21262390

  15. An Ontology of Therapies

    NASA Astrophysics Data System (ADS)

    Eccher, Claudio; Ferro, Antonella; Pisanelli, Domenico M.

    Ontologies are the essential glue to build interoperable systems and the talk of the day in the medical community. In this paper we present the ontology of medical therapies developed in the course of the Oncocure project, aimed at building a guideline-based decision support system integrated with a legacy Electronic Patient Record (EPR). The therapy ontology is based upon the DOLCE top-level ontology. It is our opinion that our ontology, besides constituting a model capturing the precise meaning of therapy-related concepts, can serve several practical purposes: interfacing automatic support systems with a legacy EPR, enabling automatic data analysis, and controlling possible medical errors made during EPR data input.

  16. iSMART: Ontology-based Semantic Query of CDA Documents

    PubMed Central

    Liu, Shengping; Ni, Yuan; Mei, Jing; Li, Hanyu; Xie, Guotong; Hu, Gang; Liu, Haifeng; Hou, Xueqiao; Pan, Yue

    2009-01-01

    The Health Level 7 Clinical Document Architecture (CDA) is widely accepted as the format for electronic clinical document. With the rich ontological references in CDA documents, the ontology-based semantic query could be performed to retrieve CDA documents. In this paper, we present iSMART (interactive Semantic MedicAl Record reTrieval), a prototype system designed for ontology-based semantic query of CDA documents. The clinical information in CDA documents will be extracted into RDF triples by a declarative XML to RDF transformer. An ontology reasoner is developed to infer additional information by combining the background knowledge from SNOMED CT ontology. Then an RDF query engine is leveraged to enable the semantic queries. This system has been evaluated using the real clinical documents collected from a large hospital in southern China. PMID:20351883
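The iSMART abstract above describes a pipeline of XML-to-RDF transformation followed by semantic querying. A minimal stdlib sketch of those two steps on a CDA-like fragment; the XML shape, the `record:1` subject, the predicate names, and the SNOMED CT code used are illustrative assumptions, not iSMART's actual transformer or query engine.

```python
# Extract triples from a CDA-like XML observation, then answer a simple
# triple-pattern query over them.

import xml.etree.ElementTree as ET

cda = """<observation>
  <code codeSystem="SNOMED-CT" code="38341003" displayName="Hypertension"/>
  <value unit="mmHg">150</value>
</observation>"""

def to_triples(xml_text, subject="record:1"):
    """Declaratively map XML elements/attributes to (s, p, o) triples."""
    root = ET.fromstring(xml_text)
    code = root.find("code")
    value = root.find("value")
    return [
        (subject, "hasCode", code.get("code")),
        (subject, "hasDisplayName", code.get("displayName")),
        (subject, "hasValue", value.text + " " + value.get("unit")),
    ]

def query(triples, predicate):
    """Return (subject, object) pairs matching a predicate pattern."""
    return [(s, o) for s, p, o in triples if p == predicate]

triples = to_triples(cda)
print(query(triples, "hasCode"))  # [('record:1', '38341003')]
```

In the real system this is where a reasoner would expand the query using SNOMED CT subsumption (e.g. retrieving hypertension records when querying for cardiovascular disorders); here the query is purely syntactic.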

  17. Creating an ontology driven rules base for an expert system for medical diagnosis.

    PubMed

    Bertaud Gounot, Valérie; Donfack, Valéry; Lasbleiz, Jérémy; Bourde, Annabel; Duvauferrier, Régis

    2011-01-01

    Expert systems of the 1980s failed because of the difficulty of maintaining large rule bases. The current work proposes a method to build and maintain rule bases grounded in ontologies (like NCIT). The process described here for an expert system on plasma cell disorders encompasses the extraction of a sub-ontology and the automatic, comprehensive generation of production rules. The creation of rules is not based directly on classes, but on individuals (instances). Instances can be considered as prototypes of diseases formally defined by restrictions in the ontology. Thus, it is possible to use this process to make diagnoses of diseases. The perspectives of this work are considered: the process described with an ontology formalized in OWL 1 can be extended by using an ontology in OWL 2, allowing reasoning about numerical data in addition to symbolic data.

  18. An ontology based trust verification of software license agreement

    NASA Astrophysics Data System (ADS)

    Lu, Wenhuan; Li, Xiaoqing; Gan, Zengqin; Wei, Jianguo

    2017-08-01

    When we install or download software, a large document stating rights and obligations is displayed, which many users lack the patience to read or understand. This may make users distrust the software. In this paper, we propose an ontology-based trust verification of software license agreements. First, this work proposes an ontology model for the domain of software license agreements. The domain ontology is constructed with the proposed methodology according to copyright laws and 30 software license agreements. The License Ontology can act as part of a generalized copyright-law knowledge model, and can also serve as a visualization of software licenses. Based on this proposed ontology, a software-license-oriented text summarization approach is proposed, whose performance shows that it can improve the accuracy of summarizing software licenses. Based on the summarization, the underlying purpose of the software license can be explicitly explored for trust verification.

  19. Supporting ontology adaptation and versioning based on a graph of relevance

    NASA Astrophysics Data System (ADS)

    Sassi, Najla; Jaziri, Wassim; Alharbi, Saad

    2016-11-01

    Ontologies recently have become a topic of interest in computer science since they are seen as a semantic support to explicit and enrich data-models as well as to ensure interoperability of data. Moreover, supporting ontology adaptation becomes essential and extremely important, mainly when using ontologies in changing environments. An important issue when dealing with ontology adaptation is the management of several versions. Ontology versioning is a complex and multifaceted problem as it should take into account change management, versions storage and access, consistency issues, etc. The purpose of this paper is to propose an approach and tool for ontology adaptation and versioning. A series of techniques are proposed to 'safely' evolve a given ontology and produce a new consistent version. The ontology versions are ordered in a graph according to their relevance. The relevance is computed based on four criteria: conceptualisation, usage frequency, abstraction and completeness. The techniques to carry out the versioning process are implemented in the Consistology tool, which has been developed to assist users in expressing adaptation requirements and managing ontology versions.
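The abstract above says version relevance is computed from four criteria: conceptualisation, usage frequency, abstraction, and completeness. A toy sketch of ranking versions by such a score; the equal weighting, the 0-1 scores, and the version data are assumptions for illustration, since the paper's actual formula is not given here.

```python
# Rank ontology versions by a weighted relevance score over the four
# criteria named in the abstract. Weights and scores are invented.

CRITERIA = ("conceptualisation", "usage_frequency", "abstraction", "completeness")

versions = {
    "v1": {"conceptualisation": 0.6, "usage_frequency": 0.9,
           "abstraction": 0.5, "completeness": 0.4},
    "v2": {"conceptualisation": 0.8, "usage_frequency": 0.5,
           "abstraction": 0.7, "completeness": 0.9},
}

def relevance(scores, weights=None):
    """Weighted sum over the four criteria; equal weights by default."""
    weights = weights or {c: 1 / len(CRITERIA) for c in CRITERIA}
    return sum(weights[c] * scores[c] for c in CRITERIA)

ranked = sorted(versions, key=lambda v: relevance(versions[v]), reverse=True)
print(ranked)  # ['v2', 'v1']
```

A graph of versions ordered by this score would let the versioning tool surface the most relevant consistent version first, which is the role the relevance graph plays in the described approach.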

  20. Ontological interpretation of biomedical database content.

    PubMed

    Santana da Silva, Filipe; Jansen, Ludger; Freitas, Fred; Schulz, Stefan

    2017-06-26

    Biological databases store data about laboratory experiments, together with semantic annotations, in order to support data aggregation and retrieval. The exact meaning of such annotations in the context of a database record is often ambiguous. We address this problem by grounding implicit and explicit database content in a formal-ontological framework. By using a typical extract from the databases UniProt and Ensembl, annotated with content from GO, PR, ChEBI and NCBI Taxonomy, we created four ontological models (in OWL), which generate explicit, distinct interpretations under the BioTopLite2 (BTL2) upper-level ontology. The first three models interpret database entries as individuals (IND), defined classes (SUBC), and classes with dispositions (DISP), respectively; the fourth model (HYBR) is a combination of SUBC and DISP. For the evaluation of these four models, we consider (i) database content retrieval, using ontologies as query vocabulary; (ii) information completeness; and, (iii) DL complexity and decidability. The models were tested under these criteria against four competency questions (CQs). IND does not raise any ontological claim, besides asserting the existence of sample individuals and relations among them. Modelling patterns have to be created for each type of annotation referent. SUBC is interpreted regarding maximally fine-grained defined subclasses under the classes referred to by the data. DISP attempts to extract truly ontological statements from the database records, claiming the existence of dispositions. HYBR is a hybrid of SUBC and DISP and is more parsimonious regarding expressiveness and query answering complexity. For each of the four models, the four CQs were submitted as DL queries. This shows the ability to retrieve individuals with IND, and classes in SUBC and HYBR. DISP does not retrieve anything because the axioms with disposition are embedded in General Class Inclusion (GCI) statements. 
Ambiguity of biological database content is addressed by a method that identifies implicit knowledge behind semantic annotations in biological databases and grounds it in an expressive upper-level ontology. The result is a seamless representation of database structure, content and annotations as OWL models.

  1. Research on presentation and query service of geo-spatial data based on ontology

    NASA Astrophysics Data System (ADS)

    Li, Hong-wei; Li, Qin-chao; Cai, Chang

    2008-10-01

    The paper analyzes the deficiencies in the presentation and query of geo-spatial data in current GIS and discusses the advantages ontology offers for the formalization of geo-spatial data and the presentation of semantic granularity. Taking a land-use classification system as an example, it constructs a domain ontology and describes it in OWL; it realizes grade-level and category presentation of land-use data by drawing on the idea of vertical and horizontal navigation. It then discusses ontology-based query modes for geo-spatial data, including queries based on types and grade levels, on instances and spatial relations, and synthetic queries based on types and instances. These methods enrich the query modes of current GIS and are a useful attempt. The paper points out that the key to ontology-based presentation and query of spatial data is constructing a domain ontology that correctly reflects geo-concepts and their spatial relations and realizes a fine formal description of them.

  2. An Ontology-Based Tourism Recommender System Based on Spreading Activation Model

    NASA Astrophysics Data System (ADS)

    Bahramian, Z.; Abbaspour, R. Ali

    2015-12-01

    A tourist has time and budget limitations; hence, he needs to select points of interest (POIs) optimally. Since the available information about POIs is overwhelming, it is difficult for a tourist to select the most appropriate ones given his preferences. In this paper, a new travel recommender system is proposed to overcome the information overload problem. A recommender system (RS) evaluates the overwhelming number of POIs and provides personalized recommendations to users based on their preferences. A content-based recommendation system is proposed, which uses information about the user's preferences and the POIs and calculates a degree of similarity between them. It selects the POIs that have the highest similarity to the user's preferences. The proposed content-based recommender system is enhanced using ontological information about the tourism domain to represent both the user profile and the recommendable POIs. The proposed ontology-based recommendation process is performed in three steps: an ontology-based content analyzer, an ontology-based profile learner, and an ontology-based filtering component. The user's feedback adapts the user's preferences using a Spreading Activation (SA) strategy. Evaluation shows that the proposed recommender system is effective and improves the overall performance of traditional content-based recommender systems.
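The recommender abstract above combines content-based similarity with a spreading-activation update of the user profile. A minimal sketch of both ingredients; the concept names, weights, links, and the decay factor are illustrative assumptions, not the paper's actual model.

```python
# Content-based POI matching by cosine similarity, plus a one-step
# spreading-activation update over ontology concept links.

import math

def cosine(u, v):
    """Cosine similarity of two sparse concept-weight vectors (dicts)."""
    shared = set(u) & set(v)
    dot = sum(u[c] * v[c] for c in shared)
    norm = math.sqrt(sum(x * x for x in u.values())) * \
           math.sqrt(sum(x * x for x in v.values()))
    return dot / norm if norm else 0.0

user = {"museum": 0.9, "history": 0.7}
pois = {"city_museum": {"museum": 1.0, "history": 0.5},
        "theme_park":  {"ride": 1.0}}

best = max(pois, key=lambda p: cosine(user, pois[p]))
print(best)  # 'city_museum'

def spread(profile, links, decay=0.5):
    """Propagate activation one step along ontology links."""
    updated = dict(profile)
    for concept, weight in profile.items():
        for neighbor in links.get(concept, ()):
            updated[neighbor] = updated.get(neighbor, 0.0) + decay * weight
    return updated

links = {"museum": ["art"]}        # museum linked to art in the ontology
print(spread(user, links)["art"])  # 0.45
```

Spreading activation is what lets the ontology add value over a flat content-based profile: interest in museums partially activates the linked art concept, so art-related POIs can be recommended even though the user never rated one.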

  3. Multi-label literature classification based on the Gene Ontology graph.

    PubMed

    Jin, Bo; Muller, Brian; Zhai, Chengxiang; Lu, Xinghua

    2008-12-08

    The Gene Ontology is a controlled vocabulary for representing knowledge related to genes and proteins in a computable form. The current effort of manually annotating proteins with the Gene Ontology is outpaced by the rate of accumulation of biomedical knowledge in literature, which urges the development of text mining approaches to facilitate the process by automatically extracting the Gene Ontology annotation from literature. The task is usually cast as a text classification problem, and contemporary methods are confronted with unbalanced training data and the difficulties associated with multi-label classification. In this research, we investigated the methods of enhancing automatic multi-label classification of biomedical literature by utilizing the structure of the Gene Ontology graph. We have studied three graph-based multi-label classification algorithms, including a novel stochastic algorithm and two top-down hierarchical classification methods for multi-label literature classification. We systematically evaluated and compared these graph-based classification algorithms to a conventional flat multi-label algorithm. The results indicate that, through utilizing the information from the structure of the Gene Ontology graph, the graph-based multi-label classification methods can significantly improve predictions of the Gene Ontology terms implied by the analyzed text. Furthermore, the graph-based multi-label classifiers are capable of suggesting Gene Ontology annotations (to curators) that are closely related to the true annotations even if they fail to predict the true ones directly. A software package implementing the studied algorithms is available for the research community. Through utilizing the information from the structure of the Gene Ontology graph, the graph-based multi-label classification methods have better potential than the conventional flat multi-label classification approach to facilitate protein annotation based on the literature.
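A central property any GO-graph-based classifier exploits is the true-path rule: annotating a term implies all of its ancestors. A minimal sketch of closing a predicted label set under the is_a hierarchy; the tiny three-term graph is illustrative, and the term ids are used only as examples, not as a claim about the current GO release.

```python
# Propagate predicted GO labels up the is_a hierarchy (true-path rule).

parents = {               # child GO term -> its is_a parents
    "GO:0006096": ["GO:0006091"],   # glycolysis -> energy derivation
    "GO:0006091": ["GO:0008152"],   # energy derivation -> metabolic process
}

def propagate(labels):
    """Close a predicted label set under the ancestor relation."""
    closed = set(labels)
    stack = list(labels)
    while stack:
        term = stack.pop()
        for parent in parents.get(term, ()):
            if parent not in closed:
                closed.add(parent)
                stack.append(parent)
    return closed

print(sorted(propagate({"GO:0006096"})))
```

This closure is also why a graph-based classifier can be "close" to the truth even when it misses the exact term: predicting a sibling or child still recovers the shared ancestors that a flat classifier would miss entirely.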

  4. The OceanLink Project

    NASA Astrophysics Data System (ADS)

    Narock, T.; Arko, R. A.; Carbotte, S. M.; Chandler, C. L.; Cheatham, M.; Finin, T.; Hitzler, P.; Krisnadhi, A.; Raymond, L. M.; Shepherd, A.; Wiebe, P. H.

    2014-12-01

    A wide spectrum of maturing methods and tools, collectively characterized as the Semantic Web, is helping to vastly improve the dissemination of scientific research. Creating semantic integration requires input from both domain and cyberinfrastructure scientists. OceanLink, an NSF EarthCube Building Block, is demonstrating semantic technologies through the integration of geoscience data repositories, library holdings, conference abstracts, and funded research awards. Meeting project objectives involves applying semantic technologies to support data representation, discovery, sharing and integration. Our semantic cyberinfrastructure components include ontology design patterns, Linked Data collections, semantic provenance, and associated services to enhance data and knowledge discovery, interoperation, and integration. We discuss how these components are integrated, the continued automated and semi-automated creation of semantic metadata, and techniques we have developed to integrate ontologies, link resources, and preserve provenance and attribution.

  5. Towards Agile Ontology Maintenance

    NASA Astrophysics Data System (ADS)

    Luczak-Rösch, Markus

    Ontologies are an appropriate means to represent knowledge on the Web. Research on ontology engineering has produced practices for integrative lifecycle support. However, broader success of ontologies in Web-based information systems remains out of reach, while more lightweight semantic approaches are rather successful. We assume that, paired with the emerging trend of services and microservices on the Web, new dynamic scenarios are gaining momentum in which a shared knowledge base is made available to several dynamically changing services with disparate requirements. Our work envisions a step towards such a dynamic scenario, in which an ontology adapts in an agile way to the requirements of the accessing services and applications as well as to users' needs, reducing the experts' involvement in ontology maintenance processes.

  6. Ontology-based reusable clinical document template production system.

    PubMed

    Nam, Sejin; Lee, Sungin; Kim, James G Boram; Kim, Hong-Gee

    2012-01-01

    Clinical documents embody professional clinical knowledge. This paper shows an effective clinical document template (CDT) production system that uses a clinical description entity (CDE) model, a CDE ontology, and a knowledge management system called STEP that manages ontology-based clinical description entities. The ontology represents CDEs and their inter-relations, and the STEP system stores and manages CDE ontology-based information regarding CDTs. The system also provides Web Services interfaces for search and reasoning over clinical entities. The system was populated with entities and relations extracted from 35 CDTs that were used in admission, discharge, and progress reports, as well as those used in nursing and operation functions. A clinical document template editor is shown that uses STEP.

  7. The flora phenotype ontology (FLOPO): tool for integrating morphological traits and phenotypes of vascular plants.

    PubMed

    Hoehndorf, Robert; Alshahrani, Mona; Gkoutos, Georgios V; Gosline, George; Groom, Quentin; Hamann, Thomas; Kattge, Jens; de Oliveira, Sylvia Mota; Schmidt, Marco; Sierra, Soraya; Smets, Erik; Vos, Rutger A; Weiland, Claus

    2016-11-14

    The systematic analysis of a large number of comparable plant trait data can support investigations into phylogenetics and ecological adaptation, with broad applications in evolutionary biology, agriculture, conservation, and the functioning of ecosystems. Floras, i.e., books collecting the information on all known plant species found within a region, are a potentially rich source of such plant trait data. Floras describe plant traits with a focus on morphology and other traits relevant for species identification in addition to other characteristics of plant species, such as ecological affinities, distribution, economic value, health applications, traditional uses, and so on. However, a key limitation in systematically analyzing information in Floras is the lack of a standardized vocabulary for the described traits as well as the difficulties in extracting structured information from free text. We have developed the Flora Phenotype Ontology (FLOPO), an ontology for describing traits of plant species found in Floras. We used the Plant Ontology (PO) and the Phenotype And Trait Ontology (PATO) to extract entity-quality relationships from digitized taxon descriptions in Floras, and used a formal ontological approach based on phenotype description patterns and automated reasoning to generate the FLOPO. The resulting ontology consists of 25,407 classes and is based on the PO and PATO. The classified ontology closely follows the structure of Plant Ontology in that the primary axis of classification is the observed plant anatomical structure, and more specific traits are then classified based on parthood and subclass relations between anatomical structures as well as subclass relations between phenotypic qualities. The FLOPO is primarily intended as a framework based on which plant traits can be integrated computationally across all species and higher taxa of flowering plants. 
Importantly, it is not intended to replace established vocabularies or ontologies, but rather serve as an overarching framework based on which different application- and domain-specific ontologies, thesauri and vocabularies of phenotypes observed in flowering plants can be integrated.
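The FLOPO abstract above describes extracting entity-quality (EQ) relationships from free-text taxon descriptions using PO and PATO. A deliberately simplified sketch of that extraction step: qualities are paired with the nearest preceding entity mention. The two-term vocabularies, the identifiers, and the nearest-entity rule are all illustrative assumptions, far cruder than the paper's actual pipeline.

```python
# Toy EQ extraction: match words in a Flora-style description against
# small entity (PO-like) and quality (PATO-like) vocabularies, pairing
# each quality with the nearest preceding entity.

entities = {"leaf": "PO:0025034", "petal": "PO:0009032"}
qualities = {"ovate": "PATO:0001891", "red": "PATO:0000322"}

def extract_eq(description):
    """Return (entity_id, quality_id) pairs found in the description."""
    tokens = description.lower().replace(",", " ").split()
    pairs, current_entity = [], None
    for token in tokens:
        if token in entities:
            current_entity = token
        elif token in qualities and current_entity:
            pairs.append((entities[current_entity], qualities[token]))
    return pairs

print(extract_eq("Leaf ovate, petal red"))
```

Each extracted EQ pair corresponds to one candidate FLOPO phenotype class ("ovate leaf", "red petal"), which is then placed in the ontology by reasoning over PO parthood and PATO subclass relations.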

  8. Testing for ontological errors in probabilistic forecasting models of natural systems

    PubMed Central

    Marzocchi, Warner; Jordan, Thomas H.

    2014-01-01

    Probabilistic forecasting models describe the aleatory variability of natural systems as well as our epistemic uncertainty about how the systems work. Testing a model against observations exposes ontological errors in the representation of a system and its uncertainties. We clarify several conceptual issues regarding the testing of probabilistic forecasting models for ontological errors: the ambiguity of the aleatory/epistemic dichotomy, the quantification of uncertainties as degrees of belief, the interplay between Bayesian and frequentist methods, and the scientific pathway for capturing predictability. We show that testability of the ontological null hypothesis derives from an experimental concept, external to the model, that identifies collections of data, observed and not yet observed, that are judged to be exchangeable when conditioned on a set of explanatory variables. These conditional exchangeability judgments specify observations with well-defined frequencies. Any model predicting these behaviors can thus be tested for ontological error by frequentist methods; e.g., using P values. In the forecasting problem, prior predictive model checking, rather than posterior predictive checking, is desirable because it provides more severe tests. We illustrate experimental concepts using examples from probabilistic seismic hazard analysis. Severe testing of a model under an appropriate set of experimental concepts is the key to model validation, in which we seek to know whether a model replicates the data-generating process well enough to be sufficiently reliable for some useful purpose, such as long-term seismic forecasting. Pessimistic views of system predictability fail to recognize the power of this methodology in separating predictable behaviors from those that are not. PMID:25097265

  9. The Representation of Heart Development in the Gene Ontology

    PubMed Central

    Khodiyar, Varsha K.; Hill, David P.; Howe, Doug; Berardini, Tanya Z.; Tweedie, Susan; Talmud, Philippa J.; Breckenridge, Ross; Bhattarcharya, Shoumo; Riley, Paul; Scambler, Peter; Lovering, Ruth C.

    2012-01-01

    An understanding of heart development is critical in any systems biology approach to cardiovascular disease. The interpretation of data generated from high-throughput technologies (such as microarray and proteomics) is also essential to this approach. However, characterizing the role of genes in the processes underlying heart development and cardiovascular disease involves the non-trivial task of data analysis and integration of previous knowledge. The Gene Ontology (GO) Consortium provides structured controlled biological vocabularies that are used to summarize previous functional knowledge for gene products across all species. One aspect of GO describes biological processes, such as development and signaling. In order to support high-throughput cardiovascular research, we have initiated an effort to fully describe heart development in GO; expanding the number of GO terms describing heart development from 12 to over 280. This new ontology describes heart morphogenesis, the differentiation of specific cardiac cell types, and the involvement of signaling pathways in heart development and aligns GO with the current views of the heart development research community and its representation in the literature. This extension of GO allows gene product annotators to comprehensively capture the genetic program leading to the developmental progression of the heart. This will enable users to integrate heart development data across species, resulting in the comprehensive retrieval of information about this subject. The revised GO structure, combined with gene product annotations, should improve the interpretation of data from high-throughput methods in a variety of cardiovascular research areas, including heart development, congenital cardiac disease, and cardiac stem cell research. Additionally, we invite the heart development community to contribute to the expansion of this important dataset for the benefit of future research in this area. PMID:21419760

  10. EXACT2: the semantics of biomedical protocols

    PubMed Central

    2014-01-01

Background The reliability and reproducibility of experimental procedures is a cornerstone of scientific practice. There is a pressing technological need for the better representation of biomedical protocols to enable other agents (human or machine) to reproduce results. A framework that ensures that all information required for the replication of experimental protocols is recorded is essential to achieving reproducibility. Methods We have developed the ontology EXACT2 (EXperimental ACTions), designed to capture the full semantics of biomedical protocols required for their reproducibility. To construct EXACT2 we manually inspected hundreds of published and commercial biomedical protocols from several areas of biomedicine. After establishing a clear pattern for extracting the required information, we utilized text-mining tools to translate the protocols into a machine-amenable format. We verified the utility of EXACT2 through the successful processing of previously 'unseen' protocols (i.e., protocols not used for the construction of EXACT2). Results The paper reports on a fundamentally new version of EXACT2 that supports the semantically defined representation of biomedical protocols. The ability of EXACT2 to capture the semantics of biomedical procedures was verified through a text-mining use case, in which EXACT2 served as a reference model for text-mining tools to identify terms pertinent to experimental actions, and their properties, in biomedical protocols expressed in natural language. An EXACT2-based framework for the translation of biomedical protocols to a machine-amenable format is proposed. Conclusions The EXACT2 ontology is sufficient to record, in a machine-processable form, the essential information about biomedical protocols. EXACT2 defines explicit semantics of experimental actions and can be used by various computer applications.
It can serve as a reference model for the translation of biomedical protocols in natural language into a semantically-defined format. PMID:25472549
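The text-mining step described above matches protocol text against an ontology of experimental actions and their properties. A deliberately naive sketch of that idea follows; the mini-vocabulary, property names and example sentence are invented for illustration and are not drawn from EXACT2 itself, which uses far richer semantics.

```python
import re

# Invented mini-vocabulary of experimental actions with expected properties,
# in the spirit of (but much simpler than) EXACT2's action hierarchy.
ACTIONS = {
    "centrifuge": ["speed", "duration"],
    "incubate": ["temperature", "duration"],
    "add": ["volume"],
}

def extract_actions(sentence: str) -> list:
    """Return (action, numeric property mentions) pairs found in a protocol sentence."""
    found = []
    lowered = sentence.lower()
    for action in ACTIONS:
        if re.search(rf"\b{action}\b", lowered):
            # Naive property detection: a number followed by a unit-like token.
            values = re.findall(r"\d+\s*(?:min|h|ml|rpm|c)\b", lowered)
            found.append((action, values))
    return found

protocol = "Centrifuge the sample at 3000 rpm for 10 min, then incubate at 37 C."
for action, values in extract_actions(protocol):
    print(action, values)
```

A real EXACT2-based pipeline would disambiguate which value belongs to which action and map both onto ontology classes; this sketch only shows the vocabulary-driven matching step.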

  11. [Artificial intelligence meeting neuropsychology. Semantic memory in normal and pathological aging].

    PubMed

    Aimé, Xavier; Charlet, Jean; Maillet, Didier; Belin, Catherine

    2015-03-01

Artificial intelligence (AI) is the subject of much research, but also of many fantasies. It aims to reproduce human intelligence in its capacities for learning, knowledge storage and computation. In 2014, the Defense Advanced Research Projects Agency (DARPA) started the Restoring Active Memory (RAM) program, which attempts to develop implantable technology to bridge gaps in the injured brain and restore normal memory function to people with memory loss caused by injury or disease. In another field of AI, computational ontologies (formal and shared conceptualizations) try to model knowledge in order to represent the structured and unambiguous meaning of the concepts of a target domain. The aim of these structures is to ensure a consensual understanding of their meaning and a univocal use (the same concept is used by all to categorize the same individuals). The first knowledge representations in AI were largely based on models of semantic memory. Semantic memory, a component of long-term memory, is the memory of words, ideas and concepts. It is the only declarative memory system that is remarkably resistant to the effects of age. By contrast, non-specific cognitive changes may decrease the performance of elderly people on various tasks, reflecting difficulties of access to semantic representations rather than degradation of the semantic store itself. Some dementias, such as semantic dementia and Alzheimer's disease, are linked to an alteration of semantic memory. Using the computational-ontology model, we propose in this paper a formal and relatively lightweight modeling in the service of neuropsychology: 1) for the practitioner, as decision-support systems; 2) for the patient, as an externalized cognitive prosthesis; and 3) for the researcher, as a means to study semantic memory.

  12. OntoFox: web-based support for ontology reuse

    PubMed Central

    2010-01-01

    Background Ontology development is a rapidly growing area of research, especially in the life sciences domain. To promote collaboration and interoperability between different projects, the OBO Foundry principles require that these ontologies be open and non-redundant, avoiding duplication of terms through the re-use of existing resources. As current options to do so present various difficulties, a new approach, MIREOT, allows specifying import of single terms. Initial implementations allow for controlled import of selected annotations and certain classes of related terms. Findings OntoFox http://ontofox.hegroup.org/ is a web-based system that allows users to input terms, fetch selected properties, annotations, and certain classes of related terms from the source ontologies and save the results using the RDF/XML serialization of the Web Ontology Language (OWL). Compared to an initial implementation of MIREOT, OntoFox allows additional and more easily configurable options for selecting and rewriting annotation properties, and for inclusion of all or a computed subset of terms between low and top level terms. Additional methods for including related classes include a SPARQL-based ontology term retrieval algorithm that extracts terms related to a given set of signature terms and an option to extract the hierarchy rooted at a specified ontology term. OntoFox's output can be directly imported into a developer's ontology. OntoFox currently supports term retrieval from a selection of 15 ontologies accessible via SPARQL endpoints and allows users to extend this by specifying additional endpoints. An OntoFox application in the development of the Vaccine Ontology (VO) is demonstrated. Conclusions OntoFox provides a timely publicly available service, providing different options for users to collect terms from external ontologies, making them available for reuse by import into client OWL ontologies. PMID:20569493
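One of the OntoFox options described above, extracting the hierarchy rooted at a specified ontology term, amounts to a transitive traversal of subClassOf edges. A minimal sketch of that traversal over a toy edge set (the term IDs below are invented placeholders, not real ontology terms):

```python
from collections import deque

# Toy subClassOf edges (child -> parent); IDs are invented for illustration.
SUBCLASS_OF = [
    ("VO:0000002", "VO:0000001"),
    ("VO:0000003", "VO:0000001"),
    ("VO:0000004", "VO:0000003"),
    ("GO:0008150", "OWL:Thing"),
]

def hierarchy_rooted_at(root: str, edges) -> set:
    """Return the root plus every term reachable from it via inverse subClassOf."""
    children = {}
    for child, parent in edges:
        children.setdefault(parent, []).append(child)
    seen, queue = {root}, deque([root])
    while queue:  # breadth-first descent through the subsumption hierarchy
        term = queue.popleft()
        for child in children.get(term, []):
            if child not in seen:
                seen.add(child)
                queue.append(child)
    return seen

print(sorted(hierarchy_rooted_at("VO:0000001", SUBCLASS_OF)))
```

In OntoFox the same selection is expressed against a SPARQL endpoint rather than an in-memory edge list, but the extracted set is the same: the specified term and all of its descendants.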

  13. Toxicology ontology perspectives.

    PubMed

    Hardy, Barry; Apic, Gordana; Carthew, Philip; Clark, Dominic; Cook, David; Dix, Ian; Escher, Sylvia; Hastings, Janna; Heard, David J; Jeliazkova, Nina; Judson, Philip; Matis-Mitchell, Sherri; Mitic, Dragana; Myatt, Glenn; Shah, Imran; Spjuth, Ola; Tcheremenskaia, Olga; Toldo, Luca; Watson, David; White, Andrew; Yang, Chihae

    2012-01-01

    The field of predictive toxicology requires the development of open, public, computable, standardized toxicology vocabularies and ontologies to support the applications required by in silico, in vitro, and in vivo toxicology methods and related analysis and reporting activities. In this article we review ontology developments based on a set of perspectives showing how ontologies are being used in predictive toxicology initiatives and applications. Perspectives on resources and initiatives reviewed include OpenTox, eTOX, Pistoia Alliance, ToxWiz, Virtual Liver, EU-ADR, BEL, ToxML, and Bioclipse. We also review existing ontology developments in neighboring fields that can contribute to establishing an ontological framework for predictive toxicology. A significant set of resources is already available to provide a foundation for an ontological framework for 21st century mechanistic-based toxicology research. Ontologies such as ToxWiz provide a basis for application to toxicology investigations, whereas other ontologies under development in the biological, chemical, and biomedical communities could be incorporated in an extended future framework. OpenTox has provided a semantic web framework for the implementation of such ontologies into software applications and linked data resources. Bioclipse developers have shown the benefit of interoperability obtained through ontology by being able to link their workbench application with remote OpenTox web services. Although these developments are promising, an increased international coordination of efforts is greatly needed to develop a more unified, standardized, and open toxicology ontology framework.

  14. Expert2OWL: A Methodology for Pattern-Based Ontology Development.

    PubMed

    Tahar, Kais; Xu, Jie; Herre, Heinrich

    2017-01-01

    The formalization of expert knowledge enables a broad spectrum of applications employing ontologies as underlying technology. These include eLearning, Semantic Web and expert systems. However, the manual construction of such ontologies is time-consuming and thus expensive. Moreover, experts are often unfamiliar with the syntax and semantics of formal ontology languages such as OWL and usually have no experience in developing formal ontologies. To overcome these barriers, we developed a new method and tool, called Expert2OWL that provides efficient features to support the construction of OWL ontologies using GFO (General Formal Ontology) as a top-level ontology. This method allows a close and effective collaboration between ontologists and domain experts. Essentially, this tool integrates Excel spreadsheets as part of a pattern-based ontology development and refinement process. Expert2OWL enables us to expedite the development process and modularize the resulting ontologies. We applied this method in the field of Chinese Herbal Medicine (CHM) and used Expert2OWL to automatically generate an accurate Chinese Herbology ontology (CHO). The expressivity of CHO was tested and evaluated using ontology query languages SPARQL and DL. CHO shows promising results and can generate answers to important scientific questions such as which Chinese herbal formulas contain which substances, which substances treat which diseases, and which ones are the most frequently used in CHM.
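The spreadsheet-driven development process described above can be sketched as a transformation from tabular expert input to ontology statements. The rows, column names, prefix and relation names below are invented for the sketch (emitting Turtle strings rather than using an OWL API), so it illustrates the pattern, not Expert2OWL's actual output.

```python
import csv
import io

# Invented example rows in the spirit of Expert2OWL's Excel input for CHO.
SPREADSHEET = """formula,substance,treats
Liu Wei Di Huang Wan,Rehmannia,kidney deficiency
Yin Qiao San,Honeysuckle,common cold
"""

def rows_to_turtle(text: str) -> str:
    """Turn each spreadsheet row into Turtle triples via a fixed design pattern."""
    lines = ["@prefix cho: <http://example.org/cho#> ."]
    for row in csv.DictReader(io.StringIO(text)):
        formula = row["formula"].replace(" ", "_")  # make a legal local name
        lines.append(f"cho:{formula} a cho:Formula ;")
        lines.append(f"    cho:containsSubstance cho:{row['substance']} ;")
        lines.append(f"    cho:treats \"{row['treats']}\" .")
    return "\n".join(lines)

print(rows_to_turtle(SPREADSHEET))
```

The point of the pattern-based approach is that domain experts only edit the spreadsheet; the generator applies the same template to every row, which is what makes answers to questions like "which formulas contain which substances" queryable afterwards.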

  15. Using ontology databases for scalable query answering, inconsistency detection, and data integration

    PubMed Central

    Dou, Dejing

    2011-01-01

    An ontology database is a basic relational database management system that models an ontology plus its instances. To reason over the transitive closure of instances in the subsumption hierarchy, for example, an ontology database can either unfold views at query time or propagate assertions using triggers at load time. In this paper, we use existing benchmarks to evaluate our method—using triggers—and we demonstrate that by forward computing inferences, we not only improve query time, but the improvement appears to cost only more space (not time). However, we go on to show that the true penalties were simply opaque to the benchmark, i.e., the benchmark inadequately captures load-time costs. We have applied our methods to two case studies in biomedicine, using ontologies and data from genetics and neuroscience to illustrate two important applications: first, ontology databases answer ontology-based queries effectively; second, using triggers, ontology databases detect instance-based inconsistencies—something not possible using views. Finally, we demonstrate how to extend our methods to perform data integration across multiple, distributed ontology databases. PMID:22163378
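The trigger-based forward computation described above can be sketched with SQLite: an AFTER INSERT trigger propagates each instance assertion to all superclasses at load time, so a subsumption query becomes a plain select. The schema and class names are illustrative, not the paper's actual benchmark schema.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("PRAGMA recursive_triggers = ON")  # let propagated inserts re-fire the trigger
conn.executescript("""
CREATE TABLE subclass (sub TEXT, sup TEXT);
CREATE TABLE assertion (ind TEXT, cls TEXT, PRIMARY KEY (ind, cls));
-- Load-time propagation: asserting ind:cls also asserts ind in every superclass.
CREATE TRIGGER propagate AFTER INSERT ON assertion
BEGIN
    INSERT OR IGNORE INTO assertion (ind, cls)
    SELECT NEW.ind, sup FROM subclass WHERE sub = NEW.cls;
END;
""")
# Toy subsumption hierarchy: Dog < Mammal < Animal.
conn.executemany("INSERT INTO subclass VALUES (?, ?)",
                 [("Dog", "Mammal"), ("Mammal", "Animal")])
conn.execute("INSERT INTO assertion VALUES ('fido', 'Dog')")
classes = {row[0] for row in
           conn.execute("SELECT cls FROM assertion WHERE ind = 'fido'")}
print(classes)  # the transitive closure is already materialized at load time
```

This also mirrors the paper's point about inconsistency detection: because inferred assertions are materialized rows, a disjointness violation can be caught by a trigger at insert time, something a view-unfolding approach cannot do.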

  16. Organizational Knowledge Transfer Using Ontologies and a Rule-Based System

    NASA Astrophysics Data System (ADS)

    Okabe, Masao; Yoshioka, Akiko; Kobayashi, Keido; Yamaguchi, Takahira

In recent automated and integrated manufacturing, so-called intelligence skill is becoming more and more important, and its efficient transfer to next-generation engineers is one of the urgent issues. In this paper, we propose a new approach that avoids costly OJT (on-the-job training): the combined use of a domain ontology, a rule ontology and a rule-based system. Intelligence skill can be decomposed into pieces of simple engineering rules. A rule ontology consists of these engineering rules as primitives and the semantic relations among them. A domain ontology consists of the technical terms in the engineering rules and the semantic relations among them. A rule ontology helps novices get the total picture of the intelligence skill, and a domain ontology helps them understand the exact meanings of the engineering rules. A rule-based system helps domain experts externalize their tacit intelligence skill into ontologies and also helps novices internalize it. As a case study, we applied our proposal to an actual job at a remote control and maintenance office of hydroelectric power stations in Tokyo Electric Power Co., Inc. We also conducted an evaluation experiment for this case study, and the result supports our proposal.
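Decomposing intelligence skill into simple engineering rules executed by a rule-based system is, at its core, classic forward chaining. A minimal sketch follows; the rule names, premises and conclusions are invented stand-ins, not rules from the hydroelectric case study.

```python
# Each rule: (name, premises, conclusion). Forward chaining repeatedly fires
# rules whose premises are all satisfied, until no new facts can be derived.
RULES = [
    ("r1", {"water_level_high", "gate_closed"}, "open_gate"),
    ("r2", {"open_gate"}, "notify_operator"),
    ("r3", {"turbine_vibration"}, "schedule_inspection"),
]

def forward_chain(facts: set, rules) -> set:
    """Return the closure of `facts` under `rules`."""
    derived = set(facts)
    changed = True
    while changed:
        changed = False
        for _name, premises, conclusion in rules:
            if premises <= derived and conclusion not in derived:
                derived.add(conclusion)
                changed = True
    return derived

result = forward_chain({"water_level_high", "gate_closed"}, RULES)
print(sorted(result))
```

In the paper's proposal, the rule ontology additionally records semantic relations *among* such rules (which rule specializes or presupposes which), which is what gives novices the "total picture" that a flat rule list lacks.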

  17. A Scientific Software Product Line for the Bioinformatics domain.

    PubMed

    Costa, Gabriella Castro B; Braga, Regina; David, José Maria N; Campos, Fernanda

    2015-08-01

Most specialized users (scientists) who use bioinformatics applications do not have suitable training in software development. Software Product Line (SPL) employs the concept of reuse, being defined as a set of systems developed from a common set of base artifacts. In some contexts, such as bioinformatics applications, it is advantageous to develop a collection of related software products using the SPL approach. If software products are similar enough, it is possible to predict their commonalities and differences and then reuse these common features to support the development of new applications in the bioinformatics area. This paper presents the PL-Science approach, which considers the context of SPL and ontology in order to assist scientists in defining a scientific experiment and specifying a workflow that encompasses the bioinformatics applications of a given experiment. The paper also focuses on the use of ontologies to enable the use of Software Product Lines in biological domains. In the context of this paper, a Scientific Software Product Line (SSPL) differs from a Software Product Line in that the SSPL uses an abstract scientific workflow model. This workflow is defined according to a scientific domain, and from this abstract workflow model the products (scientific applications/algorithms) are instantiated. Through the use of ontology as a knowledge representation model, we can impose domain restrictions and add semantic aspects in order to facilitate the selection and organization of bioinformatics workflows in a Scientific Software Product Line. The use of ontologies enables not only the expression of formal restrictions but also inferences over these restrictions, considering that a scientific domain needs a formal specification. This paper presents the development of the PL-Science approach, encompassing a methodology and an infrastructure, and also presents an evaluation of the approach.
This evaluation presents case studies in bioinformatics, which were conducted in two renowned research institutions in Brazil. Copyright © 2015 Elsevier Inc. All rights reserved.

  18. An Ontology Based Approach to Information Security

    NASA Astrophysics Data System (ADS)

    Pereira, Teresa; Santos, Henrique

The semantic structuring of knowledge based on ontology approaches has been increasingly adopted by experts from diverse domains. Recently, ontologies have moved from the philosophical and metaphysical disciplines to being used in the construction of models that describe a specific theory of a domain. The development and use of ontologies promote the creation of a unique standard to represent concepts within a specific knowledge domain. In the scope of information security systems, the use of an ontology to formalize and represent the concepts of security information challenges the mechanisms and techniques currently used. This paper presents a conceptual implementation model of an ontology defined in the security domain. The model presented contains the semantic concepts, based on the information security standard ISO/IEC_JTC1, and their relationships to other concepts defined in a subset of the information security domain.

  19. Semantics-Based Interoperability Framework for the Geosciences

    NASA Astrophysics Data System (ADS)

    Sinha, A.; Malik, Z.; Raskin, R.; Barnes, C.; Fox, P.; McGuinness, D.; Lin, K.

    2008-12-01

Interoperability between heterogeneous data, tools and services is required to transform data into knowledge. To meet geoscience-oriented societal challenges, such as the forcing of climate change induced by volcanic eruptions, we suggest the need to develop semantic interoperability for data, services, and processes. Because such scientific endeavors require the integration of multiple databases associated with global enterprises, implicit semantics-based integration is impossible. Instead, explicit semantics are needed to facilitate interoperability and integration. Although different types of integration models are available (syntactic or semantic), we suggest that semantic interoperability is likely to be the most successful pathway. Clearly, the geoscience community would benefit from utilizing existing XML-based data models, such as GeoSciML, WaterML, etc., to rapidly advance semantic interoperability and integration. We recognize that such integration will require a "meanings-based search, reasoning and information brokering", which will be facilitated through inter-ontology relationships (ontologies defined for each discipline). We suggest that markup languages (MLs) and ontologies can be seen as "data integration facilitators" working at different abstraction levels. Therefore, we propose to use an ontology-based data registration and discovery approach to complement markup languages through semantic data enrichment. Ontologies allow the use of formal and descriptive logic statements, which permit expressive query capabilities for data integration through reasoning. We have developed domain ontologies (EPONT) to capture the concepts behind data. EPONT ontologies are associated with existing ontologies such as SUMO, DOLCE and SWEET. Although significant effort has gone into developing data (object) ontologies, we advance the idea of developing semantic frameworks for additional ontologies that deal with processes and services.
This evolutionary step will facilitate the integrative capabilities of scientists as we examine the relationships between data and external factors such as processes that may influence our understanding of "why" certain events happen. We emphasize the need to go from analysis of data to concepts related to scientific principles of thermodynamics, kinetics, heat flow, mass transfer, etc. Towards meeting these objectives, we report on a pair of related service engines: DIA (Discovery, integration and analysis), and SEDRE (Semantically-Enabled Data Registration Engine) that utilize ontologies for semantic interoperability and integration.

  20. The Human EST Ontology Explorer: a tissue-oriented visualization system for ontologies distribution in human EST collections.

    PubMed

    Merelli, Ivan; Caprera, Andrea; Stella, Alessandra; Del Corvo, Marcello; Milanesi, Luciano; Lazzari, Barbara

    2009-10-15

The NCBI dbEST currently contains more than eight million human Expressed Sequence Tags (ESTs). This wide collection represents an important source of information for gene expression studies, provided it can be inspected according to biologically relevant criteria. EST data can be browsed using different dedicated web resources, which allow one to investigate library-specific gene expression levels and to make comparisons among libraries, highlighting significant differences in gene expression. Nonetheless, no tool is available to examine the distribution of quantitative EST collections across Gene Ontology (GO) categories, nor to retrieve information concerning library-dependent EST involvement in metabolic pathways. In this work we present the Human EST Ontology Explorer (HEOE) http://www.itb.cnr.it/ptp/human_est_explorer, a web facility for the comparison of expression levels among libraries from several healthy and diseased tissues. The HEOE provides library-dependent statistics on the distribution of sequences in the GO Directed Acyclic Graph (DAG) that can be browsed at each GO hierarchical level. The tool is based on large-scale BLAST annotation of EST sequences. Due to the huge number of input sequences, this BLAST analysis was performed with the aid of grid computing technology, which is particularly suitable for addressing data-parallel tasks. Relying on the resulting annotation, library-specific distributions of ESTs in the GO graph were inferred. A pathway-based search interface was also implemented for a quick evaluation of the representation of libraries in metabolic pathways. EST processing steps were integrated into a semi-automatic procedure that relies on Perl scripts and stores results in a MySQL database. A PHP-based web interface offers the possibility to simultaneously visualize, retrieve and compare data from the different libraries. Statistically significant differences in GO categories among user-selected libraries can also be computed.
The HEOE provides an alternative and complementary way to inspect EST expression levels with respect to approaches currently offered by other resources. Furthermore, BLAST computation on the whole human EST dataset was a suitable test of grid scalability in the context of large-scale bioinformatics analysis. The HEOE currently comprises sequence analysis from 70 non-normalized libraries, representing a comprehensive overview on healthy and unhealthy tissues. As the analysis procedure can be easily applied to other libraries, the number of represented tissues is intended to increase.
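The significance computation mentioned above, comparing a GO category's representation between two libraries, can be sketched as a two-proportion z-test. This is one plausible choice of statistic (the abstract does not specify which test the HEOE uses), and the counts below are invented.

```python
import math

def two_proportion_z_test(k1: int, n1: int, k2: int, n2: int) -> tuple:
    """Compare the fraction of ESTs in a GO category between two libraries.

    Returns (z, two_sided_p) using the pooled-proportion z statistic.
    """
    p1, p2 = k1 / n1, k2 / n2
    pooled = (k1 + k2) / (n1 + n2)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    z = (p1 - p2) / se
    p_value = math.erfc(abs(z) / math.sqrt(2))  # two-sided normal tail probability
    return z, p_value

# Hypothetical counts: 120 of 2000 ESTs annotated to "apoptosis" in a tumor
# library versus 60 of 2500 in a healthy-tissue library.
z, p = two_proportion_z_test(120, 2000, 60, 2500)
print(f"z = {z:.2f}, p = {p:.4g}")
```

With counts this size a normal approximation is reasonable; for the small counts typical of deep GO levels, an exact test would be the safer choice.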

  1. TermGenie – a web-application for pattern-based ontology class generation

    DOE PAGES

    Dietze, Heiko; Berardini, Tanya Z.; Foulger, Rebecca E.; ...

    2014-01-01

Biological ontologies are continually growing and improving from requests for new classes (terms) by biocurators. These ontology requests can frequently create bottlenecks in the biocuration process, as ontology developers struggle to keep up while manually processing these requests and creating classes. TermGenie allows biocurators to generate new classes based on formally specified design patterns or templates. The system is web-based and can be accessed by any authorized curator through a web browser. Automated rules and reasoning engines are used to ensure validity, uniqueness and relationships to pre-existing classes. In the last 4 years the Gene Ontology TermGenie generated 4715 new classes, about 51.4% of all new classes created. The immediate generation of permanent identifiers proved not to be an issue, with only 70 (1.4%) obsoleted classes. TermGenie is a web-based class-generation system that complements traditional ontology development tools. All classes added through pre-defined templates are guaranteed to have OWL equivalence axioms that are used for automatic classification and, in some cases, inter-ontology linkage. At the same time, the system is simple and intuitive and can be used by most biocurators without extensive training.

  3. TermGenie - a web-application for pattern-based ontology class generation.

    PubMed

    Dietze, Heiko; Berardini, Tanya Z; Foulger, Rebecca E; Hill, David P; Lomax, Jane; Osumi-Sutherland, David; Roncaglia, Paola; Mungall, Christopher J

    2014-01-01

Biological ontologies are continually growing and improving from requests for new classes (terms) by biocurators. These ontology requests can frequently create bottlenecks in the biocuration process, as ontology developers struggle to keep up while manually processing these requests and creating classes. TermGenie allows biocurators to generate new classes based on formally specified design patterns or templates. The system is web-based and can be accessed by any authorized curator through a web browser. Automated rules and reasoning engines are used to ensure validity, uniqueness and relationships to pre-existing classes. In the last 4 years the Gene Ontology TermGenie generated 4715 new classes, about 51.4% of all new classes created. The immediate generation of permanent identifiers proved not to be an issue, with only 70 (1.4%) obsoleted classes. TermGenie is a web-based class-generation system that complements traditional ontology development tools. All classes added through pre-defined templates are guaranteed to have OWL equivalence axioms that are used for automatic classification and in some cases inter-ontology linkage. At the same time, the system is simple and intuitive and can be used by most biocurators without extensive training.
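Template-driven class generation of the kind TermGenie performs can be sketched as filling a design pattern with an existing class to produce a label and an OWL equivalence axiom (here a Manchester-style string). The pattern, minted IDs and property names below are illustrative, not TermGenie's actual templates.

```python
# A toy "regulation of X" design pattern in the spirit of TermGenie templates.
PATTERN = {
    "name": "regulation_of_process",
    "label": "regulation of {target_label}",
    "equivalent_to": "'regulation' and ('regulates' some {target_id})",
}

def generate_class(next_id: int, target_id: str, target_label: str,
                   existing_labels: set) -> dict:
    """Fill the pattern; reject duplicates the way a validity check would."""
    label = PATTERN["label"].format(target_label=target_label)
    if label in existing_labels:
        raise ValueError(f"class already exists: {label}")
    return {
        "id": f"GO:{next_id:07d}",  # permanent identifier minted immediately
        "label": label,
        "equivalent_to": PATTERN["equivalent_to"].format(target_id=target_id),
    }

new_cls = generate_class(2000001, "GO:0006915", "apoptotic process",
                         existing_labels={"regulation of cell cycle"})
print(new_cls["id"], "|", new_cls["label"], "|", new_cls["equivalent_to"])
```

Because every generated class carries an equivalence axiom in a fixed shape, a reasoner can place it automatically in the hierarchy, which is what lets curators mint classes without an ontology developer in the loop.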

  4. From frames to OWL2: Converting the Foundational Model of Anatomy.

    PubMed

    Detwiler, Landon T; Mejino, Jose L V; Brinkley, James F

    2016-05-01

The Foundational Model of Anatomy (FMA) [Rosse C, Mejino JLV. A reference ontology for bioinformatics: the Foundational Model of Anatomy. J. Biomed. Inform. 2003;36:478-500] is an ontology that represents canonical anatomy at levels ranging from the entire body to biological macromolecules. It has rapidly become the primary reference ontology for human anatomy and a template for model organisms. Prior to this work, the FMA was developed in a knowledge modeling language known as Protégé Frames. Frames is an intuitive representational language, but is no longer the industry standard. Recognizing the need for an official version of the FMA in the more modern semantic web language OWL2 (hereafter referred to as OWL), the objective of this work was to create a generalizable Frames-to-OWL conversion tool, to use the tool to convert the FMA to OWL, to "clean up" the converted FMA so that it classifies under an EL reasoner, and then to do all further development in OWL. The conversion tool is a Java application that uses the Protégé knowledge representation API for interacting with the initial Frames ontology, and uses the OWL-API for producing new statements (axioms, etc.) in OWL. The converter is relation-centric. The conversion is configurable, on a property-by-property basis, via user-specifiable XML configuration files. The best conversion for each property was determined in conjunction with the FMA knowledge author. The converter is potentially generalizable, which we partially demonstrate by using it to convert our Ontology of Craniofacial Development and Malformation as well as the FMA. Post-conversion cleanup involved using the Explain feature of Protégé to trace classification errors under the ELK reasoner in Protégé, fixing the errors, then re-running the reasoner. We are currently doing all our development in the converted and cleaned-up version of the FMA.
The FMA (updated every 3 months) is available via our FMA web page http://si.washington.edu/projects/fma, which also provides access to mailing lists, an issue tracker, a SPARQL endpoint (updated every week), and an online browser. The converted OCDM is available at http://www.si.washington.edu/projects/ocdm. The conversion code is open source, and available at http://purl.org/sig/software/frames2owl. Prior to the post-conversion cleanup 73% of the more than 100,000 classes were unsatisfiable. After correction of six types of errors no classes remained unsatisfiable. Because our FMA conversion captures all or most of the information in the Frames version, is the only complete OWL version that classifies under an EL reasoner, and is maintained by the FMA authors themselves, we propose that this version should be the only official release version of the FMA in OWL, supplanting all other versions. Although several issues remain to be resolved post-conversion, release of a single, standardized version of the FMA in OWL will greatly facilitate its use in informatics research and in the development of a global knowledge base within the semantic web. Because of the fundamental nature of anatomy in both understanding and organizing biomedical information, and because of the importance of the FMA in particular in representing human anatomy, the FMA in OWL should greatly accelerate the development of an anatomically based structural information framework for organizing and linking a large amount of biomedical information. Copyright © 2016 Elsevier B.V. All rights reserved.

  5. OntologyWidget - a reusable, embeddable widget for easily locating ontology terms.

    PubMed

    Beauheim, Catherine C; Wymore, Farrell; Nitzberg, Michael; Zachariah, Zachariah K; Jin, Heng; Skene, J H Pate; Ball, Catherine A; Sherlock, Gavin

    2007-09-13

    Biomedical ontologies are being widely used to annotate biological data in a computer-accessible, consistent and well-defined manner. However, due to their size and complexity, annotating data with appropriate terms from an ontology is often challenging for experts and non-experts alike, because there exist few tools that allow one to quickly find relevant ontology terms to easily populate a web form. We have produced a tool, OntologyWidget, which allows users to rapidly search for and browse ontology terms. OntologyWidget can easily be embedded in other web-based applications. OntologyWidget is written using AJAX (Asynchronous JavaScript and XML) and has two related elements. The first is a dynamic auto-complete ontology search feature. As a user enters characters into the search box, the appropriate ontology is queried remotely for terms that match the typed-in text, and the query results populate a drop-down list with all potential matches. Upon selection of a term from the list, the user can locate this term within a generic and dynamic ontology browser, which comprises the second element of the tool. The ontology browser shows the paths from a selected term to the root as well as parent/child tree hierarchies. We have implemented web services at the Stanford Microarray Database (SMD), which provide the OntologyWidget with access to over 40 ontologies from the Open Biological Ontology (OBO) website [1]. Each ontology is updated weekly. Adopters of the OntologyWidget can either use SMD's web services, or elect to rely on their own. Deploying the OntologyWidget can be accomplished in three simple steps: (1) install Apache Tomcat [2] on one's web server, (2) download and install the OntologyWidget servlet stub that provides access to the SMD ontology web services, and (3) create an html (HyperText Markup Language) file that refers to the OntologyWidget using a simple, well-defined format.
We have developed OntologyWidget, an easy-to-use ontology search and display tool that can be used on any web page by creating a simple html description. OntologyWidget provides a rapid auto-complete search function paired with an interactive tree display. We have developed a web service layer that communicates between the web page interface and a database of ontology terms. We currently store 40 of the ontologies from the OBO website [1], as well as several others. These ontologies are automatically updated on a weekly basis. OntologyWidget can be used in any web-based application to take advantage of the ontologies we provide via web services or any other ontology that is provided elsewhere in the correct format. The full source code for the JavaScript and description of the OntologyWidget is available from http://smd.stanford.edu/ontologyWidget/.
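The server side of the auto-complete search can be sketched as a simple prefix match over term labels and synonyms. The GO identifiers and labels below are illustrative examples, not a statement about SMD's actual data or API:

```python
# Minimal sketch of an auto-complete ontology search backend: given
# typed-in text, return terms whose label or synonym starts with it.

TERMS = {
    "GO:0006915": ["apoptotic process", "apoptosis"],
    "GO:0008219": ["cell death"],
    "GO:0012501": ["programmed cell death"],
}

def autocomplete(prefix, terms=TERMS):
    """Return (id, label) pairs whose label starts with prefix (case-insensitive)."""
    p = prefix.lower()
    hits = []
    for term_id, labels in terms.items():
        for label in labels:
            if label.lower().startswith(p):
                hits.append((term_id, label))
    return sorted(hits, key=lambda h: h[1])

print(autocomplete("apopt"))
```

In the real widget this lookup happens remotely via an AJAX request as the user types, and the results populate the drop-down list.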

  6. Extracting Cross-Ontology Weighted Association Rules from Gene Ontology Annotations.

    PubMed

    Agapito, Giuseppe; Milano, Marianna; Guzzi, Pietro Hiram; Cannataro, Mario

    2016-01-01

    Gene Ontology (GO) is a structured repository of concepts (GO Terms) that are associated to one or more gene products through a process referred to as annotation. The analysis of annotated data is an important opportunity for bioinformatics. Among the different analysis approaches, association rules (AR) provide useful knowledge by discovering biologically relevant, previously unknown associations between GO terms. In a previous work, we introduced GO-WAR (Gene Ontology-based Weighted Association Rules), a methodology for extracting weighted association rules from ontology-based annotated datasets. Here we adapt the GO-WAR algorithm to mine cross-ontology association rules, i.e., rules that involve GO terms from the three sub-ontologies of GO. We conduct a thorough performance evaluation of GO-WAR by mining publicly available GO annotated datasets, showing how GO-WAR outperforms current state-of-the-art approaches.
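The core quantities behind a cross-ontology weighted rule can be sketched as below. GO-WAR uses information-content-like term weights; the annotation sets, term identifiers, and weight values here are invented for illustration:

```python
# Back-of-the-envelope sketch of scoring one cross-ontology association
# rule A -> B over gene annotation sets, with a per-term weight.

annotations = {            # gene -> annotated GO terms (invented)
    "g1": {"GO:BP:1", "GO:MF:7"},
    "g2": {"GO:BP:1", "GO:MF:7"},
    "g3": {"GO:BP:1"},
    "g4": {"GO:MF:7"},
}
weights = {"GO:BP:1": 0.8, "GO:MF:7": 0.6}

def rule_stats(a, b, db, w):
    """Support and confidence of rule a -> b, plus a simple weighted support."""
    n = len(db)
    both = sum(1 for terms in db.values() if a in terms and b in terms)
    only_a = sum(1 for terms in db.values() if a in terms)
    support = both / n
    confidence = both / only_a
    weighted = support * min(w[a], w[b])
    return support, confidence, weighted

print(rule_stats("GO:BP:1", "GO:MF:7", annotations, weights))
```

Here the rule crosses two sub-ontologies (a biological-process term implying a molecular-function term), which is exactly the kind of rule a cross-ontology miner is after.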

  7. BioPortal: An Open-Source Community-Based Ontology Repository

    NASA Astrophysics Data System (ADS)

    Noy, N.; NCBO Team

    2011-12-01

    Advances in computing power and new computational techniques have changed the way researchers approach science. In many fields, one of the most fruitful approaches has been to use semantically aware software to break down the barriers among disparate domains, systems, data sources, and technologies. Such software facilitates data aggregation, improves search, and ultimately allows the detection of new associations that were previously not detectable. Achieving these analyses requires software systems that take advantage of the semantics and that can intelligently negotiate domains and knowledge sources, identifying commonality across systems that use different and conflicting vocabularies, while understanding apparent differences that may be concealed by the use of superficially similar terms. An ontology, a semantically rich vocabulary for a domain of interest, is the cornerstone of software for bridging systems, domains, and resources. However, as ontologies become the foundation of all semantic technologies in e-science, we must develop an infrastructure for sharing ontologies, finding and evaluating them, integrating and mapping among them, and using ontologies in applications that help scientists process their data. BioPortal [1] is an open-source on-line community-based ontology repository that has been used as a critical component of semantic infrastructure in several domains, including biomedicine and bio-geochemical data. BioPortal uses Web 2.0-style social approaches to bring structure and order to the collection of biomedical ontologies. It enables users to provide and discuss a wide array of knowledge components, from submitting the ontologies themselves, to commenting on and discussing classes in the ontologies, to reviewing ontologies in the context of their own ontology-based projects, to creating mappings between overlapping ontologies and discussing and critiquing the mappings.
Critically, it provides web-service access to all its content, enabling its integration in semantically enriched applications. [1] Noy, N.F., Shah, N.H., et al., BioPortal: ontologies and integrated data resources at the click of a mouse. Nucleic Acids Res, 2009. 37(Web Server issue): p. W170-3.
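One of the activities described above, proposing mappings between overlapping ontologies, can be sketched with a naive exact-label matcher. BioPortal's actual mappings are curated and far richer; the ontology contents below are invented:

```python
# Illustrative sketch of proposing mappings between two overlapping
# ontologies by case-insensitive exact label match.

onto_a = {"A:1": "heart", "A:2": "liver", "A:3": "myocardium"}
onto_b = {"B:10": "Heart", "B:11": "kidney", "B:12": "Liver"}

def label_mappings(a, b):
    """Return (id_a, id_b) pairs whose labels match case-insensitively."""
    index = {label.lower(): i for i, label in b.items()}
    return sorted((ia, index[la.lower()])
                  for ia, la in a.items() if la.lower() in index)

print(label_mappings(onto_a, onto_b))
```

Terms with no counterpart (here "myocardium" and "kidney") are exactly the cases where community review and discussion of candidate mappings earn their keep.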

  8. OntologyWidget – a reusable, embeddable widget for easily locating ontology terms

    PubMed Central

    Beauheim, Catherine C; Wymore, Farrell; Nitzberg, Michael; Zachariah, Zachariah K; Jin, Heng; Skene, JH Pate; Ball, Catherine A; Sherlock, Gavin

    2007-01-01

    Background Biomedical ontologies are being widely used to annotate biological data in a computer-accessible, consistent and well-defined manner. However, due to their size and complexity, annotating data with appropriate terms from an ontology is often challenging for experts and non-experts alike, because there exist few tools that allow one to quickly find relevant ontology terms to easily populate a web form. Results We have produced a tool, OntologyWidget, which allows users to rapidly search for and browse ontology terms. OntologyWidget can easily be embedded in other web-based applications. OntologyWidget is written using AJAX (Asynchronous JavaScript and XML) and has two related elements. The first is a dynamic auto-complete ontology search feature. As a user enters characters into the search box, the appropriate ontology is queried remotely for terms that match the typed-in text, and the query results populate a drop-down list with all potential matches. Upon selection of a term from the list, the user can locate this term within a generic and dynamic ontology browser, which comprises the second element of the tool. The ontology browser shows the paths from a selected term to the root as well as parent/child tree hierarchies. We have implemented web services at the Stanford Microarray Database (SMD), which provide the OntologyWidget with access to over 40 ontologies from the Open Biological Ontology (OBO) website [1]. Each ontology is updated weekly. Adopters of the OntologyWidget can either use SMD's web services, or elect to rely on their own. Deploying the OntologyWidget can be accomplished in three simple steps: (1) install Apache Tomcat [2] on one's web server, (2) download and install the OntologyWidget servlet stub that provides access to the SMD ontology web services, and (3) create an html (HyperText Markup Language) file that refers to the OntologyWidget using a simple, well-defined format. 
Conclusion We have developed OntologyWidget, an easy-to-use ontology search and display tool that can be used on any web page by creating a simple html description. OntologyWidget provides a rapid auto-complete search function paired with an interactive tree display. We have developed a web service layer that communicates between the web page interface and a database of ontology terms. We currently store 40 of the ontologies from the OBO website [1], as well as several others. These ontologies are automatically updated on a weekly basis. OntologyWidget can be used in any web-based application to take advantage of the ontologies we provide via web services or any other ontology that is provided elsewhere in the correct format. The full source code for the JavaScript and description of the OntologyWidget is available from . PMID:17854506

  9. An ontology-driven tool for structured data acquisition using Web forms.

    PubMed

    Gonçalves, Rafael S; Tu, Samson W; Nyulas, Csongor I; Tierney, Michael J; Musen, Mark A

    2017-08-01

    Structured data acquisition is a common task that is widely performed in biomedicine. However, current solutions for this task are far from providing a means to structure data in such a way that it can be automatically employed in decision making (e.g., in our example application domain of clinical functional assessment, for determining eligibility for disability benefits) based on conclusions derived from acquired data (e.g., assessment of impaired motor function). To use data in these settings, we need it structured in a way that can be exploited by automated reasoning systems, for instance, in the Web Ontology Language (OWL), the de facto ontology language for the Web. We tackle the problem of generating Web-based assessment forms from OWL ontologies, and aggregating input gathered through these forms as an ontology of "semantically-enriched" form data that can be queried using an RDF query language, such as SPARQL. We developed an ontology-based structured data acquisition system, which we present through its specific application to the clinical functional assessment domain. We found that data gathered through our system is highly amenable to automatic analysis using queries. We demonstrated how ontologies can be used to help structure Web-based forms and to semantically enrich the data elements of the acquired structured data. The ontologies associated with the enriched data elements enable automated inferences and provide a rich vocabulary for performing queries.
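The form-generation step can be sketched as walking a class's data properties and emitting a widget per property range. The schema shape, class name, and property names below are assumptions for illustration, not the authors' actual OWL modelling:

```python
# Sketch of generating Web form fields from a tiny ontology-like class
# description: each data property's range picks an HTML widget.

schema = {                       # "class" -> data properties with ranges
    "MotorAssessment": [
        ("gripStrength", "number"),
        ("canWalkUnaided", "boolean"),
        ("notes", "string"),
    ],
}

WIDGETS = {"number": "input type=number", "boolean": "input type=checkbox",
           "string": "textarea"}

def render_form(cls, schema=schema):
    rows = [f"<form data-class='{cls}'>"]
    for prop, rng in schema[cls]:
        rows.append(f"  <label>{prop}</label> <{WIDGETS[rng]} name='{prop}'>")
    rows.append("</form>")
    return "\n".join(rows)

print(render_form("MotorAssessment"))
```

Because each field is tied back to an ontology property, submitted values can be re-expressed as property assertions, which is what makes the collected data queryable with SPARQL.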

  10. Knowledge Representation and Management: A Linked Data Perspective

    PubMed Central

    Barros, M.

    2016-01-01

    Summary Introduction Biomedical research is increasingly becoming a data-intensive science in several areas, where prodigious amounts of data are being generated that have to be stored, integrated, shared and analyzed. In an effort to improve the accessibility of data and knowledge, the Linked Data initiative proposed a well-defined set of recommendations for exposing, sharing and integrating data, information and knowledge, using semantic web technologies. Objective The main goal of this paper is to identify the current status and future trends of knowledge representation and management in Life and Health Sciences, mostly with regard to linked data technologies. Methods We selected three prominent linked data studies, namely Bio2RDF, Open PHACTS and EBI RDF platform, and selected 14 studies published after 2014 (inclusive) that cited any of the three studies. We manually analyzed these 14 papers in relation to how they use linked data techniques. Results The analyses show a tendency to use linked data techniques in Life and Health Sciences, and even if some studies do not follow all of the recommendations, many of them already represent and manage their knowledge using RDF and biomedical ontologies. Conclusion These insights from RDF and biomedical ontologies are having a strong impact on how knowledge is generated from biomedical data, by making data elements increasingly connected and by providing a better description of their semantics. As health institutes become more data centric, we believe that the adoption of linked data techniques will continue to grow and be an effective solution to knowledge representation and management. PMID:27830248
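The RDF representation at the heart of these platforms can be sketched as pattern matching over subject-predicate-object triples. The URIs below are shortened, invented examples (real linked data uses full, dereferenceable URIs):

```python
# Minimal sketch of RDF-style knowledge: a set of triples queried by
# pattern matching, the operation SPARQL engines generalise.

triples = {
    ("ex:aspirin", "rdf:type", "ex:Drug"),
    ("ex:aspirin", "ex:treats", "ex:pain"),
    ("ex:ibuprofen", "rdf:type", "ex:Drug"),
    ("ex:ibuprofen", "ex:treats", "ex:inflammation"),
}

def match(s=None, p=None, o=None, store=triples):
    """Return triples matching a pattern; None acts as a wildcard."""
    return sorted(t for t in store
                  if (s is None or t[0] == s)
                  and (p is None or t[1] == p)
                  and (o is None or t[2] == o))

# All drugs, SPARQL-style "?x rdf:type ex:Drug":
print([t[0] for t in match(p="rdf:type", o="ex:Drug")])
```

Because every statement shares this uniform triple shape, datasets from different sources can be merged by simple set union, which is the practical payoff of the Linked Data recommendations.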

  11. Unsupervised Ontology Generation from Unstructured Text. CRESST Report 827

    ERIC Educational Resources Information Center

    Mousavi, Hamid; Kerr, Deirdre; Iseli, Markus R.

    2013-01-01

    Ontologies are a vital component of most knowledge acquisition systems, and recently there has been a huge demand for generating ontologies automatically since manual or supervised techniques are not scalable. In this paper, we introduce "OntoMiner", a rule-based, iterative method to extract and populate ontologies from unstructured or…

  12. An Approach to Folksonomy-Based Ontology Maintenance for Learning Environments

    ERIC Educational Resources Information Center

    Gasevic, D.; Zouaq, Amal; Torniai, Carlo; Jovanovic, J.; Hatala, Marek

    2011-01-01

    Recent research in learning technologies has demonstrated many promising contributions from the use of ontologies and semantic web technologies for the development of advanced learning environments. In spite of those benefits, ontology development and maintenance remain the key research challenges to be solved before ontology-enhanced learning…

  13. A framework for the computer-aided planning and optimisation of manufacturing processes for components with functional graded properties

    NASA Astrophysics Data System (ADS)

    Biermann, D.; Gausemeier, J.; Heim, H.-P.; Hess, S.; Petersen, M.; Ries, A.; Wagner, T.

    2014-05-01

    In this contribution a framework for the computer-aided planning and optimisation of functionally graded components is presented. The framework is divided into three modules - the "Component Description", the "Expert System" for the synthetisation of several process chains and the "Modelling and Process Chain Optimisation". The Component Description module enhances a standard computer-aided design (CAD) model by a voxel-based representation of the graded properties. The Expert System synthesises process steps stored in the knowledge base to generate several alternative process chains. Each process chain is capable of producing components according to the enhanced CAD model and usually consists of a sequence of heating-, cooling-, and forming processes. The dependencies between the component and the applied manufacturing processes as well as between the processes themselves need to be considered. The Expert System utilises an ontology for that purpose. The ontology represents all dependencies in a structured way and connects the information of the knowledge base via relations. The third module performs the evaluation of the generated process chains. To accomplish this, the parameters of each process are optimised with respect to the component specification, whereby the result of the best parameterisation is used as representative value. Finally, the process chain which is capable of manufacturing a functionally graded component in an optimal way with regard to the property distributions of the component description is presented by means of a dedicated specification technique.
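The evaluation logic of the third module can be sketched as follows: optimise each candidate chain's parameters against the target property distribution, use the best parameterisation as that chain's representative value, then pick the best chain. The process models, parameter grid, and hardness numbers are invented stand-ins:

```python
# Schematic sketch: rank alternative process chains by the best squared
# error each can achieve against a target property distribution.

target = [50, 60, 70]            # desired property profile (per voxel band)

def simulate(chain, temp):
    """Toy process model: predicted property profile for one parameter value."""
    base = {"chainA": [48, 58, 66], "chainB": [45, 62, 71]}[chain]
    return [b + temp for b in base]

def evaluate(chain, temps=range(0, 5)):
    """Best (lowest) squared error over the parameter grid for one chain."""
    return min(sum((p - t) ** 2 for p, t in zip(simulate(chain, temp), target))
               for temp in temps)

best = min(["chainA", "chainB"], key=evaluate)
print(best, evaluate(best))
```

The real framework optimises many interacting process parameters per chain rather than a single grid value, but the structure - inner parameter optimisation, outer chain selection - is the same.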

  14. Formal ontology for natural language processing and the integration of biomedical databases.

    PubMed

    Simon, Jonathan; Dos Santos, Mariana; Fielding, James; Smith, Barry

    2006-01-01

    The central hypothesis underlying this communication is that the methodology and conceptual rigor of a philosophically inspired formal ontology can bring significant benefits in the development and maintenance of application ontologies [A. Flett, M. Dos Santos, W. Ceusters, Some Ontology Engineering Procedures and their Supporting Technologies, EKAW2002, 2003]. This hypothesis has been tested in the collaboration between Language and Computing (L&C), a company specializing in software for supporting natural language processing especially in the medical field, and the Institute for Formal Ontology and Medical Information Science (IFOMIS), an academic research institution concerned with the theoretical foundations of ontology. In the course of this collaboration L&C's ontology, LinKBase, which is designed to integrate and support reasoning across a plurality of external databases, has been subjected to a thorough auditing on the basis of the principles underlying IFOMIS's Basic Formal Ontology (BFO) [B. Smith, Basic Formal Ontology, 2002. http://ontology.buffalo.edu/bfo]. The goal is to transform a large terminology-based ontology into one with the ability to support reasoning applications. Our general procedure has been the implementation of a meta-ontological definition space in which the definitions of all the concepts and relations in LinKBase are standardized in the framework of first-order logic. In this paper we describe how this principles-based standardization has led to a greater degree of internal coherence of the LinKBase structure, and how it has facilitated the construction of mappings between external databases using LinKBase as translation hub. We argue that the collaboration here described represents a new phase in the quest to solve the so-called "Tower of Babel" problem of ontology integration [F. Montayne, J. Flanagan, Formal Ontology: The Foundation for Natural Language Processing, 2003. http://www.landcglobal.com/].
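The "translation hub" role of LinKBase described above can be sketched as composing two mappings through a shared central concept. All identifiers below are invented illustrations, not LinKBase content:

```python
# Simple sketch of cross-database mapping via a central ontology: each
# external database maps to hub concepts; database-to-database mappings
# follow by composition through the hub.

db1_to_hub = {"DB1:liver": "LKB:Liver", "DB1:heart": "LKB:Heart"}
db2_to_hub = {"DB2:hepar": "LKB:Liver", "DB2:ren": "LKB:Kidney"}

def translate(term, src_map, dst_map):
    """Map a source-database term to a target-database term via the hub."""
    hub = src_map.get(term)
    for dst_term, dst_hub in dst_map.items():
        if dst_hub == hub:
            return dst_term
    return None

print(translate("DB1:liver", db1_to_hub, db2_to_hub))
```

With n databases, a hub needs only n mappings instead of the n(n-1)/2 pairwise mappings, which is the practical force of the "Tower of Babel" argument.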

  15. Integrating concept ontology and multitask learning to achieve more effective classifier training for multilevel image annotation.

    PubMed

    Fan, Jianping; Gao, Yuli; Luo, Hangzai

    2008-03-01

    In this paper, we have developed a new scheme for achieving multilevel annotations of large-scale images automatically. To achieve more sufficient representation of various visual properties of the images, both the global visual features and the local visual features are extracted for image content representation. To tackle the problem of huge intraconcept visual diversity, multiple types of kernels are integrated to characterize the diverse visual similarity relationships between the images more precisely, and a multiple kernel learning algorithm is developed for SVM image classifier training. To address the problem of huge interconcept visual similarity, a novel multitask learning algorithm is developed to learn the correlated classifiers for the sibling image concepts under the same parent concept and enhance their discrimination and adaptation power significantly. To tackle the problem of huge intraconcept visual diversity for the image concepts at the higher levels of the concept ontology, a novel hierarchical boosting algorithm is developed to learn their ensemble classifiers hierarchically. In order to assist users on selecting more effective hypotheses for image classifier training, we have developed a novel hyperbolic framework for large-scale image visualization and interactive hypotheses assessment. Our experiments on large-scale image collections have also obtained very positive results.
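The multiple-kernel idea above can be sketched in miniature: several base kernels, each sensitive to a different visual property, are combined into one weighted kernel for classifier training. The features, kernels, and weights below are toy stand-ins, not the paper's setup (where the weights are learned by the MKL algorithm):

```python
# Pure-Python sketch of combining base kernels into one weighted kernel.

def k_color(x, y):    # base kernel on a "global color" feature (toy)
    return x[0] * y[0]

def k_texture(x, y):  # base kernel on a "local texture" feature (toy)
    return x[1] * y[1]

def combined_kernel(x, y, weights=(0.7, 0.3)):
    """Weighted sum of base kernels; weights would be learned by MKL."""
    return weights[0] * k_color(x, y) + weights[1] * k_texture(x, y)

print(combined_kernel((1.0, 2.0), (3.0, 4.0)))
```

A weighted sum of valid kernels is itself a valid kernel, so the combined function can be plugged directly into an SVM.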

  16. Semantic Data Access Services at NASA's Atmospheric Science Data Center

    NASA Astrophysics Data System (ADS)

    Huffer, E.; Hertz, J.; Kusterer, J.

    2012-12-01

    The corpus of Earth Science data products at the Atmospheric Science Data Center at NASA's Langley Research Center comprises a widely heterogeneous set of products, even among those whose subject matter is very similar. Two distinct data products may both contain data on the same parameter, for instance, solar irradiance; but the instruments used, and the circumstances under which the data were collected and processed, may differ significantly. Understanding the differences is critical to using the data effectively. Data distribution services must be able to provide prospective users with enough information to allow them to meaningfully compare and evaluate the data products offered. Semantic technologies - ontologies, triple stores, reasoners, linked data - offer functionality for addressing this issue. Ontologies can provide robust, high-fidelity domain models that serve as common schema for discovering, evaluating, comparing and integrating data from disparate products. Reasoning engines and triple stores can leverage ontologies to support intelligent search applications that allow users to discover, query, retrieve, and easily reformat data from a broad spectrum of sources. We argue that because of the extremely complex nature of scientific data, data distribution systems should wholeheartedly embrace semantic technologies in order to make their data accessible to a broad array of prospective end users, and to ensure that the data they provide will be clearly understood and used appropriately by consumers. 
Toward this end, we propose a distribution system in which formal ontological models that accurately and comprehensively represent the ASDC's data domain, and fully leverage the expressivity and inferential capabilities of first order logic, are used to generate graph-based representations of the relevant relationships among data sets, observational systems, metadata files, and geospatial, temporal and scientific parameters to help prospective data consumers navigate directly to relevant data sets and query, subset, retrieve and compare the measurement and calculation data they contain. A critical part of developing semantically-enabled data distribution capabilities is developing an ontology that adequately describes 1) the data products - their structure, their content, and any supporting documentation; 2) the data domain - the objects and processes that the products denote; and 3) the relationship between the data and the domain. The ontology, in addition, should be machine readable and capable of integrating with the larger data distribution system to provide an interactive user experience. We will demonstrate how a formal, high-fidelity, queriable ontology representing the atmospheric science domain objects and data products, together with a robust set of inference rules for generating interactive graphs, allows researchers to navigate quickly and painlessly through the large volume of data at the ASDC. Scientists will be able to discover data products that exactly meet their particular criteria, link to information about the instruments and processing methods that generated the data; and compare and contrast related products.
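The graph-based navigation proposed above can be sketched with a small edge list linking data sets to instruments and measured parameters. The product and parameter names below are invented, not real ASDC holdings:

```python
# Sketch of navigating a graph of data sets, instruments and parameters
# to find every data set that measures a given parameter.

edges = [
    ("CERES-X", "measures", "solar_irradiance"),
    ("CERES-X", "from_instrument", "CERES"),
    ("SAGE-Y", "measures", "aerosol_optical_depth"),
    ("TOMS-Z", "measures", "solar_irradiance"),
]

def datasets_measuring(parameter, graph=edges):
    """All subjects connected to the parameter by a 'measures' edge."""
    return sorted(s for s, p, o in graph
                  if p == "measures" and o == parameter)

print(datasets_measuring("solar_irradiance"))
```

This is the query a researcher implicitly asks ("which products contain solar irradiance?"); the ontology's job is to make the answer complete even when products label the same parameter differently.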

  17. Using Data Crawlers and Semantic Web to Build Financial XBRL Data Generators: The SONAR Extension Approach

    PubMed Central

    Rodríguez-García, Miguel Ángel; Rodríguez-González, Alejandro; Valencia-García, Rafael; Gómez-Berbís, Juan Miguel

    2014-01-01

    Precise, reliable and real-time financial information is critical for added-value financial services after the economic turmoil from which markets are still struggling to recover. Since the Web has become the most significant data source, intelligent crawlers based on Semantic Technologies have become trailblazers in the search for knowledge combining natural language processing and ontology engineering techniques. In this paper, we present the SONAR extension approach, which will leverage the potential of knowledge representation by extracting, managing, and turning scarce and dispersed financial information into well-classified, structured, and widely used XBRL format-oriented knowledge, strongly supported by a proof-of-concept implementation and a thorough evaluation of the benefits of the approach. PMID:24587726
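The final serialisation step, turning extracted financial facts into XBRL-flavoured XML, can be sketched with the standard library. The element names, context, and unit below are simplified illustrations, not a valid XBRL taxonomy:

```python
# Hedged sketch of serialising extracted financial facts as a minimal
# XBRL-style XML fragment.
import xml.etree.ElementTree as ET

facts = {"Revenue": "1200000", "NetIncome": "300000"}

def to_xbrl(facts, context="FY2013"):
    root = ET.Element("xbrl")
    for name, value in facts.items():
        el = ET.SubElement(root, name, contextRef=context, unitRef="EUR")
        el.text = value
    return ET.tostring(root, encoding="unicode")

print(to_xbrl(facts))
```

Real XBRL additionally requires taxonomy-defined element names, declared contexts and units, and namespace handling; the point here is only that facts plus context metadata map naturally onto the format.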

  18. Multidisciplinary Modelling of Symptoms and Signs with Archetypes and SNOMED-CT for Clinical Decision Support.

    PubMed

    Marco-Ruiz, Luis; Maldonado, J Alberto; Karlsen, Randi; Bellika, Johan G

    2015-01-01

    Clinical Decision Support Systems (CDSS) help to improve health care and reduce costs. However, the lack of knowledge management and modelling hampers their maintenance and reuse. Current EHR standards and terminologies can allow the semantic representation of the data and knowledge of CDSS systems, boosting their interoperability, reuse and maintenance. This paper presents the modelling process of respiratory conditions' symptoms and signs by a multidisciplinary team of clinicians and information architects with the help of openEHR, SNOMED CT and clinical information modelling tools for a CDSS. The information model of the CDSS was defined by means of an archetype, and the knowledge model was implemented by means of a SNOMED CT-based ontology.
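A decision rule over terminology-coded findings can be sketched as below. The concept identifiers follow commonly cited SNOMED CT codes for cough, fever and dyspnea but should be verified against a current release, and the rule itself is an invented illustration, not the paper's knowledge model:

```python
# Toy sketch of a CDSS rule over SNOMED CT-coded symptoms and signs.

COUGH, FEVER, DYSPNEA = "49727002", "386661006", "267036007"

def suspect_respiratory_infection(findings):
    """Fire when coded findings include cough plus fever or dyspnea."""
    return COUGH in findings and (FEVER in findings or DYSPNEA in findings)

print(suspect_respiratory_infection({COUGH, FEVER}))
print(suspect_respiratory_infection({COUGH}))
```

Coding the findings in a shared terminology is what lets the same rule run unchanged against data captured through different archetype-based forms.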

  19. Knowledge network model of the energy consumption in discrete manufacturing system

    NASA Astrophysics Data System (ADS)

    Xu, Binzi; Wang, Yan; Ji, Zhicheng

    2017-07-01

    Discrete manufacturing systems generate a large amount of data and information because of the development of information technology. Hence, a management mechanism is urgently required. In order to incorporate knowledge generated from manufacturing data and production experience, a knowledge network model of the energy consumption in the discrete manufacturing system was put forward based on knowledge network theory and multi-granularity modular ontology technology. This model could provide a standard representation for concepts, terms and their relationships, which could be understood by both human and computer. In addition, the formal description of energy consumption knowledge elements (ECKEs) in the knowledge network was given. Finally, an application example was used to verify the feasibility of the proposed method.
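A knowledge element and a typed relation between elements can be sketched as below. The field names and relation vocabulary are assumptions based on the abstract's description, not the authors' actual ECKE schema:

```python
# Rough sketch of an energy consumption knowledge element (ECKE) as a
# node in a knowledge network, with named relations to other nodes.
from dataclasses import dataclass, field

@dataclass
class ECKE:
    name: str
    machine: str
    power_kw: float
    relations: dict = field(default_factory=dict)  # relation -> ECKE names

idle = ECKE("idle_consumption", "lathe_01", 1.2)
cutting = ECKE("cutting_consumption", "lathe_01", 5.5)
cutting.relations["follows"] = [idle.name]

print(cutting.relations["follows"])
```

The named relations are what turn isolated knowledge elements into a network that both humans and software can traverse.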

  20. Using data crawlers and semantic Web to build financial XBRL data generators: the SONAR extension approach.

    PubMed

    Rodríguez-García, Miguel Ángel; Rodríguez-González, Alejandro; Colomo-Palacios, Ricardo; Valencia-García, Rafael; Gómez-Berbís, Juan Miguel; García-Sánchez, Francisco

    2014-01-01

    Precise, reliable and real-time financial information is critical for added-value financial services after the economic turmoil from which markets are still struggling to recover. Since the Web has become the most significant data source, intelligent crawlers based on Semantic Technologies have become trailblazers in the search for knowledge combining natural language processing and ontology engineering techniques. In this paper, we present the SONAR extension approach, which will leverage the potential of knowledge representation by extracting, managing, and turning scarce and dispersed financial information into well-classified, structured, and widely used XBRL format-oriented knowledge, strongly supported by a proof-of-concept implementation and a thorough evaluation of the benefits of the approach.
