Science.gov

Sample records for ontology lookup service

  1. The Ontology Lookup Service, a lightweight cross-platform tool for controlled vocabulary queries.

    PubMed

    Côté, Richard G; Jones, Philip; Apweiler, Rolf; Hermjakob, Henning

    2006-02-28

With the vast amounts of biomedical data being generated by high-throughput analysis methods, controlled vocabularies and ontologies are becoming increasingly important to annotate units of information for ease of search and retrieval. Each scientific community tends to create its own locally available ontology. The interfaces to query these ontologies tend to vary from group to group. We saw the need for a centralized location to perform controlled vocabulary queries that would offer both a lightweight web-accessible user interface and a consistent, unified SOAP interface for automated queries. The Ontology Lookup Service (OLS) was created to integrate publicly available biomedical ontologies into a single database. All modified ontologies are updated daily. A list of currently loaded ontologies is available online. The database can be queried to obtain information on a single term or to browse a complete ontology using AJAX. Auto-completion provides a user-friendly search mechanism. An AJAX-based ontology viewer is available to browse a complete ontology or subsets of it. A programmatic interface is available to query the web service using SOAP. The service is described by a WSDL descriptor file available online. A sample Java client to connect to the web service using SOAP is available for download from SourceForge. All OLS source code is publicly available under the open source Apache License. The OLS provides a user-friendly single entry point for publicly available ontologies in the Open Biomedical Ontology (OBO) format. It can be accessed interactively or programmatically at http://www.ebi.ac.uk/ontology-lookup/.
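As a sketch of the kind of programmatic query this abstract describes, the snippet below builds a term search against the present-day OLS REST interface and parses a response of the shape it returns. The endpoint path, parameter names and response layout are assumptions based on the current service, not part of the 2006 SOAP interface described above:

```python
from urllib.parse import urlencode
import json

# Assumed present-day REST endpoint (the 2006 paper describes a SOAP/WSDL interface).
OLS_SEARCH = "https://www.ebi.ac.uk/ols/api/search"

def build_term_query(term, ontology=None, rows=10):
    """Build an OLS-style search URL for a controlled-vocabulary term."""
    params = {"q": term, "rows": rows}
    if ontology:
        params["ontology"] = ontology
    return f"{OLS_SEARCH}?{urlencode(params)}"

def extract_labels(response_text):
    """Pull term labels out of an OLS-style JSON search response."""
    docs = json.loads(response_text)["response"]["docs"]
    return [d["label"] for d in docs]

# A canned response in the shape the service returns (illustrative only).
sample = json.dumps({"response": {"docs": [
    {"label": "apoptotic process", "obo_id": "GO:0006915"}]}})
```

In use, the URL from `build_term_query("apoptosis", ontology="go")` would be fetched with any HTTP client and the body handed to `extract_labels`.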

  2. The Ontology Lookup Service: more data and better tools for controlled vocabulary queries.

    PubMed

    Côté, Richard G; Jones, Philip; Martens, Lennart; Apweiler, Rolf; Hermjakob, Henning

    2008-07-01

    The Ontology Lookup Service (OLS) (http://www.ebi.ac.uk/ols) provides interactive and programmatic interfaces to query, browse and navigate an ever increasing number of biomedical ontologies and controlled vocabularies. The volume of data available for querying has more than quadrupled since it went into production and OLS functionality has been integrated into several high-usage databases and data entry tools. Improvements have been made to both OLS query interfaces, based on user feedback and requirements, to improve usability and service interoperability and provide novel ways to perform queries.

  3. Utilization of ontology look-up services in information retrieval for biomedical literature.

    PubMed

    Vishnyakova, Dina; Pasche, Emilie; Lovis, Christian; Ruch, Patrick

    2013-01-01

With the vast amount of biomedical data available, we face the necessity of improving information retrieval processes in the biomedical domain. The use of biomedical ontologies facilitates the combination of various data sources (e.g. scientific literature, clinical data repositories) by increasing the quality of information retrieval and reducing maintenance efforts. In this context, we developed Ontology Look-up Services (OLS) based on the NEWT and MeSH vocabularies. Our services were applied to information retrieval tasks such as gene/disease normalization. The implementation of the OLS services significantly accelerated the extraction of particular biomedical facts by structuring and enriching the data context. Precision in the normalization tasks was boosted by about 20%.

  4. Simple Lookup Service

    SciTech Connect

    2013-05-01

Simple Lookup Service (sLS) is a REST/JSON based lookup service that allows users to publish information in the form of key-value pairs and search for the published information. The lookup service supports both pull and push models. This software can be used to create a distributed architecture/cloud.
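The key-value publish/search model can be illustrated with an in-memory sketch; the real sLS is accessed over REST with JSON payloads, and the record keys below are invented for illustration:

```python
class SimpleLookup:
    """In-memory sketch of the sLS key-value model (the real service is REST/JSON)."""

    def __init__(self):
        self._records = []

    def publish(self, record):
        # Publish (push) a record: a flat dict of key -> value pairs.
        self._records.append(dict(record))

    def search(self, criteria):
        # Search (pull): a record matches when every queried key has the
        # requested value.
        return [r for r in self._records
                if all(r.get(k) == v for k, v in criteria.items())]

reg = SimpleLookup()
reg.publish({"type": "service", "service-name": "measurement-host-1"})
reg.publish({"type": "host", "host-name": "node-2"})
```

A distributed deployment would run many such registries and fan queries out across them.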

  5. Building a biomedical ontology recommender web service

    PubMed Central

    2010-01-01

Background Researchers in biomedical informatics use ontologies and terminologies to annotate their data in order to facilitate data integration and translational discoveries. As the use of ontologies for annotation of biomedical datasets has risen, a common challenge is to identify the ontologies that are best suited to annotating specific datasets. The number and variety of biomedical ontologies is large, and it is cumbersome for a researcher to figure out which ontology to use. Methods We present the Biomedical Ontology Recommender web service. The system uses textual metadata or a set of keywords describing a domain of interest and suggests appropriate ontologies for annotating or representing the data. The service makes a decision based on three criteria. The first is coverage, or the ontologies that provide the most terms covering the input text. The second is connectivity, or the ontologies that are most often mapped to by other ontologies. The final criterion is size, or the number of concepts in the ontologies. The service scores the ontologies as a function of the scores of the annotations created using the National Center for Biomedical Ontology (NCBO) Annotator web service. We used all the ontologies from the UMLS Metathesaurus and the NCBO BioPortal. Results We compare and contrast our Recommender with previously published efforts through an exhaustive functional comparison. We evaluate and discuss the results of several recommendation heuristics in the context of three real-world use cases. The best recommendation heuristics, rated ‘very relevant’ by expert evaluators, are the ones based on the coverage and connectivity criteria. The Recommender service (alpha version) is available to the community and is embedded into BioPortal. PMID:20626921

  6. Building a biomedical ontology recommender web service.

    PubMed

    Jonquet, Clement; Musen, Mark A; Shah, Nigam H

    2010-06-22

Researchers in biomedical informatics use ontologies and terminologies to annotate their data in order to facilitate data integration and translational discoveries. As the use of ontologies for annotation of biomedical datasets has risen, a common challenge is to identify the ontologies that are best suited to annotating specific datasets. The number and variety of biomedical ontologies is large, and it is cumbersome for a researcher to figure out which ontology to use. We present the Biomedical Ontology Recommender web service. The system uses textual metadata or a set of keywords describing a domain of interest and suggests appropriate ontologies for annotating or representing the data. The service makes a decision based on three criteria. The first is coverage, or the ontologies that provide the most terms covering the input text. The second is connectivity, or the ontologies that are most often mapped to by other ontologies. The final criterion is size, or the number of concepts in the ontologies. The service scores the ontologies as a function of the scores of the annotations created using the National Center for Biomedical Ontology (NCBO) Annotator web service. We used all the ontologies from the UMLS Metathesaurus and the NCBO BioPortal. We compare and contrast our Recommender with previously published efforts through an exhaustive functional comparison. We evaluate and discuss the results of several recommendation heuristics in the context of three real-world use cases. The best recommendation heuristics, rated 'very relevant' by expert evaluators, are the ones based on the coverage and connectivity criteria. The Recommender service (alpha version) is available to the community and is embedded into BioPortal.
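The three scoring criteria (coverage, connectivity, size) can be combined as in this illustrative sketch; the weights, the max-normalisation, and the sample figures are invented for illustration and are not the published NCBO formula:

```python
def recommend(ontologies, weights=(0.5, 0.3, 0.2)):
    """Rank ontologies by a weighted blend of coverage, connectivity and size.

    `ontologies` maps name -> (coverage, connectivity, size). The weights and
    normalisation here are assumptions, not the paper's actual scoring."""
    def norm(values):
        # Scale each criterion to [0, 1] by its maximum (guard against all-zero).
        top = max(values) or 1
        return [v / top for v in values]

    names = list(ontologies)
    cov, con, size = (norm([ontologies[n][i] for n in names]) for i in range(3))
    w_cov, w_con, w_size = weights
    scores = {n: w_cov * cov[i] + w_con * con[i] + w_size * size[i]
              for i, n in enumerate(names)}
    # Best-scoring ontology first.
    return sorted(scores, key=scores.get, reverse=True)
```

With invented figures, `recommend({"GO": (40, 10, 30000), "MeSH": (25, 30, 25000)})` favours the ontology with the strongest coverage, mirroring the paper's finding that coverage-based heuristics rate best.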

  7. The ontology-based answers (OBA) service: a connector for embedded usage of ontologies in applications

    PubMed Central

    Dönitz, Jürgen; Wingender, Edgar

    2012-01-01

The semantic web depends on the use of ontologies to let electronic systems interpret contextual information. Optimally, the handling and access of ontologies should be completely transparent to the user. As a means to this end, we have developed a service that attempts to bridge the gap between experts in a certain knowledge domain, ontologists, and application developers. The ontology-based answers (OBA) service introduced here can be embedded into custom applications to grant access to the classes of an ontology and their relations, its most important structural features, as well as to information encoded in the relations between ontology classes. Thus, computational biologists can benefit from ontologies without detailed knowledge of the respective ontology. The content of ontologies is mapped to a graph of connected objects which is compatible with the object-oriented programming style in Java. Semantic functions implement knowledge about the complex semantics of an ontology beyond the class hierarchy and “partOf” relations. By using these OBA functions an application can, for example, provide a semantic search function or (in the examples outlined) map an anatomical structure to the organs it belongs to. The semantic functions relieve the application developer of the need to acquire in-depth knowledge about the semantics and curation guidelines of the ontologies used, since the required knowledge is implemented in the service. The architecture of the OBA service encapsulates the logic to process ontologies in order to achieve a separation from the application logic. A public server with the current plugins is available and can be used with the provided connector in a custom application in scenarios analogous to the presented use cases. The server and the client are freely available if a project requires the use of custom plugins or non-public ontologies. The OBA service and further documentation are available at http://www.bioinf.med.uni-goettingen.de/projects/oba PMID

  8. Research on e-learning services based on ontology theory

    NASA Astrophysics Data System (ADS)

    Liu, Rui

    2013-07-01

E-learning services can realize network learning resource sharing and interoperability, but they cannot realize automatic discovery, implementation and integration of services. This paper proposes a framework of e-learning services based on ontology. Ontology technology is applied to the publication and discovery processes of e-learning services in order to realize accurate and efficient retrieval and utilization of e-learning services.

  9. Ontology-based interoperability service for HL7 interfaces implementation.

    PubMed

    González, Carolina; Blobel, Bernd; López, Diego M

    2010-01-01

Sharing information and knowledge among heterogeneous health information systems requires semantic interoperability. Most integration projects address semantic interoperability by implementing HL7 version 3 standard interfaces. However, it is challenging to achieve computable semantic interoperability with HL7 because of i) the complexity of the standard, which requires HL7 experts in the interface implementation process, ii) inconsistencies and overlaps among the different HL7 information models (RIM, D-MIMs, R-MIMs, C-METs), and iii) the instability of the different HL7 version 3 models. In this paper, an ontology-based service for health system semantic interoperability is proposed. This service includes three main components: i) the conceptual model formalization component, responsible for representing the conceptual information models of the applications to be integrated as formal application ontologies; ii) the ontology mapper component, responsible for realizing the semantic mapping between the formal application ontologies using a domain ontology, thereby resolving inconsistencies found in the source application ontologies; and iii) the automatic interface generator, responsible for creating and maintaining HL7 version 3 interfaces. The service presented in this paper is primarily focused on the implementation of HL7 interfaces to integrate legacy systems. However, being based on an ontology-driven mapping of HL7 information models, it can also support semantic interoperability among healthcare services and applications.

  10. An Ontology Service for Linked Environments for Atmospheric Discovery (LEAD)

    NASA Astrophysics Data System (ADS)

    Ramachandran, R.; Movva, S.

    2005-12-01

An ontology encodes concepts and the relationships among them. From a machine learning perspective, it is viewed as a formal, explicit specification of a shared conceptualization. Linked Environments for Atmospheric Discovery (LEAD) is a large NSF Information Technology Research (ITR) initiative to provide a scalable, integrated grid framework for accessing, preparing, assimilating, predicting, analyzing and managing a broad array of meteorological and related information independent of format and physical location. An ontology that focuses on mesoscale meteorology is currently being designed and developed for LEAD. It uses the Semantic Web for Earth and Environmental Terminology ontology (SWEET, Rob Raskin, JPL) as a building block, and additional concepts for mesoscale meteorology are being added. An Ontology Inference Service (OIS) has also been developed to provide querying capabilities over the LEAD ontology. The drivers for developing such an ontology and inference service specifically for LEAD are many. The LEAD ontology serves as a common vocabulary that allows interoperability for metadata exchange between different LEAD catalogs. Coupled with these catalogs, the OIS also provides a 'yellow pages' search capability to end users. The OIS can search for concepts similar and related to a particular concept; this is, essentially, searching with semantic meanings rather than with keywords, allowing users to search for datasets without having to know and use the specific data parameter names in the catalogs. Finally, the OIS serves as a stand-alone smart search system for the atmospheric domain, specifically mesoscale meteorology. This smart search service collates the definition of a user's search term, useful datasets, related concepts, useful websites and additional related information. It serves as an educational portal for both students and researchers in LEAD.

  11. Towards a Cross-domain Infrastructure to Support Electronic Identification and Capability Lookup for Cross-border ePrescription/Patient Summary Services.

    PubMed

    Katehakis, Dimitrios G; Masi, Massimiliano; Wisniewski, Francois; Bittins, Sören

    2016-01-01

Seamless patient identification, as well as the ability to locate remote services, is considered a key enabler for large-scale deployment of facilities to support the delivery of cross-border healthcare. This work highlights challenges investigated within the context of the Electronic Simple European Networked Services (e-SENS) large scale pilot (LSP) project, which aims to assist the deployment of cross-border, digital, public services through generic, re-usable technical components or Building Blocks (BBs). Through the case of the cross-border ePrescription/Patient Summary (eP/PS) service, the paper demonstrates how experience from other domains, with regard to electronic identification (eID) and capability lookup, can be utilized to raise technology readiness levels in disease diagnosis and treatment. The need to consolidate the existing outcomes of non-health-specific BBs is examined, together with related issues that need to be resolved, to improve technical certainty and make it easier for citizens who travel to use innovative eHealth services, and potentially share personal health records (PHRs) with other providers abroad, in a regulated manner.

  12. BioPortal: enhanced functionality via new Web services from the National Center for Biomedical Ontology to access and use ontologies in software applications.

    PubMed

    Whetzel, Patricia L; Noy, Natalya F; Shah, Nigam H; Alexander, Paul R; Nyulas, Csongor; Tudorache, Tania; Musen, Mark A

    2011-07-01

    The National Center for Biomedical Ontology (NCBO) is one of the National Centers for Biomedical Computing funded under the NIH Roadmap Initiative. Contributing to the national computing infrastructure, NCBO has developed BioPortal, a web portal that provides access to a library of biomedical ontologies and terminologies (http://bioportal.bioontology.org) via the NCBO Web services. BioPortal enables community participation in the evaluation and evolution of ontology content by providing features to add mappings between terms, to add comments linked to specific ontology terms and to provide ontology reviews. The NCBO Web services (http://www.bioontology.org/wiki/index.php/NCBO_REST_services) enable this functionality and provide a uniform mechanism to access ontologies from a variety of knowledge representation formats, such as Web Ontology Language (OWL) and Open Biological and Biomedical Ontologies (OBO) format. The Web services provide multi-layered access to the ontology content, from getting all terms in an ontology to retrieving metadata about a term. Users can easily incorporate the NCBO Web services into software applications to generate semantically aware applications and to facilitate structured data collection.
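The multi-layered access the abstract mentions (from all terms in an ontology down to metadata about one term) maps naturally onto two REST URL patterns. The base URL and paths below reflect the current BioPortal REST API as an assumption, and the API key is a placeholder:

```python
from urllib.parse import quote, urlencode

# Assumed BioPortal REST base URL; real calls also need a registered API key.
BASE = "https://data.bioontology.org"

def all_classes_url(ontology_acronym, apikey):
    """URL for listing every class (term) in one ontology."""
    return f"{BASE}/ontologies/{ontology_acronym}/classes?{urlencode({'apikey': apikey})}"

def class_metadata_url(ontology_acronym, class_iri, apikey):
    """URL for metadata about a single term, identified by its IRI.

    The IRI is percent-encoded in full (safe='') because it contains
    '/' and ':' characters that would otherwise break the path."""
    return (f"{BASE}/ontologies/{ontology_acronym}/classes/"
            f"{quote(class_iri, safe='')}?{urlencode({'apikey': apikey})}")
```

For example, `class_metadata_url("GO", "http://purl.obolibrary.org/obo/GO_0006915", "MY-KEY")` yields the per-term metadata endpoint for a GO class.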

  13. Global polar geospatial information service retrieval based on search engine and ontology reasoning

    USGS Publications Warehouse

    Chen, Nengcheng; E, Dongcheng; Di, Liping; Gong, Jianya; Chen, Zeqiang

    2007-01-01

In order to improve the access precision of polar geospatial information services on the web, a new methodology for retrieving global spatial information services, based on geospatial service search and ontology reasoning, is proposed: the geospatial service search finds coarse candidate services on the web, and the ontology reasoning refines this coarse set into the required services. The proposed framework includes standardized distributed geospatial web services, a geospatial service search engine, an extended UDDI registry, and a multi-protocol geospatial information service client. Key technologies addressed include service discovery based on a search engine, and service ontology modeling and reasoning in the Antarctic geospatial context. Finally, an Antarctic multi-protocol OWS portal prototype based on the proposed methodology is introduced.

  14. How Service Choreography Statistics Reduce the Ontology Mapping Problem

    NASA Astrophysics Data System (ADS)

    Besana, Paolo; Robertson, Dave

In open and distributed environments, ontology mapping provides interoperability between interacting actors. However, conventional mapping systems focus on acquiring static information and on mapping whole ontologies, which is infeasible in open systems. This paper shows that the interactions between the actors can themselves be used to predict mappings, simplifying dynamic ontology mapping. The intuitive idea is that similar interactions follow similar conventions and patterns, which can be analysed. The computed model can be used to suggest possible mappings for the exchanged messages in new interactions. The suggestions can be evaluated by any standard ontology matcher: if they are accurate, the matchers avoid evaluating mappings unrelated to the interaction.
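The idea of predicting mappings from interaction statistics can be sketched as a simple frequency model; the terms and the counting scheme are invented for illustration and stand in for the paper's full statistical model:

```python
from collections import Counter

class MappingPredictor:
    """Sketch of predicting ontology mappings from past interactions.

    Each observed interaction contributes a (foreign_term, local_term)
    pair; for a new message, the historically most frequent candidates
    are suggested first, for a matcher to verify."""

    def __init__(self):
        self.counts = Counter()

    def observe(self, foreign_term, local_term):
        # Record one confirmed mapping from a past interaction.
        self.counts[(foreign_term, local_term)] += 1

    def suggest(self, foreign_term, k=3):
        # Rank local candidates for this foreign term by frequency.
        cands = [(local, n) for (f, local), n in self.counts.items()
                 if f == foreign_term]
        return [local for local, _ in sorted(cands, key=lambda x: -x[1])][:k]
```

A standard matcher would then evaluate only the suggested candidates instead of the whole ontology, which is the saving the paper describes.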

  15. An Ontology for Learning Services on the Shop Floor

    ERIC Educational Resources Information Center

    Ullrich, Carsten

    2016-01-01

    An ontology expresses a common understanding of a domain that serves as a basis of communication between people or systems, and enables knowledge sharing, reuse of domain knowledge, reasoning and thus problem solving. In Technology-Enhanced Learning, especially in Intelligent Tutoring Systems and Adaptive Learning Environments, ontologies serve as…

  16. Formal specification of an ontology-based service for EHR interoperability.

    PubMed

    González, Carolina; Blobel, Bernd G M E; López, Diego M

    2012-01-01

The objective of this paper is to describe, by means of a Platform Independent Model, the formal specification of an ontology-based service for electronic health record interoperability. The GCM is used as a framework for the service's architectural design. The formal specification of the service is an extension of the OMG CTS 2 specification. A review of mapping approaches is also provided. The paper describes the service's information and computation models, including the mapping process workflow. The platform-specific implementation (Platform Specific Model) is provided as a set of WSDL interfaces. The specification includes the ontology mapping algorithms and tools needed.

  17. The Design and Engineering of Mobile Data Services: Developing an Ontology Based on Business Model Thinking

    NASA Astrophysics Data System (ADS)

    Al-Debei, Mutaz M.; Fitzgerald, Guy

This paper addresses the design and engineering problem related to mobile data services. The aim of the research is to inform and advise mobile service design and engineering by looking at this issue from a rigorous and holistic perspective. To this aim, this paper develops an ontology based on business model thinking. The developed ontology identifies four primary dimensions in designing business models of mobile data services: value proposition, value network, value architecture, and value finance. Within these dimensions, 15 key design concepts are identified, along with their interrelationships and rules in the telecommunication service business model domain, and unambiguous semantics are produced. The developed ontology is of value to academics and practitioners alike, particularly those interested in strategy-oriented IS/IT and business developments in telecommunications. Employing the developed ontology would systemize mobile service engineering functions and make them more manageable, effective, and creative. The research approach to building the mobile service business model ontology essentially follows the design science paradigm. Within this paradigm, we incorporate a number of different research methods, so the employed methodology might be better characterized as a pluralist approach.

  18. The Semantic Retrieval of Spatial Data Service Based on Ontology in SIG

    NASA Astrophysics Data System (ADS)

    Sun, S.; Liu, D.; Li, G.; Yu, W.

    2011-08-01

The research of SIG (Spatial Information Grid) mainly solves the problem of how to connect different computing resources so that users can use all the resources in the Grid transparently and seamlessly. In SIG, spatial data services are described by several kinds of specifications, which use different meta-information for each kind of service. This kind of standardization cannot resolve the problem of semantic heterogeneity, which may prevent users from obtaining the required resources. This paper tries to solve two kinds of semantic heterogeneity (name heterogeneity and structure heterogeneity) in spatial data service retrieval based on ontology; in addition, based on the hierarchical subsumption relationships among concepts in the ontology, query words can be extended so that more resources can be matched and found for the user. These applications of ontology in spatial data resource retrieval help to improve the capability of keyword matching and find more related resources.
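The query expansion over a subsumption hierarchy that this abstract describes can be sketched as follows; the toy hierarchy and term names are invented for illustration:

```python
def expand_query(term, parents):
    """Expand a query term with its ancestors in an is-a hierarchy.

    `parents` maps each concept to its direct parent (None at the root),
    so a search for a specific term also matches resources tagged with
    any broader concept."""
    expanded = [term]
    while parents.get(term):
        term = parents[term]
        expanded.append(term)
    return expanded

# Invented subsumption hierarchy for spatial data services.
hierarchy = {"landsat_scene": "satellite_image",
             "satellite_image": "raster_data",
             "raster_data": None}
```

A retrieval layer would then match a service against every term in the expanded list instead of the single keyword the user typed.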

  19. Generic-distributed framework for cloud services marketplace based on unified ontology.

    PubMed

    Hasan, Samer; Valli Kumari, V

    2017-11-01

Cloud computing is a pattern for delivering ubiquitous and on-demand computing resources based on a pay-as-you-use financial model. Typically, cloud providers advertise cloud service descriptions in various formats on the Internet. On the other hand, cloud consumers use available search engines (Google and Yahoo) to explore cloud service descriptions and find an adequate service. Unfortunately, general-purpose search engines are not designed to provide a small and complete set of results, which makes the process a big challenge. This paper presents a generic distributed framework for a cloud services marketplace to automate the cloud service discovery and selection process and remove the barriers between service providers and consumers. Additionally, this work implements two instances of the generic framework by adopting two different matching algorithms: a dominant and recessive attributes algorithm borrowed from gene science, and a semantic similarity algorithm based on a unified cloud service ontology. Finally, this paper presents the unified cloud services ontology and models real-life cloud services according to the proposed ontology. To the best of the authors' knowledge, this is the first attempt to build a cloud services marketplace where cloud providers and cloud consumers can trade cloud services as utilities. In comparison with existing work, the semantic approach reduced the execution time by 20% and maintained the same values for all other parameters. On the other hand, the dominant and recessive attributes approach reduced the execution time by 57% but showed a lower value for recall.
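A loose sketch of the dominant/recessive matching idea, treating dominant attributes as hard requirements and recessive ones as soft preferences; the attribute names are invented and this is not the paper's exact algorithm:

```python
def match_score(request, offer, dominant, recessive):
    """Score a cloud-service offer against a consumer request in [0, 1].

    Dominant attributes must match exactly or the offer is rejected
    (score 0); recessive attributes each add a fractional bonus."""
    # Any mismatch on a dominant attribute disqualifies the offer.
    if any(offer.get(a) != request.get(a) for a in dominant):
        return 0.0
    if not recessive:
        return 1.0
    # Fraction of recessive (soft) attributes that also match.
    hits = sum(offer.get(a) == request.get(a) for a in recessive)
    return hits / len(recessive)
```

A marketplace would rank all advertised offers by this score and return the top few, rather than the open-ended result list of a general-purpose search engine.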

  20. An Ontological Consideration on Essential Properties of the Notion of "Service"

    NASA Astrophysics Data System (ADS)

    Sumita, Kouhei; Kitamura, Yoshinobu; Sasajima, Munehiko; Takfuji, Sunao; Mizoguchi, Riichiro

Although many definitions of services have been proposed in Service Science and Service Engineering, the essential characteristics of the notion of "service" remain unclear. In particular, some existing definitions of service are similar to the definition of the function of artifacts, and there is no clear distinction between them. Thus, aiming at an ontological conceptualization of service, we have made an ontological investigation into the distinction between service and artifact function. In this article, we reveal essential properties of service and propose a model and a definition of service. Firstly, we extract 42 properties of service from 15 articles in different disciplines in order to find out the fundamental concepts of service. We then show that the notion of function shares the extracted foundational concepts of service, and thus point out the necessity of a distinction between them. Secondly, we propose a multi-layered model of services, which is based on the conceptualization of goal-oriented effects at the base level and at the upper level. Thirdly, based on the model, we clarify essential properties of service which distinguish it from artifact function. The conceptualization of upper-effects (upper-services) enables us to show that upper-services include various effects such as sales and manufacturing. Lastly, we propose a definition of the notion of service based on the essential properties and show its validity using some examples.

  21. Using Ontologies to Formalize Services Specifications in Multi-Agent Systems

    NASA Technical Reports Server (NTRS)

    Breitman, Karin Koogan; Filho, Aluizio Haendchen; Haeusler, Edward Hermann

    2004-01-01

One key issue in multi-agent systems (MAS) is their ability to interact and exchange information autonomously across applications. To secure agent interoperability, designers must rely on a communication protocol that allows software agents to exchange meaningful information. In this paper we propose using ontologies as such a communication protocol. Ontologies capture the semantics of the operations and services provided by agents, allowing interoperability and information exchange in a MAS. Ontologies are a formal, machine-processable representation that makes it possible to capture the semantics of a domain and to derive meaningful information by way of logical inference. In our proposal we use a formal knowledge representation language (OWL) that translates into Description Logics (a subset of first-order logic), thus eliminating ambiguities and providing a solid base for machine-based inference. The main contribution of this approach is to make the requirements explicit and centralize the specification in a single document (the ontology itself), at the same time that it provides a formal, unambiguous representation that can be processed by automated inference machines.

  22. An ontology-based collaborative service framework for agricultural information

    USDA-ARS?s Scientific Manuscript database

    In recent years, China has developed modern agriculture energetically. An effective information framework is an important way to provide farms with agricultural information services and improve farmer's production technology and their income. The mountain areas in central China are dominated by agri...

  23. OntoCAT -- simple ontology search and integration in Java, R and REST/JavaScript

    PubMed Central

    2011-01-01

Background Ontologies have become an essential asset in the bioinformatics toolbox, and a number of ontology access resources are now available, for example, the EBI Ontology Lookup Service (OLS) and the NCBO BioPortal. However, these resources differ substantially in mode, ease of access, and ontology content. This makes it relatively difficult to access each ontology source separately and map their contents to research data, and much of this effort is being replicated across different research groups. Results OntoCAT provides a seamless programming interface to query heterogeneous ontology resources including OLS and BioPortal, as well as user-specified local OWL and OBO files. Each resource is wrapped behind easy-to-learn Java, Bioconductor/R and REST web service commands, enabling reuse and integration of ontology software efforts despite variation in technologies. It is also available as a stand-alone MOLGENIS database and a Google App Engine application. Conclusions OntoCAT provides a robust, configurable solution for accessing ontology terms specified locally and from remote services, is available as a stand-alone tool, and has been tested thoroughly in the ArrayExpress, MOLGENIS, EFO and Gen2Phen phenotype use cases. Availability http://www.ontocat.org PMID:21619703

  24. OntoCAT--simple ontology search and integration in Java, R and REST/JavaScript.

    PubMed

    Adamusiak, Tomasz; Burdett, Tony; Kurbatova, Natalja; Joeri van der Velde, K; Abeygunawardena, Niran; Antonakaki, Despoina; Kapushesky, Misha; Parkinson, Helen; Swertz, Morris A

    2011-05-29

Ontologies have become an essential asset in the bioinformatics toolbox, and a number of ontology access resources are now available, for example, the EBI Ontology Lookup Service (OLS) and the NCBO BioPortal. However, these resources differ substantially in mode, ease of access, and ontology content. This makes it relatively difficult to access each ontology source separately and map their contents to research data, and much of this effort is being replicated across different research groups. OntoCAT provides a seamless programming interface to query heterogeneous ontology resources including OLS and BioPortal, as well as user-specified local OWL and OBO files. Each resource is wrapped behind easy-to-learn Java, Bioconductor/R and REST web service commands, enabling reuse and integration of ontology software efforts despite variation in technologies. It is also available as a stand-alone MOLGENIS database and a Google App Engine application. OntoCAT provides a robust, configurable solution for accessing ontology terms specified locally and from remote services, is available as a stand-alone tool, and has been tested thoroughly in the ArrayExpress, MOLGENIS, EFO and Gen2Phen phenotype use cases. http://www.ontocat.org.
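OntoCAT itself exposes Java, R and REST interfaces; the uniform-wrapper idea it embodies can be sketched in Python as a common search interface over heterogeneous backends (class and method names below are invented, not OntoCAT's API):

```python
class OntologyResource:
    """Minimal common interface in the spirit of OntoCAT: every backend
    answers the same search call, whatever its underlying transport."""
    def search(self, query):
        raise NotImplementedError

class LocalOboResource(OntologyResource):
    """Backend over term labels, e.g. parsed from a local OBO file."""
    def __init__(self, terms):
        self.terms = terms
    def search(self, query):
        return [t for t in self.terms if query.lower() in t.lower()]

class CompositeResource(OntologyResource):
    """Fan a query out across several backends and merge the hits."""
    def __init__(self, resources):
        self.resources = resources
    def search(self, query):
        hits = []
        for r in self.resources:
            hits.extend(r.search(query))
        return hits

combined = CompositeResource([LocalOboResource(["apoptotic process"]),
                              LocalOboResource(["Apoptosis"])])
```

A remote backend (OLS, BioPortal) would implement the same `search` method over HTTP, which is what lets client code stay unchanged when sources are added.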

  25. Process model-based atomic service discovery and composition of composite semantic web services using web ontology language for services (OWL-S)

    NASA Astrophysics Data System (ADS)

    Paulraj, D.; Swamynathan, S.; Madhaiyan, M.

    2012-11-01

    Web service composition has become indispensable, as a single web service cannot satisfy complex functional requirements. Composition of services has received much interest as a means of supporting business-to-business (B2B) or enterprise application integration. An important component of service composition is the discovery of relevant services. In Semantic Web Services (SWS), service discovery is generally achieved by using the service profile of the Web Ontology Language for Services (OWL-S). The profile is a derived, concise description of the service but not a functional part of it. The information contained in the service profile is sufficient for atomic service discovery, but it is not sufficient for the discovery of composite semantic web services (CSWS). The purpose of this article is two-fold: first, to prove that the process model is a better choice than the service profile for service discovery; and second, to facilitate the composition of inter-organisational CSWS by proposing a new composition method that uses process ontology. The proposed service composition approach uses an algorithm that performs a fine-grained match at the level of the atomic process rather than at the level of the entire service in a composite semantic web service. Many works in this area have proposed solutions only for the composition of atomic services; this article proposes a solution for the composition of composite semantic web services.
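The article's fine-grained matching operates on OWL-S process models; as a much-simplified, hypothetical sketch of the underlying idea (the service names and concepts below are invented, and real matchers reason over ontology subsumption rather than exact string equality), chaining atomic services by matching outputs to inputs can be written as:

```python
# Hypothetical atomic services described by the ontology concepts they
# consume and produce (all names are invented for illustration).
SERVICES = {
    "GeocodeAddress": {"inputs": {"Address"}, "outputs": {"Coordinates"}},
    "FindNearbyHotels": {"inputs": {"Coordinates"}, "outputs": {"HotelList"}},
    "BookHotel": {"inputs": {"HotelList"}, "outputs": {"Reservation"}},
}

def compose(available, goal, services=SERVICES):
    """Greedy forward chaining: apply any service whose inputs are already
    satisfied until the goal concept is produced or no progress is made."""
    plan, known = [], set(available)
    progress = True
    while goal not in known and progress:
        progress = False
        for name, sig in services.items():
            if name not in plan and sig["inputs"] <= known:
                plan.append(name)
                known |= sig["outputs"]
                progress = True
    return plan if goal in known else None
```

Starting from an Address and asking for a Reservation, the sketch chains all three services; an unreachable goal yields None.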

  6. An ontology-based semantic configuration approach to constructing Data as a Service for enterprises

    NASA Astrophysics Data System (ADS)

    Cai, Hongming; Xie, Cheng; Jiang, Lihong; Fang, Lu; Huang, Chenxi

    2016-03-01

    To align business strategies with IT systems, enterprises should rapidly implement new applications based on existing information with complex associations to adapt to the continually changing external business environment. Thus, Data as a Service (DaaS) has become an enabling technology for enterprises through information integration and the configuration of existing distributed enterprise systems and heterogeneous data sources. However, business modelling, system configuration and model alignment face challenges at the design and execution stages. To provide a comprehensive solution that facilitates data-centric application design in a highly complex and large-scale situation, a configurable ontology-based service integrated platform (COSIP) is proposed to support business modelling, system configuration and execution management. First, a meta-resource model is constructed and used to describe and encapsulate information resources by way of multi-view business modelling. Then, based on ontologies, three semantic configuration patterns, namely composite resource configuration, business scene configuration and runtime environment configuration, are designed to systematically connect business goals with executable applications. Finally, a software architecture based on model-view-controller (MVC) is provided and used to assemble components for software implementation. The result of the case study demonstrates that the proposed approach provides a flexible method of implementing data-centric applications.

  7. Observing health professionals' workflow patterns for diabetes care - First steps towards an ontology for EHR services.

    PubMed

    Schweitzer, M; Lasierra, N; Hoerbst, A

    2015-01-01

    Increasing flexibility from a user perspective and enabling workflow-based interaction facilitate easy, user-friendly utilization of EHRs in healthcare professionals' daily work. To offer such versatile EHR functionality, our approach is based on the execution of clinical workflows by means of a composition of semantic web services. The backbone of such an architecture is an ontology which enables the representation of clinical workflows and facilitates the selection of suitable services. In this paper we present the methods and results of observations of routine diabetes consultations which were conducted in order to identify those workflows and the relations among the included tasks. The identified workflows were first modeled in BPMN and then generalized. As a next step in our study, interviews will be conducted with clinical personnel to validate the modeled workflows.

  8. Persistent identifiers for web service requests relying on a provenance ontology design pattern

    NASA Astrophysics Data System (ADS)

    Car, Nicholas; Wang, Jingbo; Wyborn, Lesley; Si, Wei

    2016-04-01

    Delivering provenance information for datasets produced from static inputs is relatively straightforward: we represent the processing actions and data flow using provenance ontologies and link to copies of the inputs stored in repositories. If appropriate detail is given, the provenance information can then describe what actions have occurred (transparency) and enable reproducibility. When web service-generated data is used by a process to create a dataset instead of static inputs, we need sophisticated provenance representations of the web service request, as we can no longer simply link to data stored in a repository. A graph-based provenance representation, such as the W3C's PROV standard, can be used to model the web service request both as a single conceptual dataset and as a small workflow with a number of components within the same provenance report. This dual representation does more than just allow simplified or detailed views of a dataset's production to be used where appropriate. It also allows persistent identifiers to be assigned to instances of web service requests, thus enabling one form of dynamic data citation, and allows those identifiers to resolve to whatever level of detail implementers think appropriate in order for the web service request to be reproduced. In this presentation we detail our reasoning in representing web service requests as small workflows. In outline, this stems from the idea that web service requests are perdurant things, and in order to most easily persist knowledge of them for provenance, we should represent them as a nexus of relationships between endurant things, such as datasets and knowledge of particular system types, as these endurant things are far easier to persist. We also describe the ontology design pattern that we use to represent workflows in general and how we apply it to different types of web service requests. We give examples of specific web service request instances that were made by systems
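The dual representation described above can be pictured with a toy PROV-like record in plain Python dicts (identifiers and attribute names are illustrative; a real implementation would serialize PROV-O in RDF):

```python
# A toy PROV-like provenance record in plain dicts (identifiers and
# attributes are invented for illustration, not a real PROV-O serialization).
provenance = {
    "entities": {
        "ex:request-123": {"type": "WebServiceRequest",
                           "endpoint": "https://example.org/wcs",
                           "params": {"layer": "elevation",
                                      "bbox": "140.0,-35.0,141.0,-34.0"}},
        "ex:dataset-456": {"type": "Dataset"},
    },
    "activities": {"ex:run-789": {"type": "ServiceExecution"}},
    # (activity, entity) pairs: the execution used the request description
    "used": [("ex:run-789", "ex:request-123")],
    # (entity, activity) pairs: the dataset was generated by the execution
    "wasGeneratedBy": [("ex:dataset-456", "ex:run-789")],
}

def lineage(entity, prov):
    """Trace an entity back to the request(s) its generating activity used."""
    acts = {a for (e, a) in prov["wasGeneratedBy"] if e == entity}
    return [e for (a, e) in prov["used"] if a in acts]
```

Because the request is a first-class entity with its own identifier, it can be assigned a persistent identifier and re-issued later, which is the dynamic-citation point made above.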

  9. Law-Based Ontology for E-Government Services Construction - Case Study: The Specification of Services in Relationship with the Venture Creation in Switzerland

    NASA Astrophysics Data System (ADS)

    Khadraoui, Abdelaziz; Opprecht, Wanda; Léonard, Michel; Aïdonidis, Christine

    The compliance of e-government services with legal aspects is a crucial issue for administrations. This issue becomes more difficult with the fast-evolving dynamics of laws. This chapter presents our approach to describe and establish the link between e-government services and legal sources. This link is established by an ontology called a “law-based ontology.” We use this ontology as a means to define and construct e-government services. The proposed approach is illustrated with one case study: the specification of services related to venture creation in Switzerland and in the State of Geneva. We have selected the Commercial Register area, which mainly encompasses the registration of a new company and the modification of its registration.

  10. Ontology or formal ontology

    NASA Astrophysics Data System (ADS)

    Žáček, Martin

    2017-07-01

    Ontology or formal ontology? Which term is correct? The aim of this article is to introduce the correct terms and explain their basis. An ontology describes a particular area of interest (domain) in a formal way: it defines the classes of objects in that area and the relationships that may exist between them. The value of ontologies lies mainly in facilitating communication between people, improving the collaboration of software systems, and improving systems engineering. In all these areas, ontologies offer the possibility of a unified view while maintaining consistency and unambiguity.

  11. The @neurIST Ontology of Intracranial Aneurysms: Providing Terminological Services for an Integrated IT Infrastructure

    PubMed Central

    Boeker, Martin; Stenzhorn, Holger; Kumpf, Kai; Bijlenga, Philippe; Schulz, Stefan; Hanser, Susanne

    2007-01-01

    The @neurIST ontology is currently under development within the scope of the European project @neurIST and is intended to serve as a module in a complex architecture aimed at providing a better understanding and management of intracranial aneurysms and subarachnoid hemorrhages. Due to the integrative structure of the project, the ontology needs to represent entities from various disciplines on a large spatial and temporal scale. Initial term acquisition was performed by exploiting a database scaffold, literature analysis and communication with domain experts. The ontology design is based on the DOLCE upper ontology, and other existing domain ontologies were linked or partly included whenever appropriate (e.g., the FMA for anatomical entities and the UMLS for definitions and lexical information). About 2300 predominantly medical entities were represented, along with a multitude of biomolecular, epidemiological, and hemodynamic entities. The usage of the ontology in the project comprises terminological control, text mining, annotation, and data mediation. PMID:18693797

  12. Designing an architecture for monitoring patients at home: ontologies and web services for clinical and technical management integration.

    PubMed

    Lasierra, Nelia; Alesanco, Álvaro; García, José

    2014-05-01

    This paper presents the design and implementation of an architecture based on the combination of ontologies, rules, web services, and the autonomic computing paradigm to manage data in home-based telemonitoring scenarios. The architecture includes two layers: 1) a conceptual layer and 2) a data and communication layer. On the one hand, the conceptual layer based on ontologies is proposed to unify the management procedure and integrate incoming data from all the sources involved in the telemonitoring process. On the other hand, the data and communication layer based on REST web service (WS) technologies is proposed to provide practical backup to the use of the ontology, to provide a real implementation of the tasks it describes and thus to provide a means of exchanging data (support communication tasks). A case study regarding chronic obstructive pulmonary disease data management is presented in order to evaluate the efficiency of the architecture. This proposed ontology-based solution defines a flexible and scalable architecture in order to address main challenges presented in home-based telemonitoring scenarios and thus provide a means to integrate, unify, and transfer data supporting both clinical and technical management tasks.

  13. Towards automated biomedical ontology harmonization.

    PubMed

    Uribe, Gustavo A; Lopez, Diego M; Blobel, Bernd

    2014-01-01

    The use of biomedical ontologies is increasing, especially in the context of health systems interoperability. Ontologies are key to understanding the semantics of the information exchanged. However, given the diversity of biomedical ontologies, it is essential to develop tools that support harmonization processes among them. Several algorithms and tools have been proposed by computer scientists to partially support ontology harmonization. However, these tools face several problems, especially in the biomedical domain, where ontologies are large and complex. In the harmonization process, matching is a basic task. This paper explains the different ontology harmonization processes, analyzes existing matching tools, and proposes a prototype of an ontology harmonization service. The results demonstrate that there are many open issues in the field of biomedical ontology harmonization, such as: overcoming structural discrepancies between ontologies; the lack of semantic algorithms to automate the process; the low matching efficiency of existing algorithms; and the use of domain and top-level ontologies in the matching process.
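Matching, the basic task named above, is in its simplest lexical form a label-similarity computation. The sketch below (a naive token-Jaccard matcher, far weaker than the structural and semantic algorithms the paper calls for) illustrates the idea:

```python
def jaccard(a, b):
    """Token-level Jaccard similarity between two term labels."""
    ta, tb = set(a.lower().split()), set(b.lower().split())
    return len(ta & tb) / len(ta | tb)

def match_terms(source_labels, target_labels, threshold=0.5):
    """Pair each source label with its best-scoring target label,
    keeping only pairs at or above the threshold."""
    matches = {}
    for s in source_labels:
        best = max(target_labels, key=lambda t: jaccard(s, t))
        if jaccard(s, best) >= threshold:
            matches[s] = best
    return matches
```

Such a matcher misses synonyms entirely ("heart attack" vs. "myocardial infarction" scores zero), which is precisely why the paper argues for semantic algorithms and top-level ontologies in the matching process.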

  14. Integrating Distributed Data Systems Using Ontologies, Web Services and Standards: An MMI Case Study

    NASA Astrophysics Data System (ADS)

    Graybeal, J.; Bermudez, L. E.; Gomes, K.; Godin, M.

    2005-12-01

    The Marine Metadata Interoperability (MMI) project promotes the exchange, integration and use of marine data through enhanced data publishing, discovery, documentation and accessibility. One of the goals of the MMI project for 2005 is to create a web application that can query distributed and heterogeneous data repositories using ontologies to solve the semantic heterogeneities, SOAP web services as transport protocols, and content standards such as those promoted by the Dublin Core Metadata Initiative. The MMI demonstration began by making available, in one portal, two heterogeneous and distributed data systems built by the Monterey Bay Aquarium Research Institute (MBARI). The two systems were the Shore-Side Data System (SSDS) and the Autonomous Ocean Sampling Network (AOSN). SSDS is a data management system designed to systematically collect and catalog both data streaming from deployed observatory instruments, and data contained in external data files. The AOSN data system facilitates data collection, storage, retrieval, discovery, and public access for the intensive, multi-institutional Monterey Bay field program in 2003. The systems use different data models, and different names identify similar data fields. We will present the process we followed to demonstrate an interoperable solution, and lessons learned during the course of the demonstration. The process included development of interoperable solutions for communication protocols, metadata content standards, and the vocabularies used to exchange the standard content. Simple interfaces were defined and iteratively improved, and vocabulary lists from each system (addressing parameters, instruments, and units of measurement) were exported and mapped. Similar processes have been advocated by MMI for a wide variety of interoperability challenges, and this demonstration represented the first experience using real world systems and data. 
From these lessons, we will improve a larger demonstration project, as well

  15. Towards Agile Ontology Maintenance

    NASA Astrophysics Data System (ADS)

    Luczak-Rösch, Markus

    Ontologies are an appropriate means to represent knowledge on the Web. Research on ontology engineering has produced practices for integrative lifecycle support. However, broader success of ontologies in Web-based information systems has yet to be achieved, while more lightweight semantic approaches have been rather successful. We assume that, paired with the emerging trend of services and microservices on the Web, new dynamic scenarios are gaining momentum in which a shared knowledge base is made available to several dynamically changing services with disparate requirements. Our work envisions a step towards such a dynamic scenario, in which an ontology adapts in an agile way to the requirements of the accessing services and applications as well as to users' needs, reducing the experts' involvement in ontology maintenance processes.

  16. Webulous and the Webulous Google Add-On--a web service and application for ontology building from templates.

    PubMed

    Jupp, Simon; Burdett, Tony; Welter, Danielle; Sarntivijai, Sirarat; Parkinson, Helen; Malone, James

    2016-01-01

    Authoring bio-ontologies is a task that has traditionally been undertaken by skilled experts trained in understanding complex languages such as the Web Ontology Language (OWL), in tools designed for such experts. As requests for new terms are made, the need for expert ontologists represents a bottleneck in the development process. Furthermore, the ability to rigorously enforce ontology design patterns in large, collaboratively developed ontologies is difficult with existing ontology authoring software. We present Webulous, an application suite for supporting ontology creation by design patterns. Webulous provides infrastructure to specify templates for populating ontology design patterns that get transformed into OWL assertions in a target ontology. Webulous provides programmatic access to the template server and a client application has been developed for Google Sheets that allows templates to be loaded, populated and resubmitted to the Webulous server for processing. The development and delivery of ontologies to the community requires software support that goes beyond the ontology editor. Building ontologies by design patterns and providing simple mechanisms for the addition of new content helps reduce the overall cost and effort required to develop an ontology. The Webulous system provides support for this process and is used as part of the development of several ontologies at the European Bioinformatics Institute.
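The core Webulous idea of populating a design pattern from tabular data can be caricatured in a few lines (the pattern string and row fields below are invented; Webulous itself emits OWL assertions via its template server rather than Manchester-syntax strings):

```python
# A design pattern as a template; {placeholders} are filled from each
# spreadsheet-like row (pattern and terms are invented for illustration).
PATTERN = "Class: {child}  SubClassOf: {parent} and (part_of some {whole})"

def populate(pattern, rows):
    """Turn tabular rows into one axiom string per row."""
    return [pattern.format(**row) for row in rows]

ROWS = [
    {"child": "MitralValve", "parent": "HeartValve", "whole": "Heart"},
    {"child": "AorticValve", "parent": "HeartValve", "whole": "Heart"},
]
```

The point of the pattern-based approach is visible even at this scale: contributors supply only the rows, and the axiom shape is enforced centrally.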

  17. Performing ontology.

    PubMed

    Aspers, Patrik

    2015-06-01

    Ontology, and in particular, the so-called ontological turn, is the topic of a recent themed issue of Social Studies of Science (Volume 43, Issue 3, 2013). Ontology, or metaphysics, is in philosophy concerned with what there is, how it is, and forms of being. But to what is the science and technology studies researcher turning when he or she talks of ontology? It is argued that it is unclear what is gained by arguing that ontology also refers to constructed elements. The 'ontological turn' comes with the risk of creating a pseudo-debate or pseudo-activity, in which energy is used for no end, at the expense of empirical studies. This text rebuts the idea of an ontological turn as foreshadowed in the texts of the themed issue. It argues that there is no fundamental qualitative difference between the ontological turn and what we know as constructivism.

  18. Quantum ontologies

    SciTech Connect

    Stapp, H.P.

    1988-12-01

    Quantum ontologies are conceptions of the constitution of the universe that are compatible with quantum theory. The ontological orientation is contrasted to the pragmatic orientation of science, and reasons are given for considering quantum ontologies both within science, and in broader contexts. The principal quantum ontologies are described and evaluated. Invited paper at conference: Bell's Theorem, Quantum Theory, and Conceptions of the Universe, George Mason University, October 20-21, 1988. 16 refs.

  19. DEDUCE Clinical Text: An Ontology-based Module to Support Self-Service Clinical Notes Exploration and Cohort Development.

    PubMed

    Roth, Christopher; Rusincovitch, Shelley A; Horvath, Monica M; Brinson, Stephanie; Evans, Steve; Shang, Howard C; Ferranti, Jeffrey M

    2013-01-01

    Large amounts of information, as well as opportunities for informing research, education, and operations, are contained within clinical text such as radiology reports and pathology reports. However, this content is less accessible and harder to leverage than structured, discrete data. We report on an extension to the Duke Enterprise Data Unified Content Explorer (DEDUCE), a self-service query tool developed to provide clinicians and researchers with access to data within the Duke Medicine Enterprise Data Warehouse (EDW). The DEDUCE Clinical Text module supports ontology-based text searching, enhanced filtering capabilities based on document attributes, and integration of clinical text with structured data and cohort development. The module is implemented with open-source tools extensible to other institutions, including a Java-based search engine (Apache Solr) with a complementary full-text indexing library (Lucene), employed with a negation engine (NegEx) modified by clinical users to include local domain-specific negation phrases.
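The NegEx modification mentioned above extends a list of negation phrases. A heavily simplified sketch of the NegEx idea (a trigger phrase within a token window before the concept marks it negated; real NegEx also handles post-concept triggers, termination terms and punctuation):

```python
# A tiny sample trigger list; real NegEx ships a much larger, curated one.
NEG_TRIGGERS = ["no", "not", "denies", "without", "ruled out"]

def is_negated(text, concept, window=5):
    """Simplified NegEx-style check: does a negation trigger occur within
    `window` tokens before the concept mention?"""
    tokens = text.lower().split()
    ct = concept.lower().split()
    for i in range(len(tokens) - len(ct) + 1):
        if tokens[i:i + len(ct)] == ct:
            scope = " " + " ".join(tokens[max(0, i - window):i]) + " "
            if any(f" {trig} " in scope for trig in NEG_TRIGGERS):
                return True
    return False
```

Local customization of the kind the clinical users performed amounts to appending institution-specific phrases to the trigger list.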

  20. A piecewise lookup table for calculating nonbonded pairwise atomic interactions.

    PubMed

    Luo, Jinping; Liu, Lijun; Su, Peng; Duan, Pengbo; Lu, Daihui

    2015-11-01

    A critical challenge for molecular dynamics simulations of chemical or biological systems is to improve the calculation efficiency while retaining sufficient accuracy. The main bottleneck in improving the efficiency is the evaluation of nonbonded pairwise interactions. We propose a new piecewise lookup table method for rapid and accurate calculation of interatomic nonbonded pairwise interactions. The piecewise lookup table allows nonuniform assignment of table nodes according to the slope of the potential function and the pair interaction distribution. The proposed method assigns the nodes more reasonably than general lookup tables do, and thus improves the accuracy while requiring fewer nodes. For the same level of accuracy, the piecewise lookup table accelerates the calculation through efficient use of cache memory. This new method is straightforward to implement and should be broadly applicable. Graphical Abstract: Illustration of the piecewise lookup table method.
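A minimal sketch of the piecewise idea, assuming slope-based node placement is approximated by simply packing more nodes into the steep short-range region of a Lennard-Jones potential (the split point and node counts are illustrative, not the authors' scheme):

```python
import bisect

def lj(r):
    """Lennard-Jones potential with epsilon = sigma = 1."""
    return 4 * (r**-12 - r**-6)

def build_piecewise_table(f, lo, hi, n_dense, n_sparse, split):
    """Nonuniform nodes: dense below `split`, where the potential is steep,
    sparse in the flatter tail -- a crude stand-in for slope-based placement."""
    xs = ([lo + i * (split - lo) / n_dense for i in range(n_dense)]
          + [split + i * (hi - split) / n_sparse for i in range(n_sparse + 1)])
    return xs, [f(x) for x in xs]

def interp(xs, ys, x):
    """Linear interpolation between the two table nodes bracketing x."""
    j = min(max(bisect.bisect_right(xs, x) - 1, 0), len(xs) - 2)
    t = (x - xs[j]) / (xs[j + 1] - xs[j])
    return ys[j] + t * (ys[j + 1] - ys[j])
```

With 200 nodes below r = 1.5 and only 20 in the tail out to r = 3.0, the interpolation error stays small in both regions, which is the point of nonuniform assignment: matching node density to where the function changes fastest.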

  1. EDAM: an ontology of bioinformatics operations, types of data and identifiers, topics and formats

    PubMed Central

    Ison, Jon; Kalaš, Matúš; Jonassen, Inge; Bolser, Dan; Uludag, Mahmut; McWilliam, Hamish; Malone, James; Lopez, Rodrigo; Pettifer, Steve; Rice, Peter

    2013-01-01

    Motivation: Advancing the search, publication and integration of bioinformatics tools and resources demands consistent machine-understandable descriptions. A comprehensive ontology allowing such descriptions is therefore required. Results: EDAM is an ontology of bioinformatics operations (tool or workflow functions), types of data and identifiers, application domains and data formats. EDAM supports semantic annotation of diverse entities such as Web services, databases, programmatic libraries, standalone tools, interactive applications, data schemas, datasets and publications within bioinformatics. EDAM applies to organizing and finding suitable tools and data and to automating their integration into complex applications or workflows. It includes over 2200 defined concepts and has successfully been used for annotations and implementations. Availability: The latest stable version of EDAM is available in OWL format from http://edamontology.org/EDAM.owl and in OBO format from http://edamontology.org/EDAM.obo. It can be viewed online at the NCBO BioPortal and the EBI Ontology Lookup Service. For documentation and license please refer to http://edamontology.org. This article describes version 1.2 available at http://edamontology.org/EDAM_1.2.owl. Contact: jison@ebi.ac.uk PMID:23479348

  2. Tool Support for Software Lookup Table Optimization

    DOE PAGES

    Wilcox, Chris; Strout, Michelle Mills; Bieman, James M.

    2011-01-01

    A number of scientific applications are performance-limited by expressions that repeatedly call costly elementary functions. Lookup table (LUT) optimization accelerates the evaluation of such functions by reusing previously computed results. LUT methods can speed up applications that tolerate an approximation of function results, thereby achieving a high level of fuzzy reuse. One problem with LUT optimization is the difficulty of controlling the tradeoff between performance and accuracy. The current practice of manual LUT optimization adds programming effort by requiring extensive experimentation to make this tradeoff, and such hand tuning can obfuscate algorithms. In this paper we describe a methodology and tool implementation to improve the application of software LUT optimization. Our Mesa tool implements source-to-source transformations for C or C++ code to automate the tedious and error-prone aspects of LUT generation such as domain profiling, error analysis, and code generation. We evaluate Mesa with five scientific applications. Our results show a performance improvement of 3.0× and 6.9× for two molecular biology algorithms, 1.4× for a molecular dynamics program, 2.1× to 2.8× for a neural network application, and 4.6× for a hydrology calculation. We find that Mesa enables LUT optimization with more control over accuracy and less effort than manual approaches.
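The performance/accuracy tradeoff that Mesa controls can be seen in miniature: table size directly bounds the error of a nearest-node lookup table for an elementary function (a generic sketch, not Mesa's C/C++ transformation):

```python
import math

def make_lut(f, lo, hi, n):
    """Precompute f at n+1 evenly spaced nodes over [lo, hi]."""
    h = (hi - lo) / n
    return [f(lo + i * h) for i in range(n + 1)], lo, h

def lut_eval(table, lo, h, x):
    """Nearest-node lookup; the approximation error is bounded by roughly
    max|f'| * h / 2, so the table size directly controls the accuracy."""
    return table[int(round((x - lo) / h))]

# Two tables for exp on [0, 1]: a fine one and a coarse one.
fine_table, lo1, h1 = make_lut(math.exp, 0.0, 1.0, 1024)
coarse_table, lo2, h2 = make_lut(math.exp, 0.0, 1.0, 16)
```

Shrinking the table saves memory and improves cache behavior but enlarges the error, which is exactly the tradeoff that manual LUT tuning, and Mesa's automation of it, must manage.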

  3. Domain enhanced lookup time accelerated BLAST.

    PubMed

    Boratyn, Grzegorz M; Schäffer, Alejandro A; Agarwala, Richa; Altschul, Stephen F; Lipman, David J; Madden, Thomas L

    2012-04-17

    BLAST is a commonly-used software package for comparing a query sequence to a database of known sequences; in this study, we focus on protein sequences. Position-specific-iterated BLAST (PSI-BLAST) iteratively searches a protein sequence database, using the matches in round i to construct a position-specific score matrix (PSSM) for searching the database in round i + 1. Biegert and Söding developed Context-sensitive BLAST (CS-BLAST), which combines information from searching the sequence database with information derived from a library of short protein profiles to achieve better homology detection than PSI-BLAST, which builds its PSSMs from scratch. We describe a new method, called domain enhanced lookup time accelerated BLAST (DELTA-BLAST), which searches a database of pre-constructed PSSMs before searching a protein-sequence database, to yield better homology detection. For its PSSMs, DELTA-BLAST employs a subset of NCBI's Conserved Domain Database (CDD). On a test set derived from ASTRAL, with one round of searching, DELTA-BLAST achieves a ROC5000 of 0.270 vs. 0.116 for CS-BLAST. The performance advantage diminishes in iterated searches, but DELTA-BLAST continues to achieve better ROC scores than CS-BLAST. DELTA-BLAST is a useful program for the detection of remote protein homologs. It is available under the "Protein BLAST" link at http://blast.ncbi.nlm.nih.gov.
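A PSSM of the kind DELTA-BLAST retrieves can be illustrated with a toy log-odds construction (uniform background frequencies and a flat pseudocount, much cruder than the CDD-derived matrices the paper uses):

```python
import math

def build_pssm(aligned, alphabet="ACDEFGHIKLMNPQRSTVWY", pseudocount=1.0):
    """Column-wise log-odds scores from an ungapped alignment, assuming a
    uniform background distribution over the 20 amino acids."""
    bg = 1.0 / len(alphabet)
    pssm = []
    for j in range(len(aligned[0])):
        col = [seq[j] for seq in aligned]
        denom = len(col) + pseudocount * len(alphabet)
        pssm.append({aa: math.log2((col.count(aa) + pseudocount) / denom / bg)
                     for aa in alphabet})
    return pssm

def score(pssm, query):
    """Sum the position-specific scores of a query along the matrix."""
    return sum(pssm[j][aa] for j, aa in enumerate(query))
```

A sequence matching the conserved columns scores well above an unrelated one; searching with such position-specific scores rather than a fixed substitution matrix is what improves remote-homolog detection.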

  4. Tool support for software lookup table optimization

    PubMed Central

    Strout, Michelle Mills; Bieman, James M.

    2012-01-01

    A number of scientific applications are performance-limited by expressions that repeatedly call costly elementary functions. Lookup table (LUT) optimization accelerates the evaluation of such functions by reusing previously computed results. LUT methods can speed up applications that tolerate an approximation of function results, thereby achieving a high level of fuzzy reuse. One problem with LUT optimization is the difficulty of controlling the tradeoff between performance and accuracy. The current practice of manual LUT optimization adds programming effort by requiring extensive experimentation to make this tradeoff, and such hand tuning can obfuscate algorithms. In this paper we describe a methodology and tool implementation to improve the application of software LUT optimization. Our Mesa tool implements source-to-source transformations for C or C++ code to automate the tedious and error-prone aspects of LUT generation such as domain profiling, error analysis, and code generation. We evaluate Mesa with five scientific applications. Our results show a performance improvement of 3.0 × and 6.9 × for two molecular biology algorithms, 1.4 × for a molecular dynamics program, 2.1 × to 2.8 × for a neural network application, and 4.6 × for a hydrology calculation. We find that Mesa enables LUT optimization with more control over accuracy and less effort than manual approaches. PMID:24532963

  5. Design of a Golf Swing Injury Detection and Evaluation open service platform with Ontology-oriented clustering case-based reasoning mechanism.

    PubMed

    Ku, Hao-Hsiang

    2015-01-01

    Nowadays, people can easily use a smartphone to get desired information and request services. Hence, this study designs and proposes a Golf Swing Injury Detection and Evaluation open service platform with an Ontology-oriented clustering case-based reasoning mechanism, called GoSIDE, based on Arduino and the Open Service Gateway initiative (OSGi). GoSIDE is a three-tier architecture composed of Mobile Users, Application Servers and a Cloud-based Digital Convergence Server. A mobile user has a smartphone and Kinect sensors to detect the user's golf swing actions and to interact with iDTV. An application server uses the Intelligent Golf Swing Posture Analysis Model (iGoSPAM) to check a user's golf swing actions and to alert the user when his actions are erroneous. The Cloud-based Digital Convergence Server uses Ontology-oriented Clustering Case-based Reasoning (CBR) for Quality of Experience (OCC4QoE), which is designed to provide QoE services through QoE-based ontology strategies, rules and events for the user. Furthermore, GoSIDE will automatically trigger OCC4QoE and deliver popular rules for a new user. Experimental results illustrate that GoSIDE provides appropriate detection for golfers. Finally, GoSIDE can serve as a reference model for researchers and engineers.

  6. Extending netCDF and CF conventions to support enhanced Earth Observation Ontology services: the Prod-Trees project

    NASA Astrophysics Data System (ADS)

    Mazzetti, Paolo; Valentin, Bernard; Koubarakis, Manolis; Nativi, Stefano

    2013-04-01

    Access to Earth Observation products remains far from straightforward for end users in most domains. Semantically enabled search engines, generally accessible through Web portals, have been developed. They allow searching for products by selecting application-specific terms and specifying basic geographical and temporal filtering criteria. Although this mostly suits the needs of the general public, the scientific communities require more advanced and controlled means of finding products. Ranges of validity, traceability (e.g. origin, applied algorithms), accuracy, and uncertainty are concepts that are typically taken into account in research activities. The Prod-Trees (Enriching Earth Observation Ontology Services using Product Trees) project will enhance the CF-netCDF product format and vocabulary to allow storing metadata that better describe the products, and in particular EO products. The project will bring a standardized solution that permits annotating EO products in such a manner that official and third-party software libraries and tools will be able to search for products using advanced tags and controlled parameter names. Annotated EO products will be automatically supported by all compatible software. Because the entire product information will come from the annotations and the standards, there will be no need to integrate extra components and data structures that have not been standardized. In the course of the project, the most important and popular open-source software libraries and tools will be extended to support the proposed extensions of CF-netCDF. The results will be provided back to the respective owners and maintainers to ensure the best dissemination and adoption of the extended format. The project, funded by ESA, started in December 2012 and will end in May 2014. It is coordinated by Space Applications Services, and the Consortium includes CNR-IIA and the National and Kapodistrian University of Athens.
The first activities included

  7. The Ontology for Biomedical Investigations.

    PubMed

    Bandrowski, Anita; Brinkman, Ryan; Brochhausen, Mathias; Brush, Matthew H; Bug, Bill; Chibucos, Marcus C; Clancy, Kevin; Courtot, Mélanie; Derom, Dirk; Dumontier, Michel; Fan, Liju; Fostel, Jennifer; Fragoso, Gilberto; Gibson, Frank; Gonzalez-Beltran, Alejandra; Haendel, Melissa A; He, Yongqun; Heiskanen, Mervi; Hernandez-Boussard, Tina; Jensen, Mark; Lin, Yu; Lister, Allyson L; Lord, Phillip; Malone, James; Manduchi, Elisabetta; McGee, Monnie; Morrison, Norman; Overton, James A; Parkinson, Helen; Peters, Bjoern; Rocca-Serra, Philippe; Ruttenberg, Alan; Sansone, Susanna-Assunta; Scheuermann, Richard H; Schober, Daniel; Smith, Barry; Soldatova, Larisa N; Stoeckert, Christian J; Taylor, Chris F; Torniai, Carlo; Turner, Jessica A; Vita, Randi; Whetzel, Patricia L; Zheng, Jie

    2016-01-01

    The Ontology for Biomedical Investigations (OBI) is an ontology that provides terms with precisely defined meanings to describe all aspects of how investigations in the biological and medical domains are conducted. OBI re-uses ontologies that provide a representation of biomedical knowledge from the Open Biological and Biomedical Ontologies (OBO) project and adds the ability to describe how this knowledge was derived. We here describe the state of OBI and several applications that are using it, such as adding semantic expressivity to existing databases, building data entry forms, and enabling interoperability between knowledge resources. OBI covers all phases of the investigation process, such as planning, execution and reporting. It represents information and material entities that participate in these processes, as well as roles and functions. Prior to OBI, it was not possible to use a single internally consistent resource that could be applied to multiple types of experiments for these applications. OBI has made this possible by creating terms for entities involved in biological and medical investigations and by importing parts of other biomedical ontologies such as GO, Chemical Entities of Biological Interest (ChEBI) and Phenotype Attribute and Trait Ontology (PATO) without altering their meaning. OBI is being used in a wide range of projects covering genomics, multi-omics, immunology, and catalogs of services. OBI has also spawned other ontologies (Information Artifact Ontology) and methods for importing parts of ontologies (Minimum information to reference an external ontology term (MIREOT)). The OBI project is an open cross-disciplinary collaborative effort, encompassing multiple research communities from around the globe. To date, OBI has created 2366 classes and 40 relations along with textual and formal definitions. 
The OBI Consortium maintains a web resource (http://obi-ontology.org) providing details on the people, policies, and issues being addressed

  8. The Ontology for Biomedical Investigations

    PubMed Central

    Bandrowski, Anita; Brinkman, Ryan; Brochhausen, Mathias; Brush, Matthew H.; Chibucos, Marcus C.; Clancy, Kevin; Courtot, Mélanie; Derom, Dirk; Dumontier, Michel; Fan, Liju; Fostel, Jennifer; Fragoso, Gilberto; Gibson, Frank; Gonzalez-Beltran, Alejandra; Haendel, Melissa A.; He, Yongqun; Heiskanen, Mervi; Hernandez-Boussard, Tina; Jensen, Mark; Lin, Yu; Lister, Allyson L.; Lord, Phillip; Malone, James; Manduchi, Elisabetta; McGee, Monnie; Morrison, Norman; Overton, James A.; Parkinson, Helen; Peters, Bjoern; Rocca-Serra, Philippe; Ruttenberg, Alan; Sansone, Susanna-Assunta; Scheuermann, Richard H.; Schober, Daniel; Smith, Barry; Soldatova, Larisa N.; Stoeckert, Christian J.; Taylor, Chris F.; Torniai, Carlo; Turner, Jessica A.; Vita, Randi; Whetzel, Patricia L.; Zheng, Jie

    2016-01-01

    The Ontology for Biomedical Investigations (OBI) is an ontology that provides terms with precisely defined meanings to describe all aspects of how investigations in the biological and medical domains are conducted. OBI re-uses ontologies that provide a representation of biomedical knowledge from the Open Biological and Biomedical Ontologies (OBO) project and adds the ability to describe how this knowledge was derived. We here describe the state of OBI and several applications that are using it, such as adding semantic expressivity to existing databases, building data entry forms, and enabling interoperability between knowledge resources. OBI covers all phases of the investigation process, such as planning, execution and reporting. It represents information and material entities that participate in these processes, as well as roles and functions. Prior to OBI, it was not possible to use a single internally consistent resource that could be applied to multiple types of experiments for these applications. OBI has made this possible by creating terms for entities involved in biological and medical investigations and by importing parts of other biomedical ontologies such as GO, Chemical Entities of Biological Interest (ChEBI) and Phenotype Attribute and Trait Ontology (PATO) without altering their meaning. OBI is being used in a wide range of projects covering genomics, multi-omics, immunology, and catalogs of services. OBI has also spawned other ontologies (Information Artifact Ontology) and methods for importing parts of ontologies (Minimum information to reference an external ontology term (MIREOT)). The OBI project is an open cross-disciplinary collaborative effort, encompassing multiple research communities from around the globe. To date, OBI has created 2366 classes and 40 relations along with textual and formal definitions. 
The OBI Consortium maintains a web resource (http://obi-ontology.org) providing details on the people, policies, and issues being addressed

  9. Ontology Research and Development. Part 1-A Review of Ontology Generation.

    ERIC Educational Resources Information Center

    Ding, Ying; Foo, Schubert

    2002-01-01

    Discusses the role of ontology in knowledge representation, including enabling content-based access, interoperability, communications, and new levels of service on the Semantic Web; reviews current ontology generation studies and projects as well as problems facing such research; and discusses ontology mapping, information extraction, natural…

  10. Ontological analysis of SNOMED CT.

    PubMed

    Héja, Gergely; Surján, György; Varga, Péter

    2008-10-27

    SNOMED CT is the most comprehensive medical terminology. However, its use for intelligent services based on formal reasoning is questionable. The analysis of the structure of SNOMED CT is based on the formal top-level ontology DOLCE. The analysis revealed several ontological and knowledge-engineering errors; the most important are errors in the hierarchy (mostly from an ontological point of view, but also regarding medical aspects) and the mixing of subsumption relations with other relation types (mostly 'part of'). The errors found impede formal reasoning. The paper presents a possible way to correct these problems.

  11. Ontological analysis of SNOMED CT

    PubMed Central

    Héja, Gergely; Surján, György; Varga, Péter

    2008-01-01

    Background SNOMED CT is the most comprehensive medical terminology. However, its use for intelligent services based on formal reasoning is questionable. Methods The analysis of the structure of SNOMED CT is based on the formal top-level ontology DOLCE. Results The analysis revealed several ontological and knowledge-engineering errors; the most important are errors in the hierarchy (mostly from an ontological point of view, but also regarding medical aspects) and the mixing of subsumption relations with other relation types (mostly 'part of'). Conclusion The errors found impede formal reasoning. The paper presents a possible way to correct these problems. PMID:19007445

  12. A Table Look-Up Parser in Online ILTS Applications

    ERIC Educational Resources Information Center

    Chen, Liang; Tokuda, Naoyuki; Hou, Pingkui

    2005-01-01

    A simple table look-up parser (TLUP) has been developed for parsing and consequently diagnosing syntactic errors in semi-free formatted learners' input sentences of an intelligent language tutoring system (ILTS). The TLUP finds a parse tree for a correct version of an input sentence, diagnoses syntactic errors of the learner by tracing and…

  13. OntologyWidget – a reusable, embeddable widget for easily locating ontology terms

    PubMed Central

    Beauheim, Catherine C; Wymore, Farrell; Nitzberg, Michael; Zachariah, Zachariah K; Jin, Heng; Skene, JH Pate; Ball, Catherine A; Sherlock, Gavin

    2007-01-01

    Background Biomedical ontologies are being widely used to annotate biological data in a computer-accessible, consistent and well-defined manner. However, due to their size and complexity, annotating data with appropriate terms from an ontology is often challenging for experts and non-experts alike, because there exist few tools that allow one to quickly find relevant ontology terms to easily populate a web form. Results We have produced a tool, OntologyWidget, which allows users to rapidly search for and browse ontology terms. OntologyWidget can easily be embedded in other web-based applications. OntologyWidget is written using AJAX (Asynchronous JavaScript and XML) and has two related elements. The first is a dynamic auto-complete ontology search feature. As a user enters characters into the search box, the appropriate ontology is queried remotely for terms that match the typed-in text, and the query results populate a drop-down list with all potential matches. Upon selection of a term from the list, the user can locate this term within a generic and dynamic ontology browser, which comprises the second element of the tool. The ontology browser shows the paths from a selected term to the root as well as parent/child tree hierarchies. We have implemented web services at the Stanford Microarray Database (SMD), which provide the OntologyWidget with access to over 40 ontologies from the Open Biological Ontology (OBO) website [1]. Each ontology is updated weekly. Adopters of the OntologyWidget can either use SMD's web services, or elect to rely on their own. Deploying the OntologyWidget can be accomplished in three simple steps: (1) install Apache Tomcat [2] on one's web server, (2) download and install the OntologyWidget servlet stub that provides access to the SMD ontology web services, and (3) create an html (HyperText Markup Language) file that refers to the OntologyWidget using a simple, well-defined format. Conclusion We have developed OntologyWidget, an easy
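
    As a toy illustration of the auto-complete behavior the abstract describes (the term list and function are invented, not OntologyWidget code), a prefix search over a sorted term list can be sketched as:

```python
# Hypothetical sketch of ontology-term auto-completion: return terms whose
# names start with the typed-in text, using binary search on a sorted list.
import bisect

def autocomplete(sorted_terms, prefix, limit=10):
    lo = bisect.bisect_left(sorted_terms, prefix)
    hits = []
    for term in sorted_terms[lo:]:
        if not term.startswith(prefix):
            break  # sorted order guarantees no later term can match
        hits.append(term)
        if len(hits) == limit:
            break
    return hits

# Invented example vocabulary.
terms = sorted(["mitochondrion", "mitosis", "membrane", "meiosis"])
```

In the real widget this query is issued remotely over AJAX as the user types; the sketch only shows the matching logic.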

  14. Efficient halftoning based on multiple look-up tables.

    PubMed

    Guo, Jing-Ming; Liu, Yun-Fu; Chang, Jia-Yu; Lee, Jiann-Der

    2013-11-01

    Look-up table (LUT) halftoning is an efficient way to construct halftone images and to approximate the dot distribution of a learned halftone image set. In this paper, a general mechanism named multiple look-up table (MLUT) halftoning is proposed to generate halftones of direct binary search (DBS) quality, while the highly efficient character of LUT halftoning is preserved. In the MLUT, the standard deviation is adopted as an important feature to classify the various tables. In addition, the proposed quick standard-deviation evaluation yields an extremely low computational complexity in calculating the standard deviation. In the parameter optimization, autocorrelation is adopted because it can fully characterize the periodicity of the dot distribution. Experimental results demonstrate that the dot distribution generated by the proposed method approximates that of DBS, which makes the proposed scheme a very competitive candidate for the copying and printing industry.

  15. NCBO Ontology Recommender 2.0: an enhanced approach for biomedical ontology recommendation.

    PubMed

    Martínez-Romero, Marcos; Jonquet, Clement; O'Connor, Martin J; Graybeal, John; Pazos, Alejandro; Musen, Mark A

    2017-06-07

    Ontologies and controlled terminologies have become increasingly important in biomedical research. Researchers use ontologies to annotate their data with ontology terms, enabling better data integration and interoperability across disparate datasets. However, the number, variety and complexity of current biomedical ontologies make it cumbersome for researchers to determine which ones to reuse for their specific needs. To overcome this problem, in 2010 the National Center for Biomedical Ontology (NCBO) released the Ontology Recommender, which is a service that receives a biomedical text corpus or a list of keywords and suggests ontologies appropriate for referencing the indicated terms. We developed a new version of the NCBO Ontology Recommender. Called Ontology Recommender 2.0, it uses a novel recommendation approach that evaluates the relevance of an ontology to biomedical text data according to four different criteria: (1) the extent to which the ontology covers the input data; (2) the acceptance of the ontology in the biomedical community; (3) the level of detail of the ontology classes that cover the input data; and (4) the specialization of the ontology to the domain of the input data. Our evaluation shows that the enhanced recommender provides higher quality suggestions than the original approach, providing better coverage of the input data, more detailed information about their concepts, increased specialization for the domain of the input data, and greater acceptance and use in the community. In addition, it provides users with more explanatory information, along with suggestions of not only individual ontologies but also groups of ontologies to use together. It also can be customized to fit the needs of different ontology recommendation scenarios. Ontology Recommender 2.0 suggests relevant ontologies for annotating biomedical text data. It combines the strengths of its predecessor with a range of adjustments and new features that improve its reliability
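
    The four criteria can be pictured as a single weighted score per candidate ontology; the weights and the linear combination below are invented for illustration, not the Recommender 2.0 formula.

```python
# Hypothetical sketch: combine coverage, acceptance, detail, and specialization
# (each normalized to [0, 1]) into one recommendation score.
def recommend_score(coverage, acceptance, detail, specialization,
                    weights=(0.55, 0.15, 0.15, 0.15)):
    criteria = (coverage, acceptance, detail, specialization)
    assert all(0.0 <= c <= 1.0 for c in criteria)
    return sum(w * c for w, c in zip(weights, criteria))
```

Ranking candidates by such a score, highest first, yields a recommendation list.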

  16. Fast Pixel Buffer For Processing With Lookup Tables

    NASA Technical Reports Server (NTRS)

    Fisher, Timothy E.

    1992-01-01

    Proposed scheme for buffering data on intensities of picture elements (pixels) of image increases rate of processing beyond that attainable when data read, one pixel at a time, from main image memory. Scheme applied in design of specialized image-processing circuitry. Intended to optimize performance of processor in which electronic equivalent of address-lookup table used to address those pixels in main image memory required for processing.
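
    The general lookup-table pixel-processing pattern the record alludes to can be sketched in software (the gamma curve is only an example mapping, not from the NTRS design):

```python
# Lookup-table pixel processing in miniature: precompute a 256-entry table
# once, then map every 8-bit pixel through it with a single index operation.
def make_gamma_lut(gamma=2.2):
    return [round(255 * (i / 255) ** (1 / gamma)) for i in range(256)]

def apply_lut(pixels, lut):
    return [lut[p] for p in pixels]
```

The point of the hardware scheme is that the per-pixel cost collapses to one table index, however expensive the precomputed mapping was.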

  17. BRAF Pyrosequencing Analysis Aided by a Lookup Table

    PubMed Central

    Olson, Matthew T.; Harrington, Colleen; Beierl, Katie; Chen, Guoli; Thiess, Michele; O'Neill, Alan; Taube, Janis M.; Zeiger, Martha A.; Lin, Ming-Tseh; Eshleman, James R.

    2015-01-01

    Objectives BRAF mutations have substantial therapeutic, diagnostic, and prognostic significance, so detecting and specifying them is an important part of the workload of molecular pathology laboratories. Pyrosequencing assays are well suited for this analysis but can produce complex results. Therefore, we introduce a pyrosequencing lookup table based on Pyromaker that assists the user in generating hypotheses for solving complex pyrosequencing results. Methods The lookup table contains all known mutations in the sequenced region and the positions in the dispensation sequence at which changes would occur with those mutations. We demonstrate the lookup table using a homebrew dispensation sequence for BRAF codons 596 to 605 as well as a commercially available kit-based dispensation sequence for codons 599 to 600. Results These results demonstrate that the homebrew dispensation sequence unambiguously identifies all known BRAF mutations in this region, whereas the kit-based dispensation sequence has one unresolvable degeneracy that could be solved with the addition of two injections. Conclusions Using the lookup table and confirmatory virtual pyrogram, we unambiguously solved clinical pyrograms of the complex mutations V600K (c.1798_1799delGTinsAA), V600R (c.1798_1799delGTinsAG), V600D (c.1799_1800delTGinsAT), V600E (c.1799_1800delTGinsAA), and V600_K601delinsE (c.1799_1801delTGA). In addition, we used the approach to hypothesize and confirm a new mutation in human melanoma, V600_K601delinsEI (c.1799_1802delTGAAinsAAAT). PMID:24713734

  18. Generating Lookup Tables from the AE9/AP9 Models

    DTIC Science & Technology

    2015-06-16

    Multiple orbits ranging from a low Earth orbit (LEO) Sun-synchronous orbit at a 350-km altitude to a Tundra orbit were examined to determine the agreement of the static lookup...

  20. Marine Planning and Service Platform: specific ontology based semantic search engine serving data management and sustainable development

    NASA Astrophysics Data System (ADS)

    Manzella, Giuseppe M. R.; Bartolini, Andrea; Bustaffa, Franco; D'Angelo, Paolo; De Mattei, Maurizio; Frontini, Francesca; Maltese, Maurizio; Medone, Daniele; Monachini, Monica; Novellino, Antonio; Spada, Andrea

    2016-04-01

    The MAPS (Marine Planning and Service Platform) project aims at building a computer platform supporting a Marine Information and Knowledge System. One of the main objectives of the project is to develop a repository that gathers, classifies and structures marine scientific literature and data, thus guaranteeing their accessibility to researchers and institutions by means of standard protocols. In oceanography the cost of data collection is very high, and the new paradigm is based on the concept of collecting once and re-using many times (for re-analysis, marine environment assessment, studies on trends, etc.). This concept requires access to quality-controlled data and to information that is provided in reports (grey literature) and/or in the relevant scientific literature. Hence, new technology is needed that integrates several disciplines such as data management, information systems and knowledge management. In one of the most important EC projects on data management, namely SeaDataNet (www.seadatanet.org), an initial example of knowledge management is provided through the Common Data Index, which provides links to data and (eventually) to papers. There are efforts to develop search engines to find authors' contributions to scientific literature or publications. This implies the use of persistent identifiers (such as DOIs), as is done in ORCID. However, very few efforts are dedicated to linking publications to the data cited or used, or to data that can be of importance for the published studies. This is the objective of MAPS. Full-text technologies are often unsuccessful since they assume the presence of specific keywords in the text; in order to fix this problem, the MAPS project proposes using different semantic technologies for retrieving text and data, thus producing much more relevant results. The main parts of our design of the search engine are: • Syntactic parser - This module is responsible for the extraction of "rich words" from the text
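
    As a toy illustration of the kind of "rich word" extraction a syntactic parser module might perform (the stopword list and filter are invented, not the MAPS implementation):

```python
# Hypothetical "rich word" extractor: tokenize, drop stopwords, and keep
# content-bearing terms that could be matched against semantic resources.
import re

STOPWORDS = {"the", "of", "and", "is", "in", "to", "a", "for"}

def rich_words(text):
    tokens = re.findall(r"[a-z]+", text.lower())
    return [t for t in tokens if t not in STOPWORDS and len(t) > 2]
```

The surviving terms would then be looked up in domain vocabularies rather than matched as raw keywords.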

  1. The National Center for Biomedical Ontology

    PubMed Central

    Noy, Natalya F; Shah, Nigam H; Whetzel, Patricia L; Chute, Christopher G; Story, Margaret-Anne; Smith, Barry

    2011-01-01

    The National Center for Biomedical Ontology is now in its seventh year. The goals of this National Center for Biomedical Computing are to: create and maintain a repository of biomedical ontologies and terminologies; build tools and web services to enable the use of ontologies and terminologies in clinical and translational research; educate their trainees and the scientific community broadly about biomedical ontology and ontology-based technology and best practices; and collaborate with a variety of groups who develop and use ontologies and terminologies in biomedicine. The centerpiece of the National Center for Biomedical Ontology is a web-based resource known as BioPortal. BioPortal makes available for research in computationally useful forms more than 270 of the world's biomedical ontologies and terminologies, and supports a wide range of web services that enable investigators to use the ontologies to annotate and retrieve data, to generate value sets and special-purpose lexicons, and to perform advanced analytics on a wide range of biomedical data. PMID:22081220

  2. The National Center for Biomedical Ontology.

    PubMed

    Musen, Mark A; Noy, Natalya F; Shah, Nigam H; Whetzel, Patricia L; Chute, Christopher G; Story, Margaret-Anne; Smith, Barry

    2012-01-01

    The National Center for Biomedical Ontology is now in its seventh year. The goals of this National Center for Biomedical Computing are to: create and maintain a repository of biomedical ontologies and terminologies; build tools and web services to enable the use of ontologies and terminologies in clinical and translational research; educate their trainees and the scientific community broadly about biomedical ontology and ontology-based technology and best practices; and collaborate with a variety of groups who develop and use ontologies and terminologies in biomedicine. The centerpiece of the National Center for Biomedical Ontology is a web-based resource known as BioPortal. BioPortal makes available for research in computationally useful forms more than 270 of the world's biomedical ontologies and terminologies, and supports a wide range of web services that enable investigators to use the ontologies to annotate and retrieve data, to generate value sets and special-purpose lexicons, and to perform advanced analytics on a wide range of biomedical data.

  3. Use of the CIM Ontology

    SciTech Connect

    Neumann, Scott; Britton, Jay; Devos, Arnold N.; Widergren, Steven E.

    2006-02-08

    There are many uses for the Common Information Model (CIM), an ontology that is being standardized through Technical Committee 57 of the International Electrotechnical Commission (IEC TC57). The most common uses to date have included application modeling, information exchanges, information management and systems integration. As one should expect, there are many issues that become apparent when the CIM ontology is applied to any one use. Some of these issues are shortcomings within the current draft of the CIM, and others are a consequence of the different ways in which the CIM can be applied using different technologies. As the CIM ontology will and should evolve, there are several dangers that need to be recognized. One is overall consistency and impact upon applications when extending the CIM for a specific need. Another is that a tight coupling of the CIM to specific technologies could limit the value of the CIM in the longer term as an ontology, which becomes a larger issue over time as new technologies emerge. The integration of systems is one specific area of interest for application of the CIM ontology. This is an area dominated by the use of XML for the definition of messages. While this is certainly true when using Enterprise Application Integration (EAI) products, it is even more true with the movement towards the use of Web Services (WS), Service-Oriented Architectures (SOA) and Enterprise Service Buses (ESB) for integration. This general IT industry trend is consistent with trends seen within the IEC TC57 scope of power system management and associated information exchange. The challenge for TC57 is how to best leverage the CIM ontology using the various XML technologies and standards for integration. This paper will provide examples of how the CIM ontology is used and describe some specific issues that should be addressed within the CIM in order to increase its usefulness as an ontology. 
It will also describe some of the issues and challenges that will

  4. Simple Ontology Format (SOFT)

    SciTech Connect

    Sorokine, Alexandre

    2011-10-01

    The Simple Ontology Format (SOFT) library and file format specification provide a set of simple tools for developing and maintaining ontologies. The library, implemented as a Perl module, supports parsing and verification of files in SOFT format, operations on ontologies (adding, removing, or filtering entities), and conversion of ontologies into other formats. SOFT allows users to quickly create an ontology using only a basic text editor, verify it, and portray it in a graph layout system using customized styles.

  5. Interpretation of tomographic images using automatic atlas lookup

    NASA Astrophysics Data System (ADS)

    Schiemann, Thomas; Hoehne, Karl H.; Koch, Christoph; Pommert, Andreas; Riemer, Martin; Schubert, Rainer; Tiede, Ulf

    1994-09-01

    We describe a system that automates atlas look-up when viewing cross-sectional images at a viewing station. Using a simple specification of landmarks, a linear transformation to a volume-based anatomical atlas is performed. As a result, corresponding atlas pictures containing information about structures, function, or blood supply, or classical atlas pages (like Talairach), appear next to the patient data for any chosen slice. In addition, the slices are visible in the 3D context of the VOXEL-MAN 3D atlas, providing all its functionality.
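
    A toy sketch of fitting a landmark-driven linear mapping (reduced here to per-axis scale and offset by least squares; the described system uses a full 3-D linear transformation, and all names below are invented):

```python
# Hedged sketch: fit scale and offset per axis from paired landmarks, then map
# patient coordinates into atlas space.
def fit_axis(src, dst):
    n = len(src)
    mean_s, mean_d = sum(src) / n, sum(dst) / n
    var = sum((s - mean_s) ** 2 for s in src)
    cov = sum((s - mean_s) * (d - mean_d) for s, d in zip(src, dst))
    scale = cov / var          # least-squares slope
    return scale, mean_d - scale * mean_s

def map_point(point, fits):
    return tuple(a * x + b for x, (a, b) in zip(point, fits))
```

With the fitted transform, any chosen patient slice can be carried into atlas coordinates to retrieve the matching atlas page.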

  6. Multiple Lookup Table-Based AES Encryption Algorithm Implementation

    NASA Astrophysics Data System (ADS)

    Gong, Jin; Liu, Wenyi; Zhang, Huixin

    A new AES (Advanced Encryption Standard) encryption algorithm implementation is proposed in this paper. It is based on five lookup tables, which are generated from the S-box (the substitution table in AES). The obvious advantages are reduced code size, improved implementation efficiency, and helping new learners to understand the AES encryption algorithm and the GF(2^8) multiplication necessary to correctly implement AES [1]. This method can be applied on processors with a word length of 32 bits or above, on FPGAs, and elsewhere, and can correspondingly be implemented in VHDL, Verilog, VB and other languages.
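
    The GF(2^8) multiplication underlying AES, and the way fixed multiplications can be precomputed into lookup tables, can be sketched as follows (these two tables are illustrative, not the five tables of the paper):

```python
# GF(2^8) multiplication modulo the AES polynomial x^8 + x^4 + x^3 + x + 1,
# followed by precomputed multiply-by-constant tables as used by MixColumns.
def gf_mul(a, b):
    r = 0
    for _ in range(8):
        if b & 1:
            r ^= a
        hi = a & 0x80
        a = (a << 1) & 0xFF
        if hi:
            a ^= 0x1B       # reduce by the AES polynomial
        b >>= 1
    return r

# One table index replaces a full field multiplication at encryption time.
MUL2 = [gf_mul(x, 2) for x in range(256)]
MUL3 = [gf_mul(x, 3) for x in range(256)]
```

The full table-based implementations go one step further and fold the S-box substitution into the same tables, so a round becomes a handful of lookups and XORs per column.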

  7. Indexing of multidimensional lookup tables in embedded systems.

    PubMed

    Vrhel, Michael J

    2004-10-01

    The proliferation of color devices and the desire to have them accurately communicate color information has led to a need for embedded systems that perform color conversions. A common method for performing color space conversions is to characterize the device with a multidimensional lookup table (MLUT). To reduce cost, many of the embedded systems have limited computational abilities. This leads to a need for the design of efficient methods for performing MLUT indexing and interpolation. This paper examines and compares two methods of MLUT indexing within embedded systems. The comparison is made in terms of colorimetric accuracy and computational cost.
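
    An illustrative sketch of MLUT indexing with trilinear interpolation (not the paper's compared methods; grid layout and scalar output are simplifications):

```python
# Index a 3-D lookup table with 8-bit RGB inputs and interpolate trilinearly
# among the 8 surrounding grid points; lut[r][g][b] holds a scalar output.
def trilinear(lut, size, r, g, b):
    def locate(v):
        f = v * (size - 1) / 255.0
        i = min(int(f), size - 2)   # clamp so i + 1 stays on the grid
        return i, f - i
    (ri, rf), (gi, gf), (bi, bf) = locate(r), locate(g), locate(b)
    out = 0.0
    for dr in (0, 1):
        for dg in (0, 1):
            for db in (0, 1):
                w = ((rf if dr else 1 - rf)
                     * (gf if dg else 1 - gf)
                     * (bf if db else 1 - bf))
                out += w * lut[ri + dr][gi + dg][bi + db]
    return out
```

Embedded implementations trade this floating-point arithmetic for fixed-point shifts, which is exactly where the indexing schemes compared in the paper differ in cost.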

  8. Aber-OWL: a framework for ontology-based data access in biology.

    PubMed

    Hoehndorf, Robert; Slater, Luke; Schofield, Paul N; Gkoutos, Georgios V

    2015-01-28

    Many ontologies have been developed in biology and these ontologies increasingly contain large volumes of formalized knowledge commonly expressed in the Web Ontology Language (OWL). Computational access to the knowledge contained within these ontologies relies on the use of automated reasoning. We have developed the Aber-OWL infrastructure that provides reasoning services for bio-ontologies. Aber-OWL consists of an ontology repository, a set of web services and web interfaces that enable ontology-based semantic access to biological data and literature. Aber-OWL is freely available at http://aber-owl.net . Aber-OWL provides a framework for automatically accessing information that is annotated with ontologies or contains terms used to label classes in ontologies. When using Aber-OWL, access to ontologies and data annotated with them is not merely based on class names or identifiers but rather on the knowledge the ontologies contain and the inferences that can be drawn from it.

  9. A Pipelined IP Address Lookup Module for 100 Gbps Line Rates and beyond

    NASA Astrophysics Data System (ADS)

    Teuchert, Domenic; Hauger, Simon

    New Internet services and technologies call for higher packet-switching capacities in the core network. Thus, a performance bottleneck arises at the backbone routers, as forwarding of Internet Protocol (IP) packets requires searching for the most specific entry in a forwarding table that contains up to several hundred thousand address prefixes. The Tree Bitmap algorithm provides a well-balanced solution with respect to storage needs as well as search and update complexity. In this paper, we present a pipelined lookup module based on this algorithm, which allows for easy adaptation to diverse protocol and hardware constraints. We determined the pipelining degree required to achieve the throughput for a 100 Gbps router line card by analyzing a representative sub-unit for various configured sizes. The module supports IPv4 and IPv6 configurations providing this throughput, as we determined the performance of our design to achieve a processing rate of 178 million packets per second.
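
    The "most specific entry" search is longest-prefix match. Tree Bitmap compresses a trie into multi-bit strides; the uncompressed lookup semantics it implements can be sketched with a plain binary trie (a simplification, not the paper's data structure):

```python
# Binary-trie longest-prefix match over 32-bit IPv4 addresses: descend bit by
# bit, remembering the most recent next hop seen along the path.
class TrieNode:
    def __init__(self):
        self.children = {}
        self.next_hop = None

def insert(root, prefix, length, next_hop):
    node = root
    for i in range(length):
        bit = (prefix >> (31 - i)) & 1
        node = node.children.setdefault(bit, TrieNode())
    node.next_hop = next_hop

def lookup(root, addr):
    node, best = root, None
    for i in range(32):
        if node.next_hop is not None:
            best = node.next_hop
        node = node.children.get((addr >> (31 - i)) & 1)
        if node is None:
            break
    else:
        if node.next_hop is not None:
            best = node.next_hop
    return best
```

Tree Bitmap replaces the per-bit pointer chasing with one memory access per multi-bit stride, which is what makes hardware pipelining at 100 Gbps feasible.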

  10. Datamining with Ontologies.

    PubMed

    Hoehndorf, Robert; Gkoutos, Georgios V; Schofield, Paul N

    2016-01-01

    The use of ontologies has increased rapidly over the past decade and they now provide a key component of most major databases in biology and biomedicine. Consequently, datamining over these databases benefits from considering the specific structure and content of ontologies, and several methods have been developed to use ontologies in datamining applications. Here, we discuss the principles of ontology structure, and datamining methods that rely on ontologies. The impact of these methods in the biological and biomedical sciences has been profound and is likely to increase as more datasets are becoming available using common, shared ontologies.

  11. Research on the complex network of the UNSPSC ontology

    NASA Astrophysics Data System (ADS)

    Xu, Yingying; Zou, Shengrong; Gu, Aihua; Wei, Li; Zhou, Ta

    The UNSPSC ontology is mainly applied as the classification system used by e-businesses and governments for buying products and services worldwide, and it supports the logical structure of the classification of products and services. In this paper, techniques from complex-network analysis were applied to the structure of the ontology: each concept of the ontology corresponds to a node of the complex network, and each relationship between concepts corresponds to an edge. Using existing analysis methods and performance indicators for complex networks, we analyze the degree distribution and community structure of the ontology; this research will help to evaluate and classify the concepts of the ontology and to improve the efficiency of semantic matching.
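
    The concept-to-node, relation-to-edge mapping and the degree-distribution computation can be sketched directly (the example concepts are invented, not UNSPSC categories):

```python
# Build the degree distribution of a concept graph: map each concept to a node,
# each concept relation to an edge, then count how many nodes have each degree.
from collections import Counter

def degree_distribution(edges):
    degree = Counter()
    for a, b in edges:
        degree[a] += 1
        degree[b] += 1
    return Counter(degree.values())

edges = [("Service", "ITService"), ("Service", "Cleaning"), ("ITService", "Hosting")]
```

A heavy-tailed distribution over real UNSPSC relations would suggest a few hub categories dominating the classification.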

  12. Cache directory look-up re-use as conflict check mechanism for speculative memory requests

    DOEpatents

    Ohmacht, Martin

    2013-09-10

    In a cache memory, energy and other efficiencies can be realized by saving a result of a cache directory lookup for sequential accesses to the same memory address. Where the cache is a point of coherence for speculative execution in a multiprocessor system, with directory lookups serving as the point of conflict detection, such saving becomes particularly advantageous.
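
    The reuse idea can be modeled in a few lines (a toy software model with invented names, not the patented hardware):

```python
# Toy model of directory-lookup reuse: remember the previous lookup's address
# and result, and skip the directory access when the next access matches it.
class Directory:
    def __init__(self):
        self.entries = {}
        self.lookups = 0
        self.last = None  # (address, result) of the previous lookup

    def lookup(self, addr):
        if self.last is not None and self.last[0] == addr:
            return self.last[1]          # reuse saved result, no new lookup
        self.lookups += 1
        result = self.entries.get(addr)
        self.last = (addr, result)
        return result
```

Each avoided directory access saves energy, and under speculation the saved result doubles as the already-performed conflict check for that address.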

  13. Assessment Applications of Ontologies.

    ERIC Educational Resources Information Center

    Chung, Gregory K. W. K.; Niemi, David; Bewley, William L.

    This paper discusses the use of ontologies and their applications to assessment. An ontology provides a shared and common understanding of a domain that can be communicated among people and computational systems. The ontology captures one or more experts' conceptual representation of a domain expressed in terms of concepts and the relationships…

  14. Optical CAM architecture for address lookup at 10 Gbps

    NASA Astrophysics Data System (ADS)

    Maniotis, P.; Terzenidis, N.; Pleros, N.

    2017-02-01

    Content Addressable Memories (CAMs) are widely used in today's router applications due to their fast bit-searching capabilities. However, address look-up operations still cannot keep up with the high data rates of optical packet payloads due to the limited speeds offered by electronic technology, which can hardly reach a few GHz. Despite this limitation, optics has still not managed to penetrate the area of address look-up and forwarding operations due to the complete lack of optical CAM-based solutions. To the best of our knowledge, the first all-optical binary CAM cell was only recently experimentally demonstrated by our group, using an all-optical monolithically integrated InP flip-flop and an optical XOR gate, revealing error-free operation at 10 Gbps for both Content Addressing and Content Writing operations. In this paper, we extend our previous work by presenting, for the first time to our knowledge, an all-optical Ternary CAM cell architecture that also allows for a third matching state of "X" or "don't care", thus adding the searching flexibility required by modern CAM-based solutions for supporting subnet-masked addresses. Moreover, we exploit the optical Ternary CAM cell to deploy a complete CAM row formed by 4 Ternary CAM cells, demonstrating its operation through VPI simulations at 10 Gbps for an indicative 2-bit packet address and for both Content Addressing and Content Writing functionalities. The potential of this memory architecture to allow for up to 40 Gbps operation could lead to fast CAM-based routing applications by enabling all-optical Address Lookup schemes.
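
    The ternary "don't care" matching that motivates the TCAM cell can be sketched in software; the 8-bit forwarding table below is hypothetical, showing a subnet-masked entry (trailing "X" bits) checked ahead of a default entry:

```python
def tcam_match(key_bits, entry):
    """Match a binary key against one TCAM entry.

    `entry` is a string over {'0', '1', 'X'}, where 'X' is the
    "don't care" state used for subnet-masked addresses.
    """
    return all(e in ("X", k) for k, e in zip(key_bits, entry))

def tcam_lookup(key_bits, table):
    """Return the action of the first matching entry (priority order)."""
    for entry, action in table:
        if tcam_match(key_bits, entry):
            return action
    return None

# Hypothetical 8-bit forwarding table: a masked route with two
# trailing don't-care bits, followed by a catch-all default.
table = [
    ("110010XX", "port-1"),
    ("XXXXXXXX", "drop"),
]

r1 = tcam_lookup("11001011", table)  # hits the masked route
r2 = tcam_lookup("10000000", table)  # falls through to the default
```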

  15. A hierarchical P2P overlay network for interest-based media contents lookup

    NASA Astrophysics Data System (ADS)

    Lee, HyunRyong; Kim, JongWon

    2006-10-01

    We propose a P2P (peer-to-peer) overlay architecture, called IGN (interest grouping network), for contents lookup in the DHC (digital home community), which aims to provide a formalized home-network-extended construction of the current P2P file-sharing community. The IGN utilizes Chord and the de Bruijn graph for its hierarchical overlay network construction. By combining the two schemes and inheriting their features, the IGN efficiently supports contents lookup. More specifically, by introducing metadata-based lookup keywords, the IGN offers detailed contents lookup that can reflect user interests. Moreover, the IGN tries to reflect the home network environments of the DHC by utilizing the HG (home gateway) of each home network as a participating node of the IGN. Through experiments and analysis, we show that the IGN is more efficient than Chord, a well-known DHT (distributed hash table)-based lookup protocol.
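
    A minimal sketch of the Chord-style successor lookup that the IGN builds upon, assuming a toy 8-bit identifier ring and hypothetical home-gateway names (this is generic Chord, not the IGN's hierarchical extension):

```python
import hashlib

M = 8  # identifier bits: a toy 256-slot Chord ring

def chord_id(name):
    """Hash a node or content name onto the identifier ring."""
    return int(hashlib.sha1(name.encode()).hexdigest(), 16) % (2 ** M)

def successor(key, node_ids):
    """Chord assigns a key to the first node at or clockwise after it."""
    ring = sorted(node_ids)
    for n in ring:
        if n >= key:
            return n
    return ring[0]  # wrap around the ring

# Hypothetical home gateways acting as participating nodes.
nodes = [chord_id("hg-%d" % i) for i in range(4)]
owner = successor(chord_id("video:holiday.mp4"), nodes)  # responsible node
```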

  16. Semantic Web Services with Web Ontology Language (OWL-S) - Specification of Agent-Services for DARPA Agent Markup Language (DAML)

    DTIC Science & Technology

    2006-08-01

    member of an OWL class for bookselling services. (For advertising purposes, such a class would be a subclass of Profile.). The second way to...what the service does. For example, the capability of Barnes and Noble, a bookseller , is to sell books. The capability of a Web Service can be viewed...different services with similar capabilities. For example, a requester may prefer a bookseller that has a Dunn and Bradstreet quality rating. In order to

  17. An Ontology of Therapies

    NASA Astrophysics Data System (ADS)

    Eccher, Claudio; Ferro, Antonella; Pisanelli, Domenico M.

    Ontologies are the essential glue for building interoperable systems and the talk of the day in the medical community. In this paper we present the ontology of medical therapies developed in the course of the Oncocure project, aimed at building guideline-based decision support integrated with a legacy Electronic Patient Record (EPR). The therapy ontology is based upon the DOLCE top-level ontology. It is our opinion that our ontology, besides constituting a model capturing the precise meaning of therapy-related concepts, can serve several practical purposes: interfacing automatic support systems with a legacy EPR, enabling automatic data analysis, and controlling possible medical errors made during EPR data input.

  18. Bringing Ontology to the Gene Ontology

    PubMed Central

    Andersen, William

    2003-01-01

    We present an analysis of some considerations involved in expressing the Gene Ontology (GO) as a machine-processible ontology, reflecting principles of formal ontology. GO is a controlled vocabulary that is intended to facilitate communication between biologists by standardizing usage of terms in database annotations. Making such controlled vocabularies maximally useful in support of bioinformatics applications requires explicating in machine-processible form the implicit background information that enables human users to interpret the meaning of the vocabulary terms. In the case of GO, this process would involve rendering the meanings of GO into a formal (logical) language with the help of domain experts, and adding additional information required to support the chosen formalization. A controlled vocabulary augmented in these ways is commonly called an ontology. In this paper, we make a modest exploration to determine the ontological requirements for this extended version of GO. Using the terms within the three GO hierarchies (molecular function, biological process and cellular component), we investigate the facility with which GO concepts can be ontologized, using available tools from the philosophical and ontological engineering literature. PMID:18629099

  19. Examples of Ontology

    NASA Astrophysics Data System (ADS)

    Gaševic, Dragan; Djuric, Dragan; Devedžic, Vladan

    In the previous chapters we introduced the basic concepts of MOF-based languages for developing ontologies, such as the Ontology Definition Metamodel (ODM) and the Ontology UML Profile (OUP). We also discussed mappings between those languages and the OWL language. The purpose of this chapter is to illustrate the use of MOF-based languages for developing real-world ontologies. Here we discuss two different ontologies that we developed in different domains. The first example is a Petri net ontology that formalizes the representation of Petri nets, a well-known tool for modeling, simulation, and analysis of systems and processes. This Petri net ontology overcomes the syntactic constraints of the present XML-based standard for sharing Petri net models, namely Petri Net Markup Language.

  20. The Proteasix Ontology.

    PubMed

    Arguello Casteleiro, Mercedes; Klein, Julie; Stevens, Robert

    2016-06-04

    The Proteasix Ontology (PxO) is an ontology that supports the Proteasix tool, an open-source peptide-centric tool that can be used to predict automatically, in silico and on a large scale, the proteases involved in the generation of proteolytic cleavage fragments (peptides). The PxO re-uses parts of the Protein Ontology, the three Gene Ontology sub-ontologies, the Chemical Entities of Biological Interest Ontology, the Sequence Ontology and bespoke extensions to the PxO in support of a series of roles: 1. To describe the known proteases and their target cleavage sites. 2. To enable the description of proteolytic cleavage fragments as the outputs of observed and predicted proteolysis. 3. To use knowledge about the function, species and cellular location of a protease and protein substrate to support the prioritisation of proteases in observed and predicted proteolysis. The PxO is designed to describe the biological underpinnings of the generation of peptides. The peptide-centric PxO seeks to support the Proteasix tool by separating domain knowledge from the operational knowledge used in protease prediction by Proteasix and to support the confirmation of its analyses and results. The Proteasix Ontology may be found at: http://bioportal.bioontology.org/ontologies/PXO . This ontology is free and open for use by everyone.
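
    As a rough illustration of the kind of protease-to-peptide reasoning that Proteasix automates (not its actual algorithm or data), one can match the residue immediately N-terminal of an observed peptide against protease cleavage-site preferences; the preference table and sequences below are simplified assumptions:

```python
# Hypothetical cleavage-site preferences (P1 residue just before the cut).
PROTEASE_P1 = {
    "trypsin-like": "KR",       # cuts after Lys/Arg
    "chymotrypsin-like": "FWY", # cuts after aromatic residues
}

def candidate_proteases(substrate, peptide):
    """Name proteases whose P1 preference matches the residue just
    N-terminal of the peptide within the substrate sequence."""
    start = substrate.find(peptide)
    if start <= 0:  # not found, or peptide begins at the N-terminus
        return []
    p1 = substrate[start - 1]
    return [name for name, residues in PROTEASE_P1.items() if p1 in residues]

# Toy substrate: the observed peptide is preceded by Trp (W),
# implicating the chymotrypsin-like activity.
hits = candidate_proteases("MKWVTFISLLFLFSSAYSR", "VTFISLL")
```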

  1. Progressive halftone watermarking using multilayer table lookup strategy.

    PubMed

    Guo, Jing-Ming; Lai, Guo-Hung; Wong, Koksheik; Chang, Li-Chung

    2015-07-01

    In this paper, a halftoning-based multilayer watermarking method of low computational complexity is proposed. An additional data-hiding technique is also employed to embed multiple watermarks into the watermark to be embedded, improving security and embedding capacity. At the encoder, the efficient direct binary search method is employed to generate 256 reference tables to ensure that the output is in halftone format. Subsequently, watermarks are embedded by a set of optimized compressed tables with various textural angles for table lookup. At the decoder, the least-mean-square metric is considered to increase the differences among the generated phenotypes of the embedding angles and reduce the required number of dimensions for each angle. Finally, the naïve Bayes classifier is employed to collect the possibilities of multilayer information for classifying the associated angles to extract the embedded watermarks. These decoded watermarks can be further overlapped to retrieve the additional hidden-layer watermarks. Experimental results show that the proposed method requires only 8.4 ms to embed a watermark into an image of size 512×512, on a 32-bit Windows 7 platform with an Intel Core i7 (Sandy Bridge) processor, 4 GB of RAM, and the Visual Studio 2010 IDE. Finally, only 2 MB is required to store the proposed compressed reference table.

  2. Ontology Construction Tool Kit

    DTIC Science & Technology

    2000-01-01

    for translating, loading, and saving ontologies encoded in a subset of KIF. The ontologies were loaded into Ocelot, which is a frame representation system. We extended our collaboration system to deal with schema...presented the KB content in many different ways. Second, once the HPKB-UL was loaded into our OKBC server, Ocelot, we used the GKB-Editor to comprehend

  3. Kuhn's Ontological Relativism.

    ERIC Educational Resources Information Center

    Sankey, Howard

    2000-01-01

    Discusses Kuhn's model of scientific theory change. Documents Kuhn's move away from conceptual relativism and rational relativism. Provides an analysis of his present ontological form of relativism. (CCM)

  5. Primer on Ontologies.

    PubMed

    Hastings, Janna

    2017-01-01

    As molecular biology has increasingly become a data-intensive discipline, ontologies have emerged as an essential computational tool to assist in the organisation, description and analysis of data. Ontologies describe and classify the entities of interest in a scientific domain in a computationally accessible fashion such that algorithms and tools can be developed around them. The technology that underlies ontologies has its roots in logic-based artificial intelligence, allowing for sophisticated automated inference and error detection. This chapter presents a general introduction to modern computational ontologies as they are used in biology.

  6. Fast radiative transfer using monochromatic look-up tables

    NASA Astrophysics Data System (ADS)

    Anthony Vincent, R.; Dudhia, Anu

    2017-01-01

    Line-by-line (LBL) methods of numerically solving the equations of radiative transfer can be inhibitingly slow. Operational trace-gas retrieval schemes generally require much faster output than current LBL radiative transfer models can achieve. One option for speeding up computation is to precalculate absorption cross sections for each absorbing gas on a fixed grid and interpolate. This work presents a general method for creating, compressing, and validating a set of individual look-up tables (LUTs) for the 11 most abundant trace gases, enabling the Reference Forward Model (RFM) to simulate radiances observed by the Infrared Atmospheric Sounding Interferometer (IASI) at a more operational pace. These LUTs allow the RFM to generate radiances more than 20 times faster than in LBL mode and were rigorously validated for 80 different atmospheric scenarios chosen to represent variability indicative of Earth's atmosphere. More than 99% of all simulated IASI spectral channels had LUT interpolation errors in brightness temperature of less than 0.02 K, several times below the IASI noise level. Including a reduced spectral grid for radiative transfer sped up the computation by another factor of six at the expense of approximately doubling interpolation errors, still well below IASI noise. Furthermore, a simple spectral compression scheme based upon linear interpolation is presented, which reduced the total LUT file size from 120 Gbytes to 5.6 Gbytes: a compression to just 4.4% of the original. These LUTs are openly available for use by the scientific community, whether using the RFM or incorporating them into any other forward model.
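
    The core LUT operation (interpolating precomputed cross sections instead of running a line-by-line calculation) can be sketched in one dimension; the temperature grid and cross-section values below are illustrative assumptions, not RFM data:

```python
def interp_cross_section(temps, xsecs, t):
    """Linearly interpolate a precomputed absorption cross-section
    look-up table onto temperature t (a toy 1-D analogue of the
    pressure/temperature grids used by full LUTs)."""
    if not temps[0] <= t <= temps[-1]:
        raise ValueError("temperature outside the LUT grid")
    for i in range(len(temps) - 1):
        if temps[i] <= t <= temps[i + 1]:
            frac = (t - temps[i]) / (temps[i + 1] - temps[i])
            return xsecs[i] + frac * (xsecs[i + 1] - xsecs[i])

temps = [200.0, 250.0, 300.0]        # grid points (K)
xsecs = [1.0e-22, 1.4e-22, 2.0e-22]  # hypothetical cross sections (cm^2)
sigma = interp_cross_section(temps, xsecs, 275.0)  # halfway between nodes
```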

  7. Ontology Sparse Vector Learning Algorithm for Ontology Similarity Measuring and Ontology Mapping via ADAL Technology

    NASA Astrophysics Data System (ADS)

    Gao, Wei; Zhu, Linli; Wang, Kaiyun

    2015-12-01

    Ontology, a model of knowledge representation and storage, has had extensive applications in pharmaceutics, social science, chemistry and biology. In the age of “big data”, the constructed concepts are often represented as higher-dimensional data by scholars, and thus sparse learning techniques are introduced into ontology algorithms. In this paper, based on the alternating direction augmented Lagrangian method, we present an ontology optimization algorithm for ontological sparse vector learning, together with a fast version of the algorithm. The optimal sparse vector is obtained by an iterative procedure, and the ontology function is then obtained from the sparse vector. Four simulation experiments show that our ontological sparse vector learning model has a higher precision ratio on plant ontology, humanoid robotics ontology, biology ontology and physics education ontology data for similarity measuring and ontology mapping applications.
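
    The shrinkage (soft-thresholding) step at the heart of l1-regularized sparse vector learning, which alternating-direction augmented Lagrangian iterations apply repeatedly, can be sketched as follows; the vector and threshold are illustrative, and this is the generic proximal operator rather than the paper's full algorithm:

```python
def soft_threshold(v, lam):
    """Proximal operator of the l1 norm: shrink each coefficient
    toward zero by lam, setting small coefficients exactly to zero."""
    return [max(abs(x) - lam, 0.0) * (1 if x > 0 else -1) for x in v]

# Small coefficients are driven exactly to zero, producing sparsity.
sparse = soft_threshold([0.9, -0.05, 0.3, -1.2], lam=0.2)
```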

  8. The Ontology of Disaster.

    ERIC Educational Resources Information Center

    Thompson, Neil

    1995-01-01

    Explores some key existential or ontological concepts to show their applicability to the complex area of disaster impact as it relates to health and social welfare practice. Draws on existentialist philosophy, particularly that of John Paul Sartre, and introduces some key ontological concepts to show how they specifically apply to the experience…

  9. Constructive Ontology Engineering

    ERIC Educational Resources Information Center

    Sousan, William L.

    2010-01-01

    The proliferation of the Semantic Web depends on ontologies for knowledge sharing, semantic annotation, data fusion, and descriptions of data for machine interpretation. However, ontologies are difficult to create and maintain. In addition, their structure and content may vary depending on the application and domain. Several methods described in…

  12. Ayurveda research: Ontological challenges

    PubMed Central

    Nayak, Jayakrishna

    2012-01-01

    Collaborative research involving Ayurveda and the current sciences is undoubtedly an imperative and is emerging as an exciting horizon, particularly in the basic sciences. Some work in this direction is already going on, and outcomes are awaited with bated breath; for instance, the ‘ASIIA (A Science Initiative In Ayurveda)’ projects of the Dept of Science and Technology, Govt of India, which include studies such as Ayurvedic Prakriti and genetics. Further intense and sustained collaborative research needs to overcome a subtle and fundamental challenge: the ontological divide between Ayurveda and all the current sciences. Ontology, fundamentally, means existence; elaborated, an ontology is a particular perspective of an object of existence and the vocabulary developed to share that perspective. The same object of existence is susceptible to several ontologies. Ayurveda and modern biomedical as well as other sciences belong to different ontologies, and as such, collaborative research cannot be carried out at the required levels until a mutually acceptable vocabulary is developed. PMID:22529675

  13. BiOSS: A system for biomedical ontology selection.

    PubMed

    Martínez-Romero, Marcos; Vázquez-Naya, José M; Pereira, Javier; Pazos, Alejandro

    2014-04-01

    In biomedical informatics, ontologies are considered a key technology for annotating, retrieving and sharing the huge volume of publicly available data. Due to the increasing amount, complexity and variety of existing biomedical ontologies, choosing the ones to be used in a semantic annotation problem or in the design of a specific application is a difficult task. As a consequence, the design of approaches and tools that facilitate the selection of biomedical ontologies is becoming a priority. In this paper we present BiOSS, a novel system for the selection of biomedical ontologies. BiOSS evaluates the adequacy of an ontology to a given domain according to three different criteria: (1) the extent to which the ontology covers the domain; (2) the semantic richness of the ontology in the domain; (3) the popularity of the ontology in the biomedical community. BiOSS has been applied to 5 representative problems of ontology selection. It has also been compared to existing methods and tools. Results are promising and show the usefulness of BiOSS for solving real-world ontology selection problems. BiOSS is openly available both as a web tool and a web service. Copyright © 2014 Elsevier Ireland Ltd. All rights reserved.
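
    A minimal sketch of the kind of multi-criteria scoring described above; the equal weighting and the candidate scores below are assumptions for illustration, not the published BiOSS formula:

```python
def adequacy_score(coverage, richness, popularity, weights=(1/3, 1/3, 1/3)):
    """Combine the three criteria (each assumed normalised to [0, 1])
    into a single adequacy score via a weighted sum."""
    w1, w2, w3 = weights
    return w1 * coverage + w2 * richness + w3 * popularity

# Hypothetical candidate ontologies with (coverage, richness, popularity).
candidates = {
    "OntologyA": adequacy_score(0.9, 0.6, 0.8),
    "OntologyB": adequacy_score(0.7, 0.9, 0.4),
}
best = max(candidates, key=candidates.get)  # highest-scoring candidate
```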

  14. Development of an Adolescent Depression Ontology for Analyzing Social Data.

    PubMed

    Jung, Hyesil; Park, Hyeoun-Ae; Song, Tae-Min; Jeon, Eunjoo; Kim, Ae Ran; Lee, Joo Yun

    2015-01-01

    Depression in adolescence is associated with significant suicidality. Therefore, it is important to detect the risk of depression and provide timely care to adolescents. This study aims to develop an ontology for collecting and analyzing social media data about adolescent depression. The ontology was developed using the 'Ontology Development 101' methodology. Important terms were extracted from several clinical practice guidelines and from postings on social network services. We extracted 777 terms, which were categorized into 'risk factors', 'signs and symptoms', 'screening', 'diagnosis', 'treatment', and 'prevention'. The ontology developed in this study can be used as a framework for understanding adolescent depression using unstructured data from social media.

  15. The neurological disease ontology.

    PubMed

    Jensen, Mark; Cox, Alexander P; Chaudhry, Naveed; Ng, Marcus; Sule, Donat; Duncan, William; Ray, Patrick; Weinstock-Guttman, Bianca; Smith, Barry; Ruttenberg, Alan; Szigeti, Kinga; Diehl, Alexander D

    2013-12-06

    We are developing the Neurological Disease Ontology (ND) to provide a framework to enable representation of aspects of neurological diseases that are relevant to their treatment and study. ND is a representational tool that addresses the need for unambiguous annotation, storage, and retrieval of data associated with the treatment and study of neurological diseases. ND is being developed in compliance with the Open Biomedical Ontology Foundry principles and builds upon the paradigm established by the Ontology for General Medical Science (OGMS) for the representation of entities in the domain of disease and medical practice. Initial applications of ND will include the annotation and analysis of large data sets and patient records for Alzheimer's disease, multiple sclerosis, and stroke. ND is implemented in OWL 2 and currently has more than 450 terms that refer to and describe various aspects of neurological diseases. ND directly imports the development version of OGMS, which uses BFO 2. Term development in ND has primarily extended the OGMS terms 'disease', 'diagnosis', 'disease course', and 'disorder'. We have imported and utilize over 700 classes from related ontology efforts including the Foundational Model of Anatomy, Ontology for Biomedical Investigations, and Protein Ontology. ND terms are annotated with ontology metadata such as a label (term name), term editors, textual definition, definition source, curation status, and alternative terms (synonyms). Many terms have logical definitions in addition to these annotations. Current development has focused on the establishment of the upper-level structure of the ND hierarchy, as well as on the representation of Alzheimer's disease, multiple sclerosis, and stroke. The ontology is available as a version-controlled file at http://code.google.com/p/neurological-disease-ontology along with a discussion list and an issue tracker. ND seeks to provide a formal foundation for the representation of clinical and research data.

  16. Lookup Tables Versus Stacked Rasch Analysis in Comparing Pre- and Postintervention Adult Strabismus-20 Data

    PubMed Central

    Leske, David A.; Hatt, Sarah R.; Liebermann, Laura; Holmes, Jonathan M.

    2016-01-01

    Purpose We compare two methods of analysis for Rasch scoring pre- to postintervention data: Rasch lookup table versus de novo stacked Rasch analysis using the Adult Strabismus-20 (AS-20). Methods One hundred forty-seven subjects completed the AS-20 questionnaire prior to surgery and 6 weeks postoperatively. Subjects were classified 6 weeks postoperatively as “success,” “partial success,” or “failure” based on angle and diplopia status. Postoperative change in AS-20 scores was compared for all four AS-20 domains (self-perception, interactions, reading function, and general function) overall and by success status using two methods: (1) applying historical Rasch threshold measures from lookup tables and (2) performing a stacked de novo Rasch analysis. Change was assessed by analyzing effect size, improvement exceeding 95% limits of agreement (LOA), and score distributions. Results Effect sizes were similar for all AS-20 domains whether obtained from lookup tables or stacked analysis. Similar proportions exceeded 95% LOAs using lookup tables versus stacked analysis. Improvement in median score was observed for all AS-20 domains using lookup tables and stacked analysis (P < 0.0001 for all comparisons). Conclusions The Rasch-scored AS-20 is a responsive and valid instrument designed to measure strabismus-specific health-related quality of life. When analyzing pre- to postoperative change in AS-20 scores, Rasch lookup tables and de novo stacked Rasch analysis yield essentially the same results. Translational Relevance We describe a practical application of lookup tables, allowing the clinician or researcher to score the Rasch-calibrated AS-20 questionnaire without specialized software. PMID:26933524
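
    The lookup-table approach described here amounts to a simple raw-score-to-measure mapping applied identically to pre- and postoperative data; the table below is invented for illustration and is not the published AS-20 calibration:

```python
# Hypothetical raw-score -> Rasch measure (logits) conversion for one
# domain, standing in for a published lookup table.
LOOKUP = {0: -3.2, 1: -2.1, 2: -1.3, 3: -0.4, 4: 0.6, 5: 1.9}

def rasch_change(pre_raw, post_raw, table=LOOKUP):
    """Score pre- and postoperative raw scores with the same lookup
    table and return the change in Rasch measure (logits)."""
    return table[post_raw] - table[pre_raw]

# No specialized Rasch software needed once the table exists.
delta = rasch_change(pre_raw=1, post_raw=4)
```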

  17. Application of Ontologies for Big Earth Data

    NASA Astrophysics Data System (ADS)

    Huang, T.; Chang, G.; Armstrong, E. M.; Boening, C.

    2014-12-01

    Connected data is smarter data! Earth Science research infrastructure must do more than support temporal and geospatial discovery of satellite data. As the Earth Science data archives continue to expand across NASA data centers, the research communities are demanding smarter data services. A successful research infrastructure must be able to present researchers the complete picture, that is, datasets with linked citations, related interdisciplinary data, imagery, current events, social media discussions, and scientific data tools that are relevant to the particular dataset. The popular Semantic Web for Earth and Environmental Terminology (SWEET) is a collection of ontologies and concepts designed to improve discovery and application of Earth Science data. The SWEET collection was initially developed to capture the relationships between keywords in the NASA Global Change Master Directory (GCMD). Over the years this popular collection has expanded to cover over 200 ontologies and 6000 concepts to enable scalable classification of Earth system science and space science concepts. This presentation discusses semantic web technologies as the enabling technology for data-intensive science. We will discuss the application of the SWEET ontologies as a critical component in knowledge-driven research infrastructure for some recent projects, which include the DARPA Ontological System for Context Artifact and Resources (OSCAR), the 2013 NASA ACCESS Virtual Quality Screening Service (VQSS), and the 2013 NASA Sea Level Change Portal (SLCP) projects. The presentation will also discuss the benefits of using semantic web technologies in developing research infrastructure for Big Earth Science Data in an attempt to "accommodate all domains and provide the necessary glue for information to be cross-linked, correlated, and discovered in a semantically rich manner." [1] [1] Savas Parastatidis: A platform for all that we know

  18. Dynamic Generation of Reduced Ontologies to Support Resource Constraints of Mobile Devices

    ERIC Educational Resources Information Center

    Schrimpsher, Dan

    2011-01-01

    As Web Services and the Semantic Web become more important, enabling technologies such as web service ontologies will grow larger. At the same time, use of mobile devices to access web services has doubled in the last year. The ability of these resource constrained devices to download and reason across these ontologies to support service discovery…

  20. A Probabilistic Ontology Development Methodology

    DTIC Science & Technology

    2014-06-01

    for knowledge-sharing and reuse are explicit, logical and defensible • Standard ontological engineering methods provide insufficient support for...interoperability and provide support for automation. Today, ontologies are popular in areas such as the Semantic Web, knowledge engineering, Artificial...assigning probability to a class instantiation or representing a probability scheme using ontology constructs. Standard ontological engineering methods

  1. Data mining for ontology development.

    SciTech Connect

    Davidson, George S.; Strasburg, Jana; Stampf, David; Neymotin,Lev; Czajkowski, Carl; Shine, Eugene; Bollinger, James; Ghosh, Vinita; Sorokine, Alexandre; Ferrell, Regina; Ward, Richard; Schoenwald, David Alan

    2010-06-01

    A multi-laboratory ontology construction effort during the summer and fall of 2009 prototyped an ontology for counterfeit semiconductor manufacturing. This effort included an ontology development team and an ontology validation methods team. Here the third team of the Ontology Project, the Data Analysis (DA) team, reports on their approaches, the tools they used, and results for mining literature for terminology pertinent to counterfeit semiconductor manufacturing. A discussion of the value of ontology-based analysis is presented, with insights drawn from other ontology-based methods regularly used in the analysis of genomic experiments. Finally, suggestions for future work are offered.

  2. A Method for Evaluating and Standardizing Ontologies

    ERIC Educational Resources Information Center

    Seyed, Ali Patrice

    2012-01-01

    The Open Biomedical Ontology (OBO) Foundry initiative is a collaborative effort for developing interoperable, science-based ontologies. The Basic Formal Ontology (BFO) serves as the upper ontology for the domain-level ontologies of OBO. BFO is an upper ontology of types as conceived by defenders of realism. Among the ontologies developed for OBO…

  3. A Method for Evaluating and Standardizing Ontologies

    ERIC Educational Resources Information Center

    Seyed, Ali Patrice

    2012-01-01

    The Open Biomedical Ontology (OBO) Foundry initiative is a collaborative effort for developing interoperable, science-based ontologies. The Basic Formal Ontology (BFO) serves as the upper ontology for the domain-level ontologies of OBO. BFO is an upper ontology of types as conceived by defenders of realism. Among the ontologies developed for OBO…

  4. Ontologies for Bioinformatics

    PubMed Central

    Schuurman, Nadine; Leszczynski, Agnieszka

    2008-01-01

    The past twenty years have witnessed an explosion of biological data in diverse database formats governed by heterogeneous infrastructures. Not only are semantics (attribute terms) different in meaning across databases, but their organization varies widely. Ontologies are a concept imported from computing science to describe different conceptual frameworks that guide the collection, organization and publication of biological data. An ontology is similar to a paradigm but has very strict implications for formatting and meaning in a computational context. The use of ontologies is a means of communicating and resolving semantic and organizational differences between biological databases in order to enhance their integration. The purpose of interoperability (or sharing between divergent storage and semantic protocols) is to allow scientists from around the world to share and communicate with each other. This paper describes the rapid accumulation of biological data, its various organizational structures, and the role that ontologies play in interoperability. PMID:19812775

  5. Applications of Ontology Design Patterns in Biomedical Ontologies

    PubMed Central

    Mortensen, Jonathan M.; Horridge, Matthew; Musen, Mark A.; Noy, Natalya F.

    2012-01-01

    Ontology design patterns (ODPs) are a proposed solution to facilitate ontology development, and to help users avoid some of the most frequent modeling mistakes. ODPs originate from similar approaches in software engineering, where software design patterns have become a critical aspect of software development. There is little empirical evidence for ODP prevalence or effectiveness thus far. In this work, we determine the use and applicability of ODPs in a case study of biomedical ontologies. We encoded ontology design patterns from two ODP catalogs. We then searched for these patterns in a set of eight ontologies. We found five patterns of the 69 patterns. Two of the eight ontologies contained these patterns. While ontology design patterns provide a vehicle for capturing formally reoccurring models and best practices in ontology design, we show that today their use in a case study of widely used biomedical ontologies is limited. PMID:23304337

  6. Applications of ontology design patterns in biomedical ontologies.

    PubMed

    Mortensen, Jonathan M; Horridge, Matthew; Musen, Mark A; Noy, Natalya F

    2012-01-01

    Ontology design patterns (ODPs) are a proposed solution to facilitate ontology development, and to help users avoid some of the most frequent modeling mistakes. ODPs originate from similar approaches in software engineering, where software design patterns have become a critical aspect of software development. There is little empirical evidence for ODP prevalence or effectiveness thus far. In this work, we determine the use and applicability of ODPs in a case study of biomedical ontologies. We encoded ontology design patterns from two ODP catalogs. We then searched for these patterns in a set of eight ontologies. We found five of the 69 patterns. Two of the eight ontologies contained these patterns. While ontology design patterns provide a vehicle for formally capturing recurring models and best practices in ontology design, we show that today their use in a case study of widely used biomedical ontologies is limited.

  7. The gene ontology categorizer.

    PubMed

    Joslyn, Cliff A; Mniszewski, Susan M; Fulmer, Andy; Heaton, Gary

    2004-08-04

    The Gene Ontology Categorizer, developed jointly by the Los Alamos National Laboratory and Procter & Gamble Corp., provides a capability for the categorization task in the Gene Ontology (GO): given a list of genes of interest, what are the best nodes of the GO to summarize or categorize that list? The motivating question is from a drug discovery process, where after some gene expression analysis experiment, we wish to understand the overall effect of some cell treatment or condition by identifying 'where' in the GO the differentially expressed genes fall: 'clustered' together in one place? in two places? uniformly spread throughout the GO? 'high', or 'low'? In order to address this need, we view bio-ontologies more as combinatorially structured databases than facilities for logical inference, and draw on the discrete mathematics of finite partially ordered sets (posets) to develop data representation and algorithms appropriate for the GO. In doing so, we have laid the foundations for a general set of methods to address not just the categorization task, but also other tasks (e.g. distances in ontologies and ontology merger and exchange) in both the GO and other bio-ontologies (such as the Enzyme Commission database or the Medical Subject Headings) cast as hierarchically structured taxonomic knowledge systems.
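    The categorization task described above can be sketched as a small poset computation: propagate query-gene annotations up a toy GO-like DAG and score candidate summary nodes by query coverage weighted against node generality. The graph, annotations and penalty weight below are illustrative stand-ins, not POSOC's actual scoring functions or pseudo-distance measures.

```python
# Toy poset categorization: propagate gene annotations up a GO-like DAG and
# score candidate summary nodes by coverage, penalized by generality.
parents = {  # child -> parents (invented mini-ontology)
    "binding": ["molecular_function"],
    "dna_binding": ["binding"],
    "rna_binding": ["binding"],
    "catalysis": ["molecular_function"],
}
annotations = {"g1": "dna_binding", "g2": "rna_binding", "g3": "catalysis"}

def ancestors(node):
    """The node itself plus everything reachable via parent links."""
    seen, stack = set(), [node]
    while stack:
        n = stack.pop()
        if n not in seen:
            seen.add(n)
            stack.extend(parents.get(n, []))
    return seen

def score_nodes(query_genes, generality_penalty=0.4):
    coverage = {}
    for g in query_genes:
        for node in ancestors(annotations[g]):
            coverage[node] = coverage.get(node, 0) + 1
    generality = {}  # how many nodes sit beneath each node
    for child in parents:
        for node in ancestors(child) - {child}:
            generality[node] = generality.get(node, 0) + 1
    return {n: c - generality_penalty * generality.get(n, 0)
            for n, c in coverage.items()}
```

    For the query {g1, g2} the intermediate node "binding" outranks both the leaves and the overly general root, which is the intuition behind summarizing 'where' in the GO a gene list falls.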

  8. The Ontology of the Gene Ontology

    PubMed Central

    Smith, Barry; Williams, Jennifer; Schulze-Kremer, Steffen

    2003-01-01

    The rapidly increasing wealth of genomic data has driven the development of tools to assist in the task of representing and processing information about genes, their products and their functions. One of the most important of these tools is the Gene Ontology (GO), which is being developed in tandem with work on a variety of bioinformatics databases. An examination of the structure of GO, however, reveals a number of problems, which we believe can be resolved by taking account of certain organizing principles drawn from philosophical ontology. We shall explore the results of applying such principles to GO with a view to improving GO’s consistency and coherence and thus its future applicability in the automated processing of biological data. PMID:14728245

  9. Efficient lookup table using a linear function of inverse distance squared.

    PubMed

    Jung, Jaewoon; Mori, Takaharu; Sugita, Yuji

    2013-10-30

    The major bottleneck in molecular dynamics (MD) simulations of biomolecules lies in the calculation of pairwise nonbonded interactions such as Lennard-Jones and long-range electrostatic interactions. The particle-mesh Ewald (PME) method evaluates long-range electrostatic interactions accurately and quickly during MD simulation. However, the evaluation of energy and gradient includes time-consuming inverse square roots and complementary error functions. To avoid such time-consuming operations while keeping accuracy, we propose a new lookup table for the short-range interaction in PME, defining energy and gradient as a linear function of the inverse distance squared. In our lookup table approach, the density of table points is inversely proportional to the squared pair distance, enabling accurate evaluation of energy and gradient at small pair distances. Despite the inverse operation involved, the new lookup table scheme allows fast pairwise nonbonded calculations owing to efficient usage of cache memory.
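    A minimal sketch of the idea: the PME real-space pair energy erfc(βr)/r is tabulated uniformly in u = 1/r², so table points are densest at small separations where the function varies fastest, and each lookup is a linear interpolation in u that needs neither a square root nor an erfc. The screening parameter, distance range and table size below are assumed values for illustration, not those of the paper.

```python
import math

BETA = 0.35               # Ewald screening parameter (assumed)
R_MIN, R_MAX = 1.0, 12.0  # distance range covered by the table (assumed)
N = 4096                  # number of table points

# Table is uniform in u = 1/r^2, so points are densest at small r.
U_MAX = 1.0 / (R_MIN * R_MIN)
U_MIN = 1.0 / (R_MAX * R_MAX)
DU = (U_MAX - U_MIN) / (N - 1)

def exact_energy(r):
    """Real-space PME pair energy: erfc(beta*r)/r."""
    return math.erfc(BETA * r) / r

table = [exact_energy(1.0 / math.sqrt(U_MIN + i * DU)) for i in range(N)]

def lookup_energy(r2):
    """Linear interpolation in u = 1/r^2: no sqrt or erfc per pair."""
    u = 1.0 / r2
    x = (u - U_MIN) / DU
    i = min(int(x), N - 2)
    f = x - i
    return (1.0 - f) * table[i] + f * table[i + 1]
```

    Because the MD inner loop already has the squared distance from the cutoff test, the lookup consumes r² directly; the single division replaces both the square root and the error-function evaluation.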

  10. Open Biomedical Ontology-based Medline exploration

    PubMed Central

    Xuan, Weijian; Dai, Manhong; Mirel, Barbara; Song, Jean; Athey, Brian; Watson, Stanley J; Meng, Fan

    2009-01-01

    Background Effective Medline database exploration is critical for the understanding of high throughput experimental results and the development of novel hypotheses about the mechanisms underlying the targeted biological processes. While existing solutions enhance Medline exploration through different approaches such as document clustering, network presentations of underlying conceptual relationships and the mapping of search results to MeSH and Gene Ontology trees, we believe the use of multiple ontologies from the Open Biomedical Ontologies (OBO) Foundry can greatly help researchers to explore literature from different perspectives as well as to quickly locate the most relevant Medline records for further investigation. Results We developed an ontology-based interactive Medline exploration solution called PubOnto to enable the interactive exploration and filtering of search results through the use of multiple ontologies from the OBO foundry. The PubOnto program is a rich internet application based on the FLEX platform. It contains a number of interactive tools, visualization capabilities, an open service architecture, and a customizable user interface. It is freely accessible at: . PMID:19426463

  11. Does Look-up Frequency Help Reading Comprehension of EFL Learners? Two Empirical Studies of Electronic Dictionaries

    ERIC Educational Resources Information Center

    Koyama, Toshiko; Takeuchi, Osamu

    2007-01-01

    Two empirical studies were conducted in which the differences in Japanese EFL learners' look-up behavior between hand-held electronic dictionaries (EDs) and printed dictionaries (PDs) were investigated. We focus here on the relation between learners' look-up frequency and degree of reading comprehension of the text. In the first study, a total of…

  12. Does Look-up Frequency Help Reading Comprehension of EFL Learners? Two Empirical Studies of Electronic Dictionaries

    ERIC Educational Resources Information Center

    Koyama, Toshiko; Takeuchi, Osamu

    2007-01-01

    Two empirical studies were conducted in which the differences in Japanese EFL learners' look-up behavior between hand-held electronic dictionaries (EDs) and printed dictionaries (PDs) were investigated. We focus here on the relation between learners' look-up frequency and degree of reading comprehension of the text. In the first study, a total of…

  13. Ontological engineering versus metaphysics

    NASA Astrophysics Data System (ADS)

    Tataj, Emanuel; Tomanek, Roman; Mulawka, Jan

    2011-10-01

    It has been recognized that ontologies are a semantic version of the world wide web and can be found in knowledge-based systems. A recent survey of this field also suggests that practical artificial intelligence systems may be motivated by this research. Strong artificial intelligence in particular, as well as the concept of the homo computer, can benefit from their use. The main objective of this contribution is to present and review already created ontologies and identify the main advantages such an approach brings to knowledge management systems. We would like to present what ontological engineering borrows from metaphysics and what feedback it can provide to natural language processing, simulation and modelling. Potential topics for further development from a philosophical point of view are also outlined.

  14. Improving the dictionary lookup approach for disease normalization using enhanced dictionary and query expansion

    PubMed Central

    Jonnagaddala, Jitendra; Jue, Toni Rose; Chang, Nai-Wen; Dai, Hong-Jie

    2016-01-01

    The rapidly increasing biomedical literature calls for automatic approaches to the recognition and normalization of disease mentions, in order to increase the precision and effectiveness of disease-based information retrieval. A variety of methods have been proposed to deal with the problem of disease named entity recognition and normalization. Among all the proposed methods, conditional random fields (CRFs) and dictionary lookup are widely used for named entity recognition and normalization, respectively. We herein developed a CRF-based model to allow automated recognition of disease mentions, and studied the effect of various techniques in improving the normalization results based on the dictionary lookup approach. The dataset from the BioCreative V CDR track was used to report the performance of the developed normalization methods and compare with other existing dictionary lookup based normalization methods. The best configuration achieved an F-measure of 0.77 for the disease normalization, which outperformed the best dictionary lookup based baseline method studied in this work by an F-measure of 0.13. Database URL: https://github.com/TCRNBioinformatics/DiseaseExtract PMID:27504009
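    The dictionary lookup plus query expansion idea can be sketched as follows. The disease names, synonyms, identifiers and expansion rules here are invented for illustration; they are not the enhanced dictionary or the expansion strategies evaluated in the paper.

```python
import re

# Toy enhanced dictionary: surface forms plus added synonyms mapped to
# identifiers. Names and IDs are illustrative, not real MeSH entries.
dictionary = {
    "hepatitis b": "D0001",
    "liver inflammation": "D0001",   # synonym enriching the dictionary
    "type 2 diabetes": "D0002",
    "diabetes mellitus type 2": "D0002",
}

def expand(mention):
    """Query expansion: generate normalized variants of a mention."""
    m = mention.lower().strip()
    variants = {m}
    # drop punctuation and collapse whitespace
    variants.add(re.sub(r"\s+", " ", re.sub(r"[,\-/]", " ", m)).strip())
    # reorder "X, type N" -> "type N X"
    swap = re.match(r"(.+),\s*(type \d+)$", m)
    if swap:
        variants.add(f"{swap.group(2)} {swap.group(1)}")
    return variants

def normalize(mention):
    """Return the identifier of the first expanded variant found, if any."""
    for v in expand(mention):
        if v in dictionary:
            return dictionary[v]
    return None
```

    A mention like "Diabetes, type 2" fails exact lookup but succeeds after reordering, which is the kind of gain query expansion buys over a plain dictionary match.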

  15. Improving the dictionary lookup approach for disease normalization using enhanced dictionary and query expansion.

    PubMed

    Jonnagaddala, Jitendra; Jue, Toni Rose; Chang, Nai-Wen; Dai, Hong-Jie

    2016-01-01

    The rapidly increasing biomedical literature calls for automatic approaches to the recognition and normalization of disease mentions, in order to increase the precision and effectiveness of disease-based information retrieval. A variety of methods have been proposed to deal with the problem of disease named entity recognition and normalization. Among all the proposed methods, conditional random fields (CRFs) and dictionary lookup are widely used for named entity recognition and normalization, respectively. We herein developed a CRF-based model to allow automated recognition of disease mentions, and studied the effect of various techniques in improving the normalization results based on the dictionary lookup approach. The dataset from the BioCreative V CDR track was used to report the performance of the developed normalization methods and compare with other existing dictionary lookup based normalization methods. The best configuration achieved an F-measure of 0.77 for the disease normalization, which outperformed the best dictionary lookup based baseline method studied in this work by an F-measure of 0.13. Database URL: https://github.com/TCRNBioinformatics/DiseaseExtract. © The Author(s) 2016. Published by Oxford University Press.

  16. A hybrid table look-up method for H.264/AVC coeff_token decoding

    NASA Astrophysics Data System (ADS)

    Liu, Suhua; Zhang, Yixiong; Lu, Min; Tang, Biyu

    2011-10-01

    In this paper, a hybrid table look-up method for H.264 coeff_token decoding is presented. In the proposed method, the probabilities of codewords of various lengths are analyzed, and based on these statistics a hybrid look-up table is constructed. In the coeff_token decoding process, a few bits are first read from the bit-stream; if a matching codeword is found in the first look-up table, further look-up steps are skipped. Otherwise, more bits are read and looked up in the second table, which is indexed by the number of leading zeros before the first one-bit. Experimental results on the RTSM Emulation Baseboard ARM926 of RealView show that the proposed method speeds up CAVLD of H.264 by about 8% with more efficient memory utilization, compared to the prefix-based decoding method. Compared with the pattern-search method based on hashing algorithms adopted in the newest version of FFMPEG, the proposed method reduces memory space by about 77%.
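    A toy two-level decode in the spirit of the hybrid table: a small first table resolves short, high-probability codewords, and a second table indexed by the leading-zero count handles the rest. The codebook below is invented for illustration and is not the actual H.264 coeff_token code.

```python
# Invented prefix-free two-level codebook (not the real coeff_token tables).
FIRST_BITS = 3
level1 = {"1": ("A", 1), "01": ("B", 2), "001": ("C", 3)}  # short, frequent codes
level2 = {3: ("D", 5), 4: ("E", 6)}  # leading-zero count -> (symbol, total length)

def decode_one(bits):
    """Decode one symbol from a bit-string; return (symbol, remaining bits)."""
    # First table: try the short, high-probability codewords and skip
    # further look-up steps on a hit.
    for n in range(1, FIRST_BITS + 1):
        if bits[:n] in level1:
            sym, length = level1[bits[:n]]
            return sym, bits[length:]
    # Second table: indexed by the number of leading zeros.
    zeros = len(bits) - len(bits.lstrip("0"))
    sym, length = level2[zeros]
    return sym, bits[length:]
```

    The win comes from the first table covering the bulk of the probability mass, so most symbols decode with a single small-table probe.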

  17. Real-time color imaging system for NIR and visible based on neighborhood statistics lookup table

    NASA Astrophysics Data System (ADS)

    Wei, Sheng-yi; Jin, Zhen; Wang, Ling-xue; He, Yu; Zhou, Xing-guang

    2015-11-01

    Near-infrared radiation is the main component of solar radiation and is widely used in remote sensing, night vision, spectral detection, and other fields. NIR images are usually monochromatic, while color images benefit scene reconstruction and object detection. In this paper, a new computed color imaging method for NIR and visible images, based on a neighborhood statistics lookup table, is presented, and its implementation system is built. The lookup table is established from the neighborhood statistical properties of the image; using these properties enriches the color-transfer variables of the gray image, yielding a lookup table that improves the color transfer and makes the colorized image more natural. Because the neighborhood statistics represent the texture of the image, the proposed lookup table also transfers color details well. The results show that this method yields a color image with a natural color appearance and can be implemented in real time.
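    One plausible sketch of the approach: build a table mapping quantized neighborhood statistics (mean, standard deviation) of a gray reference image to the colors of a matched color reference, then colorize a new gray image by table lookup. The bin sizes and fallback rule are assumptions for illustration, not the paper's actual construction.

```python
import math

def stats(img, x, y, r=1):
    """Mean and standard deviation of the (2r+1)x(2r+1) neighborhood,
    clamped at the image border."""
    vals = [img[j][i]
            for j in range(max(0, y - r), min(len(img), y + r + 1))
            for i in range(max(0, x - r), min(len(img[0]), x + r + 1))]
    mean = sum(vals) / len(vals)
    return mean, math.sqrt(sum((v - mean) ** 2 for v in vals) / len(vals))

def bin_key(mean, std, bins=8):
    # Quantize the statistics so similar neighborhoods share a table entry.
    return (min(int(mean * bins / 256), bins - 1),
            min(int(std * bins / 64), bins - 1))

def build_table(gray_ref, color_ref):
    """Map neighborhood statistics of a gray reference to its colors."""
    table = {}
    for y, row in enumerate(gray_ref):
        for x, _ in enumerate(row):
            table.setdefault(bin_key(*stats(gray_ref, x, y)), color_ref[y][x])
    return table

def colorize(gray, table):
    out = []
    for y, row in enumerate(gray):
        out.append([])
        for x, _ in enumerate(row):
            key = bin_key(*stats(gray, x, y))
            if key not in table:  # fall back to the nearest recorded key
                key = min(table, key=lambda k:
                          (k[0] - key[0]) ** 2 + (k[1] - key[1]) ** 2)
            out[-1].append(table[key])
    return out
```

    Keying on (mean, std) rather than intensity alone is what lets texture, not just brightness, drive the color transfer.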

  18. Cache directory lookup reader set encoding for partial cache line speculation support

    DOEpatents

    Gara, Alan; Ohmacht, Martin

    2014-10-21

    In a multiprocessor system, with conflict checking implemented in a directory lookup of a shared cache memory, a reader set encoding permits dynamic recordation of read accesses. The reader set encoding includes an indication of a portion of a line read, for instance by indicating boundaries of read accesses. Different encodings may apply to different types of speculative execution.
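    One plausible reading of the encoding, sketched in software: each speculative reader records only the boundaries of the portion of the line it touched, later reads widen the recorded interval, and a write conflicts only with readers whose interval it overlaps. This is an illustration of the idea, not the patented hardware encoding.

```python
LINE_SIZE = 128  # bytes per cache line (illustrative)

class ReaderSet:
    """Per-line reader set recording, for each speculative reader, the
    boundaries of the portion of the line it read. Holes between separate
    reads are conservatively merged into one interval."""

    def __init__(self):
        self.readers = {}  # reader id -> (lo, hi) half-open byte range

    def record_read(self, rid, start, end):
        lo, hi = self.readers.get(rid, (start, end))
        self.readers[rid] = (min(lo, start), max(hi, end))

    def conflicts_with_write(self, start, end):
        """Reader ids whose recorded portion overlaps the written range."""
        return [r for r, (lo, hi) in self.readers.items()
                if lo < end and start < hi]
```

    Recording boundaries rather than a full per-byte bitmap is the space saving: a write to one half of the line raises no conflict for a reader that only touched the other half.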

  19. OTO: Ontology Term Organizer.

    PubMed

    Huang, Fengqiong; Macklin, James A; Cui, Hong; Cole, Heather A; Endara, Lorena

    2015-02-15

    The need to create controlled vocabularies such as ontologies for knowledge organization and access has been widely recognized in various domains. Despite the indispensable need for thorough domain knowledge in ontology construction, most software tools for ontology construction are designed for knowledge engineers rather than domain experts. The differences in the opinions of different domain experts and in the terminology usage in source literature are rarely addressed by existing software. OTO software was developed based on Agile principles. Through iterations of software release and user feedback, new features are added and existing features modified to make the tool more intuitive and efficient to use for small and large data sets. The software is open source and built in Java. Ontology Term Organizer (OTO; http://biosemantics.arizona.edu/OTO/ ) is a user-friendly, web-based, consensus-promoting, open source application for organizing domain terms by dragging and dropping terms to appropriate locations. The application is designed for users with specific domain knowledge such as biology but not in-depth ontology construction skills. Specifically, OTO can be used to establish is_a, part_of, synonym, and order relationships among terms in any domain, reflecting the terminology usage in source literature and based on multiple experts' opinions. The organized terms may be fed into formal ontologies to boost their coverage. All datasets organized on OTO are publicly available. OTO has been used to organize the terms extracted from thirty volumes of Flora of North America and Flora of China combined, in addition to some smaller datasets of different taxon groups. User feedback indicates that the tool is efficient and user friendly. Being open source software, the application can be modified to fit varied term organization needs for different domains.

  20. Geo-ontology design and its logic reasoning

    NASA Astrophysics Data System (ADS)

    Wang, Yandong; Dai, Jingjing; Sheng, Jizhen; Zhou, Kai; Gong, Jianya

    2007-06-01

    With the increasing application of geographic information systems (GIS), GIS faces the difficulty of efficiently managing and comprehensively applying spatial information from different sources and in different forms. To solve these problems, ontology is introduced into the GIS field as a concept model that can represent objects on the semantic and knowledge level. Ontology not only describes spatial data in a semantic encoding more easily understood by computers, but also integrates geographical data from different sources and in different forms for reasoning. In this paper, a geo-ontology "GeographicalSpace" is built with the Web Ontology Language (OWL) after analyzing the research and application of geo-ontology. A geo-ontology reasoning framework is put forward with three layers: a presentation layer, a semantic service layer and a spatial application server layer. By using the geo-ontology repository module and reasoning module in this framework, more complex spatial location relationships can be mined. Finally, an experiment demonstrates the geo-ontology's ability to answer more intelligent queries that cannot be implemented in traditional GIS.

  1. Ontology development for Sufism domain

    NASA Astrophysics Data System (ADS)

    Iqbal, Rizwan

    2012-01-01

    Domain ontology is a descriptive representation of a particular domain which describes in detail the concepts in that domain and the relationships among them, and organizes them in a hierarchical manner. It is also defined as a structure of knowledge, used as a means of sharing knowledge with the community. An important aspect of using ontologies is to make information retrieval more accurate and efficient. Thousands of domain ontologies from around the world are available online in ontology repositories. Repositories like SWOOGLE currently hold over 1000 ontologies covering a wide range of domains. It was found that, to date, no ontology was available covering the domain of "Sufism". This unavailability became a motivating factor for this research. This research produced a working "Sufism" domain ontology as well as a framework; the design of the proposed framework focuses on resolving the problems experienced while creating the "Sufism" ontology. The development and working of the "Sufism" domain ontology are covered in detail in this research. The word "Sufism" refers to Islamic mysticism; one of the reasons to choose "Sufism" for ontology creation is the global curiosity it attracts. This research has also created some individuals which inherit the concepts from the "Sufism" ontology. The creation of individuals helps to demonstrate the efficient and precise retrieval of data from the "Sufism" domain ontology. The experiment of creating the "Sufism" domain ontology was carried out on Protégé, an open-source tool used for ontology creation and editing.

  2. Ontology development for Sufism domain

    NASA Astrophysics Data System (ADS)

    Iqbal, Rizwan

    2011-12-01

    Domain ontology is a descriptive representation of a particular domain which describes in detail the concepts in that domain and the relationships among them, and organizes them in a hierarchical manner. It is also defined as a structure of knowledge, used as a means of sharing knowledge with the community. An important aspect of using ontologies is to make information retrieval more accurate and efficient. Thousands of domain ontologies from around the world are available online in ontology repositories. Repositories like SWOOGLE currently hold over 1000 ontologies covering a wide range of domains. It was found that, to date, no ontology was available covering the domain of "Sufism". This unavailability became a motivating factor for this research. This research produced a working "Sufism" domain ontology as well as a framework; the design of the proposed framework focuses on resolving the problems experienced while creating the "Sufism" ontology. The development and working of the "Sufism" domain ontology are covered in detail in this research. The word "Sufism" refers to Islamic mysticism; one of the reasons to choose "Sufism" for ontology creation is the global curiosity it attracts. This research has also created some individuals which inherit the concepts from the "Sufism" ontology. The creation of individuals helps to demonstrate the efficient and precise retrieval of data from the "Sufism" domain ontology. The experiment of creating the "Sufism" domain ontology was carried out on Protégé, an open-source tool used for ontology creation and editing.

  3. Biomedical ontologies: toward scientific debate.

    PubMed

    Maojo, V; Crespo, J; García-Remesal, M; de la Iglesia, D; Perez-Rey, D; Kulikowski, C

    2011-01-01

    Biomedical ontologies have been very successful in structuring knowledge for many different applications, receiving widespread praise for their utility and potential. Yet, the role of computational ontologies in scientific research, as opposed to knowledge management applications, has not been extensively discussed. We aim to stimulate further discussion on the advantages and challenges presented by biomedical ontologies from a scientific perspective. We review various aspects of biomedical ontologies going beyond their practical successes, and focus on some key scientific questions in two ways. First, we analyze and discuss current approaches to improve biomedical ontologies that are based largely on classical, Aristotelian ontological models of reality. Second, we raise various open questions about biomedical ontologies that require further research, analyzing in more detail those related to visual reasoning and spatial ontologies. We outline significant scientific issues that biomedical ontologies should consider, beyond current efforts of building practical consensus between them. For spatial ontologies, we suggest an approach for building "morphospatial" taxonomies, as an example that could stimulate research on fundamental open issues for biomedical ontologies. Analysis of a large number of problems with biomedical ontologies suggests that the field is very much open to alternative interpretations of current work, and in need of scientific debate and discussion that can lead to new ideas and research directions.

  4. Using a Foundational Ontology for Reengineering a Software Enterprise Ontology

    NASA Astrophysics Data System (ADS)

    Perini Barcellos, Monalessa; de Almeida Falbo, Ricardo

    The knowledge about software organizations is considerably relevant to software engineers. The use of a common vocabulary for representing the useful knowledge about software organizations involved in software projects is important for several reasons, such as to support knowledge reuse and to allow communication and interoperability between tools. Domain ontologies can be used to define a common vocabulary for sharing and reuse of knowledge about some domain. Foundational ontologies can be used for evaluating and re-designing domain ontologies, giving them real-world semantics. This paper presents an evaluation of a Software Enterprise Ontology that was reengineered using the Unified Foundational Ontology (UFO) as its basis.

  5. Theory of ontology and land use ontology construction

    NASA Astrophysics Data System (ADS)

    Zhou, Guofeng; Liu, Yongxue; Chao, Junjie; Shen, Chenhua; Yang, Hui

    2007-06-01

    This paper mainly addresses the problems of data sharing in land use database construction. How to accurately define geographic classification expressions and how to quickly and accurately express user demands are persistent problems for information system developers. The introduction of ontology and related technologies addresses these problems from a brand new perspective and provides strong theoretical and methodological support. From the relevant ontology theoretical studies, this paper summarizes the essence of the concept of ontology, and explores the types, roles, methods, formal expression and tools of ontology. On the basis of existing research, the paper brings forward a five-step method of ontology building and then uses this method to build an ontology for land use database construction. It also puts forward a conceptual model of a land use database based on ontology.

  6. Dahlbeck and Pure Ontology

    ERIC Educational Resources Information Center

    Mackenzie, Jim

    2016-01-01

    This article responds to Johan Dahlbeck's "Towards a pure ontology: Children's bodies and morality" ["Educational Philosophy and Theory," vol. 46 (1), 2014, pp. 8-23 (EJ1026561)]. His arguments from Nietzsche and Spinoza do not carry the weight he supposes, and the conclusions he draws from them about pedagogy would be…

  7. POSet Ontology Categorizer

    SciTech Connect

    Mniszewski, Sue M.

    2005-03-01

    POSet Ontology Categorizer (POSOC) V1.0 The POSet Ontology Categorizer (POSOC) software package provides tools for creating and mining poset-structured ontologies, such as the Gene Ontology (GO). Given a list of weighted query items (e.g. genes, proteins, and/or phrases) and one or more focus nodes, POSOC determines the ordered set of GO nodes that summarize the query, based on selections of a scoring function, pseudo-distance measure, specificity level, and cluster determination. Pseudo-distance measures provided are minimum chain length, maximum chain length, average of extreme chain lengths, and average of all chain lengths. A low specificity level, such as -1 or 0, results in a general set of clusters; increasing the specificity results in more specific and lighter clusters. POSOC cluster results can be compared against known results by calculating precision, recall, and f-score for graph neighborhood relationships. This tool has been used in understanding the function of a set of genes, finding similar genes, and annotating new proteins. The POSOC software consists of a set of Java interfaces, classes, and programs that run on Linux or Windows platforms. It incorporates graph classes from OpenJGraph (openjgraph.sourceforge.net).

  8. Dahlbeck and Pure Ontology

    ERIC Educational Resources Information Center

    Mackenzie, Jim

    2016-01-01

    This article responds to Johan Dahlbeck's "Towards a pure ontology: Children's bodies and morality" ["Educational Philosophy and Theory," vol. 46 (1), 2014, pp. 8-23 (EJ1026561)]. His arguments from Nietzsche and Spinoza do not carry the weight he supposes, and the conclusions he draws from them about pedagogy would be…

  9. Ontology, Language, and Culture

    ERIC Educational Resources Information Center

    Hyde, Richard Bruce

    The purpose of this essay is to consider some of the practical implications of Martin Heidegger's view that "Language is the house of Being," for the academic study of cultural transformation and intercultural communication. The paper describes the ontological basis of Heidegger's work, and the inquiry into Being, and contains sections on…

  10. Biomedicine: an ontological dissection.

    PubMed

    Baronov, David

    2008-01-01

    Though ubiquitous across the medical social sciences literature, the term "biomedicine" as an analytical concept remains remarkably slippery. It is argued here that this imprecision is due in part to the fact that biomedicine comprises three interrelated ontological spheres, each of which frames biomedicine as a distinct subject of investigation. This suggests that, depending upon one's ontological commitment, the meaning of biomedicine will shift. From an empirical perspective, biomedicine takes on the appearance of a scientific enterprise and is defined as a derivative category of Western science more generally. From an interpretive perspective, biomedicine represents a symbolic-cultural expression whose adherence to the principles of scientific objectivity conceals an ideological agenda. From a conceptual perspective, biomedicine represents an expression of social power that reflects structures of power and privilege within capitalist society. No one perspective exists in isolation, so the image of biomedicine from any one presents an incomplete understanding. It is the mutually conditioning interrelations between these ontological spheres that account for biomedicine's ongoing development. Thus, the ontological dissection of biomedicine that follows, with particular emphasis on the period of its formal crystallization in the late nineteenth and early twentieth centuries, is intended to deepen our understanding of biomedicine as an analytical concept across the medical social sciences literature.

  11. Ontology, Language, and Culture

    ERIC Educational Resources Information Center

    Hyde, Richard Bruce

    The purpose of this essay is to consider some of the practical implications of Martin Heidegger's view that "Language is the house of Being," for the academic study of cultural transformation and intercultural communication. The paper describes the ontological basis of Heidegger's work, and the inquiry into Being, and contains sections on…

  12. A full-spectrum k-distribution look-up table for radiative transfer in nonhomogeneous gaseous media

    NASA Astrophysics Data System (ADS)

    Wang, Chaojun; Ge, Wenjun; Modest, Michael F.; He, Boshu

    2016-01-01

    A full-spectrum k-distribution (FSK) look-up table has been constructed for gas mixtures within a certain range of thermodynamic states for three species, i.e., CO2, H2O and CO. The k-distribution of a mixture is assembled directly from the summation of the linear absorption coefficients of the three species. The systematic approach to generating the table, including the generation of the pressure-based absorption coefficient and the generation of the k-distribution, is discussed. To efficiently obtain accurate k-values for arbitrary thermodynamic states from tabulated values, a 6-D linear interpolation method is employed. A large number of radiative heat transfer calculations have been carried out to test the accuracy of the FSK look-up table. Results show that using the FSK look-up table provides excellent accuracy compared to the exact results. Without the time-consuming process of assembling the k-distribution from individual species plus mixing, using the FSK look-up table saves considerable computational cost. To evaluate the accuracy as well as the efficiency of the FSK look-up table, radiative heat transfer for a scaled Sandia D Flame is calculated to compare the CPU execution time using the FSK method based on the narrow-band database, correlations, and the look-up table. Results show that the FSK look-up table provides a computationally cheap alternative without much sacrifice in accuracy.
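    The 6-D linear interpolation step generalizes the familiar bilinear case. A dimension-agnostic sketch of multilinear interpolation over a rectilinear grid is given below; the axes and values are placeholders, not the FSK table itself, and with six axes the same routine performs the 6-D interpolation.

```python
import itertools
from bisect import bisect_right

def multilinear(axes, values, point):
    """Multilinear interpolation on a rectilinear grid.
    axes: list of sorted coordinate grids, one per dimension;
    values: callable mapping an index tuple to a tabulated value;
    point: query coordinates."""
    idx, frac = [], []
    for ax, x in zip(axes, point):
        # locate the cell containing x, clamped to the grid
        i = min(max(bisect_right(ax, x) - 1, 0), len(ax) - 2)
        idx.append(i)
        frac.append((x - ax[i]) / (ax[i + 1] - ax[i]))
    # blend the 2^D corner values of the enclosing cell
    total = 0.0
    for corner in itertools.product((0, 1), repeat=len(axes)):
        weight = 1.0
        for d, c in enumerate(corner):
            weight *= frac[d] if c else 1.0 - frac[d]
        total += weight * values(tuple(idx[d] + corner[d]
                                       for d in range(len(axes))))
    return total
```

    Each query touches 2^D corner values, so a 6-D lookup blends 64 tabulated k-values; this is what makes tabulation plus interpolation so much cheaper than re-assembling k-distributions per state.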

  13. Benchmarking Ontologies: Bigger or Better?

    PubMed Central

    Yao, Lixia; Divoli, Anna; Mayzus, Ilya; Evans, James A.; Rzhetsky, Andrey

    2011-01-01

    A scientific ontology is a formal representation of knowledge within a domain, typically including central concepts, their properties, and relations. With the rise of computers and high-throughput data collection, ontologies have become essential to data mining and sharing across communities in the biomedical sciences. Powerful approaches exist for testing the internal consistency of an ontology, but not for assessing the fidelity of its domain representation. We introduce a family of metrics that describe the breadth and depth with which an ontology represents its knowledge domain. We then test these metrics using (1) four of the most common medical ontologies with respect to a corpus of medical documents and (2) seven of the most popular English thesauri with respect to three corpora that sample language from medicine, news, and novels. Here we show that our approach captures the quality of ontological representation and guides efforts to narrow the breach between ontology and collective discourse within a domain. Our results also demonstrate key features of medical ontologies, English thesauri, and discourse from different domains. Medical ontologies have a small intersection, as do English thesauri. Moreover, dialects characteristic of distinct domains vary strikingly as many of the same words are used quite differently in medicine, news, and novels. As ontologies are intended to mirror the state of knowledge, our methods to tighten the fit between ontology and domain will increase their relevance for new areas of biomedical science and improve the accuracy and power of inferences computed across them. PMID:21249231

  14. Spectral Retrieval of Latent Heating Profiles from TRMM PR Data: Comparison of Look-Up Tables

    NASA Technical Reports Server (NTRS)

    Shige, Shoichi; Takayabu, Yukari N.; Tao, Wei-Kuo; Johnson, Daniel E.; Shie, Chung-Lin

    2003-01-01

    The primary goal of the Tropical Rainfall Measuring Mission (TRMM) is to use the information about distributions of precipitation to determine the four-dimensional (i.e., temporal and spatial) patterns of latent heating over the whole tropical region. The Spectral Latent Heating (SLH) algorithm has been developed to estimate latent heating profiles for the TRMM Precipitation Radar (PR) with a cloud-resolving model (CRM). The method uses CRM-generated heating-profile look-up tables for three rain types: convective, shallow stratiform, and anvil rain (deep stratiform with a melting level). For convective and shallow stratiform regions, the look-up table refers to the precipitation top height (PTH). For the anvil region, on the other hand, the look-up table refers to the precipitation rate at the melting level instead of PTH. For global applications, it is necessary to examine the universality of the look-up table. In this paper, we compare the look-up tables produced from numerical simulations of cloud ensembles forced with Tropical Ocean Global Atmosphere (TOGA) Coupled Ocean-Atmosphere Response Experiment (COARE) data and GARP Atlantic Tropical Experiment (GATE) data. There are some notable differences between the TOGA-COARE table and the GATE table, especially for the convective heating. First, there is a larger number of the deepest convective profiles in the TOGA-COARE table than in the GATE table, mainly due to differences in SST. Second, shallow convective heating is stronger in the TOGA-COARE table than in the GATE table. This might be attributable to the difference in the strength of the low-level inversions. Third, altitudes of convective heating maxima are higher in the TOGA-COARE table than in the GATE table. Levels of convective heating maxima are located just below the melting level, because warm-rain processes are prevalent in tropical oceanic convective systems. Differences in levels of convective heating maxima probably reflect
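
    The rain-type-dependent table keying described above can be sketched as follows; the bin granularity and profile values are hypothetical:

```python
def select_profile(tables, rain_type, pth_km=None, melt_rate=None):
    """Pick a latent-heating profile from pre-built look-up tables.

    Convective and shallow stratiform rain key on precipitation top
    height (PTH); anvil rain keys on the melting-level rain rate.
    """
    if rain_type in ("convective", "shallow_stratiform"):
        key = round(pth_km)      # bin by PTH in km
    elif rain_type == "anvil":
        key = round(melt_rate)   # bin by rain rate at the melting level
    else:
        raise ValueError(f"unknown rain type: {rain_type}")
    return tables[rain_type][key]
```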

  16. The Ontological Reversal: A Figure of Thought of Importance for Science Education.

    ERIC Educational Resources Information Center

    Dahlin, Bo

    2003-01-01

    Investigated whether the "ontological reversal" described by E. Husserl, the tendency to view abstract mathematical models of phenomena as more real than the phenomena themselves, is present in the reasoning of pre-service science teachers. Findings for 23 pre-service teachers indicate the presence of the ontological reversal as a figure…

  18. Integrating the human phenotype ontology into HeTOP terminology-ontology server.

    PubMed

    Grosjean, Julien; Merabti, Tayeb; Soualmia, Lina F; Letord, Catherine; Charlet, Jean; Robinson, Peter N; Darmoni, Stéfan J

    2013-01-01

    The Human Phenotype Ontology (HPO) is a controlled vocabulary which provides phenotype data related to genes or diseases. The Health Terminology/Ontology Portal (HeTOP) is a tool dedicated to both human beings and computers for accessing and browsing biomedical terminologies and ontologies (T/O). The objective of this work was to integrate the HPO into HeTOP in order to enhance both resources. The integration was successful and allows users to search and browse the HPO with a dedicated interface. Furthermore, the HPO has been enhanced with additional content such as new synonyms, translations, and mappings. Integrating T/O such as the HPO into HeTOP benefits the vocabularies, which are enriched in the process, and benefits HeTOP, which can provide a better service to both humans and machines.

  19. Rehabilitation robotics ontology on the cloud.

    PubMed

    Dogmus, Zeynep; Papantoniou, Agis; Kilinc, Muhammed; Yildirim, Sibel A; Erdem, Esra; Patoglu, Volkan

    2013-06-01

    We introduce the first formal rehabilitation robotics ontology, called RehabRobo-Onto, to represent information about rehabilitation robots and their properties, and a software system, RehabRobo-Query, to facilitate access to this ontology. RehabRobo-Query is made available on the cloud, utilizing Amazon Web Services, so that 1) rehabilitation robot designers around the world can add/modify information about their robots in RehabRobo-Onto, and 2) rehabilitation robot designers and physical medicine experts around the world can access the knowledge in RehabRobo-Onto by means of questions about robots, posed in natural language, with the guidance of the intelligent user interface of RehabRobo-Query. The ontology system consisting of RehabRobo-Onto and RehabRobo-Query is of great value to robot designers as well as physical therapists and medical doctors. On the one hand, robot designers can access various properties of existing robots and the related publications to further improve the state of the art. On the other hand, physical therapists and medical doctors can utilize the ontology to compare rehabilitation robots and identify the ones that best cover their needs, or to evaluate the effects of various devices for targeted joint exercises on patients with specific disorders.

  20. Improved look-up table method of computer-generated holograms.

    PubMed

    Wei, Hui; Gong, Guanghong; Li, Ni

    2016-11-10

    Heavy computation load and vast memory requirements are major bottlenecks of computer-generated holograms (CGHs), which are promising and challenging in three-dimensional displays. To address these problems, an improved look-up table (LUT) method suitable for arbitrarily sampled object points is proposed and implemented on a graphics processing unit (GPU); its reconstructed object quality is consistent with that of the coherent ray-trace (CRT) method. The concept of a distance factor is defined, and the distance factors are pre-computed off-line and stored in a look-up table. The results show that, while reconstruction quality close to that of the CRT method is obtained, the on-line computation time is dramatically reduced compared with the LUT method on the GPU, and the memory usage is considerably lower than that of the novel LUT (N-LUT) method. Optical experiments are carried out to validate the effectiveness of the proposed method.

  1. A Distributed Look-up Architecture for Text Mining Applications using MapReduce.

    PubMed

    Balkir, Atilla Soner; Foster, Ian; Rzhetsky, Andrey

    2011-11-01

    Text mining applications typically involve statistical models that require accessing and updating model parameters in an iterative fashion. With the growing size of the data, such models become extremely parameter-rich, and naive parallel implementations fail to address the scalability problem of maintaining a distributed look-up table that maps model parameters to their values. We evaluate several existing alternatives for providing coordination among worker nodes in Hadoop [11] clusters, and suggest a new multi-layered look-up architecture that is specifically optimized for certain problem domains. Our solution exploits the power-law distribution characteristics of phrase or n-gram counts in large corpora while utilizing a Bloom filter [2], an in-memory cache, and an HBase [12] cluster at varying levels of abstraction.
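
    The layered design (a Bloom filter to reject absent keys cheaply, an in-memory cache for hot keys, and a backing store standing in for the HBase cluster) can be sketched as follows; the sizes and hashing scheme are illustrative assumptions:

```python
import hashlib

class TieredLookup:
    """Multi-layer look-up: a Bloom filter rejects most absent keys
    cheaply, an in-memory cache serves hot keys, and a backing store
    (standing in for the HBase cluster) handles the rest."""

    def __init__(self, store, size=1024, hashes=3):
        self.store, self.size, self.hashes = store, size, hashes
        self.bits = bytearray(size)
        self.cache = {}
        for key in store:                 # populate the Bloom filter
            for pos in self._positions(key):
                self.bits[pos] = 1

    def _positions(self, key):
        # Derive several bit positions per key from salted hashes.
        for h in range(self.hashes):
            digest = hashlib.sha256(f"{h}:{key}".encode()).digest()
            yield int.from_bytes(digest[:4], "big") % self.size

    def get(self, key, default=0):
        if not all(self.bits[p] for p in self._positions(key)):
            return default                # definitely absent: no I/O
        if key not in self.cache:         # miss: fetch from the store
            self.cache[key] = self.store.get(key, default)
        return self.cache[key]
```

    With power-law-distributed n-gram counts, the few frequent keys stay in the cache while the long tail of absent keys is filtered out before any store access.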

  2. Intrinsic fluorescence of protein in turbid media using empirical relation based on Monte Carlo lookup table

    NASA Astrophysics Data System (ADS)

    Einstein, Gnanatheepam; Udayakumar, Kanniyappan; Aruna, Prakasarao; Ganesan, Singaravelu

    2017-03-01

    Fluorescence of protein has been widely used in diagnostic oncology for characterizing cellular metabolism. However, the intensity of fluorescence emission is affected by the absorbers and scatterers in tissue, which may lead to error in estimating the exact protein content of tissue. Extraction of intrinsic fluorescence from measured fluorescence has been achieved by different methods. Among them, Monte Carlo-based methods yield the highest accuracy for extracting intrinsic fluorescence. In this work, we have generated a lookup table from Monte Carlo simulation of fluorescence emission by protein. Furthermore, we fitted the generated lookup table using an empirical relation. The empirical relation between measured and intrinsic fluorescence is validated using tissue phantom experiments. The proposed relation can be used for estimating the intrinsic fluorescence of protein in real-time diagnostic applications, thereby improving the clinical interpretation of fluorescence spectroscopic data.

  3. Mini Ontologies and Metadata Expressions

    NASA Astrophysics Data System (ADS)

    King, T. A.; Ritschel, B.

    2013-12-01

    Ontologies come in many forms and with a wide range of detail and specificity. Of particular interest in the realm of science are classification schemes, or taxonomies. Within general science domains there may be multiple taxonomies. Each taxonomy can be represented as a very narrowly defined domain ontology. We call such ontologies "mini ontologies". Since mini ontologies are very modular and portable, they can be used in a variety of contexts. To illustrate the generation and use of mini ontologies, we show how enumerations which may be part of an existing data model, like the SPASE *Region enumerations, can be modeled as a mini ontology. We show how such ontologies can be transformed to generate metadata expressions which can be readily used in different operational contexts, for example in the tags of a web page. We define a set of context-specific transforms for commonly used metadata expressions which can preserve the semantic information in a mini ontology, and describe how such expressions are reversible. The sharing and adoption of mini ontologies can significantly enhance the discovery and use of related data resources within a community. We look at several cases where this is true, with a special focus on the international ESPAS project.

  4. A microprocessor-based table lookup approach for magnetic bearing linearization

    NASA Technical Reports Server (NTRS)

    Groom, N. J.; Miller, J. B.

    1981-01-01

    An approach for producing a linear transfer characteristic between force command and force output of a magnetic bearing actuator without flux biasing is presented. The approach is microprocessor based and uses a table lookup to generate drive signals for the magnetic bearing power driver. An experimental test setup used to demonstrate the feasibility of the approach is described, and test results are presented. The test setup contains bearing elements similar to those used in a laboratory model annular momentum control device.
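
    The linearization idea can be sketched as follows, assuming (purely for illustration) a quadratic force-current characteristic F = k·i²; a real table would be filled from measured actuator data:

```python
from bisect import bisect_left

K = 2.0  # hypothetical force constant, N/A^2
currents = [0.1 * n for n in range(101)]   # drive currents, 0 .. 10 A
forces = [K * i ** 2 for i in currents]    # monotone in current

def current_for_force(f_cmd):
    """Invert the force law by table look-up with linear interpolation,
    so a commanded force maps to a drive current and the command-to-force
    transfer characteristic becomes linear."""
    j = bisect_left(forces, f_cmd)
    if j == 0:
        return currents[0]
    if j >= len(forces):
        return currents[-1]
    t = (f_cmd - forces[j - 1]) / (forces[j] - forces[j - 1])
    return currents[j - 1] + t * (currents[j] - currents[j - 1])
```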

  5. Updated H2SO4-H2O binary homogeneous nucleation look-up tables

    NASA Astrophysics Data System (ADS)

    Yu, Fangqun

    2008-12-01

    The calculated rates of H2SO4-H2O binary homogeneous nucleation (BHN), which is the only nucleation mechanism currently widely used in global aerosol models, are well known to have large uncertainties. Recently, we have reduced the uncertainties in the BHN rates on the basis of a kinetic quasi-unary nucleation (KQUN) model, by taking into account the measured bonding energetics of H2SO4 monomers with hydrated sulfuric acid dimers and trimers. The uncertainties were further reduced by using two independent measurements to constrain the equilibrium constants for monomer hydration. In this paper, we present updated BHN rate look-up tables derived from the improved KQUN model, which can be used to obtain BHN rates under given conditions. The look-up tables cover a wide range of the key parameters found in the atmosphere and in laboratory studies, and their use significantly reduces the computational cost of BHN rate calculations, which is critical for multidimensional modeling. The look-up tables can also be used by those involved in experiments and field measurements to quickly assess the likelihood of BHN: one can obtain the BHN rates and the properties of critical clusters simply by browsing through the tables. A comparison of results based on the look-up tables with those from the widely used classical BHN model indicates that, in addition to differences of several orders of magnitude in nucleation rates, there are also substantial differences in the predicted numbers of sulfuric acid molecules in the critical clusters and in their dependence on key parameters.

  6. Colour displays and look-up tables: real time modification of digital images.

    PubMed

    Lutz, R W; Pun, T; Pellegrini, C

    1991-01-01

    Image processing in biomedical research has become customary, along with the use of colour displays to run image processing packages. The performance of software is highly dependent on the device it runs on: the architecture of the colour display, the depth of the frame buffer, the existence of a look-up table, etc. Knowledge of such basic features is therefore becoming very important, especially because results can differ from device to device. This introductory paper discusses hardware features and software applications. A general architecture of colour displays is presented, comparing the features of the most commonly used devices. The basic organisation of memory, electron gun and screen is analysed for each type of display, concluding with a more detailed study of raster scan devices. Frame buffer and look-up table organisation are then analysed in relation to overheads such as time and memory. The relation between image data and displayed images is discussed. By means of examples, the manipulation of colour tables is examined in detail, showing how to improve the display of images without altering the image data. Finally, the basic operations performed by the look-up table editor developed at the University of Geneva are presented.
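
    The kind of look-up table manipulation discussed, changing what is displayed without touching the stored pixel values, can be sketched as a simple contrast-stretch LUT for 8-bit images:

```python
def make_stretch_lut(lo, hi):
    """Build a 256-entry LUT that linearly stretches [lo, hi] to the
    full 0-255 output range, clipping values outside the window."""
    lut = []
    for v in range(256):
        if v <= lo:
            lut.append(0)
        elif v >= hi:
            lut.append(255)
        else:
            lut.append(round(255 * (v - lo) / (hi - lo)))
    return lut

def apply_lut(image_rows, lut):
    """Map every stored pixel value through the LUT; the image data
    itself is never modified, only its displayed rendition."""
    return [[lut[v] for v in row] for row in image_rows]
```

    On hardware with a writable colour table, only the 256 LUT entries change and the frame buffer is left untouched, which is what makes real-time modification possible.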

  7. Learning Receptive Fields and Quality Lookups for Blind Quality Assessment of Stereoscopic Images.

    PubMed

    Shao, Feng; Lin, Weisi; Wang, Shanshan; Jiang, Gangyi; Yu, Mei; Dai, Qionghai

    2016-03-01

    Blind quality assessment of 3D images encounters more new challenges than its 2D counterpart. In this paper, we propose a blind quality assessment method for stereoscopic images that learns the characteristics of receptive fields (RFs) from the perspective of dictionary learning, and constructs quality lookups to replace human opinion scores without performance loss. The important feature of the proposed method is that we do not need a large set of samples of distorted stereoscopic images and the corresponding human opinion scores to learn a regression model. To be more specific, in the training phase, we learn local RFs (LRFs) and global RFs (GRFs) from the reference and distorted stereoscopic images, respectively, and construct their corresponding local quality lookups (LQLs) and global quality lookups (GQLs). In the testing phase, blind quality pooling can be easily achieved by searching for the optimal GRF and LRF indexes in the learnt GQLs and LQLs, and the quality score is obtained by combining the LRF and GRF indexes together. Experimental results on three public 3D image quality assessment databases demonstrate that, in comparison with the existing methods, the devised algorithm achieves highly consistent alignment with subjective assessment.

  8. Efficient generation of 3D hologram for American Sign Language using look-up table

    NASA Astrophysics Data System (ADS)

    Park, Joo-Sup; Kim, Seung-Cheol; Kim, Eun-Soo

    2010-02-01

    American Sign Language (ASL) is one of the languages most helpful for communication with hearing-impaired people. Current 2-D broadcasts and 2-D movies use ASL to convey information, help viewers understand the situation in a scene, and translate foreign languages. ASL will not disappear from future three-dimensional (3-D) broadcasting or 3-D movies because of its usefulness. Several approaches for generating computer-generated hologram (CGH) patterns have been suggested, such as the ray-tracing method and the look-up table (LUT) method. However, these methods have drawbacks: they either require much computation time or a huge memory for the look-up table. Recently, a novel LUT (N-LUT) method was proposed for fast generation of CGH patterns of 3-D objects with a dramatically reduced LUT size and no loss of computational speed. We therefore propose a method to efficiently generate holographic ASL in holographic 3DTV or 3-D movies using the look-up table method. The proposed method consists largely of five steps: construction of the LUT for the ASL images; extraction of characters from scripts or the situation; retrieval of the fringe patterns for those characters from the ASL LUT; composition of the hologram pattern for the 3-D video with the hologram pattern for the ASL; and reconstruction of the holographic 3-D video with ASL. Simulation results confirmed the feasibility of the proposed method for efficient generation of CGH patterns for ASL.

  9. Generation of Look-Up Tables for Dynamic Job Shop Scheduling Decision Support Tool

    NASA Astrophysics Data System (ADS)

    Oktaviandri, Muchamad; Hassan, Adnan; Mohd Shaharoun, Awaluddin

    2016-02-01

    The majority of existing scheduling techniques are based on static demand and deterministic processing time, while most job shop scheduling problems involve dynamic demand and stochastic processing time. As a consequence, the solutions obtained from traditional scheduling techniques are ineffective whenever changes occur in the system. Therefore, this research intends to develop a decision support tool (DST), based on promising artificial intelligence techniques, that is able to accommodate the dynamics that regularly occur in job shop scheduling problems. The DST was designed through three phases, i.e. (i) look-up table generation, (ii) inverse model development and (iii) integration of the DST components. This paper reports the generation of look-up tables for various scenarios as a part of the development of the DST. A discrete event simulation model was used to compare the performance of the SPT, EDD, FCFS, S/OPN and Slack rules; the best performance measures (mean flow time, mean tardiness and mean lateness) and the job order requirements (inter-arrival time, due date tightness and setup time ratio) were compiled into look-up tables. The well-known 6/6/J/Cmax problem from Muth and Thompson (1963) was used as a case study. In the future, the performance measures of various scheduling scenarios and the job order requirements will be mapped using an ANN inverse model.
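
    How one row of such a look-up table might be produced can be illustrated with a toy single-machine comparison of two of the dispatching rules named above (the job data are invented; the actual study uses a discrete event simulation of a job shop):

```python
def mean_flow_time(processing_times):
    """Mean flow time when jobs run back-to-back in the given order."""
    clock, total = 0, 0
    for p in processing_times:
        clock += p           # this job completes at the running clock
        total += clock
    return total / len(processing_times)

jobs = [5, 1, 3]             # processing times in arrival order
row = {
    "FCFS": mean_flow_time(jobs),          # first come, first served
    "SPT": mean_flow_time(sorted(jobs)),   # shortest processing time
}
```

    Repeating this over many simulated scenarios and rules, and recording the best-performing rule per scenario, yields the kind of look-up rows the DST consults.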

  10. 40 CFR Table Nn-2 to Subpart Hh of... - Lookup Default Values for Calculation Methodology 2 of This Subpart

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... PROTECTION AGENCY (CONTINUED) AIR PROGRAMS (CONTINUED) MANDATORY GREENHOUSE GAS REPORTING Municipal Solid Waste Landfills Pt. 98, Subpt. NN, Table NN-2 Table NN-2 to Subpart HH of Part 98—Lookup Default...

  11. 40 CFR Table Nn-2 to Subpart Hh of... - Lookup Default Values for Calculation Methodology 2 of This Subpart

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... PROTECTION AGENCY (CONTINUED) AIR PROGRAMS (CONTINUED) MANDATORY GREENHOUSE GAS REPORTING Municipal Solid Waste Landfills Pt. 98, Subpt. NN, Table NN-2 Table NN-2 to Subpart HH of Part 98—Lookup Default...

  12. Ontology Mappings to Improve Learning Resource Search

    ERIC Educational Resources Information Center

    Gasevic, Dragan; Hatala, Marek

    2006-01-01

    This paper proposes an ontology mapping-based framework that allows searching for learning resources using multiple ontologies. The present applications of ontologies in e-learning use various ontologies (eg, domain, curriculum, context), but they do not give a solution on how to interoperate e-learning systems based on different ontologies. The…

  13. An Ontology for Software Engineering Education

    ERIC Educational Resources Information Center

    Ling, Thong Chee; Jusoh, Yusmadi Yah; Adbullah, Rusli; Alwi, Nor Hayati

    2013-01-01

    Software agents communicate using ontologies. It is important to build an ontology for a specific domain such as Software Engineering Education. Building an ontology from scratch is not only hard, but also incurs much time and cost. This study aims to propose an ontology through adaptation of an existing ontology which is originally built based on a…

  14. The ontology of biological taxa

    PubMed Central

    Schulz, Stefan; Stenzhorn, Holger; Boeker, Martin

    2008-01-01

    Motivation: The classification of biological entities in terms of species and taxa is an important endeavor in biology. Although a large number of statements encoded in current biomedical ontologies are taxon-dependent, there is no obvious or standard way of introducing taxon information into an integrative ontology architecture, supposedly because of ongoing controversies about the ontological nature of species and taxa. Results: In this article, we discuss different approaches to representing biological taxa using existing standards for biomedical ontologies such as the description logic OWL DL and the Open Biomedical Ontologies Relation Ontology. We demonstrate how hidden ambiguities of the species concept can be dealt with and existing controversies can be overcome. A novel approach is to envisage taxon information as qualities that inhere in biological organisms, organism parts and populations. Availability: The presented methodology has been implemented in the domain top-level ontology BioTop, openly accessible at http://purl.org/biotop. BioTop may help to improve the logical and ontological rigor of biomedical ontologies and further provides a clear architectural principle for dealing with biological taxon information. Contact: stschulz@uni-freiburg.de PMID:18586729

  15. Ontology through a Mindfulness Process

    ERIC Educational Resources Information Center

    Bearance, Deborah; Holmes, Kimberley

    2015-01-01

    Traditionally, when ontology is taught in a graduate studies course on social research, there is a tendency for this concept to be examined through the process of lectures and readings. Such an approach often leaves graduate students to grapple with a personal embodiment of this concept and to comprehend how ontology can ground their research.…

  16. An Ontological Approach to Education.

    ERIC Educational Resources Information Center

    Hyde, Bruce

    Distinguishing the contextual realm addressed by ontological education as a valid area for inquiry by those who think about language and communication, this paper discusses an approach to education that is ontological in nature, in that its focus is the "being" of human beings rather than their knowledge. The paper explores several ideas…

  18. Ontological turns, turnoffs and roundabouts.

    PubMed

    Sismondo, Sergio

    2015-06-01

    There has been much talk of an 'ontological turn' in Science and Technology Studies. This commentary explores some recent work on multiple and historical ontologies, especially articles published in this journal, against a background of constructivism. It can be tempting to read an ontological turn as based on and promoting a version of perspectivism, but that reading is inadequate to the scholarly work and opens multiple ontologies to serious criticisms. Instead, we should read our ontological turn or turns as being about multiplicities of practices and the ways in which these practices shape the material world. Ontologies arise out of practices through which people engage with things; the practices are fundamental and the ontologies derivative. The purchase in this move comes from the elucidating power of the verbs that scholars use to analyze relations of practices and objects, which turn out to be specific cases of constructivist verbs. The difference between this ontological turn and constructivist work in Science and Technology Studies appears to be a matter of emphases found useful for different purposes.

  19. Building Ontologies in DAML + OIL

    PubMed Central

    Wroe, Chris; Bechhofer, Sean; Lord, Phillip; Rector, Alan; Goble, Carole

    2003-01-01

    In this article we describe an approach to representing and building ontologies advocated by the Bioinformatics and Medical Informatics groups at the University of Manchester. The hand-crafting of ontologies offers an easy and rapid avenue to delivering ontologies, but experience has shown that such approaches are unsustainable. Description logic approaches have been shown to offer computational support for building sound, complete and logically consistent ontologies. A new knowledge representation language, DAML + OIL, offers a new standard that is able to support many styles of ontology, from hand-crafted to full logic-based descriptions with reasoning support. We describe this language, the OilEd editing tool, reasoning support and a strategy for the language's use. We finish with a current example, from the Gene Ontology Next Generation (GONG) project, that uses DAML + OIL as the basis for moving the Gene Ontology from its current hand-crafted form to one that uses logical descriptions of a concept's properties to deliver a more complete version of the ontology. PMID:18629114

  20. GeoSciGraph: An Ontological Framework for EarthCube Semantic Infrastructure

    NASA Astrophysics Data System (ADS)

    Gupta, A.; Schachne, A.; Condit, C.; Valentine, D.; Richard, S.; Zaslavsky, I.

    2015-12-01

    The CINERGI (Community Inventory of EarthCube Resources for Geosciences Interoperability) project compiles an inventory of a wide variety of earth science resources including documents, catalogs, vocabularies, data models, data services, process models, information repositories, domain-specific ontologies, etc., developed by research groups and data practitioners. We have developed a multidisciplinary semantic framework called GeoSciGraph for the semantic integration of earth science resources. An integrated ontology is constructed with Basic Formal Ontology (BFO) as its upper ontology; it currently ingests multiple component ontologies including the SWEET ontology, GeoSciML's lithology ontology, the Tematres controlled vocabulary server, GeoNames, GCMD vocabularies on equipment, platforms and institutions, a software ontology, the CUAHSI hydrology vocabulary, the environmental ontology (ENVO) and several more. These ontologies are connected through bridging axioms; GeoSciGraph identifies lexically close terms and creates equivalence-class or subclass relationships between them after human verification. GeoSciGraph allows a community to create community-specific customizations of the integrated ontology. GeoSciGraph uses Neo4j, a graph database that can hold several billion concepts and relationships. GeoSciGraph provides a number of REST services that can be called by other software modules like the CINERGI information augmentation pipeline. 1) Vocabulary services are used to find exact and approximate terms, term categories (community-provided clusters of terms, e.g., measurement-related terms or environmental-material-related terms), synonyms, term definitions and annotations. 2) Lexical services are used for text parsing to find entities, which can then be included in the ontology by a domain expert. 3) Graph services provide the ability to perform traversal-centric operations, e.g., finding paths and neighborhoods, which can be used to perform ontological operations like

  1. Ontology-based geospatial data query and integration

    USGS Publications Warehouse

    Zhao, T.; Zhang, C.; Wei, M.; Peng, Z.-R.

    2008-01-01

    Geospatial data sharing is an increasingly important subject as large amounts of data are produced by a variety of sources, stored in incompatible formats, and accessible through different GIS applications. Past efforts to enable sharing have produced standardized data formats such as GML and data access protocols such as the Web Feature Service (WFS). While these standards help enable client applications to gain access to heterogeneous data stored in different formats from diverse sources, the usability of that access is limited due to the lack of data semantics encoded in the WFS feature types. Past research has used ontology languages to describe the semantics of geospatial data, but ontology-based queries cannot be applied directly to legacy data stored in databases or shapefiles, or to feature data in WFS services. This paper presents a method to enable ontology queries on spatial data available from WFS services and on data stored in databases. We do not create ontology instances explicitly and thus avoid the problems of data replication. Instead, user queries are rewritten as WFS getFeature requests and SQL queries to the database. The method also has the benefit of being able to utilize existing tools for databases, WFS, and GML while enabling queries based on ontology semantics. © 2008 Springer-Verlag Berlin Heidelberg.
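
    The query-rewriting idea can be illustrated with a minimal sketch; the mapping, class and property names below are invented for illustration, and the paper likewise rewrites queries into WFS getFeature requests as well as SQL:

```python
# Hypothetical mapping from ontology classes/properties to relational
# tables/columns; rewriting queries against this mapping avoids
# materializing ontology instances (and thus data replication).
MAPPING = {
    "hydro:GaugingStation": ("stations", {
        "hydro:stationName": "name",
        "hydro:riverBasin": "basin",
    }),
}

def rewrite(onto_class, onto_prop, value):
    """Rewrite a simple ontology-level equality query into SQL."""
    table, props = MAPPING[onto_class]
    return f"SELECT * FROM {table} WHERE {props[onto_prop]} = '{value}'"
```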

  2. Keyword Ontology Development for Discovering Hydrologic Data

    NASA Astrophysics Data System (ADS)

    Piasecki, Michael; Hooper, Rick; Choi, Yoori

    2010-05-01

    Service (USGS) National Water Information System (NWIS) and the Environmental Protection Agency's STORET data system. To avoid overwhelming returns when searching for more general concepts, the ontology's upper layers (called navigation layers) cannot be used to search for data; this in turn prompts the need to identify general groupings of data, such as Biological, Chemical, or Physical data groups, which must then be subdivided in a cascading fashion all the way down to the leaf levels. This classification is not straightforward, however, and leaves much room for discussion. Finally, it is important to decide on the dimensionality of the ontology, i.e., whether a keyword contains only the property measured (e.g., "temperature") or the medium and the property ("air temperature").
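    The cascading hierarchy with non-searchable navigation layers can be sketched as a tree in which only leaf concepts match searches; every term below is an invented example, not taken from the actual keyword ontology.

```python
# Upper keys are navigation layers; leaves (empty dicts) are searchable terms.
TREE = {
    "Physical": {"Temperature": {"air temperature": {}, "water temperature": {}}},
    "Chemical": {"Nutrient": {"nitrate": {}, "phosphate": {}}},
}

def searchable_terms(tree):
    """Collect leaf concepts; interior nodes are navigation-only."""
    leaves = []
    for term, children in tree.items():
        leaves.extend(searchable_terms(children) if children else [term])
    return leaves

def search(tree, query):
    """Match only against leaf terms, never navigation layers."""
    return [t for t in searchable_terms(tree) if query in t]
```

A search for a navigation term such as "Physical" returns nothing, which is exactly the behavior used to avoid overwhelming result sets.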

  3. An improved lookup protocol model for peer-to-peer networks

    NASA Astrophysics Data System (ADS)

    Fan, Wei; Ye, Dongfen

    2011-12-01

    With the development of peer-to-peer (P2P) technology, file sharing is becoming the hottest, fastest growing application on the Internet. Although we can benefit from different protocols separately, our research shows that if a proper model exists, most of the seemingly different protocols can be classified into the same framework. In this paper, we propose an improved Chord lookup algorithm based on a binary tree for P2P networks. We perform extensive simulations to study the proposed protocol. The results show that the improved Chord reduces the average lookup path length without increasing the join and departure complexity.
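    As context for the "lookup path length" being measured, here is a plain-Python sketch of baseline Chord routing with finger tables over an invented 6-bit identifier ring; the paper's binary-tree improvement is not reproduced here.

```python
M = 6                                    # identifier bits; ring of size 2**M = 64
NODES = [1, 8, 14, 21, 32, 38, 42, 48, 51, 56]   # invented node IDs, sorted

def successor(key):
    """First node clockwise from `key` (inclusive)."""
    for n in NODES:
        if n >= key:
            return n
    return NODES[0]                      # wrap around the ring

def in_half_open(x, a, b):
    """x in (a, b] on the ring."""
    return a < x <= b if a < b else x > a or x <= b

def in_open(x, a, b):
    """x in (a, b) on the ring."""
    return a < x < b if a < b else x > a or x < b

def finger_table(n):
    return [successor((n + 2 ** i) % 2 ** M) for i in range(M)]

def lookup(start, key):
    """Route toward successor(key); returns (owner node, hop count)."""
    node, hops = start, 0
    while True:
        succ = successor((node + 1) % 2 ** M)
        if in_half_open(key, node, succ):
            return succ, hops
        nxt = succ                       # fall back to the plain successor
        for f in reversed(finger_table(node)):
            if in_open(f, node, key):    # closest preceding finger of key
                nxt = f
                break
        node, hops = nxt, hops + 1
```

The hop count returned by `lookup` is the quantity the improved protocol aims to reduce.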

  4. A VLSI architecture for performing finite field arithmetic with reduced table look-up

    NASA Technical Reports Server (NTRS)

    Hsu, I. S.; Truong, T. K.; Reed, I. S.

    1986-01-01

    A new table look-up method for finding the log and antilog of finite field elements has been developed by N. Glover. In his method, the log and antilog of a field element are found by the use of several smaller tables. The method is based on the Chinese Remainder Theorem, and the technique often results in a significant reduction in the memory requirements of the problem. A VLSI architecture is developed for a special case of this new algorithm to perform finite field arithmetic, including multiplication, division, and the finding of an inverse element in the finite field.
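    The underlying log/antilog idea can be sketched at small scale for GF(2^4) with the primitive polynomial x^4 + x + 1; this illustrates table-based multiplication and inversion only, not Glover's Chinese-Remainder-Theorem decomposition into several smaller tables.

```python
POLY = 0b10011              # x^4 + x + 1, primitive over GF(2)

EXP = [0] * 15              # EXP[i] = alpha**i  (antilog table)
LOG = [0] * 16              # LOG[x] = i with alpha**i == x (x != 0)
x = 1
for i in range(15):
    EXP[i] = x
    LOG[x] = i
    x <<= 1                 # multiply by alpha
    if x & 0b10000:
        x ^= POLY           # reduce modulo the field polynomial

def gf_mul(a, b):
    """Multiply field elements via one log/antilog round trip."""
    if a == 0 or b == 0:
        return 0
    return EXP[(LOG[a] + LOG[b]) % 15]

def gf_inv(a):
    """Inverse via the log table: alpha**(-i) = alpha**(15 - i)."""
    return EXP[(15 - LOG[a]) % 15]
```

Division is then `gf_mul(a, gf_inv(b))`, i.e., a subtraction of logs, which is why log/antilog tables cover all three operations the architecture supports.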

  5. Spatial frequency sampling look-up table method for computer-generated hologram

    NASA Astrophysics Data System (ADS)

    Zhao, Kai; Huang, Yingqing; Jiang, Xiaoyu; Yan, Xingpeng

    2016-04-01

    A spatial frequency sampling look-up table method is proposed to generate a hologram. The three-dimensional (3-D) scene is sampled as several intensity images by computer rendering. Each object point on the rendered images has a defined spatial frequency. The basis terms for calculating fringe patterns are precomputed and stored in a table to improve the calculation speed. Both numerical simulations and optical experiments are performed. The results show that the proposed approach can easily realize color reconstructions of a 3-D scene at low computation cost, while occlusion effects and depth information are both reproduced accurately.

  7. The Cell Ontology 2016: enhanced content, modularization, and ontology interoperability.

    PubMed

    Diehl, Alexander D; Meehan, Terrence F; Bradford, Yvonne M; Brush, Matthew H; Dahdul, Wasila M; Dougall, David S; He, Yongqun; Osumi-Sutherland, David; Ruttenberg, Alan; Sarntivijai, Sirarat; Van Slyke, Ceri E; Vasilevsky, Nicole A; Haendel, Melissa A; Blake, Judith A; Mungall, Christopher J

    2016-07-04

    The Cell Ontology (CL) is an OBO Foundry candidate ontology covering the domain of canonical, natural biological cell types. Since its inception in 2005, the CL has undergone multiple rounds of revision and expansion, most notably in its representation of hematopoietic cells. For in vivo cells, the CL focuses on vertebrates but provides general classes that can be used for other metazoans, which can be subtyped in species-specific ontologies. Recent work on the CL has focused on extending the representation of various cell types, and developing new modules in the CL itself, and in related ontologies in coordination with the CL. For example, the Kidney and Urinary Pathway Ontology was used as a template to populate the CL with additional cell types. In addition, subtypes of the class 'cell in vitro' have received improved definitions and labels to provide for modularity with the representation of cells in the Cell Line Ontology and Reagent Ontology. Recent changes in the ontology development methodology for CL include a switch from OBO to OWL for the primary encoding of the ontology, and an increasing reliance on logical definitions for improved reasoning. The CL is now mandated as a metadata standard for large functional genomics and transcriptomics projects, and is used extensively for annotation, querying, and analyses of cell type specific data in sequencing consortia such as FANTOM5 and ENCODE, as well as for the NIAID ImmPort database and the Cell Image Library. The CL is also a vital component used in the modular construction of other biomedical ontologies-for example, the Gene Ontology and the cross-species anatomy ontology, Uberon, use CL to support the consistent representation of cell types across different levels of anatomical granularity, such as tissues and organs. 
The ongoing improvements to the CL make it a valuable resource to both the OBO Foundry community and the wider scientific community, and we continue to experience increased interest in the

  8. The Cell Ontology 2016: enhanced content, modularization, and ontology interoperability

    SciTech Connect

    Diehl, Alexander D.; Meehan, Terrence F.; Bradford, Yvonne M.; Brush, Matthew H.; Dahdul, Wasila M.; Dougall, David S.; He, Yongqun; Osumi-Sutherland, David; Ruttenberg, Alan; Sarntivijai, Sirarat; Van Slyke, Ceri E.; Vasilevsky, Nicole A.; Haendel, Melissa A.; Blake, Judith A.; Mungall, Christopher J.

    2016-07-04

    Background: The Cell Ontology (CL) is an OBO Foundry candidate ontology covering the domain of canonical, natural biological cell types. Since its inception in 2005, the CL has undergone multiple rounds of revision and expansion, most notably in its representation of hematopoietic cells. For in vivo cells, the CL focuses on vertebrates but provides general classes that can be used for other metazoans, which can be subtyped in species-specific ontologies. Construction and content: Recent work on the CL has focused on extending the representation of various cell types, and developing new modules in the CL itself, and in related ontologies in coordination with the CL. For example, the Kidney and Urinary Pathway Ontology was used as a template to populate the CL with additional cell types. In addition, subtypes of the class 'cell in vitro' have received improved definitions and labels to provide for modularity with the representation of cells in the Cell Line Ontology and Reagent Ontology. Recent changes in the ontology development methodology for CL include a switch from OBO to OWL for the primary encoding of the ontology, and an increasing reliance on logical definitions for improved reasoning. Utility and discussion: The CL is now mandated as a metadata standard for large functional genomics and transcriptomics projects, and is used extensively for annotation, querying, and analyses of cell type specific data in sequencing consortia such as FANTOM5 and ENCODE, as well as for the NIAID ImmPort database and the Cell Image Library. The CL is also a vital component used in the modular construction of other biomedical ontologies-for example, the Gene Ontology and the cross-species anatomy ontology, Uberon, use CL to support the consistent representation of cell types across different levels of anatomical granularity, such as tissues and organs. Conclusions: The ongoing improvements to the CL make it a valuable resource to both the OBO Foundry community and the wider

  9. Ontology-Based Approach to Social Data Sentiment Analysis: Detection of Adolescent Depression Signals

    PubMed Central

    Jung, Hyesil; Song, Tae-Min

    2017-01-01

    Background Social networking services (SNSs) contain abundant information about the feelings, thoughts, interests, and patterns of behavior of adolescents that can be obtained by analyzing SNS postings. An ontology that expresses the shared concepts and their relationships in a specific field could be used as a semantic framework for social media data analytics. Objective The aim of this study was to refine an adolescent depression ontology and terminology as a framework for analyzing social media data and to evaluate description logics between classes and the applicability of this ontology to sentiment analysis. Methods The domain and scope of the ontology were defined using competency questions. The concepts constituting the ontology and terminology were collected from clinical practice guidelines, the literature, and social media postings on adolescent depression. Class concepts, their hierarchy, and the relationships among class concepts were defined. An internal structure of the ontology was designed using the entity-attribute-value (EAV) triplet data model, and superclasses of the ontology were aligned with the upper ontology. Description logics between classes were evaluated by mapping concepts extracted from the answers to frequently asked questions (FAQs) onto the ontology concepts derived from description logic queries. The applicability of the ontology was validated by examining the representability of 1358 sentiment phrases using the ontology EAV model and conducting sentiment analyses of social media data using ontology class concepts. Results We developed an adolescent depression ontology that comprised 443 classes and 60 relationships among the classes; the terminology comprised 1682 synonyms of the 443 classes. In the description logics test, no error in relationships between classes was found, and about 89% (55/62) of the concepts cited in the answers to FAQs mapped onto the ontology class. Regarding applicability, the EAV triplet models of the

  10. Ontology-Based Approach to Social Data Sentiment Analysis: Detection of Adolescent Depression Signals.

    PubMed

    Jung, Hyesil; Park, Hyeoun-Ae; Song, Tae-Min

    2017-07-24

    Social networking services (SNSs) contain abundant information about the feelings, thoughts, interests, and patterns of behavior of adolescents that can be obtained by analyzing SNS postings. An ontology that expresses the shared concepts and their relationships in a specific field could be used as a semantic framework for social media data analytics. The aim of this study was to refine an adolescent depression ontology and terminology as a framework for analyzing social media data and to evaluate description logics between classes and the applicability of this ontology to sentiment analysis. The domain and scope of the ontology were defined using competency questions. The concepts constituting the ontology and terminology were collected from clinical practice guidelines, the literature, and social media postings on adolescent depression. Class concepts, their hierarchy, and the relationships among class concepts were defined. An internal structure of the ontology was designed using the entity-attribute-value (EAV) triplet data model, and superclasses of the ontology were aligned with the upper ontology. Description logics between classes were evaluated by mapping concepts extracted from the answers to frequently asked questions (FAQs) onto the ontology concepts derived from description logic queries. The applicability of the ontology was validated by examining the representability of 1358 sentiment phrases using the ontology EAV model and conducting sentiment analyses of social media data using ontology class concepts. We developed an adolescent depression ontology that comprised 443 classes and 60 relationships among the classes; the terminology comprised 1682 synonyms of the 443 classes. In the description logics test, no error in relationships between classes was found, and about 89% (55/62) of the concepts cited in the answers to FAQs mapped onto the ontology class. Regarding applicability, the EAV triplet models of the ontology class represented about 91
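    The entity-attribute-value (EAV) triplet structure described above can be sketched as follows; the class names, synonyms, and posting text are invented examples, not terms from the actual ontology.

```python
# EAV triples: (entity, attribute, value).
triples = [
    ("DepressedMood", "is_a", "EmotionalSymptom"),
    ("DepressedMood", "synonym", "feeling down"),
    ("SleepProblem", "is_a", "PhysicalSymptom"),
    ("SleepProblem", "synonym", "can't sleep"),
]

def values(entity, attribute):
    """All values recorded for one entity-attribute pair."""
    return [v for e, a, v in triples if e == entity and a == attribute]

def match_posting(text):
    """Return ontology entities whose synonyms appear in a posting."""
    hits = set()
    for e, a, v in triples:
        if a == "synonym" and v in text.lower():
            hits.add(e)
    return hits
```

Sentiment analysis of social media data then reduces to matching postings against the synonym values and aggregating over the matched classes.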

  11. Gene Ontology Consortium: going forward.

    PubMed

    2015-01-01

    The Gene Ontology (GO; http://www.geneontology.org) is a community-based bioinformatics resource that supplies information about gene product function using ontologies to represent biological knowledge. Here we describe improvements and expansions to several branches of the ontology, as well as updates that have allowed us to more efficiently disseminate the GO and capture feedback from the research community. The Gene Ontology Consortium (GOC) has expanded areas of the ontology such as cilia-related terms, cell-cycle terms and multicellular organism processes. We have also implemented new tools for generating ontology terms based on a set of logical rules making use of templates, and we have made efforts to increase our use of logical definitions. The GOC has a new and improved web site summarizing new developments and documentation, serving as a portal to GO data. Users can perform GO enrichment analysis, and search the GO for terms, annotations to gene products, and associated metadata across multiple species using the all-new AmiGO 2 browser. We encourage and welcome the input of the research community in all biological areas in our continued effort to improve the Gene Ontology. © The Author(s) 2014. Published by Oxford University Press on behalf of Nucleic Acids Research.

  12. Gene Ontology Consortium: going forward

    PubMed Central

    2015-01-01

    The Gene Ontology (GO; http://www.geneontology.org) is a community-based bioinformatics resource that supplies information about gene product function using ontologies to represent biological knowledge. Here we describe improvements and expansions to several branches of the ontology, as well as updates that have allowed us to more efficiently disseminate the GO and capture feedback from the research community. The Gene Ontology Consortium (GOC) has expanded areas of the ontology such as cilia-related terms, cell-cycle terms and multicellular organism processes. We have also implemented new tools for generating ontology terms based on a set of logical rules making use of templates, and we have made efforts to increase our use of logical definitions. The GOC has a new and improved web site summarizing new developments and documentation, serving as a portal to GO data. Users can perform GO enrichment analysis, and search the GO for terms, annotations to gene products, and associated metadata across multiple species using the all-new AmiGO 2 browser. We encourage and welcome the input of the research community in all biological areas in our continued effort to improve the Gene Ontology. PMID:25428369

  13. Cosmology and Ontology

    NASA Astrophysics Data System (ADS)

    Grujic, P. V.

    2008-10-01

    Cosmos poses unique problems to its investigation, from both the epistemological and the ontological aspects. We analyze modern cosmology as the science of the totality of material reality, with emphasis on the physical content of the principal entities involved in describing the Universe as we perceive it. In particular, we examine the concepts of creation and annihilation and argue that these notions, if relevant, are devoid of meaningful content. If applicable, the notion of evolution refers to a transition from physical field entities towards inert matter components. We discuss the meaning of the existential quantifier and show that cosmology is essentially a historical science. Finally, we consider the interplay between the epistemological and phenomenological aspects, arguing that in cosmology it is the former that one may rely on.

  14. On the look-up tables for the critical heat flux in tubes (history and problems)

    SciTech Connect

    Kirillov, P.L.; Smogalev, I.P.

    1995-09-01

    The complexity of the critical heat flux (CHF) problem for boiling in channels is caused by the large number of variable factors and the variety of two-phase flows. The existence of several hundred correlations for the prediction of CHF demonstrates the unsatisfactory state of this problem. Phenomenological CHF models can provide only qualitative predictions of CHF, primarily in annular-dispersed flow. CHF look-up tables, which cover the results of numerous experiments, have received more recognition in the last 15 years. These tables are based on the statistical averaging of CHF values for each range of pressure, mass flux and quality. CHF values for regions where no experimental data are available are obtained by extrapolation. Correcting these tables to account for the diameter effect is a complicated problem, and there are ranges of conditions where simple correlations cannot produce reliable results; the diameter effect on CHF therefore needs additional study. Modifying the look-up table data for CHF in tubes to predict CHF in rod bundles must include a method to take into account the nonuniformity of quality over a rod-bundle cross section.
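    The look-up-table approach amounts to interpolation between tabulated CHF values. A one-dimensional sketch along the quality axis (at one fixed pressure and mass flux) is shown below; the CHF numbers are invented for illustration, and a real table interpolates over pressure and mass flux as well.

```python
from bisect import bisect_left

# Illustrative CHF values (kW/m^2) tabulated against quality at one fixed
# pressure and mass flux; the numbers are invented, not from the tables.
QUALITY = [0.0, 0.2, 0.4, 0.6, 0.8]
CHF     = [4000, 3200, 2400, 1500, 700]

def chf_lookup(x):
    """Piecewise-linear interpolation between tabulated qualities."""
    if x <= QUALITY[0]:
        return CHF[0]
    if x >= QUALITY[-1]:
        return CHF[-1]
    i = bisect_left(QUALITY, x)
    x0, x1 = QUALITY[i - 1], QUALITY[i]
    y0, y1 = CHF[i - 1], CHF[i]
    return y0 + (y1 - y0) * (x - x0) / (x1 - x0)
```

Extrapolation beyond the tabulated range (here simply clamped) is exactly where the abstract notes the tables are least reliable.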

  15. Lookup-table method for imaging optical properties with structured illumination beyond the diffusion theory regime

    PubMed Central

    Erickson, Tim A.; Mazhar, Amaan; Cuccia, David; Durkin, Anthony J.; Tunnell, James W.

    2010-01-01

    Sinusoidally structured illumination is used in concert with a phantom-based lookup-table (LUT) to map wide-field optical properties in turbid media with reduced albedos as low as 0.44. A key advantage of the lookup-table approach is the ability to measure the absorption (μa) and reduced scattering coefficients (μs′) over a much broader range of values than permitted by current diffusion theory methods. Through calibration with a single reflectance standard, the LUT can extract μs′ from 0.8 to 2.4 mm−1 with an average root-mean-square (rms) error of 7% and extract μa from 0 to 1.0 mm−1 with an average rms error of 6%. The LUT is based solely on measurements of two parameters, reflectance R and modulation M at an illumination period of 10 mm. A single set of three phase-shifted images is sufficient to measure both M and R, which are then used to generate maps of absorption and scattering by referencing the LUT. We establish empirically that each pair (M,R) maps uniquely to only one pair of (μs′,μa) and report that the phase function (i.e., size) of the scatterers can influence the accuracy of optical property extraction. PMID:20615015
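    The inverse-lookup idea, i.e., that each measured (M, R) pair maps uniquely back to one (μs′, μa) pair, can be sketched with a nearest-neighbor search in a precomputed forward table. The forward model below is a made-up monotonic stand-in, not diffusion theory or the paper's phantom calibration.

```python
def forward(mus, mua):
    """Invented stand-in mapping optical properties to (M, R)."""
    r = mus / (mus + 8 * mua + 0.5)      # pretend reflectance
    m = mus / (mus + 2 * mua + 2.0)      # pretend modulation
    return m, r

# Forward table over mu_s' in 0.8..2.4 mm^-1 and mu_a in 0.0..1.0 mm^-1,
# the ranges quoted in the abstract.
TABLE = [
    (mus / 10, mua / 20, *forward(mus / 10, mua / 20))
    for mus in range(8, 25)
    for mua in range(0, 21)
]

def invert(m_meas, r_meas):
    """Nearest tabulated (M, R) wins; returns (mu_s', mu_a)."""
    best = min(TABLE,
               key=lambda row: (row[2] - m_meas) ** 2 + (row[3] - r_meas) ** 2)
    return best[0], best[1]
```

A practical LUT would interpolate between grid points rather than snap to the nearest one, but the structure of the inversion is the same.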

  16. Fast thumbnail generation for MPEG video by using a multiple-symbol lookup table

    NASA Astrophysics Data System (ADS)

    Kim, Myounghoon; Lee, Hoonjae; Yoon, Ja-Cheon; Kim, Hyeokman; Sull, Sanghoon

    2009-03-01

    A novel method using a multiple-symbol lookup table (mLUT) is proposed to fast-skip the AC coefficients (codewords) not needed to construct a DC image from MPEG-1/2 video streams, resulting in fast thumbnail generation. For MPEG-1/2 video streams, thumbnail generation schemes usually extract DC images directly in the compressed domain, where a DC image is constructed from a DC coefficient and a few AC coefficients among the discrete cosine transform (DCT) coefficients. However, all codewords for DCT coefficients must be fully decoded, whether they are needed or not, since the bit length of a codeword coded with variable-length coding (VLC) cannot be determined until the previous VLC codeword has been decoded. Thus, a method using an mLUT designed to fast-skip the DCT coefficients unnecessary for constructing a DC image is proposed, significantly reducing the number of table lookups (LUT count) for variable-length decoding of codewords. Experimental results show that the proposed method significantly improves performance by reducing the LUT count by 50%.
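    The multiple-symbol lookup idea can be illustrated with a toy prefix-free code: one table access on a W-bit window decodes every complete codeword inside the window, instead of one codeword per access. The code table below is an invented example, not the MPEG VLC tables.

```python
CODE = {"0": "a", "10": "b", "11": "c"}   # toy prefix-free codewords
W = 4                                     # lookup window width in bits

def build_mlut():
    """For every W-bit window, precompute the codewords that fit entirely."""
    lut = {}
    for idx in range(2 ** W):
        bits = format(idx, f"0{W}b")
        syms, pos = [], 0
        while True:
            for cw, sym in CODE.items():
                if bits.startswith(cw, pos):
                    syms.append(sym)
                    pos += len(cw)
                    break
            else:
                break                     # remaining bits start a split codeword
        lut[bits] = (tuple(syms), pos)    # symbols found, bits consumed
    return lut

MLUT = build_mlut()

def decode(bits):
    out, pos = [], 0
    while len(bits) - pos >= W:           # one table access per window
        syms, used = MLUT[bits[pos:pos + W]]
        out.extend(syms)
        pos += used                       # used < W when a codeword straddles
    while pos < len(bits):                # tail: one codeword at a time
        for cw, sym in CODE.items():
            if bits.startswith(cw, pos):
                out.append(sym)
                pos += len(cw)
                break
        else:
            break                         # malformed tail
    return out
```

With two or three symbols recovered per window, the number of table accesses drops well below one per codeword, which is the effect the paper quantifies as a 50% LUT-count reduction.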

  17. A region segmentation based algorithm for building a crystal position lookup table in a scintillation detector

    NASA Astrophysics Data System (ADS)

    Wang, Hai-Peng; Yun, Ming-Kai; Liu, Shuang-Quan; Fan, Xin; Cao, Xue-Xiang; Chai, Pei; Shan, Bao-Ci

    2015-03-01

    In a scintillation detector, scintillation crystals are typically made into a 2-dimensional modular array. The location of an incident gamma ray needs to be calibrated due to spatial response nonlinearity. Generally, position histograms, the characteristic flood response of scintillation detectors, are used for position calibration. In this paper, a position calibration method is proposed based on a crystal position lookup table that maps the inaccurate location calculated by Anger logic to the exact crystal of interaction. Firstly, the position histogram is preprocessed, e.g., by noise reduction and image enhancement. Then the processed position histogram is segmented into disconnected regions, and crystal marking points are labeled by finding the centroids of the regions. Finally, crystal boundaries are determined and the crystal position lookup table is generated. The scheme is evaluated on the whole-body positron emission tomography (PET) scanner and the breast-dedicated single photon emission computed tomography scanner developed by the Institute of High Energy Physics, Chinese Academy of Sciences. The results demonstrate that the algorithm is accurate, efficient, robust, and applicable to any configuration of scintillation detector. Supported by National Natural Science Foundation of China (81101175) and XIE Jia-Lin Foundation of Institute of High Energy Physics (Y3546360U2)
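    The segment-then-assign pipeline can be sketched on a tiny synthetic position histogram: flood-fill the bright regions, take each region's centroid as a crystal marker, and assign every pixel to its nearest centroid to form the lookup table. The grid and the nearest-centroid assignment are simplifications of the paper's boundary determination.

```python
# Synthetic binarized position histogram: four bright blobs = four crystals.
GRID = [
    "11000011",
    "11000011",
    "00000000",
    "11000011",
    "11000011",
]

def centroids(grid):
    """4-connected flood fill; returns one (row, col) centroid per region."""
    h, w = len(grid), len(grid[0])
    seen, cents = set(), []
    for y in range(h):
        for x in range(w):
            if grid[y][x] == "1" and (y, x) not in seen:
                stack, pix = [(y, x)], []
                seen.add((y, x))
                while stack:
                    cy, cx = stack.pop()
                    pix.append((cy, cx))
                    for ny, nx in ((cy+1, cx), (cy-1, cx), (cy, cx+1), (cy, cx-1)):
                        if 0 <= ny < h and 0 <= nx < w \
                                and grid[ny][nx] == "1" and (ny, nx) not in seen:
                            seen.add((ny, nx))
                            stack.append((ny, nx))
                cents.append((sum(p[0] for p in pix) / len(pix),
                              sum(p[1] for p in pix) / len(pix)))
    return cents

def build_lut(grid):
    """Map every histogram pixel to the index of its nearest crystal marker."""
    cents = centroids(grid)
    h, w = len(grid), len(grid[0])
    return [[min(range(len(cents)),
                 key=lambda i: (cents[i][0] - y) ** 2 + (cents[i][1] - x) ** 2)
             for x in range(w)] for y in range(h)]
```

At run time, an Anger-logic position is simply indexed into the table, so the per-event cost of calibration is a single lookup.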

  18. Integrated data lookup and replication scheme in mobile ad hoc networks

    NASA Astrophysics Data System (ADS)

    Chen, Kai; Nahrstedt, Klara

    2001-11-01

    Accessing remote data is a challenging task in mobile ad hoc networks. Two problems have to be solved: (1) how to learn about available data in the network; and (2) how to access desired data even when the original copy of the data is unreachable. In this paper, we develop an integrated data lookup and replication scheme to solve these problems. In our scheme, a group of mobile nodes collectively host a set of data to improve data accessibility for all members of the group. They exchange data availability information by broadcasting advertising (ad) messages to the group using an adaptive sending rate policy. The ad messages are used by other nodes to derive a local data lookup table, and to reduce data redundancy within a connected group. Our data replication scheme predicts group partitioning based on each node's current location and movement patterns, and replicates data to other partitions before partitioning occurs. Our simulations show that data availability information can quickly propagate throughout the network, and that the successful data access ratio of each node is significantly improved.
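    The first half of the scheme, deriving a local data lookup table from received ad messages, can be sketched as follows; node and data identifiers are invented examples.

```python
# data_id -> set of nodes known (from ad broadcasts) to host a copy.
local_table = {}

def on_ad(sender, data_ids):
    """Merge one received ad message into the local lookup table."""
    for d in data_ids:
        local_table.setdefault(d, set()).add(sender)

# Two ads arriving from different group members:
on_ad("node-3", ["map.tile.17", "sensor.log.2"])
on_ad("node-7", ["sensor.log.2"])
```

Knowing several hosts per data item is what lets a node keep accessing data after the original copy's partition becomes unreachable.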

  19. An extended lookup table of cloud detection for MTSAT-1R

    NASA Astrophysics Data System (ADS)

    Chen, Wuhan; Zhong, Bo; Li, Weisheng; Wu, Shanlong; Yu, Shanshan

    2014-11-01

    Cloud detection is a key task in the estimation of solar radiation from remote sensing. In particular, the detection of thin cirrus cloud and of the edges of thicker cloud is critical and difficult. To obtain accurate estimates of cloud cover from MTSAT-1R imagery, we propose an effective cloud detection algorithm that improves the detection of thin cirrus cloud and the edges of thicker cloud. Using the brightness temperature difference (BTD) and a lookup table to identify cloud-free and cloud-filled pixels is not sufficient for MTSAT-1R data over China. Therefore, a new lookup table (LUT) is made by extending the original one. Building on the existing method, and in order to apply it to MTSAT-1R satellite data over China, we expand the latitude range, extend the applicable range of the satellite zenith angle, and change the interpolation method from linear to nonlinear. The evaluation results indicate that the proposed method is effective for detecting cirrus and the edges of thicker cloud in MTSAT-1R imagery over China.

  20. OBIB-a novel ontology for biobanking.

    PubMed

    Brochhausen, Mathias; Zheng, Jie; Birtwell, David; Williams, Heather; Masci, Anna Maria; Ellis, Helena Judge; Stoeckert, Christian J

    2016-01-01

    Biobanking necessitates extensive integration of data to allow data analysis and specimen sharing. Ontologies have been demonstrated to be a promising approach to fostering better semantic integration of biobank-related data. Hitherto, no ontology provided the coverage needed to capture a broad spectrum of biobank user scenarios. Based on the principles laid out by the Open Biological and Biomedical Ontologies (OBO) Foundry, two biobanking ontologies have been developed. These two ontologies were merged using a modular approach consistent with the initial development principles. The merging was facilitated by the fact that both ontologies use the same upper ontology and re-use classes from a similar set of pre-existing ontologies. Based on the two previous ontologies, the Ontology for Biobanking (http://purl.obolibrary.org/obo/obib.owl) was created. Because there was no overlap between the two source ontologies, the coverage of the resulting ontology is significantly larger than that of either source ontology. The ontology is successfully used in managing biobank information at the Penn Medicine BioBank. Sharing development principles and upper ontologies facilitates the subsequent merging of ontologies to achieve broader coverage.

  1. Ontology Research and Development. Part 2 - A Review of Ontology Mapping and Evolving.

    ERIC Educational Resources Information Center

    Ding, Ying; Foo, Schubert

    2002-01-01

    Reviews ontology research and development, specifically ontology mapping and evolving. Highlights include an overview of ontology mapping projects; maintaining existing ontologies and extending them as appropriate when new information or knowledge is acquired; and ontology's role and the future of the World Wide Web, or Semantic Web. (Contains 55…

  2. Developing a Modular Hydrogeology Ontology Extending the SWEET Ontologies

    NASA Astrophysics Data System (ADS)

    Tripathi, A.; Babaie, H. A.

    2005-12-01

    Reengineering upper-level ontologies to make them useful for specific domains can be achieved using modular software development techniques. The challenge of manipulating complex, general upper-level ontologies can be overcome by using ontology development tools for the analysis and design of new concepts and the extension of existing ones. As a use case representing this approach, we present the reengineering of NASA's Semantic Web for Earth and Environmental Terminology (SWEET) ontologies to include part of the hydrogeology concepts. We have maintained the modular design of the SWEET ontologies for maximum extensibility and reusability. The modular reengineering of the SWEET ontologies to include the hydrogeology domain involved the following steps: (1) Identify the terms and concepts relevant to the hydrogeology domain through scenarios, competency questions, and interviews with domain experts. (2) Establish the inter-relationships between concepts (e.g., vadose zone = unsaturated zone). (3) Identify the dependent concepts, such as physical properties or units, and determine their relationships to external concepts. (4) Download the OWL files from SWEET and save them on local systems for editing. (5) Use ontology editing tools like SWOOP and Protege to analyze the structure of the existing OWL files. (6) Add new domain concepts as new classes in the OWL files, or as subclasses of already existing classes in the SWEET ontologies. This step involved changing relationships (properties) and/or adding new relationships where the domain required them; sometimes the entire structure of the existing concepts needed to be changed to represent a domain concept more meaningfully. (7) Test the consistency of concepts using appropriate tools (e.g., Protege, which uses the Racer reasoner to check concept consistency). (8) Add individuals to the new concepts to test the modified ontologies.
We present an example of a simple RDQL query to test

  3. The Cell Ontology 2016: enhanced content, modularization, and ontology interoperability

    DOE PAGES

    Diehl, Alexander D.; Meehan, Terrence F.; Bradford, Yvonne M.; ...

    2016-07-04

    Background: The Cell Ontology (CL) is an OBO Foundry candidate ontology covering the domain of canonical, natural biological cell types. Since its inception in 2005, the CL has undergone multiple rounds of revision and expansion, most notably in its representation of hematopoietic cells. For in vivo cells, the CL focuses on vertebrates but provides general classes that can be used for other metazoans, which can be subtyped in species-specific ontologies. Construction and content: Recent work on the CL has focused on extending the representation of various cell types, and developing new modules in the CL itself, and in related ontologies in coordination with the CL. For example, the Kidney and Urinary Pathway Ontology was used as a template to populate the CL with additional cell types. In addition, subtypes of the class 'cell in vitro' have received improved definitions and labels to provide for modularity with the representation of cells in the Cell Line Ontology and Reagent Ontology. Recent changes in the ontology development methodology for CL include a switch from OBO to OWL for the primary encoding of the ontology, and an increasing reliance on logical definitions for improved reasoning. Utility and discussion: The CL is now mandated as a metadata standard for large functional genomics and transcriptomics projects, and is used extensively for annotation, querying, and analyses of cell type specific data in sequencing consortia such as FANTOM5 and ENCODE, as well as for the NIAID ImmPort database and the Cell Image Library. The CL is also a vital component used in the modular construction of other biomedical ontologies-for example, the Gene Ontology and the cross-species anatomy ontology, Uberon, use CL to support the consistent representation of cell types across different levels of anatomical granularity, such as tissues and organs. Conclusions: The ongoing improvements to the CL make it a valuable resource to both the OBO Foundry community and the

  4. Ontology-Oriented Programming for Biomedical Informatics.

    PubMed

    Lamy, Jean-Baptiste

    2016-01-01

    Ontologies are now widely used in the biomedical domain. However, it is difficult to manipulate ontologies in a computer program, and consequently it is not easy to integrate ontologies with databases or websites. Two main approaches have been proposed for accessing ontologies in a computer program: a traditional API (Application Programming Interface) and ontology-oriented programming, either static or dynamic. In this paper, we will review these approaches and discuss their appropriateness for biomedical ontologies. We will also present feedback from our experience integrating an ontology into computer software during the VIIIP research project. Finally, we will present OwlReady, the solution we developed.
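    The dynamic ontology-oriented style can be sketched generically: ontology classes are turned into Python classes at runtime, so instances are handled with ordinary attribute syntax. The tiny "ontology" below is hard-coded and its names are invented; this is an illustration of the programming style, not of OwlReady's actual API, which loads real OWL files.

```python
# Invented mini-ontology: class name -> superclass and declared properties.
ONTOLOGY = {
    "Drug": {"is_a": "Thing", "properties": ["has_active_ingredient"]},
    "Antibiotic": {"is_a": "Drug", "properties": []},
}

def materialize(onto):
    """Create a Python class per ontology class, preserving the hierarchy."""
    classes = {"Thing": object}
    for name, spec in onto.items():
        base = classes.get(spec["is_a"], object)
        classes[name] = type(name, (base,), {"properties": spec["properties"]})
    return classes

cls = materialize(ONTOLOGY)
amoxicillin = cls["Antibiotic"]()
amoxicillin.has_active_ingredient = "amoxicillin trihydrate"
```

Because the ontology hierarchy becomes a real class hierarchy, `isinstance` and `issubclass` checks follow the ontology's is-a relations for free.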

  5. Multifunctional crop trait ontology for breeders' data: field book, annotation, data discovery and semantic enrichment of the literature

    PubMed Central

    Shrestha, Rosemary; Arnaud, Elizabeth; Mauleon, Ramil; Senger, Martin; Davenport, Guy F.; Hancock, David; Morrison, Norman; Bruskiewich, Richard; McLaren, Graham

    2010-01-01

    Background and aims Agricultural crop databases maintained in gene banks of the Consultative Group on International Agricultural Research (CGIAR) are valuable sources of information for breeders. These databases provide comparative phenotypic and genotypic information that can help elucidate functional aspects of plant and agricultural biology. To facilitate data sharing within and between these databases and the retrieval of information, the crop ontology (CO) database was designed to provide controlled vocabulary sets for several economically important plant species. Methodology Existing public ontologies and equivalent catalogues of concepts covering the range of crop science information and descriptors for crops and crop-related traits were collected from breeders, physiologists, agronomists, and researchers in the CGIAR consortium. For each crop, relationships between terms were identified and crop-specific trait ontologies were constructed following the Open Biomedical Ontologies (OBO) format standard using the OBO-Edit tool. All terms within an ontology were assigned a globally unique CO term identifier. Principal results The CO currently comprises crop-specific traits for chickpea (Cicer arietinum), maize (Zea mays), potato (Solanum tuberosum), rice (Oryza sativa), sorghum (Sorghum spp.) and wheat (Triticum spp.). Several plant-structure and anatomy-related terms for banana (Musa spp.), wheat and maize are also included. In addition, multi-crop passport terms are included as controlled vocabularies for sharing information on germplasm. Two web-based online resources were built to make these COs available to the scientific community: the ‘CO Lookup Service’ for browsing the CO; and the ‘Crops Terminizer’, an ontology text mark-up tool. Conclusions The controlled vocabularies of the CO are being used to curate several CGIAR centres' agronomic databases. The use of ontology terms to describe agronomic phenotypes and the accurate mapping of these

  6. The ACGT Master Ontology and its applications--towards an ontology-driven cancer research and management system.

    PubMed

    Brochhausen, Mathias; Spear, Andrew D; Cocos, Cristian; Weiler, Gabriele; Martín, Luis; Anguita, Alberto; Stenzhorn, Holger; Daskalaki, Evangelia; Schera, Fatima; Schwarz, Ulf; Sfakianakis, Stelios; Kiefer, Stephan; Dörr, Martin; Graf, Norbert; Tsiknakis, Manolis

    2011-02-01

    This paper introduces the objectives, methods and results of ontology development in the EU co-funded project Advancing Clinico-genomic Trials on Cancer-Open Grid Services for Improving Medical Knowledge Discovery (ACGT). While the available data in the life sciences has recently grown both in amount and quality, the full exploitation of it is being hindered by the use of different underlying technologies, coding systems, category schemes and reporting methods on the part of different research groups. The goal of the ACGT project is to contribute to the resolution of these problems by developing an ontology-driven, semantic grid services infrastructure that will enable efficient execution of discovery-driven scientific workflows in the context of multi-centric, post-genomic clinical trials. The focus of the present paper is the ACGT Master Ontology (MO). ACGT project researchers undertook a systematic review of existing domain and upper-level ontologies, as well as of existing ontology design software, implementation methods, and end-user interfaces. This included the careful study of best practices, design principles and evaluation methods for ontology design, maintenance, implementation, and versioning, as well as for use on the part of domain experts and clinicians. To date, the results of the ACGT project include (i) the development of a master ontology (the ACGT-MO) based on clearly defined principles of ontology development and evaluation; (ii) the development of a technical infrastructure (the ACGT Platform) that implements the ACGT-MO utilizing independent tools, components and resources that have been developed based on open architectural standards, and which includes an application updating and evolving the ontology efficiently in response to end-user needs; and (iii) the development of an Ontology-based Trial Management Application (ObTiMA) that integrates the ACGT-MO into the design process of clinical trials in order to guarantee automatic semantic

  7. A look-up table based approach to characterize crystal twinning for synchrotron X-ray Laue microdiffraction scans

    PubMed Central

    Li, Yao; Wan, Liang; Chen, Kai

    2015-01-01

    An automated method has been developed to characterize the type and spatial distribution of twinning in crystal orientation maps from synchrotron X-ray Laue microdiffraction results. The method relies on a look-up table approach. Taking into account the twin axis and twin plane for plausible rotation and reflection twins, respectively, and the point group symmetry operations for a specific crystal, a look-up table listing crystal-specific rotation angle–axis pairs, which reveal the orientation relationship between the twin and the parent lattice, is generated. By comparing these theoretical twin–parent orientation relationships in the look-up table with the measured misorientations, twin boundaries are mapped automatically from Laue microdiffraction raster scans with thousands of data points. Taking advantage of the high orientation resolution of the Laue microdiffraction method, this automated approach is also applicable to differentiating twinning elements among multiple twinning modes in any crystal system. PMID:26089764
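
    The core of the look-up table comparison can be sketched as follows: compute the misorientation angle between the twin and parent orientation matrices and match it against tabulated twin relationships. This is a minimal sketch assuming NumPy; the full method also checks symmetry-equivalent variants via the crystal's point-group operations and verifies the rotation axis, both omitted here, and the table entry is illustrative.

```python
import numpy as np

def axis_angle_matrix(axis, theta_deg):
    """Rotation matrix from a unit axis and an angle (Rodrigues' formula)."""
    k = np.asarray(axis, float)
    k /= np.linalg.norm(k)
    t = np.radians(theta_deg)
    K = np.array([[0, -k[2], k[1]],
                  [k[2], 0, -k[0]],
                  [-k[1], k[0], 0]])
    return np.eye(3) * np.cos(t) + np.sin(t) * K + (1 - np.cos(t)) * np.outer(k, k)

def misorientation_angle(Ra, Rb):
    """Rotation angle (degrees) between two orientation matrices."""
    dR = Ra @ Rb.T
    c = (np.trace(dR) - 1.0) / 2.0
    return np.degrees(np.arccos(np.clip(c, -1.0, 1.0)))

# One-entry illustrative look-up table: the 60 deg <111> twin of cubic crystals.
LOOKUP = [("sigma-3 twin", 60.0)]

parent = np.eye(3)
twin = axis_angle_matrix([1, 1, 1], 60.0)
angle = misorientation_angle(twin, parent)
matches = [name for name, ref in LOOKUP if abs(angle - ref) < 0.5]
print(matches)  # the 60 deg misorientation matches the sigma-3 entry
```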

  8. A look-up table based approach to characterize crystal twinning for synchrotron X-ray Laue microdiffraction scans

    SciTech Connect

    Li, Yao; Wan, Liang; Chen, Kai

    2015-04-25

    An automated method has been developed to characterize the type and spatial distribution of twinning in crystal orientation maps from synchrotron X-ray Laue microdiffraction results. The method relies on a look-up table approach. Taking into account the twin axis and twin plane for plausible rotation and reflection twins, respectively, and the point group symmetry operations for a specific crystal, a look-up table listing crystal-specific rotation angle–axis pairs, which reveal the orientation relationship between the twin and the parent lattice, is generated. By comparing these theoretical twin–parent orientation relationships in the look-up table with the measured misorientations, twin boundaries are mapped automatically from Laue microdiffraction raster scans with thousands of data points. Finally, taking advantage of the high orientation resolution of the Laue microdiffraction method, this automated approach is also applicable to differentiating twinning elements among multiple twinning modes in any crystal system.

  9. A look-up table based approach to characterize crystal twinning for synchrotron X-ray Laue microdiffraction scans

    DOE PAGES

    Li, Yao; Wan, Liang; Chen, Kai

    2015-04-25

    An automated method has been developed to characterize the type and spatial distribution of twinning in crystal orientation maps from synchrotron X-ray Laue microdiffraction results. The method relies on a look-up table approach. Taking into account the twin axis and twin plane for plausible rotation and reflection twins, respectively, and the point group symmetry operations for a specific crystal, a look-up table listing crystal-specific rotation angle–axis pairs, which reveal the orientation relationship between the twin and the parent lattice, is generated. By comparing these theoretical twin–parent orientation relationships in the look-up table with the measured misorientations, twin boundaries are mapped automatically from Laue microdiffraction raster scans with thousands of data points. Finally, taking advantage of the high orientation resolution of the Laue microdiffraction method, this automated approach is also applicable to differentiating twinning elements among multiple twinning modes in any crystal system.

  10. An IPv6 routing lookup algorithm using weight-balanced tree based on prefix value for virtual router

    NASA Astrophysics Data System (ADS)

    Chen, Lingjiang; Zhou, Shuguang; Zhang, Qiaoduo; Li, Fenghua

    2016-10-01

    Virtual router enables the coexistence of different networks on the same physical facility and has lately attracted a great deal of attention from researchers. As the number of IPv6 addresses is rapidly increasing in virtual routers, designing an efficient IPv6 routing lookup algorithm is of great importance. In this paper, we present an IPv6 lookup algorithm called weight-balanced tree (WBT). WBT merges the Forwarding Information Bases (FIBs) of virtual routers into one spanning tree and compresses the space cost. The average and worst-case time complexities of WBT's lookup and update operations are both O(log N), and its space complexity is O(cN), where N is the size of the routing table and c is a constant. Experiments show that WBT reduces Static Random Access Memory (SRAM) cost by more than 80% in comparison with schemes that store each FIB separately. WBT also achieves the smallest average search depth compared with other homogeneous algorithms.
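
    The lookup primitive that structures such as WBT accelerate is longest-prefix matching over a FIB. Below is a minimal binary-trie sketch of that primitive only; the weight-balanced organization and FIB merging of WBT are not reproduced, and the prefixes are illustrative bit strings.

```python
class PrefixTrie:
    """Minimal binary trie for longest-prefix matching over bit-string prefixes."""

    def __init__(self):
        self.root = {}

    def insert(self, prefix_bits, next_hop):
        node = self.root
        for b in prefix_bits:
            node = node.setdefault(b, {})
        node["hop"] = next_hop  # route stored at the end of the prefix path

    def lookup(self, addr_bits):
        """Walk the address bits, remembering the deepest matching route."""
        node, best = self.root, None
        for b in addr_bits:
            if "hop" in node:
                best = node["hop"]
            if b not in node:
                break
            node = node[b]
        else:
            best = node.get("hop", best)
        return best

fib = PrefixTrie()
fib.insert("10", "if0")    # short, less specific prefix
fib.insert("1011", "if1")  # longer, more specific prefix
print(fib.lookup("101101"))  # longest match wins: if1
```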

  11. A look-up table based approach to characterize crystal twinning for synchrotron X-ray Laue microdiffraction scans.

    PubMed

    Li, Yao; Wan, Liang; Chen, Kai

    2015-06-01

    An automated method has been developed to characterize the type and spatial distribution of twinning in crystal orientation maps from synchrotron X-ray Laue microdiffraction results. The method relies on a look-up table approach. Taking into account the twin axis and twin plane for plausible rotation and reflection twins, respectively, and the point group symmetry operations for a specific crystal, a look-up table listing crystal-specific rotation angle-axis pairs, which reveal the orientation relationship between the twin and the parent lattice, is generated. By comparing these theoretical twin-parent orientation relationships in the look-up table with the measured misorientations, twin boundaries are mapped automatically from Laue microdiffraction raster scans with thousands of data points. Taking advantage of the high orientation resolution of the Laue microdiffraction method, this automated approach is also applicable to differentiating twinning elements among multiple twinning modes in any crystal system.

  12. An Ontology Infrastructure for an E-Learning Scenario

    ERIC Educational Resources Information Center

    Guo, Wen-Ying; Chen, De-Ren

    2007-01-01

    Selecting appropriate learning services for a learner from a large number of heterogeneous knowledge sources is a complex and challenging task. This article illustrates and discusses how Semantic Web technologies such as RDF [resource description framework] and ontology can be applied to e-learning systems to help the learner in selecting an…

  14. Modeling Tools and Ontology Development

    NASA Astrophysics Data System (ADS)

    Gaševic, Dragan; Djuric, Dragan; Devedžic, Vladan

    In this chapter, we give a few tutorials on how to use some of the currently available UML tools to create ontologies using the Ontology UML Profile. We also provide a brief overview of the Atlas Transformation Language (ATL) and its tooling support. Here we will show how ATL can be used for transforming ODM, and thus continue our discussion about transformations provided in Chap. 10.

  15. BioPortal: ontologies and integrated data resources at the click of a mouse.

    PubMed

    Noy, Natalya F; Shah, Nigam H; Whetzel, Patricia L; Dai, Benjamin; Dorf, Michael; Griffith, Nicholas; Jonquet, Clement; Rubin, Daniel L; Storey, Margaret-Anne; Chute, Christopher G; Musen, Mark A

    2009-07-01

    Biomedical ontologies provide essential domain knowledge to drive data integration, information retrieval, data annotation, natural-language processing and decision support. BioPortal (http://bioportal.bioontology.org) is an open repository of biomedical ontologies that provides access via Web services and Web browsers to ontologies developed in OWL, RDF, OBO format and Protégé frames. BioPortal functionality includes the ability to browse, search and visualize ontologies. The Web interface also facilitates community-based participation in the evaluation and evolution of ontology content by providing features to add notes to ontology terms, mappings between terms and ontology reviews based on criteria such as usability, domain coverage, quality of content, and documentation and support. BioPortal also enables integrated search of biomedical data resources such as the Gene Expression Omnibus (GEO), ClinicalTrials.gov, and ArrayExpress, through the annotation and indexing of these resources with ontologies in BioPortal. Thus, BioPortal not only provides investigators, clinicians, and developers 'one-stop shopping' to programmatically access biomedical ontologies, but also provides support to integrate data from a variety of biomedical resources.

  16. Approach for ontological modeling of database schema for the generation of semantic knowledge on the web

    NASA Astrophysics Data System (ADS)

    Rozeva, Anna

    2015-11-01

    Currently a large quantity of content on web pages is generated from relational databases. Conceptual domain models provide for the integration of heterogeneous content on the semantic level. The use of an ontology as the conceptual model of relational data sources makes them available to web agents and services and provides for the employment of ontological techniques for data access, navigation and reasoning. Achieving interoperability between relational databases and ontologies enriches the web with semantic knowledge. Establishing a semantic database conceptual model based on ontology facilitates the development of data integration systems that use the ontology as a unified global view. An approach for generating an ontologically based conceptual model is presented. The ontology representing the database schema is obtained by matching schema elements to ontology concepts. An algorithm for the matching process is designed. An infrastructure for the inclusion of mediation between database and ontology, bridging legacy data with formal semantic meaning, is presented. The knowledge modeling approach is implemented on a sample database.
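
    The schema-to-ontology matching step can be sketched as: each table becomes a class, each column a datatype property, and each row an individual. Plain tuples stand in for RDF triples here; the table, the vocabulary terms and the generated URIs are all illustrative, not the paper's actual mapping rules.

```python
def schema_to_ontology(table, columns, rows):
    """Map a relational table to ontology-style triples.

    table -> a class, columns -> datatype properties, rows -> individuals.
    Plain (subject, predicate, object) tuples stand in for RDF triples.
    """
    triples = {(table, "rdf:type", "owl:Class")}
    for col in columns:
        triples.add((f"{table}.{col}", "rdf:type", "owl:DatatypeProperty"))
        triples.add((f"{table}.{col}", "rdfs:domain", table))
    for i, row in enumerate(rows):
        ind = f"{table}/{i}"  # invented individual identifier scheme
        triples.add((ind, "rdf:type", table))
        for col, value in zip(columns, row):
            triples.add((ind, f"{table}.{col}", value))
    return triples

# Hypothetical two-row table.
kb = schema_to_ontology("City", ["name", "population"],
                        [("Sofia", 1300000), ("Plovdiv", 350000)])
print(("City", "rdf:type", "owl:Class") in kb)  # True
```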

  17. How the gene ontology evolves.

    PubMed

    Leonelli, Sabina; Diehl, Alexander D; Christie, Karen R; Harris, Midori A; Lomax, Jane

    2011-08-05

    Maintaining a bio-ontology in the long term requires improving and updating its contents so that it adequately captures what is known about biological phenomena. This paper illustrates how these processes are carried out, by studying the ways in which curators at the Gene Ontology have hitherto incorporated new knowledge into their resource. Five types of circumstances are singled out as warranting changes in the ontology: (1) the emergence of anomalies within GO; (2) the extension of the scope of GO; (3) divergence in how terminology is used across user communities; (4) new discoveries that change the meaning of the terms used and their relations to each other; and (5) the extension of the range of relations used to link entities or processes described by GO terms. This study illustrates the difficulties involved in applying general standards to the development of a specific ontology. Ontology curation aims to produce a faithful representation of knowledge domains as they keep developing, which requires the translation of general guidelines into specific representations of reality and an understanding of how scientific knowledge is produced and constantly updated. In this context, it is important that trained curators with technical expertise in the scientific field(s) in question are involved in supervising ontology shifts and identifying inaccuracies.

  18. ``Force,'' ontology, and language

    NASA Astrophysics Data System (ADS)

    Brookes, David T.; Etkina, Eugenia

    2009-06-01

    We introduce a linguistic framework through which one can interpret systematically students’ understanding of and reasoning about force and motion. Some researchers have suggested that students have robust misconceptions or alternative frameworks grounded in everyday experience. Others have pointed out the inconsistency of students’ responses and presented a phenomenological explanation for what is observed, namely, knowledge in pieces. We wish to present a view that builds on and unifies aspects of this prior research. Our argument is that many students’ difficulties with force and motion are primarily due to a combination of linguistic and ontological difficulties. It is possible that students are primarily engaged in trying to define and categorize the meaning of the term “force” as spoken about by physicists. We found that this process of negotiation of meaning is remarkably similar to that engaged in by physicists in history. In this paper we will describe a study of the historical record that reveals an analogous process of meaning negotiation, spanning multiple centuries. Using methods from cognitive linguistics and systemic functional grammar, we will present an analysis of the force and motion literature, focusing on prior studies with interview data. We will then discuss the implications of our findings for physics instruction.

  19. ontologyX: a suite of R packages for working with ontological data.

    PubMed

    Greene, Daniel; Richardson, Sylvia; Turro, Ernest

    2017-01-05

    Ontologies are widely used constructs for encoding and analyzing biomedical data, but the absence of simple and consistent tools has made exploratory and systematic analysis of such data unnecessarily difficult. Here we present three packages which aim to simplify such procedures. The ontologyIndex package enables arbitrary ontologies to be read into R, supports representation of ontological objects by native R types, and provides a parsimonious set of performant functions for querying ontologies. ontologySimilarity and ontologyPlot extend ontologyIndex with functionality for straightforward visualization and semantic similarity calculations, including statistical routines.
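
    The kind of ancestor query such packages support can be illustrated with a short stand-in. The real packages are R; this Python sketch with invented term IDs shows only the underlying DAG traversal, not ontologyIndex's actual interface.

```python
def get_ancestors(parents, term):
    """All ancestors of `term` in an ontology DAG, given a term -> parents map.

    Iterative depth-first walk; handles multiple inheritance without
    revisiting nodes.
    """
    seen = set()
    stack = [term]
    while stack:
        for p in parents.get(stack.pop(), ()):
            if p not in seen:
                seen.add(p)
                stack.append(p)
    return seen

# Toy ontology fragment with multiple inheritance (term IDs are invented).
parents = {
    "HP:rare_arrhythmia": ["HP:arrhythmia", "HP:rare_disease"],
    "HP:arrhythmia": ["HP:cardiac_abnormality"],
    "HP:cardiac_abnormality": ["HP:phenotypic_abnormality"],
    "HP:rare_disease": ["HP:phenotypic_abnormality"],
}
anc = get_ancestors(parents, "HP:rare_arrhythmia")
print(sorted(anc))  # all four ancestors, the shared root counted once
```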

  20. Efficient Management of Biomedical Ontology Versions

    NASA Astrophysics Data System (ADS)

    Kirsten, Toralf; Hartung, Michael; Groß, Anika; Rahm, Erhard

    Ontologies have become very popular in life sciences and other domains. They mostly undergo continuous changes and new ontology versions are frequently released. However, current analysis studies do not consider the ontology changes reflected in different versions but typically limit themselves to a specific ontology version which may quickly become obsolete. To allow applications easy access to different ontology versions we propose a central and uniform management of the versions of different biomedical ontologies. The proposed database approach takes concept and structural changes of succeeding ontology versions into account thereby supporting different kinds of change analysis. Furthermore, it is very space-efficient by avoiding redundant storage of ontology components which remain unchanged in different versions. We evaluate the storage requirements and query performance of the proposed approach for the Gene Ontology.
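
    The space-saving idea, storing only what changed between succeeding versions, can be sketched as follows. The data and field names are illustrative, not the paper's actual database schema.

```python
def diff_versions(old, new):
    """Record only added, removed and changed concepts between two versions."""
    added = {k: v for k, v in new.items() if k not in old}
    removed = [k for k in old if k not in new]
    changed = {k: v for k, v in new.items() if k in old and old[k] != v}
    return {"added": added, "removed": removed, "changed": changed}

def apply_diff(old, delta):
    """Reconstruct the newer version from the older one plus the delta."""
    new = {k: v for k, v in old.items() if k not in delta["removed"]}
    new.update(delta["changed"])
    new.update(delta["added"])
    return new

# Two toy versions of a term -> definition table.
v1 = {"GO:1": "cell growth", "GO:2": "cell death"}
v2 = {"GO:1": "cell growth", "GO:2": "programmed cell death", "GO:3": "autophagy"}
delta = diff_versions(v1, v2)
print(apply_diff(v1, delta) == v2)  # True: unchanged terms are stored only once
```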

  1. FYPO: the fission yeast phenotype ontology.

    PubMed

    Harris, Midori A; Lock, Antonia; Bähler, Jürg; Oliver, Stephen G; Wood, Valerie

    2013-07-01

    To provide consistent computable descriptions of phenotype data, PomBase is developing a formal ontology of phenotypes observed in fission yeast. The fission yeast phenotype ontology (FYPO) is a modular ontology that uses several existing ontologies from the open biological and biomedical ontologies (OBO) collection as building blocks, including the phenotypic quality ontology PATO, the Gene Ontology and Chemical Entities of Biological Interest. Modular ontology development facilitates partially automated effective organization of detailed phenotype descriptions with complex relationships to each other and to underlying biological phenomena. As a result, FYPO supports sophisticated querying, computational analysis and comparison between different experiments and even between species. FYPO releases are available from the Subversion repository at the PomBase SourceForge project page (https://sourceforge.net/p/pombase/code/HEAD/tree/phenotype_ontology/). The current version of FYPO is also available on the OBO Foundry Web site (http://obofoundry.org/).

  2. A Lexical-Ontological Resource for Consumer Healthcare

    NASA Astrophysics Data System (ADS)

    Cardillo, Elena; Serafini, Luciano; Tamilin, Andrei

    In Consumer Healthcare Informatics it is still difficult for laypeople to find, understand and act on health information, due to the persistent communication gap between specialized medical terminology and that used by healthcare consumers. Furthermore, existing clinically-oriented terminologies cannot provide sufficient support when integrated into consumer-oriented applications, so there is a need to create consumer-friendly terminologies reflecting the different ways healthcare consumers express and think about health topics. Following this direction, this work suggests a way to support the design of an ontology-based system that mitigates this gap, using knowledge engineering and semantic web technologies. The system is based on the development of a consumer-oriented medical terminology that will be integrated with other medical domain ontologies and terminologies into a medical ontology repository. This will support consumer-oriented healthcare systems, such as Personal Health Records, by providing many knowledge services to help users in accessing and managing their healthcare data.

  3. Ontology and rules based model for traffic query

    NASA Astrophysics Data System (ADS)

    Cheng, Gang; Du, Qingyun; Huang, Qian; Zhao, Haiyun

    2008-10-01

    This paper combines ontology- and rule-based qualitative reasoning with real-time calculation to design a combined traffic model of national scope covering highways, railroads, water carriage, scheduled flights, etc. The method follows people's intuitive sense of space: it establishes ontology and rule knowledge bases, using the concepts, instances, relations and rules of the traffic field as the basic knowledge for qualitative reasoning to discover implicit semantic information and eliminate unnecessary ambiguities. The knowledge from the ontologies and rules provides abundant information for queries, which lightens the computational burden; meanwhile, real-time calculation guarantees the accuracy of the data. Together these raise the accuracy and efficiency of the query, strengthen the ease of use of the query service and improve web users' experience.

  4. Spatial Data Integration Using Ontology-Based Approach

    NASA Astrophysics Data System (ADS)

    Hasani, S.; Sadeghi-Niaraki, A.; Jelokhani-Niaraki, M.

    2015-12-01

    In today's world, the necessity of spatial data for various organizations is becoming so crucial that many of these organizations have begun to produce spatial data themselves. In some circumstances, the need to obtain real-time integrated data requires a sustainable mechanism for real-time integration. A case in point is disaster management, which requires obtaining real-time data from various sources of information. One of the problematic challenges in such situations is the high degree of heterogeneity between different organizations' data. To solve this issue, we introduce an ontology-based method to provide sharing and integration capabilities for the existing databases. In addition to resolving semantic heterogeneity, better access to information is also provided by our proposed method. Our approach consists of three steps. In the first step, the objects in a relational database are identified, the semantic relationships between them are modelled and, subsequently, the ontology of each database is created. In the second step, the resulting ontology is inserted into the database and the relationships of each ontology class are inserted into newly created columns in the database tables. The last step consists of a platform based on service-oriented architecture, which allows integration of data by using the concept of ontology mapping. The proposed approach, in addition to being fast and low cost, makes the process of data integration easy, and the data remain unchanged, thus taking advantage of the legacy applications provided.

  5. Research on land registration procedure ontology of China

    NASA Astrophysics Data System (ADS)

    Zhao, Zhongjun; Du, Qingyun; Zhang, Weiwei; Liu, Tao

    2009-10-01

    Land registration is a public act of recording the state-owned land use right, collective land ownership, collective land use right, land mortgage, servitude, and other land rights requiring registration under laws and regulations in land registration books. Land registration is one of the important government affairs, so it is very important to standardize, optimize and humanize its process. The management work of an organization is realized through a variety of workflows. Process knowledge is in essence a kind of methodological knowledge, a system comprising core and relational knowledge. In this paper, ontology is introduced into the field of land registration and management, aiming to optimize the flow of land registration, to promote automated and intelligent services for land registration affairs, and to provide humanized, intelligent service for multiple types of users. This paper builds a land registration procedure ontology by defining its key concepts, which represent the processes of land registration, and by mapping these processes to OWL-S. The land registration procedure ontology is intended as the starting point and basis of the Web service.

  6. An ontological knowledge framework for adaptive medical workflow.

    PubMed

    Dang, Jiangbo; Hedayati, Amir; Hampel, Ken; Toklu, Candemir

    2008-10-01

    As emerging technologies, the semantic Web and SOA (Service-Oriented Architecture) allow a BPMS (Business Process Management System) to automate business processes that can be described as services, which in turn can be used to wrap existing enterprise applications. BPMS provides tools and methodologies to compose Web services that can be executed as business processes and monitored by BPM (Business Process Management) consoles. Ontologies are formal declarative knowledge representation models. They provide a foundation upon which machine-understandable knowledge can be obtained and, as a result, make machine intelligence possible. Healthcare systems can adopt these technologies to become ubiquitous, adaptive, and intelligent, and thereby serve patients better. This paper presents an ontological knowledge framework that covers the healthcare domains a hospital encompasses, from medical and administrative tasks to hospital assets, medical insurance, patient records, drugs, and regulations. Our ontology thus makes our vision of personalized healthcare possible by capturing all the necessary knowledge for a complex personalized healthcare scenario involving patient care, insurance policies, drug prescriptions, and compliance. For example, our ontology enables a workflow management system to allow users, from physicians to administrative assistants, to manage, and even create, context-aware new medical workflows and execute them on the fly.

  7. Implementation of a fast digital optical matrix-vector multiplier using a holographic look-up table and residue arithmetic

    NASA Technical Reports Server (NTRS)

    Habiby, Sarry F.; Collins, Stuart A., Jr.

    1987-01-01

    The design and implementation of a digital (numerical) optical matrix-vector multiplier are presented. A Hughes liquid crystal light valve, the residue arithmetic representation, and a holographic optical memory are used to construct position coded optical look-up tables. All operations are performed in effectively one light valve response time with a potential for a high information density.
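
    Residue arithmetic lets each modulus channel be processed independently with small look-up tables, which is what makes a one-pass optical implementation feasible. Below is a sketch of a matrix-vector product carried out residue-wise and reconstructed via the Chinese Remainder Theorem; the moduli are illustrative, not those of the described system.

```python
from math import prod

MODULI = (7, 11, 13, 15)  # pairwise coprime; dynamic range = 7*11*13*15 = 15015

def to_residues(x):
    """Represent an integer by its residues modulo each channel."""
    return tuple(x % m for m in MODULI)

def from_residues(res):
    """Chinese Remainder Theorem reconstruction of the integer result."""
    M = prod(MODULI)
    x = 0
    for r, m in zip(res, MODULI):
        Mi = M // m
        x += r * Mi * pow(Mi, -1, m)  # pow(..., -1, m) is the modular inverse
    return x % M

def mat_vec(A, v):
    """Matrix-vector product computed channel-by-channel in residue arithmetic,
    then reconstructed; each channel's multiply/add is a small table lookup
    in the optical scheme."""
    out = []
    for row in A:
        acc = tuple(0 for _ in MODULI)
        for a, x in zip(row, v):
            term = tuple((ra * rx) % m
                         for ra, rx, m in zip(to_residues(a), to_residues(x), MODULI))
            acc = tuple((s + t) % m for s, t, m in zip(acc, term, MODULI))
        out.append(from_residues(acc))
    return out

A = [[3, 5], [2, 7]]
v = [4, 6]
print(mat_vec(A, v))  # [42, 50], matching ordinary arithmetic
```

    Because every channel works modulo a small number, no carries propagate between channels; the only global step is the final reconstruction.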

  8. How granularity issues concern biomedical ontology integration.

    PubMed

    Schulz, Stefan; Boeker, Martin; Stenzhorn, Holger

    2008-01-01

    The application of upper ontologies has been repeatedly advocated for supporting interoperability between domain ontologies in order to facilitate shared data use both within and across disciplines. We have developed BioTop as a top-domain ontology to integrate more specialized ontologies in the biomolecular and biomedical domain. In this paper, we report on concrete integration problems of this ontology with the domain-independent Basic Formal Ontology (BFO) concerning the issue of fiat and aggregated objects in the context of different granularity levels. We conclude that the third BFO level must be ignored in order not to obviate cross-granularity integration.

  9. Dynamic sub-ontology evolution for traditional Chinese medicine web ontology.

    PubMed

    Mao, Yuxin; Wu, Zhaohui; Tian, Wenya; Jiang, Xiaohong; Cheung, William K

    2008-10-01

    As a form of important domain knowledge, large-scale ontologies play a critical role in building a large variety of knowledge-based systems. To overcome the problem of semantic heterogeneity and encode domain knowledge in a reusable format, a large-scale and well-defined ontology is also required in the traditional Chinese medicine discipline. We argue that, to meet on-demand and scalability requirements, ontology-based systems should go beyond the use of a static ontology and be able to self-evolve and specialize for the domain knowledge they possess. In particular, we refer to the context-specific portions of large-scale ontologies like the traditional Chinese medicine ontology as sub-ontologies. Ontology-based systems are able to reuse sub-ontologies in a local repository called the ontology cache. In order to improve the overall performance of the ontology cache, we propose to evolve the sub-ontologies it holds so as to optimize their knowledge structure. Moreover, we present a sub-ontology evolution approach based on a genetic algorithm for reusing large-scale ontologies. We evaluate the proposed evolution approach with the traditional Chinese medicine ontology and obtain promising results.
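
    The genetic-algorithm flavor of sub-ontology cache optimization can be illustrated with a toy: evolve a bitmask choosing which concepts to keep in a fixed-size cache so as to maximize hit frequency. All numbers and parameters below are invented for illustration and are not the paper's actual algorithm or fitness function.

```python
import random

random.seed(0)  # deterministic toy run

# Hypothetical access frequency of eight concepts in a query log.
FREQ = [9, 1, 8, 2, 7, 3, 6, 4]
CACHE_SIZE = 4

def fitness(mask):
    """Total hits gained by cached concepts; over-capacity masks are invalid."""
    if sum(mask) > CACHE_SIZE:
        return 0
    return sum(f for f, bit in zip(FREQ, mask) if bit)

def evolve(pop_size=20, generations=40):
    pop = [[random.randint(0, 1) for _ in FREQ] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        survivors = pop[: pop_size // 2]          # elitist selection
        children = []
        for _ in range(pop_size - len(survivors)):
            a, b = random.sample(survivors, 2)
            cut = random.randrange(1, len(FREQ))  # one-point crossover
            child = a[:cut] + b[cut:]
            if random.random() < 0.2:             # occasional mutation
                i = random.randrange(len(FREQ))
                child[i] ^= 1
            children.append(child)
        pop = survivors + children
    return max(pop, key=fitness)

best = evolve()
print(fitness(best))  # the true optimum caches the four hottest concepts: 9+8+7+6 = 30
```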

  10. Hydrologic Ontology for the Web

    NASA Astrophysics Data System (ADS)

    Bermudez, L. E.; Piasecki, M.

    2003-12-01

    This poster presents the conceptual development of a Hydrologic Ontology for the Web (HOW) that will facilitate data sharing among the hydrologic community. Hydrologic data are difficult to share because of the predicted vast increase in data volume, the availability of new measurement technologies and the heterogeneity of the information systems used to produce, store, retrieve and use the data. The augmented capacity of the Internet and the technologies recommended by the W3C, as well as metadata standards, provide sophisticated means to make data more usable and systems more integrated. Standard metadata is commonly used to solve interoperability issues. For the hydrologic field an explicit metadata standard does not exist, but one could be created by extending metadata standards such as FGDC-STD-001-1998 or ISO 19115. Standard metadata defines a set of elements required to describe data in a consistent manner, and their domains are sometimes restricted by a finite set of values or controlled vocabulary (e.g. code lists in ISO/DIS 19115). This controlled vocabulary is domain specific, varying from one information community to another and allowing dissimilar descriptions of similar data sets. This issue is sometimes called semantic non-interoperability or semantic heterogeneity, and it is usually the main problem when sharing data. Explicit domain ontologies could be created to provide semantic interoperability among heterogeneous information communities. Domain ontologies supply the values for restricted domains of some elements in the metadata set and the semantic mapping with other domain ontologies. To achieve interoperability between applications that exchange machine-understandable information on the Web, metadata is expressed using the Resource Description Framework (RDF) and domain ontologies are expressed using the Ontology Web Language (OWL), which is also based on RDF. A specific OWL ontology for hydrology is HOW.
HOW presents, using a formal syntax, the
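The semantic-mapping role the abstract assigns to domain ontologies can be sketched in a few lines of Python. The vocabulary terms and the `hydro:` concept names below are invented for illustration, not drawn from HOW:

```python
# A minimal sketch of cross-vocabulary semantic mapping: two communities
# describe the same hydrologic concepts with different terms, and a mapping
# ontology resolves them to shared canonical concepts. All names are invented.

# A hand-built mapping: community-specific term -> shared (canonical) concept.
MAPPING = {
    "stream_stage": "hydro:WaterSurfaceElevation",
    "gage_height": "hydro:WaterSurfaceElevation",
    "water_level": "hydro:WaterSurfaceElevation",
    "discharge": "hydro:VolumetricFlowRate",
    "flow_rate": "hydro:VolumetricFlowRate",
}

def canonical(term):
    """Resolve a community-specific term to its shared concept, if mapped."""
    return MAPPING.get(term)

def interoperable(term_a, term_b):
    """Two descriptions are semantically interoperable if they resolve to the same concept."""
    return canonical(term_a) is not None and canonical(term_a) == canonical(term_b)

print(interoperable("stream_stage", "water_level"))  # True: same concept
print(interoperable("discharge", "water_level"))     # False: different concepts
```

In a real deployment the mapping would itself be published as OWL so that reasoners, rather than a lookup dictionary, derive the equivalences.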

  11. Tutorial on Protein Ontology Resources.

    PubMed

    Arighi, Cecilia N; Drabkin, Harold; Christie, Karen R; Ross, Karen E; Natale, Darren A

    2017-01-01

    The Protein Ontology (PRO) is the reference ontology for proteins in the Open Biomedical Ontologies (OBO) foundry and consists of three sub-ontologies representing protein classes of homologous genes, proteoforms (e.g., splice isoforms, sequence variants, and post-translationally modified forms), and protein complexes. PRO defines classes of proteins and protein complexes, both species-specific and species nonspecific, and indicates their relationships in a hierarchical framework, supporting accurate protein annotation at the appropriate level of granularity, analyses of protein conservation across species, and semantic reasoning. In the first section of this chapter, we describe the PRO framework including categories of PRO terms and the relationship of PRO to other ontologies and protein resources. Next, we provide a tutorial about the PRO website ( proconsortium.org ) where users can browse and search the PRO hierarchy, view reports on individual PRO terms, and visualize relationships among PRO terms in a hierarchical table view, a multiple sequence alignment view, and a Cytoscape network view. Finally, we describe several examples illustrating the unique and rich information available in PRO.
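As a rough illustration of the kind of hierarchy browsing the PRO website supports, the following Python sketch parses a tiny OBO-style fragment and walks its is_a parents. The stanza below is invented for illustration; real PRO terms differ:

```python
# Hypothetical sketch: parse a tiny OBO fragment and collect is_a ancestors.
OBO_TEXT = """\
[Term]
id: PR:000000001
name: protein

[Term]
id: PR:X00001
name: example kinase
is_a: PR:000000001

[Term]
id: PR:X00002
name: example kinase phosphorylated form
is_a: PR:X00001
"""

def parse_obo(text):
    """Minimal parser for [Term] stanzas: keeps id, name, and is_a parents."""
    terms = {}
    current = None
    for line in text.splitlines():
        line = line.strip()
        if line == "[Term]":
            current = {"is_a": []}
        elif current is not None and ": " in line:
            key, value = line.split(": ", 1)
            if key == "id":
                terms[value] = current
            elif key == "name":
                current["name"] = value
            elif key == "is_a":
                current["is_a"].append(value.split(" ! ")[0])
    return terms

def ancestors(terms, term_id):
    """All is_a ancestors of a term, by upward traversal."""
    seen = set()
    stack = list(terms[term_id]["is_a"])
    while stack:
        parent = stack.pop()
        if parent not in seen:
            seen.add(parent)
            stack.extend(terms[parent]["is_a"])
    return seen

terms = parse_obo(OBO_TEXT)
print(sorted(ancestors(terms, "PR:X00002")))  # ['PR:000000001', 'PR:X00001']
```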

  12. Ontologies for cancer nanotechnology research.

    PubMed

    Thomas, Dennis G; Pappu, Rohit V; Baker, Nathan A

    2009-01-01

    Cancer nanotechnology research data are diverse. Ontologies that provide a unifying knowledge framework for annotation of data are necessary to facilitate the sharing and semantic integration of data for advancing the research via informatics methods. In this work, we report the development of the NanoParticle Ontology (NPO) to support the terminological and informatics needs of cancer nanotechnology. The NPO is developed within the framework of the Basic Formal Ontology (BFO) using well-defined principles, and implemented in the Web Ontology Language (OWL). The NPO currently represents entities related to physical, chemical and functional descriptions of nanoparticles that are formulated and tested for applications in cancer diagnostics and therapeutics. Public releases of the NPO are available through the BioPortal web site, maintained by the National Center for Biomedical Ontology. Expansion of the scope and application of the NPO will depend on the needs of and feedback from the user community, and its adoption in nanoparticle database applications. As the NPO continues to grow, it will require a governance structure and a well-organized community effort for its maintenance, review and development.

  13. Ontology-based approach for managing personal health and wellness information.

    PubMed

    Sachinopoulou, Anna; Leppänen, Juha; Kaijanranta, Hannu; Lähteenmäki, Jaakko

    2007-01-01

    This paper describes a new approach for collecting and sharing personal health and wellness information. The approach is based on a Personal Health Record (PHR) including both clinical and non-clinical data. The PHR is located on a network server referred to as the Common Server. The overall service architecture for providing anonymous and private access to the PHR is described. Semantic interoperability is based on an ontology collection and the usage of OID (Object Identifier) codes. The formal (upper) ontology combines a set of domain ontologies representing different aspects of personal health and wellness. The ontology collection emphasizes wellness aspects, while clinical data are modelled by using OID references to existing vocabularies. The modular ontology approach enables distributed management and expansion of the data model.

  14. Complex Topographic Feature Ontology Patterns

    USGS Publications Warehouse

    Varanka, Dalia E.; Jerris, Thomas J.

    2015-01-01

    Semantic ontologies are examined as effective data models for the representation of complex topographic feature types. Complex feature types are viewed as integrated relations between basic features for a basic purpose. In the context of topographic science, such component assemblages are supported by resource systems and found on the local landscape. Ontologies are organized within six thematic modules of a domain ontology called Topography that includes within its sphere basic feature types, resource systems, and landscape types. Context is constructed not only as a spatial and temporal setting, but also as a setting based on environmental processes. Types of spatial relations that exist between components include location, generative processes, and description. An example is offered in the complex feature type ‘mine.’ The identification and extraction of complex feature types are areas for future research.

  15. The cellular microscopy phenotype ontology.

    PubMed

    Jupp, Simon; Malone, James; Burdett, Tony; Heriche, Jean-Karim; Williams, Eleanor; Ellenberg, Jan; Parkinson, Helen; Rustici, Gabriella

    2016-01-01

    Phenotypic data derived from high content screening is currently annotated using free-text, thus preventing the integration of independent datasets, including those generated in different biological domains, such as cell lines, mouse and human tissues. We present the Cellular Microscopy Phenotype Ontology (CMPO), a species neutral ontology for describing phenotypic observations relating to the whole cell, cellular components, cellular processes and cell populations. CMPO is compatible with related ontology efforts, allowing for future cross-species integration of phenotypic data. CMPO was developed following a curator-driven approach where phenotype data were annotated by expert biologists following the Entity-Quality (EQ) pattern. These EQs were subsequently transformed into new CMPO terms following an established post composition process. CMPO is currently being utilized to annotate phenotypes associated with high content screening datasets stored in several image repositories including the Image Data Repository (IDR), MitoSys project database and the Cellular Phenotype Database to facilitate data browsing and discoverability.
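The Entity-Quality post-composition step described above can be schematized in a few lines; the GO and PATO identifiers and labels below are invented for illustration, not real CMPO curation data:

```python
# Schematic sketch of EQ post-composition: curator annotations arrive as
# (Entity, Quality) pairs, and each pair yields the label a new pre-composed
# phenotype term would carry. IDs and labels are invented.

EQ_ANNOTATIONS = [
    ("GO:0005634 nucleus", "PATO:0000587 decreased size"),
    ("GO:0005737 cytoplasm", "PATO:0001997 fragmented"),
]

def post_compose(entity, quality):
    """Turn an EQ pair into a phenotype-term label."""
    e_label = entity.split(" ", 1)[1]
    q_label = quality.split(" ", 1)[1]
    return f"{e_label} {q_label} phenotype"

labels = [post_compose(e, q) for e, q in EQ_ANNOTATIONS]
print(labels)  # ['nucleus decreased size phenotype', 'cytoplasm fragmented phenotype']
```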

  16. Ontology Matching with Semantic Verification

    PubMed Central

    Jean-Mary, Yves R.; Shironoshita, E. Patrick; Kabuka, Mansur R.

    2009-01-01

    ASMOV (Automated Semantic Matching of Ontologies with Verification) is a novel algorithm that uses lexical and structural characteristics of two ontologies to iteratively calculate a similarity measure between them, derives an alignment, and then verifies it to ensure that it does not contain semantic inconsistencies. In this paper, we describe the ASMOV algorithm, and then present experimental results that measure its accuracy using the OAEI 2008 tests, and that evaluate its use with two different thesauri: WordNet, and the Unified Medical Language System (UMLS). These results show the increased accuracy obtained by combining lexical, structural and extensional matchers with semantic verification, and demonstrate the advantage of using a domain-specific thesaurus for the alignment of specialized ontologies. PMID:20186256
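The iterative mixing of lexical and structural similarity can be caricatured as follows. This is a toy sketch of the general idea, not the ASMOV algorithm itself; the two mini-ontologies and the mixing weight are invented:

```python
# Toy iterative ontology matcher: each round blends a lexical similarity with
# a structural component taken from the similarity of the concepts' parents.
from difflib import SequenceMatcher

# Two tiny ontologies: concept -> parent (None for the root). Names invented.
ONT_A = {"vehicle": None, "car": "vehicle", "truck": "vehicle"}
ONT_B = {"conveyance": None, "automobile": "conveyance", "lorry": "conveyance"}

def lexical(a, b):
    """Crude string similarity standing in for a thesaurus-based matcher."""
    return SequenceMatcher(None, a, b).ratio()

def iterate_similarity(ont_a, ont_b, rounds=5, alpha=0.7):
    # Start from pure lexical similarity between all concept pairs.
    sim = {(a, b): lexical(a, b) for a in ont_a for b in ont_b}
    for _ in range(rounds):
        nxt = {}
        for (a, b), s in sim.items():
            pa, pb = ont_a[a], ont_b[b]
            # Structural component: similarity of the parents, if both exist.
            structural = sim[(pa, pb)] if pa and pb else s
            nxt[(a, b)] = alpha * lexical(a, b) + (1 - alpha) * structural
        sim = nxt
    return sim

sim = iterate_similarity(ONT_A, ONT_B)
# Derive an alignment: best match in ONT_B for each concept in ONT_A.
alignment = {a: max(ONT_B, key=lambda b: sim[(a, b)]) for a in ONT_A}
print(alignment)
```

ASMOV additionally verifies the derived alignment for semantic inconsistencies and repeats until a fixed point; that step is omitted here.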

  17. An ontology for sensor networks

    NASA Astrophysics Data System (ADS)

    Compton, Michael; Neuhaus, Holger; Bermudez, Luis; Cox, Simon

    2010-05-01

    Sensors and networks of sensors are important ways of monitoring and digitizing reality. As the number and size of sensor networks grows, so too does the amount of data collected. Users of such networks typically need to discover the sensors and data that fit their needs without necessarily understanding the complexities of the network itself. The burden on users is eased if the network and its data are expressed in terms of concepts familiar to the users and their job functions, rather than in terms of the network or how it was designed. Furthermore, the task of collecting and combining data from multiple sensor networks is made easier if metadata about the data and the networks is stored in formats and conceptual models that are amenable to machine reasoning and inference. While the OGC's (Open Geospatial Consortium) SWE (Sensor Web Enablement) standards provide for the description of and access to data and metadata for sensors, they do not provide facilities for abstraction, categorization, and reasoning consistent with standard technologies. Once sensors and networks are described using rich semantics (that is, by using logic to describe the sensors, the domain of interest, and the measurements), then reasoning and classification can be used to analyse and categorise data, relate measurements with similar information content, and manage, query and task sensors. This will enable types of automated processing and logical assurance built on OGC standards. The W3C SSN-XG (Semantic Sensor Networks Incubator Group) is producing a generic ontology to describe sensors, their environment and the measurements they make. The ontology provides definitions for the structure of sensors and observations, leaving the details of the observed domain unspecified. This allows abstract representations of real world entities, which are not observed directly but through their observable qualities. Domain semantics, units of measurement, time and time series, and location and mobility
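One practical payoff of machine-reasonable sensor metadata is query-by-capability rather than query-by-network-detail. The following sketch is invented (the ontology terms are placeholders, not SSN-XG classes) but shows the style of discovery such descriptions enable:

```python
# Hypothetical sketch: discover sensors by the observed property declared in
# their semantic description, rather than by network-specific identifiers.
# All class and property names below are invented placeholders.

SENSOR_DESCRIPTIONS = [
    {"id": "net1/s01", "observes": "AirTemperature", "unit": "degC"},
    {"id": "net1/s02", "observes": "RelativeHumidity", "unit": "percent"},
    {"id": "net2/a7",  "observes": "AirTemperature", "unit": "degC"},
]

# A toy subsumption hierarchy: subclass -> superclass.
SUBCLASS_OF = {"AirTemperature": "Temperature", "RelativeHumidity": "Humidity"}

def is_a(cls, ancestor):
    """Walk the subclass hierarchy upward, as a reasoner's classifier would."""
    while cls is not None:
        if cls == ancestor:
            return True
        cls = SUBCLASS_OF.get(cls)
    return False

def discover(observed_property):
    """Find sensors, across networks, whose observed property matches."""
    return [d["id"] for d in SENSOR_DESCRIPTIONS
            if is_a(d["observes"], observed_property)]

print(discover("Temperature"))  # ['net1/s01', 'net2/a7']
```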

  18. CLO: The cell line ontology

    PubMed Central

    2014-01-01

    Background Cell lines have been widely used in biomedical research. The community-based Cell Line Ontology (CLO) is a member of the OBO Foundry library that covers the domain of cell lines. Since its publication two years ago, significant updates have been made, including new groups joining the CLO consortium, new cell line cells, upper level alignment with the Cell Ontology (CL) and the Ontology for Biomedical Investigation, and logical extensions. Construction and content Collaboration among the CLO, CL, and OBI has established consensus definitions of cell line-specific terms such as ‘cell line’, ‘cell line cell’, ‘cell line culturing’, and ‘mortal’ vs. ‘immortal cell line cell’. A cell line is a genetically stable cultured cell population that contains individual cell line cells. The hierarchical structure of the CLO is built based on the hierarchy of the in vivo cell types defined in CL and tissue types (from which cell line cells are derived) defined in the UBERON cross-species anatomy ontology. The new hierarchical structure makes it easier to browse, query, and perform automated classification. We have recently added classes representing more than 2,000 cell line cells from the RIKEN BRC Cell Bank to CLO. Overall, the CLO now contains ~38,000 classes of specific cell line cells derived from over 200 in vivo cell types from various organisms. Utility and discussion The CLO has been applied to different biomedical research studies. Example case studies include annotation and analysis of EBI ArrayExpress data, bioassays, and host-vaccine/pathogen interaction. CLO’s utility goes beyond a catalogue of cell line types. The alignment of the CLO with related ontologies combined with the use of ontological reasoners will support sophisticated inferencing to advance translational informatics development. PMID:25852852

  19. Markov Chain Ontology Analysis (MCOA)

    PubMed Central

    2012-01-01

    Background Biomedical ontologies have become an increasingly critical lens through which researchers analyze the genomic, clinical and bibliographic data that fuels scientific research. Of particular relevance are methods, such as enrichment analysis, that quantify the importance of ontology classes relative to a collection of domain data. Current analytical techniques, however, remain limited in their ability to handle many important types of structural complexity encountered in real biological systems, including class overlaps, continuously valued data, inter-instance relationships, non-hierarchical relationships between classes, semantic distance and sparse data. Results In this paper, we describe a methodology called Markov Chain Ontology Analysis (MCOA) and illustrate its use through an MCOA-based enrichment analysis application based on a generative model of gene activation. MCOA models the classes in an ontology, the instances from an associated dataset and all directional inter-class, class-to-instance and inter-instance relationships as a single finite ergodic Markov chain. The adjusted transition probability matrix for this Markov chain enables the calculation of eigenvector values that quantify the importance of each ontology class relative to other classes and the associated data set members. On both controlled Gene Ontology (GO) data sets created with Escherichia coli, Drosophila melanogaster and Homo sapiens annotations and real gene expression data extracted from the Gene Expression Omnibus (GEO), the MCOA enrichment analysis approach provides the best performance among comparable state-of-the-art methods.
Conclusion A methodology based on Markov chain models and network analytic metrics can help detect the relevant signal within large, highly interdependent and noisy data sets and, for applications such as enrichment analysis, has been shown to generate superior performance on both real and simulated data relative to existing state-of-the-art approaches.
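The core mechanism, scoring states of an ergodic Markov chain by their stationary eigenvector, can be sketched in a few lines. This is a bare-bones illustration of the idea, not the published MCOA implementation; the four-state chain below is invented:

```python
# Toy MCOA-style scoring: model ontology classes and an annotated instance as
# states of an ergodic Markov chain, then take the stationary distribution
# (the left eigenvector for eigenvalue 1) as an importance score per state.

# States 0-2 are ontology classes, state 3 is an annotated instance.
# transitions[i][j] = probability of moving from state i to state j.
transitions = [
    [0.1, 0.4, 0.4, 0.1],   # root class -> subclasses and instance
    [0.5, 0.1, 0.1, 0.3],   # subclass A
    [0.5, 0.1, 0.3, 0.1],   # subclass B
    [0.3, 0.5, 0.1, 0.1],   # instance, mostly annotated to subclass A
]

def stationary(P, iters=200):
    """Approximate the stationary distribution by power iteration."""
    n = len(P)
    v = [1.0 / n] * n
    for _ in range(iters):
        v = [sum(v[i] * P[i][j] for i in range(n)) for j in range(n)]
    return v

scores = stationary(transitions)
print([round(s, 3) for s in scores])
```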

  20. Design of schistosomiasis ontology (IDOSCHISTO) extending the infectious disease ontology.

    PubMed

    Camara, Gaoussou; Despres, Sylvie; Djedidi, Rim; Lo, Moussa

    2013-01-01

    Epidemiological monitoring of the spreading of schistosomiasis brings together many practitioners working at different levels of granularity (biology, host individual, host population), who have different perspectives (biology, clinic and epidemiology) on the same phenomenon. The biological perspective deals with pathogens (e.g. life cycle) or physiopathology, while the clinical perspective deals with hosts (e.g. healthy or infected host, diagnosis, treatment, etc.). In the epidemiological perspective, corresponding to the host-population level of granularity, the schistosomiasis disease is characterized according to the way (causes, risk factors, etc.) it spreads in this population over space and time. In this paper we provide an ontological analysis and design for the schistosomiasis domain knowledge and spreading dynamics. IDOSCHISTO, the schistosomiasis ontology, is designed as an extension of the Infectious Disease Ontology (IDO). This ontology aims at supporting the schistosomiasis monitoring process during a spreading crisis by enabling data integration and semantic interoperability for collaborative work on the one hand, and risk analysis and decision making on the other.

  1. Versioning System for Distributed Ontology Development

    DTIC Science & Technology

    2016-03-15

    S.K. Damodaran, Massachusetts Institute of Technology Lincoln Laboratory, 15 March 2016. This material is based on work supported by the Assistant Secretary of Defense for... EXECUTIVE SUMMARY: Common Cyber Environment Representation (CCER) is an ontology for describing operationally relevant, and

  2. A Gene Ontology Tutorial in Python.

    PubMed

    Vesztrocy, Alex Warwick; Dessimoz, Christophe

    2017-01-01

    This chapter is a tutorial on using Gene Ontology resources in the Python programming language. This entails querying the Gene Ontology graph, retrieving Gene Ontology annotations, performing gene enrichment analyses, and computing basic semantic similarity between GO terms. An interactive version of the tutorial, including solutions, is available at http://gohandbook.org .
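One of the steps the tutorial covers, gene enrichment analysis, commonly reduces to a hypergeometric test of over-representation. The following self-contained sketch shows that calculation using only the standard library; the gene counts are invented for illustration:

```python
# Hypergeometric over-representation test, the statistic behind a basic GO
# enrichment analysis. Counts below are invented.
from math import comb

def hypergeom_pvalue(N, K, n, k):
    """P(X >= k) when drawing n genes from N, of which K carry the GO term."""
    total = comb(N, n)
    return sum(comb(K, x) * comb(N - K, n - x)
               for x in range(k, min(K, n) + 1)) / total

# 30 of 1000 annotated genes carry the term; our 50-gene study set has 6 of them.
p = hypergeom_pvalue(N=1000, K=30, n=50, k=6)
print(f"p = {p:.4f}")
```

In practice one would compute this per GO term and correct for multiple testing (e.g. Benjamini-Hochberg) before reporting enriched terms.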

  3. Controlled Vocabularies, Mini Ontologies and Interoperability (Invited)

    NASA Astrophysics Data System (ADS)

    King, T. A.; Walker, R. J.; Roberts, D.; Thieman, J.; Ritschel, B.; Cecconi, B.; Genot, V. N.

    2013-12-01

    Interoperability has been an elusive goal, but in recent years advances have been made using controlled vocabularies, mini-ontologies and a lot of collaboration. This has led to increased interoperability between disciplines in the U.S. and between international projects. We discuss the successful pattern followed by SPASE, IVOA and IPDA to achieve this new level of international interoperability. A key aspect of the pattern is open standards and open participation with interoperability achieved with shared services, public APIs, standard formats and open access to data. Many of these standards are expressed as controlled vocabularies and mini ontologies. To illustrate the pattern we look at SPASE related efforts and participation of North America's Heliophysics Data Environment and CDPP; Europe's Cluster Active Archive, IMPEx, EuroPlanet, ESPAS and HELIO; and Japan's magnetospheric missions. Each participating project has its own life cycle and successful standards development must always take this into account. A major challenge for sustained collaboration and interoperability is the limited lifespan of many of the participating projects. Innovative approaches and new tools and frameworks are often developed as competitively selected, limited term projects, but for sustainable interoperability successful approaches need to become part of a long term infrastructure. This is being encouraged and achieved in many domains and we are entering a golden age of interoperability.

  4. Memory-efficient table look-up optimized algorithm for context-based adaptive variable length decoding in H.264/advanced video coding

    NASA Astrophysics Data System (ADS)

    Wang, Jianhua; Cheng, Lianglun; Wang, Tao; Peng, Xiaodong

    2016-03-01

    Table look-up operations play a very important role in the decoding process of context-based adaptive variable length decoding (CAVLD) in H.264/advanced video coding (AVC). However, frequent table look-up operations result in heavy table memory access, which in turn leads to high table power consumption. To reduce the heavy table memory access of current methods, and thereby their high power consumption, a memory-efficient table look-up optimized algorithm is presented for CAVLD. The contribution of this paper is the introduction of index search technology to reduce memory access during table look-up, and thus reduce table power consumption. Specifically, our scheme uses index search to cut memory access by reducing the searching and matching operations for code_word, taking advantage of the internal relationship among the length of zeros in code_prefix, the value of code_suffix, and code_length, thus saving the power consumed by table look-up. The experimental results show that our proposed index-search-based table look-up algorithm lowers memory access consumption by about 60% compared with a sequential-search table look-up scheme, saving considerable power for CAVLD in H.264/AVC.
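The index-search idea can be illustrated with a toy variable-length code: group codewords by prefix length so that decoding indexes directly into a small sub-table instead of sequentially scanning every table entry. The codebook below is invented and far smaller than the real H.264 CAVLC tables:

```python
# Illustrative index-based VLC look-up (not the paper's actual tables).
# Codewords have the form 0...01 + suffix; the count of leading zeros selects
# a sub-table, and the suffix value indexes directly into it.

# Toy codebook: prefix_zero_count -> {suffix_value: decoded symbol}.
SUBTABLES = {
    0: {0: "EOB"},
    1: {0: "level+1", 1: "level-1"},
    2: {0: "level+2", 1: "level-2", 2: "level+3", 3: "level-3"},
}

def decode(bits):
    """Decode one codeword from a bit string using index search."""
    zeros = 0
    i = 0
    while bits[i] == "0":            # count the zero prefix
        zeros += 1
        i += 1
    i += 1                           # skip the terminating '1'
    suffix_len = zeros               # in this toy code, suffix width == prefix zeros
    suffix = int(bits[i:i + suffix_len] or "0", 2)
    return SUBTABLES[zeros][suffix]  # direct index: no sequential matching

print(decode("1"))       # EOB
print(decode("011"))     # level-1
print(decode("00110"))   # level+3
```

The saving comes from the fact that each decode touches one sub-table entry, whereas a sequential search would compare the input against table entries one by one.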

  5. Track-Level-Compensation Look-Up Table Improves Antenna Pointing Precision

    NASA Technical Reports Server (NTRS)

    Gawronski, W.; Baher, F.; Gama, E.

    2006-01-01

    This article presents the improvement of the beam-waveguide antenna pointing accuracy due to the implementation of the track-level-compensation look-up table. It presents the development of the table, from the measurements of the inclinometer tilts to the processing of the measurement data and the determination of the three-axis alidade rotations. The table consists of three-axis rotations of the alidade as a function of the azimuth position. The article also presents the equations to determine the elevation and cross-elevation errors of the antenna as a function of the alidade rotations and the antenna azimuth and elevation positions. The table performance was verified using radio beam pointing data. The pointing error decreased from 4.5 mdeg to 1.4 mdeg in elevation and from 14.5 mdeg to 3.1 mdeg in cross-elevation. I. Introduction The Deep Space Station 25 (DSS 25) antenna shown in Fig. 1 is one of NASA's Deep Space Network beam-waveguide (BWG) antennas. At 34 GHz (Ka-band) operation, it is necessary to be able to track with a pointing accuracy of 2-mdeg root-mean-square (rms). Repeatable pointing errors of several millidegrees of magnitude have been observed during the BWG antenna calibration measurements. The systematic errors of order 4 and lower are eliminated using the antenna pointing model. However, repeatable pointing errors of higher order are out of reach of the model. The most prominent high-order systematic errors are the ones caused by the uneven azimuth track. The track is shown in Fig. 2. Manufacturing and installation tolerances, as well as gaps between the segments of the track, are the sources of the pointing errors that reach over 14-mdeg peak-to-peak magnitude, as reported in [1,2]. This article presents a continuation of the investigations and measurements of the pointing errors caused by the azimuth-track-level unevenness that were presented in [1] and [2], and it presents the implementation results. Track-level-compensation (TLC) look-up
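A look-up table of this kind is typically applied by interpolating the stored three-axis rotations at the current azimuth and feeding the result into the pointing-error equations. The sketch below shows only the interpolation step; the table values are invented (in millidegrees), not measured DSS 25 data:

```python
# Schematic sketch of querying a track-level-compensation look-up table:
# linearly interpolate three-axis alidade rotations at a given azimuth.
# Table rows: azimuth (deg) -> (tilt_x, tilt_y, twist) in mdeg; values invented.
TABLE = [
    (0.0,   (0.0,  2.0, -1.0)),
    (90.0,  (3.5, -1.0,  0.5)),
    (180.0, (1.0,  0.0,  2.0)),
    (270.0, (-2.0, 1.5, -0.5)),
    (360.0, (0.0,  2.0, -1.0)),   # wrap: same as the 0 deg row
]

def rotations_at(azimuth):
    """Linear interpolation of the three-axis rotations at a given azimuth."""
    az = azimuth % 360.0
    for (a0, r0), (a1, r1) in zip(TABLE, TABLE[1:]):
        if a0 <= az <= a1:
            t = (az - a0) / (a1 - a0)
            return tuple(x0 + t * (x1 - x0) for x0, x1 in zip(r0, r1))
    raise ValueError("azimuth out of table range")

print(rotations_at(45.0))  # halfway between the 0 and 90 deg rows
```

A real implementation would use a much denser azimuth grid, since the track-induced errors are high-order, and would then convert the interpolated rotations into elevation and cross-elevation corrections.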

  6. Emotion Education without Ontological Commitment?

    ERIC Educational Resources Information Center

    Kristjansson, Kristjan

    2010-01-01

    Emotion education is enjoying new-found popularity. This paper explores the "cosy consensus" that seems to have developed in education circles, according to which approaches to emotion education are immune from metaethical considerations such as contrasting rationalist and sentimentalist views about the moral ontology of emotions. I spell out five…

  7. An ontology for microbial phenotypes.

    PubMed

    Chibucos, Marcus C; Zweifel, Adrienne E; Herrera, Jonathan C; Meza, William; Eslamfam, Shabnam; Uetz, Peter; Siegele, Deborah A; Hu, James C; Giglio, Michelle G

    2014-11-30

    Phenotypic data are routinely used to elucidate gene function in organisms amenable to genetic manipulation. However, previous to this work, there was no generalizable system in place for the structured storage and retrieval of phenotypic information for bacteria. The Ontology of Microbial Phenotypes (OMP) has been created to standardize the capture of such phenotypic information from microbes. OMP has been built on the foundations of the Basic Formal Ontology and the Phenotype and Trait Ontology. Terms have logical definitions that can facilitate computational searching of phenotypes and their associated genes. OMP can be accessed via a wiki page as well as downloaded from SourceForge. Initial annotations with OMP are being made for Escherichia coli using a wiki-based annotation capture system. New OMP terms are being concurrently developed as annotation proceeds. We anticipate that diverse groups studying microbial genetics and associated phenotypes will employ OMP for standardizing microbial phenotype annotation, much as the Gene Ontology has standardized gene product annotation. The resulting OMP resource and associated annotations will facilitate prediction of phenotypes for unknown genes and result in new experimental characterization of phenotypes and functions.

  8. The SWAN biomedical discourse ontology.

    PubMed

    Ciccarese, Paolo; Wu, Elizabeth; Wong, Gwen; Ocana, Marco; Kinoshita, June; Ruttenberg, Alan; Clark, Tim

    2008-10-01

    Developing cures for highly complex diseases, such as neurodegenerative disorders, requires extensive interdisciplinary collaboration and exchange of biomedical information in context. Our ability to exchange such information across sub-specialties today is limited by the current scientific knowledge ecosystem's inability to properly contextualize and integrate data and discourse in machine-interpretable form. This inherently limits the productivity of research and the progress toward cures for devastating diseases such as Alzheimer's and Parkinson's. SWAN (Semantic Web Applications in Neuromedicine) is an interdisciplinary project to develop a practical, common, semantically structured, framework for biomedical discourse initially applied, but not limited, to significant problems in Alzheimer Disease (AD) research. The SWAN ontology has been developed in the context of building a series of applications for biomedical researchers, as well as in extensive discussions and collaborations with the larger bio-ontologies community. In this paper, we present and discuss the SWAN ontology of biomedical discourse. We ground its development theoretically, present its design approach, explain its main classes and their application, and show its relationship to other ongoing activities in biomedicine and bio-ontologies.

  9. Gene Ontology Annotations and Resources

    PubMed Central

    2013-01-01

    The Gene Ontology (GO) Consortium (GOC, http://www.geneontology.org) is a community-based bioinformatics resource that classifies gene product function through the use of structured, controlled vocabularies. Over the past year, the GOC has implemented several processes to increase the quantity, quality and specificity of GO annotations. First, the number of manual, literature-based annotations has grown at an increasing rate. Second, as a result of a new ‘phylogenetic annotation’ process, manually reviewed, homology-based annotations are becoming available for a broad range of species. Third, the quality of GO annotations has been improved through a streamlined process for, and automated quality checks of, GO annotations deposited by different annotation groups. Fourth, the consistency and correctness of the ontology itself has increased by using automated reasoning tools. Finally, the GO has been expanded not only to cover new areas of biology through focused interaction with experts, but also to capture greater specificity in all areas of the ontology using tools for adding new combinatorial terms. The GOC works closely with other ontology developers to support integrated use of terminologies. The GOC supports its user community through the use of e-mail lists, social media and web-based resources. PMID:23161678

  10. Gene Ontology annotations and resources.

    PubMed

    Blake, J A; Dolan, M; Drabkin, H; Hill, D P; Li, Ni; Sitnikov, D; Bridges, S; Burgess, S; Buza, T; McCarthy, F; Peddinti, D; Pillai, L; Carbon, S; Dietze, H; Ireland, A; Lewis, S E; Mungall, C J; Gaudet, P; Chrisholm, R L; Fey, P; Kibbe, W A; Basu, S; Siegele, D A; McIntosh, B K; Renfro, D P; Zweifel, A E; Hu, J C; Brown, N H; Tweedie, S; Alam-Faruque, Y; Apweiler, R; Auchinchloss, A; Axelsen, K; Bely, B; Blatter, M -C; Bonilla, C; Bouguerleret, L; Boutet, E; Breuza, L; Bridge, A; Chan, W M; Chavali, G; Coudert, E; Dimmer, E; Estreicher, A; Famiglietti, L; Feuermann, M; Gos, A; Gruaz-Gumowski, N; Hieta, R; Hinz, C; Hulo, C; Huntley, R; James, J; Jungo, F; Keller, G; Laiho, K; Legge, D; Lemercier, P; Lieberherr, D; Magrane, M; Martin, M J; Masson, P; Mutowo-Muellenet, P; O'Donovan, C; Pedruzzi, I; Pichler, K; Poggioli, D; Porras Millán, P; Poux, S; Rivoire, C; Roechert, B; Sawford, T; Schneider, M; Stutz, A; Sundaram, S; Tognolli, M; Xenarios, I; Foulgar, R; Lomax, J; Roncaglia, P; Khodiyar, V K; Lovering, R C; Talmud, P J; Chibucos, M; Giglio, M Gwinn; Chang, H -Y; Hunter, S; McAnulla, C; Mitchell, A; Sangrador, A; Stephan, R; Harris, M A; Oliver, S G; Rutherford, K; Wood, V; Bahler, J; Lock, A; Kersey, P J; McDowall, D M; Staines, D M; Dwinell, M; Shimoyama, M; Laulederkind, S; Hayman, T; Wang, S -J; Petri, V; Lowry, T; D'Eustachio, P; Matthews, L; Balakrishnan, R; Binkley, G; Cherry, J M; Costanzo, M C; Dwight, S S; Engel, S R; Fisk, D G; Hitz, B C; Hong, E L; Karra, K; Miyasato, S R; Nash, R S; Park, J; Skrzypek, M S; Weng, S; Wong, E D; Berardini, T Z; Huala, E; Mi, H; Thomas, P D; Chan, J; Kishore, R; Sternberg, P; Van Auken, K; Howe, D; Westerfield, M

    2013-01-01

    The Gene Ontology (GO) Consortium (GOC, http://www.geneontology.org) is a community-based bioinformatics resource that classifies gene product function through the use of structured, controlled vocabularies. Over the past year, the GOC has implemented several processes to increase the quantity, quality and specificity of GO annotations. First, the number of manual, literature-based annotations has grown at an increasing rate. Second, as a result of a new 'phylogenetic annotation' process, manually reviewed, homology-based annotations are becoming available for a broad range of species. Third, the quality of GO annotations has been improved through a streamlined process for, and automated quality checks of, GO annotations deposited by different annotation groups. Fourth, the consistency and correctness of the ontology itself has increased by using automated reasoning tools. Finally, the GO has been expanded not only to cover new areas of biology through focused interaction with experts, but also to capture greater specificity in all areas of the ontology using tools for adding new combinatorial terms. The GOC works closely with other ontology developers to support integrated use of terminologies. The GOC supports its user community through the use of e-mail lists, social media and web-based resources.

  11. Gradient Learning Algorithms for Ontology Computing

    PubMed Central

    Gao, Wei; Zhu, Linli

    2014-01-01

    The gradient learning model has been attracting great attention in view of its promising prospects for applications in statistics, data dimensionality reduction, and other specific fields. In this paper, we propose a new gradient learning model for ontology similarity measuring and ontology mapping in the multidividing setting. The sample error in this setting is given by virtue of the hypothesis space and the trick of the ontology dividing operator. Finally, two experiments on the plant and humanoid robotics fields verify the efficiency of the new computation model for ontology similarity measure and ontology mapping applications in the multidividing setting. PMID:25530752

  12. Ontology for Vector Surveillance and Management

    PubMed Central

    LOZANO-FUENTES, SAUL; BANDYOPADHYAY, ARITRA; COWELL, LINDSAY G.; GOLDFAIN, ALBERT; EISEN, LARS

    2013-01-01

    Ontologies, which are made up of standardized and defined controlled vocabulary terms and their interrelationships, are comprehensive and readily searchable repositories for knowledge in a given domain. The Open Biomedical Ontologies (OBO) Foundry was initiated in 2001 with the aims of becoming an “umbrella” for life-science ontologies and promoting the use of ontology development best practices. A software application (OBO-Edit; *.obo file format) was developed to facilitate ontology development and editing. The OBO Foundry now comprises over 100 ontologies and candidate ontologies, including the NCBI organismal classification ontology (NCBITaxon), the Mosquito Insecticide Resistance Ontology (MIRO), the Infectious Disease Ontology (IDO), the IDOMAL malaria ontology, and ontologies for mosquito gross anatomy and tick gross anatomy. We previously developed a disease data management system for dengue and malaria control programs, which incorporated a set of information trees built upon ontological principles, including a “term tree” to promote the use of standardized terms. In the course of doing so, we realized that there were substantial gaps in existing ontologies with regard to concepts, processes, and, especially, physical entities (e.g., vector species, pathogen species, and vector surveillance and management equipment) in the domain of surveillance and management of vectors and vector-borne pathogens. We therefore produced an ontology for vector surveillance and management, focusing on arthropod vectors and vector-borne pathogens with relevance to humans or domestic animals, and with special emphasis on content to support operational activities through inclusion in databases, data management systems, or decision support systems. The Vector Surveillance and Management Ontology (VSMO) includes >2,200 unique terms, of which the vast majority (>80%) were newly generated during the development of this ontology. One core feature of the VSMO is the linkage

  13. Ontology for vector surveillance and management.

    PubMed

    Lozano-Fuentes, Saul; Bandyopadhyay, Aritra; Cowell, Lindsay G; Goldfain, Albert; Eisen, Lars

    2013-01-01

Ontologies, which are made up of standardized and defined controlled vocabulary terms and their interrelationships, are comprehensive and readily searchable repositories for knowledge in a given domain. The Open Biomedical Ontologies (OBO) Foundry was initiated in 2001 with the aims of becoming an "umbrella" for life-science ontologies and promoting the use of ontology development best practices. A software application (OBO-Edit; *.obo file format) was developed to facilitate ontology development and editing. The OBO Foundry now comprises over 100 ontologies and candidate ontologies, including the NCBI organismal classification ontology (NCBITaxon), the Mosquito Insecticide Resistance Ontology (MIRO), the Infectious Disease Ontology (IDO), the IDOMAL malaria ontology, and ontologies for mosquito gross anatomy and tick gross anatomy. We previously developed a disease data management system for dengue and malaria control programs, which incorporated a set of information trees built upon ontological principles, including a "term tree" to promote the use of standardized terms. In the course of doing so, we realized that there were substantial gaps in existing ontologies with regard to concepts, processes, and, especially, physical entities (e.g., vector species, pathogen species, and vector surveillance and management equipment) in the domain of surveillance and management of vectors and vector-borne pathogens. We therefore produced an ontology for vector surveillance and management, focusing on arthropod vectors and vector-borne pathogens with relevance to humans or domestic animals, and with special emphasis on content to support operational activities through inclusion in databases, data management systems, or decision support systems. The Vector Surveillance and Management Ontology (VSMO) includes >2,200 unique terms, of which the vast majority (>80%) were newly generated during the development of this ontology. One core feature of the VSMO is the linkage, through

  14. One-eighth look-up table method for effectively generating computer-generated hologram patterns

    NASA Astrophysics Data System (ADS)

    Cho, Sungjin; Ju, Byeong-Kwon; Kim, Nam-Young; Park, Min-Chul

    2014-05-01

The computer-generated hologram (CGH) is regarded as a means of generating ideal digital holograms, but it has an unavoidable problem: the computational burden of generating a CGH is very large. Many recent studies have investigated ways to reduce the computational complexity of CGH using methods such as look-up tables (LUTs) and parallel processing. Each method is effective at reducing the computation time of CGH generation; however, it is difficult to apply both methods simultaneously because of the heavy memory consumption of the LUT technique. We therefore proposed a one-eighth LUT method in which the memory usage of the LUT is reduced, making it possible to apply both fast computing methods at once. With the one-eighth LUT method, only one-eighth of each zone plate is stored in the LUT, and the full zone plates are accessed through an indexing method, significantly reducing the memory usage of the LUT. We also confirmed the feasibility of reducing the computation time of CGH using general-purpose graphics processing units while keeping the memory usage low.
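The symmetry idea can be sketched in a few lines: for a circularly symmetric zone plate, only one octant of samples needs to be stored, and any other sample is recovered by reflecting its indices into that octant. The table size, the toy phase formula, and the function names below are illustrative, not the paper's implementation.

```python
import numpy as np

def build_octant_lut(n):
    """Store only the first octant (x >= y >= 0) of an n x n zone-plate
    pattern; the remaining seven octants follow by symmetry."""
    lut = {}
    for x in range(n):
        for y in range(x + 1):          # octant: 0 <= y <= x
            r2 = x * x + y * y
            lut[(x, y)] = np.cos(0.01 * r2)   # toy Fresnel-like phase
    return lut

def lookup(lut, x, y):
    """Map an arbitrary (x, y) into the stored octant via |.| and a swap."""
    x, y = abs(x), abs(y)
    if y > x:
        x, y = y, x
    return lut[(x, y)]
```

For an n x n pattern this stores n(n+1)/2 entries instead of n^2, roughly the one-eighth saving exploited by the paper (the value at (x, y) depends only on x^2 + y^2, so all eight sign/swap images of a point share one entry).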

  15. Lookup Tables for Predicting CHF and Film-Boiling Heat Transfer: Past, Present, and Future

    SciTech Connect

    Groeneveld, D.C.; Leung, L.K. H.; Guo, Y.; Vasic, A.; El Nakla, M.; Peng, S.W.; Yang, J.; Cheng, S.C.

    2005-10-15

Lookup tables (LUTs) have been used widely for the prediction of critical heat flux (CHF) and film-boiling heat transfer for water-cooled tubes. LUTs are basically normalized data banks. They eliminate the need to choose between the many different CHF and film-boiling heat transfer prediction methods available. The LUTs have many advantages; e.g., (a) they are simple to use, (b) there is no iteration required, (c) they have a wide range of applications, (d) they may be applied to nonaqueous fluids using fluid-to-fluid modeling relationships, and (e) they are based on a very large database. Concerns associated with the use of LUTs include (a) there are fluctuations in the value of the CHF or film-boiling heat transfer coefficient (HTC) with pressure, mass flux, and quality, (b) there are large variations in the CHF or the film-boiling HTC between the adjacent table entries, and (c) there is a lack or scarcity of data at certain flow conditions. Work on the LUTs is continuing. This will resolve the aforementioned concerns and improve the LUT prediction capability. This work concentrates on better smoothing of the LUT entries, increasing the database, and improving models at conditions where data are sparse or absent.
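The "simple to use, no iteration required" property amounts to one multilinear interpolation per query in a table indexed by pressure, mass flux, and quality. A minimal sketch, with invented toy values in place of real CHF data:

```python
import numpy as np
from scipy.interpolate import RegularGridInterpolator

# Hypothetical miniature CHF bank: axes are pressure (kPa), mass flux
# (kg m^-2 s^-1) and thermodynamic quality (-); the stored values are
# toy numbers, not real CHF data.
pressure = np.array([1000.0, 5000.0, 10000.0])
mass_flux = np.array([500.0, 2000.0])
quality = np.array([0.0, 0.5, 1.0])
chf = np.random.default_rng(0).uniform(1e6, 5e6, (3, 2, 3))   # W/m^2

# One trilinear interpolation per query: no correlation choice, no iteration.
chf_lookup = RegularGridInterpolator((pressure, mass_flux, quality), chf)
```

A query such as `chf_lookup([[7000.0, 1200.0, 0.3]])` returns a value interpolated between the eight surrounding table entries, which is also why the table-entry fluctuations and large inter-entry variations mentioned above feed directly into prediction error.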

  16. Modeling cosmic ray proton induced terrestrial neutron flux: A look-up table

    NASA Astrophysics Data System (ADS)

    Overholt, Andrew C.; Melott, Adrian L.; Atri, Dimitra

    2013-06-01

    contribute a significant radiation dose at commercial passenger airplane altitudes. With cosmic ray energies > 1 GeV, these effects could, in principle, be propagated to ground level. Under current conditions, the cosmic ray spectrum incident on the Earth is dominated by particles with energies < 1 GeV. Astrophysical shocks from events such as supernovae accelerate high-energy cosmic rays (HECRs) well above this range. The Earth is likely episodically exposed to a greatly increased HECR flux from such events. Solar events of smaller energies are much more common and short lived but still remain a topic of interest due to the ground level enhancements they produce. The air showers produced by cosmic rays (CRs) ionize the atmosphere and produce harmful secondary particles such as muons and neutrons. Although the secondary spectra from current day terrestrial cosmic ray flux are well known, this is not true for spectra produced by many astrophysical events. This work shows the results of Monte Carlo simulations quantifying the neutron flux due to CRs at various primary energies and altitudes. We provide here look-up tables that can be used to determine neutron fluxes from proton primaries with kinetic energies of 1 MeV-1 PeV. By convolution, one can compute the neutron flux for any arbitrary CR spectrum. This contrasts with all other similar works, which are spectrum dependent. Our results demonstrate the difficulty in deducing the nature of primaries from the spectrum of ground level neutron enhancements.
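The convolution step described above can be sketched directly: with a lookup of neutron yield per primary proton tabulated against primary energy, the neutron flux for an arbitrary spectrum is the yield-weighted integral of that spectrum. The yield curve below is a toy placeholder, not the paper's tables.

```python
import numpy as np

# Toy stand-in for the look-up table: neutron yield per primary proton
# at each primary kinetic energy (one altitude), 1 MeV to 1 PeV.
energies = np.logspace(0, 9, 200)               # MeV
yield_per_primary = np.log10(energies) + 1.0    # illustrative yield curve

def neutron_flux(spectrum):
    """Trapezoidal integral of Y(E) * dN/dE over the tabulated energies."""
    f = yield_per_primary * spectrum(energies)
    return float(np.sum(0.5 * (f[1:] + f[:-1]) * np.diff(energies)))

# Example: a power-law cosmic-ray spectrum dN/dE ~ E^-2.7.
flux = neutron_flux(lambda E: E ** -2.7)
```

Because the tables are per-primary-energy, this works for any incident spectrum, which is the spectrum-independence the authors contrast with earlier work.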

  17. Track Level Compensation Look-up Table Improves Antenna Pointing Precision

    NASA Technical Reports Server (NTRS)

    Gawronski, Wodek; Baher, Farrokh; Gama, Eric

    2006-01-01

    The pointing accuracy of the NASA Deep Space Network antennas is significantly impacted by the unevenness of the antenna azimuth track. The track unevenness causes repeatable antenna rotations, and repeatable pointing errors. The paper presents the improvement of the pointing accuracy of the antennas by implementing the track-level-compensation look-up table. The table consists of three axis rotations of the alidade as a function of the azimuth position. The paper presents the development of the table, based on the measurements of the inclinometer tilts, processing the measurement data, and determination of the three-axis alidade rotations from the tilt data. It also presents the determination of the elevation and cross-elevation errors of the antenna as a function of the alidade rotations. The pointing accuracy of the antenna with and without a table was measured using various radio beam pointing techniques. The pointing error decreased when the table was used, from 1.5 mdeg to 1.2 mdeg in elevation, and from 20.4 mdeg to 2.2 mdeg in cross-elevation.
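The table described above is, in essence, three rotation values tabulated against azimuth and interpolated at the commanded position. A minimal sketch with invented node spacing and values (the real table is derived from inclinometer measurements):

```python
import numpy as np

# Hypothetical track-level-compensation table: three alidade rotations
# (mdeg) sampled every 45 deg of azimuth; values here are random toys.
azimuth_nodes = np.arange(0.0, 361.0, 45.0)               # 0, 45, ..., 360
rotations = np.random.default_rng(1).normal(0.0, 5.0, (len(azimuth_nodes), 3))
rotations[-1] = rotations[0]                              # table wraps at 360 deg

def alidade_rotation(az_deg):
    """Linearly interpolate the three-axis rotation at an azimuth (deg)."""
    az = az_deg % 360.0
    return np.array([np.interp(az, azimuth_nodes, rotations[:, k])
                     for k in range(3)])
```

In the paper the interpolated rotations are then mapped to elevation and cross-elevation pointing corrections; the wrap at 360 deg reflects that the track unevenness is periodic in azimuth.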

  19. Efficient Lookup Table-Based Adaptive Baseband Predistortion Architecture for Memoryless Nonlinearity

    NASA Astrophysics Data System (ADS)

    Ba, Seydou N.; Waheed, Khurram; Zhou, G. Tong

    2010-12-01

Digital predistortion is an effective means to compensate for the nonlinear effects of a memoryless system. In the case of a cellular transmitter, a digital baseband predistorter can mitigate the undesirable nonlinear effects along the signal chain, particularly the nonlinear impairments in the radiofrequency (RF) amplifiers. To be practically feasible, the implementation complexity of the predistorter must be minimized so that it becomes a cost-effective solution for the resource-limited wireless handset. This paper proposes optimizations that facilitate the design of a low-cost high-performance adaptive digital baseband predistorter for memoryless systems. A comparative performance analysis of the amplitude and power lookup table (LUT) indexing schemes is presented. An optimized low-complexity amplitude approximation and its hardware synthesis results are also studied. An efficient LUT predistorter training algorithm that combines the fast convergence speed of the normalized least mean squares (NLMS) algorithm with a small hardware footprint is proposed. Results of fixed-point simulations based on the measured nonlinear characteristics of an RF amplifier are presented.
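The core loop of an amplitude-indexed, NLMS-trained LUT predistorter can be sketched as follows. The bin count, step size, and the linear toy amplifier model are illustrative assumptions, not the paper's fixed-point design.

```python
import numpy as np

N_BINS = 64
lut = np.ones(N_BINS, dtype=complex)        # complex gain per amplitude bin

def bin_index(x):
    return min(int(abs(x) * N_BINS), N_BINS - 1)   # amplitudes in [0, 1)

def predistort(x):
    return lut[bin_index(x)] * x

def nlms_update(x, error, mu=0.5, eps=1e-9):
    """NLMS: correct only the addressed bin, step normalized by |x|^2."""
    k = bin_index(x)
    lut[k] += mu * np.conj(x) * error / (abs(x) ** 2 + eps)

# Train against a toy memoryless amplifier with gain 0.8; the addressed
# LUT bin should converge toward the inverse gain 1/0.8 = 1.25.
for _ in range(40):
    x = 0.5
    y = 0.8 * predistort(x)       # amplifier output
    nlms_update(x, x - y)         # drive output toward the ideal x
```

Only one LUT entry is read and written per sample, which is what keeps the hardware footprint small; the normalization by |x|^2 is what gives NLMS its amplitude-independent convergence speed.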

  20. Ontologies as integrative tools for plant science

    PubMed Central

    Walls, Ramona L.; Athreya, Balaji; Cooper, Laurel; Elser, Justin; Gandolfo, Maria A.; Jaiswal, Pankaj; Mungall, Christopher J.; Preece, Justin; Rensing, Stefan; Smith, Barry; Stevenson, Dennis W.

    2012-01-01

Premise of the study: Bio-ontologies are essential tools for accessing and analyzing the rapidly growing pool of plant genomic and phenomic data. Ontologies provide structured vocabularies to support consistent aggregation of data and a semantic framework for automated analyses and reasoning. They are a key component of the semantic web. Methods: This paper provides background on what bio-ontologies are, why they are relevant to botany, and the principles of ontology development. It includes an overview of ontologies and related resources that are relevant to plant science, with a detailed description of the Plant Ontology (PO). We discuss the challenges of building an ontology that covers all green plants (Viridiplantae). Key results: Ontologies can advance plant science in four key areas: (1) comparative genetics, genomics, phenomics, and development; (2) taxonomy and systematics; (3) semantic applications; and (4) education. Conclusions: Bio-ontologies offer a flexible framework for comparative plant biology, based on common botanical understanding. As genomic and phenomic data become available for more species, we anticipate that the annotation of data with ontology terms will become less centralized, while at the same time, the need for cross-species queries will become more common, causing more researchers in plant science to turn to ontologies. PMID:22847540

  1. Ontologies as integrative tools for plant science.

    PubMed

    Walls, Ramona L; Athreya, Balaji; Cooper, Laurel; Elser, Justin; Gandolfo, Maria A; Jaiswal, Pankaj; Mungall, Christopher J; Preece, Justin; Rensing, Stefan; Smith, Barry; Stevenson, Dennis W

    2012-08-01

Bio-ontologies are essential tools for accessing and analyzing the rapidly growing pool of plant genomic and phenomic data. Ontologies provide structured vocabularies to support consistent aggregation of data and a semantic framework for automated analyses and reasoning. They are a key component of the semantic web. This paper provides background on what bio-ontologies are, why they are relevant to botany, and the principles of ontology development. It includes an overview of ontologies and related resources that are relevant to plant science, with a detailed description of the Plant Ontology (PO). We discuss the challenges of building an ontology that covers all green plants (Viridiplantae). Ontologies can advance plant science in four key areas: (1) comparative genetics, genomics, phenomics, and development; (2) taxonomy and systematics; (3) semantic applications; and (4) education. Bio-ontologies offer a flexible framework for comparative plant biology, based on common botanical understanding. As genomic and phenomic data become available for more species, we anticipate that the annotation of data with ontology terms will become less centralized, while at the same time, the need for cross-species queries will become more common, causing more researchers in plant science to turn to ontologies.

  2. A Monte Carlo based lookup table for spectrum analysis of turbid media in the reflectance probe regime

    SciTech Connect

    Xiang Wen; Xiewei Zhong; Tingting Yu; Dan Zhu

    2014-07-31

Fibre-optic diffuse reflectance spectroscopy offers a method for characterising phantoms of biotissue with specified optical properties. For a commercial reflectance probe (six source fibres surrounding a central collection fibre with an inter-fibre spacing of 480 μm; R400-7, Ocean Optics, USA) we have constructed a Monte Carlo based lookup table to create a function called getR(μ_a, μ'_s), where μ_a is the absorption coefficient and μ'_s is the reduced scattering coefficient. Experimental measurements of reflectance from homogeneous calibrated phantoms with given optical properties are compared with the predicted reflectance from the lookup table. The deviation between experiment and prediction is on average 12.1%. (laser biophotonics)
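The typical use of such a table is the inverse direction: given a measured reflectance, find the (μ_a, μ'_s) grid entry that best reproduces it. The sketch below substitutes a toy analytic surface for the Monte Carlo grid, so the numbers are illustrative only.

```python
import numpy as np

# Toy stand-in for the Monte Carlo-built getR(mu_a, mu_s') grid
# (the real table comes from simulation, not this formula).
mua = np.linspace(0.01, 1.0, 50)     # absorption coefficient (mm^-1)
mus = np.linspace(0.5, 5.0, 50)      # reduced scattering coefficient (mm^-1)
MUA, MUS = np.meshgrid(mua, mus, indexing="ij")
R = MUS / (MUS + 10.0 * MUA)         # toy reflectance surface

def invert(r_measured):
    """Grid-search the (mu_a, mu_s') entry whose tabulated reflectance
    best matches a single measured value."""
    i, j = np.unravel_index(np.argmin((R - r_measured) ** 2), R.shape)
    return mua[i], mus[j]
```

A single reflectance value does not pin down both coefficients uniquely; in practice the fit is done across a whole spectrum of wavelengths, which is what makes the inversion well-posed.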

  3. Impact of ontology evolution on functional analyses.

    PubMed

    Groß, Anika; Hartung, Michael; Prüfer, Kay; Kelso, Janet; Rahm, Erhard

    2012-10-15

    Ontologies are used in the annotation and analysis of biological data. As knowledge accumulates, ontologies and annotation undergo constant modifications to reflect this new knowledge. These modifications may influence the results of statistical applications such as functional enrichment analyses that describe experimental data in terms of ontological groupings. Here, we investigate to what degree modifications of the Gene Ontology (GO) impact these statistical analyses for both experimental and simulated data. The analysis is based on new measures for the stability of result sets and considers different ontology and annotation changes. Our results show that past changes in the GO are non-uniformly distributed over different branches of the ontology. Considering the semantic relatedness of significant categories in analysis results allows a more realistic stability assessment for functional enrichment studies. We observe that the results of term-enrichment analyses tend to be surprisingly stable despite changes in ontology and annotation.
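One of the simplest result-set stability measures one could use here is the Jaccard overlap between the significant categories obtained under two ontology/annotation versions; the paper's measures additionally account for semantic relatedness between categories. The GO identifiers below are arbitrary examples.

```python
# Jaccard overlap between two sets of significant GO categories.
def jaccard(a, b):
    a, b = set(a), set(b)
    return len(a & b) / len(a | b) if a | b else 1.0

old_hits = {"GO:0006915", "GO:0008283", "GO:0007049"}   # old GO release
new_hits = {"GO:0006915", "GO:0007049", "GO:0016049"}   # new GO release
stability = jaccard(old_hits, new_hits)   # 2 shared out of 4 distinct terms
```

A purely set-based measure like this treats a replaced term as a total loss even when its replacement is a near-synonymous sibling, which is why weighting by semantic relatedness gives the more realistic stability assessment described above.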

  4. Efficient table lookup without inverse square roots for calculation of pair wise atomic interactions in classical simulations.

    PubMed

    Nilsson, Lennart

    2009-07-15

A major bottleneck in classical atomistic simulations of biomolecular systems is the calculation of the pair wise nonbonded (Coulomb, van der Waals) interactions. This remains an issue even when methods are used (e.g., lattice summation or spherical cutoffs) in which the number of interactions is reduced from O(N²) to O(N log N) or O(N). The interaction forces and energies can either be calculated directly each time they are needed or retrieved using precomputed values in a lookup table; the choice between direct calculation and table lookup methods depends on the characteristics of the system studied (total number of particles and the number of particle kinds) as well as the hardware used (CPU speed, size and speed of cache, and main memory). A recently developed lookup table code, implemented in portable and easily maintained FORTRAN 95 in the CHARMM program (www.charmm.org), achieves a 1.5- to 2-fold speedup compared with standard calculations using highly optimized FORTRAN code in real molecular dynamics simulations for a wide range of molecular system sizes. No approximations other than the finite resolution of the tables are introduced, and linear interpolation in a table with the relatively modest density of 100 points/Å² yields the same accuracy as the standard double precision calculations. For proteins in explicit water a less dense table (10 points/Å²) is 10-20% faster than using the larger table, and only slightly less accurate. The lookup table is even faster than hand coded assembler routines in most cases, mainly due to a significantly smaller operation count inside the inner loop. (c) 2008 Wiley Periodicals, Inc.
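The reason no inverse square root is needed can be made concrete: index the table by the squared distance r², which pair loops already have for free, and tabulate the potential as a function of r². For Lennard-Jones this is exact because (1/r²)⁶ = 1/r¹² and (1/r²)³ = 1/r⁶. The sketch below uses the 100 points/Å² density quoted above; the potential parameters and cutoff are illustrative.

```python
import numpy as np

DENSITY = 100                         # table points per Å²
R2_MAX = 12.0 ** 2                    # squared cutoff, Å²
r2_grid = np.arange(1, int(R2_MAX * DENSITY) + 1) / DENSITY
energy_tab = 4.0 * ((1.0 / r2_grid) ** 6 - (1.0 / r2_grid) ** 3)  # LJ in r^2 form

def lj_lookup(r2):
    """Linear interpolation in the r^2-indexed table: no sqrt anywhere."""
    x = r2 * DENSITY - 1.0            # fractional index into energy_tab
    i = int(x)
    f = x - i
    return (1.0 - f) * energy_tab[i] + f * energy_tab[i + 1]
```

The inner loop is one multiply, one float-to-int conversion, and one linear blend, which is the "significantly smaller operation count" the abstract credits for beating even hand-coded assembler.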

  5. Alignment of ICNP® 2.0 ontology and a proposed INCP® Brazilian ontology.

    PubMed

    Carvalho, Carina Maris Gaspar; Cubas, Marcia Regina; Malucelli, Andreia; Nóbrega, Maria Miriam Lima da

    2014-01-01

The aim of this study was to align the International Classification for Nursing Practice (ICNP®) Version 2.0 ontology and a proposed ICNP® Brazilian Ontology. This document-based, exploratory and descriptive study took the ICNP® 2.0 Ontology and the ICNP® Brazilian Ontology as its empirical basis. The ontology alignment was performed using a computer tool with algorithms that identify correspondences between concepts, which were organized and analyzed according to their presence or absence, their names, and their sibling, parent, and child classes. There were 2,682 concepts present in the ICNP® 2.0 Ontology that were missing in the Brazilian Ontology; 717 concepts present in the Brazilian Ontology were missing in the ICNP® 2.0 Ontology; and there were 215 pairs of matching concepts. It is believed that the correspondences identified in this study may contribute to the interoperability between the representations of nursing practice elements in ICNP®, thus allowing the standardization of nursing records based on this classification system.
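The bookkeeping behind those three result categories (matched pairs, concepts present only in one ontology, concepts present only in the other) can be sketched with set operations over term names; the real alignment tool also compares sibling, parent, and child classes, and the term names below are invented for illustration.

```python
# Toy partition of two ontologies' concept names into matched pairs and
# concepts present in only one of them (exact-name matching only).
icnp_terms = {"pain", "acute pain", "wound", "family"}
brazilian_terms = {"pain", "wound", "community"}

matched = icnp_terms & brazilian_terms          # corresponding concepts
only_icnp = icnp_terms - brazilian_terms        # absent from the Brazilian ontology
only_brazilian = brazilian_terms - icnp_terms   # absent from ICNP
```

Exact-name matching is only the first pass; it is the structural comparison of neighboring classes that lets an alignment tool propose correspondences between differently named concepts.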

  6. Performance of a lookup table-based approach for measuring tissue optical properties with diffuse optical spectroscopy.

    PubMed

    Nichols, Brandon S; Rajaram, Narasimhan; Tunnell, James W

    2012-05-01

    Diffuse optical spectroscopy (DOS) provides a powerful tool for fast and noninvasive disease diagnosis. The ability to leverage DOS to accurately quantify tissue optical parameters hinges on the model used to estimate light-tissue interaction. We describe the accuracy of a lookup table (LUT)-based inverse model for measuring optical properties under different conditions relevant to biological tissue. The LUT is a matrix of reflectance values acquired experimentally from calibration standards of varying scattering and absorption properties. Because it is based on experimental values, the LUT inherently accounts for system response and probe geometry. We tested our approach in tissue phantoms containing multiple absorbers, different sizes of scatterers, and varying oxygen saturation of hemoglobin. The LUT-based model was able to extract scattering and absorption properties under most conditions with errors of less than 5 percent. We demonstrate the validity of the lookup table over a range of source-detector separations from 0.25 to 1.48 mm. Finally, we describe the rapid fabrication of a lookup table using only six calibration standards. This optimized LUT was able to extract scattering and absorption properties with average RMS errors of 2.5 and 4 percent, respectively.

  7. Ontological realism: A methodology for coordinated evolution of scientific ontologies

    PubMed Central

    Smith, Barry; Ceusters, Werner

    2011-01-01

    Since 2002 we have been testing and refining a methodology for ontology development that is now being used by multiple groups of researchers in different life science domains. Gary Merrill, in a recent paper in this journal, describes some of the reasons why this methodology has been found attractive by researchers in the biological and biomedical sciences. At the same time he assails the methodology on philosophical grounds, focusing specifically on our recommendation that ontologies developed for scientific purposes should be constructed in such a way that their terms are seen as referring to what we call universals or types in reality. As we show, Merrill’s critique is of little relevance to the success of our realist project, since it not only reveals no actual errors in our work but also criticizes views on universals that we do not in fact hold. However, it nonetheless provides us with a valuable opportunity to clarify the realist methodology, and to show how some of its principles are being applied, especially within the framework of the OBO (Open Biomedical Ontologies) Foundry initiative. PMID:21637730

  8. The Orthology Ontology: development and applications.

    PubMed

    Fernández-Breis, Jesualdo Tomás; Chiba, Hirokazu; Legaz-García, María Del Carmen; Uchiyama, Ikuo

    2016-06-04

Computational comparative analysis of multiple genomes provides valuable opportunities for biomedical research. In particular, orthology analysis can play a central role in comparative genomics; it helps establish evolutionary relations among the genes of organisms and allows functional inference of gene products. However, the wide variations in current orthology databases necessitate research toward the shareability of content that is generated by different tools and stored in different structures. Exchanging the content with other research communities requires making the meaning of the content explicit. The need for a common ontology has led to the creation of the Orthology Ontology (ORTH) following the best practices in ontology construction. Here, we describe our model and the major entities of the ontology, which is implemented in the Web Ontology Language (OWL), followed by an assessment of the quality of the ontology and the application of ORTH to existing orthology datasets. This shareable ontology enables the development of Linked Orthology Datasets and a meta-predictor of orthology through standardized representation of orthology databases. The ORTH is freely available in OWL format to all users at http://purl.org/net/orth . The Orthology Ontology can serve as a framework for the semantic standardization of orthology content, and it will contribute to a better exploitation of orthology resources in biomedical research. The results demonstrate the feasibility of developing shareable datasets using this ontology. Further applications will maximize the usefulness of this ontology.

  9. Evaluation of research in biomedical ontologies.

    PubMed

    Hoehndorf, Robert; Dumontier, Michel; Gkoutos, Georgios V

    2013-11-01

    Ontologies are now pervasive in biomedicine, where they serve as a means to standardize terminology, to enable access to domain knowledge, to verify data consistency and to facilitate integrative analyses over heterogeneous biomedical data. For this purpose, research on biomedical ontologies applies theories and methods from diverse disciplines such as information management, knowledge representation, cognitive science, linguistics and philosophy. Depending on the desired applications in which ontologies are being applied, the evaluation of research in biomedical ontologies must follow different strategies. Here, we provide a classification of research problems in which ontologies are being applied, focusing on the use of ontologies in basic and translational research, and we demonstrate how research results in biomedical ontologies can be evaluated. The evaluation strategies depend on the desired application and measure the success of using an ontology for a particular biomedical problem. For many applications, the success can be quantified, thereby facilitating the objective evaluation and comparison of research in biomedical ontology. The objective, quantifiable comparison of research results based on scientific applications opens up the possibility for systematically improving the utility of ontologies in biomedical research.

  10. Evaluation of research in biomedical ontologies

    PubMed Central

    Dumontier, Michel; Gkoutos, Georgios V.

    2013-01-01

    Ontologies are now pervasive in biomedicine, where they serve as a means to standardize terminology, to enable access to domain knowledge, to verify data consistency and to facilitate integrative analyses over heterogeneous biomedical data. For this purpose, research on biomedical ontologies applies theories and methods from diverse disciplines such as information management, knowledge representation, cognitive science, linguistics and philosophy. Depending on the desired applications in which ontologies are being applied, the evaluation of research in biomedical ontologies must follow different strategies. Here, we provide a classification of research problems in which ontologies are being applied, focusing on the use of ontologies in basic and translational research, and we demonstrate how research results in biomedical ontologies can be evaluated. The evaluation strategies depend on the desired application and measure the success of using an ontology for a particular biomedical problem. For many applications, the success can be quantified, thereby facilitating the objective evaluation and comparison of research in biomedical ontology. The objective, quantifiable comparison of research results based on scientific applications opens up the possibility for systematically improving the utility of ontologies in biomedical research. PMID:22962340

  11. Revealing ontological commitments by magic.

    PubMed

    Griffiths, Thomas L

    2015-03-01

    Considering the appeal of different magical transformations exposes some systematic asymmetries. For example, it is more interesting to transform a vase into a rose than a rose into a vase. An experiment in which people judged how interesting they found different magic tricks showed that these asymmetries reflect the direction a transformation moves in an ontological hierarchy: transformations in the direction of animacy and intelligence are favored over the opposite. A second and third experiment demonstrated that judgments of the plausibility of machines that perform the same transformations do not show the same asymmetries, but judgments of the interestingness of such machines do. A formal argument relates this sense of interestingness to evidence for an alternative to our current physical theory, with magic tricks being a particularly pure source of such evidence. These results suggest that people's intuitions about magic tricks can reveal the ontological commitments that underlie human cognition.

  12. Ontology Reuse in Geoscience Semantic Applications

    NASA Astrophysics Data System (ADS)

    Mayernik, M. S.; Gross, M. B.; Daniels, M. D.; Rowan, L. R.; Stott, D.; Maull, K. E.; Khan, H.; Corson-Rikert, J.

    2015-12-01

The tension between local ontology development and wider ontology connections is fundamental to the Semantic Web. It is often unclear, however, what the key decision points should be for new Semantic Web applications in deciding when to reuse existing ontologies and when to develop original ontologies. In addition, with the growth of Semantic Web ontologies and applications, new applications can struggle to efficiently and effectively identify and select ontologies to reuse. This presentation will describe the ontology comparison, selection, and consolidation effort within the EarthCollab project. UCAR, Cornell University, and UNAVCO are collaborating on the EarthCollab project to use Semantic Web technologies to enable the discovery of the research output from a diverse array of projects. The EarthCollab project is using the VIVO Semantic Web software suite to increase discoverability of research information and data related to the following two geoscience-based communities: (1) the Bering Sea Project, an interdisciplinary field program whose data archive is hosted by NCAR's Earth Observing Laboratory (EOL), and (2) diverse research projects informed by geodesy through the UNAVCO geodetic facility and consortium. This presentation will outline the EarthCollab use cases and provide an overview of key ontologies being used, including the VIVO-Integrated Semantic Framework (VIVO-ISF), Global Change Information System (GCIS), and Data Catalog (DCAT) ontologies. We will discuss issues related to bringing these ontologies together to provide a robust ontological structure to support the EarthCollab use cases. It is rare that a single pre-existing ontology meets all of a new application's needs. New projects need to stitch ontologies together in ways that fit into the broader Semantic Web ecosystem.

  13. Representing COA with Probabilistic Ontologies

    DTIC Science & Technology

    2011-06-01

in utility measures, may be combined with probabilities (RUSSEL; NORVIG, 2002). Ontologies have been proposed as a tool to better express a domain...all non mentioned literals are unknown (RUSSEL; NORVIG, 2002) and must be described in the context nodes. Thus, all available information should be...Information Systems: Meeting the Challenge of the Knowledge Era. Greenwood Publishing Group, 1996. 183 p. RUSSEL, S.; NORVIG, P. Artificial

  14. Design and optimization of color lookup tables on a simplex topology.

    PubMed

    Monga, Vishal; Bala, Raja; Mo, Xuan

    2012-04-01

An important computational problem in color imaging is the design of color transforms that map color between devices or from a device-dependent space (e.g., RGB/CMYK) to a device-independent space (e.g., CIELAB) and vice versa. Real-time processing constraints entail that such nonlinear color transforms be implemented using multidimensional lookup tables (LUTs). Furthermore, relatively sparse LUTs (with efficient interpolation) are employed in practice because of storage and memory constraints. This paper presents a principled design methodology rooted in constrained convex optimization to design color LUTs on a simplex topology. The use of n-simplexes, i.e., simplexes in n dimensions, as opposed to traditional lattices, has recently been of great interest in color LUT design because simplex topologies allow both more analytically tractable formulations and greater efficiency in the LUT. In this framework of n-simplex interpolation, our central contribution is to develop an elegant iterative algorithm that jointly optimizes the placement of nodes of the color LUT and the output values at those nodes to minimize interpolation error in an expected sense. This is in contrast to existing work, which exclusively designs either node locations or the output values. We also develop new analytical results for the problem of node location optimization, which reduces to constrained optimization of a large but sparse interpolation matrix in our framework. We evaluate our n-simplex color LUTs against the state-of-the-art lattice (e.g., International Color Consortium profiles) and simplex-based techniques for approximating two representative multidimensional color transforms that characterize a CMYK xerographic printer and an RGB scanner, respectively. The results show that color LUTs designed on simplexes offer very significant benefits over traditional lattice-based alternatives in improving color transform accuracy even with a much smaller number of nodes.
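Simplex interpolation itself is just a barycentric-weighted mix of the values stored at the simplex's nodes. A minimal sketch for a 2-simplex (a triangle), with invented node positions and output values:

```python
import numpy as np

# A 2-simplex LUT cell: three nodes and the LUT output stored at each.
nodes = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])   # triangle vertices
values = np.array([0.0, 10.0, 20.0])                     # outputs at nodes

def simplex_interp(p):
    """Solve for barycentric weights w (which sum to 1 and reproduce p),
    then mix the node values with those weights."""
    A = np.vstack([nodes.T, np.ones(3)])     # 3x3 system: position + sum-to-1
    b = np.append(p, 1.0)
    w = np.linalg.solve(A, b)
    return float(w @ values)
```

In n dimensions the same idea uses n+1 nodes per simplex, versus 2^n nodes per cell for lattice (multilinear) interpolation, which is one source of the efficiency advantage claimed above; the paper's contribution is then jointly optimizing where the nodes sit and what values they store.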

  15. Global Aerosol Optical Models and Lookup Tables for the New MODIS Aerosol Retrieval over Land

    NASA Technical Reports Server (NTRS)

    Levy, Robert C.; Remer, Loraine A.; Dubovik, Oleg

    2007-01-01

    Since 2000, MODIS has been deriving aerosol properties over land from MODIS observed spectral reflectance, by matching the observed reflectance with that simulated for selected aerosol optical models, aerosol loadings, wavelengths and geometrical conditions (that are contained in a lookup table or 'LUT'). Validation exercises have shown that MODIS tends to under-predict aerosol optical depth (tau) in cases of large tau (tau greater than 1.0), signaling errors in the assumed aerosol optical properties. Using the climatology of almucantar retrievals from the hundreds of global AERONET sunphotometer sites, we found that three spherical-derived models (describing fine-sized dominated aerosol), and one spheroid-derived model (describing coarse-sized dominated aerosol, presumably dust) generally described the range of observed global aerosol properties. The fine dominated models were separated mainly by their single scattering albedo (omega(sub 0)), ranging from non-absorbing aerosol (omega(sub 0) approx. 0.95) in developed urban/industrial regions, to neutrally absorbing aerosol (omega(sub 0) approx. 0.90) in forest fire burning and developing industrial regions, to absorbing aerosol (omega(sub 0) approx. 0.85) in regions of savanna/grassland burning. We determined the dominant model type in each region and season, to create a 1 deg. x 1 deg. grid of assumed aerosol type. We used a vector radiative transfer code to create a new LUT, simulating the four aerosol models, in four MODIS channels. Independent AERONET observations of spectral tau agree with the new models, indicating that the new models are suitable for use by the MODIS aerosol retrieval.
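
    At its core, the retrieval step described here is a table inversion: find the aerosol loading whose simulated reflectance best matches the observation. A minimal single-channel sketch (the tau grid and reflectance values below are invented for illustration; the real LUT spans four channels, four aerosol models, and many viewing geometries):

    ```python
    # Hypothetical single-channel LUT: aerosol loading (tau) -> simulated
    # top-of-atmosphere reflectance. Values are made up but monotone.
    LUT_TAU = [0.0, 0.25, 0.5, 1.0, 2.0, 3.0]
    LUT_REFLECTANCE = [0.05, 0.09, 0.13, 0.20, 0.31, 0.39]

    def retrieve_tau(observed, taus=LUT_TAU, refl=LUT_REFLECTANCE):
        """Invert the LUT: find the tau whose simulated reflectance matches
        the observation, linearly interpolating between table nodes and
        clamping at the table ends."""
        if observed <= refl[0]:
            return taus[0]
        if observed >= refl[-1]:
            return taus[-1]
        for i in range(len(refl) - 1):
            if refl[i] <= observed <= refl[i + 1]:
                f = (observed - refl[i]) / (refl[i + 1] - refl[i])
                return taus[i] + f * (taus[i + 1] - taus[i])

    print(retrieve_tau(0.165))  # falls between the tau=0.5 and tau=1.0 nodes
    ```

    A multi-channel retrieval would instead minimize a residual over all channels simultaneously, but the interpolation between LUT nodes is the same idea.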

  16. Definition of an Ontology Matching Algorithm for Context Integration in Smart Cities

    PubMed Central

    Otero-Cerdeira, Lorena; Rodríguez-Martínez, Francisco J.; Gómez-Rodríguez, Alma

    2014-01-01

    In this paper we describe a novel proposal in the field of smart cities: using an ontology matching algorithm to guarantee the automatic information exchange between the agents and the smart city. A smart city is composed of different types of agents that behave as producers and/or consumers of the information in the smart city. In our proposal, the data from the context is obtained by sensor and device agents, while users interact with the smart city by means of user or system agents. The knowledge of each agent, as well as the smart city's knowledge, is semantically represented using different ontologies. To have an open city that is fully accessible to any agent, and therefore to provide enhanced services to the users, there is a need to ensure seamless communication between agents and the city, regardless of their inner knowledge representations, i.e., ontologies. To meet this goal we use ontology matching techniques; specifically, we have defined a new ontology matching algorithm called OntoPhil to be deployed within a smart city, which has never been done before. OntoPhil was tested on the benchmarks provided by the well-known evaluation initiative, the Ontology Alignment Evaluation Initiative, and also compared to other matching algorithms, although these algorithms were not specifically designed for smart cities. Additionally, specific tests involving a smart city's ontology and different types of agents were conducted to validate the usefulness of OntoPhil in the smart city environment. PMID:25494353

  17. Definition of an Ontology Matching Algorithm for Context Integration in Smart Cities.

    PubMed

    Otero-Cerdeira, Lorena; Rodríguez-Martínez, Francisco J; Gómez-Rodríguez, Alma

    2014-12-08

    In this paper we describe a novel proposal in the field of smart cities: using an ontology matching algorithm to guarantee the automatic information exchange between the agents and the smart city. A smart city is composed of different types of agents that behave as producers and/or consumers of the information in the smart city. In our proposal, the data from the context is obtained by sensor and device agents, while users interact with the smart city by means of user or system agents. The knowledge of each agent, as well as the smart city's knowledge, is semantically represented using different ontologies. To have an open city that is fully accessible to any agent, and therefore to provide enhanced services to the users, there is a need to ensure seamless communication between agents and the city, regardless of their inner knowledge representations, i.e., ontologies. To meet this goal we use ontology matching techniques; specifically, we have defined a new ontology matching algorithm called OntoPhil to be deployed within a smart city, which has never been done before. OntoPhil was tested on the benchmarks provided by the well-known evaluation initiative, the Ontology Alignment Evaluation Initiative, and also compared to other matching algorithms, although these algorithms were not specifically designed for smart cities. Additionally, specific tests involving a smart city's ontology and different types of agents were conducted to validate the usefulness of OntoPhil in the smart city environment.
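
    OntoPhil's actual algorithm is not given in the abstract; as a generic illustration of the lexical side of ontology matching, the sketch below aligns class labels from a hypothetical smart-city ontology with those of an agent ontology using a character-trigram Dice similarity. All labels and the threshold are invented:

    ```python
    def trigrams(s):
        """Padded, case- and underscore-normalized character trigrams."""
        s = "  " + s.lower().replace("_", " ").strip() + "  "
        return {s[i:i + 3] for i in range(len(s) - 2)}

    def label_similarity(a, b):
        """Dice coefficient over the trigram sets of two class labels."""
        ta, tb = trigrams(a), trigrams(b)
        return 2 * len(ta & tb) / (len(ta) + len(tb))

    def align(labels_a, labels_b, threshold=0.45):
        """Greedy one-to-one alignment: take the highest-scoring unmatched
        pair above the threshold until no candidates remain."""
        pairs = sorted(((label_similarity(a, b), a, b)
                        for a in labels_a for b in labels_b), reverse=True)
        used_a, used_b, matches = set(), set(), []
        for score, a, b in pairs:
            if score >= threshold and a not in used_a and b not in used_b:
                matches.append((a, b, round(score, 2)))
                used_a.add(a)
                used_b.add(b)
        return matches

    city = ["TemperatureSensor", "ParkingSpot", "BusLine"]
    agent = ["temperature_sensor", "parking_space", "tram_line"]
    print(align(city, agent))
    ```

    A real matcher such as OntoPhil also exploits structural relations between classes, not just labels; this sketch shows only the lexical bootstrap step common to many systems.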

  18. The ChEBI reference database and ontology for biologically relevant chemistry: enhancements for 2013.

    PubMed

    Hastings, Janna; de Matos, Paula; Dekker, Adriano; Ennis, Marcus; Harsha, Bhavana; Kale, Namrata; Muthukrishnan, Venkatesh; Owen, Gareth; Turner, Steve; Williams, Mark; Steinbeck, Christoph

    2013-01-01

    ChEBI (http://www.ebi.ac.uk/chebi) is a database and ontology of chemical entities of biological interest. Over the past few years, ChEBI has continued to grow steadily in content, and has added several new features. In addition to incorporating all user-requested compounds, our annotation efforts have emphasized immunology, natural products and metabolites in many species. All database entries are now 'is_a' classified within the ontology, meaning that all of the chemicals are available to semantic reasoning tools that harness the classification hierarchy. We have completely aligned the ontology with the Open Biomedical Ontologies (OBO) Foundry-recommended upper level Basic Formal Ontology. Furthermore, we have aligned our chemical classification with the classification of chemical-involving processes in the Gene Ontology (GO), and as a result of this effort, the majority of chemical-involving processes in GO are now defined in terms of the ChEBI entities that participate in them. This effort necessitated incorporating many additional biologically relevant compounds. We have incorporated additional data types including reference citations, and the species and component for metabolites. Finally, our website and web services have had several enhancements, most notably the provision of a dynamic new interactive graph-based ontology visualization.

  19. The ChEBI reference database and ontology for biologically relevant chemistry: enhancements for 2013

    PubMed Central

    Hastings, Janna; de Matos, Paula; Dekker, Adriano; Ennis, Marcus; Harsha, Bhavana; Kale, Namrata; Muthukrishnan, Venkatesh; Owen, Gareth; Turner, Steve; Williams, Mark; Steinbeck, Christoph

    2013-01-01

    ChEBI (http://www.ebi.ac.uk/chebi) is a database and ontology of chemical entities of biological interest. Over the past few years, ChEBI has continued to grow steadily in content, and has added several new features. In addition to incorporating all user-requested compounds, our annotation efforts have emphasized immunology, natural products and metabolites in many species. All database entries are now ‘is_a’ classified within the ontology, meaning that all of the chemicals are available to semantic reasoning tools that harness the classification hierarchy. We have completely aligned the ontology with the Open Biomedical Ontologies (OBO) Foundry-recommended upper level Basic Formal Ontology. Furthermore, we have aligned our chemical classification with the classification of chemical-involving processes in the Gene Ontology (GO), and as a result of this effort, the majority of chemical-involving processes in GO are now defined in terms of the ChEBI entities that participate in them. This effort necessitated incorporating many additional biologically relevant compounds. We have incorporated additional data types including reference citations, and the species and component for metabolites. Finally, our website and web services have had several enhancements, most notably the provision of a dynamic new interactive graph-based ontology visualization. PMID:23180789

  20. Discovery, Integration, and Analysis (DIA) Engine for Ontologically Registered Earth Science Data

    NASA Astrophysics Data System (ADS)

    Sinha, A.; Malik, Z.; Rezgui, A.; Dalton, A.; Lin, K.

    2006-12-01

    A newly developed DIA engine within the NSF supported GEON program utilizes an ontologic cyberinfrastructure framework for discovery, integration, and analysis of earth science data. Data discovery is commonly challenging because of the use of personalized acronyms, notations, conventions, etc., but can be simplified through ontologic registration. Data integration enables users to extract new information, called data products, by jointly considering and correlating several ontologically registered data sets. We have developed ontology packages as well as accessed ontologies such as SWEET, which provide concepts, concept taxonomies, relationships between concepts, and properties, as an initial step towards the development of complete heavyweight ontologies (with axioms and constraints) for earth science. The primary objective is to allow researchers to associate an ontology with their data, so that a unique and definite meaning is associated with each data item. This facilitates data discovery and integration by relating data items with similar semantics across various repositories. The DIA engine provides a Web accessible graphical user interface (GUI) comprising map services and query menus. Users can specify a "geological region of interest" by making selections on geologic maps which are part of the GUI. Moreover, interactive menus enable filtering, discovery and integration of data (geospatial as well as aspatial), using many tools, including those developed by the community. We use Web services technology to share these tools, since web services hide the tool implementation details and only provide the required invocation details (input/output parameters, etc.). Thus, geoscientists can build tools that access ontologically registered data and provide invocation details publicly. Therefore, any tool that is developed as a Web service can be plugged into the DIA engine. The DIA engine supports dynamic data product creation which requires "on

  1. Improvements to cardiovascular gene ontology.

    PubMed

    Lovering, Ruth C; Dimmer, Emily C; Talmud, Philippa J

    2009-07-01

    Gene Ontology (GO) provides a controlled vocabulary to describe the attributes of genes and gene products in any organism. Although one might initially wonder what relevance a 'controlled vocabulary' might have for cardiovascular science, such a resource is proving highly useful for researchers investigating complex cardiovascular disease phenotypes as well as those interpreting results from high-throughput methodologies. GO enables the current functional knowledge of individual genes to be used to annotate genomic or proteomic datasets. In this way, the GO data provides a very effective way of linking biological knowledge with the analysis of the large datasets of post-genomics research. Consequently, users of high-throughput methodologies such as expression arrays or proteomics will be the main beneficiaries of such annotation sets. However, as GO annotations increase in quality and quantity, groups using small-scale approaches will gradually begin to benefit too. For example, genome wide association scans for coronary heart disease are identifying novel genes, with previously unknown connections to cardiovascular processes, and the comprehensive annotation of these novel genes might provide clues to their cardiovascular link. At least 4000 genes, to date, have been implicated in cardiovascular processes and an initiative is underway to focus on annotating these genes for the benefit of the cardiovascular community. In this article we review the current uses of Gene Ontology annotation to highlight why Gene Ontology should be of interest to all those involved in cardiovascular research.

  2. The RNA structure alignment ontology

    PubMed Central

    Brown, James W.; Birmingham, Amanda; Griffiths, Paul E.; Jossinet, Fabrice; Kachouri-Lafond, Rym; Knight, Rob; Lang, B. Franz; Leontis, Neocles; Steger, Gerhard; Stombaugh, Jesse; Westhof, Eric

    2009-01-01

    Multiple sequence alignments are powerful tools for understanding the structures, functions, and evolutionary histories of linear biological macromolecules (DNA, RNA, and proteins), and for finding homologs in sequence databases. We address several ontological issues related to RNA sequence alignments that are informed by structure. Multiple sequence alignments are usually shown as two-dimensional (2D) matrices, with rows representing individual sequences, and columns identifying nucleotides from different sequences that correspond structurally, functionally, and/or evolutionarily. However, the requirement that sequences and structures correspond nucleotide-by-nucleotide is unrealistic and hinders representation of important biological relationships. High-throughput sequencing efforts are also rapidly making 2D alignments unmanageable because of vertical and horizontal expansion as more sequences are added. Solving the shortcomings of traditional RNA sequence alignments requires explicit annotation of the meaning of each relationship within the alignment. We introduce the notion of “correspondence,” which is an equivalence relation between RNA elements in sets of sequences as the basis of an RNA alignment ontology. The purpose of this ontology is twofold: first, to enable the development of new representations of RNA data and of software tools that resolve the expansion problems with current RNA sequence alignments, and second, to facilitate the integration of sequence data with secondary and three-dimensional structural information, as well as other experimental information, to create simultaneously more accurate and more exploitable RNA alignments. PMID:19622678
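
    Because a correspondence is an equivalence relation, sets of corresponding RNA elements can be maintained as explicit equivalence classes rather than alignment columns, e.g. with a union-find structure. The sequence names and positions below are invented for illustration:

    ```python
    class Correspondence:
        """Equivalence classes of (sequence, position) RNA elements,
        maintained with a union-find (disjoint-set) structure."""

        def __init__(self):
            self.parent = {}

        def _find(self, x):
            self.parent.setdefault(x, x)
            while self.parent[x] != x:
                self.parent[x] = self.parent[self.parent[x]]  # path halving
                x = self.parent[x]
            return x

        def relate(self, a, b):
            """Declare two elements structurally/evolutionarily equivalent."""
            self.parent[self._find(a)] = self._find(b)

        def classes(self):
            """All equivalence classes declared so far."""
            groups = {}
            for x in self.parent:
                groups.setdefault(self._find(x), set()).add(x)
            return list(groups.values())

    c = Correspondence()
    c.relate(("tRNA_Ec", 34), ("tRNA_Sc", 34))  # same position in two sequences
    c.relate(("tRNA_Sc", 34), ("tRNA_Hs", 33))  # an insertion shifts numbering
    print(c.classes())  # one class containing all three positions
    ```

    Transitivity comes for free: relating Ec↔Sc and Sc↔Hs puts all three elements in one class, which a rigid column-per-nucleotide matrix cannot express when numbering differs between sequences.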

  3. A framework for lipoprotein ontology.

    PubMed

    Chen, Meifania; Hadzic, Maja

    2011-01-01

    Clinical and epidemiological studies have established a significant correlation between abnormal plasma lipoprotein levels and cardiovascular disease, which remains the leading cause of mortality in the world today. In addition, lipoprotein dysregulation, known as dyslipidemia, is a central feature in disease states, such as diabetes and hypertension, which increases the risk of cardiovascular disease. While a corpus of literature exists on different areas of lipoprotein research, one of the major challenges that researchers face is the difficulties in accessing and integrating relevant information amidst massive quantities of heterogeneous data. Semantic web technologies, specifically ontologies, target these problems by providing an organizational framework of the concepts involved in a system of related instances to support systematic querying of information. In this paper, we identify issues within the lipoprotein research domain and present a preliminary framework for Lipoprotein Ontology, which consists of five specific areas of lipoprotein research: Classification, Metabolism, Pathophysiology, Etiology, and Treatment. By integrating specific aspects of lipoprotein research, Lipoprotein Ontology will provide the basis for the design of various applications to enable interoperability between research groups or software agents, as well as the development of tools for the diagnosis and treatment of dyslipidemia.

  4. The Gene Ontology (GO) Cellular Component Ontology: integration with SAO (Subcellular Anatomy Ontology) and other recent developments

    PubMed Central

    2013-01-01

    Background The Gene Ontology (GO) (http://www.geneontology.org/) contains a set of terms for describing the activity and actions of gene products across all kingdoms of life. Each of these activities is executed in a location within a cell or in the vicinity of a cell. In order to capture this context, the GO includes a sub-ontology called the Cellular Component (CC) ontology (GO-CCO). The primary use of this ontology is for GO annotation, but it has also been used for phenotype annotation, and for the annotation of images. Another ontology with similar scope to the GO-CCO is the Subcellular Anatomy Ontology (SAO), part of the Neuroscience Information Framework Standard (NIFSTD) suite of ontologies. The SAO also covers cell components, but in the domain of neuroscience. Description Recently, the GO-CCO was enriched in content and links to the Biological Process and Molecular Function branches of GO as well as to other ontologies. This was achieved in several ways. We carried out an amalgamation of SAO terms with GO-CCO ones; as a result, nearly 100 new neuroscience-related terms were added to the GO. The GO-CCO also contains relationships to GO Biological Process and Molecular Function terms, as well as connecting to external ontologies such as the Cell Ontology (CL). Terms representing protein complexes in the Protein Ontology (PRO) reference GO-CCO terms for their species-generic counterparts. GO-CCO terms can also be used to search a variety of databases. Conclusions In this publication we provide an overview of the GO-CCO, its overall design, and some recent extensions that make use of additional spatial information. One of the most recent developments of the GO-CCO was the merging in of the SAO, resulting in a single unified ontology designed to serve the needs of GO annotators as well as the specific needs of the neuroscience community. PMID:24093723

  5. The Gene Ontology (GO) Cellular Component Ontology: integration with SAO (Subcellular Anatomy Ontology) and other recent developments.

    PubMed

    Roncaglia, Paola; Martone, Maryann E; Hill, David P; Berardini, Tanya Z; Foulger, Rebecca E; Imam, Fahim T; Drabkin, Harold; Mungall, Christopher J; Lomax, Jane

    2013-10-07

    The Gene Ontology (GO) (http://www.geneontology.org/) contains a set of terms for describing the activity and actions of gene products across all kingdoms of life. Each of these activities is executed in a location within a cell or in the vicinity of a cell. In order to capture this context, the GO includes a sub-ontology called the Cellular Component (CC) ontology (GO-CCO). The primary use of this ontology is for GO annotation, but it has also been used for phenotype annotation, and for the annotation of images. Another ontology with similar scope to the GO-CCO is the Subcellular Anatomy Ontology (SAO), part of the Neuroscience Information Framework Standard (NIFSTD) suite of ontologies. The SAO also covers cell components, but in the domain of neuroscience. Recently, the GO-CCO was enriched in content and links to the Biological Process and Molecular Function branches of GO as well as to other ontologies. This was achieved in several ways. We carried out an amalgamation of SAO terms with GO-CCO ones; as a result, nearly 100 new neuroscience-related terms were added to the GO. The GO-CCO also contains relationships to GO Biological Process and Molecular Function terms, as well as connecting to external ontologies such as the Cell Ontology (CL). Terms representing protein complexes in the Protein Ontology (PRO) reference GO-CCO terms for their species-generic counterparts. GO-CCO terms can also be used to search a variety of databases. In this publication we provide an overview of the GO-CCO, its overall design, and some recent extensions that make use of additional spatial information. One of the most recent developments of the GO-CCO was the merging in of the SAO, resulting in a single unified ontology designed to serve the needs of GO annotators as well as the specific needs of the neuroscience community.

  6. C2 Domain Ontology within Our Lifetime

    DTIC Science & Technology

    2009-06-01

    process or an event). [21] Figure 1, adopted from [16], depicts the concept of ontological levels for a post office application based on the Husserl ...University, NJ, 2008. [23] Husserl, E., Id Macmillan. 1931. [24] Basic Formal Ontology (multiple references and artifacts): http://www.ifomis.org/bfo/BFO...Applied Ontology An Introduction, pp 39-56, Transaction Books, Rutgers University, NJ, 2008. [23] Husserl, E., Ideas: General Introduction to Pure

  7. Instance testing of the family history ontology.

    PubMed

    Peace, Jane; Brennan, Patricia Flatley; Brennan, Patti

    2008-11-06

    The Family History Ontology formalizes nursing conceptualization about family and family history. Traditional methods of instance testing were applied to evaluate the completeness of the ontology and demonstrated favorable domain coverage. Testing also revealed a need for a new category of instance test results, "by inference", for data that can be represented through the use of inference rules associated with the ontology rather than requiring direct manual entry.

  8. Course of Action Ontology for Counterinsurgency Operations

    DTIC Science & Technology

    2010-06-01

    Knowledge-Based Systems, Inc. 2 Course of Action Ontology for Counterinsurgency Operations Timothy P. Darr, Ph.D., Perakath Benjamin, Ph.D., and...Working Paper, June 2009. [Darr2009] Darr, T. P., Benjamin, P. and Mayer, R., "Course of Action Planning Ontology", Ontology for the Intelligence...Timothy Darr, Perakath Benjamin, Richard Mayer Knowledge Based Systems, Inc. This work was supported by the Office of Naval Research under Contract N00014

  9. A Marketplace for Ontologies and Ontology-Based Tools and Applications in the Life Sciences

    SciTech Connect

    McEntire, R; Goble, C; Stevens, R; Neumann, E; Matuszek, P; Critchlow, T; Tarczy-Hornoch, P

    2005-06-30

    This paper describes a strategy for the development of ontologies in the life sciences, tools to support the creation and use of those ontologies, and a framework whereby these ontologies can support the development of commercial applications within the field. At the core of these efforts is the need for an organization that will provide a focus for ontology work that will engage researchers as well as drive forward the commercial aspects of this effort.

  10. Ontorat: automatic generation of new ontology terms, annotations, and axioms based on ontology design patterns.

    PubMed

    Xiang, Zuoshuang; Zheng, Jie; Lin, Yu; He, Yongqun

    2015-01-01

    It is time-consuming to build an ontology with many terms and axioms, so it is desirable to automate parts of the ontology development process. Ontology Design Patterns (ODPs) provide a reusable solution to a recurrent modeling problem in the context of ontology engineering. Because ontology terms often follow specific ODPs, the Ontology for Biomedical Investigations (OBI) developers proposed a Quick Term Templates (QTTs) process targeted at generating new ontology classes following the same pattern, using term templates in a spreadsheet format. Inspired by ODPs and QTTs, the Ontorat web application was developed to automatically generate new ontology terms, annotations of terms, and logical axioms based on one or more specific ODPs. The inputs of an Ontorat execution include axiom expression settings, an input data file, ID generation settings, and a target ontology (optional). The axiom expression settings can be saved as a predesigned Ontorat setting format text file for reuse. The input data file is generated based on a template file created by a specific ODP (text or Excel format). Ontorat is an efficient tool for ontology expansion. Different use cases are described. For example, Ontorat was applied to automatically generate over 1,000 Japan RIKEN cell line cell terms with both logical axioms and rich annotation axioms in the Cell Line Ontology (CLO). Approximately 800 licensed animal vaccines were represented and annotated in the Vaccine Ontology (VO) by Ontorat. The OBI team used Ontorat to add assay and device terms required by the ENCODE project. Ontorat was also used to add missing annotations to all existing Biobank-specific terms in the Biobank Ontology. A collection of ODPs and templates with examples are provided on the Ontorat website and can be reused to facilitate ontology development. With ever increasing ontology development and applications, Ontorat provides a timely platform for generating and annotating a large number of ontology terms by following
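
    The template-expansion idea behind QTTs and Ontorat can be sketched as follows: each row of tabular input fills the placeholders of a design-pattern template, yielding one new class with its annotations and axioms. The Manchester-style template, the EX prefix, and the derives_from relation here are illustrative assumptions, not Ontorat's actual syntax:

    ```python
    # Hypothetical design-pattern template: each {placeholder} is filled
    # from one row of tabular input, minting one new ontology class per row.
    TEMPLATE = """Class: {id}
      Label: {label}
      SubClassOf: {parent}
      SubClassOf: derives_from some {source}"""

    def expand(rows, template=TEMPLATE, prefix="EX", start=1):
        """Generate one axiom block per row, minting sequential IDs."""
        out = []
        for i, row in enumerate(rows, start):
            out.append(template.format(id=f"{prefix}:{i:07d}", **row))
        return out

    rows = [
        {"label": "HeLa cell line cell", "parent": "cell line cell",
         "source": "Homo sapiens"},
        {"label": "CHO cell line cell", "parent": "cell line cell",
         "source": "Cricetulus griseus"},
    ]
    for block in expand(rows):
        print(block)
    ```

    Scaling the same loop over a thousand-row spreadsheet is what makes template-driven expansion practical for cases like the CLO cell line terms mentioned above.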

  11. Ontological concept extraction based on image understanding and describing of remote sensing domain

    NASA Astrophysics Data System (ADS)

    Zhong, Liang; Ma, Hongchao; Liu, Pengfei

    2007-11-01

    When ontological theory is used to build remote sensing image knowledge systems, most scholars to date have treated ontology as a logical theory for defining the objects, attribute relations, events, and processes of the remote sensing knowledge system. In understanding and describing the real world, however, such a logical theory cannot unify concepts that share the same practical meaning but come from different concept models, which hinders knowledge delivery and sharing in grid service systems. Solving this issue requires further improvement of the methods and models used to define concepts in an ontology. This paper presents a neural-network model for extracting ontological concepts from remote sensing images based on image understanding and description; it draws on the theory of bionic optimization and combines an artificial neural network with a rule-based knowledge recognition system. The aim is to make knowledge delivery and sharing among different information systems, and between clients and the system, possible and effective.

  12. Scientific Digital Libraries, Interoperability, and Ontologies

    NASA Technical Reports Server (NTRS)

    Hughes, J. Steven; Crichton, Daniel J.; Mattmann, Chris A.

    2009-01-01

    Scientific digital libraries serve complex and evolving research communities. Justifications for the development of scientific digital libraries include the desire to preserve science data and the promises of information interconnectedness, correlative science, and system interoperability. Shared ontologies are fundamental to fulfilling these promises. We present a tool framework, some informal principles, and several case studies where shared ontologies are used to guide the implementation of scientific digital libraries. The tool framework, based on an ontology modeling tool, was configured to develop, manage, and keep shared ontologies relevant within changing domains and to promote the interoperability, interconnectedness, and correlation desired by scientists.

  13. Vaccine and Drug Ontology Studies (VDOS 2014).

    PubMed

    Tao, Cui; He, Yongqun; Arabandi, Sivaram

    2016-01-01

    The "Vaccine and Drug Ontology Studies" (VDOS) international workshop series focuses on vaccine- and drug-related ontology modeling and applications. Drugs and vaccines have been critical to prevent and treat human and animal diseases. Work in the two areas is closely related, from preclinical research and development to manufacturing, clinical trials, government approval and regulation, and post-licensure usage surveillance and monitoring. Over the last decade, tremendous efforts have been made in the biomedical ontology community to ontologically represent various areas associated with vaccines and drugs: extending existing clinical terminology systems such as SNOMED, RxNorm, NDF-RT, and MedDRA; developing new models such as the Vaccine Ontology (VO) and the Ontology of Adverse Events (OAE); and incorporating vernacular medical terminologies such as the Consumer Health Vocabulary (CHV). The VDOS workshop series provides a platform for discussing innovative solutions as well as the challenges in the development and application of biomedical ontologies for representing and analyzing drugs and vaccines, their administration, host immune responses, adverse events, and other related topics. The five full-length papers included in this 2014 thematic issue focus on two main themes: (i) general vaccine/drug-related ontology development and exploration, and (ii) interaction and network-related ontology studies.

  14. Predicting the extension of biomedical ontologies.

    PubMed

    Pesquita, Catia; Couto, Francisco M

    2012-01-01

    Developing and extending a biomedical ontology is a very demanding task that can never be considered complete, given our ever-evolving understanding of the life sciences. Extension in particular can benefit from the automation of some of its steps, thus freeing experts to focus on harder tasks. Here we present a strategy to support the automation of change capturing within ontology extension, i.e., identifying where new concepts or relations are needed. Our strategy is based on predicting areas of an ontology that will undergo extension in a future version by applying supervised learning over features of previous ontology versions. We used the Gene Ontology as our test bed and obtained encouraging results, with average f-measure reaching 0.79 for a subset of biological process terms. Our strategy was also able to outperform state-of-the-art change-capturing methods. In addition we have identified several issues concerning prediction of ontology evolution, and have delineated a general framework for ontology extension prediction. Our strategy can be applied to any biomedical ontology with versioning, to help focus either manual or semi-automated extension methods on areas of the ontology that need extension.
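
    As a toy illustration of the general setup (not the paper's features or learner, which used the Gene Ontology and richer version-history features), extension prediction can be framed as supervised classification over per-term features extracted from earlier releases:

    ```python
    # Toy illustration: predict whether an ontology term will gain children
    # in the next release, from features of earlier releases, using a simple
    # perceptron as the supervised learner. All data below is invented.

    def train_perceptron(samples, labels, epochs=100, lr=0.1):
        w = [0.0] * (len(samples[0]) + 1)  # last weight is the bias
        for _ in range(epochs):
            for x, y in zip(samples, labels):
                pred = 1 if sum(wi * xi for wi, xi in zip(w, x + [1.0])) > 0 else 0
                err = y - pred
                w = [wi + lr * err * xi for wi, xi in zip(w, x + [1.0])]
        return w

    def predict(w, x):
        return 1 if sum(wi * xi for wi, xi in zip(w, x + [1.0])) > 0 else 0

    # Features per term: [children added in last release, current child count]
    history = [[3.0, 10.0], [0.0, 2.0], [2.0, 7.0],
               [0.0, 1.0], [1.0, 8.0], [0.0, 3.0]]
    extended = [1, 0, 1, 0, 1, 0]  # did the term grow in the next release?
    w = train_perceptron(history, extended)
    print(predict(w, [2.0, 9.0]), predict(w, [0.0, 2.0]))
    ```

    The paper's contribution lies in which features of past versions carry predictive signal; any off-the-shelf classifier can then play the role of the perceptron here.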

  15. Creating a magnetic resonance imaging ontology

    PubMed Central

    Lasbleiz, Jérémy; Saint-Jalmes, Hervé; Duvauferrier, Régis; Burgun, Anita

    2011-01-01

    The goal of this work is to build an ontology of Magnetic Resonance Imaging. The MRI domain has been analysed with regard to MRI simulators and the DICOM standard. Two MRI simulators have been analysed: JEMRIS, developed in XML and C++, which has a hierarchical organisation, and SIMRI, developed in C, which represents MRI physical processes well. To build the ontology we used Protégé 4 and OWL 2, which allows quantitative representations. The ontology has been validated by a reasoner (FaCT++) and by its accurate representation of DICOM headers and of MRI processes. The MRI ontology should improve MRI simulators and ease semantic interoperability. PMID:21893854

  16. Agile development of ontologies through conversation

    NASA Astrophysics Data System (ADS)

    Braines, Dave; Bhattal, Amardeep; Preece, Alun D.; de Mel, Geeth

    2016-05-01

    Ontologies and semantic systems are necessarily complex but offer great potential in terms of their ability to fuse information from multiple sources in support of situation awareness. Current approaches do not place the ontologies directly into the hands of the end user in the field but instead hide them away behind traditional applications. We have been experimenting with human-friendly ontologies and conversational interactions to enable non-technical business users to interact with and extend these dynamically. In this paper we outline our approach via a worked example, covering: OWL ontologies, ITA Controlled English, Sensor/mission matching and conversational interactions between human and machine agents.

  17. Scientific Digital Libraries, Interoperability, and Ontologies

    NASA Technical Reports Server (NTRS)

    Hughes, J. Steven; Crichton, Daniel J.; Mattmann, Chris A.

    2009-01-01

    Scientific digital libraries serve complex and evolving research communities. Justifications for the development of scientific digital libraries include the desire to preserve science data and the promises of information interconnectedness, correlative science, and system interoperability. Shared ontologies are fundamental to fulfilling these promises. We present a tool framework, some informal principles, and several case studies where shared ontologies are used to guide the implementation of scientific digital libraries. The tool framework, based on an ontology modeling tool, was configured to develop, manage, and keep shared ontologies relevant within changing domains and to promote the interoperability, interconnectedness, and correlation desired by scientists.

  18. Where to Publish and Find Ontologies? A Survey of Ontology Libraries

    PubMed Central

    d'Aquin, Mathieu; Noy, Natalya F.

    2011-01-01

    One of the key promises of the Semantic Web is its potential to enable and facilitate data interoperability. The ability of data providers and application developers to share and reuse ontologies is a critical component of this data interoperability: if different applications and data sources use the same set of well defined terms for describing their domain and data, it will be much easier for them to “talk” to one another. Ontology libraries are the systems that collect ontologies from different sources and facilitate the tasks of finding, exploring, and using these ontologies. Thus ontology libraries can serve as a link in enabling diverse users and applications to discover, evaluate, use, and publish ontologies. In this paper, we provide a survey of the growing—and surprisingly diverse—landscape of ontology libraries. We highlight how the varying scope and intended use of the libraries affects their features, content, and potential exploitation in applications. From reviewing eleven ontology libraries, we identify a core set of questions that ontology practitioners and users should consider in choosing an ontology library for finding ontologies or publishing their own. We also discuss the research challenges that emerge from this survey, for the developers of ontology libraries to address. PMID:22408576

  19. Towards Ontology-Driven Information Systems: Guidelines to the Creation of New Methodologies to Build Ontologies

    ERIC Educational Resources Information Center

    Soares, Andrey

    2009-01-01

    This research targeted the area of Ontology-Driven Information Systems, where ontology plays a central role both at development time and at run time of Information Systems (IS). In particular, the research focused on the process of building domain ontologies for IS modeling. The motivation behind the research was the fact that researchers have…

  20. Ontobee: A linked ontology data server to support ontology term dereferencing, linkage, query and integration.

    PubMed

    Ong, Edison; Xiang, Zuoshuang; Zhao, Bin; Liu, Yue; Lin, Yu; Zheng, Jie; Mungall, Chris; Courtot, Mélanie; Ruttenberg, Alan; He, Yongqun

    2017-01-04

    Linked Data (LD) aims to achieve interconnected data by representing entities using Uniform Resource Identifiers (URIs), and sharing information using the Resource Description Framework (RDF) and HTTP. Ontologies, which logically represent entities and relations in specific domains, are the basis of LD. Ontobee (http://www.ontobee.org/) is a linked ontology data server that stores ontology information using RDF triple store technology and supports query, visualization and linkage of ontology terms. Ontobee is also the default linked data server for publishing and browsing biomedical ontologies in the Open Biological Ontology (OBO) Foundry (http://obofoundry.org) library. Ontobee currently hosts more than 180 ontologies (including 131 OBO Foundry Library ontologies) with over four million terms. Ontobee provides a user-friendly web interface for querying and visualizing the details and hierarchy of a specific ontology term. Using the eXtensible Stylesheet Language Transformation (XSLT) technology, Ontobee is able to dereference a single ontology term URI, and then output RDF/eXtensible Markup Language (XML) for computer processing or display the HTML information on a web browser for human users. Statistics and detailed information are generated and displayed for each ontology listed in Ontobee. In addition, a SPARQL web interface is provided for custom advanced SPARQL queries of one or multiple ontologies. © The Author(s) 2016. Published by Oxford University Press on behalf of Nucleic Acids Research.
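As an illustrative sketch of the kind of term query such a SPARQL interface supports: the helper below only builds a query string. The query shape is generic SPARQL 1.1; any endpoint-specific graph layout is an assumption, and the GO term URI is just an example.

```python
def build_label_query(term_uri: str, limit: int = 10) -> str:
    """Build a SPARQL SELECT query for the rdfs:label of an ontology term.

    Generic SPARQL 1.1; the graph layout of any particular server
    (e.g. Ontobee) may differ, so treat this as a sketch.
    """
    return f"""
PREFIX rdfs: <http://www.w3.org/2000/01/rdf-schema#>
SELECT ?label WHERE {{
  <{term_uri}> rdfs:label ?label .
}} LIMIT {limit}
""".strip()

query = build_label_query("http://purl.obolibrary.org/obo/GO_0008150")
print(query)
```

Such a string would then be sent to the endpoint's query form or over HTTP with standard SPARQL-protocol parameters.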

  1. How Ontologies are Made: Studying the Hidden Social Dynamics Behind Collaborative Ontology Engineering Projects

    PubMed Central

    Strohmaier, Markus; Walk, Simon; Pöschko, Jan; Lamprecht, Daniel; Tudorache, Tania; Nyulas, Csongor; Musen, Mark A.; Noy, Natalya F.

    2013-01-01

    Traditionally, evaluation methods in the field of semantic technologies have focused on the end result of ontology engineering efforts, mainly, on evaluating ontologies and their corresponding qualities and characteristics. This focus has led to the development of a whole arsenal of ontology-evaluation techniques that investigate the quality of ontologies as a product. In this paper, we aim to shed light on the process of ontology engineering by introducing and applying a set of measures to analyze hidden social dynamics. We argue that, especially for ontologies that are constructed collaboratively, understanding the social processes that have led to their construction is critical not only for understanding but consequently also for evaluating the ontology. With the work presented in this paper, we aim to expose the texture of collaborative ontology engineering processes that is otherwise left invisible. Using historical change-log data, we unveil qualitative differences and commonalities between different collaborative ontology engineering projects. Explaining and understanding these differences will help us to better comprehend the role and importance of social factors in collaborative ontology engineering projects. We hope that our analysis will spur a new line of evaluation techniques that view ontologies not as the static result of deliberations among domain experts, but as a dynamic, collaborative and iterative process that needs to be understood, evaluated and managed in itself. We believe that advances in this direction would help our community to expand the existing arsenal of ontology evaluation techniques towards more holistic approaches. PMID:24311994

  2. Ontobee: A linked ontology data server to support ontology term dereferencing, linkage, query and integration

    PubMed Central

    Ong, Edison; Xiang, Zuoshuang; Zhao, Bin; Liu, Yue; Lin, Yu; Zheng, Jie; Mungall, Chris; Courtot, Mélanie; Ruttenberg, Alan; He, Yongqun

    2017-01-01

    Linked Data (LD) aims to achieve interconnected data by representing entities using Uniform Resource Identifiers (URIs), and sharing information using the Resource Description Framework (RDF) and HTTP. Ontologies, which logically represent entities and relations in specific domains, are the basis of LD. Ontobee (http://www.ontobee.org/) is a linked ontology data server that stores ontology information using RDF triple store technology and supports query, visualization and linkage of ontology terms. Ontobee is also the default linked data server for publishing and browsing biomedical ontologies in the Open Biological Ontology (OBO) Foundry (http://obofoundry.org) library. Ontobee currently hosts more than 180 ontologies (including 131 OBO Foundry Library ontologies) with over four million terms. Ontobee provides a user-friendly web interface for querying and visualizing the details and hierarchy of a specific ontology term. Using the eXtensible Stylesheet Language Transformation (XSLT) technology, Ontobee is able to dereference a single ontology term URI, and then output RDF/eXtensible Markup Language (XML) for computer processing or display the HTML information on a web browser for human users. Statistics and detailed information are generated and displayed for each ontology listed in Ontobee. In addition, a SPARQL web interface is provided for custom advanced SPARQL queries of one or multiple ontologies. PMID:27733503

  3. Surreptitious, Evolving and Participative Ontology Development: An End-User Oriented Ontology Development Methodology

    ERIC Educational Resources Information Center

    Bachore, Zelalem

    2012-01-01

    Ontology not only is considered to be the backbone of the semantic web but also plays a significant role in distributed and heterogeneous information systems. However, ontology still faces limited application and adoption to date. One of the major problems is that prevailing engineering-oriented methodologies for building ontologies do not…

  4. How Ontologies are Made: Studying the Hidden Social Dynamics Behind Collaborative Ontology Engineering Projects.

    PubMed

    Strohmaier, Markus; Walk, Simon; Pöschko, Jan; Lamprecht, Daniel; Tudorache, Tania; Nyulas, Csongor; Musen, Mark A; Noy, Natalya F

    2013-05-01

    Traditionally, evaluation methods in the field of semantic technologies have focused on the end result of ontology engineering efforts, mainly, on evaluating ontologies and their corresponding qualities and characteristics. This focus has led to the development of a whole arsenal of ontology-evaluation techniques that investigate the quality of ontologies as a product. In this paper, we aim to shed light on the process of ontology engineering by introducing and applying a set of measures to analyze hidden social dynamics. We argue that, especially for ontologies that are constructed collaboratively, understanding the social processes that have led to their construction is critical not only for understanding but consequently also for evaluating the ontology. With the work presented in this paper, we aim to expose the texture of collaborative ontology engineering processes that is otherwise left invisible. Using historical change-log data, we unveil qualitative differences and commonalities between different collaborative ontology engineering projects. Explaining and understanding these differences will help us to better comprehend the role and importance of social factors in collaborative ontology engineering projects. We hope that our analysis will spur a new line of evaluation techniques that view ontologies not as the static result of deliberations among domain experts, but as a dynamic, collaborative and iterative process that needs to be understood, evaluated and managed in itself. We believe that advances in this direction would help our community to expand the existing arsenal of ontology evaluation techniques towards more holistic approaches.

  7. How to Write and Use the Ontology Requirements Specification Document

    NASA Astrophysics Data System (ADS)

    Suárez-Figueroa, Mari Carmen; Gómez-Pérez, Asunción; Villazón-Terrazas, Boris

    The goal of the ontology requirements specification activity is to state why the ontology is being built, what its intended uses are, who the end-users are, and which requirements the ontology should fulfill. The novelty of this paper lies in the systematization of the ontology requirements specification activity since the paper proposes detailed methodological guidelines for specifying ontology requirements efficiently. These guidelines will help ontology engineers to capture ontology requirements and produce the ontology requirements specification document (ORSD). The ORSD will play a key role during the ontology development process because it facilitates, among other activities, (1) the search and reuse of existing knowledge-aware resources with the aim of re-engineering them into ontologies, (2) the search and reuse of existing ontological resources (ontologies, ontology modules, ontology statements as well as ontology design patterns), and (3) the verification of the ontology along the ontology development. In parallel to the guidelines, we present the ORSD that resulted from the ontology requirements specification activity within the SEEMP project, and how this document facilitated not only the reuse of existing knowledge-aware resources but also the verification of the SEEMP ontologies. Moreover, we present some use cases in which the methodological guidelines proposed here were applied.

  8. Logical Gene Ontology Annotations (GOAL): exploring gene ontology annotations with OWL.

    PubMed

    Jupp, Simon; Stevens, Robert; Hoehndorf, Robert

    2012-04-24

    Ontologies such as the Gene Ontology (GO) and their use in annotations make cross species comparisons of genes possible, along with a wide range of other analytical activities. The bio-ontologies community, in particular the Open Biomedical Ontologies (OBO) community, have provided many other ontologies and an increasingly large volume of annotations of gene products that can be exploited in query and analysis. As many annotations with different ontologies centre upon gene products, there is a possibility to explore gene products through multiple ontological perspectives at the same time. Questions could be asked that link a gene product's function, process, cellular location, phenotype and disease. Current tools, such as AmiGO, allow exploration of genes based on their GO annotations, but not through multiple ontological perspectives. In addition, the semantics of these ontology's representations should be able to, through automated reasoning, afford richer query opportunities of the gene product annotations than is currently possible. To do this multi-perspective, richer querying of gene product annotations, we have created the Logical Gene Ontology, or GOAL ontology, in OWL that combines the Gene Ontology, Human Disease Ontology and the Mammalian Phenotype Ontology, together with classes that represent the annotations with these ontologies for mouse gene products. Each mouse gene product is represented as a class, with the appropriate relationships to the GO aspects, phenotype and disease with which it has been annotated. We then use defined classes to query these protein classes through automated reasoning, and to build a complex hierarchy of gene products. We have presented this through a Web interface that allows arbitrary queries to be constructed and the results displayed. This standard use of OWL affords a rich interaction with Gene Ontology, Human Disease Ontology and Mammalian Phenotype Ontology annotations for the mouse, to give a fine partitioning of
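The defined-class querying that GOAL performs with OWL reasoning can be loosely mimicked, for intuition only, as set intersection over annotation tables; all gene and term identifiers below are invented:

```python
# Toy annotation tables: gene product -> set of ontology term IDs.
# All identifiers are invented for illustration, not real GOAL data.
go_annotations = {
    "GeneA": {"GO:0006915"},
    "GeneB": {"GO:0006915", "GO:0008283"},
    "GeneC": {"GO:0008283"},
}
phenotype_annotations = {
    "GeneA": {"MP:0002083"},
    "GeneB": set(),
    "GeneC": {"MP:0002083"},
}

def defined_class(go_term: str, mp_term: str) -> set[str]:
    """Gene products annotated with BOTH a GO term and a phenotype term,
    i.e. the extension of a conjunctive defined class."""
    return {g for g in go_annotations
            if go_term in go_annotations[g]
            and mp_term in phenotype_annotations.get(g, set())}

print(sorted(defined_class("GO:0006915", "MP:0002083")))  # → ['GeneA']
```

An OWL reasoner does the same kind of classification declaratively, and over subsumption hierarchies rather than flat sets.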

  9. Federated ontology-based queries over cancer data

    PubMed Central

    2012-01-01

    Background Personalised medicine provides patients with treatments that are specific to their genetic profiles. It requires efficient data sharing of disparate data types across a variety of scientific disciplines, such as molecular biology, pathology, radiology and clinical practice. Personalised medicine aims to offer the safest and most effective therapeutic strategy based on the gene variations of each subject. In particular, this is valid in oncology, where knowledge about genetic mutations has already led to new therapies. Current molecular biology techniques (microarrays, proteomics, epigenetic technology and improved DNA sequencing technology) enable better characterisation of cancer tumours. The vast amounts of data, however, coupled with the use of different terms - or semantic heterogeneity - in each discipline makes the retrieval and integration of information difficult. Results Existing software infrastructures for data-sharing in the cancer domain, such as caGrid, support access to distributed information. caGrid follows a service-oriented model-driven architecture. Each data source in caGrid is associated with metadata at increasing levels of abstraction, including syntactic, structural, reference and domain metadata. The domain metadata consists of ontology-based annotations associated with the structural information of each data source. However, caGrid's current querying functionality is given at the structural metadata level, without capitalising on the ontology-based annotations. This paper presents the design of and theoretical foundations for distributed ontology-based queries over cancer research data. Concept-based queries are reformulated to the target query language, where join conditions between multiple data sources are found by exploiting the semantic annotations. The system has been implemented, as a proof of concept, over the caGrid infrastructure. The approach is applicable to other model-driven architectures. A graphical user

  10. Multiangle Implementation of Atmospheric Correction (MAIAC): 1. Radiative Transfer Basis and Look-up Tables

    NASA Technical Reports Server (NTRS)

    Lyapustin, Alexei; Martonchik, John; Wang, Yujie; Laszlo, Istvan; Korkin, Sergey

    2011-01-01

    This paper describes a radiative transfer basis of the algorithm MAIAC which performs simultaneous retrievals of atmospheric aerosol and bidirectional surface reflectance from the Moderate Resolution Imaging Spectroradiometer (MODIS). The retrievals are based on an accurate semianalytical solution for the top-of-atmosphere reflectance expressed as an explicit function of three parameters of the Ross-Thick Li-Sparse model of surface bidirectional reflectance. This solution depends on certain functions of atmospheric properties and geometry which are precomputed in the look-up table (LUT). This paper further considers correction of the LUT functions for variations of surface pressure/height and of atmospheric water vapor, which is a common task in operational remote sensing. It introduces a new analytical method for the water vapor correction of the multiple-scattering path radiance. It also summarizes the few basic principles that provide a high efficiency and accuracy of the LUT-based radiative transfer for the aerosol/surface retrievals and optimize the size of the LUT. For example, the single-scattering path radiance is calculated analytically for a given surface pressure and atmospheric water vapor. The same is true for the direct surface-reflected radiance, which along with the single-scattering path radiance largely defines the angular dependence of measurements. For these calculations, the aerosol phase functions and kernels of the surface bidirectional reflectance model are precalculated at a high angular resolution. The other radiative transfer functions depend rather smoothly on angles because of multiple scattering and can be calculated at coarser angular resolution to reduce the LUT size. At the same time, this resolution should be high enough to use the nearest-neighbor geometry angles to avoid costly three-dimensional interpolation. The pressure correction is implemented via linear interpolation between two LUTs computed for the standard and reduced
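The pressure correction mentioned at the end of the abstract, linear interpolation between LUTs computed at two pressure levels, can be illustrated as follows; the function name and all numeric values are hypothetical:

```python
def pressure_correct(f_standard: float, f_reduced: float,
                     p_standard: float, p_reduced: float, p: float) -> float:
    """Linearly interpolate a LUT function value between two pressure
    levels, in the spirit of the MAIAC LUT correction. All inputs here
    are illustrative placeholders, not real LUT values."""
    w = (p_standard - p) / (p_standard - p_reduced)
    return (1.0 - w) * f_standard + w * f_reduced

# e.g. standard 1013 hPa, reduced 700 hPa, target exactly at the midpoint
print(pressure_correct(0.20, 0.30, 1013.0, 700.0, 856.5))  # → 0.25
```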

  12. Ontodog: a web-based ontology community view generation tool.

    PubMed

    Zheng, Jie; Xiang, Zuoshuang; Stoeckert, Christian J; He, Yongqun

    2014-05-01

    Biomedical ontologies are often very large and complex. Only a subset of the ontology may be needed for a specified application or community. For ontology end users, it is desirable to have community-based labels rather than the labels generated by ontology developers. Ontodog is a web-based system that can generate an ontology subset based on Excel input, and support generation of an ontology community view, which is defined as the whole or a subset of the source ontology with user-specified annotations including user-preferred labels. Ontodog allows users to easily generate community views with minimal ontology knowledge and no programming skills or installation required. Currently >100 ontologies including all OBO Foundry ontologies are available to generate the views based on user needs. We demonstrate the application of Ontodog for the generation of community views using the Ontology for Biomedical Investigations as the source ontology.
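The community-view relabeling that Ontodog performs (driven by Excel input) amounts to substituting user-preferred labels where provided; a toy sketch with invented identifiers and labels:

```python
def apply_community_labels(terms, preferred):
    """Return (term_id, label) pairs, replacing developer labels with
    user-preferred ones where supplied. Data below is hypothetical."""
    return [(tid, preferred.get(tid, label)) for tid, label in terms]

# Hypothetical source-ontology labels and one community-preferred override
source = [("OBI:0000070", "assay"), ("OBI:0100051", "specimen")]
community = {"OBI:0100051": "sample"}
print(apply_community_labels(source, community))
```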

  13. XOA: Web-Enabled Cross-Ontological Analytics

    SciTech Connect

    Riensche, Roderick M.; Baddeley, Bob; Sanfilippo, Antonio P.; Posse, Christian; Gopalan, Banu

    2007-07-09

    The paper being submitted (as an "extended abstract" prior to conference acceptance) provides a technical description of our proof-of-concept prototype for the XOA method. Abstract: To address meaningful questions, scientists need to relate information across diverse classification schemes such as ontologies, terminologies and thesauri. These resources typically address a single knowledge domain at a time and are not cross-indexed. Information that is germane to the same object may therefore remain unlinked with consequent loss of knowledge discovery across disciplines and even sub-domains of the same discipline. We propose to address these problems by fostering semantic interoperability through the development of ontology alignment web services capable of enabling cross-scale knowledge discovery, and demonstrate a specific application of such an approach to the biomedical domain.

  14. Ontology driven health information systems architectures enable pHealth for empowered patients.

    PubMed

    Blobel, Bernd

    2011-02-01

    The paradigm shift from organization-centered to managed care and on to personal health settings increases specialization and distribution of actors and services related to the health of patients or even citizens before becoming patients. As a consequence, extended communication and cooperation is required between all principals involved in health services such as persons, organizations, devices, systems, applications, and components. Personal health (pHealth) environments range over many disciplines, where domain experts present their knowledge by using domain-specific terminologies and ontologies. Therefore, the mapping of domain ontologies is inevitable for ensuring interoperability. The paper introduces the care paradigms and the related requirements as well as an architectural approach for meeting the business objectives. Furthermore, it discusses some theoretical challenges and practical examples of ontologies, concept and knowledge representations, starting general and then focusing on security and privacy related services. The requirements and solutions for empowering the patient or the citizen before becoming a patient are especially emphasized.

  15. A method of extracting ontology module using concept relations for sharing knowledge in mobile cloud computing environment.

    PubMed

    Lee, Keonsoo; Rho, Seungmin; Lee, Seok-Won

    2014-01-01

    In mobile cloud computing environment, the cooperation of distributed computing objects is one of the most important requirements for providing successful cloud services. To satisfy this requirement, all the members, who are employed in the cooperation group, need to share the knowledge for mutual understanding. Even if ontology can be the right tool for this goal, there are several issues to make a right ontology. As the cost and complexity of managing knowledge increase according to the scale of the knowledge, reducing the size of ontology is one of the critical issues. In this paper, we propose a method of extracting ontology module to increase the utility of knowledge. For the given signature, this method extracts the ontology module, which is semantically self-contained to fulfill the needs of the service, by considering the syntactic structure and semantic relation of concepts. By employing this module, instead of the original ontology, the cooperation of computing objects can be performed with less computing load and complexity. In particular, when multiple external ontologies need to be combined for more complex services, this method can be used to optimize the size of shared knowledge.
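A much-simplified, syntax-only sketch of signature-based module extraction: collect everything reachable from the signature by following concept relations. The paper's method also weighs semantic relations; all names below are invented.

```python
from collections import deque

def extract_module(relations, signature):
    """Collect all concepts reachable from a signature by following
    concept relations -- a simplified, syntax-only stand-in for
    semantically self-contained module extraction.
    `relations` maps a concept to the concepts it refers to."""
    module, queue = set(signature), deque(signature)
    while queue:
        for target in relations.get(queue.popleft(), []):
            if target not in module:
                module.add(target)
                queue.append(target)
    return module

rels = {"Service": ["Device"], "Device": ["Vendor"], "User": ["Account"]}
print(sorted(extract_module(rels, {"Service"})))  # → ['Device', 'Service', 'Vendor']
```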

  16. Semantic similarity between ontologies at different scales

    SciTech Connect

    Zhang, Qingpeng; Haglin, David J.

    2016-04-01

    In the past decade, existing and new knowledge and datasets have been encoded in different ontologies for semantic web and biomedical research. The size of ontologies is often very large in terms of the number of concepts and relationships, which makes the analysis of ontologies and the represented knowledge graph computationally expensive and time-consuming. As the ontologies of various semantic web and biomedical applications usually show explicit hierarchical structures, it is interesting to explore the trade-offs between ontological scales and preservation/precision of results when we analyze ontologies. This paper presents the first effort of examining the capability of this idea via studying the relationship between scaling biomedical ontologies at different levels and the semantic similarity values. We evaluate the semantic similarity between three Gene Ontology slims (Plant, Yeast, and Candida, among which the latter two belong to the same kingdom—Fungi) using four popular measures commonly applied to biomedical ontologies (Resnik, Lin, Jiang-Conrath, and SimRel). The results of this study demonstrate that with proper selection of scaling levels and similarity measures, we can significantly reduce the size of ontologies without losing substantial detail. In particular, the performance of Jiang-Conrath and Lin are more reliable and stable than that of the other two in this experiment, as proven by (a) consistently showing that Yeast and Candida are more similar (as compared to Plant) at different scales, and (b) small deviations of the similarity values after excluding a majority of nodes from several lower scales. This study provides a deeper understanding of the application of semantic similarity to biomedical ontologies, and sheds light on how to choose appropriate semantic similarity measures for biomedical engineering.
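For intuition, the Resnik and Lin measures can be computed from information-content (IC) values over a toy term hierarchy; all IC values and term names below are invented, and real values come from annotation corpora over the Gene Ontology.

```python
# Toy IC (information content) values and ancestor sets (self-inclusive).
# All numbers and terms are invented for illustration.
ic = {"root": 0.0, "process": 1.2, "growth": 2.5, "budding": 3.1}
ancestors = {
    "growth": {"growth", "process", "root"},
    "budding": {"budding", "process", "root"},
}

def mica_ic(a: str, b: str) -> float:
    """IC of the most informative common ancestor (MICA)."""
    return max(ic[t] for t in ancestors[a] & ancestors[b])

def resnik(a: str, b: str) -> float:
    return mica_ic(a, b)

def lin(a: str, b: str) -> float:
    return 2 * mica_ic(a, b) / (ic[a] + ic[b])

print(resnik("growth", "budding"), round(lin("growth", "budding"), 3))
```

Here the MICA of the two terms is "process" (IC 1.2), so Resnik returns 1.2 and Lin normalizes it by the terms' own IC values.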

  17. A Gross Anatomy Ontology for Hymenoptera

    PubMed Central

    Yoder, Matthew J.; Mikó, István; Seltmann, Katja C.; Bertone, Matthew A.; Deans, Andrew R.

    2010-01-01

    Hymenoptera is an extraordinarily diverse lineage, both in terms of species numbers and morphotypes, that includes sawflies, bees, wasps, and ants. These organisms serve critical roles as herbivores, predators, parasitoids, and pollinators, with several species functioning as models for agricultural, behavioral, and genomic research. The collective anatomical knowledge of these insects, however, has been described or referred to by labels derived from numerous, partially overlapping lexicons. The resulting corpus of information—millions of statements about hymenopteran phenotypes—remains inaccessible due to language discrepancies. The Hymenoptera Anatomy Ontology (HAO) was developed to surmount this challenge and to aid future communication related to hymenopteran anatomy. The HAO was built using newly developed interfaces within mx, a Web-based, open source software package, that enables collaborators to simultaneously contribute to an ontology. Over twenty people contributed to the development of this ontology by adding terms, genus differentia, references, images, relationships, and annotations. The database interface returns an Open Biomedical Ontology (OBO) formatted version of the ontology and includes mechanisms for extracting candidate data and for publishing a searchable ontology to the Web. The application tools are subject-agnostic and may be used by others initiating and developing ontologies. The present core HAO data constitute 2,111 concepts, 6,977 terms (labels for concepts), 3,152 relations, 4,361 sensus (links between terms, concepts, and references) and over 6,000 text and graphical annotations. The HAO is rooted with the Common Anatomy Reference Ontology (CARO), in order to facilitate interoperability with and future alignment to other anatomy ontologies, and is available through the OBO Foundry ontology repository and BioPortal. The HAO provides a foundation through which connections between genomic, evolutionary developmental biology

  18. A gross anatomy ontology for hymenoptera.

    PubMed

    Yoder, Matthew J; Mikó, István; Seltmann, Katja C; Bertone, Matthew A; Deans, Andrew R

    2010-12-29

    Hymenoptera is an extraordinarily diverse lineage, both in terms of species numbers and morphotypes, that includes sawflies, bees, wasps, and ants. These organisms serve critical roles as herbivores, predators, parasitoids, and pollinators, with several species functioning as models for agricultural, behavioral, and genomic research. The collective anatomical knowledge of these insects, however, has been described or referred to by labels derived from numerous, partially overlapping lexicons. The resulting corpus of information--millions of statements about hymenopteran phenotypes--remains inaccessible due to language discrepancies. The Hymenoptera Anatomy Ontology (HAO) was developed to surmount this challenge and to aid future communication related to hymenopteran anatomy. The HAO was built using newly developed interfaces within mx, a Web-based, open source software package, that enables collaborators to simultaneously contribute to an ontology. Over twenty people contributed to the development of this ontology by adding terms, genus differentia, references, images, relationships, and annotations. The database interface returns an Open Biomedical Ontology (OBO) formatted version of the ontology and includes mechanisms for extracting candidate data and for publishing a searchable ontology to the Web. The application tools are subject-agnostic and may be used by others initiating and developing ontologies. The present core HAO data constitute 2,111 concepts, 6,977 terms (labels for concepts), 3,152 relations, 4,361 sensus (links between terms, concepts, and references) and over 6,000 text and graphical annotations. The HAO is rooted with the Common Anatomy Reference Ontology (CARO), in order to facilitate interoperability with and future alignment to other anatomy ontologies, and is available through the OBO Foundry ontology repository and BioPortal. The HAO provides a foundation through which connections between genomic, evolutionary developmental biology

  19. Issues in learning an ontology from text

    PubMed Central

    Brewster, Christopher; Jupp, Simon; Luciano, Joanne; Shotton, David; Stevens, Robert D; Zhang, Ziqi

    2009-01-01

    Ontology construction for any domain is a labour-intensive and complex process. Any methodology that can reduce the cost and increase efficiency has the potential to make a major impact in the life sciences. This paper describes an experiment in ontology construction from text for the animal behaviour domain. Our objective was to see how much could be done in a simple and relatively rapid manner using a corpus of journal papers. We used a sequence of pre-existing text processing steps, and here describe the different choices made to clean the input, to derive a set of terms and to structure those terms in a number of hierarchies. We describe some of the challenges, especially that of focusing the ontology appropriately given a starting point of a heterogeneous corpus. Using mainly automated techniques, we were able to construct an 18,055-term ontology-like structure with 73% recall of animal behaviour terms, but a precision of only 26%. We were able to clean unwanted terms from the nascent ontology using lexico-syntactic patterns that tested the validity of term inclusion within the ontology. We used the same technique to test for subsumption relationships between the remaining terms to add structure to the initially broad and shallow structure we generated. All outputs are available at . We present a systematic method for the initial steps of ontology or structured vocabulary construction for scientific domains that requires limited human effort and can make a contribution both to ontology learning and maintenance. The method is useful both for the exploration of a scientific domain and as a stepping stone towards formally rigorous ontologies. The filtering of recognised terms from a heterogeneous corpus to focus upon those that are the topic of the ontology is identified as one of the main challenges for research in ontology learning. PMID:19426458
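    The lexico-syntactic filtering described above can be sketched with a classic Hearst pattern. The regex and example sentence below are illustrative only, not the authors' actual patterns; matches yield candidate is-a pairs that can be used both to validate term inclusion and to propose subsumption links.

```python
import re

# A classic lexico-syntactic (Hearst) pattern: "<hypernym>s such as <hyponym>".
# Matching it in raw text yields candidate is-a pairs for ontology filtering
# and for adding subsumption structure between recognised terms.
HEARST = re.compile(r"([A-Za-z]+)s such as ([A-Za-z]+)")

def candidate_isa_pairs(text):
    """Return (hyponym, hypernym) candidates found in free text."""
    return [(m.group(2), m.group(1)) for m in HEARST.finditer(text)]

print(candidate_isa_pairs("behaviours such as grooming occur in many taxa"))
# → [('grooming', 'behaviour')]
```

    In practice a battery of such patterns (e.g. "including", "and other") is run over the corpus, and terms that never occur in a valid pattern context become candidates for removal from the nascent ontology.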

  20. Semantics and metaphysics in informatics: toward an ontology of tasks.

    PubMed

    Figdor, Carrie

    2011-04-01

    This article clarifies three principles that should guide the development of any cognitive ontology. First, that an adequate cognitive ontology depends essentially on an adequate task ontology; second, that the goal of developing a cognitive ontology is independent of the goal of finding neural implementations of the processes referred to in the ontology; and third, that cognitive ontologies are neutral regarding the metaphysical relationship between cognitive and neural processes. Copyright © 2011 Cognitive Science Society, Inc.

  1. Ontology-Driven Information Integration

    NASA Technical Reports Server (NTRS)

    Tissot, Florence; Menzel, Chris

    2005-01-01

    Ontology-driven information integration (ODII) is a method of computerized, automated sharing of information among specialists who have expertise in different domains and who are members of subdivisions of a large, complex enterprise (e.g., an engineering project, a government agency, or a business). In ODII, one uses rigorous mathematical techniques to develop computational models of engineering and/or business information and processes. These models are then used to develop software tools that support the reliable processing and exchange of information among the subdivisions of this enterprise or between this enterprise and other enterprises.

  2. Nosology, ontology and promiscuous realism.

    PubMed

    Binney, Nicholas

    2015-06-01

    Medics may consider worrying about their metaphysics and ontology to be a waste of time. I will argue here that this is not the case. Promiscuous realism is a metaphysical position which holds that multiple, equally valid, classification schemes should be applied to objects (such as patients) to capture different aspects of their complex and heterogeneous nature. As medics at the bedside may need to capture different aspects of their patients' problems, they may need to use multiple classification schemes (multiple nosologies), and thus consider adopting a different metaphysics to the one commonly in use.

  3. Security Ontology for Annotating Resources

    DTIC Science & Technology

    2005-08-31

    [OCR-garbled excerpt. The recoverable content is a fragment of the NRL Security Assurance Ontology, in which symmetric algorithms such as MD4, MD5, CAST, Skipjack, and Blowfish are annotated with hasNSALevel assurance-level properties, e.g. <SymmetricAlgorithm rdf:ID="CRAYON"> <hasNSALevel rdf:resource="&assurance;Type1"/> </SymmetricAlgorithm>.]

  4. BioPortal: An Open-Source Community-Based Ontology Repository

    NASA Astrophysics Data System (ADS)

    Noy, N.; NCBO Team

    2011-12-01

    Advances in computing power and new computational techniques have changed the way researchers approach science. In many fields, one of the most fruitful approaches has been to use semantically aware software to break down the barriers among disparate domains, systems, data sources, and technologies. Such software facilitates data aggregation, improves search, and ultimately allows the detection of new associations that were previously not detectable. Achieving these analyses requires software systems that take advantage of the semantics and that can intelligently negotiate domains and knowledge sources, identifying commonality across systems that use different and conflicting vocabularies, while understanding apparent differences that may be concealed by the use of superficially similar terms. An ontology, a semantically rich vocabulary for a domain of interest, is the cornerstone of software for bridging systems, domains, and resources. However, as ontologies become the foundation of all semantic technologies in e-science, we must develop an infrastructure for sharing ontologies, finding and evaluating them, integrating and mapping among them, and using ontologies in applications that help scientists process their data. BioPortal [1] is an open-source, on-line, community-based ontology repository that has been used as a critical component of semantic infrastructure in several domains, including biomedicine and bio-geochemical data. BioPortal uses social approaches in the Web 2.0 style to bring structure and order to the collection of biomedical ontologies. It enables users to provide and discuss a wide array of knowledge components, from submitting the ontologies themselves, to commenting on and discussing classes in the ontologies, to reviewing ontologies in the context of their own ontology-based projects, to creating mappings between overlapping ontologies and discussing and critiquing the mappings. Critically, it provides web-service access to all its

  5. An ontological case base engineering methodology for diabetes management.

    PubMed

    El-Sappagh, Shaker H; El-Masri, Samir; Elmogy, Mohammed; Riad, A M; Saddik, Basema

    2014-08-01

    Ontology engineering covers issues related to ontology development and use. In a Case-Based Reasoning (CBR) system, ontology plays two main roles: the first as the case base and the second as the domain ontology. However, the ontology engineering literature does not provide adequate guidance on how to build, evaluate, and maintain ontologies. This paper proposes an ontology engineering methodology to generate case bases in the medical domain. It mainly focuses on case representation in the form of an ontology to support semantic case retrieval and enhance all knowledge-intensive CBR processes. A case study on a diabetes diagnosis case base is provided to evaluate the proposed methodology.

  6. Automating Ontological Annotation with WordNet

    SciTech Connect

    Sanfilippo, Antonio P.; Tratz, Stephen C.; Gregory, Michelle L.; Chappell, Alan R.; Whitney, Paul D.; Posse, Christian; Paulson, Patrick R.; Baddeley, Bob L.; Hohimer, Ryan E.; White, Amanda M.

    2006-01-22

    Semantic Web applications require robust and accurate annotation tools that are capable of automating the assignment of ontological classes to words in naturally occurring text (ontological annotation). Most current ontologies do not include rich lexical databases and are therefore not easily integrated with word sense disambiguation algorithms that are needed to automate ontological annotation. WordNet provides a potentially ideal solution to this problem as it offers a highly structured lexical conceptual representation that has been extensively used to develop word sense disambiguation algorithms. However, WordNet has not been designed as an ontology, and while it can be easily turned into one, the result of doing this would present users with serious practical limitations due to the great number of concepts (synonym sets) it contains. Moreover, mapping WordNet to an existing ontology may be difficult and requires substantial labor. We propose to overcome these limitations by developing an analytical platform that (1) provides a WordNet-based ontology offering a manageable and yet comprehensive set of concept classes, (2) leverages the lexical richness of WordNet to give an extensive characterization of each concept class in terms of lexical instances, and (3) integrates a class recognition algorithm that automates the assignment of concept classes to words in naturally occurring text. The ensuing framework makes available an ontological annotation platform that can be effectively integrated with intelligence analysis systems to facilitate evidence marshaling and sustain the creation and validation of inference models.

  7. Ontological Annotation with WordNet

    SciTech Connect

    Sanfilippo, Antonio P.; Tratz, Stephen C.; Gregory, Michelle L.; Chappell, Alan R.; Whitney, Paul D.; Posse, Christian; Paulson, Patrick R.; Baddeley, Bob; Hohimer, Ryan E.; White, Amanda M.

    2006-06-06

    Semantic Web applications require robust and accurate annotation tools that are capable of automating the assignment of ontological classes to words in naturally occurring text (ontological annotation). Most current ontologies do not include rich lexical databases and are therefore not easily integrated with word sense disambiguation algorithms that are needed to automate ontological annotation. WordNet provides a potentially ideal solution to this problem as it offers a highly structured lexical conceptual representation that has been extensively used to develop word sense disambiguation algorithms. However, WordNet has not been designed as an ontology, and while it can be easily turned into one, the result of doing this would present users with serious practical limitations due to the great number of concepts (synonym sets) it contains. Moreover, mapping WordNet to an existing ontology may be difficult and requires substantial labor. We propose to overcome these limitations by developing an analytical platform that (1) provides a WordNet-based ontology offering a manageable and yet comprehensive set of concept classes, (2) leverages the lexical richness of WordNet to give an extensive characterization of each concept class in terms of lexical instances, and (3) integrates a class recognition algorithm that automates the assignment of concept classes to words in naturally occurring text. The ensuing framework makes available an ontological annotation platform that can be effectively integrated with intelligence analysis systems to facilitate evidence marshaling and sustain the creation and validation of inference models.

  8. Developing Domain Ontologies for Course Content

    ERIC Educational Resources Information Center

    Boyce, Sinead; Pahl, Claus

    2007-01-01

    Ontologies have the potential to play an important role in instructional design and the development of course content. They can be used to represent knowledge about content, supporting instructors in creating content or learners in accessing content in a knowledge-guided way. While ontologies exist for many subject domains, their quality and…

  9. Statistical mechanics of ontology based annotations

    NASA Astrophysics Data System (ADS)

    Hoyle, David C.; Brass, Andrew

    2016-01-01

    We present a statistical mechanical theory of the process of annotating an object with terms selected from an ontology. The term selection process is formulated as an ideal lattice gas model, but in a highly structured inhomogeneous field. The model enables us to explain patterns recently observed in real-world annotation data sets, in terms of the underlying graph structure of the ontology. By relating the external field strengths to the information content of each node in the ontology graph, the statistical mechanical model also allows us to propose a number of practical metrics for assessing the quality of both the ontology and the annotations that arise from its use. Using the statistical mechanical formalism we also study an ensemble of ontologies of differing size and complexity; an analysis not readily performed using real data alone. Focusing on regular tree ontology graphs, we uncover a rich set of scaling laws describing the growth in the optimal ontology size as the number of objects being annotated increases. In doing so we provide a further possible measure for assessment of ontologies.
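    The information content mentioned above is, in the standard corpus-based sense, IC(t) = -log2 p(t), where p(t) is the fraction of annotations falling on a term or any of its descendants. A minimal sketch, using a toy tree and invented counts rather than data from the paper:

```python
import math

# Corpus-based information content of an ontology term:
# IC(t) = -log2 p(t), where p(t) is the fraction of all annotations landing on
# t or any of its descendants (the "true path" rule). The toy tree and counts
# below are illustrative, not data from the paper.
children = {"root": ["a", "b"], "a": ["a1", "a2"], "b": [], "a1": [], "a2": []}
direct_counts = {"root": 0, "a": 2, "b": 4, "a1": 1, "a2": 1}

def subtree_count(node):
    """Annotations on this node plus all of its descendants."""
    return direct_counts[node] + sum(subtree_count(c) for c in children[node])

TOTAL = subtree_count("root")

def information_content(node):
    return -math.log2(subtree_count(node) / TOTAL)

print(information_content("a1"))  # rare, specific term: -log2(1/8) = 3.0
print(information_content("root"))  # the root covers everything: IC is zero
```

    Deep, rarely used terms thus carry more information than broad ones, which is what lets the model tie field strength to position in the ontology graph.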

  10. Ontology Design Patterns as Interfaces (invited)

    NASA Astrophysics Data System (ADS)

    Janowicz, K.

    2015-12-01

    In recent years ontology design patterns (ODP) have gained popularity among knowledge engineers. ODPs are modular but self-contained building blocks that are reusable and extendible. They minimize the amount of ontological commitments and thereby are easier to integrate than large monolithic ontologies. Typically, patterns are not directly used to annotate data or to model certain domain problems but are combined and extended to form data and purpose-driven local ontologies that serve the needs of specific applications or communities. By relying on a common set of patterns these local ontologies can be aligned to improve interoperability and enable federated queries without enforcing a top-down model of the domain. In previous work, we introduced ontological views as layer on top of ontology design patterns to ease the reuse, combination, and integration of patterns. While the literature distinguishes multiple types of patterns, e.g., content patterns or logical patterns, we propose to use them as interfaces here to guide the development of ontology-driven systems.

  11. Ontologies and Information Systems: A Literature Survey

    DTIC Science & Technology

    2011-06-01

    Falcon-AO (LMO + GMO) [146], and RiMOM [317]. Meta-matching systems include APFEL [76] and eTuner [286]. There also exist frameworks that provide a set...Jian, N., Qu, Y. and Wang, Q. 2005. GMO: A graph matching for ontologies. In Proceedings of the K-CAP Workshop on Integrating Ontologies, Banff

  12. Developing Domain Ontologies for Course Content

    ERIC Educational Resources Information Center

    Boyce, Sinead; Pahl, Claus

    2007-01-01

    Ontologies have the potential to play an important role in instructional design and the development of course content. They can be used to represent knowledge about content, supporting instructors in creating content or learners in accessing content in a knowledge-guided way. While ontologies exist for many subject domains, their quality and…

  13. Automated Agent Ontology Creation for Distributed Databases

    DTIC Science & Technology

    2004-03-01

    relationships between themselves if one exists. For example, if one agent’s ontology was ‘NBA’ and the second agent’s ontology was ‘College Hoops’...the two agents should discover their relationship ‘basketball’ [28]. The authors’ agents use supervised inductive learning to learn their individual

  14. FROG - Fingerprinting Genomic Variation Ontology.

    PubMed

    Abinaya, E; Narang, Pankaj; Bhardwaj, Anshu

    2015-01-01

    Genetic variations play a crucial role in differential phenotypic outcomes. Given the complexity of establishing this correlation and the enormous data available today, it is imperative to design machine-readable, efficient methods to store, label, search and analyze these data. A semantic approach, FROG: "FingeRprinting Ontology of Genomic variations", is implemented to label variation data based on its location, function and interactions. FROG has six levels to describe the variation annotation, namely, chromosome, DNA, RNA, protein, variations and interactions. Each level is a conceptual aggregation of logically connected attributes, each of which comprises various properties for the variant. For example, at the chromosome level, one attribute is the location of the variation, which has two properties, allosomes or autosomes. Another attribute is the kind of variation, which has four properties, namely, indel, deletion, insertion, substitution. Likewise, there are 48 attributes and 278 properties to capture the variation annotation across the six levels. Each property is then assigned a bit score, which in turn leads to the generation of a binary fingerprint based on the combination of these properties (mostly taken from existing variation ontologies). FROG is a novel and unique method designed for labeling the entire body of variation data generated to date for efficient storage, search and analysis. A web-based platform is designed as a test case for users to navigate sample datasets and generate fingerprints. The platform is available at http://ab-openlab.csir.res.in/frog.
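    The property-to-bit mapping can be sketched as follows. The property names and bit positions here are hypothetical stand-ins, not FROG's actual 278-property layout; the point is only that each ontology property owns a fixed bit, so an annotation becomes a fixed-length binary fingerprint.

```python
# Illustrative sketch of a FROG-style binary fingerprint: each ontology
# property maps to a fixed bit position, and a variant's annotation becomes a
# bit string. Names and positions below are hypothetical, not FROG's actual
# 48-attribute / 278-property layout.
PROPERTY_BITS = {"autosome": 0, "allosome": 1,
                 "insertion": 2, "deletion": 3, "substitution": 4, "indel": 5}

def fingerprint(properties):
    """Encode a set of property names as a fixed-length bit string."""
    bits = ["0"] * len(PROPERTY_BITS)
    for p in properties:
        bits[PROPERTY_BITS[p]] = "1"
    return "".join(bits)

print(fingerprint({"autosome", "substitution"}))  # → "100010"
```

    Fixed-length fingerprints of this kind make variation records directly comparable and searchable by bitwise operations, which is what motivates the binary encoding.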

  15. FROG - Fingerprinting Genomic Variation Ontology

    PubMed Central

    Bhardwaj, Anshu

    2015-01-01

    Genetic variations play a crucial role in differential phenotypic outcomes. Given the complexity of establishing this correlation and the enormous data available today, it is imperative to design machine-readable, efficient methods to store, label, search and analyze these data. A semantic approach, FROG: “FingeRprinting Ontology of Genomic variations”, is implemented to label variation data based on its location, function and interactions. FROG has six levels to describe the variation annotation, namely, chromosome, DNA, RNA, protein, variations and interactions. Each level is a conceptual aggregation of logically connected attributes, each of which comprises various properties for the variant. For example, at the chromosome level, one attribute is the location of the variation, which has two properties, allosomes or autosomes. Another attribute is the kind of variation, which has four properties, namely, indel, deletion, insertion, substitution. Likewise, there are 48 attributes and 278 properties to capture the variation annotation across the six levels. Each property is then assigned a bit score, which in turn leads to the generation of a binary fingerprint based on the combination of these properties (mostly taken from existing variation ontologies). FROG is a novel and unique method designed for labeling the entire body of variation data generated to date for efficient storage, search and analysis. A web-based platform is designed as a test case for users to navigate sample datasets and generate fingerprints. The platform is available at http://ab-openlab.csir.res.in/frog. PMID:26244889

  16. [Towards a structuring fibrillar ontology].

    PubMed

    Guimberteau, J-C

    2012-10-01

    Over previous decades and centuries, the difficulty of understanding how the tissue of our bodies is organised and structured is clearly explained by the impossibility of exploring it in detail. Since the invention of the microscope, the perception of the basic unit, the cell, has been essential to understanding reproduction and transmission, but it has not explained the notion of form, since cells are not everywhere and are not distributed in an apparently balanced manner. The problems that remain are those of form, volume, and connection. The concept of multifibrillar architecture, shaping interfibrillar microvolumes in space, represents a solution to all these questions. The architectural structures revealed, made up of fibres, fibrils and microfibrils from the mesoscopic to the microscopic level, provide the concept of a living form with a structural rationalism that permits the association of physicochemical molecular biodynamics and quantum physics: the form can thus be described and interpreted, and a true structural ontology elaborated from a basic functional unit, the microvacuole, the intra- and interfibrillar volume of fractal organisation and chaotic distribution. Naturally, this ontology will imply new, less linear, less conclusive, and less specific concepts, suggesting that the emergence of life takes place under submission to forces that the original form will have imposed, orienting the adaptive finality. Copyright © 2012. Published by Elsevier SAS.

  17. Representing default knowledge in biomedical ontologies: application to the integration of anatomy and phenotype ontologies

    PubMed Central

    Hoehndorf, Robert; Loebe, Frank; Kelso, Janet; Herre, Heinrich

    2007-01-01

    Background Current efforts within the biomedical ontology community focus on achieving interoperability between various biomedical ontologies that cover a range of diverse domains. Achieving this interoperability will contribute to the creation of a rich knowledge base that can be used for querying, as well as generating and testing novel hypotheses. The OBO Foundry principles, as applied to a number of biomedical ontologies, are designed to facilitate this interoperability. However, semantic extensions are required to meet the OBO Foundry interoperability goals. Inconsistencies may arise when ontologies of properties – mostly phenotype ontologies – are combined with ontologies taking a canonical view of a domain – such as many anatomical ontologies. Currently, there is no support for a correct and consistent integration of such ontologies. Results We have developed a methodology for accurately representing canonical domain ontologies within the OBO Foundry. This is achieved by adding an extension to the semantics for relationships in the biomedical ontologies that allows for treating canonical information as default. Conclusions drawn from default knowledge may be revoked when additional information becomes available. We show how this extension can be used to achieve interoperability between ontologies, and further allows for the inclusion of more knowledge within them. We apply the formalism to ontologies of mouse anatomy and mammalian phenotypes in order to demonstrate the approach. Conclusion Biomedical ontologies require a new class of relations that can be used in conjunction with default knowledge, thereby extending those currently in use. The inclusion of default knowledge is necessary in order to ensure interoperability between ontologies. PMID:17925014

  18. XML, Ontologies, and Their Clinical Applications.

    PubMed

    Yu, Chunjiang; Shen, Bairong

    2016-01-01

    The development of information technology has resulted in its penetration into every area of clinical research. Various clinical systems have been developed, which produce increasing volumes of clinical data. However, saving, exchanging, querying, and exploiting these data are challenging issues. The development of Extensible Markup Language (XML) has allowed the generation of flexible information formats to facilitate the electronic sharing of structured data via networks, and it has been used widely for clinical data processing. In particular, XML is very useful in the fields of data standardization, data exchange, and data integration. Moreover, ontologies have been attracting increased attention in various clinical fields in recent years. An ontology is the basic level of a knowledge representation scheme, and various ontology repositories have been developed, such as Gene Ontology and BioPortal. The creation of these standardized repositories greatly facilitates clinical research in related fields. In this chapter, we discuss the basic concepts of XML and ontologies, as well as their clinical applications.
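    The structured-data exchange described above can be sketched in a few lines with Python's standard library. The element and attribute names below are invented for illustration and do not come from any clinical standard:

```python
import xml.etree.ElementTree as ET

# Minimal sketch of exchanging structured clinical data as XML. The element
# and attribute names ("observation", "code", "unit") are invented for
# illustration, not taken from any clinical data standard.
obs = ET.Element("observation", code="glucose", unit="mmol/L")
obs.text = "5.4"
xml_bytes = ET.tostring(obs)
print(xml_bytes.decode())
# → <observation code="glucose" unit="mmol/L">5.4</observation>
```

    Because the markup is self-describing, the receiving system can parse it back with `ET.fromstring` and validate it against an agreed schema, which is what makes XML suitable for standardization and exchange.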

  19. Versioning System for Distributed Ontology Development

    DTIC Science & Technology

    2016-02-02

    Versioning System for Distributed Ontology Development. Suresh K. Damodaran, February 02, 2016. Distribution A: Public Release. EXECUTIVE SUMMARY: Common Cyber Environment Representation (CCER) is an ontology for describing operationally relevant, and technically representative, cyber range event environments. Third-party ontology developers as well as in-house ontology developers contributed to the CCER Ontology. Since the cyber range

  20. An Ontology Based Approach to Information Security

    NASA Astrophysics Data System (ADS)

    Pereira, Teresa; Santos, Henrique

    The semantic structuring of knowledge based on ontology approaches has been increasingly adopted by experts from diverse domains. Recently, ontologies have moved from the philosophical and metaphysical disciplines to be used in the construction of models that describe a specific theory of a domain. The development and use of ontologies promote the creation of a unique standard to represent concepts within a specific knowledge domain. In the scope of information security systems, the use of an ontology to formalize and represent the concepts of security information challenges the mechanisms and techniques currently used. This paper presents a conceptual implementation model of an ontology defined in the security domain. The model presented contains the semantic concepts based on the information security standard ISO/IEC_JTC1, and their relationships to other concepts, defined in a subset of the information security domain.

  1. An ontology for Xenopus anatomy and development

    PubMed Central

    Segerdell, Erik; Bowes, Jeff B; Pollet, Nicolas; Vize, Peter D

    2008-01-01

    Background The frogs Xenopus laevis and Xenopus (Silurana) tropicalis are model systems that have produced a wealth of genetic, genomic, and developmental information. Xenbase is a model organism database that provides centralized access to this information, including gene function data from high-throughput screens and the scientific literature. A controlled, structured vocabulary for Xenopus anatomy and development is essential for organizing these data. Results We have constructed a Xenopus anatomical ontology that represents the lineage of tissues and the timing of their development. We have classified many anatomical features in a common framework that has been adopted by several model organism database communities. The ontology is available for download at the Open Biomedical Ontologies Foundry. Conclusion The Xenopus Anatomical Ontology will be used to annotate Xenopus gene expression patterns and mutant and morphant phenotypes. Its robust developmental map will enable powerful database searches and data analyses. We encourage community recommendations for updates and improvements to the ontology. PMID:18817563

  2. Applying ontological realism to medically unexplained syndromes.

    PubMed

    Doing-Harris, Kristina; Meystre, Stephane M; Samore, Matthew; Ceusters, Werner

    2013-01-01

    The past decade has witnessed an increased interest in what are called "medically unexplained syndromes" (MUS). We address the question of whether structuring the domain knowledge for MUS can be achieved by applying the principles of Ontological Realism in light of criticisms about their usefulness in areas where science has not yet led to insights univocally endorsed by the relevant communities. We analyzed whether the different perspectives held by MUS researchers can be represented without taking any particular stance and whether existing ontologies based on Ontological Realism can be further built upon. We did not find refutation of the applicability of the principles. We found the Ontology of General Medical Science and Information Artifact Ontology to provide useful frameworks for analyzing certain MUS controversies, although leaving other questions open.

  3. Ontology for Genome Comparison and Genomic Rearrangements

    PubMed Central

    Flanagan, Keith; Stevens, Robert; Pocock, Matthew; Lee, Pete

    2004-01-01

    We present an ontology for describing genomes, genome comparisons, their evolution and biological function. This ontology will support the development of novel genome comparison algorithms and aid the community in discussing genomic evolution. It provides a framework for communication about comparative genomics, and a basis upon which further automated analysis can be built. The nomenclature defined by the ontology will foster clearer communication between biologists, and also standardize terms used by data publishers in the results of analysis programs. The overriding aim of this ontology is the facilitation of consistent annotation of genomes through computational methods, rather than human annotators. To this end, the ontology includes definitions that support computer analysis and automated transfer of annotations between genomes, rather than relying upon human mediation. PMID:18629137

  4. Temporal Ontologies for Geoscience: Alignment Challenges

    NASA Astrophysics Data System (ADS)

    Cox, S. J. D.

    2014-12-01

    Time is a central concept in geoscience. Geologic histories are composed of sequences of geologic processes and events. Calibration of their timing ties a local history into a broader context, and enables correlation of events between locations. The geologic timescale is standardized in the International Chronostratigraphic Chart, which specifies interval names and calibrations for the ages of the interval boundaries. Time is also a key concept in the world at large. A number of general-purpose temporal ontologies have been developed, both stand-alone and as parts of general-purpose or upper ontologies. A temporal ontology for geoscience should apply or extend a suitable general-purpose temporal ontology. However, geologic time presents two challenges. First, geology involves greater spans of time than other temporal ontologies address, inconsistent with the year-month-day/hour-minute-second formalization that is a basic assumption of most general-purpose temporal schemes. Second, the geologic timescale is a temporal topology: its calibration in terms of an absolute (numeric) scale is a scientific issue in its own right, supporting a significant community. In contrast, the general-purpose temporal ontologies are premised on exact numeric values for temporal position, and do not allow for temporal topology as a primary structure. We have developed an ontology for the geologic timescale to account for these concerns. It uses the ISO 19108 distinctions between different types of temporal reference system, also linking to an explicit temporal topology model. Stratotypes used in the calibration process are modelled as sampling features following the ISO 19156 Observations and Measurements model. A joint OGC-W3C harmonization project is underway, with standardization of the W3C OWL-Time ontology as one of its tasks. The insights gained from the geologic timescale ontology will assist in the development of a general ontology capable of modelling a richer set of use cases from geoscience.
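    The topology-versus-calibration distinction drawn above can be sketched simply: the ordering of interval boundaries is primary, while numeric age is an optional, revisable attachment. The structure below is a toy illustration, not the ontology's actual model; the age values follow published International Chronostratigraphic Chart figures but are incidental here.

```python
from dataclasses import dataclass
from typing import Optional

# Sketch of the key distinction: the geologic timescale is primarily a
# temporal topology (an ordering of interval boundaries); numeric calibration
# is attached separately and may be absent or revised. Toy model only.

@dataclass
class Boundary:
    name: str
    age_ma: Optional[float] = None  # calibration is optional; topology is not

boundaries = [  # ordered oldest -> youngest: this ordering IS the topology
    Boundary("base Triassic", 251.9),
    Boundary("base Jurassic", 201.4),
    Boundary("base Cretaceous"),  # ordering known even if calibration pending
]

def precedes(a, b):
    """True if boundary a is older than boundary b, by topology alone."""
    return boundaries.index(a) < boundaries.index(b)

print(precedes(boundaries[0], boundaries[2]))  # → True
```

    A general-purpose temporal ontology that requires a numeric position for every instant cannot represent the third boundary above at all, which is exactly the mismatch the abstract identifies.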

  5. Annotation of phenotypic diversity: decoupling data curation and ontology curation using Phenex.

    PubMed

    Balhoff, James P; Dahdul, Wasila M; Dececchi, T Alexander; Lapp, Hilmar; Mabee, Paula M; Vision, Todd J

    2014-01-01

    Phenex (http://phenex.phenoscape.org/) is a desktop application for semantically annotating the phenotypic character matrix datasets common in evolutionary biology. Since its initial publication, we have added new features that address several major bottlenecks in the efficiency of the phenotype curation process: allowing curators during the data curation phase to provisionally request terms that are not yet available from a relevant ontology; supporting quality control against annotation guidelines to reduce later manual review and revision; and enabling the sharing of files for collaboration among curators. We decoupled data annotation from ontology development by creating an Ontology Request Broker (ORB) within Phenex. Curators can use the ORB to request a provisional term for use in data annotation; the provisional term can be automatically replaced with a permanent identifier once the term is added to an ontology. We added a set of annotation consistency checks to prevent common curation errors, reducing the need for later correction. We facilitated collaborative editing by improving the reliability of Phenex when used with online folder sharing services, via file change monitoring and continual autosave. With the addition of these new features, and in particular the Ontology Request Broker, Phenex users have been able to focus more effectively on data annotation. Phenoscape curators using Phenex have reported a smoother annotation workflow, with much reduced interruptions from ontology maintenance and file management issues.

  6. Disease Compass- a navigation system for disease knowledge based on ontology and linked data techniques.

    PubMed

    Kozaki, Kouji; Yamagata, Yuki; Mizoguchi, Riichiro; Imai, Takeshi; Ohe, Kazuhiko

    2017-06-19

    Medical ontologies are expected to contribute to the effective use of medical information resources that store considerable amounts of data. In this study, we focused on disease ontology because the complicated mechanisms of diseases involve concepts across various medical domains. The authors developed a River Flow Model (RFM) of diseases, which captures diseases as causal chains of abnormal states. It represents causes of diseases, disease progression, and downstream consequences of diseases, in a way that is consistent with the intuition of medical experts. In this paper, we discuss a fact repository for causal chains of diseases based on the disease ontology. It could be a valuable knowledge base for advanced medical information systems. We developed the fact repository for causal chains of diseases based on our disease ontology and abnormality ontology; this section summarizes these two ontologies. The repository is published as linked data so that information scientists can access it using SPARQL queries through a Resource Description Framework (RDF) model for causal chains of diseases. We designed the RDF model as an implementation of the RFM for the fact repository, based on the ontological definitions of the RFM. 1554 diseases and 7080 abnormal states in six major clinical areas, extracted from the disease ontology, are published as linked data (RDF) with a SPARQL endpoint (accessible API). Furthermore, the authors developed Disease Compass, a navigation system for disease knowledge. Disease Compass can browse the causal chains of a disease and obtain related information, including abnormal states, through two web services that provide general information from linked data, such as DBpedia, and 3D anatomical images. Disease Compass can provide a complete picture of disease-associated processes in a way that fits a clinician's understanding of diseases, and therefore supports user exploration of disease knowledge with access to pertinent information.
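    The SPARQL access to the causal-chain repository described above might be sketched as follows. The prefix, property names, and disease URI below are invented placeholders for illustration, not the actual vocabulary published by the authors; only the query string is built, no endpoint is contacted.

```python
# Build a SPARQL query walking one step of a disease causal chain.
# All names (ex: prefix, hasAbnormalState, causes, d001) are hypothetical.

def causal_chain_query(disease_uri: str) -> str:
    """Return a SPARQL query selecting a disease's abnormal states
    and, where present, their downstream consequences."""
    return f"""\
PREFIX ex: <http://example.org/disease#>
SELECT ?state ?downstream
WHERE {{
  <{disease_uri}> ex:hasAbnormalState ?state .
  OPTIONAL {{ ?state ex:causes ?downstream . }}
}}"""

query = causal_chain_query("http://example.org/disease/d001")
```

    A client would POST this string to the repository's SPARQL endpoint and receive RDF bindings for each step of the chain.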

  7. The MMI Device Ontology: Enabling Sensor Integration

    NASA Astrophysics Data System (ADS)

    Rueda, C.; Galbraith, N.; Morris, R. A.; Bermudez, L. E.; Graybeal, J.; Arko, R. A.; Mmi Device Ontology Working Group

    2010-12-01

    The Marine Metadata Interoperability (MMI) project has developed an ontology for devices to describe sensors and sensor networks. This ontology is implemented in the W3C Web Ontology Language (OWL) and provides an extensible conceptual model and controlled vocabularies for describing heterogeneous instrument types, with different data characteristics, and their attributes. It can help users populate metadata records for sensors; associate devices with their platforms, deployments, measurement capabilities and restrictions; aid in discovery of sensor data, both historic and real-time; and improve the interoperability of observational oceanographic data sets. We developed the MMI Device Ontology following a community-based approach. By building on and integrating other models and ontologies from related disciplines, we sought to facilitate semantic interoperability while avoiding duplication. Key concepts and insights from various communities, including the Open Geospatial Consortium (e.g., SensorML and Observations and Measurements specifications), Semantic Web for Earth and Environmental Terminology (SWEET), and W3C Semantic Sensor Network Incubator Group, have significantly enriched the development of the ontology. Individuals ranging from instrument designers, science data producers and consumers to ontology specialists and other technologists contributed to the work. Applications of the MMI Device Ontology are underway for several community use cases. These include vessel-mounted multibeam mapping sonars for the Rolling Deck to Repository (R2R) program and description of diverse instruments on deepwater Ocean Reference Stations for the OceanSITES program. These trials involve creation of records completely describing instruments, either by individual instances or by manufacturer and model. Individual terms in the MMI Device Ontology can be referenced with their corresponding Uniform Resource Identifiers (URIs) in sensor-related metadata specifications (e

  8. Ontologies and tag-statistics

    NASA Astrophysics Data System (ADS)

    Tibély, Gergely; Pollner, Péter; Vicsek, Tamás; Palla, Gergely

    2012-05-01

    Due to the increasing popularity of collaborative tagging systems, the research on tagged networks, hypergraphs, ontologies, folksonomies and other related concepts is becoming an important interdisciplinary area with great potential and relevance for practical applications. In most collaborative tagging systems the tagging by the users is completely ‘flat’, while in some cases they are allowed to define a shallow hierarchy for their own tags. However, usually no overall hierarchical organization of the tags is given, and one of the interesting challenges of this area is to provide an algorithm generating the ontology of the tags from the available data. In contrast, there are also other types of tagged networks available for research, where the tags are already organized into a directed acyclic graph (DAG), encapsulating the ‘is a sub-category of’ type of hierarchy between each other. In this paper, we study how this DAG affects the statistical distribution of tags on the nodes marked by the tags in various real networks. The motivation for this research was the fact that understanding the tagging based on a known hierarchy can help in revealing the hidden hierarchy of tags in collaborative tagging systems. We analyse the relation between the tag-frequency and the position of the tag in the DAG in two large sub-networks of the English Wikipedia and a protein-protein interaction network. We also study the tag co-occurrence statistics by introducing a two-dimensional (2D) tag-distance distribution preserving both the difference in the levels and the absolute distance in the DAG for the co-occurring pairs of tags. Our most interesting finding is that the local relevance of tags in the DAG (i.e. their rank or significance as characterized by, e.g., the length of the branches starting from them) is much more important than their global distance from the root. Furthermore, we also introduce a simple tagging model based on random walks on the DAG, capable of
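    The two-dimensional tag distance described above pairs the level difference of two co-occurring tags with their absolute distance in the DAG. A toy sketch on an invented 'is a sub-category of' hierarchy (the tag names and edges are illustrative only, not data from the paper):

```python
from collections import deque

edges = {            # child -> parents ('is a sub-category of')
    "poodle": ["dog"],
    "dog": ["mammal"],
    "cat": ["mammal"],
    "mammal": ["animal"],
}

def level(tag):
    """Depth below the root (hops until a tag with no parent)."""
    d = 0
    while tag in edges:
        tag = edges[tag][0]   # toy DAG: follow the first parent
        d += 1
    return d

def dag_distance(a, b):
    """Undirected shortest-path distance between two tags in the DAG."""
    neigh = {}
    for child, parents in edges.items():
        for p in parents:
            neigh.setdefault(child, set()).add(p)
            neigh.setdefault(p, set()).add(child)
    seen, frontier = {a}, deque([(a, 0)])
    while frontier:
        node, d = frontier.popleft()
        if node == b:
            return d
        for n in neigh.get(node, ()):
            if n not in seen:
                seen.add(n)
                frontier.append((n, d + 1))
    return None

def tag_distance_2d(a, b):
    """(level difference, absolute DAG distance) for a co-occurring pair."""
    return (level(a) - level(b), dag_distance(a, b))
```

    Accumulating `tag_distance_2d` over all co-occurring pairs yields the 2D distribution studied in the paper.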

  9. Non-tables look-up search algorithm for efficient H.264/AVC context-based adaptive variable length coding decoding

    NASA Astrophysics Data System (ADS)

    Han, Yishi; Luo, Zhixiao; Wang, Jianhua; Min, Zhixuan; Qin, Xinyu; Sun, Yunlong

    2014-09-01

    In general, context-based adaptive variable length coding (CAVLC) decoding in the H.264/AVC standard requires frequent access to the unstructured variable length coding tables (VLCTs), consuming a significant number of memory accesses. Heavy memory access causes high power consumption and time delays, which are serious problems for applications in portable multimedia devices. We propose a method for high-efficiency CAVLC decoding that uses a program instead of the VLCTs. The decoded codeword can be obtained without any table look-up or memory access. The experimental results show that the proposed algorithm achieves 100% memory access saving and 40% decoding time saving without degrading video quality. Additionally, the proposed algorithm outperforms conventional CAVLC decoding approaches such as table look-up by sequential search, table look-up by binary search, Moon's method, and Kim's method.
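    The core idea of replacing a VLC table with arithmetic on the bit pattern can be illustrated with the Exp-Golomb codes used elsewhere in H.264, which decode purely by counting leading zeros. This is a simplified stand-in: real CAVLC coeff_token decoding is considerably more involved than this sketch.

```python
# Table-free decoding of an unsigned Exp-Golomb codeword:
# count leading zeros, then read that many info bits. No lookup table.

def decode_exp_golomb(bits: str, pos: int = 0):
    """Decode one codeword from a bit string; return (value, next_pos)."""
    zeros = 0
    while bits[pos + zeros] == "0":
        zeros += 1
    # codeword layout: <zeros x '0'> '1' <zeros info bits>
    info_start = pos + zeros + 1
    info = bits[info_start:info_start + zeros]
    value = (1 << zeros) - 1 + (int(info, 2) if info else 0)
    return value, info_start + zeros
```

    For example, the bitstream `00101` decodes arithmetically to the value 4 with no memory access beyond the input bits themselves.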

  10. SPONGY (SPam ONtoloGY): email classification using two-level dynamic ontology.

    PubMed

    Youn, Seongwook

    2014-01-01

    Email is one of the most common communication methods on the Internet. However, the increase in email misuse and abuse has resulted in a growing volume of spam emails in recent years. An experimental system was designed and implemented under the hypothesis that an ontology-based method would outperform existing techniques, and the experimental results showed that the proposed ontology-based approach indeed improves spam filtering accuracy significantly. In this paper, two levels of ontology spam filters were implemented: a first-level global ontology filter and a second-level user-customized ontology filter. The global ontology filter alone filtered about 91% of spam, which is comparable with other methods. The user-customized ontology filter was created based on the specific user's background as well as the filtering mechanism used in the global ontology filter creation. The main contributions of the paper are (1) to introduce an ontology-based multilevel filtering technique that uses both a global ontology and an individual filter for each user to increase spam filtering accuracy and (2) to create a spam filter in the form of an ontology, which is user-customized, scalable, and modularized, so that it can be embedded into many other systems for better performance.
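    The two-level idea (a global filter whose verdict a user-customized filter can refine or override) can be sketched minimally as below. The keyword sets are invented placeholders, not the ontologies actually learned by the SPONGY system.

```python
# Two-level spam filtering sketch: global verdict first, then a
# per-user filter that can override it. All term lists are hypothetical.

GLOBAL_SPAM_TERMS = {"lottery", "winner", "viagra"}

def make_user_filter(user_spam_terms, user_ham_terms):
    """Second-level filter customized to one user's background."""
    def user_filter(words, global_verdict):
        if user_ham_terms & words:      # user whitelist overrides global
            return False
        if user_spam_terms & words:
            return True
        return global_verdict
    return user_filter

def classify(message: str, user_filter) -> bool:
    """True = spam. Applies the global filter, then the user filter."""
    words = set(message.lower().split())
    global_verdict = bool(GLOBAL_SPAM_TERMS & words)
    return user_filter(words, global_verdict)

# A user who legitimately receives lottery-related mail but considers
# refinancing offers spam:
f = make_user_filter({"refinance"}, {"lottery"})
```

    The user filter is modular: it only sees the word set and the global verdict, so it can be swapped per user without touching the first level.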

  11. SPONGY (SPam ONtoloGY): Email Classification Using Two-Level Dynamic Ontology

    PubMed Central

    2014-01-01

    Email is one of the most common communication methods on the Internet. However, the increase in email misuse and abuse has resulted in a growing volume of spam emails in recent years. An experimental system was designed and implemented under the hypothesis that an ontology-based method would outperform existing techniques, and the experimental results showed that the proposed ontology-based approach indeed improves spam filtering accuracy significantly. In this paper, two levels of ontology spam filters were implemented: a first-level global ontology filter and a second-level user-customized ontology filter. The global ontology filter alone filtered about 91% of spam, which is comparable with other methods. The user-customized ontology filter was created based on the specific user's background as well as the filtering mechanism used in the global ontology filter creation. The main contributions of the paper are (1) to introduce an ontology-based multilevel filtering technique that uses both a global ontology and an individual filter for each user to increase spam filtering accuracy and (2) to create a spam filter in the form of an ontology, which is user-customized, scalable, and modularized, so that it can be embedded into many other systems for better performance. PMID:25254240

  12. Reasoning Based Quality Assurance of Medical Ontologies: A Case Study

    PubMed Central

    Horridge, Matthew; Parsia, Bijan; Noy, Natalya F.; Musen, Mark A.

    2014-01-01

    The World Health Organisation is using OWL as a key technology to develop ICD-11 – the next version of the well-known International Classification of Diseases. Besides providing better opportunities for data integration and linkages to other well-known ontologies such as SNOMED-CT, one of the main promises of using OWL is that it will enable various forms of automated error checking. In this paper we investigate how automated OWL reasoning, along with a Justification Finding Service can be used as a Quality Assurance technique for the development of large and complex ontologies such as ICD-11. Using the International Classification of Traditional Medicine (ICTM) – Chapter 24 of ICD-11 – as a case study, and an expert panel of knowledge engineers, we reveal the kinds of problems that can occur, how they can be detected, and how they can be fixed. Specifically, we found that a logically inconsistent version of the ICTM ontology could be repaired using justifications (minimal entailing subsets of an ontology). Although over 600 justifications for the inconsistency were initially computed, we found that there were three main manageable patterns or categories of justifications involving TBox and ABox axioms. These categories represented meaningful domain errors to an expert panel of ICTM project knowledge engineers, who were able to use them to successfully determine the axioms that needed to be revised in order to fix the problem. All members of the expert panel agreed that the approach was useful for debugging and ensuring the quality of ICTM. PMID:25954373
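    A justification is a minimal entailing subset: removing any axiom from it breaks the entailment. A toy sketch of the idea, using greedy deletion over invented propositional facts and rules rather than the OWL axioms and reasoners actually used for ICTM:

```python
# Toy justification finding: shrink an inconsistent axiom set to a
# minimal entailing subset by greedy deletion. Axioms are either string
# facts or (body, head) rules; 'not_X' together with 'X' is a contradiction.

def inconsistent(axioms):
    """True if forward chaining derives both p and not_p for some p."""
    facts = {a for a in axioms if isinstance(a, str)}
    rules = [a for a in axioms if isinstance(a, tuple)]
    changed = True
    while changed:
        changed = False
        for body, head in rules:
            if body in facts and head not in facts:
                facts.add(head)
                changed = True
    return any(f.startswith("not_") and f[4:] in facts for f in facts)

def justification(axioms):
    """Drop every axiom whose removal preserves the inconsistency."""
    core = list(axioms)
    for a in list(core):
        trial = [x for x in core if x != a]
        if inconsistent(trial):
            core = trial
    return core

kb = ["bird", ("bird", "flies"), ("penguin", "not_flies"), "penguin", "fish"]
j = justification(kb)   # "fish" is irrelevant and gets removed
```

    The four remaining axioms are exactly the ones an engineer would need to inspect, which is the repair workflow the expert panel followed at a much larger scale.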

  13. CiTO, the Citation Typing Ontology

    PubMed Central

    2010-01-01

    CiTO, the Citation Typing Ontology, is an ontology for describing the nature of reference citations in scientific research articles and other scholarly works, both to other such publications and also to Web information resources, and for publishing these descriptions on the Semantic Web. Citations are described in terms of the factual and rhetorical relationships between citing publication and cited publication, the in-text and global citation frequencies of each cited work, and the nature of the cited work itself, including its publication and peer review status. This paper describes CiTO and illustrates its usefulness both for the annotation of bibliographic reference lists and for the visualization of citation networks. The latest version of CiTO, which this paper describes, is CiTO Version 1.6, published on 19 March 2010. CiTO is written in the Web Ontology Language OWL, uses the namespace http://purl.org/net/cito/, and is available from http://purl.org/net/cito/. This site uses content negotiation to deliver to the user an OWLDoc Web version of the ontology if accessed via a Web browser, or the OWL ontology itself if accessed from an ontology management tool such as Protégé 4 (http://protege.stanford.edu/). Collaborative work is currently under way to harmonize CiTO with other ontologies describing bibliographies and the rhetorical structure of scientific discourse. PMID:20626926
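    Content negotiation as described above is driven by the HTTP Accept header: an RDF media type should retrieve the OWL ontology, a browser's HTML preference the OWLDoc pages. A minimal sketch that only builds the request object (no network call is made, and the exact media types the server honours are an assumption):

```python
import urllib.request

def rdf_request(uri: str) -> urllib.request.Request:
    """Request object asking the server for an RDF/XML representation."""
    return urllib.request.Request(uri, headers={"Accept": "application/rdf+xml"})

req = rdf_request("http://purl.org/net/cito/")
# Passing req to urllib.request.urlopen would perform the negotiation.
```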

  14. A Method for Recommending Ontology Alignment Strategies

    NASA Astrophysics Data System (ADS)

    Tan, He; Lambrix, Patrick

    In different areas ontologies have been developed and many of these ontologies contain overlapping information. Often we would therefore want to be able to use multiple ontologies. To obtain good results, we need to find the relationships between terms in the different ontologies, i.e. we need to align them. Currently, there already exist a number of different alignment strategies. However, it is usually difficult for a user that needs to align two ontologies to decide which of the different available strategies are the most suitable. In this paper we propose a method that provides recommendations on alignment strategies for a given alignment problem. The method is based on the evaluation of the different available alignment strategies on several small selected pieces from the ontologies, and uses the evaluation results to provide recommendations. In the paper we give the basic steps of the method, and then illustrate and discuss the method in the setting of an alignment problem with two well-known biomedical ontologies. We also experiment with different implementations of the steps in the method.
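    The recommendation step, evaluating each candidate strategy on a small expert-validated segment and suggesting the best scorer, can be sketched as below. The two toy strategies and the gold-standard mappings are invented for illustration; the paper's actual strategies and evaluation pieces are far richer.

```python
# Recommend an alignment strategy by F-measure on a small gold segment.

def f_measure(proposed, gold):
    """Harmonic mean of precision and recall of proposed mappings."""
    if not proposed:
        return 0.0
    tp = len(proposed & gold)
    if tp == 0:
        return 0.0
    precision, recall = tp / len(proposed), tp / len(gold)
    return 2 * precision * recall / (precision + recall)

def recommend(strategies, segment_pairs, gold):
    """strategies: name -> function(pairs) -> set of proposed mappings."""
    scores = {name: f_measure(s(segment_pairs), gold)
              for name, s in strategies.items()}
    return max(scores, key=scores.get), scores

# Toy segment: candidate term pairs and the expert-validated mappings.
pairs = [("Heart", "heart"), ("Lung", "Lung"), ("Kidney", "renal")]
gold = {("Heart", "heart"), ("Lung", "Lung")}
strategies = {
    "exact": lambda ps: {p for p in ps if p[0] == p[1]},
    "case_insensitive": lambda ps: {p for p in ps if p[0].lower() == p[1].lower()},
}
best, scores = recommend(strategies, pairs, gold)
```

    On this segment the case-insensitive matcher scores higher, so it would be the recommended strategy for the full alignment problem.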

  15. Towards an Ontology of Data Mining Investigations

    NASA Astrophysics Data System (ADS)

    Panov, Panče; Soldatova, Larisa N.; Džeroski, Sašo

    Motivated by the need for unification of the domain of data mining and the demand for a formalized representation of the outcomes of data mining investigations, we address the task of constructing an ontology of data mining. In this paper we present an updated version of the OntoDM ontology, which is based on a recent proposal of a general framework for data mining and is aligned with the Ontology of Biomedical Investigations (OBI). The ontology aims at describing and formalizing entities from the domain of data mining and knowledge discovery. It includes definitions of basic data mining entities (e.g., datatype, dataset, data mining task, data mining algorithm) and allows extensions with more complex data mining entities (e.g., constraints, data mining scenarios, and data mining experiments). Unlike most existing approaches to constructing ontologies of data mining, OntoDM is compliant with best practices in engineering ontologies that describe scientific investigations (e.g., OBI) and is a step towards an ontology of data mining investigations. OntoDM is available at: http://kt.ijs.si/panovp/OntoDM/.

  16. Ontology-Based Multiple Choice Question Generation

    PubMed Central

    Al-Yahya, Maha

    2014-01-01

    With recent advancements in Semantic Web technologies, a new trend in MCQ item generation has emerged through the use of ontologies. Ontologies are knowledge representation structures that formally describe entities in a domain and their relationships, thus enabling automated inference and reasoning. Ontology-based MCQ item generation is still in its infancy, but substantial research efforts are being made in the field. However, the applicability of these models for use in an educational setting has not been thoroughly evaluated. In this paper, we present an experimental evaluation of an ontology-based MCQ item generation system known as OntoQue. The evaluation was conducted using two different domain ontologies. The findings of this study show that ontology-based MCQ generation systems produce satisfactory MCQ items to a certain extent. However, the evaluation also revealed a number of shortcomings with current ontology-based MCQ item generation systems with regard to the educational significance of an automatically constructed MCQ item, the knowledge level it addresses, and its language structure. Furthermore, for the task to be successful in producing high-quality MCQ items for learning assessments, this study suggests a novel, holistic view that incorporates learning content, learning objectives, lexical knowledge, and scenarios into a single cohesive framework. PMID:24982937
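    At its simplest, ontology-based MCQ generation turns an asserted triple into a question stem and draws distractors from other individuals that share the same property. The triples below are invented examples, and this sketch is not OntoQue's actual algorithm:

```python
import random

# Hypothetical ontology assertions: (subject, property, object).
triples = [
    ("Paris", "isCapitalOf", "France"),
    ("Rome", "isCapitalOf", "Italy"),
    ("Berlin", "isCapitalOf", "Germany"),
    ("Madrid", "isCapitalOf", "Spain"),
]

def generate_mcq(key_triple, all_triples, n_distractors=3, seed=0):
    """Build (stem, shuffled options, answer) from one key assertion,
    taking distractors from objects of the same property."""
    subject, prop, answer = key_triple
    pool = [o for s, p, o in all_triples if p == prop and o != answer]
    rng = random.Random(seed)
    distractors = rng.sample(pool, min(n_distractors, len(pool)))
    options = distractors + [answer]
    rng.shuffle(options)
    stem = f"{subject} {prop} ___ ?"
    return stem, options, answer

stem, options, answer = generate_mcq(triples[0], triples)
```

    The evaluation shortcomings the paper reports show up even here: the stem's language is mechanical, and nothing constrains the item's knowledge level, which motivates the holistic framework the authors propose.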

  17. The Ontology Definition Metamodel (ODM)

    NASA Astrophysics Data System (ADS)

    Gaševic, Dragan; Djuric, Dragan; Devedžic, Vladan

    There were four separate proposals for the ODM in response to the OMG's ODM RFP (2003), submitted by the following OMG members: IBM (ODM IBM 2003), Gentleware (ODM Gentleware 2003), DSTC (ODM DSTC 2003), and Sandpiper Software Inc and KSL (ODM Sandpiper&KSL 2003). However, none of those submissions made a comprehensive proposal. For example, none of them proposed XMI bindings for the ODM, none of them proposed mappings between the ODM and OWL, and only IBM (ODM IBM 2003) and Gentleware (ODM Gentleware 2003) proposed an Ontology UML profile. Accordingly, the OMG partners decided to join their efforts, and the current result of their joint efforts is the ODM joint submission (OMG ODM 2004).

  18. Nuclear Nonproliferation Ontology Assessment Team Final Report

    SciTech Connect

    Strasburg, Jana D.; Hohimer, Ryan E.

    2012-01-01

    Final Report for the NA22 Simulations, Algorithm and Modeling (SAM) Ontology Assessment Team's efforts from FY09-FY11. The Ontology Assessment Team began in May 2009 and concluded in September 2011. During this two-year time frame, the Ontology Assessment Team had two objectives: (1) assessing the utility of knowledge representation and semantic technologies for addressing nuclear nonproliferation challenges; and (2) developing ontological support tools that would provide a framework for integrating across the Simulation, Algorithm and Modeling (SAM) program. The SAM Program was going through a large assessment and strategic planning effort during this time and, as a result, the relative importance of these two objectives changed, altering the focus of the Ontology Assessment Team. In the end, the team conducted an assessment of the state of the art, created an annotated bibliography, and developed a series of ontological support tools, demonstrations and presentations. A total of more than 35 individuals from 12 different research institutions participated in the Ontology Assessment Team. These included subject matter experts in several nuclear nonproliferation-related domains as well as experts in semantic technologies. Despite the diverse backgrounds and perspectives, the Ontology Assessment Team functioned very well together, and aspects of it could serve as a model for future inter-laboratory collaborations and working groups. While the team encountered several challenges and learned many lessons along the way, the Ontology Assessment effort was ultimately a success that led to several multi-lab research projects and opened up a new area of scientific exploration within the Office of Nuclear Nonproliferation and Verification.

  19. A Knowledge Engineering Approach to Develop Domain Ontology

    ERIC Educational Resources Information Center

    Yun, Hongyan; Xu, Jianliang; Xiong, Jing; Wei, Moji

    2011-01-01

    Ontologies are one of the most popular and widespread means of knowledge representation and reuse. A few research groups have proposed a series of methodologies for developing their own standard ontologies. However, because this ontological construction concerns special fields, there is no standard method to build domain ontology. In this paper,…

  1. Design and Implementation of Hydrologic Process Knowledge-base Ontology: A case study for the Infiltration Process

    NASA Astrophysics Data System (ADS)

    Elag, M.; Goodall, J. L.

    2013-12-01

    A service is provided for semantic-based querying of the ontology.

  2. Hierarchical Analysis of the Omega Ontology

    SciTech Connect

    Joslyn, Cliff A.; Paulson, Patrick R.

    2009-12-01

    Initial delivery for mathematical analysis of the Omega Ontology. We provide an analysis of the hierarchical structure of a version of the Omega Ontology currently in use within the US Government. After providing an initial statistical analysis of the distribution of all link types in the ontology, we then provide a detailed order theoretical analysis of each of the four main hierarchical links present. This order theoretical analysis includes the distribution of components and their properties, their parent/child and multiple inheritance structure, and the distribution of their vertical ranks.
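    One of the order-theoretic measures mentioned above, the distribution of vertical rank over a hierarchical link type, can be sketched on a toy hierarchy. The link structure below is invented for illustration; the report's analysis runs over the actual Omega Ontology links.

```python
from collections import Counter
from functools import lru_cache

# Hypothetical hierarchy: parent -> children.
children = {
    "root": ["a", "b"],
    "a": ["c"],
    "b": ["c", "d"],   # "c" has multiple parents (multiple inheritance)
    "c": [],
    "d": [],
}

@lru_cache(maxsize=None)
def rank(node):
    """Vertical rank = length of the longest path from the root."""
    parents = [p for p, chs in children.items() if node in chs]
    return 0 if not parents else 1 + max(rank(p) for p in parents)

distribution = Counter(rank(n) for n in children)
```

    The same pass also exposes the multiple-inheritance structure: any node with more than one parent (here "c") inherits its rank from its deepest parent.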

  3. Beyond the Ontology Definition Metamodel: Applications

    NASA Astrophysics Data System (ADS)

    Gaševic, Dragan; Djuric, Dragan; Devedžic, Vladan

    The previous chapters provided a detailed overview of the elements defined in the ODM specification, along with possible tool support and examples of the developed ontologies. In this chapter, we analyze the research results that go beyond the ODM specification and focus on several different applications of the ODM. We start with a description of the first implementation of ODM. Next, we analyze how the ODM-based metamodels can be used for model driven engineering of ontology reasoners. Finally, we show how the ODM is applied in collaboration with many other different languages. This includes UML, programming languages, Semantic Web ontology, and rule languages.

  4. Ontological Stratification in an Ecology of Infohabitants

    NASA Astrophysics Data System (ADS)

    Abramov, V. A.; Goossenaerts, J. B. M.; de Wilde, P.; Correia, L.

    This paper reports progress from the EEII research project where ontological stratification is applied in the study of openness. We explain a stratification approach to reduce the overall complexity of conceptual models, and to enhance their modularity. A distinction is made between ontological and epistemological stratification. The application of the stratification approach to agent system design is explained and illustrated. A preliminary characterization of the relevant strata is given. The wider relevance of this result for information infrastructure design is addressed: ontological stratification will be key to the model management and semantic interoperability in a ubiquitous and model driven information infrastructure.

  5. Matching arthropod anatomy ontologies to the Hymenoptera Anatomy Ontology: results from a manual alignment

    PubMed Central

    Bertone, Matthew A.; Mikó, István; Yoder, Matthew J.; Seltmann, Katja C.; Balhoff, James P.; Deans, Andrew R.

    2013-01-01

    Matching is an important step for increasing interoperability between heterogeneous ontologies. Here, we present alignments we produced as domain experts, using a manual mapping process, between the Hymenoptera Anatomy Ontology and other existing arthropod anatomy ontologies (representing spiders, ticks, mosquitoes and Drosophila melanogaster). The resulting alignments contain from 43 to 368 mappings (correspondences), all derived from domain-expert input. Despite the many pairwise correspondences, only 11 correspondences were found in common between all ontologies, suggesting either major intrinsic differences between each ontology or gaps in representing each group’s anatomy. Furthermore, we compare our findings with putative correspondences from Bioportal (derived from LOOM software) and summarize the results in a total evidence alignment. We briefly discuss characteristics of the ontologies and issues with the matching process. Database URL: http://purl.obolibrary.org/obo/hao/2012-07-18/arthropod-mappings.obo PMID:23303300
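    The common-correspondence count reported above amounts to intersecting, across all pairwise alignments, the set of HAO terms that each alignment maps. A minimal sketch with invented mapping pairs (not the published alignments):

```python
# Hypothetical pairwise alignments: target ontology -> set of
# (HAO term, target term) correspondences.
alignments = {
    "spider": {("hao:head", "spd:head"), ("hao:leg", "spd:leg"),
               ("hao:sternum", "spd:sternum")},
    "tick": {("hao:head", "tck:head"), ("hao:leg", "tck:leg")},
}

def common_hao_terms(alignments):
    """HAO terms that are mapped in every pairwise alignment."""
    per_ontology = [{hao for hao, _ in pairs} for pairs in alignments.values()]
    return set.intersection(*per_ontology)

shared = common_hao_terms(alignments)
```

    A small intersection relative to the pairwise counts, as the authors found, points either to genuine anatomical differences or to gaps in the individual ontologies.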

  6. Matching arthropod anatomy ontologies to the Hymenoptera Anatomy Ontology: results from a manual alignment.

    PubMed

    Bertone, Matthew A; Mikó, István; Yoder, Matthew J; Seltmann, Katja C; Balhoff, James P; Deans, Andrew R

    2013-01-01

    Matching is an important step for increasing interoperability between heterogeneous ontologies. Here, we present alignments we produced as domain experts, using a manual mapping process, between the Hymenoptera Anatomy Ontology and other existing arthropod anatomy ontologies (representing spiders, ticks, mosquitoes and Drosophila melanogaster). The resulting alignments contain from 43 to 368 mappings (correspondences), all derived from domain-expert input. Despite the many pairwise correspondences, only 11 correspondences were found in common between all ontologies, suggesting either major intrinsic differences between each ontology or gaps in representing each group's anatomy. Furthermore, we compare our findings with putative correspondences from Bioportal (derived from LOOM software) and summarize the results in a total evidence alignment. We briefly discuss characteristics of the ontologies and issues with the matching process.

  7. An empirical analysis of ontology reuse in BioPortal.

    PubMed

    Ochs, Christopher; Perl, Yehoshua; Geller, James; Arabandi, Sivaram; Tudorache, Tania; Musen, Mark A

    2017-07-01

    Biomedical ontologies often reuse content (i.e., classes and properties) from other ontologies. Content reuse enables a consistent representation of a domain and reusing content can save an ontology author significant time and effort. Prior studies have investigated the existence of reused terms among the ontologies in the NCBO BioPortal, but as of yet there has not been a study investigating how the ontologies in BioPortal utilize reused content in the modeling of their own content. In this study we investigate how 355 ontologies hosted in the NCBO BioPortal reuse content from other ontologies for the purposes of creating new ontology content. We identified 197 ontologies that reuse content. Among these ontologies, 108 utilize reused classes in the modeling of their own classes and 116 utilize reused properties in class restrictions. Current utilization of reuse and quality issues related to reuse are discussed. Copyright © 2017 Elsevier Inc. All rights reserved.

  8. OpenTox predictive toxicology framework: toxicological ontology and semantic media wiki-based OpenToxipedia.

    PubMed

    Tcheremenskaia, Olga; Benigni, Romualdo; Nikolova, Ivelina; Jeliazkova, Nina; Escher, Sylvia E; Batke, Monika; Baier, Thomas; Poroikov, Vladimir; Lagunin, Alexey; Rautenberg, Micha; Hardy, Barry

    2012-04-24

    The OpenTox Framework, developed by the partners in the OpenTox project (http://www.opentox.org), aims at providing a unified access to toxicity data, predictive models and validation procedures. Interoperability of resources is achieved using a common information model, based on the OpenTox ontologies, describing predictive algorithms, models and toxicity data. As toxicological data may come from different, heterogeneous sources, a deployed ontology, unifying the terminology and the resources, is critical for the rational and reliable organization of the data, and its automatic processing. The following related ontologies have been developed for OpenTox: a) Toxicological ontology - listing the toxicological endpoints; b) Organs system and Effects ontology - addressing organs, targets/examinations and effects observed in in vivo studies; c) ToxML ontology - representing semi-automatic conversion of the ToxML schema; d) OpenTox ontology- representation of OpenTox framework components: chemical compounds, datasets, types of algorithms, models and validation web services; e) ToxLink-ToxCast assays ontology and f) OpenToxipedia community knowledge resource on toxicology terminology.OpenTox components are made available through standardized REST web services, where every compound, data set, and predictive method has a unique resolvable address (URI), used to retrieve its Resource Description Framework (RDF) representation, or to initiate the associated calculations and generate new RDF-based resources.The services support the integration of toxicity and chemical data from various sources, the generation and validation of computer models for toxic effects, seamless integration of new algorithms and scientifically sound validation routines and provide a flexible framework, which allows building arbitrary number of applications, tailored to solving different problems by end users (e.g. toxicologists). 
The OpenTox toxicological ontology projects may be accessed via the Open
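The one-resolvable-URI-per-resource pattern described above can be sketched in a few lines. The host and compound identifier below are illustrative placeholders, not actual OpenTox endpoints, and the request is only constructed, never sent; content negotiation via the Accept header is how a client would ask for the RDF representation.

```python
from urllib.request import Request

# Hypothetical compound URI following the one-address-per-resource
# pattern; host and identifier are illustrative, not a real service.
compound_uri = "http://example.org/opentox/compound/42"

# Content negotiation selects the RDF representation of the resource;
# the request object is only built here, not sent over the network.
req = Request(compound_uri, headers={"Accept": "application/rdf+xml"})

print(req.full_url)               # http://example.org/opentox/compound/42
print(req.get_header("Accept"))   # application/rdf+xml
```

The same URI with a different Accept header would, under this pattern, yield a different representation of the same resource.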

  9. GOPET: a tool for automated predictions of Gene Ontology terms.

    PubMed

    Vinayagam, Arunachalam; del Val, Coral; Schubert, Falk; Eils, Roland; Glatting, Karl-Heinz; Suhai, Sándor; König, Rainer

    2006-03-20

Vast progress in sequencing projects has called for annotation on a large scale. A number of methods have been developed to address this challenging task. These methods, however, either apply to specific subsets, or their predictions are not formalised, or they do not provide precise confidence values for their predictions. We recently established a learning system for automated annotation, trained with a broad variety of different organisms, to predict the standardised annotation terms from Gene Ontology (GO). Now, this method has been made available to the public via our web service GOPET (Gene Ontology term Prediction and Evaluation Tool). It supplies annotation for sequences of any organism. For each predicted term an appropriate confidence value is provided. The basic method had been developed for predicting molecular function GO terms. It has now been expanded to predict biological process terms. This web service is available via http://genius.embnet.dkfz-heidelberg.de/menu/biounit/open-husar. Our web service gives experimental researchers as well as the bioinformatics community a valuable sequence annotation device. Additionally, GOPET also provides less significant annotation data which may serve as an extended discovery platform for the user.

  10. A top-level ontology of functions and its application in the Open Biomedical Ontologies.

    PubMed

    Burek, Patryk; Hoehndorf, Robert; Loebe, Frank; Visagie, Johann; Herre, Heinrich; Kelso, Janet

    2006-07-15

    A clear understanding of functions in biology is a key component in accurate modelling of molecular, cellular and organismal biology. Using the existing biomedical ontologies it has been impossible to capture the complexity of the community's knowledge about biological functions. We present here a top-level ontological framework for representing knowledge about biological functions. This framework lends greater accuracy, power and expressiveness to biomedical ontologies by providing a means to capture existing functional knowledge in a more formal manner. An initial major application of the ontology of functions is the provision of a principled way in which to curate functional knowledge and annotations in biomedical ontologies. Further potential applications include the facilitation of ontology interoperability and automated reasoning. A major advantage of the proposed implementation is that it is an extension to existing biomedical ontologies, and can be applied without substantial changes to these domain ontologies. The Ontology of Functions (OF) can be downloaded in OWL format from http://onto.eva.mpg.de/. Additionally, a UML profile and supplementary information and guides for using the OF can be accessed from the same website.

  11. Probabilistic Ontology Architecture for a Terrorist Identification Decision Support System

    DTIC Science & Technology

    2014-06-01

    ontology is used to capture consensual knowledge about a domain of interest [8]. Selection of the appropriate ontological engineering methodology is...Ontological engineering ensures the development of an explicit, logical and defensible ontologies for knowledge - sharing and reuse that will be...extended to become the TIDPO. 4) Ontological Learning. There are several methods to aid in the knowledge acquisition process required to build an

  12. Modeling biochemical pathways in the gene ontology

    SciTech Connect

    Hill, David P.; D’Eustachio, Peter; Berardini, Tanya Z.; Mungall, Christopher J.; Renedo, Nikolai; Blake, Judith A.

    2016-09-01

    The concept of a biological pathway, an ordered sequence of molecular transformations, is used to collect and represent molecular knowledge for a broad span of organismal biology. Representations of biomedical pathways typically are rich but idiosyncratic presentations of organized knowledge about individual pathways. Meanwhile, biomedical ontologies and associated annotation files are powerful tools that organize molecular information in a logically rigorous form to support computational analysis. The Gene Ontology (GO), representing Molecular Functions, Biological Processes and Cellular Components, incorporates many aspects of biological pathways within its ontological representations. Here we present a methodology for extending and refining the classes in the GO for more comprehensive, consistent and integrated representation of pathways, leveraging knowledge embedded in current pathway representations such as those in the Reactome Knowledgebase and MetaCyc. With carbohydrate metabolic pathways as a use case, we discuss how our representation supports the integration of variant pathway classes into a unified ontological structure that can be used for data comparison and analysis.

  13. The Gene Ontology: enhancements for 2011.

    PubMed

    2012-01-01

    The Gene Ontology (GO) (http://www.geneontology.org) is a community bioinformatics resource that represents gene product function through the use of structured, controlled vocabularies. The number of GO annotations of gene products has increased due to curation efforts among GO Consortium (GOC) groups, including focused literature-based annotation and ortholog-based functional inference. The GO ontologies continue to expand and improve as a result of targeted ontology development, including the introduction of computable logical definitions and development of new tools for the streamlined addition of terms to the ontology. The GOC continues to support its user community through the use of e-mail lists, social media and web-based resources.

  14. The Gene Ontology: enhancements for 2011

    PubMed Central

    2012-01-01

    The Gene Ontology (GO) (http://www.geneontology.org) is a community bioinformatics resource that represents gene product function through the use of structured, controlled vocabularies. The number of GO annotations of gene products has increased due to curation efforts among GO Consortium (GOC) groups, including focused literature-based annotation and ortholog-based functional inference. The GO ontologies continue to expand and improve as a result of targeted ontology development, including the introduction of computable logical definitions and development of new tools for the streamlined addition of terms to the ontology. The GOC continues to support its user community through the use of e-mail lists, social media and web-based resources. PMID:22102568

  15. The pathway ontology - updates and applications.

    PubMed

    Petri, Victoria; Jayaraman, Pushkala; Tutaj, Marek; Hayman, G Thomas; Smith, Jennifer R; De Pons, Jeff; Laulederkind, Stanley Jf; Lowry, Timothy F; Nigam, Rajni; Wang, Shur-Jen; Shimoyama, Mary; Dwinell, Melinda R; Munzenmaier, Diane H; Worthey, Elizabeth A; Jacob, Howard J

    2014-02-05

The Pathway Ontology (PW) developed at the Rat Genome Database (RGD), covers all types of biological pathways, including altered and disease pathways and captures the relationships between them within the hierarchical structure of a directed acyclic graph. The ontology allows for the standardized annotation of rat, and of human and mouse genes to pathway terms. It also constitutes a vehicle for easy navigation between gene and ontology report pages, between reports and interactive pathway diagrams, between pathways directly connected within a diagram and between those that are globally related in pathway suites and suite networks. Surveys of the literature and the development of the Pathway and Disease Portals are important sources for the ongoing development of the ontology. User requests and mapping of pathways in other databases to terms in the ontology further contribute to increasing its content. Recently built automated pipelines use the mapped terms to make available the annotations generated by other groups. The two released pipelines - the Pathway Interaction Database (PID) Annotation Import Pipeline and the Kyoto Encyclopedia of Genes and Genomes (KEGG) Annotation Import Pipeline - make available over 7,400 and 31,000 pathway gene annotations, respectively. Building the PID pipeline led to the addition of new terms within the signaling node, also augmented by the release of the RGD "Immune and Inflammatory Disease Portal" at that time. Building the KEGG pipeline led to a substantial increase in the number of disease pathway terms, such as those within the 'infectious disease pathway' parent term category. The 'drug pathway' node has also seen increases in the number of terms as well as a restructuring of the node. Literature surveys, disease portal deployments and user requests have contributed and continue to contribute additional new terms across the ontology. Since first presented, the content of PW has increased by over 75%. Ongoing development of

  16. The pathway ontology – updates and applications

    PubMed Central

    2014-01-01

Background The Pathway Ontology (PW) developed at the Rat Genome Database (RGD), covers all types of biological pathways, including altered and disease pathways and captures the relationships between them within the hierarchical structure of a directed acyclic graph. The ontology allows for the standardized annotation of rat, and of human and mouse genes to pathway terms. It also constitutes a vehicle for easy navigation between gene and ontology report pages, between reports and interactive pathway diagrams, between pathways directly connected within a diagram and between those that are globally related in pathway suites and suite networks. Surveys of the literature and the development of the Pathway and Disease Portals are important sources for the ongoing development of the ontology. User requests and mapping of pathways in other databases to terms in the ontology further contribute to increasing its content. Recently built automated pipelines use the mapped terms to make available the annotations generated by other groups. Results The two released pipelines – the Pathway Interaction Database (PID) Annotation Import Pipeline and the Kyoto Encyclopedia of Genes and Genomes (KEGG) Annotation Import Pipeline – make available over 7,400 and 31,000 pathway gene annotations, respectively. Building the PID pipeline led to the addition of new terms within the signaling node, also augmented by the release of the RGD “Immune and Inflammatory Disease Portal” at that time. Building the KEGG pipeline led to a substantial increase in the number of disease pathway terms, such as those within the ‘infectious disease pathway’ parent term category. The ‘drug pathway’ node has also seen increases in the number of terms as well as a restructuring of the node. Literature surveys, disease portal deployments and user requests have contributed and continue to contribute additional new terms across the ontology. Since first presented, the content of PW has increased by

  17. Experimental Verification of Look-Up Table Based Real-Time Commutation of 6-DOF Planar Actuators

    NASA Astrophysics Data System (ADS)

    Boeij, Jeroen De; Lomonova, Elena

The control of contactless magnetically levitated planar actuators with stationary coils, moving magnets and 6-DOF is very complicated. In contrast to conventional synchronous AC machines, the forces and torques cannot be decoupled using a sinusoidal commutation scheme. Instead, a feedback linearization law has to be applied as the commutation scheme, which decouples the forces and torques and calculates the currents required to realize the desired forces and torques of the magnetic suspension. This feedback linearization law is based on the coupling matrix that links the current in each coil to the force and torque vector on the actuator. The accurate calculation of this coupling matrix in real time is critical for controlling the planar actuator. In this paper, a look-up-table-based method is used to apply feedback linearization, and the performance of the algorithm is verified with measurements.
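The commutation scheme described above can be sketched numerically. The position grid, coupling-matrix values and coil count below are illustrative stand-ins, not the authors' actuator data: the lookup table stores the coupling matrix K(x) that maps coil currents i to the 6-DOF wrench w = K(x)i, and the minimum-norm currents realizing a desired wrench follow from the pseudoinverse.

```python
import numpy as np

# Illustrative stand-in for a precomputed lookup table: the coupling
# matrix K(x) on a 1-D position grid, 6-DOF wrench rows, 9 coils.
rng = np.random.default_rng(0)
grid_x = np.array([0.0, 1.0])                 # position grid (m), illustrative
K_table = rng.standard_normal((2, 6, 9))

def coupling(x):
    """Linearly interpolate the coupling matrix from the lookup table."""
    t = (x - grid_x[0]) / (grid_x[-1] - grid_x[0])
    return (1 - t) * K_table[0] + t * K_table[1]

def commutate(x, w_desired):
    """Minimum-norm coil currents realizing the desired wrench: i = K+ w."""
    return np.linalg.pinv(coupling(x)) @ w_desired

w = np.array([0.0, 0.0, 9.81, 0.0, 0.0, 0.0])  # e.g. levitation force only
i = commutate(0.3, w)
print(np.allclose(coupling(0.3) @ i, w))       # True: wrench reproduced
```

With more coils than degrees of freedom the system is overactuated, so the pseudoinverse picks the current vector of smallest norm among all that produce the requested wrench.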

  18. Lookup-table-based inverse model for human skin reflectance spectroscopy: two-layered Monte Carlo simulations and experiments.

    PubMed

    Zhong, Xiewei; Wen, Xiang; Zhu, Dan

    2014-01-27

Fiber reflectance spectroscopy is a non-invasive method for diagnosing skin diseases or evaluating aesthetic efficacy, but it depends on the validity of the inverse model. In this work, a lookup-table-based inverse model is developed using two-layered Monte Carlo simulations in order to extract the physiological and optical properties of skin. The melanin volume fraction and blood oxygen parameters are extracted from fiber reflectance spectra of in vivo human skin. The former shows good agreement with a commercial skin-melanin probe, and the latter (based on forearm venous occlusion and ischemia, and a hot-compress experiment) shows that the measurements are in agreement with physiological changes. These results verify the potential of this spectroscopy technique for evaluating the physiological characteristics of human skin.
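The lookup-table inversion idea can be sketched as a best-match search over a precomputed table. The forward function and parameter grids below are illustrative stand-ins for the authors' two-layered Monte Carlo simulations, not their actual model.

```python
import numpy as np

# Illustrative parameter grids; in the paper the table would be filled
# by two-layered Monte Carlo simulations, not by this toy function.
melanin = np.linspace(0.01, 0.10, 10)          # melanin volume fractions
oxygen = np.linspace(0.5, 1.0, 6)              # blood oxygen saturations

def forward(m, s):
    """Stand-in forward model: a smooth 'spectrum' over 450-750 nm."""
    wl = np.linspace(450, 750, 50)
    return np.exp(-m * wl / 100.0) * (0.5 + 0.5 * s * np.sin(wl / 80.0))

# Precompute the lookup table of spectra for every parameter pair.
table = np.array([[forward(m, s) for s in oxygen] for m in melanin])

def invert(measured):
    """Return the (melanin, oxygenation) pair whose table spectrum
    minimizes the sum of squared differences from the measurement."""
    err = ((table - measured) ** 2).sum(axis=-1)
    im, io = np.unravel_index(np.argmin(err), err.shape)
    return melanin[im], oxygen[io]

m_hat, s_hat = invert(forward(0.05, 0.8))
print(m_hat, s_hat)
```

Because the measurement here lies exactly on a grid point, the search recovers it; real inversions would interpolate between table entries or refine with a local fit.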

  19. Ontology-Driven Disability-Aware E-Learning Personalisation with ONTODAPS

    ERIC Educational Resources Information Center

    Nganji, Julius T.; Brayshaw, Mike; Tompsett, Brian

    2013-01-01

    Purpose: The purpose of this paper is to show how personalisation of learning resources and services can be achieved for students with and without disabilities, particularly responding to the needs of those with multiple disabilities in e-learning systems. The paper aims to introduce ONTODAPS, the Ontology-Driven Disability-Aware Personalised…

  1. GFVO: the Genomic Feature and Variation Ontology.

    PubMed

    Baran, Joachim; Durgahee, Bibi Sehnaaz Begum; Eilbeck, Karen; Antezana, Erick; Hoehndorf, Robert; Dumontier, Michel

    2015-01-01

Falling costs in genomic laboratory experiments have led to a steady increase of genomic feature and variation data. Multiple genomic data formats exist for sharing these data, and whilst they are similar, they are addressing slightly different data viewpoints and are consequently not fully compatible with each other. The fragmentation of data format specifications makes it hard to integrate and interpret data for further analysis with information from multiple data providers. As a solution, a new ontology is presented here for annotating and representing genomic feature and variation dataset contents. The Genomic Feature and Variation Ontology (GFVO) specifically addresses genomic data as it is regularly shared using the GFF3 (incl. FASTA), GTF, GVF and VCF file formats. GFVO simplifies data integration and enables linking of genomic annotations across datasets through common semantics of genomic types and relations. Availability and implementation: The latest stable release of the ontology is available via its base URI; previous and development versions are available at the ontology's GitHub repository: https://github.com/BioInterchange/Ontologies; versions of the ontology are indexed through BioPortal (without external class-/property-equivalences due to BioPortal release 4.10 limitations); examples and reference documentation are provided on a separate web page: http://www.biointerchange.org/ontologies.html. GFVO version 1.0.2 is licensed under the CC0 1.0 Universal license (https://creativecommons.org/publicdomain/zero/1.0) and therefore de facto within the public domain; the ontology can be appropriated without attribution for commercial and non-commercial use.

  2. A Chronostratigraphic Relational Database Ontology

    NASA Astrophysics Data System (ADS)

    Platon, E.; Gary, A.; Sikora, P.

    2005-12-01

A chronostratigraphic research database was donated by British Petroleum to the Stratigraphy Group at the Energy and Geoscience Institute (EGI), University of Utah. These data consist of over 2,000 measured sections representing over three decades of research into the application of the graphic correlation method. The data are global and include both microfossil (foraminifera, calcareous nannoplankton, spores, pollen, dinoflagellate cysts, etc) and macrofossil data. The objective of the donation was to make the research data available to the public in order to encourage additional chronostratigraphy studies, specifically regarding graphic correlation. As part of the National Science Foundation's Cyberinfrastructure for the Geosciences (GEON) initiative, these data have been made available to the public at http://css.egi.utah.edu. To encourage further research using the graphic correlation method, EGI has developed a software package, StrataPlot, that will soon be publicly available from the GEON website as a standalone software download. The EGI chronostratigraphy research database, although relatively large, has many data holes relative to some paleontological disciplines and geographical areas, so the challenge becomes how to expand the data available for chronostratigraphic studies using graphic correlation. There are several public or soon-to-be public databases available to chronostratigraphic research, but they have their own data structures and modes of presentation. The heterogeneous nature of these database schemas hinders their integration and makes it difficult for the user to retrieve and consolidate potentially valuable chronostratigraphic data. The integration of these data sources would facilitate rapid and comprehensive data searches, thus helping advance studies in chronostratigraphy. The GEON project will host a number of databases within the geology domain, some of which contain biostratigraphic data. 
Ontologies are being developed to provide

  3. An ontology for major histocompatibility restriction.

    PubMed

    Vita, Randi; Overton, James A; Seymour, Emily; Sidney, John; Kaufman, Jim; Tallmadge, Rebecca L; Ellis, Shirley; Hammond, John; Butcher, Geoff W; Sette, Alessandro; Peters, Bjoern

    2016-01-01

MHC molecules are a highly diverse family of proteins that play a key role in cellular immune recognition. Over time, different techniques and terminologies have been developed to identify the specific type(s) of MHC molecule involved in a specific immune recognition context. No consistent nomenclature exists across different vertebrate species. To correctly represent MHC related data in The Immune Epitope Database (IEDB), we built upon a previously established MHC ontology and created an ontology to represent MHC molecules as they relate to immunological experiments. This ontology models MHC protein chains from 16 species, deals with different approaches used to identify MHC, such as direct sequencing versus serotyping, relates engineered MHC molecules to naturally occurring ones, connects genetic loci, alleles, protein chains and multi-chain proteins, and establishes evidence codes for MHC restriction. Where available, this work is based on existing ontologies from the OBO foundry. Overall, representing MHC molecules provides a challenging and practically important test case for ontology building, and could serve as an example of how to integrate other ontology building efforts into web resources.

  4. Ontological Modeling for Integrated Spacecraft Analysis

    NASA Technical Reports Server (NTRS)

    Wicks, Erica

    2011-01-01

Current spacecraft work as a cooperative group of a number of subsystems. Each of these requires modeling software for development, testing, and prediction. It is the goal of my team to create an overarching software architecture called the Integrated Spacecraft Analysis (ISCA) to aid in deploying the discrete subsystems' models. Such a plan has been attempted in the past, and has failed due to the excessive scope of the project. Our goal in this version of ISCA is to use new resources to reduce the scope of the project, including using ontological models to help link the internal interfaces of subsystems' models with the ISCA architecture. I have created an ontology of functions specific to the modeling system of the navigation system of a spacecraft. The resulting ontology not only links, at an architectural level, language-specific instantiations of the modeling system's code, but also is web-viewable and can act as a documentation standard. This ontology is proof of concept that ontological modeling can aid in the integration necessary for ISCA to work, and can act as the prototype for future ISCA ontologies.

  5. The Development of Ontology from Multiple Databases

    NASA Astrophysics Data System (ADS)

    Kasim, Shahreen; Aswa Omar, Nurul; Fudzee, Mohd Farhan Md; Azhar Ramli, Azizul; Aizi Salamat, Mohamad; Mahdin, Hairulnizam

    2017-08-01

The area of halal industry is the fastest-growing global business across the world. The halal food industry is thus crucial for Muslims all over the world, as it serves to ensure them that the food items they consume daily are syariah compliant. Currently, ontology has been widely used in computer science areas such as heterogeneous information processing on the web, the semantic web, and information retrieval. However, ontology has still not been used widely in the halal industry. Today, the Muslim community still has difficulty verifying the halal status of products in the market, especially foods containing E numbers. This research tried to solve the problem of validating halal status from various halal sources. Various chemical ontologies from multiple databases were found to support this ontology development. The E numbers in this chemical ontology are codes for chemicals that can be used as food additives. With this E-number ontology, the Muslim community could effectively identify and verify the halal status of products in the market.

  6. An Approach to Support Collaborative Ontology Construction.

    PubMed

    Tahar, Kais; Schaaf, Michael; Jahn, Franziska; Kücherer, Christian; Paech, Barbara; Herre, Heinrich; Winter, Alfred

    2016-01-01

    The increasing number of terms used in textbooks for information management (IM) in hospitals makes it difficult for medical informatics students to grasp IM concepts and their interrelations. Formal ontologies which comprehend and represent the essential content of textbooks can facilitate the learning process in IM education. The manual construction of such ontologies is time-consuming and thus very expensive [3]. Moreover, most domain experts lack skills in using a formal language like OWL [2] and usually have no experience with standard editing tools like Protégé http://protege.stanford.edu [4,5]. This paper presents an ontology modeling approach based on Excel2OWL, a self-developed tool which efficiently supports domain experts in collaboratively constructing ontologies from textbooks. This approach was applied to classic IM textbooks, resulting in an ontology called SNIK. Our method facilitates the collaboration between domain experts and ontologists in the development process. Furthermore, the proposed approach enables ontologists to detect modeling errors and also to evaluate and improve the quality of the resulting ontology rapidly. This approach allows us to visualize the modeled textbooks and to analyze their semantics automatically. Hence, it can be used for e-learning purposes, particularly in the field of IM in hospitals.

  7. COHeRE: Cross-Ontology Hierarchical Relation Examination for Ontology Quality Assurance.

    PubMed

    Cui, Licong

Biomedical ontologies play a vital role in healthcare information management, data integration, and decision support. Ontology quality assurance (OQA) is an indispensable part of the ontology engineering cycle. Most existing OQA methods are based on the knowledge provided within the targeted ontology. This paper proposes a novel cross-ontology analysis method, Cross-Ontology Hierarchical Relation Examination (COHeRE), to detect inconsistencies and possible errors in hierarchical relations across multiple ontologies. COHeRE leverages the Unified Medical Language System (UMLS) knowledge source and the MapReduce cloud computing technique for systematic, large-scale ontology quality assurance work. COHeRE consists of three main steps with the UMLS concepts and relations as the input. First, the relations claimed in source vocabularies are filtered and aggregated for each pair of concepts. Second, inconsistent relations are detected if a concept pair is related by different types of relations in different source vocabularies. Finally, the uncovered inconsistent relations are voted on according to their number of occurrences across different source vocabularies. The voting result, together with the inconsistent relations, serves as the output of COHeRE for possible ontological change. The highest votes provide an initial suggestion of how such inconsistencies might be fixed. In UMLS, 138,987 concept pairs were found to have inconsistent relationships across multiple source vocabularies. 40 inconsistent concept pairs involving hierarchical relationships were randomly selected and manually reviewed by a human expert. 95.8% of the inconsistent relations involved in these concept pairs indeed exist in their source vocabularies rather than being introduced by mistake in the UMLS integration process. 73.7% of the concept pairs with a suggested relationship were agreed with by the human expert. The effectiveness of COHeRE indicates that UMLS provides a promising environment to enhance
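The three COHeRE steps (aggregate, detect, vote) can be sketched on made-up records; the concept pairs, relation types and vocabulary names below are illustrative, not UMLS data.

```python
from collections import defaultdict

# Illustrative input records: (concept_a, concept_b, relation, source).
records = [
    ("C1", "C2", "is_a",    "VocabA"),
    ("C1", "C2", "is_a",    "VocabB"),
    ("C1", "C2", "part_of", "VocabC"),
    ("C3", "C4", "is_a",    "VocabA"),
]

# Step 1: aggregate the relations claimed for each concept pair,
# keeping the set of source vocabularies supporting each relation.
claims = defaultdict(lambda: defaultdict(set))
for a, b, rel, src in records:
    claims[(a, b)][rel].add(src)

# Steps 2-3: a pair is inconsistent when different sources claim
# different relation types; vote by the number of supporting sources.
inconsistent = {
    pair: {rel: len(srcs) for rel, srcs in rels.items()}
    for pair, rels in claims.items() if len(rels) > 1
}
print(inconsistent)   # {('C1', 'C2'): {'is_a': 2, 'part_of': 1}}
```

Here ('C1', 'C2') is flagged, and the vote (2 vs 1) suggests 'is_a' as the better-supported relation, mirroring how the highest votes give an initial fix suggestion.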

  8. Developing a modular hydrogeology ontology by extending the SWEET upper-level ontologies

    NASA Astrophysics Data System (ADS)

    Tripathi, Ajay; Babaie, Hassan A.

    2008-09-01

Upper-level ontologies comprise general concepts and properties which need to be extended to include more diverse and specific domain vocabularies. We present the extension of NASA's Semantic Web for Earth and Environmental Terminology (SWEET) ontologies to include part of the hydrogeology domain. We describe a methodology that can be followed by other allied domain experts who intend to adopt the SWEET ontologies in their own discipline. We have maintained the modular design of the SWEET ontologies for maximum extensibility and reusability of our ontology in other fields, to ensure inter-disciplinary knowledge reuse, management, and discovery. The extension of the SWEET ontologies involved identification of the general SWEET concepts (classes) to serve as the super-class of the domain concepts. This was followed by establishing the special inter-relationships between domain concepts (e.g., equivalence for vadose zone and unsaturated zone), and identifying the dependent concepts such as physical properties and units, and their relationship to external concepts. Ontology editing tools such as SWOOP and Protégé were used to analyze and visualize the structure of the existing OWL files. Domain concepts were introduced either as standalone new classes or as subclasses of existing SWEET ontologies. This involved changing the relationships (properties) and/or adding new relationships based on domain theories. In places in the OWL files, the entire structure of the existing concepts needed to be changed to represent the domain concept more meaningfully. Throughout this process, the orthogonal structure of SWEET ontologies was maintained and the consistency of the concepts was tested using the Racer reasoner. Individuals were added to the new concepts to test the modified ontologies. Our work shows that SWEET ontologies can successfully be extended and reused in any field without losing their modular or reference structure, or disrupting their URI links.

  9. COHeRE: Cross-Ontology Hierarchical Relation Examination for Ontology Quality Assurance

    PubMed Central

    Cui, Licong

    2015-01-01

Biomedical ontologies play a vital role in healthcare information management, data integration, and decision support. Ontology quality assurance (OQA) is an indispensable part of the ontology engineering cycle. Most existing OQA methods are based on the knowledge provided within the targeted ontology. This paper proposes a novel cross-ontology analysis method, Cross-Ontology Hierarchical Relation Examination (COHeRE), to detect inconsistencies and possible errors in hierarchical relations across multiple ontologies. COHeRE leverages the Unified Medical Language System (UMLS) knowledge source and the MapReduce cloud computing technique for systematic, large-scale ontology quality assurance work. COHeRE consists of three main steps with the UMLS concepts and relations as the input. First, the relations claimed in source vocabularies are filtered and aggregated for each pair of concepts. Second, inconsistent relations are detected if a concept pair is related by different types of relations in different source vocabularies. Finally, the uncovered inconsistent relations are voted on according to their number of occurrences across different source vocabularies. The voting result, together with the inconsistent relations, serves as the output of COHeRE for possible ontological change. The highest votes provide an initial suggestion of how such inconsistencies might be fixed. In UMLS, 138,987 concept pairs were found to have inconsistent relationships across multiple source vocabularies. 40 inconsistent concept pairs involving hierarchical relationships were randomly selected and manually reviewed by a human expert. 95.8% of the inconsistent relations involved in these concept pairs indeed exist in their source vocabularies rather than being introduced by mistake in the UMLS integration process. 73.7% of the concept pairs with a suggested relationship were agreed with by the human expert. The effectiveness of COHeRE indicates that UMLS provides a promising environment to enhance

  10. The Locus Lookup Tool at MaizeGDB: Identification of Genomic Regions in Maize by Integrating Sequence Information with Physical and Genetic Maps

    USDA-ARS?s Scientific Manuscript database

    Methods to automatically integrate sequence information with physical and genetic maps are scarce. The Locus Lookup Tool enables researchers to define windows of genomic sequence likely to contain loci of interest where only genetic or physical mapping associations are reported. Using the Locus Look...

  11. What is nature capable of? Evidence, ontology and speculative medical humanities.

    PubMed

    Savransky, Martin; Rosengarten, Marsha

    2016-09-01

Expanding on the recent call for a 'critical medical humanities' to intervene in questions of the ontology of health, this article develops what we call a 'speculative' orientation to such interventions in relation to some of the ontological commitments on which contemporary biomedical cultures rest. We argue that crucial to this task is an approach to ontology that treats it not as a question of first principles, but as a matter of the consequences of the images of nature that contemporary biomedical research practices espouse when they make claims to evidence, as well as the possible consequences of imagining different worlds in which health and disease processes partake. By attending to the implicit ontological assumptions involved in the method par excellence of biomedical research, namely the randomised controlled trial (RCT), we argue that the mechanistic ontology that tacitly informs evidence-based biomedical research simultaneously authorises a series of problematic consequences for understanding and intervening practically in the concrete realities of health. As a response, we develop an alternative ontological proposition that regards processes of health and disease as always situated achievements. We show that, without disqualifying RCT-based evidence, such a situated ontology enables one to resist the reduction of the realities of health and disease to biomedicine's current forms of explanation. In so doing, we call for medical humanities scholars to actively engage in the speculative question of what nature may be capable of. Published by the BMJ Publishing Group Limited. For permission to use (where not already granted under a licence) please go to http://www.bmj.com/company/products-services/rights-and-licensing/

  12. Speeding up ontology creation of scientific terms

    NASA Astrophysics Data System (ADS)

    Bermudez, L. E.; Graybeal, J.

    2005-12-01

An ontology is a formal specification of a controlled vocabulary. Ontologies are composed of classes (similar to categories), individuals (members of classes) and properties (attributes of the individuals). Having vocabularies expressed in a formal specification like the Web Ontology Language (OWL) enables interoperability, because OWL can be comprehensively processed by software programs. Two main non-inclusive strategies exist when constructing an ontology: a top-down approach and a bottom-up approach. The former creates the top classes (main concepts) first and then finds the required subclasses and individuals. The latter starts from the individuals and then finds shared properties, promoting the creation of classes. At the Marine Metadata Interoperability (MMI) Initiative we used a bottom-up approach to create ontologies from simple vocabularies (those that are not expressed in a conceptual way). We found that the vocabularies were available in different formats (relational databases, plain files, HTML, XML, PDF) and were sometimes composed of thousands of terms, making the ontology creation process very time consuming. To expedite the conversion process we created a tool, VOC2OWL, that takes a vocabulary in a table-like structure (CSV or TAB format) and a conversion-property file, and automatically creates an ontology. We identified two basic structures of simple vocabularies: flat vocabularies (e.g., a phone directory) and hierarchical vocabularies (e.g., taxonomies). The property file defines a list of attributes for the conversion process for each structure type. The attributes include metadata information (title, description, subject, contributor, urlForMoreInformation), conversion flags (treatAsHierarchy, generateAutoIds) and other conversion information needed to create the ontology (columnForPrimaryClass, columnsToCreateClassesFrom, fileIn, fileOut, namespace, format). We created more than 50 ontologies and
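The table-to-ontology conversion that VOC2OWL performs can be sketched in a few lines. The script below is an illustrative assumption, not VOC2OWL itself: the column names (`class`, `term`, `definition`), the sample vocabulary, and the emitted Turtle serialization are invented for the example.

```python
import csv
import io

# Hypothetical flat vocabulary: a class column plus term rows
# (stand-in for a CSV export of a simple vocabulary).
SAMPLE = """class,term,definition
Instrument,CTD,Conductivity-temperature-depth profiler
Instrument,ADCP,Acoustic Doppler current profiler
Platform,Glider,Autonomous underwater glider
"""

def vocab_to_turtle(csv_text, namespace):
    """Convert a table-like vocabulary into Turtle/OWL text.

    Each distinct value in the 'class' column becomes an owl:Class;
    each row's 'term' becomes a named individual of that class.
    """
    reader = csv.DictReader(io.StringIO(csv_text))
    classes, individuals = set(), []
    for row in reader:
        classes.add(row["class"])
        individuals.append((row["term"], row["class"], row["definition"]))

    lines = [f"@prefix : <{namespace}> .",
             "@prefix owl: <http://www.w3.org/2002/07/owl#> .",
             "@prefix rdfs: <http://www.w3.org/2000/01/rdf-schema#> .",
             ""]
    for cls in sorted(classes):
        lines.append(f":{cls} a owl:Class .")
    for term, cls, definition in individuals:
        lines.append(f':{term} a :{cls} ; rdfs:comment "{definition}" .')
    return "\n".join(lines)

print(vocab_to_turtle(SAMPLE, "http://example.org/vocab#"))
```

A hierarchical vocabulary would additionally emit `rdfs:subClassOf` triples between the class columns, which is roughly what a `treatAsHierarchy` flag would switch on.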

  13. A 2013 workshop: vaccine and drug ontology studies (VDOS 2013).

    PubMed

    Tao, Cui; He, Yongqun; Arabandi, Sivaram

    2014-03-20

The 2013 "Vaccine and Drug Ontology Studies" (VDOS 2013) international workshop series focuses on vaccine- and drug-related ontology modeling and applications. Drugs and vaccines have contributed to dramatic improvements in public health worldwide. Over the last decade, tremendous efforts have been made in the biomedical ontology community to ontologically represent various areas associated with vaccines and drugs - extending existing clinical terminology systems such as SNOMED, RxNorm, NDF-RT, and MedDRA, as well as developing new models such as the Vaccine Ontology. The VDOS workshop series provides a platform for discussing innovative solutions as well as the challenges in the development and applications of biomedical ontologies for representing and analyzing drugs and vaccines, their administration, host immune responses, adverse events, and other related topics. The six full-length papers included in this thematic issue focus on three main areas: (i) ontology development and representation, (ii) ontology mapping, maintaining and auditing, and (iii) ontology applications.

  14. Spectral retrieval of latent heating profiles from TRMM PR data: comparisons of lookup tables from two- and three-dimensional simulations

    NASA Astrophysics Data System (ADS)

    Shige, Shoichi; Takayabu, Yukari N.; Kida, Satoshi; Tao, Wei-Kuo; Zeng, Xiping; L'Ecuyer, Tristan

    2008-12-01

The Spectral Latent Heating (SLH) algorithm was developed to estimate latent heating profiles for the TRMM PR. The method uses PR information (precipitation top height, precipitation rates at the surface and melting level, and rain type) to select heating profiles from lookup tables. Lookup tables for the three rain types-convective, shallow stratiform, and anvil rain (deep stratiform with a melting level)-were derived from numerical simulations of tropical cloud systems from the Tropical Ocean Global Atmosphere (TOGA) Coupled Ocean-Atmosphere Response Experiment (COARE) utilizing a cloud-resolving model (CRM). The two-dimensional ("2D") CRM was used in previous studies. The availability of exponentially increasing computer capabilities has made three-dimensional ("3D") CRM simulations for multiday periods increasingly prevalent. In this study, we compare lookup tables from the 2D and 3D simulations. The lookup table from the 3D simulations yields poorer agreement between the SLH-retrieved heating and the sounding-based heating for the South China Sea Monsoon Experiment (SCSMEX). The level of SLH-estimated maximum heating is lower than that of the sounding-derived one. This is explained by the fact that the 3D lookup table produces stronger convective heating and weaker stratiform heating above the melting level than its 2D counterpart. Condensate generated in and carried over from the convective region is larger in 3D than in 2D, and condensate produced by the stratiform region's own upward motion is smaller in 3D than in 2D.
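The profile-selection step described above amounts to a keyed table lookup followed by scaling. A minimal sketch, assuming invented rain-type keys, height bins, and profile values (the real SLH tables are derived from CRM simulations of TOGA COARE cloud systems):

```python
# Sketch of lookup-table retrieval in the spirit of the SLH algorithm:
# tables are keyed by rain type and precipitation-top-height bin, and the
# stored normalized profile is scaled by the observed surface rain rate.
# All keys, bins, and profile values here are illustrative, not TRMM data.

HEATING_TABLE = {
    # (rain_type, top_height_bin_km): normalized heating profile (K/h per mm/h)
    ("convective", 8): [0.1, 0.4, 0.8, 0.5, 0.2],
    ("convective", 12): [0.1, 0.3, 0.7, 0.9, 0.4],
    ("shallow_stratiform", 4): [0.3, 0.5, 0.1, 0.0, 0.0],
    ("anvil", 12): [-0.2, -0.1, 0.3, 0.6, 0.3],
}

def retrieve_heating(rain_type, top_height_km, surface_rain_mm_h):
    """Select the nearest-bin profile for the rain type and scale it
    by the observed surface rain rate."""
    bins = sorted(h for (rt, h) in HEATING_TABLE if rt == rain_type)
    if not bins:
        raise KeyError(f"no table for rain type {rain_type!r}")
    nearest = min(bins, key=lambda h: abs(h - top_height_km))
    profile = HEATING_TABLE[(rain_type, nearest)]
    return [v * surface_rain_mm_h for v in profile]

print(retrieve_heating("convective", 11, 2.0))
```

The 2D-versus-3D comparison in the study is then a question of which simulated profiles populate the table, not of the retrieval mechanics.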

  15. Ontology Mapping Neural Network: An Approach to Learning and Inferring Correspondences among Ontologies

    ERIC Educational Resources Information Center

    Peng, Yefei

    2010-01-01

    An ontology mapping neural network (OMNN) is proposed in order to learn and infer correspondences among ontologies. It extends the Identical Elements Neural Network (IENN)'s ability to represent and map complex relationships. The learning dynamics of simultaneous (interlaced) training of similar tasks interact at the shared connections of the…

  17. The Semanticscience Integrated Ontology (SIO) for biomedical research and knowledge discovery

    PubMed Central

    2014-01-01

The Semanticscience Integrated Ontology (SIO) is an ontology to facilitate biomedical knowledge discovery. SIO features a simple upper level composed of essential types and relations for the rich description of arbitrary (real, hypothesized, virtual, fictional) objects, processes and their attributes. SIO specifies simple design patterns to describe and associate qualities, capabilities, functions, quantities, and informational entities including textual, geometrical, and mathematical entities, and provides specific extensions in the domains of chemistry, biology, biochemistry, and bioinformatics. SIO provides an ontological foundation for the Bio2RDF linked data for the life sciences project and is used for semantic integration and discovery for SADI-based semantic web services. SIO is freely available to all users under a Creative Commons Attribution license. See website for further information: http://sio.semanticscience.org. PMID:24602174

  18. An ontology-based hierarchical semantic modeling approach to clinical pathway workflows.

    PubMed

    Ye, Yan; Jiang, Zhibin; Diao, Xiaodi; Yang, Dong; Du, Gang

    2009-08-01

This paper proposes an ontology-based approach to modeling clinical pathway workflows at the semantic level, facilitating computerized clinical pathway implementation and efficient delivery of high-quality healthcare services. A clinical pathway ontology (CPO) is formally defined in the Web Ontology Language (OWL) to provide a common semantic foundation for meaningful representation and exchange of pathway-related knowledge. A CPO-based semantic modeling method is then presented to describe clinical pathways as interconnected hierarchical models comprising a top-level outcome flow and an intervention-workflow level along a care timeline. Furthermore, relevant temporal knowledge can be fully represented by combining temporal entities in CPO with temporal rules based on the Semantic Web Rule Language (SWRL). An illustrative example of a clinical pathway for cesarean section shows the applicability of the proposed methodology in enabling structured semantic descriptions of any real clinical pathway.

  19. The Semanticscience Integrated Ontology (SIO) for biomedical research and knowledge discovery.

    PubMed

    Dumontier, Michel; Baker, Christopher Jo; Baran, Joachim; Callahan, Alison; Chepelev, Leonid; Cruz-Toledo, José; Del Rio, Nicholas R; Duck, Geraint; Furlong, Laura I; Keath, Nichealla; Klassen, Dana; McCusker, James P; Queralt-Rosinach, Núria; Samwald, Matthias; Villanueva-Rosales, Natalia; Wilkinson, Mark D; Hoehndorf, Robert

    2014-03-06

The Semanticscience Integrated Ontology (SIO) is an ontology to facilitate biomedical knowledge discovery. SIO features a simple upper level composed of essential types and relations for the rich description of arbitrary (real, hypothesized, virtual, fictional) objects, processes and their attributes. SIO specifies simple design patterns to describe and associate qualities, capabilities, functions, quantities, and informational entities including textual, geometrical, and mathematical entities, and provides specific extensions in the domains of chemistry, biology, biochemistry, and bioinformatics. SIO provides an ontological foundation for the Bio2RDF linked data for the life sciences project and is used for semantic integration and discovery for SADI-based semantic web services. SIO is freely available to all users under a Creative Commons Attribution license. See website for further information: http://sio.semanticscience.org.

  20. Development and use of Ontologies Inside the Neuroscience Information Framework: A Practical Approach

    PubMed Central

    Imam, Fahim T.; Larson, Stephen D.; Bandrowski, Anita; Grethe, Jeffery S.; Gupta, Amarnath; Martone, Maryann E.

    2012-01-01

An initiative of the NIH Blueprint for neuroscience research, the Neuroscience Information Framework (NIF) project advances neuroscience by enabling discovery and access to public research data and tools worldwide through an open source, semantically enhanced search portal. One of the critical components of the overall NIF system, the NIF Standardized Ontologies (NIFSTD), provides an extensive collection of standard neuroscience concepts along with their synonyms and relationships. The knowledge models defined in the NIFSTD ontologies enable an effective concept-based search over heterogeneous types of web-accessible information entities in NIF's production system. NIFSTD covers major domains in neuroscience, including diseases, brain anatomy, cell types, sub-cellular anatomy, small molecules, techniques, and resource descriptors. Since the first production release in 2008, NIF has grown significantly in content and functionality, particularly with respect to the ontologies and ontology-based services that drive the NIF system. We present here the structure, design principles, community engagement, and current state of the NIFSTD ontologies. PMID:22737162

  1. OntoBrowser: a collaborative tool for curation of ontologies by subject matter experts

    PubMed Central

    Ravagli, Carlo; Pognan, Francois

    2017-01-01

    Summary: The lack of controlled terminology and ontology usage leads to incomplete search results and poor interoperability between databases. One of the major underlying challenges of data integration is curating data to adhere to controlled terminologies and/or ontologies. Finding subject matter experts with the time and skills required to perform data curation is often problematic. In addition, existing tools are not designed for continuous data integration and collaborative curation. This results in time-consuming curation workflows that often become unsustainable. The primary objective of OntoBrowser is to provide an easy-to-use online collaborative solution for subject matter experts to map reported terms to preferred ontology (or code list) terms and facilitate ontology evolution. Additional features include web service access to data, visualization of ontologies in hierarchical/graph format and a peer review/approval workflow with alerting. Availability and implementation: The source code is freely available under the Apache v2.0 license. Source code and installation instructions are available at http://opensource.nibr.com. This software is designed to run on a Java EE application server and store data in a relational database. Contact: philippe.marc@novartis.com PMID:27605099

  2. A Prototype Ontology Tool and Interface for Coastal Atlas Interoperability

    NASA Astrophysics Data System (ADS)

    Wright, D. J.; Bermudez, L.; O'Dea, L.; Haddad, T.; Cummins, V.

    2007-12-01

While significant capacity has been built in the field of web-based coastal mapping and informatics in the last decade, little has been done to take stock of the implications of these efforts or to identify best practice in terms of taking lessons learned into consideration. This study reports on the second of two transatlantic workshops that bring together key experts from Europe, the United States and Canada to examine state-of-the-art developments in coastal web atlases (CWA), based on web-enabled geographic information systems (GIS), along with future needs in mapping and informatics for the coastal practitioner community. While multiple benefits are derived from these tailor-made atlases (e.g. speedy access to multiple sources of coastal data and information; economic use of time by avoiding individual contact with different data holders), the potential exists to derive added value from the integration of disparate CWAs, to optimize decision-making at a variety of levels and across themes. The second workshop focused on the development of a strategy to make coastal web atlases interoperable by way of controlled vocabularies and ontologies. The strategy is based on a web-service-oriented architecture and an implementation of Open Geospatial Consortium (OGC) web services, such as Web Feature Services (WFS) and Web Map Service (WMS). Atlases publish Catalogue Services for the Web (CSW) using ISO 19115 metadata and controlled vocabularies encoded as Uniform Resource Identifiers (URIs). URIs allow the terminology of each atlas to be uniquely identified and facilitate mapping of terminologies using semantic web technologies. A domain ontology was also created to formally represent coastal erosion terminology as a use case, with a test linkage of those terms between the Marine Irish Digital Atlas and the Oregon Coastal Atlas. A web interface is being developed to discover coastal hazard themes in distributed coastal atlases as part of a broader International Coastal

  3. Extracting Cross-Ontology Weighted Association Rules from Gene Ontology Annotations.

    PubMed

    Agapito, Giuseppe; Milano, Marianna; Guzzi, Pietro Hiram; Cannataro, Mario

    2016-01-01

Gene Ontology (GO) is a structured repository of concepts (GO terms) that are associated with one or more gene products through a process referred to as annotation. The analysis of annotated data is an important opportunity for bioinformatics. There are different approaches to this analysis; among them, the use of association rules (AR) provides useful knowledge by discovering biologically relevant associations between GO terms that were not previously known. In a previous work, we introduced GO-WAR (Gene Ontology-based Weighted Association Rules), a methodology for extracting weighted association rules from ontology-based annotated datasets. We here adapt the GO-WAR algorithm to mine cross-ontology association rules, i.e., rules that involve GO terms present in the three sub-ontologies of GO. We conduct a deep performance evaluation of GO-WAR by mining publicly available GO annotated datasets, showing how GO-WAR outperforms current state-of-the-art approaches.
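Weighted association-rule mining of the kind GO-WAR performs can be illustrated on a toy annotated dataset. The sketch below is a generic weighted-support formulation, not GO-WAR's actual algorithm; the annotations, term weights (standing in for information content), and threshold are all invented.

```python
from itertools import combinations

# Toy annotated dataset: each gene product maps to a set of GO terms.
ANNOTATIONS = {
    "geneA": {"GO:0001", "GO:0002", "GO:0003"},
    "geneB": {"GO:0001", "GO:0002"},
    "geneC": {"GO:0001", "GO:0003"},
    "geneD": {"GO:0002", "GO:0003"},
}
# Invented per-term weights standing in for information content.
WEIGHT = {"GO:0001": 0.9, "GO:0002": 0.5, "GO:0003": 0.7}

def weighted_rules(annotations, weight, min_wsupport=0.3):
    """Mine pairwise rules t1 -> t2 with weighted support and confidence.

    Weighted support multiplies the co-occurrence frequency by the mean
    weight of the two terms, favoring rare (high-information) terms.
    """
    n = len(annotations)
    rules = []
    for t1, t2 in combinations(sorted(weight), 2):
        both = sum(1 for ts in annotations.values() if t1 in ts and t2 in ts)
        has_t1 = sum(1 for ts in annotations.values() if t1 in ts)
        if both == 0:
            continue
        wsupp = (both / n) * (weight[t1] + weight[t2]) / 2
        conf = both / has_t1
        if wsupp >= min_wsupport:
            rules.append((t1, t2, round(wsupp, 3), round(conf, 3)))
    return rules

print(weighted_rules(ANNOTATIONS, WEIGHT))
```

A cross-ontology variant would simply require `t1` and `t2` to come from different GO sub-ontologies (biological process, molecular function, cellular component) before a rule is kept.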

  4. OpenTox predictive toxicology framework: toxicological ontology and semantic media wiki-based OpenToxipedia

    PubMed Central

    2012-01-01

Background The OpenTox Framework, developed by the partners in the OpenTox project (http://www.opentox.org), aims at providing unified access to toxicity data, predictive models and validation procedures. Interoperability of resources is achieved using a common information model, based on the OpenTox ontologies, describing predictive algorithms, models and toxicity data. As toxicological data may come from different, heterogeneous sources, a deployed ontology, unifying the terminology and the resources, is critical for the rational and reliable organization of the data, and its automatic processing. Results The following related ontologies have been developed for OpenTox: a) Toxicological ontology - listing the toxicological endpoints; b) Organs system and Effects ontology - addressing organs, targets/examinations and effects observed in in vivo studies; c) ToxML ontology - representing semi-automatic conversion of the ToxML schema; d) OpenTox ontology - representation of OpenTox framework components: chemical compounds, datasets, types of algorithms, models and validation web services; e) ToxLink-ToxCast assays ontology and f) OpenToxipedia community knowledge resource on toxicology terminology. OpenTox components are made available through standardized REST web services, where every compound, data set, and predictive method has a unique resolvable address (URI), used to retrieve its Resource Description Framework (RDF) representation, or to initiate the associated calculations and generate new RDF-based resources. The services support the integration of toxicity and chemical data from various sources, the generation and validation of computer models for toxic effects, seamless integration of new algorithms and scientifically sound validation routines, and provide a flexible framework, which allows building an arbitrary number of applications, tailored to solving different problems by end users (e.g. toxicologists). Availability The OpenTox toxicological

  5. GFVO: the Genomic Feature and Variation Ontology

    PubMed Central

    Durgahee, Bibi Sehnaaz Begum; Eilbeck, Karen; Antezana, Erick; Hoehndorf, Robert; Dumontier, Michel

    2015-01-01

Falling costs in genomic laboratory experiments have led to a steady increase of genomic feature and variation data. Multiple genomic data formats exist for sharing these data, and whilst they are similar, they address slightly different data viewpoints and are consequently not fully compatible with each other. The fragmentation of data format specifications makes it hard to integrate and interpret data for further analysis with information from multiple data providers. As a solution, a new ontology is presented here for annotating and representing genomic feature and variation dataset contents. The Genomic Feature and Variation Ontology (GFVO) specifically addresses genomic data as it is regularly shared using the GFF3 (incl. FASTA), GTF, GVF and VCF file formats. GFVO simplifies data integration and enables linking of genomic annotations across datasets through common semantics of genomic types and relations. Availability and implementation. The latest stable release of the ontology is available via its base URI; previous and development versions are available at the ontology's GitHub repository: https://github.com/BioInterchange/Ontologies; versions of the ontology are indexed through BioPortal (without external class-/property-equivalences due to BioPortal release 4.10 limitations); examples and reference documentation are provided on a separate web page: http://www.biointerchange.org/ontologies.html. GFVO version 1.0.2 is licensed under the CC0 1.0 Universal license (https://creativecommons.org/publicdomain/zero/1.0) and is therefore de facto within the public domain; the ontology can be appropriated without attribution for commercial and non-commercial use. PMID:26019997

  6. Ion Channel ElectroPhysiology Ontology (ICEPO) - a case study of text mining assisted ontology development.

    PubMed

    Elayavilli, Ravikumar Komandur; Liu, Hongfang

    2016-01-01

Computational modeling of biological cascades is of great interest to quantitative biologists. Biomedical text has been a rich source of quantitative information. Gathering quantitative parameters and values from biomedical text is one significant challenge in the early steps of computational modeling, as it involves huge manual effort. While automatically extracting such quantitative information from biomedical text may offer some relief, the lack of an ontological representation for a subdomain is an impediment to normalizing textual extractions to a standard representation. This may render textual extractions less meaningful to domain experts. In this work, we propose a rule-based approach to automatically extract relations involving quantitative data from biomedical text describing ion channel electrophysiology. We further translated the quantitative assertions extracted through text mining into a formal representation that may help in constructing an ontology for ion channel events. We have developed the Ion Channel ElectroPhysiology Ontology (ICEPO) by integrating the information represented in closely related ontologies, such as the Cell Physiology Ontology (CPO) and the Cardiac Electro Physiology Ontology (CPEO), with the knowledge provided by domain experts. The rule-based system achieved an overall F-measure of 68.93% in extracting quantitative data assertions on an independently annotated blind data set. We further made an initial attempt at formalizing the quantitative data assertions extracted from the biomedical text into a formal representation that offers the potential to facilitate the integration of text mining into an ontological workflow, a novel aspect of this study. This work is a case study in which we created a platform that provides formal interaction between ontology development and text mining. We have achieved partial success in extracting quantitative assertions from the biomedical text and formalizing them in ontological

  7. Building a semi-automatic ontology learning and construction system for geosciences

    NASA Astrophysics Data System (ADS)

    Babaie, H. A.; Sunderraman, R.; Zhu, Y.

    2013-12-01

We are developing an ontology learning and construction framework that allows continuous, semi-automatic knowledge extraction, verification, validation, and maintenance by a potentially very large group of collaborating domain experts in any geosciences field. The system brings geoscientists from the sidelines to the center stage of ontology building, allowing them to collaboratively construct and enrich new ontologies, and to merge, align, and integrate existing ontologies and tools. These constantly evolving ontologies can more effectively address the community's interests, purposes, tools, and change. The goal is to minimize the cost and time of building ontologies, and to maximize the quality, usability, and adoption of ontologies by the community. Our system will be a domain-independent ontology learning framework that applies natural language processing, allowing users to enter their ontology in a semi-structured form, and a combined Semantic Web and Social Web approach that allows the direct participation of geoscientists who have no experience in the design and development of their domain ontologies. A controlled natural language (CNL) interface and an integrated authoring and editing tool automatically convert syntactically correct CNL text into formal OWL constructs. The WebProtege-based system will allow a potentially large group of geoscientists, from multiple domains, to crowdsource and participate in the structuring of their knowledge model by sharing their knowledge through critiquing, testing, verifying, adopting, and updating of the concept models (ontologies). We will use cloud storage for all data and knowledge base components of the system, such as users, domain ontologies, discussion forums, and semantic wikis that can be accessed and queried by geoscientists in each domain. We will use NoSQL databases such as MongoDB as a service in the cloud environment. MongoDB uses the lightweight JSON format, which makes it convenient and easy to build Web applications using

  8. Semantic Wiki as a Basis for Software Engineering Ontology Evolution

    NASA Astrophysics Data System (ADS)

    Kasisopha, Natsuda; Wongthongtham, Pornpit; Hussain, Farookh Khadeer

Ontology plays a vital role in sharing a common understanding of a domain among groups of people and provides terminology interpretable by machines. Ontologies grow and evolve constantly, but few tools provide an environment to support ontology evolution. This paper introduces a framework to support the management and maintenance underlying the evolution of ontologies, focusing on the Software Engineering Ontology. The proposed framework takes into account users' perspectives on the ontology and keeps track of their comments in a formal manner. We propose the use of a technology such as Semantic MediaWiki as a means to overcome the aforementioned problems.

  9. An ontology for a Robot Scientist.

    PubMed

    Soldatova, Larisa N; Clare, Amanda; Sparkes, Andrew; King, Ross D

    2006-07-15

    A Robot Scientist is a physically implemented robotic system that can automatically carry out cycles of scientific experimentation. We are commissioning a new Robot Scientist designed to investigate gene function in S. cerevisiae. This Robot Scientist will be capable of initiating >1,000 experiments, and making >200,000 observations a day. Robot Scientists provide a unique test bed for the development of methodologies for the curation and annotation of scientific experiments: because the experiments are conceived and executed automatically by computer, it is possible to completely capture and digitally curate all aspects of the scientific process. This new ability brings with it significant technical challenges. To meet these we apply an ontology driven approach to the representation of all the Robot Scientist's data and metadata. We demonstrate the utility of developing an ontology for our new Robot Scientist. This ontology is based on a general ontology of experiments. The ontology aids the curation and annotating of the experimental data and metadata, and the equipment metadata, and supports the design of database systems to hold the data and metadata. EXPO in XML and OWL formats is at: http://sourceforge.net/projects/expo/. All materials about the Robot Scientist project are available at: http://www.aber.ac.uk/compsci/Research/bio/robotsci/.

  10. Integrating systems biology models and biomedical ontologies

    PubMed Central

    2011-01-01

    Background Systems biology is an approach to biology that emphasizes the structure and dynamic behavior of biological systems and the interactions that occur within them. To succeed, systems biology crucially depends on the accessibility and integration of data across domains and levels of granularity. Biomedical ontologies were developed to facilitate such an integration of data and are often used to annotate biosimulation models in systems biology. Results We provide a framework to integrate representations of in silico systems biology with those of in vivo biology as described by biomedical ontologies and demonstrate this framework using the Systems Biology Markup Language. We developed the SBML Harvester software that automatically converts annotated SBML models into OWL and we apply our software to those biosimulation models that are contained in the BioModels Database. We utilize the resulting knowledge base for complex biological queries that can bridge levels of granularity, verify models based on the biological phenomenon they represent and provide a means to establish a basic qualitative layer on which to express the semantics of biosimulation models. Conclusions We establish an information flow between biomedical ontologies and biosimulation models and we demonstrate that the integration of annotated biosimulation models and biomedical ontologies enables the verification of models as well as expressive queries. Establishing a bi-directional information flow between systems biology and biomedical ontologies has the potential to enable large-scale analyses of biological systems that span levels of granularity from molecules to organisms. PMID:21835028

  11. Effective Ontology-Based Data Integration

    NASA Astrophysics Data System (ADS)

    Rosati, Riccardo

The goal of data integration is to provide uniform access to a set of heterogeneous data sources, freeing the user from needing to know where the data are, how they are stored, and how they can be accessed. One of the outcomes of the research work carried out on data integration in recent years is a clear conceptual architecture, comprising a global schema, the source schema, and the mapping between the source and the global schema. In this talk, we present a comprehensive approach to ontology-based data integration. We consider global schemas that are ontologies expressed in OWL, the W3C standard ontology specification language, whereas sources are relations, managed through a data federation tool that wraps the actual data. The mapping language has specific mechanisms for relating values stored at the sources to objects that are instances of concepts in the ontology. By virtue of the careful design that we propose for the various components of a data integration system, answering unions of conjunctive queries can be done through a very efficient technique that reduces this task to standard SQL query evaluation. Finally, we present a management system for ontology-based data integration, called MASTRO-I, which completely implements our approach.

  12. NOA: a novel Network Ontology Analysis method.

    PubMed

    Wang, Jiguang; Huang, Qiang; Liu, Zhi-Ping; Wang, Yong; Wu, Ling-Yun; Chen, Luonan; Zhang, Xiang-Sun

    2011-07-01

Gene ontology analysis has become a popular and important tool in bioinformatics studies, and current ontology analyses are mainly conducted on an individual gene or a gene list. However, recent molecular network analysis reveals that the same list of genes with different interactions may perform different functions. Therefore, it is necessary to consider molecular interactions to correctly and specifically annotate biological networks. Here, we propose a novel Network Ontology Analysis (NOA) method to perform gene ontology enrichment analysis on biological networks. Specifically, NOA first defines a link ontology that assigns functions to interactions based on the known annotations of the joint genes, via optimizing two novel indexes, 'Coverage' and 'Diversity'. Then, NOA generates two alternative reference sets to statistically rank the enriched functional terms for a given biological network. We compare NOA with traditional enrichment analysis methods on several biological networks and find that: (i) NOA can capture the change of functions not only in dynamic transcription regulatory networks but also in rewiring protein interaction networks, while the traditional methods cannot; and (ii) NOA can find more relevant and specific functions than traditional methods in different types of static networks. Furthermore, a freely accessible web server for NOA has been developed at http://www.aporc.org/noa/.
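Enrichment analyses of this kind typically rank terms with a hypergeometric test against a reference set. The sketch below shows that test for a toy network whose edge counts are invented; NOA's actual 'Coverage' and 'Diversity' indexes and reference-set construction are not reproduced here.

```python
from math import comb

def hypergeom_sf(k, M, n, N):
    """P(X >= k) for a hypergeometric draw: N draws from a population of
    size M containing n successes -- the standard enrichment p-value."""
    return sum(
        comb(n, i) * comb(M - n, N - i) for i in range(k, min(n, N) + 1)
    ) / comb(M, N)

# Toy edge-level annotation: the background network has 50 edges, 10 of
# them annotated with some function F; a subnetwork of 8 edges contains
# 5 edges annotated with F. Is F enriched in the subnetwork?
p = hypergeom_sf(k=5, M=50, n=10, N=8)
print(f"enrichment p-value: {p:.4f}")
```

The network-aware twist is that the population counts edges (interactions) rather than genes, so the same gene list with different wiring can yield different enriched terms.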

  13. Text-Content-Analysis based on the Syntactic Correlations between Ontologies

    NASA Astrophysics Data System (ADS)

    Tenschert, Axel; Kotsiopoulos, Ioannis; Koller, Bastian

    The work presented in this chapter is concerned with the analysis of semantic knowledge structures, represented in the form of ontologies, through which Service Level Agreements (SLAs) are enriched with new semantic data. The objective of the enrichment process is to enable SLA negotiation in a way that is much more convenient for Service Users. For this purpose, the deployment of an SLA-Management-System as well as the development of an analyzing procedure for ontologies is required. This chapter refers to the BREIN, FinGrid and LarKC projects. The analyzing procedure examines the syntactic correlations of several ontologies whose focus lies in the field of mechanical engineering. A method for analyzing text and content is developed as part of this procedure. To this end, we introduce a formalism as well as a method for understanding content. The analysis and methods are integrated into an SLA-Management-System, which enables a Service User to interact with the system as a service, negotiating user requests and incorporating the semantic knowledge. Through negotiation between Service User and Service Provider, the analysis procedure addresses user requests by extending the SLAs with semantic knowledge. This increases the economic value of an SLA-Management-System through the enrichment of SLAs with semantic knowledge structures. The main focus of this chapter is the analyzing procedure, namely the Text-Content-Analysis, which provides the mentioned semantic knowledge structures.

  14. Towards a core ontology for integrating ecological and environmental ontologies to enable improved data interoperability

    NASA Astrophysics Data System (ADS)

    Bowers, S.; Madin, J.; Jones, M.; Schildhauer, M.; Ludaescher, B.

    2007-12-01

    Research in the ecological and environmental sciences increasingly relies on the integration of traditionally small, focused studies to form larger datasets for synthetic analyses. However, a broad range of data types, structures, and semantic subtleties occur in ecological data, making data discovery and integration a difficult and time-consuming task. Our work focuses on capturing the subtleties of scientific data through semantic annotations, which involve linking ecological data to concepts and relationships in domain-specific ontologies, thereby enabling more advanced forms of data discovery and integration. A variety of ontologies related to ecological data are actively being developed, ranging from low-level and highly focused vocabularies to high-level models and classifications. However, as the number of ontologies and their included terms increase, organizing these into a coherent framework useful for data annotation becomes increasingly complex (we note that similar issues have been recognized within the molecular biology and bioinformatics communities). We describe a core ontology model for semantic annotation that provides a structured approach for integrating the growing number of ecology-relevant ontologies. The ontology defines the notion of "scientific observation" as a unifying concept for capturing the basic semantics of ecological data. Observations are distinguished at the level of the entity (e.g., location, time, thing, concept), and characteristics of an entity (e.g., height, name, color) are measured (named or classified) as data. The ontology permits observations to be related via context (such as spatial or temporal containment), further supporting the discovery and automated comparison and alignment (e.g., merging) of heterogeneous data. The core ontology also defines a set of extension points that can be used to either directly build new domain ontologies (as extension ontologies), or to provide a common basis to which existing
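
    The observation model described above can be approximated with a small data-structure sketch. This is not the authors' ontology; the class and field names below are invented to illustrate the core idea that an observation pairs an entity with measured characteristics, and that observations are linked via context (e.g. spatial or temporal containment).

```python
from dataclasses import dataclass, field

@dataclass(frozen=True)
class Entity:
    """The thing being observed: a location, time, organism, concept, ..."""
    label: str

@dataclass
class Measurement:
    """A measured (or named/classified) characteristic of the entity."""
    characteristic: str          # e.g. "height"
    value: object                # e.g. 12.3, or "Quercus alba" for a classification
    unit: str = ""               # e.g. "m"; empty for classifications

@dataclass
class Observation:
    entity: Entity
    measurements: list = field(default_factory=list)
    # Contextual observations, e.g. the containing plot (spatial
    # containment) or the sampling event (temporal containment).
    context: list = field(default_factory=list)
```

    Context links are what enable automated comparison and alignment: two height measurements become comparable once their observations share (or can be mapped to) a common spatial and temporal context.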

  15. The ontology model of FrontCRM framework

    NASA Astrophysics Data System (ADS)

    Budiardjo, Eko K.; Perdana, Wira; Franshisca, Felicia

    2013-03-01

    Adoption and implementation of Customer Relationship Management (CRM) is not merely a technological installation; the emphasis is rather on the application of a customer-centric philosophy and culture as a whole. CRM must begin at the level of business strategy, the only level at which thorough organizational changes are possible. The change agenda can then be directed to departmental plans and supported by information technology. Work processes related to the CRM concept include marketing, sales, and services. FrontCRM is developed as a framework to guide the identification of CRM-related business processes, based on the strategic planning approach. This leads to the identification of processes and practices in every process area related to marketing, sales, and services. The ontology model presented in this paper serves as a tool to avoid misunderstanding of the framework, to define practices systematically within each process area, and to find CRM software features related to those practices.

  16. An Earthquake Source Ontology for Seismic Hazard Analysis and Ground Motion Simulation

    NASA Astrophysics Data System (ADS)

    Zechar, J. D.; Jordan, T. H.; Gil, Y.; Ratnakar, V.

    2005-12-01

    Representation of the earthquake source is an important element in seismic hazard analysis and earthquake simulations. Source models span a range of conceptual complexity - from simple time-independent point sources to extended fault slip distributions. Further computational complexity arises because the seismological community has established many source description formats and variations thereof; as a result, conceptually equivalent source models are often expressed in different ways. Despite the resultant practical difficulties, there exists a rich semantic vocabulary for working with earthquake sources. For these reasons, we feel it is appropriate to create a semantic model of earthquake sources using an ontology, a computer science tool from the field of knowledge representation. Unlike the domain of most ontology work to date, earthquake sources can be described by a very precise mathematical framework. Another unusual aspect of developing such an ontology is that earthquake sources are often used as computational objects. A seismologist generally wants more than to simply construct a source and have it be well-formed and properly described; additionally, the source will be used for performing calculations. Representation and manipulation of complex mathematical objects presents a challenge to the ontology development community. In order to enable simulations involving many different types of source models, we have completed preliminary development of a seismic point source ontology. The use of an ontology to represent knowledge provides machine interpretability and the ability to validate logical consistency and completeness. Our ontology, encoded using the OWL Web Ontology Language - a standard from the World Wide Web Consortium - contains the conceptual definitions and relationships necessary for source translation services. For example, specification of strike, dip, rake, and seismic moment will automatically translate into a double

  17. The Porifera Ontology (PORO): enhancing sponge systematics with an anatomy ontology.

    PubMed

    Thacker, Robert W; Díaz, Maria Cristina; Kerner, Adeline; Vignes-Lebbe, Régine; Segerdell, Erik; Haendel, Melissa A; Mungall, Christopher J

    2014-01-01

    Porifera (sponges) are ancient basal metazoans that lack organs. They provide insight into key evolutionary transitions, such as the emergence of multicellularity and the nervous system. In addition, their ability to synthesize unusual compounds offers potential biotechnical applications. However, much of the knowledge of these organisms has not previously been codified in a machine-readable way using modern web standards. The Porifera Ontology is intended as a standardized coding system for sponge anatomical features currently used in systematics. The ontology is available from http://purl.obolibrary.org/obo/poro.owl, or from the project homepage http://porifera-ontology.googlecode.com/. The version referred to in this manuscript is permanently available from http://purl.obolibrary.org/obo/poro/releases/2014-03-06/. By standardizing character representations, we hope to facilitate more rapid description and identification of sponge taxa, to allow integration with other evolutionary database systems, and to perform character mapping across the major clades of sponges to better understand the evolution of morphological features. Future applications of the ontology will focus on creating (1) ontology-based species descriptions; (2) taxonomic keys that use the nested terms of the ontology to more quickly facilitate species identifications; and (3) methods to map anatomical characters onto molecular phylogenies of sponges. In addition to modern taxa, the ontology is being extended to include features of fossil taxa.

  18. Using ontology network structure in text mining.

    PubMed

    Berndt, Donald J; McCart, James A; Luther, Stephen L

    2010-11-13

    Statistical text mining treats documents as bags of words, with a focus on term frequencies within documents and across document collections. Unlike natural language processing (NLP) techniques that rely on an engineered vocabulary or a full-featured ontology, statistical approaches do not make use of domain-specific knowledge. The freedom from biases can be an advantage, but at the cost of ignoring potentially valuable knowledge. The approach proposed here investigates a hybrid strategy based on computing graph measures of term importance over an entire ontology and injecting the measures into the statistical text mining process. As a starting point, we adapt existing search engine algorithms such as PageRank and HITS to determine term importance within an ontology graph. The graph-theoretic approach is evaluated using a smoking data set from the i2b2 National Center for Biomedical Computing, cast as a simple binary classification task for categorizing smoking-related documents, demonstrating consistent improvements in accuracy.
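
    The graph-measure step can be sketched with a compact PageRank over an ontology graph. This is a generic textbook PageRank, not the authors' exact adaptation; the toy graph below is invented, and edges stand for ontology links such as is_a or part_of. The resulting per-term scores are what would be injected as weights into the statistical text mining step.

```python
def pagerank(graph, damping=0.85, iters=50):
    """Iterative PageRank. graph: term -> list of linked terms; every
    term must appear as a key. Dangling terms spread rank uniformly."""
    n = len(graph)
    rank = {t: 1.0 / n for t in graph}
    for _ in range(iters):
        new = {t: (1 - damping) / n for t in graph}
        for t, outs in graph.items():
            targets = outs if outs else list(graph)  # dangling node
            share = damping * rank[t] / len(targets)
            for v in targets:
                new[v] += share
        rank = new
    return rank
```

    On an is_a hierarchy, rank flows toward general terms, so terms central to the ontology's structure receive higher importance than leaves, which is exactly the signal a bag-of-words model lacks.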

  19. Ontology-enriched Visualization of Human Anatomy

    SciTech Connect

    Pouchard, LC

    2005-12-20

    The project focuses on the problem of presenting a human anatomical 3D model associated with other types of human systemic information ranging from physiological to anatomical information while navigating the 3D model. We propose a solution that integrates a visual 3D interface and navigation features with the display of structured information contained in an ontology of anatomy where the structures of the human body are formally and semantically linked. The displayed and annotated anatomy serves as a visual entry point into a patient's anatomy, medical indicators and other information. The ontology of medical information provides labeling to the highlighted anatomical parts in the 3D display. Because of the logical organization and links between anatomical objects found in the ontology and associated 3D model, the analysis of a structure by a physician is greatly enhanced. Navigation within the 3D visualization and between this visualization and objects representing anatomical concepts within the model is also featured.

  20. Modularizing Spatial Ontologies for Assisted Living Systems

    NASA Astrophysics Data System (ADS)

    Hois, Joana

    Assisted living systems are intended to support daily-life activities in user homes by automating and monitoring the behavior of the environment while interacting with the user in a non-intrusive way. The knowledge base of such systems therefore has to define thematically different aspects of the environment, mostly related to space: basic spatial floor plan information, pieces of technical equipment in the environment together with their functions and spatial ranges, activities users can perform, entities that occur in the environment, etc. In this paper, we present thematically different ontologies, each of which describes environmental aspects from a particular perspective. The resulting modular structure allows the selection of application-specific ontologies as necessary. This hides information and reduces complexity in terms of the represented spatial knowledge and the practicability of reasoning. We motivate and present the different spatial ontologies applied to an ambient assisted living application.

  1. Ontology Driven Piecemeal Development of Smart Spaces

    NASA Astrophysics Data System (ADS)

    Ovaska, Eila

    Software development is facing new challenges due to the transformation from product-based software engineering towards integration- and collaboration-based software engineering, which embodies a high degree of dynamism both at design time and at run time. Short times-to-market require cost reduction by maximizing software reuse; openness for new innovations presumes a flexible innovation platform and agile software development; and user satisfaction assumes high quality in a situation-based manner. How can these contradictory requirements be dealt with in software engineering? The main contribution of this paper is a novel approach influenced by business innovation, human-centered design, model-driven development and ontology-oriented design. The approach is called Ontology driven Piecemeal Software Engineering (OPSE). OPSE facilitates incremental software development based on software pieces that follow the design principles defined by means of ontologies. Its key elements are abstraction, aggregation and adaptivity. The approach is intended for and applied to the development of smart spaces.

  2. A Proposed Ontology For Online Healthcare Surveys

    PubMed Central

    Huq, Syed Z; Karras, Bryant T

    2003-01-01

    This paper results from the research efforts of the Clinical Informatics Research Group in building a generalized system for online survey implementation. Key to the success of any generalized survey system is a standard ontology for the differing components of any survey, particularly those sought to be implemented online, over the World Wide Web. In this paper, we introduce the need for generalized survey authoring tools, discuss our methods for elucidating the different components present in many healthcare instruments and classifying them as per existing standards, and later present our proposed ontology for online surveys in the healthcare domain. Next is a more detailed description of the different question types mentioned in this ontology. Finally, we compare some general purpose authoring systems currently available to determine their flexibility in representing these disparate question types (www.cirg.washington.edu/SuML). PMID:14728183

  3. A Posteriori Ontology Engineering for Data-Driven Science

    SciTech Connect

    Gessler, Damian Dg; Joslyn, Cliff A.; Verspoor, Karin M.

    2013-05-28

    Science—and biology in particular—has a rich tradition in categorical knowledge management. This continues today in the generation and use of formal ontologies. Unfortunately, the link between hard data and ontological content is predominately qualitative, not quantitative. The usual approach is to construct ontologies of qualitative concepts, and then annotate the data to the ontologies. This process has seen great value, yet it is laborious, and the success to which ontologies are managing and organizing the full information content of the data is uncertain. An alternative approach is the converse: use the data itself to quantitatively drive ontology creation. Under this model, one generates ontologies at the time they are needed, allowing them to change as more data influences both their topology and their concept space. We outline a combined approach to achieve this, taking advantage of two technologies, the mathematical approach of Formal Concept Analysis (FCA) and the semantic web technologies of the Web Ontology Language (OWL).
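
    The FCA half of the combined approach can be shown concretely. A formal context assigns attributes to objects, and every formal concept is a maximal (extent, intent) pair; the sketch below uses the standard fact that each closed intent is an intersection of object intents. It is a naive, exponential enumeration suitable only for tiny contexts, and the example context is invented.

```python
from itertools import combinations

def formal_concepts(context):
    """context: object -> set of attributes (plain sets). Returns the set
    of formal concepts as (extent, intent) pairs of frozensets."""
    objects = list(context)
    all_attrs = set().union(*context.values())
    # Every closed intent is an intersection of object intents; the
    # empty intersection (top intent) is the full attribute set.
    intents = {frozenset(all_attrs)}
    for r in range(1, len(objects) + 1):
        for combo in combinations(objects, r):
            intents.add(frozenset(set.intersection(*(context[o] for o in combo))))
    return {(frozenset(o for o in objects if intent <= context[o]), intent)
            for intent in intents}
```

    Under the data-driven model described above, the concept lattice computed this way would itself serve as the seed ontology, to be re-derived as new data arrives; translating concepts into OWL classes is the second half of the pipeline.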

  4. Evolution of biomedical ontologies and mappings: Overview of recent approaches.

    PubMed

    Groß, Anika; Pruski, Cédric; Rahm, Erhard

    2016-01-01

    Biomedical ontologies are heavily used to annotate data, and different ontologies are often interlinked by ontology mappings. These ontology-based mappings and annotations are used in many applications and analysis tasks. Since biomedical ontologies are continuously updated, dependent artifacts can become outdated and need to undergo evolution as well. Hence there is a need for largely automated approaches to keep ontology-based mappings up-to-date in the presence of evolving ontologies. In this article, we survey current approaches and novel directions in the context of ontology and mapping evolution. We discuss requirements for mapping adaptation and provide a comprehensive overview of existing approaches. We further identify open challenges and outline ideas for future developments.

  5. OAE: The Ontology of Adverse Events

    PubMed Central

    2014-01-01

    Background A medical intervention is a medical procedure or application intended to relieve or prevent illness or injury. Examples of medical interventions include vaccination and drug administration. After a medical intervention, adverse events (AEs) may occur which lie outside the intended consequences of the intervention. The representation and analysis of AEs are critical to the improvement of public health. Description The Ontology of Adverse Events (OAE), previously named Adverse Event Ontology (AEO), is a community-driven ontology developed to standardize and integrate data relating to AEs arising subsequent to medical interventions, as well as to support computer-assisted reasoning. OAE has over 3,000 terms with unique identifiers, including terms imported from existing ontologies and more than 1,800 OAE-specific terms. In OAE, the term ‘adverse event’ denotes a pathological bodily process in a patient that occurs after a medical intervention. Causal adverse events are defined by OAE as those events that are causal consequences of a medical intervention. OAE represents various adverse events based on patient anatomic regions and clinical outcomes, including symptoms, signs, and abnormal processes. OAE has been used in the analysis of several different sorts of vaccine and drug adverse event data. For example, using the data extracted from the Vaccine Adverse Event Reporting System (VAERS), OAE was used to analyse vaccine adverse events associated with the administrations of different types of influenza vaccines. OAE has also been used to represent and classify the vaccine adverse events cited in package inserts of FDA-licensed human vaccines in the USA. Conclusion OAE is a biomedical ontology that logically defines and classifies various adverse events occurring after medical interventions. OAE has successfully been applied in several adverse event studies. The OAE ontological framework provides a platform for systematic representation and analysis of

  6. An ontology of human developmental anatomy

    PubMed Central

    Hunter, Amy; Kaufman, Matthew H; McKay, Angus; Baldock, Richard; Simmen, Martin W; Bard, Jonathan B L

    2003-01-01

    Human developmental anatomy has been organized as structured lists of the major constituent tissues present during each of Carnegie stages 1–20 (E1–E50, ∼8500 anatomically defined tissue items). For each of these stages, the tissues have been organized as a hierarchy in which an individual tissue is catalogued as part of a larger tissue. Such a formal representation of knowledge is known as an ontology and this anatomical ontology can be used in databases to store, organize and search for data associated with the tissues present at each developmental stage. The anatomical data for compiling these hierarchies comes from the literature, from observations on embryos in the Patten Collection (Ann Arbor, MI, USA) and from comparisons with mouse tissues at similar stages of development. The ontology is available in three versions. The first gives hierarchies of the named tissues present at each Carnegie stage (http://www.ana.ed.ac.uk/anatomy/database/humat/) and is intended to help analyse both normal and abnormal human embryos; it carries hyperlinked notes on some ambiguities in the literature that have been clarified through analysing sectioned material. The second contains many additional subsidiary tissue domains and is intended for handling tissue-associated data (e.g. gene-expression) in a database. This version is available at the humat site and at http://genex.hgu.mrc.ac.uk/Resources/intro.html/, and has been designed to be interoperable with the ontology for mouse developmental anatomy, also available at the genex site. The third gives the second version in GO ontology syntax (with standard IDs for each tissue) and can be downloaded from both the genex and the Open Biological Ontology sites (http://obo.sourceforge.net/). PMID:14620375

  7. OAE: The Ontology of Adverse Events.

    PubMed

    He, Yongqun; Sarntivijai, Sirarat; Lin, Yu; Xiang, Zuoshuang; Guo, Abra; Zhang, Shelley; Jagannathan, Desikan; Toldo, Luca; Tao, Cui; Smith, Barry

    2014-01-01

    A medical intervention is a medical procedure or application intended to relieve or prevent illness or injury. Examples of medical interventions include vaccination and drug administration. After a medical intervention, adverse events (AEs) may occur which lie outside the intended consequences of the intervention. The representation and analysis of AEs are critical to the improvement of public health. The Ontology of Adverse Events (OAE), previously named Adverse Event Ontology (AEO), is a community-driven ontology developed to standardize and integrate data relating to AEs arising subsequent to medical interventions, as well as to support computer-assisted reasoning. OAE has over 3,000 terms with unique identifiers, including terms imported from existing ontologies and more than 1,800 OAE-specific terms. In OAE, the term 'adverse event' denotes a pathological bodily process in a patient that occurs after a medical intervention. Causal adverse events are defined by OAE as those events that are causal consequences of a medical intervention. OAE represents various adverse events based on patient anatomic regions and clinical outcomes, including symptoms, signs, and abnormal processes. OAE has been used in the analysis of several different sorts of vaccine and drug adverse event data. For example, using the data extracted from the Vaccine Adverse Event Reporting System (VAERS), OAE was used to analyse vaccine adverse events associated with the administrations of different types of influenza vaccines. OAE has also been used to represent and classify the vaccine adverse events cited in package inserts of FDA-licensed human vaccines in the USA. OAE is a biomedical ontology that logically defines and classifies various adverse events occurring after medical interventions. OAE has successfully been applied in several adverse event studies. The OAE ontological framework provides a platform for systematic representation and analysis of adverse events and of the factors (e

  8. Food for thought ... A toxicology ontology roadmap.

    PubMed

    Hardy, Barry; Apic, Gordana; Carthew, Philip; Clark, Dominic; Cook, David; Dix, Ian; Escher, Sylvia; Hastings, Janna; Heard, David J; Jeliazkova, Nina; Judson, Philip; Matis-Mitchell, Sherri; Mitic, Dragana; Myatt, Glenn; Shah, Imran; Spjuth, Ola; Tcheremenskaia, Olga; Toldo, Luca; Watson, David; White, Andrew; Yang, Chihae

    2012-01-01

    Foreign substances can have a dramatic and unpredictable adverse effect on human health. In the development of new therapeutic agents, it is essential that the potential adverse effects of all candidates be identified as early as possible. The field of predictive toxicology strives to profile the potential for adverse effects of novel chemical substances before they occur, both with traditional in vivo experimental approaches and increasingly through the development of in vitro and computational methods which can supplement and reduce the need for animal testing. To be maximally effective, the field needs access to the largest possible knowledge base of previous toxicology findings, and such results need to be made available in such a fashion so as to be interoperable, comparable, and compatible with standard toolkits. This necessitates the development of open, public, computable, and standardized toxicology vocabularies and ontologies so as to support the applications required by in silico, in vitro, and in vivo toxicology methods and related analysis and reporting activities. Such ontology development will support data management, model building, integrated analysis, validation and reporting, including regulatory reporting and alternative testing submission requirements as required by guidelines such as the REACH legislation, leading to new scientific advances in a mechanistically-based predictive toxicology. Numerous existing ontology and standards initiatives can contribute to the creation of a toxicology ontology supporting the needs of predictive toxicology and risk assessment. Additionally, new ontologies are needed to satisfy practical use cases and scenarios where gaps currently exist. Developing and integrating these resources will require a well-coordinated and sustained effort across numerous stakeholders engaged in a public-private partnership. In this communication, we set out a roadmap for the development of an integrated toxicology ontology

  9. Primitive Ontology and the Classical World

    NASA Astrophysics Data System (ADS)

    Allori, Valia

    In this chapter, I present the common structure of quantum theories with a primitive ontology (PO), and discuss in what sense the classical world emerges from quantum theories as understood in this framework. In addition, I argue that the PO approach is better at analyzing the classical limit than the rival wave function ontology approach or any other approach in which the classical world is non-reductively "emergent:" even if the classical limit within this framework needs to be fully developed, the difficulties are technical rather than conceptual, while this is not true for the alternatives.

  10. Practical Applications of the Gene Ontology Resource

    NASA Astrophysics Data System (ADS)

    Huntley, Rachael P.; Dimmer, Emily C.; Apweiler, Rolf

    The Gene Ontology (GO) is a controlled vocabulary that represents knowledge about the functional attributes of gene products in a structured manner and can be used in both computational and human analyses. This vocabulary has been used by diverse curation groups to associate functional information to individual gene products in the form of annotations. GO has proven an invaluable resource for evaluating and interpreting the biological significance of large data sets, enabling researchers to create hypotheses to direct their future research. This chapter provides an overview of the Gene Ontology, how it can be used, and tips on getting the most out of GO analyses.

  11. An ontological view of advanced practice nursing.

    PubMed

    Arslanian-Engoren, Cynthia; Hicks, Frank D; Whall, Ann L; Algase, Donna L

    2005-01-01

    Identifying, developing, and incorporating nursing's unique ontological and epistemological perspective into advanced practice nursing practice places priority on delivering care based on research-derived knowledge. Without a clear distinction of our metatheoretical space, we risk blindly adopting the practice values of other disciplines, which may not necessarily reflect those of nursing. A lack of focus may lead current advanced practice nursing curricula and emerging doctorate of nursing practice programs to mirror the logical positivist paradigm and perspective of medicine. This article presents an ontological perspective for advanced practice nursing education, practice, and research.

  12. Semi-automated ontology generation and evolution

    NASA Astrophysics Data System (ADS)

    Stirtzinger, Anthony P.; Anken, Craig S.

    2009-05-01

    Extending the notion of data models or object models, ontology can provide rich semantic definition not only to the meta-data but also to the instance data of domain knowledge, making these semantic definitions available in machine-readable form. However, the generation of an effective ontology is a difficult task involving considerable labor and skill. This paper discusses an Ontology Generation and Evolution Processor (OGEP) aimed at automating this process, only requesting user input when unresolvable ambiguous situations occur. OGEP directly attacks the main barrier which prevents automated (or self-learning) ontology generation: the ability to understand the meaning of artifacts and the relationships the artifacts have to the domain space. OGEP leverages existing lexical-to-ontological mappings in the form of WordNet and the Suggested Upper Merged Ontology (SUMO), integrated with a semantic pattern-based structure referred to as the Semantic Grounding Mechanism (SGM) and implemented as a Corpus Reasoner. The OGEP processing is initiated by a Corpus Parser performing a lexical analysis of the corpus, reading in a document (or corpus) and preparing it for processing by annotating words and phrases. After the Corpus Parser is done, the Corpus Reasoner uses the part-of-speech output to determine the semantic meaning of a word or phrase. The Corpus Reasoner is the crux of the OGEP system, analyzing, extrapolating, and evolving data from free text into cohesive semantic relationships. The Semantic Grounding Mechanism provides a basis for identifying and mapping semantic relationships. By blending together the WordNet lexicon and the SUMO ontological layout, the SGM is given breadth and depth in its ability to extrapolate semantic relationships between domain entities. The combination of all these components results in an innovative approach to user-assisted semantic-based ontology generation. This paper will describe the OGEP technology in the context of the architectural

  13. Ontology-Based Model Of Firm Competitiveness

    NASA Astrophysics Data System (ADS)

    Deliyska, Boryana; Stoenchev, Nikolay

    2010-10-01

    Competitiveness is an important characteristic of every business organization (firm, company, corporation, etc.). It is of great significance for the organization's existence and defines the evaluation criteria of business success at the microeconomic level. Each criterion comprises a set of indicators with specific weight coefficients. In this work, an ontology-based model of firm competitiveness is presented as a set of several mutually connected ontologies. It would be useful for knowledge structuring, standardization and sharing among experts and software engineers who develop applications in the domain. The assessment of the competitiveness of various business organizations could then be generated more effectively.

  14. Terminology Representation Guidelines for Biomedical Ontologies in the Semantic Web Notations

    PubMed Central

    Tao, Cui; Pathak, Jyotishman; Solbrig, Harold R.; Wei, Wei-Qi; Chute, Christopher G.

    2012-01-01

    Terminologies and ontologies are increasingly prevalent in health-care and biomedicine. However they suffer from inconsistent renderings, distribution formats, and syntax that make applications through common terminologies services challenging. To address the problem, one could posit a shared representation syntax, associated schema, and tags. We identified a set of commonly-used elements in biomedical ontologies and terminologies based on our experience with the Common Terminology Services 2 (CTS2) Specification as well as the Lexical Grid (LexGrid) project. We propose guidelines for precisely such a shared terminology model, and recommend tags assembled from SKOS, OWL, Dublin Core, RDF Schema, and DCMI meta-terms. We divide these guidelines into lexical information (e.g. synonyms, and definitions) and semantic information (e.g. hierarchies). The latter we distinguish for use by informal terminologies vs. formal ontologies. We then evaluate the guidelines with a spectrum of widely used terminologies and ontologies to examine how the lexical guidelines are implemented, and whether our proposed guidelines would enhance interoperability. PMID:23026232

  15. Terminology representation guidelines for biomedical ontologies in the semantic web notations.

    PubMed

    Tao, Cui; Pathak, Jyotishman; Solbrig, Harold R; Wei, Wei-Qi; Chute, Christopher G

    2013-02-01

    Terminologies and ontologies are increasingly prevalent in healthcare and biomedicine. However, they suffer from inconsistent renderings, distribution formats, and syntax that make applications through common terminology services challenging. To address the problem, one could posit a shared representation syntax, associated schema, and tags. We identified a set of commonly-used elements in biomedical ontologies and terminologies based on our experience with the Common Terminology Services 2 (CTS2) Specification as well as the Lexical Grid (LexGrid) project. We propose guidelines for precisely such a shared terminology model, and recommend tags assembled from SKOS, OWL, Dublin Core, RDF Schema, and DCMI meta-terms. We divide these guidelines into lexical information (e.g., synonyms and definitions) and semantic information (e.g., hierarchies). The latter we distinguish for use by informal terminologies vs. formal ontologies. We then evaluate the guidelines with a spectrum of widely used terminologies and ontologies to examine how the lexical guidelines are implemented, and whether our proposed guidelines would enhance interoperability.
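
The tag sets these guidelines recommend can be illustrated with a small sketch. The concept identifier, labels and definition below are hypothetical examples, not drawn from the paper; the sketch simply emits a SKOS concept in Turtle, separating lexical tags (skos:prefLabel, skos:altLabel, skos:definition) from a semantic one (skos:broader):

```python
# Hypothetical example: render one terminology entry with the recommended tags.
# "ex:0001", the labels and the definition are invented for illustration.

def concept_to_turtle(curie, pref_label, synonyms, definition, broader=None):
    """Emit a SKOS concept in Turtle: lexical tags first, then hierarchy."""
    lines = [f"{curie} a skos:Concept ;"]
    lines.append(f'    skos:prefLabel "{pref_label}"@en ;')
    for syn in synonyms:  # synonyms are carried by skos:altLabel
        lines.append(f'    skos:altLabel "{syn}"@en ;')
    end = " ;" if broader else " ."
    lines.append(f'    skos:definition "{definition}"@en{end}')
    if broader:  # informal hierarchy: skos:broader instead of OWL subclassing
        lines.append(f"    skos:broader {broader} .")
    return "\n".join(lines)

ttl = concept_to_turtle(
    "ex:0001", "myocardial infarction", ["heart attack"],
    "Necrosis of the myocardium due to obstructed blood supply.",
    broader="ex:0002",
)
print(ttl)
```

For a formal ontology the skos:broader line would instead be expressed with OWL subclass axioms, which is the informal-vs-formal distinction the guidelines draw.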

  16. CODEX: exploration of semantic changes between ontology versions.

    PubMed

    Hartung, Michael; Gross, Anika; Rahm, Erhard

    2012-03-15

    Life science ontologies change substantially over time to meet the requirements of their users and to include the newest domain knowledge. Thus, an important task is to know what has been modified between two versions of an ontology (a diff). This diff should present all performed changes as compactly and understandably as possible. We present CODEX (Complex Ontology Diff Explorer), a tool for determining semantic changes between two versions of an ontology, which users can interactively analyze in multiple ways.
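
The kind of version comparison CODEX builds on can be sketched at its most basic level. This is an illustrative set-based diff of two term lists, not the CODEX algorithm itself, which additionally aggregates such basic changes into compact semantic change operations:

```python
# Illustrative diff of two ontology versions given as {term_id: term_name}.
# CODEX goes further and groups basic changes into higher-level semantic
# operations (e.g. merges and splits); this sketch shows only the base diff.

def ontology_diff(old, new):
    added   = {t: new[t] for t in new.keys() - old.keys()}
    removed = {t: old[t] for t in old.keys() - new.keys()}
    renamed = {t: (old[t], new[t])
               for t in old.keys() & new.keys() if old[t] != new[t]}
    return added, removed, renamed

v1 = {"T:1": "cell growth", "T:2": "budding"}  # hypothetical term IDs/names
v2 = {"T:1": "cell growth", "T:2": "cell budding", "T:3": "autophagy"}
added, removed, renamed = ontology_diff(v1, v2)
print(added)    # {'T:3': 'autophagy'}
print(renamed)  # {'T:2': ('budding', 'cell budding')}
```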

  17. The use of three-parameter rating table lookup programs, RDRAT and PARM3, in hydraulic flow models

    USGS Publications Warehouse

    Sanders, C.L.

    1995-01-01

    Subroutines RDRAT and PARM3 enable computer programs such as the BRANCH open-channel unsteady-flow model to route flows through or over combinations of critical-flow sections, culverts, bridges, road-overflow sections, fixed spillways, and/or dams. The subroutines also obstruct upstream flow to simulate operation of flapper-type tide gates. A multiplier can be applied by date and time to simulate varying numbers of tide gates being open, or alternative construction scenarios for multiple culverts. The subroutines use three-parameter (headwater, tailwater, and discharge) rating table lookup methods. These tables may be prepared manually using other programs that do step-backwater computations or compute flow through bridges and culverts or over dams. The subroutines, therefore, preclude the necessity of incorporating considerable hydraulic computational code into the client program, and provide complete flexibility for users of the model in routing flow through almost any fixed structure or combination of structures. The subroutines are written in Fortran 77, and have minimal exchange of information with the BRANCH model or other possible client programs. The report documents the interpolation methodology, data input requirements, and software.
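
The core three-parameter lookup idea can be sketched as bilinear interpolation of discharge over a headwater/tailwater grid. This is an illustrative reconstruction, not the RDRAT/PARM3 code; the axes, table values and the way the multiplier is applied are assumptions:

```python
# Sketch of a three-parameter (headwater, tailwater, discharge) table lookup
# with bilinear interpolation. Axes, table values and the tide-gate multiplier
# are invented for illustration; they are not the RDRAT/PARM3 implementation.
from bisect import bisect_right

def rating_lookup(hw_axis, tw_axis, q_table, hw, tw, multiplier=1.0):
    def bracket(axis, x):
        # index of the interval containing x, clamped to the table edges
        i = min(max(bisect_right(axis, x) - 1, 0), len(axis) - 2)
        frac = (x - axis[i]) / (axis[i + 1] - axis[i])
        return i, frac
    i, fh = bracket(hw_axis, hw)
    j, ft = bracket(tw_axis, tw)
    q = (q_table[i][j]           * (1 - fh) * (1 - ft)
         + q_table[i + 1][j]     * fh       * (1 - ft)
         + q_table[i][j + 1]     * (1 - fh) * ft
         + q_table[i + 1][j + 1] * fh       * ft)
    return multiplier * q  # e.g. multiplier = number of open tide gates

hw_axis = [10.0, 12.0]     # headwater stage
tw_axis = [8.0, 10.0]      # tailwater stage
q_table = [[100.0, 50.0],  # discharge; rows follow hw_axis, columns tw_axis
           [300.0, 200.0]]
q = rating_lookup(hw_axis, tw_axis, q_table, 11.0, 9.0)
print(q)  # midpoint of the table -> 162.5
```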

  18. Implementation of a digital optical matrix-vector multiplier using a holographic look-up table and residue arithmetic

    NASA Technical Reports Server (NTRS)

    Habiby, Sarry F.

    1987-01-01

    The design and implementation of a digital (numerical) optical matrix-vector multiplier are presented. The objective is to demonstrate the operation of an optical processor designed to minimize computation time in performing a practical computing application. This is done by using the large array of processing elements in a Hughes liquid crystal light valve, and relying on the residue arithmetic representation, a holographic optical memory, and position coded optical look-up tables. In the design, all operations are performed in effectively one light valve response time regardless of matrix size. The features of the design allowing fast computation include the residue arithmetic representation, the mapping approach to computation, and the holographic memory. In addition, other features of the work include a practical light valve configuration for efficient polarization control, a model for recording multiple exposures in silver halides with equal reconstruction efficiency, and using light from an optical fiber for a reference beam source in constructing the hologram. The design can be extended to implement larger matrix arrays without increasing computation time.
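
The residue arithmetic that lets the design avoid carry propagation can be sketched numerically: each integer is held as residues modulo pairwise-coprime moduli, dot products are computed independently per modulus (each such step corresponds to a table lookup in the optical processor), and the result is reconstructed with the Chinese remainder theorem. The moduli below are illustrative, not those of the Hughes implementation:

```python
# Residue-arithmetic matrix-vector multiply. Integers are represented by their
# residues modulo pairwise-coprime moduli; every per-modulus product and sum is
# independent (no carries), mirroring the position-coded optical lookups.
from math import prod

MODULI = (5, 7, 9)  # pairwise coprime; dynamic range = 5 * 7 * 9 = 315

def to_residues(x):
    return tuple(x % m for m in MODULI)

def from_residues(res):
    # Chinese remainder theorem reconstruction into [0, 315)
    M = prod(MODULI)
    total = 0
    for r, m in zip(res, MODULI):
        Mi = M // m
        total += r * Mi * pow(Mi, -1, m)  # pow(..., -1, m): modular inverse
    return total % M

def matvec(A, v):
    v_res = [to_residues(x) for x in v]
    out = []
    for row in A:
        row_res = [to_residues(x) for x in row]
        acc = tuple(sum(a[k] * b[k] for a, b in zip(row_res, v_res)) % m
                    for k, m in enumerate(MODULI))
        out.append(from_residues(acc))
    return out

result = matvec([[1, 2], [3, 4]], [5, 6])
print(result)  # [17, 39]
```

Results are exact as long as every dot product stays below the dynamic range (315 here); a hardware design would choose moduli to cover the required range.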

  19. Developing Learning Materials Using an Ontology of Mathematical Logic

    ERIC Educational Resources Information Center

    Boyatt, Russell; Joy, Mike

    2012-01-01

    Ontologies describe a body of knowledge and give formal structure to a domain by describing concepts and their relationships. The construction of an ontology provides an opportunity to develop a shared understanding and a consistent vocabulary to be used for a given activity. This paper describes the construction of an ontology for an area of…

  20. The Relationship between User Expertise and Structural Ontology Characteristics

    ERIC Educational Resources Information Center

    Waldstein, Ilya Michael

    2014-01-01

    Ontologies are commonly used to support application tasks such as natural language processing, knowledge management, learning, browsing, and search. Literature recommends considering specific context during ontology design, and highlights that a different context is responsible for problems in ontology reuse. However, there is still no clear…

  1. Unsupervised Ontology Generation from Unstructured Text. CRESST Report 827

    ERIC Educational Resources Information Center

    Mousavi, Hamid; Kerr, Deirdre; Iseli, Markus R.

    2013-01-01

    Ontologies are a vital component of most knowledge acquisition systems, and recently there has been a huge demand for generating ontologies automatically since manual or supervised techniques are not scalable. In this paper, we introduce "OntoMiner", a rule-based, iterative method to extract and populate ontologies from unstructured or…

  2. The Relationship between User Expertise and Structural Ontology Characteristics

    ERIC Educational Resources Information Center

    Waldstein, Ilya Michael

    2014-01-01

    Ontologies are commonly used to support application tasks such as natural language processing, knowledge management, learning, browsing, and search. Literature recommends considering specific context during ontology design, and highlights that a different context is responsible for problems in ontology reuse. However, there is still no clear…

  3. Children's Reasoning about Physics within and across Ontological Kinds.

    ERIC Educational Resources Information Center

    Heyman, Gail D.; Phillips, Ann T.; Gelman, Susan A.

    2003-01-01

    Examined reasoning about physics principles within and across ontological kinds among 5- and 7-year-olds and adults. Found that all age groups tended to appropriately generalize what they learned across ontological kinds. Children assumed that principles learned with reference to one ontological kind were more likely to apply within that kind than…

  4. Ontological Approach to Military Knowledge Modeling and Management

    DTIC Science & Technology

    2004-03-01

    federated search mechanism has to reformulate user queries (expressed using the ontology) in the query languages of the different sources (e.g. SQL...ontologies as a common terminology – Unified query to perform federated search • Query processing – Ontology mapping to sources reformulate queries

  5. Children's Reasoning about Physics within and across Ontological Kinds.

    ERIC Educational Resources Information Center

    Heyman, Gail D.; Phillips, Ann T.; Gelman, Susan A.

    2003-01-01

    Examined reasoning about physics principles within and across ontological kinds among 5- and 7-year-olds and adults. Found that all age groups tended to appropriately generalize what they learned across ontological kinds. Children assumed that principles learned with reference to one ontological kind were more likely to apply within that kind than…

  6. Development of an Ontology for Periodontitis.

    PubMed

    Suzuki, Asami; Takai-Igarashi, Takako; Nakaya, Jun; Tanaka, Hiroshi

    2015-01-01

    In the community of clinical dentists and periodontal researchers, there is an obvious demand for a systems model capable of linking the clinical presentation of periodontitis to the underlying molecular knowledge. A computer-readable representation of the processes of disease development will give periodontal researchers opportunities to elucidate the pathways and mechanisms of periodontitis. An ontology for periodontitis can serve as a model for integrating the large variety of factors relating to a complex disease such as chronic inflammation in different organs accompanied by bone remodeling and immune system disorders, a field recently referred to as osteoimmunology. Terms characteristic of descriptions related to the onset and progression of periodontitis were manually extracted from 194 review articles and PubMed abstracts by experts in periodontology. We specified all the relations between the extracted terms and constructed them into an ontology for periodontitis. We also investigated matching between the classes of our ontology and those of the Gene Ontology Biological Process. We developed an ontology for periodontitis called Periodontitis-Ontology (PeriO). The pathological progression of periodontitis is caused by complex, multi-factor interrelationships. PeriO consists of all the concepts required to represent the pathological progression and clinical treatment of periodontitis. The pathological processes were formalized with reference to Basic Formal Ontology and Relation Ontology, which accounts for participants in the processes realized by biological objects such as molecules and cells. We investigated the peculiarity of the biological processes observed in pathological progression and medical treatments for the disease in comparison with Gene Ontology Biological Process (GO-BP) annotations. The results indicated that the peculiarities of PeriO lay in 1) the granularity and context dependency of both conceptualizations, and 2) the causality intrinsic to the pathological processes.

  7. Utilizing a structural meta-ontology for family-based quality assurance of the BioPortal ontologies.

    PubMed

    Ochs, Christopher; He, Zhe; Zheng, Ling; Geller, James; Perl, Yehoshua; Hripcsak, George; Musen, Mark A

    2016-06-01

    An Abstraction Network is a compact summary of an ontology's structure and content. In previous research, we showed that Abstraction Networks support quality assurance (QA) of biomedical ontologies. The development of an Abstraction Network and its associated QA methodologies, however, is a labor-intensive process that previously was applicable only to one ontology at a time. To improve the efficiency of the Abstraction-Network-based QA methodology, we introduced a QA framework that uses uniform Abstraction Network derivation techniques and QA methodologies that are applicable to whole families of structurally similar ontologies. For the family-based framework to be successful, it is necessary to develop a method for classifying ontologies into structurally similar families. We now describe a structural meta-ontology that classifies ontologies according to certain structural features that are commonly used in the modeling of ontologies (e.g., object properties) and that are important for Abstraction Network derivation. Each class of the structural meta-ontology represents a family of ontologies with identical structural features, indicating which types of Abstraction Networks and QA methodologies are potentially applicable to all of the ontologies in the family. We derive a collection of 81 families, corresponding to classes of the structural meta-ontology, that enable a flexible, streamlined family-based QA methodology, offering multiple choices for classifying an ontology. The structure of 373 ontologies from the NCBO BioPortal is analyzed and each ontology is classified into multiple families modeled by the structural meta-ontology. Copyright © 2016 Elsevier Inc. All rights reserved.
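
The classification idea can be sketched simply: map each ontology to the tuple of structural features it uses, and treat ontologies sharing a tuple as one family. The feature names and corpus entries below are hypothetical illustrations, not the paper's actual meta-ontology classes or BioPortal data:

```python
# Hypothetical sketch: ontologies that exhibit the same set of structural
# features fall into the same family. Feature names and corpus entries are
# invented; the real meta-ontology models 81 such families over 373 ontologies.
from collections import defaultdict

FEATURES = ("object_properties", "multiple_parents", "attribute_relationships")

def family_of(ontology):
    # the family key is the sorted tuple of features the ontology exhibits
    return tuple(sorted(f for f in FEATURES if ontology["features"].get(f)))

def group_into_families(ontologies):
    families = defaultdict(list)
    for onto in ontologies:
        families[family_of(onto)].append(onto["name"])
    return dict(families)

corpus = [
    {"name": "Onto-A", "features": {"object_properties": True}},
    {"name": "Onto-B", "features": {"object_properties": True}},
    {"name": "Onto-C", "features": {"multiple_parents": True}},
]
fams = group_into_families(corpus)
print(fams)  # Onto-A and Onto-B share a family; Onto-C is in another
```

Grouping by shared structure is what lets one Abstraction Network derivation technique and QA methodology apply to every ontology in a family at once.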

  8. The Ontology of Interactive Art

    ERIC Educational Resources Information Center

    Lopes, Dominic M. McIver

    2001-01-01

    Developments in the art world seem always to keep one step ahead of philosophical attempts to characterize the nature and value of art. A pessimist may conclude that theories of art are doomed to failure. But those more optimistic about the prospects for progress in philosophy may retort that avant-garde art does philosophers a great service. It…

  10. Ontology of Earth's nonlinear dynamic complex systems

    NASA Astrophysics Data System (ADS)

    Babaie, Hassan; Davarpanah, Armita

    2017-04-01

    As a complex system, Earth and its major integrated and dynamically interacting subsystems (e.g., hydrosphere, atmosphere) display nonlinear behavior in response to internal and external influences. The Earth Nonlinear Dynamic Complex Systems (ENDCS) ontology formally represents the semantics of the knowledge about the nonlinear system element (agent) behavior, function, and structure, inter-agent and agent-environment feedback loops, and the emergent collective properties of the whole complex system as the result of interaction of the agents with other agents and their environment. It also models nonlinear concepts such as aperiodic, random chaotic behavior, sensitivity to initial conditions, bifurcation of dynamic processes, levels of organization, self-organization, aggregated and isolated functionality, and emergence of collective complex behavior at the system level. By incorporating several existing ontologies, the ENDCS ontology represents the dynamic system variables and the rules of transformation of their state, emergent state, and other features of complex systems such as the trajectories in state (phase) space (attractor and strange attractor), basins of attractions, basin divide (separatrix), fractal dimension, and system's interface to its environment. The ontology also defines different object properties that change the system behavior, function, and structure and trigger instability. ENDCS will help to integrate the data and knowledge related to the five complex subsystems of Earth by annotating common data types, unifying the semantics of shared terminology, and facilitating interoperability among different fields of Earth science.

  11. Bacterial Virus Ontology; Coordinating across Databases.

    PubMed

    Hulo, Chantal; Masson, Patrick; Toussaint, Ariane; Osumi-Sutherland, David; de Castro, Edouard; Auchincloss, Andrea H; Poux, Sylvain; Bougueleret, Lydie; Xenarios, Ioannis; Le Mercier, Philippe

    2017-05-23

    Bacterial viruses, also called bacteriophages, display a great genetic diversity and utilize unique processes for infecting and reproducing within a host cell. All these processes were investigated and indexed in the ViralZone knowledge base. To facilitate standardizing data, a simple ontology of viral life-cycle terms was developed to provide a common vocabulary for annotating data sets. New terminology was developed to address unique viral replication cycle processes, and existing terminology was modified and adapted. Classically, the viral life-cycle is described by schematic pictures. Using this ontology, it can be represented by a combination of successive events: entry, latency, transcription/replication, host-virus interactions and virus release. Each of these parts is broken down into discrete steps. For example, enterobacteria phage lambda entry is broken down into: viral attachment to host adhesion receptor, viral attachment to host entry receptor, viral genome ejection and viral genome circularization. To demonstrate the utility of a standard ontology for virus biology, this work was completed by annotating virus data in the ViralZone, UniProtKB and Gene Ontology databases.

  12. Deafblindness, ontological security, and social recognition.

    PubMed

    Danermark, Berth D; Möller, Kerstin

    2008-11-01

    Trust, ontological security, and social recognition are discussed in relation to self-identity among people with acquired deafblindness. To date, the phenomenon has not been elaborated in the context of deafblindness. When a person with deafblindness interacts with the social and material environment, the reliability, constancy, and predictability of his or her relations are crucial for maintaining or achieving ontological security, a general and fairly persistent feeling of well-being. When these relations fundamentally change, the impact on ontological security will be very negative. The construction of social recognition through the interaction between the self and others is embodied across three dimensions: at the individual level, at the legal systems level, and at the normative or value level. The relationship between trust and ontological security on the one hand and social recognition on the other is discussed. It is argued that these basic processes affecting personality development have to be identified and acknowledged in the interactions people with deafblindness experience. Some implications for the rehabilitation of people with acquired deafblindness are presented and illustrated.

  13. On Static and Dynamic Intuitive Ontologies

    ERIC Educational Resources Information Center

    Hammer, David; Gupta, Ayush; Redish, Edward F.

    2011-01-01

    The authors appreciate Professor Slotta's responding to their critique (Slotta, this issue). For their part, they believe that Professor Slotta has misinterpreted aspects of their position. In this commentary, the authors clarify two particular points. First, they explain their use of "static ontologies," which they maintain applies. Second, they…

  14. An Ontology Representation for Water Bodies

    NASA Astrophysics Data System (ADS)

    Brodaric, B.; Hahmann, T.; Gruninger, M.

    2015-12-01

    The interoperability of hydrological data has been a major concern in recent years, as evidenced by the maturation of international standards as well as the development of national and international data systems. Notwithstanding the significant related efforts at modeling hydrological entities, there remain unresolved questions about some core entities that impact the design of hydro schemas, ontologies, and similar knowledge models. One such central entity is the water body, which is represented quite heterogeneously in such models, potentially challenging their interoperability. To meet this challenge, we carry out an ontological analysis of the water body entity and propose a new ontological representation for it, as part of a wider initiative into foundational hydro ontology. The representation exhibits the surprising result that a water body is a mereological entity that is essentially grounded in two types of whole-part relations. The nuanced nature of this result has the potential to inform the design of other hydro knowledge models, as well as to foster interoperability between them.

  15. Development of an Ontology for Occupational Exposure

    EPA Science Inventory

    When discussing a scientific domain, the use of a common language is required, particularly when communicating across disciplines. This common language, or ontology, is a prescribed vocabulary and a web of contextual relationships within the vocabulary that describe the given dom...

  16. Modeling biochemical pathways in the gene ontology

    DOE PAGES

    Hill, David P.; D’Eustachio, Peter; Berardini, Tanya Z.; ...

    2016-09-01

    The concept of a biological pathway, an ordered sequence of molecular transformations, is used to collect and represent molecular knowledge for a broad span of organismal biology. Representations of biomedical pathways typically are rich but idiosyncratic presentations of organized knowledge about individual pathways. Meanwhile, biomedical ontologies and associated annotation files are powerful tools that organize molecular information in a logically rigorous form to support computational analysis. The Gene Ontology (GO), representing Molecular Functions, Biological Processes and Cellular Components, incorporates many aspects of biological pathways within its ontological representations. Here we present a methodology for extending and refining the classes in the GO for more comprehensive, consistent and integrated representation of pathways, leveraging knowledge embedded in current pathway representations such as those in the Reactome Knowledgebase and MetaCyc. With carbohydrate metabolic pathways as a use case, we discuss how our representation supports the integration of variant pathway classes into a unified ontological structure that can be used for data comparison and analysis.

  17. Ontology-Based Administration of Web Directories

    NASA Astrophysics Data System (ADS)

    Horvat, Marko; Gledec, Gordan; Bogunović, Nikola

    Administration of a Web directory and maintenance of its content and associated structure is a delicate and labor-intensive task performed exclusively by human domain experts. Consequently, there is an imminent risk of the directory structure becoming unbalanced, uneven and difficult to use for all except a few users proficient with the particular Web directory and its domain. These problems emphasize the need to address two important issues: i) generic and objective measures of Web directory structure quality, and ii) a mechanism for fully automated development of a Web directory's structure. In this paper we demonstrate how to formally and fully integrate Web directories with the Semantic Web vision. We propose a set of criteria for evaluating the quality of a Web directory's structure. Some criterion functions are based on heuristics while others require the application of ontologies. We also suggest an ontology-based algorithm for the construction of Web directories. By using ontologies to describe the semantics of Web resources and Web directories' categories, it is possible to define algorithms that can build or rearrange the structure of a Web directory. Assessment procedures can provide feedback and help steer the ontology-based construction process. The issues raised in the article apply equally to new and existing Web directories.

  18. Bacterial Virus Ontology; Coordinating across Databases

    PubMed Central

    Hulo, Chantal; Masson, Patrick; Toussaint, Ariane; Osumi-Sutherland, David; de Castro, Edouard; Auchincloss, Andrea H.; Poux, Sylvain; Bougueleret, Lydie; Xenarios, Ioannis; Le Mercier, Philippe

    2017-01-01

    Bacterial viruses, also called bacteriophages, display a great genetic diversity and utilize unique processes for infecting and reproducing within a host cell. All these processes were investigated and indexed in the ViralZone knowledge base. To facilitate standardizing data, a simple ontology of viral life-cycle terms was developed to provide a common vocabulary for annotating data sets. New terminology was developed to address unique viral replication cycle processes, and existing terminology was modified and adapted. Classically, the viral life-cycle is described by schematic pictures. Using this ontology, it can be represented by a combination of successive events: entry, latency, transcription/replication, host–virus interactions and virus release. Each of these parts is broken down into discrete steps. For example, enterobacteria phage lambda entry is broken down into: viral attachment to host adhesion receptor, viral attachment to host entry receptor, viral genome ejection and viral genome circularization. To demonstrate the utility of a standard ontology for virus biology, this work was completed by annotating virus data in the ViralZone, UniProtKB and Gene Ontology databases. PMID:28545254

  19. ICEPO: the ion channel electrophysiology ontology.

    PubMed

    Hinard, V; Britan, A; Rougier, J S; Bairoch, A; Abriel, H; Gaudet, P

    2016-01-01

    Ion channels are transmembrane proteins that selectively allow ions to flow across the plasma membrane and play key roles in diverse biological processes. A multitude of diseases, called channelopathies, such as epilepsies, muscle paralysis, pain syndromes, cardiac arrhythmias or hypoglycemia are due to ion channel mutations. A wide corpus of literature is available on ion channels, covering both their functions and their roles in disease. The research community needs to access this data in a user-friendly, yet systematic manner. However, extraction and integration of this increasing amount of data have been proven to be difficult because of the lack of a standardized vocabulary that describes the properties of ion channels at the molecular level. To address this, we have developed Ion Channel ElectroPhysiology Ontology (ICEPO), an ontology that allows one to annotate the electrophysiological parameters of the voltage-gated class of ion channels. This ontology is based on a three-state model of ion channel gating describing the three conformations/states that an ion channel can adopt: closed, open and inactivated. This ontology supports the capture of voltage-gated ion channel electrophysiological data from the literature in a structured manner and thus enables other applications such as querying and reasoning tools. Here, we present ICEPO (ICEPO ftp site:ftp://ftp.nextprot.org/pub/current_release/controlled_vocabularies/), as well as examples of its use. © The Author(s) 2016. Published by Oxford University Press.
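
The three-state gating model that ICEPO is built on can be sketched as a small state machine over the closed, open and inactivated conformations. The transition names below follow common electrophysiology usage and are illustrative, not ICEPO's actual term labels:

```python
# Minimal three-state gating sketch: closed <-> open -> inactivated -> closed.
# Transition names follow common electrophysiology usage and are illustrative.

TRANSITIONS = {
    ("closed", "open"): "activation",
    ("open", "closed"): "deactivation",
    ("open", "inactivated"): "inactivation",
    ("inactivated", "closed"): "recovery from inactivation",
}

def describe(path):
    """Name each transition along a sequence of channel conformations."""
    steps = []
    for a, b in zip(path, path[1:]):
        if (a, b) not in TRANSITIONS:
            raise ValueError(f"no direct transition {a} -> {b}")
        steps.append(TRANSITIONS[(a, b)])
    return steps

steps = describe(["closed", "open", "inactivated", "closed"])
print(steps)  # ['activation', 'inactivation', 'recovery from inactivation']
```

An ontology built on such a model can then attach electrophysiological parameters (e.g. voltage dependence measured in the literature) to each named transition.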

  20. Ontological Performance: Bodies, Identities and Learning.

    ERIC Educational Resources Information Center

    Beckett, David; Morris, Gayle

    2001-01-01

    Within the framework of the body as a significant site for learning, case studies of staff in an eldercare facility and of adult learners of English as a second language are used to support the claim of ontological performance (practical embodied actions) as a way to approach adult learning for and at work. (Contains 38 references.) (SK)

  1. Production Determines Category: An Ontology of Art

    ERIC Educational Resources Information Center

    Weh, Michael

    2010-01-01

    It is a mainstream view within the ontology of art that there are singular as well as multiple artworks, but it is also a view that is contested. In this article, the author investigates whether the singular/multiple distinction can be sustained and argues for a new way to determine the category to which an artwork belongs. The author stresses…

  2. On Static and Dynamic Intuitive Ontologies

    ERIC Educational Resources Information Center

    Hammer, David; Gupta, Ayush; Redish, Edward F.

    2011-01-01

    The authors appreciate Professor Slotta's responding to their critique (Slotta, this issue). For their part, they believe that Professor Slotta has misinterpreted aspects of their position. In this commentary, the authors clarify two particular points. First, they explain their use of "static ontologies," which they maintain applies. Second, they…

  3. CSEO – the Cigarette Smoke Exposure Ontology

    PubMed Central

    2014-01-01

    Background In the past years, significant progress has been made to develop and use experimental settings for extensive data collection on tobacco smoke exposure and tobacco smoke exposure-associated diseases. Due to the growing number of such data, there is a need for domain-specific standard ontologies to facilitate the integration of tobacco exposure data. Results The CSEO (version 1.0) is composed of 20091 concepts. The ontology in its current form is able to capture a wide range of cigarette smoke exposure concepts within the knowledge domain of exposure science with reasonable sensitivity and specificity. Moreover, it showed promising performance when used to answer domain expert questions. The CSEO complies with standard upper-level ontologies and is freely accessible to the scientific community through a dedicated wiki at https://publicwiki-01.fraunhofer.de/CSEO-Wiki/index.php/Main_Page. Conclusions The CSEO has the potential to become a widely used standard within the academic and industrial community, mainly because of the emerging need of systems toxicology for controlled vocabularies and the lack of suitable ontologies for this domain; it prepares the ground for integrative systems-based research in exposure science. PMID:25093069

  4. Development of an Ontology for Occupational Exposure

    EPA Science Inventory

    When discussing a scientific domain, the use of a common language is required, particularly when communicating across disciplines. This common language, or ontology, is a prescribed vocabulary and a web of contextual relationships within the vocabulary that describe the given dom...

  5. Interoperability between phenotype and anatomy ontologies

    PubMed Central

    Hoehndorf, Robert; Oellrich, Anika; Rebholz-Schuhmann, Dietrich

    2010-01-01

    Motivation: Phenotypic information is important for the analysis of the molecular mechanisms underlying disease. A formal ontological representation of phenotypic information can help to identify, interpret and infer phenotypic traits based on experimental findings. The methods that are currently used to represent data and information about phenotypes fail to make the semantics of the phenotypic trait explicit and do not interoperate with ontologies of anatomy and other domains. Therefore, valuable resources for the analysis of phenotype studies remain unconnected and inaccessible to automated analysis and reasoning. Results: We provide a framework to formalize phenotypic descriptions and make their semantics explicit. Based on this formalization, we provide the means to integrate phenotypic descriptions with ontologies of other domains, in particular anatomy and physiology. We demonstrate how our framework leads to the capability to represent disease phenotypes, perform powerful queries that were not possible before and infer additional knowledge. Availability: http://bioonto.de/pmwiki.php/Main/PheneOntology Contact: rh497@cam.ac.uk PMID:20971987

  6. Modeling biochemical pathways in the gene ontology

    PubMed Central

    Hill, David P.; D’Eustachio, Peter; Berardini, Tanya Z.; Mungall, Christopher J.; Renedo, Nikolai; Blake, Judith A.

    2016-01-01

    The concept of a biological pathway, an ordered sequence of molecular transformations, is used to collect and represent molecular knowledge for a broad span of organismal biology. Representations of biomedical pathways typically are rich but idiosyncratic presentations of organized knowledge about individual pathways. Meanwhile, biomedical ontologies and associated annotation files are powerful tools that organize molecular information in a logically rigorous form to support computational analysis. The Gene Ontology (GO), representing Molecular Functions, Biological Processes and Cellular Components, incorporates many aspects of biological pathways within its ontological representations. Here we present a methodology for extending and refining the classes in the GO for more comprehensive, consistent and integrated representation of pathways, leveraging knowledge embedded in current pathway representations such as those in the Reactome Knowledgebase and MetaCyc. With carbohydrate metabolic pathways as a use case, we discuss how our representation supports the integration of variant pathway classes into a unified ontological structure that can be used for data comparison and analysis. PMID:27589964

  7. In Defense of Chi's Ontological Incompatibility Hypothesis

    ERIC Educational Resources Information Center

    Slotta, James D.

    2011-01-01

    This article responds to an article by A. Gupta, D. Hammer, and E. F. Redish (2010) that asserts that M. T. H. Chi's (1992, 2005) hypothesis of an "ontological commitment" in conceptual development is fundamentally flawed. In this article, I argue that Chi's theoretical perspective is still very much intact and that the critique offered by Gupta…

  8. Ontology for cell-based geographic information

    NASA Astrophysics Data System (ADS)

    Zheng, Bin; Huang, Lina; Lu, Xinhai

    2009-10-01

Interoperability is a key notion in geographic information science (GIS) for the sharing of geographic information (GI), since sharing requires a seamless translation among different information sources. Ontology is employed in GI discovery to resolve semantic conflicts, because its natural-language appearance and logical hierarchical structure are considered to provide better context for both human understanding and machine cognition in describing locations and relationships in the geographic world. However, most current studies on field ontology are derived from philosophical themes and are not applicable to the raster representation in GIS, which is a field-like phenomenon but does not physically coincide with the general philosophical concept of a field (itself drawn mostly from physics). That is why we specifically discuss a cell-based GI ontology in this paper. The discussion starts with an investigation of the physical characteristics of cell-based raster GI. Then, a unified cell-based GI ontology framework for the recognition of raster objects is introduced, from which a conceptual interface connecting human epistemology and the computer world, the so-called "endurant-occurrant window", is developed for better raster GI discovery and sharing.

  9. Mapping between the OBO and OWL ontology languages

    PubMed Central

    2011-01-01

Background Ontologies are commonly used in biomedicine to organize concepts to describe domains such as anatomies, environments, experiments, taxonomies, etc. NCBO BioPortal currently hosts about 180 different biomedical ontologies. These ontologies have been mainly expressed in either the Open Biomedical Ontology (OBO) format or the Web Ontology Language (OWL). OBO emerged from the Gene Ontology, and supports most of the biomedical ontology content. In comparison, OWL is a Semantic Web language, and is supported by the World Wide Web consortium together with integral query languages, rule languages and distributed infrastructure for information interchange. These features are highly desirable for the OBO content as well. A convenient method for leveraging these features for OBO ontologies is by transforming OBO ontologies to OWL. Results We have developed a methodology for translating OBO ontologies to OWL using the organization of the Semantic Web itself to guide the work. The approach reveals that the constructs of OBO can be grouped together to form a similar layer cake. Thus we were able to decompose the problem into two parts. Most OBO constructs have easy and obvious equivalence to a construct in OWL. A small subset of OBO constructs requires deeper consideration. We have defined transformations for all constructs in an effort to foster a standard common mapping between OBO and OWL. Our mapping produces OWL-DL, a Description Logics based subset of OWL with desirable computational properties for efficiency and correctness. Our Java implementation of the mapping is part of the official Gene Ontology project source. Conclusions Our transformation system provides a lossless roundtrip mapping for OBO ontologies, i.e. an OBO ontology may be translated to OWL and back without loss of knowledge. In addition, it provides a roadmap for bridging the gap between the two ontology languages in order to enable the use of ontology content in a language independent manner.

  10. Mapping between the OBO and OWL ontology languages.

    PubMed

    Tirmizi, Syed Hamid; Aitken, Stuart; Moreira, Dilvan A; Mungall, Chris; Sequeda, Juan; Shah, Nigam H; Miranker, Daniel P

    2011-03-07

Ontologies are commonly used in biomedicine to organize concepts to describe domains such as anatomies, environments, experiments, taxonomies, etc. NCBO BioPortal currently hosts about 180 different biomedical ontologies. These ontologies have been mainly expressed in either the Open Biomedical Ontology (OBO) format or the Web Ontology Language (OWL). OBO emerged from the Gene Ontology, and supports most of the biomedical ontology content. In comparison, OWL is a Semantic Web language, and is supported by the World Wide Web consortium together with integral query languages, rule languages and distributed infrastructure for information interchange. These features are highly desirable for the OBO content as well. A convenient method for leveraging these features for OBO ontologies is by transforming OBO ontologies to OWL. We have developed a methodology for translating OBO ontologies to OWL using the organization of the Semantic Web itself to guide the work. The approach reveals that the constructs of OBO can be grouped together to form a similar layer cake. Thus we were able to decompose the problem into two parts. Most OBO constructs have easy and obvious equivalence to a construct in OWL. A small subset of OBO constructs requires deeper consideration. We have defined transformations for all constructs in an effort to foster a standard common mapping between OBO and OWL. Our mapping produces OWL-DL, a Description Logics based subset of OWL with desirable computational properties for efficiency and correctness. Our Java implementation of the mapping is part of the official Gene Ontology project source. Our transformation system provides a lossless roundtrip mapping for OBO ontologies, i.e. an OBO ontology may be translated to OWL and back without loss of knowledge. In addition, it provides a roadmap for bridging the gap between the two ontology languages in order to enable the use of ontology content in a language independent manner.
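The core idea of the OBO-to-OWL mapping described above can be illustrated with a toy translator. This is a hedged sketch, not the official Gene Ontology implementation: it handles only the `id`, `name` and `is_a` tags of an OBO `[Term]` stanza, turning them into class, label and subclass triples; the snippet of OBO text is abbreviated for illustration.

```python
# Minimal sketch of one part of an OBO-to-OWL mapping: each [Term]
# stanza becomes an owl:Class, and each is_a tag becomes an
# rdfs:subClassOf axiom. The real mapping covers many more tags.

OBO_SNIPPET = """\
[Term]
id: GO:0008150
name: biological_process

[Term]
id: GO:0009987
name: cellular process
is_a: GO:0008150
"""

def obo_to_triples(text):
    """Translate [Term] stanzas into (subject, predicate, object) triples."""
    triples, current = [], None
    for line in text.splitlines():
        line = line.strip()
        if line == "[Term]":
            current = None
        elif line.startswith("id: "):
            current = line[4:]
            triples.append((current, "rdf:type", "owl:Class"))
        elif line.startswith("name: ") and current:
            triples.append((current, "rdfs:label", line[6:]))
        elif line.startswith("is_a: ") and current:
            triples.append((current, "rdfs:subClassOf", line[6:]))
    return triples

for t in obo_to_triples(OBO_SNIPPET):
    print(t)
```

A real translator must also handle the "deeper consideration" cases the abstract mentions (e.g. `intersection_of`, `relationship`, annotation metadata), which is where the OWL-DL restrictions matter.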

  11. Bridging the gap between data acquisition and inference ontologies: toward ontology-based link discovery

    NASA Astrophysics Data System (ADS)

    Goldstein, Michel L.; Morris, Steven A.; Yen, Gary G.

    2003-09-01

Bridging the gap between low level ontologies used for data acquisition and high level ontologies used for inference is essential to enable the discovery of high-level links between low-level entities. This is of utmost importance in many applications, where the semantic distance between the observable evidence and the target relations is large. Examples of these applications would be detection of terrorist activity, crime analysis, and technology monitoring, among others. Currently this inference gap has been filled by expert knowledge. However, with the increase of the data and system size, it has become too costly to perform such manual inference. This paper proposes a semi-automatic system to bridge the inference gap using network correlation methods, similar to Bayesian Belief Networks, combined with hierarchical clustering, to group and organize data so that experts can observe and build the inference gap ontologies quickly and efficiently, decreasing the cost of this labor-intensive process. A simple application of this method is shown here, where the co-author collaboration structure ontology is inferred from the analysis of a collection of journal publications on the subject of anthrax. This example uncovers a co-author collaboration structure (a well-defined ontology) from a scientific publication dataset (also a well-defined ontology). Nevertheless, the evidence of author collaboration is poorly defined, requiring the use of evidence from keywords, citations, publication dates, and paper co-authorship. The proposed system automatically suggests candidate collaboration group patterns for evaluation by experts. Using an intuitive graphic user interface, these experts identify, confirm and refine the proposed ontologies and add them to the ontology database to be used in subsequent processes.

  12. Dual lookup table algorithm: an enhanced method of displaying 16-bit gray-scale images on 8-bit RGB graphic systems.

    PubMed

    Gillespy, T; Rowberg, A H

    1994-02-01

    Most digital radiologic images have an extended contrast range of 9 to 13 bits, and are stored in memory and disk as 16-bit integers. Consequently, it is difficult to view such images on computers with 8-bit red-green-blue (RGB) graphic systems. Two approaches have traditionally been used: (1) perform a one-time conversion of the 16-bit image data to 8-bit gray-scale data, and then adjust the brightness and contrast of the image by manipulating the color palette (palette animation); and (2) use a software lookup table to interactively convert the 16-bit image data to 8-bit gray-scale values with different window width and window level parameters. The first method can adjust image appearance in real time, but some image features may not be visible because of the lack of access to the full contrast range of the image and any region of interest measurements may be inaccurate. The second method allows "windowing" and "leveling" through the full contrast range of the image, but there is a delay after each adjustment that some users may find objectionable. We describe a method that combines palette animation and the software lookup table conversion method that optimizes the changes in image contrast and brightness on computers with standard 8-bit RGB graphic hardware--the dual lookup table algorithm. This algorithm links changes in the window/level control to changes in image contrast and brightness via palette animation.(ABSTRACT TRUNCATED AT 250 WORDS)
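The software-lookup-table half of the window/level operation described above can be sketched as follows. This is an illustrative sketch, not the paper's dual lookup table implementation: function and parameter names are invented, and a 12-bit input range (0..4095) is assumed.

```python
def window_level_lut(window_level, window_width, max_input=4095):
    """Build a lookup table mapping raw pixel values (0..max_input)
    to 8-bit display values for a given window level/width."""
    lo = window_level - window_width / 2
    lut = []
    for v in range(max_input + 1):
        # Linear ramp inside the window, clamped to 0..255 outside it.
        g = round((v - lo) * 255 / window_width)
        lut.append(max(0, min(255, g)))
    return lut

# A 12-bit image windowed at level 1024, width 512: values at or below
# 768 map to black, values at or above 1280 map to white.
lut = window_level_lut(1024, 512)
print(lut[768], lut[1024], lut[1280])
```

The point of the dual lookup table algorithm is that small window/level changes can then be realized by animating the 8-bit palette instead of rebuilding and reapplying this table on every adjustment.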

  13. Ontological interpretation of biomedical database content.

    PubMed

    Santana da Silva, Filipe; Jansen, Ludger; Freitas, Fred; Schulz, Stefan

    2017-06-26

    Biological databases store data about laboratory experiments, together with semantic annotations, in order to support data aggregation and retrieval. The exact meaning of such annotations in the context of a database record is often ambiguous. We address this problem by grounding implicit and explicit database content in a formal-ontological framework. By using a typical extract from the databases UniProt and Ensembl, annotated with content from GO, PR, ChEBI and NCBI Taxonomy, we created four ontological models (in OWL), which generate explicit, distinct interpretations under the BioTopLite2 (BTL2) upper-level ontology. The first three models interpret database entries as individuals (IND), defined classes (SUBC), and classes with dispositions (DISP), respectively; the fourth model (HYBR) is a combination of SUBC and DISP. For the evaluation of these four models, we consider (i) database content retrieval, using ontologies as query vocabulary; (ii) information completeness; and, (iii) DL complexity and decidability. The models were tested under these criteria against four competency questions (CQs). IND does not raise any ontological claim, besides asserting the existence of sample individuals and relations among them. Modelling patterns have to be created for each type of annotation referent. SUBC is interpreted regarding maximally fine-grained defined subclasses under the classes referred to by the data. DISP attempts to extract truly ontological statements from the database records, claiming the existence of dispositions. HYBR is a hybrid of SUBC and DISP and is more parsimonious regarding expressiveness and query answering complexity. For each of the four models, the four CQs were submitted as DL queries. This shows the ability to retrieve individuals with IND, and classes in SUBC and HYBR. DISP does not retrieve anything because the axioms with disposition are embedded in General Class Inclusion (GCI) statements. Ambiguity of biological database content is

  14. Cross-Ontology multi-level association rule mining in the Gene Ontology.

    PubMed

    Manda, Prashanti; Ozkan, Seval; Wang, Hui; McCarthy, Fiona; Bridges, Susan M

    2012-01-01

    The Gene Ontology (GO) has become the internationally accepted standard for representing function, process, and location aspects of gene products. The wealth of GO annotation data provides a valuable source of implicit knowledge of relationships among these aspects. We describe a new method for association rule mining to discover implicit co-occurrence relationships across the GO sub-ontologies at multiple levels of abstraction. Prior work on association rule mining in the GO has concentrated on mining knowledge at a single level of abstraction and/or between terms from the same sub-ontology. We have developed a bottom-up generalization procedure called Cross-Ontology Data Mining-Level by Level (COLL) that takes into account the structure and semantics of the GO, generates generalized transactions from annotation data and mines interesting multi-level cross-ontology association rules. We applied our method on publicly available chicken and mouse GO annotation datasets and mined 5368 and 3959 multi-level cross ontology rules from the two datasets respectively. We show that our approach discovers more and higher quality association rules from the GO as evaluated by biologists in comparison to previously published methods. Biologically interesting rules discovered by our method reveal unknown and surprising knowledge about co-occurring GO terms.
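The support and confidence measures behind co-occurrence rules like those mined by COLL can be illustrated on toy annotation data. This is a generic association-rule sketch under made-up data, not the COLL generalization procedure: the GO term sets below are invented "transactions" (the terms co-annotated to each gene product).

```python
# Toy annotation "transactions": the set of GO terms co-annotated
# to each gene product. The data are made up for illustration.
transactions = [
    {"GO:0003677", "GO:0006355", "GO:0005634"},  # DNA binding, regulation, nucleus
    {"GO:0003677", "GO:0006355", "GO:0005634"},
    {"GO:0003677", "GO:0005634"},
    {"GO:0016301", "GO:0006468"},                # kinase activity, phosphorylation
]

def rule_stats(antecedent, consequent, transactions):
    """Support and confidence of the rule antecedent -> consequent."""
    n_ante = sum(1 for t in transactions if antecedent <= t)
    n_both = sum(1 for t in transactions if antecedent | consequent <= t)
    support = n_both / len(transactions)
    confidence = n_both / n_ante if n_ante else 0.0
    return support, confidence

# Cross-ontology rule: molecular function term -> cellular component term.
s, c = rule_stats({"GO:0003677"}, {"GO:0005634"}, transactions)
print(f"support={s:.2f} confidence={c:.2f}")
```

COLL additionally generalizes terms bottom-up through the GO hierarchy before mining, so that rules can hold between ancestors at multiple levels of abstraction; that step is omitted here.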

  15. A histological ontology of the human cardiovascular system.

    PubMed

    Mazo, Claudia; Salazar, Liliana; Corcho, Oscar; Trujillo, Maria; Alegre, Enrique

    2017-10-02

In this paper, we describe a histological ontology of the human cardiovascular system developed in collaboration between histology experts and computer scientists. The histological ontology is developed following an existing methodology using Conceptual Models (CMs) and validated using OOPS!, expert evaluation with CMs, and how accurately the ontology can answer the Competency Questions (CQs). It is publicly available at http://bioportal.bioontology.org/ontologies/HO and https://w3id.org/def/System . The histological ontology is developed to support complex tasks, such as supporting teaching activities, medical practices, and bio-medical research, and enabling natural language interactions.

  16. An ontology for description of drug discovery investigations.

    PubMed

    Qi, Da; King, Ross D; Hopkins, Andrew L; Bickerton, G Richard J; Soldatova, Larisa N

    2010-03-25

The paper presents an ontology for the description of Drug Discovery Investigations (DDI). This has been developed through the use of a Robot Scientist, "Eve", and in consultation with industry. DDI aims to define the principal entities and the relations in the research and development phase of the drug discovery pipeline. DDI is highly transferable and extendable due to its adherence to accepted standards, and compliance with existing ontology resources. This enables DDI to be integrated with such related ontologies as the Vaccine Ontology, the Advancing Clinico-Genomic Trials on Cancer Master Ontology, etc. DDI is available at http://purl.org/ddi/wikipedia or http://purl.org/ddi/home.

  17. Methodology to build medical ontology from textual resources.

    PubMed

    Baneyx, Audrey; Charlet, Jean; Jaulent, Marie-Christine

    2006-01-01

In the medical field, it is now established that maintaining unambiguous thesauri requires ontologies. Our research task is to help pneumologists code acts and diagnoses with software that represents medical knowledge through a domain ontology. In this paper, we describe our general methodology, aimed at knowledge engineers, for building various types of medical ontologies based on terminology extraction from texts. The hypothesis is to apply natural language processing tools to textual patient discharge summaries to develop the resources needed to build an ontology in pneumology. Results indicate that the joint use of distributional analysis and lexico-syntactic patterns performed satisfactorily for building such ontologies.

  18. Exploring biomedical ontology mappings with graph theory methods.

    PubMed

    Kocbek, Simon; Kim, Jin-Dong

    2017-01-01

In the era of semantic web, life science ontologies play an important role in tasks such as annotating biological objects, linking relevant data pieces, and verifying data consistency. Understanding ontology structures and overlapping ontologies is essential for tasks such as ontology reuse and development. We present an exploratory study where we examine structure and look for patterns in BioPortal, a comprehensive publicly available repository of life science ontologies. We report an analysis of biomedical ontology mapping data over time. We apply graph theory methods such as Modularity Analysis and Betweenness Centrality to analyse data gathered at five different time points. We identify communities, i.e., sets of overlapping ontologies, and define similar and closest communities. We demonstrate evolution of identified communities over time and identify core ontologies of the closest communities. We use BioPortal project and category data to measure community coherence. We also validate identified communities with their mutual mentions in scientific literature. By comparing mapping data gathered at five different time points, we identified similar and closest communities of overlapping ontologies, and demonstrated evolution of communities over time. Results showed that anatomy and health ontologies tend to form more isolated communities compared to other categories. We also showed that communities contain all or the majority of ontologies being used in narrower projects. In addition, we identified major changes in mapping data after migration to BioPortal Version 4.

  19. Exploring biomedical ontology mappings with graph theory methods

    PubMed Central

    2017-01-01

Background In the era of semantic web, life science ontologies play an important role in tasks such as annotating biological objects, linking relevant data pieces, and verifying data consistency. Understanding ontology structures and overlapping ontologies is essential for tasks such as ontology reuse and development. We present an exploratory study where we examine structure and look for patterns in BioPortal, a comprehensive publicly available repository of life science ontologies. Methods We report an analysis of biomedical ontology mapping data over time. We apply graph theory methods such as Modularity Analysis and Betweenness Centrality to analyse data gathered at five different time points. We identify communities, i.e., sets of overlapping ontologies, and define similar and closest communities. We demonstrate evolution of identified communities over time and identify core ontologies of the closest communities. We use BioPortal project and category data to measure community coherence. We also validate identified communities with their mutual mentions in scientific literature. Results By comparing mapping data gathered at five different time points, we identified similar and closest communities of overlapping ontologies, and demonstrated evolution of communities over time. Results showed that anatomy and health ontologies tend to form more isolated communities compared to other categories. We also showed that communities contain all or the majority of ontologies being used in narrower projects. In addition, we identified major changes in mapping data after migration to BioPortal Version 4. PMID:28265499
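The idea of grouping overlapping ontologies into communities from mapping data can be sketched with a much cruder criterion than the Modularity Analysis used in the study: connected components of the mapping graph. The ontology names and edges below are made up, not real BioPortal mapping data.

```python
from collections import defaultdict

# Toy mapping graph: an edge means two ontologies share mapped terms.
mappings = [
    ("GO", "PRO"), ("PRO", "ChEBI"), ("GO", "ChEBI"),  # one community
    ("FMA", "UBERON"),                                 # another community
]

def communities(edges):
    """Connected components as a simple stand-in for mapping communities."""
    adj = defaultdict(set)
    for a, b in edges:
        adj[a].add(b)
        adj[b].add(a)
    seen, comps = set(), []
    for node in list(adj):
        if node in seen:
            continue
        stack, comp = [node], set()
        while stack:
            n = stack.pop()
            if n in comp:
                continue
            comp.add(n)
            stack.extend(adj[n] - comp)
        seen |= comp
        comps.append(comp)
    return comps

print(communities(mappings))
```

Modularity-based methods go further: they can split a single connected component into densely mapped sub-communities, which is what reveals the isolated anatomy and health clusters the abstract reports.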

  20. From Information Society to Knowledge Society: The Ontology Issue

    NASA Astrophysics Data System (ADS)

    Roche, Christophe

    2002-09-01

Information society, virtual enterprise, and e-business rely more and more on communication and knowledge sharing between heterogeneous actors. But no communication is possible, let alone co-operation or collaboration, if those actors do not share the same, or at least compatible, meanings for the terms they use. Ontology, understood as an agreed vocabulary of common terms and meanings, is a solution to that problem. Nevertheless, although there is considerable experience in using ontologies, several barriers still stand in the way of their real use: as a matter of fact, it is very difficult to build, reuse and share ontologies. We claim that the ontology problem requires a multidisciplinary approach based on sound epistemological, logical and linguistic principles. This article presents the Ontological Knowledge Station (OK Station©), a software environment for building and using ontologies which relies on such principles. The OK Station is currently being used in several industrial applications.

  1. An Ontological Approach to Representing and Reasoning about Events in the Sensor Web

    NASA Astrophysics Data System (ADS)

    Devaraju, Anusuriya

    2013-04-01

While observations are fed into the Sensor Web through a growing number of environmental sensors, the challenge is to infer information about geographic events they reflect. For example, we may ask what the measurements mean when a service compiles hourly wind speeds from different providers. The service should perhaps include more meaningful descriptions than just the measurements; for instance, whether the wind occurring at a particular site is nearly calm or reflects a windstorm. Similarly, we may want to know the intensity of a snowfall occurrence from a series of visibility measurements supplied by a visibility sensor. A systematic approach representing domain knowledge is vital when reasoning about events at the conceptual level. A description of how one gets from observations to inferred events must be expressed. Environmental models usually capture such information. Nonetheless, they jeopardize transparency; the information contained within these models is implicit, limited to domain experts, and hard to acquire or manipulate. The formal specifications in the Semantic Sensor Web primarily describe sensors and observations; they do not describe information concerning geographic events. Existing event-oriented ontologies represent common concepts concerning events, e.g., participant, time, location and relations between events. Nevertheless, the event-of-interest is not explicitly associated with sensing concepts such as observation event, sensor and result. This paper delivers an ontology to formally capture the relations between observations and geographic events. The ontology constitutes common building blocks for constructing application ontologies that account for inferences of the latter from the former. The formal vocabularies are exploited with a rule-based mechanism to support inferences of events from in-situ observations. The paper also demonstrates how these vocabularies are used to formulate symbolic spatio-temporal queries in the Sensor Web. A use
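The kind of rule-based inference described, from raw measurements to named event categories, can be sketched as a threshold table. This is an illustrative sketch, not the paper's ontology-backed mechanism: the cut-offs loosely follow the Beaufort scale and the labels are invented.

```python
# Ordered rules: (upper bound in m/s, event label). A wind speed
# observation is classified by the first rule whose bound it is under.
RULES = [
    (0.5, "calm"),
    (10.7, "breeze"),
    (24.4, "gale"),
    (float("inf"), "windstorm"),
]

def classify(speed_ms):
    """Map a wind speed observation to a named event category."""
    for upper, label in RULES:
        if speed_ms < upper:
            return label

observations = [0.3, 6.2, 28.0]
print([classify(s) for s in observations])  # ['calm', 'breeze', 'windstorm']
```

In the ontology-based approach, the equivalents of `RULES` live in formal rules over observation and event classes, so the thresholds are explicit, shareable and queryable instead of being buried in model code.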

  2. A UML profile for the OBO relation ontology

    PubMed Central

    2012-01-01

    Background Ontologies have increasingly been used in the biomedical domain, which has prompted the emergence of different initiatives to facilitate their development and integration. The Open Biological and Biomedical Ontologies (OBO) Foundry consortium provides a repository of life-science ontologies, which are developed according to a set of shared principles. This consortium has developed an ontology called OBO Relation Ontology aiming at standardizing the different types of biological entity classes and associated relationships. Since ontologies are primarily intended to be used by humans, the use of graphical notations for ontology development facilitates the capture, comprehension and communication of knowledge between its users. However, OBO Foundry ontologies are captured and represented basically using text-based notations. The Unified Modeling Language (UML) provides a standard and widely-used graphical notation for modeling computer systems. UML provides a well-defined set of modeling elements, which can be extended using a built-in extension mechanism named Profile. Thus, this work aims at developing a UML profile for the OBO Relation Ontology to provide a domain-specific set of modeling elements that can be used to create standard UML-based ontologies in the biomedical domain. Results We have studied the OBO Relation Ontology, the UML metamodel and the UML profiling mechanism. Based on these studies, we have proposed an extension to the UML metamodel in conformance with the OBO Relation Ontology and we have defined a profile that implements the extended metamodel. Finally, we have applied the proposed UML profile in the development of a number of fragments from different ontologies. Particularly, we have considered the Gene Ontology (GO), the PRotein Ontology (PRO) and the Xenopus Anatomy and Development Ontology (XAO). Conclusions The use of an established and well-known graphical language in the development of biomedical ontologies provides a more

  3. Informatics in radiology: radiology gamuts ontology: differential diagnosis for the Semantic Web.

    PubMed

    Budovec, Joseph J; Lam, Cesar A; Kahn, Charles E

    2014-01-01

    The Semantic Web is an effort to add semantics, or "meaning," to empower automated searching and processing of Web-based information. The overarching goal of the Semantic Web is to enable users to more easily find, share, and combine information. Critical to this vision are knowledge models called ontologies, which define a set of concepts and formalize the relations between them. Ontologies have been developed to manage and exploit the large and rapidly growing volume of information in biomedical domains. In diagnostic radiology, lists of differential diagnoses of imaging observations, called gamuts, provide an important source of knowledge. The Radiology Gamuts Ontology (RGO) is a formal knowledge model of differential diagnoses in radiology that includes 1674 differential diagnoses, 19,017 terms, and 52,976 links between terms. Its knowledge is used to provide an interactive, freely available online reference of radiology gamuts ( www.gamuts.net ). A Web service allows its content to be discovered and consumed by other information systems. The RGO integrates radiologic knowledge with other biomedical ontologies as part of the Semantic Web.

  4. PAV ontology: provenance, authoring and versioning

    PubMed Central

    2013-01-01

    Background Provenance is a critical ingredient for establishing trust of published scientific content. This is true whether we are considering a data set, a computational workflow, a peer-reviewed publication or a simple scientific claim with supportive evidence. Existing vocabularies such as Dublin Core Terms (DC Terms) and the W3C Provenance Ontology (PROV-O) are domain-independent and general-purpose and they allow and encourage for extensions to cover more specific needs. In particular, to track authoring and versioning information of web resources, PROV-O provides a basic methodology but not any specific classes and properties for identifying or distinguishing between the various roles assumed by agents manipulating digital artifacts, such as author, contributor and curator. Results We present the Provenance, Authoring and Versioning ontology (PAV, namespace http://purl.org/pav/): a lightweight ontology for capturing “just enough” descriptions essential for tracking the provenance, authoring and versioning of web resources. We argue that such descriptions are essential for digital scientific content. PAV distinguishes between contributors, authors and curators of content and creators of representations in addition to the provenance of originating resources that have been accessed, transformed and consumed. We explore five projects (and communities) that have adopted PAV illustrating their usage through concrete examples. Moreover, we present mappings that show how PAV extends the W3C PROV-O ontology to support broader interoperability. Method The initial design of the PAV ontology was driven by requirements from the AlzSWAN project with further requirements incorporated later from other projects detailed in this paper. The authors strived to keep PAV lightweight and compact by including only those terms that have demonstrated to be pragmatically useful in existing applications, and by recommending terms from existing ontologies when plausible. Discussion

  5. Developing an Ontology for Ocean Biogeochemistry Data

    NASA Astrophysics Data System (ADS)

    Chandler, C. L.; Allison, M. D.; Groman, R. C.; West, P.; Zednik, S.; Maffei, A. R.

    2010-12-01

Semantic Web technologies offer great promise for enabling new and better scientific research. However, significant challenges must be met before the promise of the Semantic Web can be realized for a discipline as diverse as oceanography. Evolving expectations for open access to research data combined with the complexity of global ecosystem science research themes present a significant challenge, and one that is best met through an informatics approach. The Biological and Chemical Oceanography Data Management Office (BCO-DMO) is funded by the National Science Foundation Division of Ocean Sciences to work with ocean biogeochemistry researchers to improve access to data resulting from their respective programs. In an effort to improve data access, BCO-DMO staff members are collaborating with researchers from the Tetherless World Constellation (Rensselaer Polytechnic Institute) to develop an ontology that formally describes the concepts and relationships in the data managed by the BCO-DMO. The project required transforming a legacy system of human-readable, flat files of metadata to well-ordered controlled vocabularies to a fully developed ontology. To improve semantic interoperability, terms from the BCO-DMO controlled vocabularies are being mapped to controlled vocabulary terms adopted by other oceanographic data management organizations. While the entire process has proven to be difficult, time-consuming and labor-intensive, the work has been rewarding and is a necessary prerequisite for the eventual incorporation of Semantic Web tools. From the beginning of the project, development of the ontology has been guided by a use case based approach. The use cases were derived from data access related requests received from members of the research community served by the BCO-DMO. The resultant ontology satisfies the requirements of the use cases and reflects the information stored in the metadata database. The BCO-DMO metadata database currently contains information that

  6. PAV ontology: provenance, authoring and versioning.

    PubMed

    Ciccarese, Paolo; Soiland-Reyes, Stian; Belhajjame, Khalid; Gray, Alasdair Jg; Goble, Carole; Clark, Tim

    2013-11-22

    Provenance is a critical ingredient for establishing trust of published scientific content. This is true whether we are considering a data set, a computational workflow, a peer-reviewed publication or a simple scientific claim with supportive evidence. Existing vocabularies such as Dublin Core Terms (DC Terms) and the W3C Provenance Ontology (PROV-O) are domain-independent and general-purpose and they allow and encourage for extensions to cover more specific needs. In particular, to track authoring and versioning information of web resources, PROV-O provides a basic methodology but not any specific classes and properties for identifying or distinguishing between the various roles assumed by agents manipulating digital artifacts, such as author, contributor and curator. We present the Provenance, Authoring and Versioning ontology (PAV, namespace http://purl.org/pav/): a lightweight ontology for capturing "just enough" descriptions essential for tracking the provenance, authoring and versioning of web resources. We argue that such descriptions are essential for digital scientific content. PAV distinguishes between contributors, authors and curators of content and creators of representations in addition to the provenance of originating resources that have been accessed, transformed and consumed. We explore five projects (and communities) that have adopted PAV illustrating their usage through concrete examples. Moreover, we present mappings that show how PAV extends the W3C PROV-O ontology to support broader interoperability. The initial design of the PAV ontology was driven by requirements from the AlzSWAN project with further requirements incorporated later from other projects detailed in this paper. The authors strived to keep PAV lightweight and compact by including only those terms that have demonstrated to be pragmatically useful in existing applications, and by recommending terms from existing ontologies when plausible. We analyze and compare PAV with related
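PAV-style provenance statements can be sketched as plain triples. The PAV namespace and the property names below (`authoredBy`, `curatedBy`, `previousVersion`, `derivedFrom`) are real PAV terms, but the resource and agent identifiers are made up, and a real application would use an RDF library rather than tuples.

```python
# Sketch of PAV provenance, authoring and versioning statements
# over a hypothetical dataset resource.
PAV = "http://purl.org/pav/"

triples = [
    ("ex:dataset-v2", PAV + "authoredBy",      "ex:alice"),
    ("ex:dataset-v2", PAV + "curatedBy",       "ex:bob"),
    ("ex:dataset-v2", PAV + "previousVersion", "ex:dataset-v1"),
    ("ex:dataset-v2", PAV + "derivedFrom",     "ex:raw-data"),
]

def values(subject, predicate, triples):
    """All objects of triples matching (subject, predicate, _)."""
    return [o for s, p, o in triples if s == subject and p == predicate]

print(values("ex:dataset-v2", PAV + "authoredBy", triples))
```

The distinction PAV draws, author versus curator versus contributor, is exactly what makes queries like "who authored version 2?" answerable without conflating the different roles.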

  7. Automated compound classification using a chemical ontology

    PubMed Central

    2012-01-01

    Background Classification of chemical compounds into compound classes by using structure-derived descriptors is a well-established method to aid the evaluation and abstraction of compound properties in chemical compound databases. MeSH and, more recently, ChEBI are examples of chemical ontologies that provide a hierarchical classification of compounds into general compound classes of biological interest based on their structural as well as property or use features. In these ontologies, compounds have been assigned manually to their respective classes. However, with the ever-increasing possibilities to extract new compounds from text documents using name-to-structure tools, and considering the large number of compounds deposited in databases, automated and comprehensive chemical classification methods are needed to avoid the error-prone and time-consuming manual classification of compounds. Results In the present work we implement principles and methods to construct a chemical ontology of classes that shall support automated, high-quality compound classification in chemical databases or text documents. While SMARTS expressions have already been used to define chemical structure class concepts, in the present work we have extended the expressive power of such class definitions by expanding their structure-based reasoning logic. Thus, to achieve the required precision and granularity of chemical class definitions, sets of SMARTS class definitions are connected by OR and NOT logical operators. In addition, AND logic has been implemented to allow the concomitant use of flexible atom lists and stereochemistry definitions. The resulting chemical ontology is a multi-hierarchical taxonomy of concept nodes connected by directed, transitive relationships. Conclusions A proposal for a rule-based definition of chemical classes has been made that allows chemical compound classes to be defined more precisely than before. The proposed structure-based reasoning logic makes it possible to translate
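    The OR/NOT/AND combination of class definitions described above can be sketched as a tiny rule evaluator. In a real implementation the predicates would be SMARTS substructure matchers (e.g. via a cheminformatics toolkit such as RDKit); here matching is stubbed as a simple feature-set test, and all class names and patterns are illustrative, not taken from the paper.

```python
# Stand-in for a SMARTS substructure matcher: tests whether the pattern
# string occurs in a compound's feature set (illustrative only).
def matches(pattern):
    return lambda compound: pattern in compound["features"]

# Logical combinators over class-definition rules.
def AND(*rules):
    return lambda c: all(r(c) for r in rules)

def OR(*rules):
    return lambda c: any(r(c) for r in rules)

def NOT(rule):
    return lambda c: not rule(c)

# Hypothetical class definitions: "carboxylic acid but not amino acid".
carboxylic_acid = OR(matches("C(=O)O"), matches("C(=O)[O-]"))
amino_acid = AND(carboxylic_acid, matches("N"))
plain_acid = AND(carboxylic_acid, NOT(matches("N")))

acetic = {"features": {"C(=O)O"}}
glycine = {"features": {"C(=O)O", "N"}}

print(plain_acid(acetic))   # acetic acid: carboxylic acid, no amine
print(plain_acid(glycine))  # glycine: excluded by the NOT clause
```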

  8. Data Quality Screening Service

    NASA Technical Reports Server (NTRS)

    Strub, Richard; Lynnes, Christopher; Hearty, Thomas; Won, Young-In; Fox, Peter; Zednik, Stephan

    2013-01-01

    A report describes the Data Quality Screening Service (DQSS), which is designed to help automate the filtering of remote sensing data on behalf of science users. Whereas this process often involves much research through quality documents followed by laborious coding, the DQSS is a Web Service that provides data users with data pre-filtered to their particular criteria, while at the same time guiding the user with filtering recommendations of the cognizant data experts. The DQSS design is based on a formal semantic Web ontology that describes data fields and the quality fields for applying quality control within a data product. The accompanying code base handles several remote sensing datasets and quality control schemes for data products stored in Hierarchical Data Format (HDF), a common format for NASA remote sensing data. Together, the ontology and code support a variety of quality control schemes through the implementation of Boolean expressions with simple, reusable conditional expressions as operands. Additional datasets are added to the DQSS simply by registering instances in the ontology if they follow a quality scheme that is already modeled in the ontology. New quality schemes are added by extending the ontology and adding code for each new scheme.
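    The screening idea, reusable conditional expressions over a product's quality fields, combined with Boolean logic into a filter, can be sketched as follows. Field names, flag values, and thresholds are invented for illustration; real products define their own quality schemes in the DQSS ontology.

```python
# Reusable conditional expressions over a record's quality fields.
def flag_equals(field, value):
    return lambda rec: rec[field] == value

def flag_at_most(field, value):
    return lambda rec: rec[field] <= value

# Boolean combination of conditions into one screening predicate.
def all_of(*conds):
    return lambda rec: all(c(rec) for c in conds)

# Hypothetical criteria: keep retrievals that passed QA and have a
# cloud fraction no greater than 0.2.
keep = all_of(flag_equals("qa_flag", 0), flag_at_most("cloud_frac", 0.2))

records = [
    {"value": 1.3, "qa_flag": 0, "cloud_frac": 0.1},
    {"value": 2.7, "qa_flag": 1, "cloud_frac": 0.0},
    {"value": 0.9, "qa_flag": 0, "cloud_frac": 0.5},
]
screened = [r["value"] for r in records if keep(r)]
print(screened)  # only the first record passes both conditions
```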

  9. An Approach to Information Management for AIR7000 with Metadata and Ontologies

    DTIC Science & Technology

    2009-10-01

    information management components of maritime patrol and response mandate the effective use of metadata. We then propose an approach based on Semantic Technologies including the Resource Description Framework (RDF) and Upper Ontologies, for the implementation of metadata based dissemination services for AIR 7000. A preliminary architecture is proposed. While the architecture is not yet operational, it highlights the challenges that need to be overcome in any solution to the information management tasks of AIR 7000, and provides a possible form for the

  10. An OGSA Middleware for managing medical images using ontologies.

    PubMed

    Espert, Ignacio Blanquer; García, Vicente Hernández; Quilis, J Damià Segrelles

    2005-10-01

    This article presents a Middleware based on Grid Technologies that addresses the problem of sharing, transferring and processing DICOM medical images in a distributed environment, using an ontological schema to create virtual communities and to define common targets. It defines a distributed storage that builds up virtual repositories integrating different individual image repositories, providing global searching, progressive transmission, automatic encryption, pseudo-anonymisation and a link to remote processing services. Users from a Virtual Organisation can share the cases that are relevant for their communities or research areas, epidemiological studies or even deeper analysis of complex individual cases. A software architecture has been defined to solve the problems exposed above. Briefly, the architecture comprises five layers (from the most physical to the most logical) based on Grid Technologies. The lowest layers (the Core Middleware Layer and the Server Services Layer) are composed of Grid Services that implement the global management of resources. The Middleware Components Layer provides a transparent view of the Grid environment, and it has been the main objective of this work. Finally, the highest layer (the Application Layer) comprises the applications; a simple application has been implemented for testing the components developed in the Middleware Components Layer. Other side results of this work are the services developed in the Middleware Components Layer for managing DICOM images, creating virtual DICOM storages, progressive transmission, automatic encryption and pseudo-anonymisation depending on the ontologies. Other results, such as the Grid Services developed in the lowest layers, are also described in this article. Finally, a brief performance analysis and several snapshots from the applications developed are shown. The performance analysis shows that the components developed in this work provide image processing

  11. Construction of ontology augmented networks for protein complex prediction.

    PubMed

    Zhang, Yijia; Lin, Hongfei; Yang, Zhihao; Wang, Jian

    2013-01-01

    Protein complexes are of great importance in understanding the principles of cellular organization and function. The increase in available protein-protein interaction data, gene ontology and other resources makes it possible to develop computational methods for protein complex prediction. Most existing methods focus mainly on the topological structure of protein-protein interaction networks, and largely ignore the gene ontology annotation information. In this article, we constructed ontology augmented networks from protein-protein interaction data and gene ontology, which effectively combine the topological structure of protein-protein interaction networks and the similarity of gene ontology annotations into a unified distance measure. After constructing ontology augmented networks, a novel method (clustering based on ontology augmented networks) was proposed to predict protein complexes, which is capable of taking into account the topological structure of the protein-protein interaction network as well as the similarity of gene ontology annotations. Our method was applied to two different yeast protein-protein interaction datasets and predicted many well-known complexes. The experimental results showed that (i) ontology augmented networks and the unified distance measure can effectively combine structural closeness and gene ontology annotation similarity; (ii) our method is valuable in predicting protein complexes and has higher F1 and accuracy compared to other competing methods.
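    One way to unify interaction topology with GO annotation similarity, in the spirit of the abstract, is to shorten the distance of an observed edge when the two proteins share many GO terms. The blending weight and the Jaccard form below are assumptions for illustration, not the paper's exact formula; all protein and GO identifiers are invented.

```python
def go_similarity(terms_a, terms_b):
    """Jaccard similarity of two proteins' GO annotation sets."""
    a, b = set(terms_a), set(terms_b)
    return len(a & b) / len(a | b) if a | b else 0.0

def augmented_distance(terms_a, terms_b, alpha=0.5):
    """Unified distance for an interacting pair: 1 minus a blend of
    interaction evidence (taken as 1 for an observed edge) and GO
    annotation similarity, weighted by alpha."""
    return 1.0 - (alpha * 1.0 + (1 - alpha) * go_similarity(terms_a, terms_b))

annotations = {
    "P1": {"GO:0005634", "GO:0006351"},
    "P2": {"GO:0005634", "GO:0006351", "GO:0003677"},
    "P3": {"GO:0005739"},
}
# Pairs with similar annotations end up closer in the augmented network.
print(augmented_distance(annotations["P1"], annotations["P2"]))
print(augmented_distance(annotations["P1"], annotations["P3"]))
```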

  12. A UML profile for the OBO relation ontology.

    PubMed

    Guardia, Gabriela D A; Vêncio, Ricardo Z N; de Farias, Cléver R G

    2012-01-01

    Ontologies have increasingly been used in the biomedical domain, which has prompted the emergence of different initiatives to facilitate their development and integration. The Open Biological and Biomedical Ontologies (OBO) Foundry consortium provides a repository of life-science ontologies, which are developed according to a set of shared principles. This consortium has developed an ontology called OBO Relation Ontology aiming at standardizing the different types of biological entity classes and associated relationships. Since ontologies are primarily intended to be used by humans, the use of graphical notations for ontology development facilitates the capture, comprehension and communication of knowledge between its users. However, OBO Foundry ontologies are captured and represented basically using text-based notations. The Unified Modeling Language (UML) provides a standard and widely-used graphical notation for modeling computer systems. UML provides a well-defined set of modeling elements, which can be extended using a built-in extension mechanism named Profile. Thus, this work aims at developing a UML profile for the OBO Relation Ontology to provide a domain-specific set of modeling elements that can be used to create standard UML-based ontologies in the biomedical domain.

  13. Modular Ontology Techniques and their Applications in the Biomedical Domain.

    PubMed

    Pathak, Jyotishman; Johnson, Thomas M; Chute, Christopher G

    2008-08-05

    In the past several years, various ontologies and terminologies such as the Gene Ontology have been developed to enable interoperability across multiple diverse medical information systems. They provide a standard way of representing terms and concepts, thereby supporting easy transmission and interpretation of data for various applications. However, with their growing utilization, not only has the number of available ontologies increased considerably, but they have also become larger and more complex to manage. Toward this end, a growing body of work is emerging in the area of modular ontologies, where the emphasis is on either extracting and managing "modules" of an ontology relevant to a particular application scenario (ontology decomposition) or developing them independently and integrating them into a larger ontology (ontology composition). In this paper, we investigate state-of-the-art approaches in modular ontologies, focusing on techniques that are based on rigorous logical formalisms as well as on well-studied graph theories. We analyze and compare how such approaches can be leveraged in developing tools and applications in the biomedical domain. We conclude by highlighting some of the limitations of the modular ontology formalisms and put forward additional requirements to steer their future development.

  14. Quality control for terms and definitions in ontologies and taxonomies

    PubMed Central

    Köhler, Jacob; Munn, Katherine; Rüegg, Alexander; Skusa, Andre; Smith, Barry

    2006-01-01

    Background Ontologies and taxonomies are among the most important computational resources for molecular biology and bioinformatics. A series of recent papers has shown that the Gene Ontology (GO), the most prominent taxonomic resource in these fields, is marked by flaws of certain characteristic types, which flow from a failure to address basic ontological principles. As yet, no methods have been proposed that would allow ontology curators to pinpoint flawed terms or definitions in ontologies in a systematic way. Results We present computational methods that automatically identify terms and definitions which are defined in a circular or unintelligible way. We further demonstrate the potential of these methods by applying them to isolate a subset of 6001 problematic GO terms. By automatically aligning GO with other ontologies and taxonomies we were able to propose alternative synonyms and definitions for some of these problematic terms. This allowed us to demonstrate that these other resources do not contain definitions superior to those supplied by GO. Conclusion Our methods provide reliable indications of the quality of terms and definitions in ontologies and taxonomies. Further, they are well suited to assist ontology curators by drawing their attention to ill-defined terms. We have also shown the limitations of ontology mapping and alignment in assisting ontology curators in rectifying problems, thus pointing to the need for manual curation. PMID:16623942
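    A toy version of one check in this spirit flags a term whose definition reuses the term's own name, a simple form of circularity. The paper's actual methods are far richer; the examples below are invented, not actual GO entries.

```python
import re

def is_circular(term_name, definition):
    """True if the term's own name occurs verbatim in its definition,
    a crude indicator of a circular definition."""
    pattern = r"\b" + re.escape(term_name.lower()) + r"\b"
    return re.search(pattern, definition.lower()) is not None

# Invented examples illustrating the check.
print(is_circular("cell growth", "Growth of a cell by cell growth."))  # circular
print(is_circular("apoptosis", "A programmed cell death process."))    # fine
```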

  15. A Cognitive Support Framework for Ontology Mapping

    NASA Astrophysics Data System (ADS)

    Falconer, Sean M.; Storey, Margaret-Anne

    Ontology mapping is the key to data interoperability in the semantic web. The problem has received a lot of research attention; however, the emphasis has been mostly devoted to automating the mapping process, even though the creation of mappings often involves the user. As industry interest in semantic web technologies grows and the number of widely adopted semantic web applications increases, we must begin to support the user. In this paper, we combine data gathered from background literature, theories of cognitive support and decision making, and an observational case study to propose a theoretical framework for cognitive support in ontology mapping tools. We also describe a tool called CogZ that is based on this framework.

  16. Mining Gene Ontology Data with AGENDA.

    PubMed

    Ovezmyradov, Guvanch; Lu, Qianhao; Göpfert, Martin C

    2012-01-01

    The Gene Ontology (GO) initiative is a collaborative effort that uses controlled vocabularies for annotating genetic information. We here present AGENDA (Application for mining Gene Ontology Data), a novel web-based tool for accessing the GO database. AGENDA allows the user to simultaneously retrieve and compare gene lists linked to different GO terms in diverse species using batch queries, facilitating comparative approaches to genetic information. The web-based application offers diverse search options and allows the user to bookmark, visualize, and download the results. AGENDA is an open source web-based application that is freely available for non-commercial use at the project homepage. URL: http://sourceforge.net/projects/bioagenda.

  17. Quality of computationally inferred gene ontology annotations.

    PubMed

    Skunca, Nives; Altenhoff, Adrian; Dessimoz, Christophe

    2012-05-01

    Gene Ontology (GO) has established itself as the undisputed standard for protein function annotation. Most annotations are inferred electronically, i.e. without individual curator supervision, but they are widely considered unreliable. At the same time, we crucially depend on those automated annotations, as most newly sequenced genomes are non-model organisms. Here, we introduce a methodology to systematically and quantitatively evaluate electronic annotations. By exploiting changes in successive releases of the UniProt Gene Ontology Annotation database, we assessed the quality of electronic annotations in terms of specificity, reliability, and coverage. Overall, we not only found that electronic annotations have significantly improved in recent years, but also that their reliability now rivals that of annotations inferred by curators when they use evidence other than experiments from primary literature. This work provides the means to identify the subset of electronic annotations that can be relied upon, an important outcome given that >98% of all annotations are inferred without direct curation.
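    The release-comparison idea can be sketched roughly as follows: treat an electronic (protein, GO term) annotation from an older release as confirmed if it appears among curated annotations in a newer release, and as rejected if it was dropped. This is a simplification of the paper's methodology, and all pairs below are invented.

```python
def reliability(electronic_old, curated_new):
    """Fraction of old electronic annotations later confirmed by
    curation, among those either confirmed or dropped."""
    confirmed = len(electronic_old & curated_new)
    rejected = len(electronic_old - curated_new)
    total = confirmed + rejected
    return confirmed / total if total else 0.0

# Hypothetical (protein, GO term) annotation sets from two releases.
electronic = {("P1", "GO:0003677"), ("P2", "GO:0005634"), ("P3", "GO:0005739")}
curated = {("P1", "GO:0003677"), ("P2", "GO:0005634"), ("P4", "GO:0006351")}
print(reliability(electronic, curated))  # 2 of 3 old annotations confirmed
```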

  18. Using ontologies to describe mouse phenotypes

    PubMed Central

    Gkoutos, Georgios V; Green, Eain CJ; Mallon, Ann-Marie; Hancock, John M; Davidson, Duncan

    2005-01-01

    The mouse is an important model of human genetic disease. Describing phenotypes of mutant mice in a standard, structured manner that will facilitate data mining is a major challenge for bioinformatics. Here we describe a novel, compositional approach to this problem which combines core ontologies from a variety of sources. This produces a framework with greater flexibility, power and economy than previous approaches. We discuss some of the issues this approach raises. PMID:15642100

  19. An ontology-driven, diagnostic modeling system.

    PubMed

    Haug, Peter J; Ferraro, Jeffrey P; Holmen, John; Wu, Xinzi; Mynam, Kumar; Ebert, Matthew; Dean, Nathan; Jones, Jason

    2013-06-01

    To present a system that uses knowledge stored in a medical ontology to automate the development of diagnostic decision support systems. To illustrate its function through an example focused on the development of a tool for diagnosing pneumonia. We developed a system that automates the creation of diagnostic decision-support applications. It relies on a medical ontology to direct the acquisition of clinical data from a clinical data warehouse and uses an automated analytic system to apply a sequence of machine learning algorithms that create applications for diagnostic screening. We refer to this system as the ontology-driven diagnostic modeling system (ODMS). We tested this system using samples of patient data collected in Salt Lake City emergency rooms and stored in Intermountain Healthcare's enterprise data warehouse. The system was used in the preliminary development steps of a tool to identify patients with pneumonia in the emergency department. This tool was compared with a manually created diagnostic tool derived from a curated dataset. The manually created tool is currently in clinical use. The automatically created tool had an area under the receiver operating characteristic curve of 0.920 (95% CI 0.916 to 0.924), compared with 0.944 (95% CI 0.942 to 0.947) for the manually created tool. Initial testing of the ODMS demonstrates promising accuracy for the highly automated results and illustrates the route to model improvement. The use of medical knowledge, embedded in ontologies, to direct the initial development of diagnostic computing systems appears feasible.

  20. A Uniform Ontology for Software Interfaces

    NASA Technical Reports Server (NTRS)

    Feyock, Stefan

    2002-01-01

    It is universally the case that computer users who are not also computer specialists prefer to deal with computers in terms of a familiar ontology, namely that of their application domains. For example, the well-known Windows ontology assumes that the user is an office worker, and therefore should be presented with a "desktop environment" featuring entities such as (virtual) file folders, documents, appointment calendars, and the like, rather than a world of machine registers and machine language instructions, or even the DOS command level. The central theme of this research has been the proposition that the user interacting with a software system should have at his disposal both the ontology underlying the system and a model of the system. This information is necessary for understanding the system in use, as well as for the automatic generation of assistance for the user, both in solving the problem for which the application is designed and in providing guidance on the capabilities and use of the system.

  1. Defining functional distances over Gene Ontology

    PubMed Central

    del Pozo, Angela; Pazos, Florencio; Valencia, Alfonso

    2008-01-01

    Background A fundamental problem when trying to define the functional relationships between proteins is the difficulty in quantifying functional similarities, even when well-structured ontologies exist regarding the activity of proteins (i.e. the Gene Ontology, GO). However, functional metrics can overcome the problems in comparing and evaluating functional assignments and predictions. As a measure of proximity, previous approaches to comparing GO terms considered linkage within the ontology, weighted by a probability distribution that balances the non-uniform 'richness' of different parts of the Directed Acyclic Graph. Here, we have followed a different approach to quantify functional similarities between GO terms. Results We propose a new method to derive 'functional distances' between GO terms that is based on the simultaneous occurrence of terms in the same set of InterPro entries, instead of relying on the structure of the GO. The coincidence of GO terms reveals natural biological links between the GO functions and defines a distance model Df which fulfils the properties of a metric space. The distances obtained in this way can be represented as a hierarchical 'Functional Tree'. Conclusion The method proposed provides a new definition of distance that enables the similarity between GO terms to be quantified. Additionally, the 'Functional Tree' defines groups with biological meaning, enhancing its utility for protein function comparison and prediction. Finally, this approach could be used for function-based protein searches in databases, and for analysing the gene clusters produced by DNA array experiments. PMID:18221506
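    The co-occurrence idea can be illustrated as a distance between GO terms derived from the InterPro entries each term annotates. The simple Jaccard-style form below satisfies the metric properties but is only an illustration, not the paper's exact Df; the entry sets are invented.

```python
def functional_distance(entries_a, entries_b):
    """Distance between two GO terms from the InterPro entries they
    annotate: 0 when the entry sets coincide, 1 when disjoint."""
    a, b = set(entries_a), set(entries_b)
    union = a | b
    return 1.0 - (len(a & b) / len(union)) if union else 1.0

# Hypothetical InterPro entry sets for three GO terms.
occurs = {
    "GO:0003677": {"IPR001", "IPR002", "IPR003"},  # DNA binding
    "GO:0006351": {"IPR002", "IPR003", "IPR004"},  # transcription
    "GO:0005975": {"IPR009"},                      # carbohydrate metabolism
}
d_related = functional_distance(occurs["GO:0003677"], occurs["GO:0006351"])
d_unrelated = functional_distance(occurs["GO:0003677"], occurs["GO:0005975"])
print(d_related < d_unrelated)  # related functions lie closer together
```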

  2. Annotating the human genome with Disease Ontology

    PubMed Central

    Osborne, John D; Flatow, Jared; Holko, Michelle; Lin, Simon M; Kibbe, Warren A; Zhu, Lihua (Julie); Danila, Maria I; Feng, Gang; Chisholm, Rex L

    2009-01-01

    Background The human genome has been extensively annotated with Gene Ontology for biological functions, but minimally computationally annotated for diseases. Results We used the Unified Medical Language System (UMLS) MetaMap Transfer tool (MMTx) to discover gene-disease relationships from the GeneRIF database. We utilized a comprehensive subset of UMLS, which is disease-focused and structured as a directed acyclic graph (the Disease Ontology), to filter and interpret results from MMTx. The results were validated against the Homayouni gene collection using recall and precision measurements. We compared our results with the widely used Online Mendelian Inheritance in Man (OMIM) annotations. Conclusion The validation data set suggests a 91% recall rate and 97% precision rate of disease annotation using GeneRIF, in contrast with a 22% recall and 98% precision using OMIM. Our thesaurus-based approach allows for comparisons to be made between disease-containing databases and allows for increased accuracy in disease identification through synonym matching. The much higher recall rate of our approach demonstrates that annotating the human genome with the Disease Ontology and GeneRIF for diseases dramatically increases the coverage of disease annotation of the human genome. PMID:19594883

  3. Automated database mediation using ontological metadata mappings.

    PubMed

    Marenco, Luis; Wang, Rixin; Nadkarni, Prakash

    2009-01-01

    To devise an automated approach for integrating federated database information using database ontologies constructed from their extended metadata. One challenge of database federation is that the granularity of representation of equivalent data varies across systems. Dealing effectively with this problem is analogous to dealing with precoordinated vs. postcoordinated concepts in biomedical ontologies. The authors describe an approach based on ontological metadata mapping rules defined with elements of a global vocabulary, which allows a query specified at one granularity level to fetch data, where possible, from databases within the federation that use different granularities. This is implemented in OntoMediator, a newly developed production component of our previously described Query Integrator System. OntoMediator's operation is illustrated with a query that accesses three geographically separate, interoperating databases. An example based on SNOMED also illustrates the applicability of high-level rules to support the enforcement of constraints that can prevent inappropriate curator or power-user actions. A rule-based framework simplifies the design and maintenance of systems where categories of data must be mapped to each other, for the purpose of either cross-database query or for curation of the contents of compositional controlled vocabularies.
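    The granularity problem described here can be sketched as a small rule table that expands a coarse global-vocabulary term into the finer-grained codes a particular federated database uses. All terms, codes, and the rule shape are invented for illustration; OntoMediator's actual mapping-rule language is richer.

```python
# Hypothetical mapping rules: coarse global term -> fine-grained local codes.
rules = {
    "diabetes mellitus": {"DM-type1", "DM-type2", "DM-gestational"},
    "hypertension": {"HTN-essential", "HTN-secondary"},
}

def expand_query(term, local_codes):
    """Return the local codes matching a coarse query term; when the
    local database already uses the queried granularity, the term maps
    to itself."""
    fine = rules.get(term, {term})
    return fine & local_codes

# A federated database that records only these fine-grained codes.
db_fine = {"DM-type1", "DM-type2", "HTN-essential"}
print(sorted(expand_query("diabetes mellitus", db_fine)))
```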

  4. Using Ontology Network Structure in Text Mining

    PubMed Central

    Berndt, Donald J.; McCart, James A.; Luther, Stephen L.

    2010-01-01

    Statistical text mining treats documents as bags of words, with a focus on term frequencies within documents and across document collections. Unlike natural language processing (NLP) techniques that rely on an engineered vocabulary or a full-featured ontology, statistical approaches do not make use of domain-specific knowledge. The freedom from biases can be an advantage, but at the cost of ignoring potentially valuable knowledge. The approach proposed here investigates a hybrid strategy based on computing graph measures of term importance over an entire ontology and injecting the measures into the statistical text mining process. As a starting point, we adapt existing search engine algorithms such as PageRank and HITS to determine term importance within an ontology graph. The graph-theoretic approach is evaluated using a smoking data set from the i2b2 National Center for Biomedical Computing, cast as a simple binary classification task for categorizing smoking-related documents, demonstrating consistent improvements in accuracy. PMID:21346937
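    The graph-measure idea can be illustrated with a minimal PageRank computed by power iteration over a toy is-a graph; in the hybrid strategy described above, such scores would then be injected as term weights into the statistical text mining step. The graph below is invented, not a real ontology fragment.

```python
def pagerank(graph, damping=0.85, iters=50):
    """graph: node -> list of nodes it links to. Returns node -> score."""
    nodes = list(graph)
    n = len(nodes)
    rank = {v: 1.0 / n for v in nodes}
    for _ in range(iters):
        new = {v: (1 - damping) / n for v in nodes}
        for v, outs in graph.items():
            if outs:
                share = damping * rank[v] / len(outs)
                for w in outs:
                    new[w] += share
            else:  # dangling node: spread its rank uniformly
                for w in nodes:
                    new[w] += damping * rank[v] / n
        rank = new
    return rank

# Toy is-a edges pointing toward more general terms.
g = {"emphysema": ["lung disease"], "asthma": ["lung disease"],
     "lung disease": ["disease"], "disease": []}
r = pagerank(g)
print(max(r, key=r.get))  # the most general term accumulates the most rank
```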

  5. Ontological knowledge structure of intuitive biology

    NASA Astrophysics Data System (ADS)

    Martin, Suzanne Michele

    It has become increasingly important for individuals to understand infectious disease, as there has been a tremendous rise in viral and bacterial disease. This research examines systematic misconceptions regarding the characteristics of viruses and bacteria in individuals previously educated in the biological sciences at a college level. Ninety pre-nursing students were administered the Knowledge Acquisition Device (KAD), which consists of 100 true/false items that included statements about the possible attributes of four entities: bacteria, virus, amoeba, and protein. Thirty pre-nursing students who incorrectly stated that viruses were alive were randomly assigned to three conditions: (1) exposure to information about the ontological nature of viruses, (2) general information about viruses, and (3) a control condition. In the condition that addressed the ontological nature of a virus, all participants were able to classify viruses correctly as not alive; however, items that required inferences, such as "viruses come in male and female forms" or "viruses breed with each other to make baby viruses", were still answered incorrectly in all conditions in the posttest. It appears that functional knowledge, e.g. whether a virus is alive or dead, or how it is structured, is not enough for an individual to have a full and accurate understanding of viruses. Ontological knowledge may alter the functional knowledge, but the underlying inferences remain systematically incorrect.

  6. The Human Phenotype Ontology in 2017

    PubMed Central

    Köhler, Sebastian; Vasilevsky, Nicole A.; Engelstad, Mark; Foster, Erin; McMurry, Julie; Aymé, Ségolène; Baynam, Gareth; Bello, Susan M.; Boerkoel, Cornelius F.; Boycott, Kym M.; Brudno, Michael; Buske, Orion J.; Chinnery, Patrick F.; Cipriani, Valentina; Connell, Laureen E.; Dawkins, Hugh J.S.; DeMare, Laura E.; Devereau, Andrew D.; de Vries, Bert B.A.; Firth, Helen V.; Freson, Kathleen; Greene, Daniel; Hamosh, Ada; Helbig, Ingo; Hum, Courtney; Jähn, Johanna A.; James, Roger; Krause, Roland; F. Laulederkind, Stanley J.; Lochmüller, Hanns; Lyon, Gholson J.; Ogishima, Soichi; Olry, Annie; Ouwehand, Willem H.; Pontikos, Nikolas; Rath, Ana; Schaefer, Franz; Scott, Richard H.; Segal, Michael; Sergouniotis, Panagiotis I.; Sever, Richard; Smith, Cynthia L.; Straub, Volker; Thompson, Rachel; Turner, Catherine; Turro, Ernest; Veltman, Marijcke W.M.; Vulliamy, Tom; Yu, Jing; von Ziegenweidt, Julie; Zankl, Andreas; Züchner, Stephan; Zemojtel, Tomasz; Jacobsen, Julius O.B.; Groza, Tudor; Smedley, Damian; Mungall, Christopher J.; Haendel, Melissa; Robinson, Peter N.

    2017-01-01

    Deep phenotyping has been defined as the precise and comprehensive analysis of phenotypic abnormalities in which the individual components of the phenotype are observed and described. The three components of the Human Phenotype Ontology (HPO; www.human-phenotype-ontology.org) project are the phenotype vocabulary, disease-phenotype annotations and the algorithms that operate on these. These components are being used for computational deep phenotyping and precision medicine as well as integration of clinical data into translational research. The HPO is being increasingly adopted as a standard for phenotypic abnormalities by diverse groups such as international rare disease organizations, registries, clinical labs, biomedical resources, and clinical software tools and will thereby contribute toward nascent efforts at global data exchange for identifying disease etiologies. This update article reviews the progress of the HPO project since the debut Nucleic Acids Research database article in 2014, including specific areas of expansion such as common (complex) disease, new algorithms for phenotype driven genomic discovery and diagnostics, integration of cross-species mapping efforts with the Mammalian Phenotype Ontology, an improved quality control pipeline, and the addition of patient-friendly terminology. PMID:27899602

  7. Ontological System for Context Artifacts and Resources

    NASA Astrophysics Data System (ADS)

    Huang, T.; Chung, N. T.; Mukherjee, R. M.

    2012-12-01

    The Adaptive Vehicle Make (AVM) program is a portfolio of programs managed by the Defense Advanced Research Projects Agency (DARPA). It was established to revolutionize how the DoD designs, verifies, and manufactures complex defense systems and vehicles. The Component, Context, and Manufacturing Model Library (C2M2L; pronounced "camel") seeks to develop the domain-specific models needed to enable design, verification, and fabrication of the Fast Adaptable Next-Generation (FANG) infantry fighting vehicle within its overall infrastructure. Terrain models are being developed to represent the surfaces and fluids that an amphibious infantry fighting vehicle would traverse, ranging from paved road surfaces to rocky, mountainous terrain, slope, discrete obstacles, mud, sand, snow, and water fording. Context models are being developed to provide additional data for environmental factors such as humidity, wind speed, particulate presence and character, solar radiation, cloud cover, precipitation, and more. The Ontological System for Context Artifacts and Resources (OSCAR), designed and developed at the Jet Propulsion Laboratory, is a semantic web data system that enables context artifacts to be registered and searched according to their meaning, rather than indexed according to their syntactic structure alone (as is the case for traditional search engines). The system relies heavily on the Semantic Web for Earth and Environmental Terminology (SWEET) ontologies to model physical terrain environment and context model characteristics. In this talk, we focus on the application of the SWEET ontologies and the design of the OSCAR system architecture.

  8. Cyber Forensics Ontology for Cyber Criminal Investigation

    NASA Astrophysics Data System (ADS)

    Park, Heum; Cho, Sunho; Kwon, Hyuk-Chul

    We developed a Cyber Forensics Ontology for criminal investigation in cyber space. Cyber crime is classified into cyber terror and general cyber crime, and those two classes are connected with each other. The investigation of cyber terror requires high technology, a suitable system environment and experts, while general cyber crime is connected with general crime by evidence from digital data and cyber space. Accordingly, it is difficult to determine related crime types and to collect evidence. Therefore, we considered the classification of cyber crime, the collection of evidence in cyber space and the application of laws to cyber crime. In order to investigate cyber crime efficiently, it is necessary to integrate those concepts for each cyber crime case. Thus, we constructed a cyber forensics domain ontology for criminal investigation in cyber space, according to the categories of cyber crime, laws, evidence and information on criminals. This ontology can be used in the process of investigating cyber crime cases, and for data mining of cyber crime: classification, clustering, association and detection of crime types, crime cases, evidence and criminals.

  9. The Human Phenotype Ontology in 2017.

    PubMed

    Köhler, Sebastian; Vasilevsky, Nicole A; Engelstad, Mark; Foster, Erin; McMurry, Julie; Aymé, Ségolène; Baynam, Gareth; Bello, Susan M; Boerkoel, Cornelius F; Boycott, Kym M; Brudno, Michael; Buske, Orion J; Chinnery, Patrick F; Cipriani, Valentina; Connell, Laureen E; Dawkins, Hugh J S; DeMare, Laura E; Devereau, Andrew D; de Vries, Bert B A; Firth, Helen V; Freson, Kathleen; Greene, Daniel; Hamosh, Ada; Helbig, Ingo; Hum, Courtney; Jähn, Johanna A; James, Roger; Krause, Roland; F Laulederkind, Stanley J; Lochmüller, Hanns; Lyon, Gholson J; Ogishima, Soichi; Olry, Annie; Ouwehand, Willem H; Pontikos, Nikolas; Rath, Ana; Schaefer, Franz; Scott, Richard H; Segal, Michael; Sergouniotis, Panagiotis I; Sever, Richard; Smith, Cynthia L; Straub, Volker; Thompson, Rachel; Turner, Catherine; Turro, Ernest; Veltman, Marijcke W M; Vulliamy, Tom; Yu, Jing; von Ziegenweidt, Julie; Zankl, Andreas; Züchner, Stephan; Zemojtel, Tomasz; Jacobsen, Julius O B; Groza, Tudor; Smedley, Damian; Mungall, Christopher J; Haendel, Melissa; Robinson, Peter N

    2017-01-04

    Deep phenotyping has been defined as the precise and comprehensive analysis of phenotypic abnormalities in which the individual components of the phenotype are observed and described. The three components of the Human Phenotype Ontology (HPO; www.human-phenotype-ontology.org) project are the phenotype vocabulary, disease-phenotype annotations and the algorithms that operate on these. These components are being used for computational deep phenotyping and precision medicine as well as integration of clinical data into translational research. The HPO is being increasingly adopted as a standard for phenotypic abnormalities by diverse groups such as international rare disease organizations, registries, clinical labs, biomedical resources, and clinical software tools and will thereby contribute toward nascent efforts at global data exchange for identifying disease etiologies. This update article reviews the progress of the HPO project since the debut Nucleic Acids Research database article in 2014, including specific areas of expansion such as common (complex) disease, new algorithms for phenotype driven genomic discovery and diagnostics, integration of cross-species mapping efforts with the Mammalian Phenotype Ontology, an improved quality control pipeline, and the addition of patient-friendly terminology.

  10. THE COMPOSITIONAL STRUCTURE OF GENE ONTOLOGY TERMS

    PubMed Central

    OGREN, P. V.; COHEN, K. B.; ACQUAAH-MENSAH, G. K.; EBERLEIN, J.; HUNTER, L.

    2008-01-01

    An analysis of the term names in the Gene Ontology reveals the prevalence of substring relations between terms: 65.3% of all GO terms contain another GO term as a proper substring. This substring relation often coincides with a derivational relationship between the terms. For example, the term regulation of cell proliferation (GO:0042127) is derived from the term cell proliferation (GO:0008283) by addition of the phrase regulation of. Further, we note that particular substrings which are not themselves GO terms (e.g. regulation of in the preceding example) recur frequently and in consistent subtrees of the ontology, and that these frequently occurring substrings often indicate interesting semantic relationships between the related terms. We describe the extent of these phenomena—substring relations between terms, and the recurrence of derivational phrases such as regulation of—and propose that these phenomena can be exploited in various ways to make the information in GO more computationally accessible, to construct a conceptually richer representation of the data encoded in the ontology, and to assist in the analysis of natural language texts. PMID:14992505
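
    The substring analysis described above is straightforward to reproduce. The sketch below uses a tiny illustrative term set rather than a full GO release, so the computed fraction is for demonstration only.

```python
# Count how many term names contain another term name as a proper
# substring, as in the Gene Ontology analysis described above.
# The term list is an illustrative subset, not a full GO release.
terms = {
    "GO:0008283": "cell proliferation",
    "GO:0042127": "regulation of cell proliferation",
    "GO:0008284": "positive regulation of cell proliferation",
    "GO:0016049": "cell growth",
}

def contains_other_term(name: str, names: set[str]) -> bool:
    """True if any other term name occurs as a proper substring of `name`."""
    return any(other != name and other in name for other in names)

names = set(terms.values())
derived = {n for n in names if contains_other_term(n, names)}
fraction = len(derived) / len(names)
print(sorted(derived))  # ['positive regulation of cell proliferation', 'regulation of cell proliferation']
print(fraction)         # 0.5
```

On a full ontology release the same loop would report the 65.3% figure cited in the abstract; the common prefixes it surfaces (e.g. "regulation of") are the recurring derivational phrases the authors discuss.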

  11. Ontology-Based Search of Genomic Metadata.

    PubMed

    Fernandez, Javier D; Lenzerini, Maurizio; Masseroli, Marco; Venco, Francesco; Ceri, Stefano

    2016-01-01

    The Encyclopedia of DNA Elements (ENCODE) is a huge and still expanding public repository of more than 4,000 experiments and 25,000 data files, assembled by a large international consortium since 2007; unknown biological knowledge can be extracted from these huge and largely unexplored data, leading to data-driven genomic, transcriptomic, and epigenomic discoveries. Yet, the search for relevant datasets for knowledge discovery is only minimally supported: the metadata describing ENCODE datasets are quite simple and incomplete, and are not described by a coherent underlying ontology. Here, we show how to overcome this limitation by adopting an ENCODE metadata searching approach which uses high-quality ontological knowledge and state-of-the-art indexing technologies. Specifically, we developed S.O.S. GeM (http://www.bioinformatics.deib.polimi.it/SOSGeM/), a system supporting effective semantic search and retrieval of ENCODE datasets. First, we constructed a Semantic Knowledge Base by starting with concepts extracted from ENCODE metadata, matched to and expanded on biomedical ontologies integrated in the well-established Unified Medical Language System. We prove that this inference method is sound and complete. Then, we leveraged the Semantic Knowledge Base to semantically search ENCODE data from arbitrary biologists' queries. This correctly finds more datasets than those returned by a purely syntactic search, as supported by the other available systems. We empirically show the relevance of the found datasets to the biologists' queries.
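
    The kind of ontology-driven query expansion the abstract describes can be sketched in a few lines. The concept mappings and dataset descriptions below are invented for illustration and are not part of S.O.S. GeM or the UMLS.

```python
# Minimal sketch of ontology-based query expansion: a user's query term
# is first expanded with its synonyms and narrower concepts, and only
# then matched against dataset metadata. A purely syntactic search would
# match the literal query string alone. All data here is invented.
ontology = {
    "leukocyte": {"white blood cell", "lymphocyte", "monocyte"},
}

datasets = {
    "ENC001": "ChIP-seq on lymphocyte cell line",
    "ENC002": "RNA-seq on hepatocyte sample",
    "ENC003": "DNase-seq on white blood cell",
}

def semantic_search(query: str) -> list[str]:
    """Return accessions whose description matches the expanded query."""
    expanded = {query} | ontology.get(query, set())
    return sorted(acc for acc, desc in datasets.items()
                  if any(term in desc for term in expanded))

print(semantic_search("leukocyte"))  # ['ENC001', 'ENC003'] — syntactic match alone would find neither
```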

  12. Development and Evaluation of an Adolescents' Depression Ontology for Analyzing Social Data.

    PubMed

    Jung, Hyesil; Park, Hyeoun-Ae; Song, Tae-Min

    2016-01-01

    This study aims to develop and evaluate an ontology of adolescents' depression to be used for collecting and analyzing social data. The ontology was developed according to the 'Ontology Development 101' methodology. Concepts were extracted from clinical practice guidelines and the related literature. The ontology is composed of five sub-ontologies, which represent risk factors, signs and symptoms, measurement, diagnostic results, and management and care. The ontology was evaluated in four different ways: first, we examined the frequency with which ontology concepts appeared in social data; second, the content coverage of the ontology was evaluated by comparing ontology concepts with concepts extracted from youth depression counseling records; third, the structural and representational layers of the ontology were evaluated by five ontology and psychiatric nursing experts; fourth, the scope of the ontology was examined by answering 59 competency questions. The ontology was improved by adding new concepts and synonyms and revising the level of structure.

  13. Health care ontologies: knowledge models for record sharing and decision support.

    PubMed

    Madsen, Maria

    2010-01-01

    This chapter gives an educational overview of: * The difference between informal and formal ontologies * The primary objectives of ontology design, re-use, extensibility, and interoperability * How formal ontologies can be used to map terminologies and classification systems * How formal ontologies improve semantic interoperability * The relationship between a well-formed ontology and the development of intelligent decision support.

  14. Detecting Inconsistencies in the Gene Ontology Using Ontology Databases with Not-gadgets

    NASA Astrophysics Data System (ADS)

    Lependu, Paea; Dou, Dejing; Howe, Doug

    We present ontology databases with not-gadgets, a method for detecting inconsistencies in an ontology with large numbers of annotated instances by using triggers and exclusion dependencies in a unique way. What makes this work relevant is the use of the database itself, rather than an external reasoner, to detect logical inconsistencies given large numbers of annotated instances. What distinguishes this work is the use of event-driven triggers together with the introduction of explicit negations. We applied this approach to the serotonin example, an open problem in biomedical informatics which aims to use annotations to help identify inconsistencies in the Gene Ontology. We discovered 75 inconsistencies with important implications for biology, including: (1) methods for refining the transfer rules used for inferring electronic annotations, and (2) highlighting possible biological differences across species worth investigating.
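
    The core "not-gadget" idea — letting the database itself flag a clash between an annotation and an explicit negation via a trigger — can be sketched with SQLite. The schema, gene, and term identifiers below are illustrative, not the authors' actual implementation.

```python
import sqlite3

# Sketch of the "not-gadget" pattern: explicit negations are stored
# alongside positive annotations, and a database trigger (not an
# external reasoner) aborts any insert that contradicts a stored
# negation. Table and identifier names are illustrative only.
db = sqlite3.connect(":memory:")
db.executescript("""
CREATE TABLE annotation     (gene TEXT, term TEXT);
CREATE TABLE not_annotation (gene TEXT, term TEXT);

CREATE TRIGGER check_consistency
AFTER INSERT ON annotation
BEGIN
    SELECT RAISE(ABORT, 'inconsistent annotation')
    WHERE EXISTS (SELECT 1 FROM not_annotation
                  WHERE gene = NEW.gene AND term = NEW.term);
END;
""")

# Assert the negation first, then attempt the contradicting annotation.
db.execute("INSERT INTO not_annotation VALUES ('htr1a', 'GO:0004984')")
try:
    db.execute("INSERT INTO annotation VALUES ('htr1a', 'GO:0004984')")
    print("accepted")
except sqlite3.IntegrityError as e:
    print("rejected:", e)  # rejected: inconsistent annotation
```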

  15. Overlapping ontologies and Indigenous knowledge. From integration to ontological self-determination.

    PubMed

    Ludwig, David

    2016-10-01

    Current controversies about knowledge integration reflect conflicting ideas of what it means to "take Indigenous knowledge seriously". While there is increased interest in integrating Indigenous and Western scientific knowledge in various disciplines such as anthropology and ethnobiology, integration projects are often accused of recognizing Indigenous knowledge only insofar as it is useful for Western scientists. The aim of this article is to use tools from philosophy of science to develop a model of both successful integration and integration failures. On the one hand, I argue that cross-cultural recognition of property clusters leads to an ontological overlap that makes knowledge integration often epistemically productive and socially useful. On the other hand, I argue that knowledge integration is limited by ontological divergence. Adequate models of Indigenous knowledge will therefore have to take integration failures seriously and I argue that integration efforts need to be complemented by a political notion of ontological self-determination.

  16. Development of National Map ontologies for organization and orchestration of hydrologic observations

    NASA Astrophysics Data System (ADS)

    Lieberman, J. E.

    2014-12-01

    usefulness of the developed ontology components includes both solicitation of feedback on prototype applications, and provision of a query / mediation service for feature-linked data to facilitate development of additional third-party applications.

  17. Theory and ontology for sharing temporal knowledge

    NASA Technical Reports Server (NTRS)

    Loganantharaj, Rasiah

    1996-01-01

    Using current technology, the sharing or re-use of knowledge bases is very difficult, if not impossible. ARPA has correctly recognized the problem and funded a knowledge sharing initiative. One of the outcomes of this project is a formal language, the Knowledge Interchange Format (KIF), for representing knowledge so that it can be translated into other languages. Capturing and representing design knowledge, and reasoning with it, have become very important for NASA, a pioneer of innovative design of unique products. To upgrade an existing design for changing technology, needs, or requirements, it is essential to understand the design rationale, design choices, options, and other relevant information associated with the design. Capturing such information and presenting it in the appropriate form are part of NASA's ongoing Design Knowledge Capture project. The behavior of an object, and various other aspects related to time, are captured by appropriate temporal knowledge. The captured design knowledge will be represented in such a way that the various groups at NASA interested in different aspects of the design cycle can access and use it effectively. To facilitate knowledge sharing among these groups, one has to develop a well-defined ontology. An ontology is a specification of a conceptualization. In the literature, several specific domains have been studied and well-defined ontologies developed for them. However, very little work has been done on representing temporal knowledge to facilitate sharing. During the ASEE summer program, I investigated several temporal models and proposed a theory of time that is flexible enough to accommodate time elements such as points and intervals, and that is capable of handling qualitative and quantitative temporal constraints. I also proposed a primitive temporal ontology from which other relevant temporal ontologies can be built.
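
    A temporal primitive in the spirit described above can be sketched by treating a point as a zero-length interval and deriving a few qualitative (Allen-style) relations from endpoint comparisons. This is an illustration, not the author's proposed ontology.

```python
# Sketch of a temporal primitive that accommodates both points and
# intervals (a point is a zero-length interval) and supports a small
# subset of Allen's 13 qualitative interval relations.
from dataclasses import dataclass

@dataclass(frozen=True)
class Interval:
    start: float
    end: float

    @property
    def is_point(self) -> bool:
        return self.start == self.end

def relation(a: Interval, b: Interval) -> str:
    """Qualitative relation of a to b (subset of Allen's relations)."""
    if a.end < b.start:
        return "before"
    if a.end == b.start:
        return "meets"
    if a.start == b.start and a.end == b.end:
        return "equals"
    if a.start >= b.start and a.end <= b.end:
        return "during"
    return "overlaps"

design = Interval(0, 5)
review = Interval(5, 8)
print(relation(design, review))          # meets
print(relation(Interval(2, 2), design))  # during — a point inside an interval
```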

  18. Enabling Ontology Based Semantic Queries in Biomedical Database Systems

    PubMed Central

    Zheng, Shuai; Wang, Fusheng; Lu, James; Saltz, Joel

    2013-01-01

    While current biomedical ontology repositories offer primitive query capabilities, it is difficult or cumbersome to support ontology based semantic queries directly in semantically annotated biomedical databases. The problem may be largely attributed to the mismatch between the models of the ontologies and the databases, and the mismatch between the query interfaces of the two systems. To fully realize semantic query capabilities based on ontologies, we develop a system DBOntoLink to provide unified semantic query interfaces by extending database query languages. With DBOntoLink, semantic queries can be directly and naturally specified as extended functions of the database query languages without any programming needed. DBOntoLink is adaptable to different ontologies through customizations and supports major biomedical ontologies hosted at the NCBO BioPortal. We demonstrate the use of DBOntoLink in a real world biomedical database with semantically annotated medical image annotations. PMID:23404054
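
    The general idea of extending a database query language with ontology-aware functions can be sketched using SQLite's user-defined SQL functions. The is_a function and the tiny subclass hierarchy below are hypothetical stand-ins, not DBOntoLink's actual API.

```python
import sqlite3

# Sketch of embedding an ontology function in SQL: a custom
# is_a(term, ancestor) function consults a (tiny, hard-coded) subclass
# hierarchy, so the semantic condition appears directly in the query
# with no application-side filtering. All names are illustrative.
SUBCLASS_OF = {
    "adenocarcinoma": "carcinoma",
    "carcinoma": "neoplasm",
}

def is_a(term, ancestor):
    # Walk the subclass chain upward until the ancestor or the root.
    while term is not None:
        if term == ancestor:
            return True
        term = SUBCLASS_OF.get(term)
    return False

db = sqlite3.connect(":memory:")
db.create_function("is_a", 2, is_a)
db.execute("CREATE TABLE image_annotation (image TEXT, diagnosis TEXT)")
db.executemany("INSERT INTO image_annotation VALUES (?, ?)",
               [("img1", "adenocarcinoma"), ("img2", "melanoma")])

# Finds img1 even though its diagnosis never literally says "neoplasm".
rows = db.execute(
    "SELECT image FROM image_annotation WHERE is_a(diagnosis, 'neoplasm')"
).fetchall()
print(rows)  # [('img1',)]
```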

  19. Natural Language Processing Methods and Systems for Biomedical Ontology Learning

    PubMed Central

    Liu, Kaihong; Hogan, William R.; Crowley, Rebecca S.

    2010-01-01

    While the biomedical informatics community widely acknowledges the utility of domain ontologies, there remain many barriers to their effective use. One important requirement of domain ontologies is that they must achieve a high degree of coverage of the domain concepts and concept relationships. However, the development of these ontologies is typically a manual, time-consuming, and often error-prone process. Limited resources result in missing concepts and relationships as well as difficulty in updating the ontology as knowledge changes. Methodologies developed in the fields of natural language processing, information extraction, information retrieval and machine learning provide techniques for automating the enrichment of an ontology from free-text documents. In this article, we review existing methodologies and developed systems, and discuss how existing methods can benefit the development of biomedical ontologies. PMID:20647054

  20. Bio-ontologies: current trends and future directions

    PubMed Central

    Bodenreider, Olivier; Stevens, Robert

    2006-01-01

    In recent years, as a knowledge-based discipline, bioinformatics has been made more computationally amenable. After its beginnings as a technology advocated by computer scientists to overcome problems of heterogeneity, ontology has been taken up by biologists themselves as a means to consistently annotate features from genotype to phenotype. In medical informatics, artifacts called ontologies have been used for a longer period of time to produce controlled lexicons for coding schemes. In this article, we review the current position in ontologies and how they have become institutionalized within biomedicine. As the field has matured, the much older philosophical aspects of ontology have come into play. With this and the institutionalization of ontology has come greater formality. We review this trend and what benefits it might bring to ontologies and their use within biomedicine. PMID:16899495